* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-09-15 19:09 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-09-15 19:09 UTC (permalink / raw)
To: gentoo-commits
commit: ae97578dcfe880dfa3dd24be70f99bd246a9fdd4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 15 19:09:01 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep 15 19:09:01 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ae97578d
Remove redundant patch
Removed:
1900_xfs-finobt-count-blocks-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ---
1900_xfs-finobt-count-blocks-fix.patch | 55 ----------------------------------
2 files changed, 59 deletions(-)
diff --git a/0000_README b/0000_README
index 6fb817a0..0367acc2 100644
--- a/0000_README
+++ b/0000_README
@@ -55,10 +55,6 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
-Patch: 1900_xfs-finobt-count-blocks-fix.patch
-From: https://lore.kernel.org/linux-xfs/20240813152530.GF6051@frogsfrogsfrogs/T/#mdc718f38912ccc1b9b53b46d9adfaeff0828b55f
-Desc: xfs: xfs_finobt_count_blocks() walks the wrong btree
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1900_xfs-finobt-count-blocks-fix.patch b/1900_xfs-finobt-count-blocks-fix.patch
deleted file mode 100644
index 02f60712..00000000
--- a/1900_xfs-finobt-count-blocks-fix.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-xfs: xfs_finobt_count_blocks() walks the wrong btree
-
-From: Dave Chinner <dchinner@redhat.com>
-
-As a result of the factoring in commit 14dd46cf31f4 ("xfs: split
-xfs_inobt_init_cursor"), mount started taking a long time on a
-user's filesystem. For Anders, this made mount times regress from
-under a second to over 15 minutes for a filesystem with only 30
-million inodes in it.
-
-Anders bisected it down to the above commit, but even then the bug
-was not obvious. In this commit, over 20 calls to
-xfs_inobt_init_cursor() were modified, and some were modified to call
-a new function named xfs_finobt_init_cursor().
-
-If that takes you a moment to reread those function names to see
-what the rename was, then you have realised why this bug wasn't
-spotted during review. And it wasn't spotted on inspection even
-after the bisect pointed at this commit - a single missing "f" isn't
-the easiest thing for a human eye to notice....
-
-The result is that xfs_finobt_count_blocks() now incorrectly calls
-xfs_inobt_init_cursor() so it is now walking the inobt instead of
-the finobt. Hence when there are lots of allocated inodes in a
-filesystem, mount takes a -long- time to run because it now walks the
-massive allocated inode btrees instead of the small, nearly empty
-free inode btrees. It also means all the finobt space reservations
-are wrong, so mount could potentially give ENOSPC on kernel
-upgrade.
-
-In hindsight, commit 14dd46cf31f4 should have been two commits - the
-first to convert the finobt callers to the new API, the second to
-modify the xfs_inobt_init_cursor() API for the inobt callers. That
-would have made the bug very obvious during review.
-
-Fixes: 14dd46cf31f4 ("xfs: split xfs_inobt_init_cursor")
-Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
-Signed-off-by: Dave Chinner <dchinner@redhat.com>
----
- fs/xfs/libxfs/xfs_ialloc_btree.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
-index 496e2f72a85b..797d5b5f7b72 100644
---- a/fs/xfs/libxfs/xfs_ialloc_btree.c
-+++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
-@@ -749,7 +749,7 @@ xfs_finobt_count_blocks(
- if (error)
- return error;
-
-- cur = xfs_inobt_init_cursor(pag, tp, agbp);
-+ cur = xfs_finobt_init_cursor(pag, tp, agbp);
- error = xfs_btree_count_blocks(cur, tree_blocks);
- xfs_btree_del_cursor(cur, error);
- xfs_trans_brelse(tp, agbp);
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-09-21 15:21 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-09-21 15:21 UTC (permalink / raw)
To: gentoo-commits
commit: a8e26a1f71a789c59980dfc041cec65d047dcc70
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 21 15:21:01 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 21 15:21:01 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8e26a1f
dtrace patch for 6.11.X (CTF, modules.builtin.objs) p1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
2995_dtrace-6.11_p1.patch | 2368 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2372 insertions(+)
diff --git a/0000_README b/0000_README
index 0367acc2..963defab 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
+Patch: 2995_dtrace-6.11_p1.patch
+From: https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.11-flat
+Desc: dtrace patch for 6.11.X (CTF, modules.builtin.objs)
+
Patch: 3000_Support-printing-firmware-info.patch
From: https://bugs.gentoo.org/732852
Desc: Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
diff --git a/2995_dtrace-6.11_p1.patch b/2995_dtrace-6.11_p1.patch
new file mode 100644
index 00000000..71caafec
--- /dev/null
+++ b/2995_dtrace-6.11_p1.patch
@@ -0,0 +1,2368 @@
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index 3c399f132e2db..75b9655e57914 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -179,7 +179,7 @@ mkutf8data
+ modpost
+ modules-only.symvers
+ modules.builtin
+-modules.builtin.modinfo
++modules.builtin.*
+ modules.nsdeps
+ modules.order
+ modversions.h*
+diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst
+index 9c8d1d046ea56..4e2d666f167aa 100644
+--- a/Documentation/kbuild/kbuild.rst
++++ b/Documentation/kbuild/kbuild.rst
+@@ -17,11 +17,21 @@ modules.builtin
+ This file lists all modules that are built into the kernel. This is used
+ by modprobe to not fail when trying to load something builtin.
+
++modules.builtin.objs
++-----------------------
++This file contains object mapping of modules that are built into the kernel
++to their corresponding object files used to build the module.
++
+ modules.builtin.modinfo
+ -----------------------
+ This file contains modinfo from all modules that are built into the kernel.
+ Unlike modinfo of a separate module, all fields are prefixed with module name.
+
++modules.builtin.ranges
++----------------------
++This file contains address offset ranges (per ELF section) for all modules
++that are built into the kernel. Together with System.map, it can be used
++to associate module names with symbols.
+
+ Environment variables
+ =====================
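For reference, both new files are plain line-oriented text; an illustrative (invented) sample of each, in the formats produced by the build rules later in this patch:

    # modules.builtin.objs: <module object>: <objects it was built from>
    drivers/foo/foo.o: drivers/foo/bar.o drivers/foo/baz.o

    # modules.builtin.ranges: <section> <start>-<end> <module>, plus one
    # <section> 00000000-00000000 = <anchor symbol> record per section
    .text 00000000-00000000 = _text
    .text 00b4aa40-00b4c3d5 foo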
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index 3fc63f27c226d..a8baff63b44af 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -64,9 +64,13 @@ GNU tar 1.28 tar --version
+ gtags (optional) 6.6.5 gtags --version
+ mkimage (optional) 2017.01 mkimage --version
+ Python (optional) 3.5.x python3 --version
++GNU AWK (optional) 5.1.0 gawk --version
++GNU C\ [#f2]_ 12.0 gcc --version
++binutils\ [#f2]_ 2.36 ld -v
+ ====================== =============== ========================================
+
+ .. [#f1] Sphinx is needed only to build the Kernel documentation
++.. [#f2] These are needed at build-time when CONFIG_CTF is enabled
+
+ Kernel compilation
+ ******************
+@@ -192,6 +196,12 @@ platforms. The tool is available via the ``u-boot-tools`` package or can be
+ built from the U-Boot source code. See the instructions at
+ https://docs.u-boot.org/en/latest/build/tools.html#building-tools-for-linux
+
++GNU AWK
++-------
++
++GNU AWK is needed if you want kernel builds to generate address range data for
++builtin modules (CONFIG_BUILTIN_MODULE_RANGES).
++
+ System utilities
+ ****************
+
+diff --git a/Makefile b/Makefile
+index 34bd1d5f96720..fd75baefcf741 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1010,6 +1010,7 @@ include-$(CONFIG_UBSAN) += scripts/Makefile.ubsan
+ include-$(CONFIG_KCOV) += scripts/Makefile.kcov
+ include-$(CONFIG_RANDSTRUCT) += scripts/Makefile.randstruct
+ include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins
++include-$(CONFIG_CTF) += scripts/Makefile.ctfa-toplevel
+
+ include $(addprefix $(srctree)/, $(include-y))
+
+@@ -1137,7 +1138,11 @@ PHONY += vmlinux_o
+ vmlinux_o: vmlinux.a $(KBUILD_VMLINUX_LIBS)
+ $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.vmlinux_o
+
+-vmlinux.o modules.builtin.modinfo modules.builtin: vmlinux_o
++MODULES_BUILTIN := modules.builtin.modinfo
++MODULES_BUILTIN += modules.builtin
++MODULES_BUILTIN += modules.builtin.objs
++
++vmlinux.o $(MODULES_BUILTIN): vmlinux_o
+ @:
+
+ PHONY += vmlinux
+@@ -1482,9 +1487,10 @@ endif # CONFIG_MODULES
+
+ # Directories & files removed with 'make clean'
+ CLEAN_FILES += vmlinux.symvers modules-only.symvers \
+- modules.builtin modules.builtin.modinfo modules.nsdeps \
++ modules.builtin modules.builtin.* modules.nsdeps vmlinux.o.map \
+ compile_commands.json rust/test \
+- rust-project.json .vmlinux.objs .vmlinux.export.c
++ rust-project.json .vmlinux.objs .vmlinux.export.c \
++ vmlinux.ctfa
+
+ # Directories & files removed with 'make mrproper'
+ MRPROPER_FILES += include/config include/generated \
+@@ -1578,6 +1584,8 @@ help:
+ @echo ' (requires a recent binutils and recent build (System.map))'
+ @echo ' dir/file.ko - Build module including final link'
+ @echo ' modules_prepare - Set up for building external modules'
++ @echo ' ctf - Generate CTF type information, installed by make ctf_install'
++ @echo ' ctf_install - Install CTF to INSTALL_MOD_PATH (default: /)'
+ @echo ' tags/TAGS - Generate tags file for editors'
+ @echo ' cscope - Generate cscope index'
+ @echo ' gtags - Generate GNU GLOBAL index'
+@@ -1934,7 +1942,7 @@ clean: $(clean-dirs)
+ $(call cmd,rmfiles)
+ @find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \
+ \( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \
+- -o -name '*.ko.*' \
++ -o -name '*.ko.*' -o -name '*.ctf' \
+ -o -name '*.dtb' -o -name '*.dtbo' \
+ -o -name '*.dtb.S' -o -name '*.dtbo.S' \
+ -o -name '*.dt.yaml' -o -name 'dtbs-list' \
+diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
+index 01067a2bc43b7..d2193b8dfad83 100644
+--- a/arch/arm/vdso/Makefile
++++ b/arch/arm/vdso/Makefile
+@@ -14,6 +14,10 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
+
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
++
+ ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
+ ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
+ -z max-page-size=4096 -shared $(ldflags-y) \
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index d11da6461278f..abba0916369eb 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -33,6 +33,10 @@ ldflags-y += -T
+ ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
++
+ # -Wmissing-prototypes and -Wmissing-declarations are removed from
+ # the CFLAGS of vgettimeofday.c to make possible to build the
+ # kernel with CONFIG_WERROR enabled.
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index d724d46b07c84..fbedb95223ae1 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -21,7 +21,8 @@ cflags-vdso := $(ccflags-vdso) \
+ -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ $(call cc-option, -fno-asynchronous-unwind-tables) \
+- $(call cc-option, -fno-stack-protector)
++ $(call cc-option, -fno-stack-protector) \
++ $(call cc-option,-gctf0)
+ aflags-vdso := $(ccflags-vdso) \
+ -D__ASSEMBLY__ -Wa,-gdwarf-2
+
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index b289b2c1b2946..6c8d777525f9b 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -30,7 +30,8 @@ cflags-vdso := $(ccflags-vdso) \
+ -O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
+ -mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
+ -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+- $(call cc-option, -fno-asynchronous-unwind-tables)
++ $(call cc-option, -fno-asynchronous-unwind-tables) \
++ $(call cc-option,-gctf0)
+ aflags-vdso := $(ccflags-vdso) \
+ -D__ASSEMBLY__ -Wa,-gdwarf-2
+
+diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
+index 243dbfc4609d8..e4f3e47074e9d 100644
+--- a/arch/sparc/vdso/Makefile
++++ b/arch/sparc/vdso/Makefile
+@@ -44,7 +44,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+ $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+ -fno-omit-frame-pointer -foptimize-sibling-calls \
+- -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++ $(call cc-option,-gctf0) -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+
+ SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
+
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index c9216ac4fb1eb..fbbf4de9ff8ea 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+ $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+ -fno-omit-frame-pointer -foptimize-sibling-calls \
++ $(call cc-option,-gctf0) \
+ -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+
+ ifdef CONFIG_MITIGATION_RETPOLINE
+@@ -132,6 +133,7 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += -fno-stack-protector
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
++KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+
+ ifdef CONFIG_MITIGATION_RETPOLINE
+diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
+index 6a77ea6434ffd..6db233b5edd75 100644
+--- a/arch/x86/um/vdso/Makefile
++++ b/arch/x86/um/vdso/Makefile
+@@ -40,7 +40,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ #
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+ $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+- -fno-omit-frame-pointer -foptimize-sibling-calls
++ -fno-omit-frame-pointer -foptimize-sibling-calls $(call cc-option,-gctf0)
+
+ $(vobjs): KBUILD_CFLAGS += $(CFL)
+
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 1ae44793132a8..9267498218a79 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -1007,6 +1007,7 @@
+ *(.discard.*) \
+ *(.export_symbol) \
+ *(.modinfo) \
++ *(.ctf) \
+ /* ld.bfd warns about .gnu.version* even when not emitted */ \
+ *(.gnu.version*) \
+
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 88ecc5e9f5230..471a422b15a6e 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -186,7 +186,13 @@ extern void cleanup_module(void);
+ #ifdef MODULE
+ #define MODULE_FILE
+ #else
+-#define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
++#ifdef CONFIG_CTF
++#define MODULE_FILE \
++ MODULE_INFO(file, KBUILD_MODFILE); \
++ MODULE_INFO(objs, KBUILD_MODOBJS);
++#else
++#define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
++#endif
+ #endif
+
+ /*
+diff --git a/init/Kconfig b/init/Kconfig
+index 5783a0b875172..d91af1f9fee8d 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -113,6 +113,12 @@ config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+
++config HAVE_CTF_TOOLCHAIN
++ def_bool $(cc-option,-gctf) && $(ld-option,-lbfd -liberty -lctf -lbfd -liberty -lz -ldl -lc -o /dev/null)
++ depends on CC_IS_GCC
++ help
++ GCC and binutils support CTF generation.
++
+ config CONSTRUCTORS
+ bool
+
+diff --git a/lib/Kconfig b/lib/Kconfig
+index b38849af6f130..340d526906190 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -634,6 +634,16 @@ config DIMLIB
+ #
+ config LIBFDT
+ bool
++#
++# CTF support is select'ed if needed
++#
++config CTF
++ bool "Compact Type Format generation"
++ depends on HAVE_CTF_TOOLCHAIN
++ help
++ Emit a compact, compressed description of the kernel's datatypes and
++ global variables into the vmlinux.ctfa archive (for in-tree modules)
++ or into .ctf sections in kernel modules (for out-of-tree modules).
+
+ config OID_REGISTRY
+ tristate
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index a30c03a661726..5e2f30921cb25 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -571,6 +571,21 @@ config VMLINUX_MAP
+ pieces of code get eliminated with
+ CONFIG_LD_DEAD_CODE_DATA_ELIMINATION.
+
++config BUILTIN_MODULE_RANGES
++ bool "Generate address range information for builtin modules"
++ depends on !LTO
++ depends on VMLINUX_MAP
++ help
++ When modules are built into the kernel, there will be no module name
++ associated with its symbols in /proc/kallsyms. Tracers may want to
++ identify symbols by module name and symbol name regardless of whether
++ the module is configured as loadable or not.
++
++ This option generates modules.builtin.ranges in the build tree with
++ offset ranges (per ELF section) for the module(s) they belong to.
++ It also records an anchor symbol to determine the load address of the
++ section.
++
+ config DEBUG_FORCE_WEAK_PER_CPU
+ bool "Force weak per-cpu definitions"
+ depends on DEBUG_KERNEL
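As a worked example of the help text above (all names and numbers invented): if System.map contains 'ffffffff81000000 T _text', the anchor record '.text 00000000-00000000 = _text' pins the section base at that address, and a range record '.text 00b4aa40-00b4c3d5 foo' then places built-in module foo at 0xffffffff81b4aa40-0xffffffff81b4c3d5 in the running kernel.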
+diff --git a/scripts/Makefile b/scripts/Makefile
+index dccef663ca820..d06ab9d59a9d9 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -54,6 +54,7 @@ targets += module.lds
+
+ subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
+ subdir-$(CONFIG_MODVERSIONS) += genksyms
++subdir-$(CONFIG_CTF) += ctf
+ subdir-$(CONFIG_SECURITY_SELINUX) += selinux
+
+ # Let clean descend into subdirs
+diff --git a/scripts/Makefile.ctfa b/scripts/Makefile.ctfa
+new file mode 100644
+index 0000000000000..b65d9d391c29c
+--- /dev/null
++++ b/scripts/Makefile.ctfa
+@@ -0,0 +1,92 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# Module CTF/CTFA generation
++# ===========================================================================
++
++include include/config/auto.conf
++include $(srctree)/scripts/Kbuild.include
++
++# CTF is already present in every object file if CONFIG_CTF is enabled.
++# vmlinux.lds.h strips it out of the finished kernel, but if nothing is done
++# it will be deduplicated into module .ko's. For out-of-tree module builds,
++# this is what we want, but for in-tree modules we can save substantial
++# space by deduplicating it against all the core kernel types as well. So
++# split the CTF out of in-tree module .ko's into separate .ctf files so that
++# it doesn't take up space in the modules on disk, and let the specialized
++# ctfarchive tool consume it and all the CTF in the vmlinux.o files when
++# 'make ctf' is invoked, and use the same machinery that the linker uses to
++# do CTF deduplication to emit vmlinux.ctfa containing the deduplicated CTF.
++
++# Nothing special needs to be done if CTF is turned off or if a standalone
++# module is being built.
++module-ctf-postlink = mv $(1).tmp $(1)
++
++ifdef CONFIG_CTF
++
++# This is quite tricky. The CTF machinery needs to be told about all the
++# built-in objects as well as all the external modules -- but Makefile.modfinal
++# only knows about the latter. So the toplevel makefile emits the names of the
++# built-in objects into a temporary file, which is then catted and its contents
++# used as prerequisites by this rule.
++#
++# We write the names of the object files to be scanned for CTF content into a
++# file, then use that, to avoid hitting command-line length limits.
++
++ifeq ($(KBUILD_EXTMOD),)
++ctf-modules := $(shell find . -name '*.ko.ctf' -print)
++quiet_cmd_ctfa_raw = CTFARAW
++ cmd_ctfa_raw = scripts/ctf/ctfarchive $@ .tmp_objects.builtin modules.builtin.objs $(ctf-filelist)
++ctf-builtins := .tmp_objects.builtin
++ctf-filelist := .tmp_ctf.filelist
++ctf-filelist-raw := .tmp_ctf.filelist.raw
++
++define module-ctf-postlink =
++ $(OBJCOPY) --only-section=.ctf $(1).tmp $(1).ctf && \
++ $(OBJCOPY) --remove-section=.ctf $(1).tmp $(1) && rm -f $(1).tmp
++endef
++
++# Split a list up like shell xargs does.
++define xargs =
++$(1) $(wordlist 1,1024,$(2))
++$(if $(word 1025,$(2)),$(call xargs,$(1),$(wordlist 1025,$(words $(2)),$(2))))
++endef
++
++$(ctf-filelist-raw): $(ctf-builtins) $(ctf-modules)
++ @rm -f $(ctf-filelist-raw);
++ $(call xargs,@printf "%s\n" >> $(ctf-filelist-raw),$^)
++ @touch $(ctf-filelist-raw)
++
++$(ctf-filelist): $(ctf-filelist-raw)
++ @rm -f $(ctf-filelist);
++ @cat $(ctf-filelist-raw) | while read -r obj; do \
++ case $$obj in \
++ $(ctf-builtins)) cat $$obj >> $(ctf-filelist);; \
++ *.a) $(AR) t $$obj >> $(ctf-filelist);; \
++ *.builtin) cat $$obj >> $(ctf-filelist);; \
++ *) echo "$$obj" >> $(ctf-filelist);; \
++ esac; \
++ done
++ @touch $(ctf-filelist)
++
++# The raw CTF depends on the output CTF file list, and that depends
++# on the .ko files for the modules.
++.tmp_vmlinux.ctfa.raw: $(ctf-filelist) FORCE
++ $(call if_changed,ctfa_raw)
++
++quiet_cmd_ctfa = CTFA
++ cmd_ctfa = { echo 'int main () { return 0; } ' | \
++ $(CC) -x c -c -o $<.stub -; \
++ $(OBJCOPY) '--remove-section=.*' --add-section=.ctf=$< \
++ $<.stub $@; }
++
++# The CTF itself is an ELF executable with one section: the CTF. This lets
++# objdump work on it, at minimal size cost.
++vmlinux.ctfa: .tmp_vmlinux.ctfa.raw FORCE
++ $(call if_changed,ctfa)
++
++targets += vmlinux.ctfa
++
++endif # KBUILD_EXTMOD
++
++endif # !CONFIG_CTF
++
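Because cmd_ctfa wraps the archive in a minimal ELF container with a single .ctf section, stock binutils can inspect the result; for instance:

    objdump -h vmlinux.ctfa        # shows the lone .ctf section
    objcopy -O binary --only-section=.ctf vmlinux.ctfa ctf.raw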
+diff --git a/scripts/Makefile.ctfa-toplevel b/scripts/Makefile.ctfa-toplevel
+new file mode 100644
+index 0000000000000..210bef3854e9b
+--- /dev/null
++++ b/scripts/Makefile.ctfa-toplevel
+@@ -0,0 +1,54 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# CTF rules for the top-level makefile only
++# ===========================================================================
++
++KBUILD_CFLAGS += $(call cc-option,-gctf)
++KBUILD_LDFLAGS += $(call ld-option, --ctf-variables)
++
++ifeq ($(KBUILD_EXTMOD),)
++
++# CTF generation for in-tree code (modules, built-in and not, and core kernel)
++
++# This contains all the object files that are built directly into the
++# kernel (including built-in modules), for consumption by ctfarchive in
++# Makefile.modfinal.
++# This is made doubly annoying by the presence of '.o' files which are actually
++# thin ar archives, and the need to support file(1) versions too old to
++# recognize them as archives at all. (So we assume that everything that is not
++# an ELF object is an archive.)
++ifeq ($(SRCARCH),x86)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),bzImage) FORCE
++else
++ifeq ($(SRCARCH),arm64)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),Image) FORCE
++else
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) FORCE
++endif
++endif
++ @echo $(KBUILD_VMLINUX_OBJS) | \
++ tr " " "\n" | grep "\.o$$" | xargs -r file | \
++ grep ELF | cut -d: -f1 > .tmp_objects.builtin
++ @for archive in $$(echo $(KBUILD_VMLINUX_OBJS) |\
++ tr " " "\n" | xargs -r file | grep -v ELF | cut -d: -f1); do \
++ $(AR) t "$$archive" >> .tmp_objects.builtin; \
++ done
++
++ctf: vmlinux.ctfa
++PHONY += ctf ctf_install
++
++# Making CTF needs the builtin files. We need to force everything to be
++# built if not already done, since we need the .o files for the machinery
++# above to work.
++vmlinux.ctfa: KBUILD_BUILTIN := 1
++vmlinux.ctfa: modules.builtin.objs .tmp_objects.builtin
++ $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modfinal vmlinux.ctfa
++
++ctf_install:
++ $(Q)mkdir -p $(MODLIB)/kernel
++ @ln -sf $(abspath $(srctree)) $(MODLIB)/source
++ $(Q)cp -f $(objtree)/vmlinux.ctfa $(MODLIB)/kernel
++
++CLEAN_FILES += vmlinux.ctfa
++
++endif
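In use, the two phony targets defined here are driven from the top-level Makefile, per the help text added there:

    make ctf                                    # build vmlinux.ctfa
    make ctf_install INSTALL_MOD_PATH=/mnt/root # copy it to $(MODLIB)/kernel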
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 207325eaf1d1c..8586d212bd3de 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
+ __modname = $(or $(modname-multi),$(basetarget))
+
+ modname = $(subst $(space),:,$(__modname))
++modname-objs = $($(modname)-objs) $($(modname)-y) $($(modname)-Y)
++modname-objs-prefixed = $(sort $(strip $(addprefix $(obj)/, $(modname-objs))))
+ modfile = $(addprefix $(obj)/,$(__modname))
+
+ # target with $(obj)/ and its suffix stripped
+@@ -133,6 +135,10 @@ modname_flags = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
+ -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
+ modfile_flags = -DKBUILD_MODFILE=$(call stringify,$(modfile))
+
++ifdef CONFIG_CTF
++modfile_flags += -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
++endif
++
+ _c_flags = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
+ $(filter-out $(ccflags-remove-y), \
+ $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(ccflags-y)) \
+@@ -238,7 +244,7 @@ modkern_rustflags = \
+
+ modkern_aflags = $(if $(part-of-module), \
+ $(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE), \
+- $(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL))
++ $(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL) $(modfile_flags))
+
+ c_flags = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
+ -include $(srctree)/include/linux/compiler_types.h \
+@@ -248,7 +254,7 @@ c_flags = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
+ rust_flags = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
+
+ a_flags = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
+- $(_a_flags) $(modkern_aflags)
++ $(_a_flags) $(modkern_aflags) $(modname_flags)
+
+ cpp_flags = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
+ $(_cpp_flags)
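For a hypothetical module drivers/foo/foo.ko built from bar.o and baz.o, the new modfile_flags addition expands to roughly:

    -DKBUILD_MODOBJS="drivers/foo/foo.o:drivers/foo/bar.o|drivers/foo/baz.o"

that is, the module object, a colon, and its '|'-joined constituent objects, which MODULE_INFO(objs, ...) in include/linux/module.h then embeds in .modinfo.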
+diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
+index 306a6bb86e4dc..9842c69a88dff 100644
+--- a/scripts/Makefile.modfinal
++++ b/scripts/Makefile.modfinal
+@@ -30,11 +30,16 @@ quiet_cmd_cc_o_c = CC [M] $@
+ %.mod.o: %.mod.c FORCE
+ $(call if_changed_dep,cc_o_c)
+
++# for module-ctf-postlink
++include $(srctree)/scripts/Makefile.ctfa
++
+ quiet_cmd_ld_ko_o = LD [M] $@
+ cmd_ld_ko_o += \
+ $(LD) -r $(KBUILD_LDFLAGS) \
+ $(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE) \
+- -T scripts/module.lds -o $@ $(filter %.o, $^)
++ -T scripts/module.lds $(LDFLAGS_$(modname)) -o $@.tmp \
++ $(filter %.o, $^) && \
++ $(call module-ctf-postlink,$@) \
+
+ quiet_cmd_btf_ko = BTF [M] $@
+ cmd_btf_ko = \
+diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst
+index 0afd75472679f..e668469ce098c 100644
+--- a/scripts/Makefile.modinst
++++ b/scripts/Makefile.modinst
+@@ -30,10 +30,12 @@ $(MODLIB)/modules.order: modules.order FORCE
+ quiet_cmd_install_modorder = INSTALL $@
+ cmd_install_modorder = sed 's:^\(.*\)\.o$$:kernel/\1.ko:' $< > $@
+
+-# Install modules.builtin(.modinfo) even when CONFIG_MODULES is disabled.
+-install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo)
++# Install modules.builtin(.modinfo,.ranges,.objs) even when CONFIG_MODULES is disabled.
++install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.objs)
+
+-$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo): $(MODLIB)/%: % FORCE
++install-$(CONFIG_BUILTIN_MODULE_RANGES) += $(MODLIB)/modules.builtin.ranges
++
++$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.ranges modules.builtin.objs): $(MODLIB)/%: % FORCE
+ $(call cmd,install)
+
+ endif
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 5ceecbed31eb7..2524c8e2edbdb 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -33,7 +33,25 @@ targets += vmlinux
+ vmlinux: scripts/link-vmlinux.sh vmlinux.o $(KBUILD_LDS) FORCE
+ +$(call if_changed_dep,link_vmlinux)
+
++# ---------------------------------------------------------------------------
++ifdef CONFIG_BUILTIN_MODULE_RANGES
++__default: modules.builtin.ranges
++
++quiet_cmd_modules_builtin_ranges = GEN $@
++ cmd_modules_builtin_ranges = gawk -f $(real-prereqs) > $@
++
++targets += modules.builtin.ranges
++modules.builtin.ranges: $(srctree)/scripts/generate_builtin_ranges.awk \
++ modules.builtin vmlinux.map vmlinux.o.map FORCE
++ $(call if_changed,modules_builtin_ranges)
++
++vmlinux.map: vmlinux
++ @:
++
++endif
++
+ # Add FORCE to the prerequisites of a target to force it to be always rebuilt.
++
+ # ---------------------------------------------------------------------------
+
+ PHONY += FORCE
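Expanded, the if_changed command above runs gawk over the rule's prerequisites, matching the usage line documented in the script itself:

    gawk -f scripts/generate_builtin_ranges.awk modules.builtin vmlinux.map \
        vmlinux.o.map > modules.builtin.ranges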
+diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
+index d64070b6b4bce..e8d5c98173d74 100644
+--- a/scripts/Makefile.vmlinux_o
++++ b/scripts/Makefile.vmlinux_o
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+
+ PHONY := __default
+-__default: vmlinux.o modules.builtin.modinfo modules.builtin
++__default: vmlinux.o modules.builtin.modinfo modules.builtin modules.builtin.objs
+
+ include include/config/auto.conf
+ include $(srctree)/scripts/Kbuild.include
+@@ -27,6 +27,20 @@ ifdef CONFIG_LTO_CLANG
+ initcalls-lds := .tmp_initcalls.lds
+ endif
+
++# Generate a linker script to delete CTF sections
++# -----------------------------------------------
++
++quiet_cmd_gen_remove_ctf.lds = GEN $@
++ cmd_gen_remove_ctf.lds = \
++ $(LD) $(KBUILD_LDFLAGS) -r --verbose | awk -f $(real-prereqs) > $@
++
++.tmp_remove-ctf.lds: $(srctree)/scripts/remove-ctf-lds.awk FORCE
++ $(call if_changed,gen_remove_ctf.lds)
++
++ifdef CONFIG_CTF
++targets := .tmp_remove-ctf.lds
++endif
++
+ # objtool for vmlinux.o
+ # ---------------------------------------------------------------------------
+ #
+@@ -42,13 +56,26 @@ vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
+
+ objtool-args = $(vmlinux-objtool-args-y) --link
+
+-# Link of vmlinux.o used for section mismatch analysis
++# Link of vmlinux.o used for section mismatch analysis: we also strip the CTF
++# section out at this stage, since ctfarchive gets it from the underlying object
++# files and linking it further is a waste of time.
+ # ---------------------------------------------------------------------------
+
++vmlinux-o-ld-args-$(CONFIG_BUILTIN_MODULE_RANGES) += -Map=$@.map
++
++ifdef CONFIG_CTF
++ctf_strip_script_arg = -T .tmp_remove-ctf.lds
++ctf_target = .tmp_remove-ctf.lds
++else
++ctf_strip_script_arg =
++ctf_target =
++endif
++
+ quiet_cmd_ld_vmlinux.o = LD $@
+ cmd_ld_vmlinux.o = \
+ $(LD) ${KBUILD_LDFLAGS} -r -o $@ \
+- $(addprefix -T , $(initcalls-lds)) \
++ $(vmlinux-o-ld-args-y) \
++ $(addprefix -T , $(initcalls-lds)) $(ctf_strip_script_arg) \
+ --whole-archive vmlinux.a --no-whole-archive \
+ --start-group $(KBUILD_VMLINUX_LIBS) --end-group \
+ $(cmd_objtool)
+@@ -58,7 +85,7 @@ define rule_ld_vmlinux.o
+ $(call cmd,gen_objtooldep)
+ endef
+
+-vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) FORCE
++vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) $(ctf_target) FORCE
+ $(call if_changed_rule,ld_vmlinux.o)
+
+ targets += vmlinux.o
+@@ -87,7 +114,20 @@ targets += modules.builtin
+ modules.builtin: modules.builtin.modinfo FORCE
+ $(call if_changed,modules_builtin)
+
+-# Add FORCE to the prerequisites of a target to force it to be always rebuilt.
++# module.builtin.objs
++# ---------------------------------------------------------------------------
++quiet_cmd_modules_builtin_objs = GEN $@
++ cmd_modules_builtin_objs = \
++ tr '\0' '\n' < $< | \
++ sed -n 's/^[[:alnum:]:_]*\.objs=//p' | \
++ tr ' ' '\n' | uniq | sed -e 's|:|: |' -e 's:|: :g' | \
++ tr -s ' ' > $@
++
++targets += modules.builtin.objs
++modules.builtin.objs: modules.builtin.modinfo FORCE
++ $(call if_changed,modules_builtin_objs)
++
++# Add FORCE to the prerequisites of a target to force it to be always rebuilt.
+ # ---------------------------------------------------------------------------
+
+ PHONY += FORCE
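The tr/sed pipeline in cmd_modules_builtin_objs reshapes the NUL-separated .modinfo records into one line per module object; with the same hypothetical module as before:

    foo.objs=drivers/foo/foo.o:drivers/foo/bar.o|drivers/foo/baz.o    (raw record)
    drivers/foo/foo.o: drivers/foo/bar.o drivers/foo/baz.o            (output line)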
+diff --git a/scripts/ctf/Makefile b/scripts/ctf/Makefile
+new file mode 100644
+index 0000000000000..3b83f93bb9f9a
+--- /dev/null
++++ b/scripts/ctf/Makefile
+@@ -0,0 +1,5 @@
++ifdef CONFIG_CTF
++hostprogs-always-y := ctfarchive
++ctfarchive-objs := ctfarchive.o modules_builtin.o
++HOSTLDLIBS_ctfarchive := -lctf
++endif
+diff --git a/scripts/ctf/ctfarchive.c b/scripts/ctf/ctfarchive.c
+new file mode 100644
+index 0000000000000..92cc4912ed0ee
+--- /dev/null
++++ b/scripts/ctf/ctfarchive.c
+@@ -0,0 +1,413 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * ctfmerge.c: Read in CTF extracted from generated object files from a
++ * specified directory and generate a CTF archive whose members are the
++ * deduplicated CTF derived from those object files, split up by kernel
++ * module.
++ *
++ * Copyright (c) 2019, 2023, Oracle and/or its affiliates.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#define _GNU_SOURCE 1
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++#include <ctf-api.h>
++#include "modules_builtin.h"
++
++static ctf_file_t *output;
++
++static int private_ctf_link_add_ctf(ctf_file_t *fp,
++ const char *name)
++{
++#if !defined (CTF_LINK_FINAL)
++ return ctf_link_add_ctf(fp, NULL, name);
++#else
++ /* Non-upstreamed, erroneously-broken API. */
++ return ctf_link_add_ctf(fp, NULL, name, NULL, 0);
++#endif
++}
++
++/*
++ * Add a file to the link.
++ */
++static void add_to_link(const char *fn)
++{
++ if (private_ctf_link_add_ctf(output, fn) < 0)
++ {
++ fprintf(stderr, "Cannot add CTF file %s: %s\n", fn,
++ ctf_errmsg(ctf_errno(output)));
++ exit(1);
++ }
++}
++
++struct from_to
++{
++ char *from;
++ char *to;
++};
++
++/*
++ * The world's stupidest hash table of FROM -> TO.
++ */
++static struct from_to **from_tos[256];
++static size_t alloc_from_tos[256];
++static size_t num_from_tos[256];
++
++static unsigned char from_to_hash(const char *from)
++{
++ unsigned char hval = 0;
++
++ const char *p;
++ for (p = from; *p; p++)
++ hval += *p;
++
++ return hval;
++}
++
++/*
++ * Note that we will need to add a CU mapping later on.
++ *
++ * Present purely to work around a binutils bug that stops
++ * ctf_link_add_cu_mapping() working right when called repeatedly
++ * with the same FROM.
++ */
++static int add_cu_mapping(const char *from, const char *to)
++{
++ ssize_t i, j;
++
++ i = from_to_hash(from);
++
++ for (j = 0; j < num_from_tos[i]; j++)
++ if (strcmp(from, from_tos[i][j]->from) == 0) {
++ char *tmp;
++
++ free(from_tos[i][j]->to);
++ tmp = strdup(to);
++ if (!tmp)
++ goto oom;
++ from_tos[i][j]->to = tmp;
++ return 0;
++ }
++
++ if (num_from_tos[i] >= alloc_from_tos[i]) {
++ struct from_to **tmp;
++ if (alloc_from_tos[i] < 16)
++ alloc_from_tos[i] = 16;
++ else
++ alloc_from_tos[i] *= 2;
++
++ tmp = realloc(from_tos[i], alloc_from_tos[i] * sizeof(struct from_to *));
++ if (!tmp)
++ goto oom;
++
++ from_tos[i] = tmp;
++ }
++
++ j = num_from_tos[i];
++ from_tos[i][j] = malloc(sizeof(struct from_to));
++ if (from_tos[i][j] == NULL)
++ goto oom;
++ from_tos[i][j]->from = strdup(from);
++ from_tos[i][j]->to = strdup(to);
++ if (!from_tos[i][j]->from || !from_tos[i][j]->to)
++ goto oom;
++ num_from_tos[i]++;
++
++ return 0;
++ oom:
++ fprintf(stderr,
++ "out of memory in add_cu_mapping\n");
++ exit(1);
++}
++
++/*
++ * Finally tell binutils to add all the CU mappings, with duplicate FROMs
++ * replaced with the most recent one.
++ */
++static void commit_cu_mappings(void)
++{
++ ssize_t i, j;
++
++ for (i = 0; i < 256; i++)
++ for (j = 0; j < num_from_tos[i]; j++)
++ ctf_link_add_cu_mapping(output, from_tos[i][j]->from,
++ from_tos[i][j]->to);
++}
++
++/*
++ * Add a CU mapping to the link.
++ *
++ * CU mappings for built-in modules are added by suck_in_modules, below: here,
++ * we only want to add mappings for names ending in '.ko.ctf', i.e. external
++ * modules, which appear only in the filelist (since they are not built-in).
++ * The pathnames are stripped off because modules don't have any, and hyphens
++ * are translated into underscores.
++ */
++static void add_cu_mappings(const char *fn)
++{
++ const char *last_slash;
++ const char *modname = fn;
++ char *dynmodname = NULL;
++ char *dash;
++ size_t n;
++
++ last_slash = strrchr(modname, '/');
++ if (last_slash)
++ last_slash++;
++ else
++ last_slash = modname;
++ modname = last_slash;
++ if (strchr(modname, '-') != NULL)
++ {
++ dynmodname = strdup(last_slash);
++ dash = dynmodname;
++ while (dash != NULL) {
++ dash = strchr(dash, '-');
++ if (dash != NULL)
++ *dash = '_';
++ }
++ modname = dynmodname;
++ }
++
++ n = strlen(modname);
++ if (strcmp(modname + n - strlen(".ko.ctf"), ".ko.ctf") == 0) {
++ char *mod;
++
++ n -= strlen(".ko.ctf");
++ mod = strndup(modname, n);
++ add_cu_mapping(fn, mod);
++ free(mod);
++ }
++ free(dynmodname);
++}
++
++/*
++ * Add the passed names as mappings to "vmlinux".
++ */
++static void add_builtins(const char *fn)
++{
++ if (add_cu_mapping(fn, "vmlinux") < 0)
++ {
++ fprintf(stderr, "Cannot add CTF CU mapping from %s to \"vmlinux\"\n",
++ ctf_errmsg(ctf_errno(output)));
++ exit(1);
++ }
++}
++
++/*
++ * Do something with a file, line by line.
++ */
++static void suck_in_lines(const char *filename, void (*func)(const char *line))
++{
++ FILE *f;
++ char *line = NULL;
++ size_t line_size = 0;
++
++ f = fopen(filename, "r");
++ if (f == NULL) {
++ fprintf(stderr, "Cannot open %s: %s\n", filename,
++ strerror(errno));
++ exit(1);
++ }
++
++ while (getline(&line, &line_size, f) >= 0) {
++ size_t len = strlen(line);
++
++ if (len == 0)
++ continue;
++
++ if (line[len-1] == '\n')
++ line[len-1] = '\0';
++
++ func(line);
++ }
++ free(line);
++
++ if (ferror(f)) {
++ fprintf(stderr, "Error reading from %s: %s\n", filename,
++ strerror(errno));
++ exit(1);
++ }
++
++ fclose(f);
++}
++
++/*
++ * Pull in modules.builtin.objs and turn it into CU mappings.
++ */
++static void suck_in_modules(const char *modules_builtin_name)
++{
++ struct modules_builtin_iter *i;
++ char *module_name = NULL;
++ char **paths;
++
++ i = modules_builtin_iter_new(modules_builtin_name);
++ if (i == NULL) {
++ fprintf(stderr, "Cannot iterate over builtin module file.\n");
++ exit(1);
++ }
++
++ while ((paths = modules_builtin_iter_next(i, &module_name)) != NULL) {
++ size_t j;
++
++ for (j = 0; paths[j] != NULL; j++) {
++ char *alloc = NULL;
++ char *path = paths[j];
++ /*
++ * If the name doesn't start in ./, add it, to match the names
++ * passed to add_builtins.
++ */
++ if (strncmp(paths[j], "./", 2) != 0) {
++ char *p;
++ if ((alloc = malloc(strlen(paths[j]) + 3)) == NULL) {
++ fprintf(stderr, "Cannot allocate memory for "
++ "builtin module object name %s.\n",
++ paths[j]);
++ exit(1);
++ }
++ p = alloc;
++ p = stpcpy(p, "./");
++ p = stpcpy(p, paths[j]);
++ path = alloc;
++ }
++ if (add_cu_mapping(path, module_name) < 0) {
++ fprintf(stderr, "Cannot add path -> module mapping for "
++ "%s -> %s: %s\n", path, module_name,
++ ctf_errmsg(ctf_errno(output)));
++ exit(1);
++ }
++ free (alloc);
++ }
++ free(paths);
++ }
++ free(module_name);
++ modules_builtin_iter_free(i);
++}
++
++/*
++ * Strip the leading .ctf. off all the module names: transform the default name
++ * from _CTF_SECTION into shared_ctf, and chop any trailing .ctf off (since that
++ * derives from the intermediate file used to keep the CTF out of the final
++ * module).
++ */
++static char *transform_module_names(ctf_file_t *fp __attribute__((__unused__)),
++ const char *name,
++ void *arg __attribute__((__unused__)))
++{
++ if (strcmp(name, ".ctf") == 0)
++ return strdup("shared_ctf");
++
++ if (strncmp(name, ".ctf", 4) == 0) {
++ size_t n = strlen (name);
++ if (strcmp(name + n - 4, ".ctf") == 0)
++ n -= 4;
++ return strndup(name + 4, n - 4);
++ }
++ return NULL;
++}
++
++int main(int argc, char *argv[])
++{
++ int err;
++ const char *output_file;
++ unsigned char *file_data = NULL;
++ size_t file_size;
++ FILE *fp;
++
++ if (argc != 5) {
++ fprintf(stderr, "Syntax: ctfarchive output-file objects.builtin modules.builtin\n");
++ fprintf(stderr, " filelist\n");
++ exit(1);
++ }
++
++ output_file = argv[1];
++
++ /*
++ * First pull in the input files and add them to the link.
++ */
++
++ output = ctf_create(&err);
++ if (!output) {
++ fprintf(stderr, "Cannot create output CTF archive: %s\n",
++ ctf_errmsg(err));
++ return 1;
++ }
++
++ suck_in_lines(argv[4], add_to_link);
++
++ /*
++ * Make sure that, even if all their types are shared, all modules have
++ * a ctf member that can be used as a child of the shared CTF.
++ */
++ suck_in_lines(argv[4], add_cu_mappings);
++
++ /*
++ * Then pull in the builtin objects list and add them as
++ * mappings to "vmlinux".
++ */
++
++ suck_in_lines(argv[2], add_builtins);
++
++ /*
++ * Finally, pull in the object -> module mapping and add it
++ * as appropriate mappings.
++ */
++ suck_in_modules(argv[3]);
++
++ /*
++ * Commit the added CU mappings.
++ */
++ commit_cu_mappings();
++
++ /*
++ * Arrange to fix up the module names.
++ */
++ ctf_link_set_memb_name_changer(output, transform_module_names, NULL);
++
++ /*
++ * Do the link.
++ */
++ if (ctf_link(output, CTF_LINK_SHARE_DUPLICATED |
++ CTF_LINK_EMPTY_CU_MAPPINGS) < 0)
++ goto ctf_err;
++
++ /*
++ * Write the output.
++ */
++
++ file_data = ctf_link_write(output, &file_size, 4096);
++ if (!file_data)
++ goto ctf_err;
++
++ fp = fopen(output_file, "w");
++ if (!fp)
++ goto err;
++
++ while ((err = fwrite(file_data, file_size, 1, fp)) == 0);
++ if (ferror(fp)) {
++ goto err;
++ }
++ if (fclose(fp) < 0)
++ goto err;
++ free(file_data);
++ ctf_file_close(output);
++
++ return 0;
++err:
++ free(file_data);
++ fprintf(stderr, "Cannot create output CTF archive: %s\n",
++ strerror(errno));
++ return 1;
++ctf_err:
++ fprintf(stderr, "Cannot create output CTF archive: %s\n",
++ ctf_errmsg(ctf_errno(output)));
++ return 1;
++}
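As wired up by cmd_ctfa_raw in scripts/Makefile.ctfa above, the four arguments correspond to argv[1]..argv[4] in main():

    scripts/ctf/ctfarchive .tmp_vmlinux.ctfa.raw .tmp_objects.builtin \
        modules.builtin.objs .tmp_ctf.filelist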
+diff --git a/scripts/ctf/modules_builtin.c b/scripts/ctf/modules_builtin.c
+new file mode 100644
+index 0000000000000..10af2bbc80e0c
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.c
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.c"
+diff --git a/scripts/ctf/modules_builtin.h b/scripts/ctf/modules_builtin.h
+new file mode 100644
+index 0000000000000..5e0299e5600c2
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.h
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.h"
+diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..b9ec761b3befc
+--- /dev/null
++++ b/scripts/generate_builtin_ranges.awk
+@@ -0,0 +1,508 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# generate_builtin_ranges.awk: Generate address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: generate_builtin_ranges.awk modules.builtin vmlinux.map \
++# vmlinux.o.map > modules.builtin.ranges
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++ if (fn in omod)
++ return omod[fn];
++
++ if (match(fn, /\/[^/]+$/) == 0)
++ return "";
++
++ obj = fn;
++ mod = "";
++ fn = substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++ if (getline s <fn == 1) {
++ if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++ mod = substr(s, RSTART + 16, RLENGTH - 16);
++ gsub(/['"]/, "", mod);
++ } else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++ mod = substr(s, RSTART + 13, RLENGTH - 13);
++ }
++ close(fn);
++
++ # A single module (common case) also reflects objects that are not part
++ # of a module. Some of those objects have names that are also a module
++ # name (e.g. core). We check the associated module file name, and if
++ # they do not match, the object is not part of a module.
++ if (mod !~ / /) {
++ if (!(mod in mods))
++ mod = "";
++ }
++
++ gsub(/([^/ ]*\/)+/, "", mod);
++ gsub(/-/, "_", mod);
++
++ # At this point, mod is a single (valid) module name, or a list of
++ # module names (that do not need validation).
++ omod[obj] = mod;
++
++ return mod;
++}
++
++# Update the ranges entry for the given module 'mod' in section 'osect'.
++#
++# We use a modified absolute start address (soff + base) as index because we
++# may need to insert an anchor record later that must be at the start of the
++# section data, and the first module may very well start at the same address.
++# So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
++# (addr << 1). This is safe because the index is only used to sort the entries
++# before writing them out.
++#
++function update_entry(osect, mod, soff, eoff, sect, idx) {
++ sect = sect_in[osect];
++ idx = sprintf("%016x", (soff + sect_base[osect]) * 2 + 1);
++ entries[idx] = sprintf("%s %08x-%08x %s", sect, soff, eoff, mod);
++ count[sect]++;
++}
++
++# (1) Build a lookup map of built-in module names.
++#
++# The first file argument is used as input (modules.builtin).
++#
++# Lines will be like:
++# kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 1 {
++ sub(/kernel\//, ""); # strip off "kernel/" prefix
++ sub(/\.ko$/, ""); # strip off .ko suffix
++
++ mods[$1] = 1;
++ next;
++}
++
++# (2) Collect address information for each section.
++#
++# The second file argument is used as input (vmlinux.map).
++#
++# We collect the base address of the section in order to convert all addresses
++# in the section into offset values.
++#
++# We collect the address of the anchor (or first symbol in the section if there
++# is no explicit anchor) to allow users of the range data to calculate address
++# ranges based on the actual load address of the section in the running kernel.
++#
++# We collect the start address of any sub-section (section included in the top
++# level section being processed). This is needed when the final linking was
++# done using vmlinux.a because then the list of objects contained in each
++# section is to be obtained from vmlinux.o.map. The offset of the sub-section
++# is recorded here, to be used as an addend when processing vmlinux.o.map
++# later.
++#
++
++# Both GNU ld and LLVM lld linker map format are supported by converting LLVM
++# lld linker map records into equivalent GNU ld linker map records.
++#
++# The first record of the vmlinux.map file provides enough information to know
++# which format we are dealing with.
++#
++ARGIND == 2 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++ map_is_lld = 1;
++ if (dbg)
++ printf "NOTE: %s uses LLVM lld linker map format\n", FILENAME >"/dev/stderr";
++ next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++# lld: ffffffff82c00000 2c00000 2493c0 8192 .data
++# ->
++# ld: .data 0xffffffff82c00000 0x2493c0 load address 0x0000000002c00000
++#
++ARGIND == 2 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
++ $0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an anchor record from lld format to ld format.
++#
++# lld: ffffffff81000000 1000000 0 1 _text = .
++# ->
++# ld: 0xffffffff81000000 _text = .
++#
++ARGIND == 2 && map_is_lld && !anchor && NF == 7 && raw_addr == "0x"$1 && $6 == "=" && $7 == "." {
++ $0 = " 0x"$1 " " $5 " = .";
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++# lld: 11480 11480 1f07 16 vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
++# ->
++# ld: .text 0x0000000000011480 0x1f07 arch/x86/events/amd/uncore.o
++#
++ARGIND == 2 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++ gsub(/\)/, "");
++ sub(/ vmlinux\.a\(/, " ");
++ sub(/:\(/, " ");
++ $0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++# We only care about these while processing a section for which no anchor has
++# been determined yet.
++#
++# lld: ffffffff82a859a4 2a859a4 0 1 btf_ksym_iter_id
++# ->
++# ld: 0xffffffff82a859a4 btf_ksym_iter_id
++#
++ARGIND == 2 && map_is_lld && sect && !anchor && NF == 5 && $5 ~ /^[_A-Za-z][_A-Za-z0-9]*$/ {
++ $0 = " 0x"$1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND == 2 && map_is_lld && /^[0-9a-f]{16} / {
++ next;
++}
++
++# (LD) Section records with just the section name at the start of the line
++# need to have the next line pulled in to determine whether it is a
++# loadable section. If it is, the next line will contain a hex value
++# as first and second items.
++#
++ARGIND == 2 && !map_is_lld && NF == 1 && /^[^ ]/ {
++ s = $0;
++ getline;
++ if ($1 !~ /^0x/ || $2 !~ /^0x/)
++ next;
++
++ $0 = s " " $0;
++}
++
++# (LD) Object records with just the section name denote records with a long
++# section name for which the remainder of the record can be found on the
++# next line.
++#
++# (This is also needed for vmlinux.o.map, when used.)
++#
++ARGIND >= 2 && !map_is_lld && NF == 1 && /^ [^ \*]/ {
++ s = $0;
++ getline;
++ $0 = s " " $0;
++}
++
++# Beginning a new section - done with the previous one (if any).
++#
++ARGIND == 2 && /^[^ ]/ {
++ sect = 0;
++}
++
++# Process a loadable section (we only care about .-sections).
++#
++# Record the section name and its base address.
++# We also record the raw (non-stripped) address of the section because it can
++# be used to identify an anchor record.
++#
++# Note:
++# Since some AWK implementations cannot handle large integers, we strip off the
++# first 4 hex digits from the address. This is safe because the kernel space
++# is not large enough for addresses to extend into those digits. The portion
++# to strip off is stored in addr_prefix as a regexp, so further clauses can
++# perform a simple substitution to do the address stripping.
++#
++ARGIND == 2 && /^\./ {
++ # Explicitly ignore a few sections that are not relevant here.
++ if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
++ next;
++
++ # Sections with a 0-address can be ignored as well.
++ if ($2 ~ /^0x0+$/)
++ next;
++
++ raw_addr = $2;
++ addr_prefix = "^" substr($2, 1, 6);
++ base = $2;
++ sub(addr_prefix, "0x", base);
++ base = strtonum(base);
++ sect = $1;
++ anchor = 0;
++ sect_base[sect] = base;
++ sect_size[sect] = strtonum($3);
++
++ if (dbg)
++ printf "[%s] BASE %016x\n", sect, base >"/dev/stderr";
++
++ next;
++}
++
++# If we are not in a section we care about, we ignore the record.
++#
++ARGIND == 2 && !sect {
++ next;
++}
++
++# Record the first anchor symbol for the current section.
++#
++# An anchor record for the section bears the same raw address as the section
++# record.
++#
++ARGIND == 2 && !anchor && NF == 4 && raw_addr == $1 && $3 == "=" && $4 == "." {
++ anchor = sprintf("%s %08x-%08x = %s", sect, 0, 0, $2);
++ sect_anchor[sect] = anchor;
++
++ if (dbg)
++ printf "[%s] ANCHOR %016x = %s (.)\n", sect, 0, $2 >"/dev/stderr";
++
++ next;
++}
++
++# If no anchor record was found for the current section, use the first symbol
++# in the section as anchor.
++#
++ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
++ addr = $1;
++ sub(addr_prefix, "0x", addr);
++ addr = strtonum(addr) - base;
++ anchor = sprintf("%s %08x-%08x = %s", sect, addr, addr, $2);
++ sect_anchor[sect] = anchor;
++
++ if (dbg)
++ printf "[%s] ANCHOR %016x = %s\n", sect, addr, $2 >"/dev/stderr";
++
++ next;
++}
++
++# The first occurrence of a section name in an object record establishes the
++# addend (often 0) for that section. This information is needed to handle
++# sections that get combined in the final linking of vmlinux (e.g. .head.text
++# getting included at the start of .text).
++#
++# If the section does not have a base yet, use the base of the encapsulating
++# section.
++#
++ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
++ if (!($1 in sect_base)) {
++ sect_base[$1] = base;
++
++ if (dbg)
++ printf "[%s] BASE %016x\n", $1, base >"/dev/stderr";
++ }
++
++ addr = $2;
++ sub(addr_prefix, "0x", addr);
++ addr = strtonum(addr);
++ sect_addend[$1] = addr - sect_base[$1];
++ sect_in[$1] = sect;
++
++ if (dbg)
++ printf "[%s] ADDEND %016x - %016x = %016x\n", $1, addr, base, sect_addend[$1] >"/dev/stderr";
++
++ # If the object is vmlinux.o then we will need vmlinux.o.map to get the
++ # actual offsets of objects.
++ if ($4 == "vmlinux.o")
++ need_o_map = 1;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++# If the final link was done using the actual objects, vmlinux.map contains all
++# the information we need (see section (3a)).
++# If linking was done using vmlinux.a as intermediary, we will need to process
++# vmlinux.o.map (see section (3b)).
++
++# (3a) Determine offset range info using vmlinux.map.
++#
++# Since we are already processing vmlinux.map, the top level section that is
++# being processed is already known. If we do not have a base address for it,
++# we do not need to process records for it.
++#
++# Given the object name, we determine the module(s) (if any) that the current
++# object is associated with.
++#
++# If we were already processing objects for a (list of) module(s):
++# - If the current object belongs to the same module(s), update the range data
++# to include the current object.
++# - Otherwise, ensure that the end offset of the range is valid.
++#
++# If the current object does not belong to a built-in module, ignore it.
++#
++# If it does, we add a new built-in module offset range record.
++#
++ARGIND == 2 && !need_o_map && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++ if (!(sect in sect_base))
++ next;
++
++ # Turn the address into an offset from the section base.
++ soff = $2;
++ sub(addr_prefix, "0x", soff);
++ soff = strtonum(soff) - sect_base[sect];
++ eoff = soff + strtonum($3);
++
++ # Determine which (if any) built-in modules the object belongs to.
++ mod = get_module_info($4);
++
++ # If we are processing a built-in module:
++ # - If the current object is within the same module, we update its
++ # entry by extending the range and move on
++ # - Otherwise:
++ # + If we are still processing within the same main section, we
++ # validate the end offset against the start offset of the
++ # current object (e.g. .rodata.str1.[18] objects are often
++ # listed with an incorrect size in the linker map)
++ # + Otherwise, we validate the end offset against the section
++ # size
++ if (mod_name) {
++ if (mod == mod_name) {
++ mod_eoff = eoff;
++ update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++ next;
++ } else if (sect == sect_in[mod_sect]) {
++ if (mod_eoff > soff)
++ update_entry(mod_sect, mod_name, mod_soff, soff);
++ } else {
++ v = sect_size[sect_in[mod_sect]];
++ if (mod_eoff > v)
++ update_entry(mod_sect, mod_name, mod_soff, v);
++ }
++ }
++
++ mod_name = mod;
++
++ # If we encountered an object that is not part of a built-in module, we
++ # do not need to record any data.
++ if (!mod)
++ next;
++
++ # At this point, we encountered the start of a new built-in module.
++ mod_name = mod;
++ mod_soff = soff;
++ mod_eoff = eoff;
++ mod_sect = $1;
++ update_entry($1, mod, soff, mod_eoff);
++
++ next;
++}
++
++# If we do not need to parse the vmlinux.o.map file, we are done.
++#
++ARGIND == 3 && !need_o_map {
++ if (dbg)
++ printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++ exit;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND == 3 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++ gsub(/\)/, "");
++ sub(/:\(/, " ");
++
++ sect = $6;
++ if (!(sect in sect_addend))
++ next;
++
++ sub(/ vmlinux\.a\(/, " ");
++ $0 = " "sect " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (3b) Determine offset range info using vmlinux.o.map.
++#
++# If we do not know an addend for the object's section, we are not interested
++# in anything within that section, so skip it.
++#
++# Determine the top-level section that the object's section was included in
++# during the final link. This is the section name that the offset range data
++# will be associated with for this object.
++#
++# The remainder of the processing of the current object record follows the
++# procedure outlined in (3a).
++#
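++# For illustration (hypothetical values): if .text.unlikely was placed inside
++# .text during the final link with sect_addend[".text.unlikely"] = 0x200000,
++# an object record at address 0x1000 yields soff = 0x201000 relative to the
++# .text base.
++#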
++ARGIND == 3 && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++ osect = $1;
++ if (!(osect in sect_addend))
++ next;
++
++ # We need to work with the main section.
++ sect = sect_in[osect];
++
++ # Turn the address into an offset from the section base.
++ soff = $2;
++ sub(addr_prefix, "0x", soff);
++ soff = strtonum(soff) + sect_addend[osect];
++ eoff = soff + strtonum($3);
++
++ # Determine which (if any) built-in modules the object belongs to.
++ mod = get_module_info($4);
++
++ # If we are processing a built-in module:
++ # - If the current object is within the same module, we update its
++ # entry by extending the range and move on
++ # - Otherwise:
++ # + If we are still processing within the same main section, we
++ # validate the end offset against the start offset of the
++ # current object (e.g. .rodata.str1.[18] objects are often
++ # listed with an incorrect size in the linker map)
++ # + Otherwise, we validate the end offset against the section
++ # size
++ if (mod_name) {
++ if (mod == mod_name) {
++ mod_eoff = eoff;
++ update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++ next;
++ } else if (sect == sect_in[mod_sect]) {
++ if (mod_eoff > soff)
++ update_entry(mod_sect, mod_name, mod_soff, soff);
++ } else {
++ v = sect_size[sect_in[mod_sect]];
++ if (mod_eoff > v)
++ update_entry(mod_sect, mod_name, mod_soff, v);
++ }
++ }
++
++ mod_name = mod;
++
++ # If we encountered an object that is not part of a built-in module, we
++ # do not need to record any data.
++ if (!mod)
++ next;
++
++ # At this point, we encountered the start of a new built-in module.
++ mod_name = mod;
++ mod_soff = soff;
++ mod_eoff = eoff;
++ mod_sect = osect;
++ update_entry(osect, mod, soff, mod_eoff);
++
++ next;
++}
++
++# (4) Generate the output.
++#
++# Anchor records are added for each section that contains offset range data
++# records. They are added at an adjusted section base address (base << 1) to
++# ensure they come first in the sorted records (see update_entry() above for
++# more information).
++#
++# All entries are sorted by (adjusted) address to ensure that the output can be
++# parsed in strict ascending address order.
++#
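++# Note that the sort keys are fixed-width 16-digit hex strings, so the
++# lexicographic order produced by asorti() matches numeric address order
++# (e.g. "00000000001fffff" sorts before "0000000000200000").
++#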
++END {
++ for (sect in count) {
++ if (sect in sect_anchor) {
++ idx = sprintf("%016x", sect_base[sect] * 2);
++ entries[idx] = sect_anchor[sect];
++ }
++ }
++
++ n = asorti(entries, indices);
++ for (i = 1; i <= n; i++)
++ print entries[indices[i]];
++}
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index d16d0ace27751..7bf2477ded5b4 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -727,6 +727,7 @@ static const char *const section_white_list[] =
+ ".comment*",
+ ".debug*",
+ ".zdebug*", /* Compressed debug sections. */
++ ".ctf", /* Type info */
+ ".GCC.command.line", /* record-gcc-switches */
+ ".mdebug*", /* alpha, score, mips etc. */
+ ".pdr", /* alpha, score, mips etc. */
+diff --git a/scripts/modules_builtin.c b/scripts/modules_builtin.c
+new file mode 100644
+index 0000000000000..df52932a4417b
+--- /dev/null
++++ b/scripts/modules_builtin.c
+@@ -0,0 +1,200 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules_builtin reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc. All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++
++#include "modules_builtin.h"
++
++/*
++ * Read a modules.builtin.objs file and translate it into a stream of
++ * name / module-name pairs.
++ */
++
++/*
++ * Construct a modules.builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file)
++{
++ struct modules_builtin_iter *i;
++
++ i = calloc(1, sizeof(struct modules_builtin_iter));
++ if (i == NULL)
++ return NULL;
++
++ i->f = fopen(modules_builtin_file, "r");
++
++ if (i->f == NULL) {
++ fprintf(stderr, "Cannot open builtin module file %s: %s\n",
++ modules_builtin_file, strerror(errno));
++ return NULL;
++ }
++
++ return i;
++}
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name. (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
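++
++/*
++ * A hedged usage sketch (not part of this file; the input file name is
++ * illustrative). The returned array is freed by the caller, the strings it
++ * points to are not, and the final module name is freed after the loop:
++ *
++ *	struct modules_builtin_iter *it;
++ *	char *mod = NULL;
++ *	char **objs;
++ *	size_t j;
++ *
++ *	it = modules_builtin_iter_new("modules.builtin.objs");
++ *	while (it && (objs = modules_builtin_iter_next(it, &mod)) != NULL) {
++ *		for (j = 0; objs[j] != NULL; j++)
++ *			printf("%s -> %s\n", objs[j], mod);
++ *		free(objs);
++ *	}
++ *	free(mod);
++ *	modules_builtin_iter_free(it);
++ */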
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name)
++{
++ size_t npaths = 1;
++ char **module_paths;
++ char *last_slash;
++ char *last_dot;
++ char *trailing_linefeed;
++ char *object_name = i->line;
++ char *dash;
++ int composite = 0;
++
++ /*
++ * Read in all module entries, computing the suffixless, pathless name
++ * of the module and building the next arrayful of object file names for
++ * return.
++ *
++ * Modules can consist of multiple files: in this case, the portion
++ * before the colon is the path to the module (as before): the portion
++ * after the colon is a space-separated list of files that should be
++ * considered part of this module. In this case, the portion before the
++ * colon is an "object file" that does not actually exist: it is merged
++ * into built-in.a without ever being written out.
++ *
++ * All module names have - translated to _, to match what is done to the
++ * names of the same things when built as modules.
++ */
++
++ /*
++ * Reinvocation of exhausted iterator. Return NULL, once.
++ */
++retry:
++ if (getline(&i->line, &i->line_size, i->f) < 0) {
++ if (ferror(i->f)) {
++ fprintf(stderr, "Error reading from modules_builtin file:"
++ " %s\n", strerror(errno));
++ exit(1);
++ }
++ rewind(i->f);
++ return NULL;
++ }
++
++ if (i->line[0] == '\0')
++ goto retry;
++
++ trailing_linefeed = strchr(i->line, '\n');
++ if (trailing_linefeed != NULL)
++ *trailing_linefeed = '\0';
++
++ /*
++ * Slice the line in two at the colon, if any. If there is anything
++ * past the ': ', this is a composite module. (We allow for no colon
++ * for robustness, even though one should always be present.)
++ */
++ if (strchr(i->line, ':') != NULL) {
++ char *name_start;
++
++ object_name = strchr(i->line, ':');
++ *object_name = '\0';
++ object_name++;
++ name_start = object_name + strspn(object_name, " \n");
++ if (*name_start != '\0') {
++ composite = 1;
++ object_name = name_start;
++ }
++ }
++
++ /*
++ * Figure out the module name.
++ */
++ last_slash = strrchr(i->line, '/');
++ last_slash = (!last_slash) ? i->line :
++ last_slash + 1;
++ free(*module_name);
++ *module_name = strdup(last_slash);
++ dash = *module_name;
++
++ while (dash != NULL) {
++ dash = strchr(dash, '-');
++ if (dash != NULL)
++ *dash = '_';
++ }
++
++ last_dot = strrchr(*module_name, '.');
++ if (last_dot != NULL)
++ *last_dot = '\0';
++
++ /*
++ * Multifile separator? Object file names explicitly stated:
++ * slice them up and shuffle them in.
++ *
++ * The array size may be an overestimate if any object file
++ * names start or end with spaces (very unlikely) but cannot be
++ * an underestimate. (Check for it anyway.)
++ */
++ if (composite) {
++ char *one_object;
++
++ for (npaths = 0, one_object = object_name;
++ one_object != NULL;
++ npaths++, one_object = strchr(one_object + 1, ' '));
++ }
++
++ module_paths = malloc((npaths + 1) * sizeof(char *));
++ if (!module_paths) {
++ fprintf(stderr, "%s: out of memory on module %s\n", __func__,
++ *module_name);
++ exit(1);
++ }
++
++ if (composite) {
++ char *one_object;
++ size_t i = 0;
++
++ while ((one_object = strsep(&object_name, " ")) != NULL) {
++ if (i >= npaths) {
++ fprintf(stderr, "%s: num_objs overflow on module "
++ "%s: this is a bug.\n", __func__,
++ *module_name);
++ exit(1);
++ }
++
++ module_paths[i++] = one_object;
++ }
++ } else
++ module_paths[0] = i->line; /* untransformed module name */
++
++ module_paths[npaths] = NULL;
++
++ return module_paths;
++}
++
++/*
++ * Free an iterator. Can be called while iteration is underway, so even
++ * state that is freed at the end of iteration must be freed here too.
++ */
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i)
++{
++ if (i == NULL)
++ return;
++ fclose(i->f);
++ free(i->line);
++ free(i);
++}
+diff --git a/scripts/modules_builtin.h b/scripts/modules_builtin.h
+new file mode 100644
+index 0000000000000..5138792b42ef0
+--- /dev/null
++++ b/scripts/modules_builtin.h
+@@ -0,0 +1,48 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules.builtin.objs reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc. All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#ifndef _LINUX_MODULES_BUILTIN_H
++#define _LINUX_MODULES_BUILTIN_H
++
++#include <stdio.h>
++#include <stddef.h>
++
++/*
++ * modules.builtin.objs iteration state.
++ */
++struct modules_builtin_iter {
++ FILE *f;
++ char *line;
++ size_t line_size;
++};
++
++/*
++ * Construct a modules.builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file);
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name. (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name);
++
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i);
++
++#endif
+diff --git a/scripts/package/kernel.spec b/scripts/package/kernel.spec
+index ac3e5ac01d8a4..fc0e9e51529c1 100644
+--- a/scripts/package/kernel.spec
++++ b/scripts/package/kernel.spec
+@@ -53,12 +53,18 @@ patch -p1 < %{SOURCE2}
+
+ %build
+ %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release}
++%if %{with_ctf}
++%{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} ctf
++%endif
+
+ %install
+ mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz
+ # DEPMOD=true makes depmod no-op. We do not package depmod-generated files.
+ %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install
++%if %{with_ctf}
++%{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} ctf_install
++%endif
+ %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
+ cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index 4dc1466dfc815..ddbdefbc538b5 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -23,6 +23,11 @@ else
+ echo '%define with_devel 0'
+ fi
+
++if grep -q CONFIG_CTF=y include/config/auto.conf; then
++echo '%define with_ctf %{?_without_ctf: 0} %{?!_without_ctf: 1}'
++else
++echo '%define with_ctf 0'
++fi
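++# Note: with the %define above, a hypothetical "rpmbuild --without ctf" run
++# defines _without_ctf, so with_ctf evaluates to 0 and the CTF sections in
++# kernel.spec are skipped.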
+ cat<<EOF
+ %define ARCH ${ARCH}
+ %define KERNELRELEASE ${KERNELRELEASE}
+diff --git a/scripts/remove-ctf-lds.awk b/scripts/remove-ctf-lds.awk
+new file mode 100644
+index 0000000000000..5d94d6ee99227
+--- /dev/null
++++ b/scripts/remove-ctf-lds.awk
+@@ -0,0 +1,12 @@
++# SPDX-License-Identifier: GPL-2.0
++# See Makefile.vmlinux_o
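++#
++# For illustration: within the dumped linker script (the part after the
++# "====" separator), a hypothetical line such as
++#	/DISCARD/ : { *(.eh_frame) }
++# is rewritten to
++#	/DISCARD/ : { *(.eh_frame) *(.ctf) }
++# and if no /DISCARD/ rule is present, one discarding *(.ctf) is emitted just
++# before the closing brace of the script.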
++
++BEGIN {
++ discards = 0; p = 0
++}
++
++/^====/ { p = 1; next; }
++p && /\.ctf/ { next; }
++p && !discards && /DISCARD/ { sub(/\} *$/, " *(.ctf) }"); discards = 1 }
++p && /^\}/ && !discards { print " /DISCARD/ : { *(.ctf) }"; }
++p { print $0; }
+diff --git a/scripts/verify_builtin_ranges.awk b/scripts/verify_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..0de7ed5216011
+--- /dev/null
++++ b/scripts/verify_builtin_ranges.awk
+@@ -0,0 +1,370 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# verify_builtin_ranges.awk: Verify address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: verify_builtin_ranges.awk modules.builtin.ranges System.map \
++# modules.builtin vmlinux.map vmlinux.o.map
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
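++# For example (illustrative): for fn = "drivers/foo/bar.o", the command file
++# consulted is "drivers/foo/.bar.o.cmd", and a KBUILD_MODFILE= value found in
++# it names the module the object belongs to.
++#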
++function get_module_info(fn, mod, obj, s) {
++ if (fn in omod)
++ return omod[fn];
++
++ if (match(fn, /\/[^/]+$/) == 0)
++ return "";
++
++ obj = fn;
++ mod = "";
++ fn = substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++ if (getline s <fn == 1) {
++ if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++ mod = substr(s, RSTART + 16, RLENGTH - 16);
++ gsub(/['"]/, "", mod);
++ } else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++ mod = substr(s, RSTART + 13, RLENGTH - 13);
++ } else {
++ print "ERROR: Failed to read: " fn "\n\n" \
++ " For kernels built with O=<objdir>, cd to <objdir>\n" \
++ " and execute this script as ./source/scripts/..." \
++ >"/dev/stderr";
++ close(fn);
++ total = 0;
++ exit(1);
++ }
++ close(fn);
++
++ # A single module (common case) also reflects objects that are not part
++ # of a module. Some of those objects have names that are also a module
++ # name (e.g. core). We check the associated module file name, and if
++ # they do not match, the object is not part of a module.
++ if (mod !~ / /) {
++ if (!(mod in mods))
++ mod = "";
++ }
++
++ gsub(/([^/ ]*\/)+/, "", mod);
++ gsub(/-/, "_", mod);
++
++ # At this point, mod is a single (valid) module name, or a list of
++ # module names (that do not need validation).
++ omod[obj] = mod;
++
++ return mod;
++}
++
++# Return a representative integer value for a given hexadecimal address.
++#
++# Since all kernel addresses fall within the same memory region, we can safely
++# strip off the first 4 hex digits before performing the hex-to-dec conversion,
++# thereby avoiding integer overflows.
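++#
++# e.g. addr2val("0xffffffff81000000") drops the leading "ffff" and converts
++# "ffff81000000", a 48-bit value that awk's numeric type represents exactly.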
++#
++function addr2val(val) {
++ sub(/^0x/, "", val);
++ if (length(val) == 16)
++ val = substr(val, 5);
++ return strtonum("0x" val);
++}
++
++# Verify that all required input files were specified.
++#
++BEGIN {
++ if (ARGC < 6) {
++ print "Syntax: verify_builtin_ranges.awk <ranges-file> <system-map>\n" \
++ " <builtin-file> <vmlinux-map> <vmlinux-o-map>\n" \
++ >"/dev/stderr";
++ total = 0;
++ exit(1);
++ }
++}
++
++# (1) Load the built-in module address range data.
++#
++ARGIND == 1 {
++ ranges[FNR] = $0;
++ rcnt++;
++ next;
++}
++
++# (2) Annotate System.map symbols with module names.
++#
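++# Range records loaded in (1) are expected to look like (values illustrative):
++#
++#	.text 00000000-00000000 = _text
++#	.text 0000baf0-0000cb10 amd_uncore
++#
++# The "=" form anchors a section to a System.map symbol; the other form maps
++# an offset range within that section to a built-in module.
++#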
++ARGIND == 2 {
++ addr = addr2val($1);
++ name = $3;
++
++ while (addr >= mod_eaddr) {
++ if (sect_symb) {
++ if (sect_symb != name)
++ next;
++
++ sect_base = addr - sect_off;
++ if (dbg)
++ printf "[%s] BASE (%s) %016x - %016x = %016x\n", sect_name, sect_symb, addr, sect_off, sect_base >"/dev/stderr";
++ sect_symb = 0;
++ }
++
++ if (++ridx > rcnt)
++ break;
++
++ $0 = ranges[ridx];
++ sub(/-/, " ");
++ if ($4 != "=") {
++ sub(/-/, " ");
++ mod_saddr = strtonum("0x" $2) + sect_base;
++ mod_eaddr = strtonum("0x" $3) + sect_base;
++ $1 = $2 = $3 = "";
++ sub(/^ +/, "");
++ mod_name = $0;
++
++ if (dbg)
++ printf "[%s] %s from %016x to %016x\n", sect_name, mod_name, mod_saddr, mod_eaddr >"/dev/stderr";
++ } else {
++ sect_name = $1;
++ sect_off = strtonum("0x" $2);
++ sect_symb = $5;
++ }
++ }
++
++ idx = addr"-"name;
++ if (addr >= mod_saddr && addr < mod_eaddr)
++ sym2mod[idx] = mod_name;
++
++ next;
++}
++
++# Once we are done annotating the System.map, we no longer need the ranges data.
++#
++FNR == 1 && ARGIND == 3 {
++ delete ranges;
++}
++
++# (3) Build a lookup map of built-in module names.
++#
++# Lines from modules.builtin will be like:
++# kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 3 {
++ sub(/kernel\//, ""); # strip off "kernel/" prefix
++ sub(/\.ko$/, ""); # strip off .ko suffix
++
++ mods[$1] = 1;
++ next;
++}
++
++# (4) Get a list of symbols (per object).
++#
++# Symbols by object are read from vmlinux.map, with fallback to vmlinux.o.map
++# if vmlinux is found to have linked in vmlinux.o.
++#
++
++# If we were able to get the data we need from vmlinux.map, there is no need to
++# process vmlinux.o.map.
++#
++FNR == 1 && ARGIND == 5 && total > 0 {
++ if (dbg)
++ printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++ exit;
++}
++
++# First determine whether we are dealing with a GNU ld or LLVM lld linker map.
++#
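++# An lld linker map starts with a 7-field header line of the form:
++#
++#	VMA LMA Size Align Out In Symbol
++#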
++ARGIND >= 4 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++ map_is_lld = 1;
++ next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
++ $0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++ if (/\.a\(/ && !/ vmlinux\.a\(/)
++ next;
++
++ gsub(/\)/, "");
++ sub(/:\(/, " ");
++ sub(/ vmlinux\.a\(/, " ");
++ $0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ {
++ $0 = " 0x" $1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND >= 4 && map_is_lld && /^[0-9a-f]{16} / {
++ next;
++}
++
++# Handle section records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && !map_is_lld && NF == 1 && /^[^ ]/ {
++ s = $0;
++ getline;
++ $0 = s " " $0;
++}
++
++# Next section - previous one is done.
++#
++ARGIND >= 4 && /^[^ ]/ {
++ sect = 0;
++}
++
++# Get the (top level) section name.
++#
++ARGIND >= 4 && /^\./ {
++ # Explicitly ignore a few sections that are not relevant here.
++ if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
++ next;
++
++ # Sections with a 0-address can be ignored as well (in vmlinux.map).
++ if (ARGIND == 4 && $2 ~ /^0x0+$/)
++ next;
++
++ sect = $1;
++
++ next;
++}
++
++# If we are not currently in a section we care about, ignore records.
++#
++!sect {
++ next;
++}
++
++# Handle object records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && /^ [^ \*]/ && NF == 1 {
++ # If the section name is long, the remainder of the entry is found on
++ # the next line.
++ s = $0;
++ getline;
++ $0 = s " " $0;
++}
++
++# Objects linked in from static libraries are ignored.
++# If the object is vmlinux.o, we need to consult vmlinux.o.map for per-object
++# symbol information.
++#
++ARGIND == 4 && /^ [^ ]/ && NF == 4 {
++ if ($4 ~ /\.a\(/)
++ next;
++
++ idx = sect":"$1;
++ if (!(idx in sect_addend)) {
++ sect_addend[idx] = addr2val($2);
++ if (dbg)
++ printf "ADDEND %s = %016x\n", idx, sect_addend[idx] >"/dev/stderr";
++ }
++ if ($4 == "vmlinux.o") {
++ need_o_map = 1;
++ next;
++ }
++}
++
++# If data from vmlinux.o.map is needed, we only process section and object
++# records from vmlinux.map to determine which section we need to pay attention
++# to in vmlinux.o.map. So skip everything else from vmlinux.map.
++#
++ARGIND == 4 && need_o_map {
++ next;
++}
++
++# Get module information for the current object.
++#
++ARGIND >= 4 && /^ [^ ]/ && NF == 4 {
++ msect = $1;
++ mod_name = get_module_info($4);
++ mod_eaddr = addr2val($2) + addr2val($3);
++
++ next;
++}
++
++# Process a symbol record.
++#
++# Evaluate the module information obtained from vmlinux.map (or vmlinux.o.map)
++# as follows:
++# - For all symbols in a given object:
++# - If the symbol is annotated with the same module name(s) that the object
++# belongs to, count it as a match.
++# - Otherwise:
++# - If the symbol is known to have duplicates of which at least one is
++# in a built-in module, disregard it.
++#    - If the symbol is not annotated with any module name(s) AND the
++# object belongs to built-in modules, count it as missing.
++# - Otherwise, count it as a mismatch.
++#
++ARGIND >= 4 && /^ / && NF == 2 && $1 ~ /^0x/ {
++ idx = sect":"msect;
++ if (!(idx in sect_addend))
++ next;
++
++ addr = addr2val($1);
++
++ # Handle the rare but annoying case where a 0-size symbol is placed at
++ # the byte *after* the module range. Based on vmlinux.map it will be
++ # considered part of the current object, but it falls just beyond the
++ # module address range. Unfortunately, its address could be at the
++ # start of another built-in module, so the only safe thing to do is to
++ # ignore it.
++ if (mod_name && addr == mod_eaddr)
++ next;
++
++ # If we are processing vmlinux.o.map, we need to apply the base address
++ # of the section to the relative address on the record.
++ #
++ if (ARGIND == 5)
++ addr += sect_addend[idx];
++
++ idx = addr"-"$2;
++ mod = "";
++ if (idx in sym2mod) {
++ mod = sym2mod[idx];
++ if (sym2mod[idx] == mod_name) {
++ mod_matches++;
++ matches++;
++ } else if (mod_name == "") {
++ print $2 " in " mod " (should NOT be)";
++ mismatches++;
++ } else {
++ print $2 " in " mod " (should be " mod_name ")";
++ mismatches++;
++ }
++ } else if (mod_name != "") {
++ print $2 " should be in " mod_name;
++ missing++;
++ } else
++ matches++;
++
++ total++;
++
++ next;
++}
++
++# Issue the comparison report.
++#
++END {
++ if (total) {
++ printf "Verification of %s:\n", ARGV[1];
++ printf " Correct matches: %6d (%d%% of total)\n", matches, 100 * matches / total;
++ printf " Module matches: %6d (%d%% of matches)\n", mod_matches, 100 * mod_matches / matches;
++ printf " Mismatches: %6d (%d%% of total)\n", mismatches, 100 * mismatches / total;
++ printf " Missing: %6d (%d%% of total)\n", missing, 100 * missing / total;
++
++ if (mismatches || missing)
++ exit(1);
++ }
++}
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-09-30 16:02 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-09-30 16:02 UTC (permalink / raw
To: gentoo-commits
commit: 1bedb1495a59b76f8a23f009777ee490a061906a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 30 16:01:57 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 30 16:01:57 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1bedb149
Linux patch 6.11.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1000_linux-6.11.1.patch | 738 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 742 insertions(+)
diff --git a/0000_README b/0000_README
index 963defab..d99e35a4 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-6.11.1.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.1
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1000_linux-6.11.1.patch b/1000_linux-6.11.1.patch
new file mode 100644
index 00000000..810584bf
--- /dev/null
+++ b/1000_linux-6.11.1.patch
@@ -0,0 +1,738 @@
+diff --git a/Makefile b/Makefile
+index 34bd1d5f967201..df9c90cdd1c575 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/accel/drm_accel.c b/drivers/accel/drm_accel.c
+index 16c3edb8c46ee1..aa826033b0ceb9 100644
+--- a/drivers/accel/drm_accel.c
++++ b/drivers/accel/drm_accel.c
+@@ -8,7 +8,7 @@
+
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+-#include <linux/idr.h>
++#include <linux/xarray.h>
+
+ #include <drm/drm_accel.h>
+ #include <drm/drm_auth.h>
+@@ -18,8 +18,7 @@
+ #include <drm/drm_ioctl.h>
+ #include <drm/drm_print.h>
+
+-static DEFINE_SPINLOCK(accel_minor_lock);
+-static struct idr accel_minors_idr;
++DEFINE_XARRAY_ALLOC(accel_minors_xa);
+
+ static struct dentry *accel_debugfs_root;
+
+@@ -117,99 +116,6 @@ void accel_set_device_instance_params(struct device *kdev, int index)
+ kdev->type = &accel_sysfs_device_minor;
+ }
+
+-/**
+- * accel_minor_alloc() - Allocates a new accel minor
+- *
+- * This function access the accel minors idr and allocates from it
+- * a new id to represent a new accel minor
+- *
+- * Return: A new id on success or error code in case idr_alloc failed
+- */
+-int accel_minor_alloc(void)
+-{
+- unsigned long flags;
+- int r;
+-
+- spin_lock_irqsave(&accel_minor_lock, flags);
+- r = idr_alloc(&accel_minors_idr, NULL, 0, ACCEL_MAX_MINORS, GFP_NOWAIT);
+- spin_unlock_irqrestore(&accel_minor_lock, flags);
+-
+- return r;
+-}
+-
+-/**
+- * accel_minor_remove() - Remove an accel minor
+- * @index: The minor id to remove.
+- *
+- * This function access the accel minors idr and removes from
+- * it the member with the id that is passed to this function.
+- */
+-void accel_minor_remove(int index)
+-{
+- unsigned long flags;
+-
+- spin_lock_irqsave(&accel_minor_lock, flags);
+- idr_remove(&accel_minors_idr, index);
+- spin_unlock_irqrestore(&accel_minor_lock, flags);
+-}
+-
+-/**
+- * accel_minor_replace() - Replace minor pointer in accel minors idr.
+- * @minor: Pointer to the new minor.
+- * @index: The minor id to replace.
+- *
+- * This function access the accel minors idr structure and replaces the pointer
+- * that is associated with an existing id. Because the minor pointer can be
+- * NULL, we need to explicitly pass the index.
+- *
+- * Return: 0 for success, negative value for error
+- */
+-void accel_minor_replace(struct drm_minor *minor, int index)
+-{
+- unsigned long flags;
+-
+- spin_lock_irqsave(&accel_minor_lock, flags);
+- idr_replace(&accel_minors_idr, minor, index);
+- spin_unlock_irqrestore(&accel_minor_lock, flags);
+-}
+-
+-/*
+- * Looks up the given minor-ID and returns the respective DRM-minor object. The
+- * refence-count of the underlying device is increased so you must release this
+- * object with accel_minor_release().
+- *
+- * The object can be only a drm_minor that represents an accel device.
+- *
+- * As long as you hold this minor, it is guaranteed that the object and the
+- * minor->dev pointer will stay valid! However, the device may get unplugged and
+- * unregistered while you hold the minor.
+- */
+-static struct drm_minor *accel_minor_acquire(unsigned int minor_id)
+-{
+- struct drm_minor *minor;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&accel_minor_lock, flags);
+- minor = idr_find(&accel_minors_idr, minor_id);
+- if (minor)
+- drm_dev_get(minor->dev);
+- spin_unlock_irqrestore(&accel_minor_lock, flags);
+-
+- if (!minor) {
+- return ERR_PTR(-ENODEV);
+- } else if (drm_dev_is_unplugged(minor->dev)) {
+- drm_dev_put(minor->dev);
+- return ERR_PTR(-ENODEV);
+- }
+-
+- return minor;
+-}
+-
+-static void accel_minor_release(struct drm_minor *minor)
+-{
+- drm_dev_put(minor->dev);
+-}
+-
+ /**
+ * accel_open - open method for ACCEL file
+ * @inode: device inode
+@@ -227,7 +133,7 @@ int accel_open(struct inode *inode, struct file *filp)
+ struct drm_minor *minor;
+ int retcode;
+
+- minor = accel_minor_acquire(iminor(inode));
++ minor = drm_minor_acquire(&accel_minors_xa, iminor(inode));
+ if (IS_ERR(minor))
+ return PTR_ERR(minor);
+
+@@ -246,7 +152,7 @@ int accel_open(struct inode *inode, struct file *filp)
+
+ err_undo:
+ atomic_dec(&dev->open_count);
+- accel_minor_release(minor);
++ drm_minor_release(minor);
+ return retcode;
+ }
+ EXPORT_SYMBOL_GPL(accel_open);
+@@ -257,7 +163,7 @@ static int accel_stub_open(struct inode *inode, struct file *filp)
+ struct drm_minor *minor;
+ int err;
+
+- minor = accel_minor_acquire(iminor(inode));
++ minor = drm_minor_acquire(&accel_minors_xa, iminor(inode));
+ if (IS_ERR(minor))
+ return PTR_ERR(minor);
+
+@@ -274,7 +180,7 @@ static int accel_stub_open(struct inode *inode, struct file *filp)
+ err = 0;
+
+ out:
+- accel_minor_release(minor);
++ drm_minor_release(minor);
+
+ return err;
+ }
+@@ -290,15 +196,13 @@ void accel_core_exit(void)
+ unregister_chrdev(ACCEL_MAJOR, "accel");
+ debugfs_remove(accel_debugfs_root);
+ accel_sysfs_destroy();
+- idr_destroy(&accel_minors_idr);
++ WARN_ON(!xa_empty(&accel_minors_xa));
+ }
+
+ int __init accel_core_init(void)
+ {
+ int ret;
+
+- idr_init(&accel_minors_idr);
+-
+ ret = accel_sysfs_init();
+ if (ret < 0) {
+ DRM_ERROR("Cannot create ACCEL class: %d\n", ret);
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 1c7631f22c522b..5a95eb267676e9 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1208,7 +1208,7 @@ static int btintel_pcie_setup_hdev(struct btintel_pcie_data *data)
+ int err;
+ struct hci_dev *hdev;
+
+- hdev = hci_alloc_dev();
++ hdev = hci_alloc_dev_priv(sizeof(struct btintel_data));
+ if (!hdev)
+ return -ENOMEM;
+
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 259a917da75f35..073ca9caf52ac5 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -554,12 +554,15 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
+ }
+
+ if (value == prev)
+- return;
++ goto cpufreq_policy_put;
+
+ WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+ amd_pstate_update_perf(cpudata, min_perf, des_perf,
+ max_perf, fast_switch);
++
++cpufreq_policy_put:
++ cpufreq_cpu_put(policy);
+ }
+
+ static int amd_pstate_verify(struct cpufreq_policy_data *policy)
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 93543071a5008c..c734e6a1c4ce25 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -34,6 +34,7 @@
+ #include <linux/pseudo_fs.h>
+ #include <linux/slab.h>
+ #include <linux/srcu.h>
++#include <linux/xarray.h>
+
+ #include <drm/drm_accel.h>
+ #include <drm/drm_cache.h>
+@@ -54,8 +55,7 @@ MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl");
+ MODULE_DESCRIPTION("DRM shared core routines");
+ MODULE_LICENSE("GPL and additional rights");
+
+-static DEFINE_SPINLOCK(drm_minor_lock);
+-static struct idr drm_minors_idr;
++DEFINE_XARRAY_ALLOC(drm_minors_xa);
+
+ /*
+ * If the drm core fails to init for whatever reason,
+@@ -83,6 +83,18 @@ DEFINE_STATIC_SRCU(drm_unplug_srcu);
+ * registered and unregistered dynamically according to device-state.
+ */
+
++static struct xarray *drm_minor_get_xa(enum drm_minor_type type)
++{
++ if (type == DRM_MINOR_PRIMARY || type == DRM_MINOR_RENDER)
++ return &drm_minors_xa;
++#if IS_ENABLED(CONFIG_DRM_ACCEL)
++ else if (type == DRM_MINOR_ACCEL)
++ return &accel_minors_xa;
++#endif
++ else
++ return ERR_PTR(-EOPNOTSUPP);
++}
++
+ static struct drm_minor **drm_minor_get_slot(struct drm_device *dev,
+ enum drm_minor_type type)
+ {
+@@ -101,25 +113,31 @@ static struct drm_minor **drm_minor_get_slot(struct drm_device *dev,
+ static void drm_minor_alloc_release(struct drm_device *dev, void *data)
+ {
+ struct drm_minor *minor = data;
+- unsigned long flags;
+
+ WARN_ON(dev != minor->dev);
+
+ put_device(minor->kdev);
+
+- if (minor->type == DRM_MINOR_ACCEL) {
+- accel_minor_remove(minor->index);
+- } else {
+- spin_lock_irqsave(&drm_minor_lock, flags);
+- idr_remove(&drm_minors_idr, minor->index);
+- spin_unlock_irqrestore(&drm_minor_lock, flags);
+- }
++ xa_erase(drm_minor_get_xa(minor->type), minor->index);
+ }
+
++/*
++ * DRM used to support 64 devices, for backwards compatibility we need to maintain the
++ * minor allocation scheme where minors 0-63 are primary nodes, 64-127 are control nodes,
++ * and 128-191 are render nodes.
++ * After reaching the limit, we're allocating minors dynamically - first-come, first-serve.
++ * Accel nodes are using a distinct major, so the minors are allocated in continuous 0-MAX
++ * range.
++ */
++#define DRM_MINOR_LIMIT(t) ({ \
++ typeof(t) _t = (t); \
++ _t == DRM_MINOR_ACCEL ? XA_LIMIT(0, ACCEL_MAX_MINORS) : XA_LIMIT(64 * _t, 64 * _t + 63); \
++})
++#define DRM_EXTENDED_MINOR_LIMIT XA_LIMIT(192, (1 << MINORBITS) - 1)
++
+ static int drm_minor_alloc(struct drm_device *dev, enum drm_minor_type type)
+ {
+ struct drm_minor *minor;
+- unsigned long flags;
+ int r;
+
+ minor = drmm_kzalloc(dev, sizeof(*minor), GFP_KERNEL);
+@@ -129,25 +147,14 @@ static int drm_minor_alloc(struct drm_device *dev, enum drm_minor_type type)
+ minor->type = type;
+ minor->dev = dev;
+
+- idr_preload(GFP_KERNEL);
+- if (type == DRM_MINOR_ACCEL) {
+- r = accel_minor_alloc();
+- } else {
+- spin_lock_irqsave(&drm_minor_lock, flags);
+- r = idr_alloc(&drm_minors_idr,
+- NULL,
+- 64 * type,
+- 64 * (type + 1),
+- GFP_NOWAIT);
+- spin_unlock_irqrestore(&drm_minor_lock, flags);
+- }
+- idr_preload_end();
+-
++ r = xa_alloc(drm_minor_get_xa(type), &minor->index,
++ NULL, DRM_MINOR_LIMIT(type), GFP_KERNEL);
++ if (r == -EBUSY && (type == DRM_MINOR_PRIMARY || type == DRM_MINOR_RENDER))
++ r = xa_alloc(&drm_minors_xa, &minor->index,
++ NULL, DRM_EXTENDED_MINOR_LIMIT, GFP_KERNEL);
+ if (r < 0)
+ return r;
+
+- minor->index = r;
+-
+ r = drmm_add_action_or_reset(dev, drm_minor_alloc_release, minor);
+ if (r)
+ return r;
+@@ -163,7 +170,7 @@ static int drm_minor_alloc(struct drm_device *dev, enum drm_minor_type type)
+ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
+ {
+ struct drm_minor *minor;
+- unsigned long flags;
++ void *entry;
+ int ret;
+
+ DRM_DEBUG("\n");
+@@ -186,13 +193,12 @@ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
+ goto err_debugfs;
+
+ /* replace NULL with @minor so lookups will succeed from now on */
+- if (minor->type == DRM_MINOR_ACCEL) {
+- accel_minor_replace(minor, minor->index);
+- } else {
+- spin_lock_irqsave(&drm_minor_lock, flags);
+- idr_replace(&drm_minors_idr, minor, minor->index);
+- spin_unlock_irqrestore(&drm_minor_lock, flags);
++ entry = xa_store(drm_minor_get_xa(type), minor->index, minor, GFP_KERNEL);
++ if (xa_is_err(entry)) {
++ ret = xa_err(entry);
++ goto err_debugfs;
+ }
++ WARN_ON(entry);
+
+ DRM_DEBUG("new minor registered %d\n", minor->index);
+ return 0;
+@@ -205,20 +211,13 @@ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
+ static void drm_minor_unregister(struct drm_device *dev, enum drm_minor_type type)
+ {
+ struct drm_minor *minor;
+- unsigned long flags;
+
+ minor = *drm_minor_get_slot(dev, type);
+ if (!minor || !device_is_registered(minor->kdev))
+ return;
+
+ /* replace @minor with NULL so lookups will fail from now on */
+- if (minor->type == DRM_MINOR_ACCEL) {
+- accel_minor_replace(NULL, minor->index);
+- } else {
+- spin_lock_irqsave(&drm_minor_lock, flags);
+- idr_replace(&drm_minors_idr, NULL, minor->index);
+- spin_unlock_irqrestore(&drm_minor_lock, flags);
+- }
++ xa_store(drm_minor_get_xa(type), minor->index, NULL, GFP_KERNEL);
+
+ device_del(minor->kdev);
+ dev_set_drvdata(minor->kdev, NULL); /* safety belt */
+@@ -234,16 +233,15 @@ static void drm_minor_unregister(struct drm_device *dev, enum drm_minor_type typ
+ * minor->dev pointer will stay valid! However, the device may get unplugged and
+ * unregistered while you hold the minor.
+ */
+-struct drm_minor *drm_minor_acquire(unsigned int minor_id)
++struct drm_minor *drm_minor_acquire(struct xarray *minor_xa, unsigned int minor_id)
+ {
+ struct drm_minor *minor;
+- unsigned long flags;
+
+- spin_lock_irqsave(&drm_minor_lock, flags);
+- minor = idr_find(&drm_minors_idr, minor_id);
++ xa_lock(minor_xa);
++ minor = xa_load(minor_xa, minor_id);
+ if (minor)
+ drm_dev_get(minor->dev);
+- spin_unlock_irqrestore(&drm_minor_lock, flags);
++ xa_unlock(minor_xa);
+
+ if (!minor) {
+ return ERR_PTR(-ENODEV);
+@@ -1036,7 +1034,7 @@ static int drm_stub_open(struct inode *inode, struct file *filp)
+
+ DRM_DEBUG("\n");
+
+- minor = drm_minor_acquire(iminor(inode));
++ minor = drm_minor_acquire(&drm_minors_xa, iminor(inode));
+ if (IS_ERR(minor))
+ return PTR_ERR(minor);
+
+@@ -1071,7 +1069,7 @@ static void drm_core_exit(void)
+ unregister_chrdev(DRM_MAJOR, "drm");
+ debugfs_remove(drm_debugfs_root);
+ drm_sysfs_destroy();
+- idr_destroy(&drm_minors_idr);
++ WARN_ON(!xa_empty(&drm_minors_xa));
+ drm_connector_ida_destroy();
+ }
+
+@@ -1080,7 +1078,6 @@ static int __init drm_core_init(void)
+ int ret;
+
+ drm_connector_ida_init();
+- idr_init(&drm_minors_idr);
+ drm_memcpy_init_early();
+
+ ret = drm_sysfs_init();
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 714e42b0510800..f917b259b33421 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -364,7 +364,7 @@ int drm_open(struct inode *inode, struct file *filp)
+ struct drm_minor *minor;
+ int retcode;
+
+- minor = drm_minor_acquire(iminor(inode));
++ minor = drm_minor_acquire(&drm_minors_xa, iminor(inode));
+ if (IS_ERR(minor))
+ return PTR_ERR(minor);
+
+diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
+index 690505a1f7a5db..12acf44c4e2400 100644
+--- a/drivers/gpu/drm/drm_internal.h
++++ b/drivers/gpu/drm/drm_internal.h
+@@ -81,10 +81,6 @@ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
+ void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv,
+ uint32_t handle);
+
+-/* drm_drv.c */
+-struct drm_minor *drm_minor_acquire(unsigned int minor_id);
+-void drm_minor_release(struct drm_minor *minor);
+-
+ /* drm_managed.c */
+ void drm_managed_release(struct drm_device *dev);
+ void drmm_add_final_kfree(struct drm_device *dev, void *container);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index da57947130cc7a..e01b1332d245a7 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -90,6 +90,11 @@ enum nvme_quirks {
+ */
+ NVME_QUIRK_NO_DEEPEST_PS = (1 << 5),
+
++ /*
++ * Problems seen with concurrent commands
++ */
++ NVME_QUIRK_QDEPTH_ONE = (1 << 6),
++
+ /*
+ * Set MEDIUM priority on SQ creation
+ */
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index c0533f3f64cbad..7990c3f22ecf66 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2563,15 +2563,8 @@ static int nvme_pci_enable(struct nvme_dev *dev)
+ else
+ dev->io_sqes = NVME_NVM_IOSQES;
+
+- /*
+- * Temporary fix for the Apple controller found in the MacBook8,1 and
+- * some MacBook7,1 to avoid controller resets and data loss.
+- */
+- if (pdev->vendor == PCI_VENDOR_ID_APPLE && pdev->device == 0x2001) {
++ if (dev->ctrl.quirks & NVME_QUIRK_QDEPTH_ONE) {
+ dev->q_depth = 2;
+- dev_warn(dev->ctrl.device, "detected Apple NVMe controller, "
+- "set queue depth=%u to work around controller resets\n",
+- dev->q_depth);
+ } else if (pdev->vendor == PCI_VENDOR_ID_SAMSUNG &&
+ (pdev->device == 0xa821 || pdev->device == 0xa822) &&
+ NVME_CAP_MQES(dev->ctrl.cap) == 0) {
+@@ -3442,6 +3435,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ NVME_QUIRK_BOGUS_NID, },
+ { PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
++ { PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
++ .driver_data = NVME_QUIRK_QDEPTH_ONE },
+ { PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ NVME_QUIRK_BOGUS_NID, },
+@@ -3576,7 +3571,12 @@ static const struct pci_device_id nvme_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0xcd02),
+ .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001),
+- .driver_data = NVME_QUIRK_SINGLE_VECTOR },
++ /*
++ * Fix for the Apple controller found in the MacBook8,1 and
++ * some MacBook7,1 to avoid controller resets and data loss.
++ */
++ .driver_data = NVME_QUIRK_SINGLE_VECTOR |
++ NVME_QUIRK_QDEPTH_ONE },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2005),
+ .driver_data = NVME_QUIRK_SINGLE_VECTOR |
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 3cffa6c7953880..7c0cea2c828d99 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -1285,6 +1285,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
+
+ X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
+ X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
++ X86_MATCH_VENDOR_FAM(AMD, 0x1A, &rapl_defaults_amd),
+ X86_MATCH_VENDOR_FAM(HYGON, 0x18, &rapl_defaults_amd),
+ {}
+ };
+@@ -2128,6 +2129,21 @@ void rapl_remove_package(struct rapl_package *rp)
+ }
+ EXPORT_SYMBOL_GPL(rapl_remove_package);
+
++/*
++ * RAPL Package energy counter scope:
++ * 1. AMD/HYGON platforms use per-PKG package energy counter
++ * 2. For Intel platforms
++ * 2.1 CLX-AP platform has per-DIE package energy counter
++ * 2.2 Other platforms that use MSR RAPL are single die systems so the
++ * package energy counter can be considered as per-PKG/per-DIE,
++ * here it is considered as per-DIE.
++ * 2.3 New platforms that use TPMI RAPL don't care about the
++ * scope because they are not MSR/CPU based.
++ */
++#define rapl_msrs_are_pkg_scope() \
++ (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || \
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
++
+ /* caller to ensure CPU hotplug lock is held */
+ struct rapl_package *rapl_find_package_domain_cpuslocked(int id, struct rapl_if_priv *priv,
+ bool id_is_cpu)
+@@ -2135,8 +2151,14 @@ struct rapl_package *rapl_find_package_domain_cpuslocked(int id, struct rapl_if_
+ struct rapl_package *rp;
+ int uid;
+
+- if (id_is_cpu)
+- uid = topology_logical_die_id(id);
++ if (id_is_cpu) {
++ uid = rapl_msrs_are_pkg_scope() ?
++ topology_physical_package_id(id) : topology_logical_die_id(id);
++ if (uid < 0) {
++ pr_err("topology_logical_(package/die)_id() returned a negative value");
++ return NULL;
++ }
++ }
+ else
+ uid = id;
+
+@@ -2168,9 +2190,14 @@ struct rapl_package *rapl_add_package_cpuslocked(int id, struct rapl_if_priv *pr
+ return ERR_PTR(-ENOMEM);
+
+ if (id_is_cpu) {
+- rp->id = topology_logical_die_id(id);
++ rp->id = rapl_msrs_are_pkg_scope() ?
++ topology_physical_package_id(id) : topology_logical_die_id(id);
++ if ((int)(rp->id) < 0) {
++ pr_err("topology_logical_(package/die)_id() returned a negative value");
++ return ERR_PTR(-EINVAL);
++ }
+ rp->lead_cpu = id;
+- if (topology_max_dies_per_package() > 1)
++ if (!rapl_msrs_are_pkg_scope() && topology_max_dies_per_package() > 1)
+ snprintf(rp->name, PACKAGE_DOMAIN_NAME_LENGTH, "package-%d-die-%d",
+ topology_physical_package_id(id), topology_die_id(id));
+ else
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 6bd9fe56538589..34e46ef308abfd 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -754,7 +754,7 @@ static struct urb *usbtmc_create_urb(void)
+ if (!urb)
+ return NULL;
+
+- dmabuf = kmalloc(bufsize, GFP_KERNEL);
++ dmabuf = kzalloc(bufsize, GFP_KERNEL);
+ if (!dmabuf) {
+ usb_free_urb(urb);
+ return NULL;
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index d93f5d58455782..8e327fcb222f73 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -118,6 +118,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+ { USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
+ { USB_DEVICE(IBM_VENDOR_ID, IBM_PRODUCT_ID) },
++ { USB_DEVICE(MACROSILICON_VENDOR_ID, MACROSILICON_MS3020_PRODUCT_ID) },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 732f9b13ad5d59..d60eda7f6edaf8 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -171,3 +171,7 @@
+ /* Allied Telesis VT-Kit3 */
+ #define AT_VENDOR_ID 0x0caa
+ #define AT_VTKIT3_PRODUCT_ID 0x3001
++
++/* Macrosilicon MS3020 */
++#define MACROSILICON_VENDOR_ID 0x345f
++#define MACROSILICON_MS3020_PRODUCT_ID 0x3020
+diff --git a/include/drm/drm_accel.h b/include/drm/drm_accel.h
+index f4d3784b1dce05..8867ce0be94cdd 100644
+--- a/include/drm/drm_accel.h
++++ b/include/drm/drm_accel.h
+@@ -51,11 +51,10 @@
+
+ #if IS_ENABLED(CONFIG_DRM_ACCEL)
+
++extern struct xarray accel_minors_xa;
++
+ void accel_core_exit(void);
+ int accel_core_init(void);
+-void accel_minor_remove(int index);
+-int accel_minor_alloc(void);
+-void accel_minor_replace(struct drm_minor *minor, int index);
+ void accel_set_device_instance_params(struct device *kdev, int index);
+ int accel_open(struct inode *inode, struct file *filp);
+ void accel_debugfs_init(struct drm_device *dev);
+@@ -73,19 +72,6 @@ static inline int __init accel_core_init(void)
+ return 0;
+ }
+
+-static inline void accel_minor_remove(int index)
+-{
+-}
+-
+-static inline int accel_minor_alloc(void)
+-{
+- return -EOPNOTSUPP;
+-}
+-
+-static inline void accel_minor_replace(struct drm_minor *minor, int index)
+-{
+-}
+-
+ static inline void accel_set_device_instance_params(struct device *kdev, int index)
+ {
+ }
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index ab230d3af138db..8c0030c7730816 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -45,6 +45,8 @@ struct drm_printer;
+ struct device;
+ struct file;
+
++extern struct xarray drm_minors_xa;
++
+ /*
+ * FIXME: Not sure we want to have drm_minor here in the end, but to avoid
+ * header include loops we need it here for now.
+@@ -434,6 +436,9 @@ static inline bool drm_is_accel_client(const struct drm_file *file_priv)
+
+ void drm_file_update_pid(struct drm_file *);
+
++struct drm_minor *drm_minor_acquire(struct xarray *minors_xa, unsigned int minor_id);
++void drm_minor_release(struct drm_minor *minor);
++
+ int drm_open(struct inode *inode, struct file *filp);
+ int drm_open_helper(struct file *filp, struct drm_minor *minor);
+ ssize_t drm_read(struct file *filp, char __user *buffer,
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index 12cdff640492d4..0a8883a93e8369 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -61,8 +61,8 @@ static noinline int nft_socket_cgroup_subtree_level(void)
+ struct cgroup *cgrp = cgroup_get_from_path("/");
+ int level;
+
+- if (!cgrp)
+- return -ENOENT;
++ if (IS_ERR(cgrp))
++ return PTR_ERR(cgrp);
+
+ level = cgrp->level;
+
+diff --git a/sound/soc/amd/acp/acp-legacy-common.c b/sound/soc/amd/acp/acp-legacy-common.c
+index 4422cec81e3c4d..04bd605fdce3d9 100644
+--- a/sound/soc/amd/acp/acp-legacy-common.c
++++ b/sound/soc/amd/acp/acp-legacy-common.c
+@@ -321,6 +321,8 @@ int acp_init(struct acp_chip_info *chip)
+ pr_err("ACP reset failed\n");
+ return ret;
+ }
++ if (chip->acp_rev >= ACP70_DEV)
++ writel(0, chip->base + ACP_ZSC_DSP_CTRL);
+ return 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(acp_init, SND_SOC_ACP_COMMON);
+@@ -336,6 +338,9 @@ int acp_deinit(struct acp_chip_info *chip)
+
+ if (chip->acp_rev != ACP70_DEV)
+ writel(0, chip->base + ACP_CONTROL);
++
++ if (chip->acp_rev >= ACP70_DEV)
++ writel(0x01, chip->base + ACP_ZSC_DSP_CTRL);
+ return 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(acp_deinit, SND_SOC_ACP_COMMON);
+diff --git a/sound/soc/amd/acp/amd.h b/sound/soc/amd/acp/amd.h
+index 87a4813783f911..c095a34a7229a6 100644
+--- a/sound/soc/amd/acp/amd.h
++++ b/sound/soc/amd/acp/amd.h
+@@ -103,6 +103,8 @@
+ #define ACP70_PGFSM_CONTROL ACP6X_PGFSM_CONTROL
+ #define ACP70_PGFSM_STATUS ACP6X_PGFSM_STATUS
+
++#define ACP_ZSC_DSP_CTRL 0x0001014
++#define ACP_ZSC_STS 0x0001018
+ #define ACP_SOFT_RST_DONE_MASK 0x00010001
+
+ #define ACP_PGFSM_CNTL_POWER_ON_MASK 0xffffffff
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-04 15:21 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-04 15:21 UTC (permalink / raw
To: gentoo-commits
commit: 1784ff81b100c3a570a4b8c3651e6b841dbd5778
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 4 15:21:32 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 4 15:21:32 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1784ff81
Linux patch 6.11.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-6.11.2.patch | 29073 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 29077 insertions(+)
diff --git a/0000_README b/0000_README
index d99e35a4..5383d35f 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-6.11.1.patch
From: https://www.kernel.org
Desc: Linux 6.11.1
+Patch: 1001_linux-6.11.2.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.2
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1001_linux-6.11.2.patch b/1001_linux-6.11.2.patch
new file mode 100644
index 00000000..93dc4030
--- /dev/null
+++ b/1001_linux-6.11.2.patch
@@ -0,0 +1,29073 @@
+diff --git a/.gitignore b/.gitignore
+index 7902adf4f7f1a5..58fdbb35e2f136 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -142,7 +142,6 @@ GTAGS
+ # id-utils files
+ ID
+
+-*.orig
+ *~
+ \#*#
+
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818 b/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818
+index 31dbb390573ff2..c431f0a13cf502 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818
++++ b/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818
+@@ -3,7 +3,7 @@ KernelVersion:
+ Contact: linux-iio@vger.kernel.org
+ Description:
+ Reading this returns the valid values that can be written to the
+- on_altvoltage0_mode attribute:
++ filter_mode attribute:
+
+ - auto -> Adjust bandpass filter to track changes in input clock rate.
+ - manual -> disable/unregister the clock rate notifier / input clock tracking.
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 50327c05be8d1b..39c52385f11fb3 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -55,6 +55,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Ampere | AmpereOne | AC03_CPU_38 | AMPERE_ERRATUM_AC03_CPU_38 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Ampere | AmpereOne AC04 | AC04_CPU_10 | AMPERE_ERRATUM_AC03_CPU_38 |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A510 | #2457168 | ARM64_ERRATUM_2457168 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml b/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml
+index 9790f75fc669ef..fe5145d3b73cf2 100644
+--- a/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml
++++ b/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml
+@@ -23,7 +23,6 @@ properties:
+ - ak8963
+ - ak09911
+ - ak09912
+- - ak09916
+ deprecated: true
+
+ reg:
+diff --git a/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml b/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
+index 793986c5af7ff3..daeab5c0758d14 100644
+--- a/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
+@@ -22,18 +22,20 @@ description:
+
+ properties:
+ compatible:
+- enum:
+- - fsl,ls1021a-pcie
+- - fsl,ls2080a-pcie
+- - fsl,ls2085a-pcie
+- - fsl,ls2088a-pcie
+- - fsl,ls1088a-pcie
+- - fsl,ls1046a-pcie
+- - fsl,ls1043a-pcie
+- - fsl,ls1012a-pcie
+- - fsl,ls1028a-pcie
+- - fsl,lx2160a-pcie
+-
++ oneOf:
++ - enum:
++ - fsl,ls1012a-pcie
++ - fsl,ls1021a-pcie
++ - fsl,ls1028a-pcie
++ - fsl,ls1043a-pcie
++ - fsl,ls1046a-pcie
++ - fsl,ls1088a-pcie
++ - fsl,ls2080a-pcie
++ - fsl,ls2085a-pcie
++ - fsl,ls2088a-pcie
++ - items:
++ - const: fsl,lx2160ar2-pcie
++ - const: fsl,ls2088a-pcie
+ reg:
+ maxItems: 2
+
+diff --git a/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml b/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
+index 4a5f41bde00f3c..902db92da83207 100644
+--- a/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
++++ b/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
+@@ -21,6 +21,7 @@ properties:
+ - nxp,imx8mm-fspi
+ - nxp,imx8mp-fspi
+ - nxp,imx8qxp-fspi
++ - nxp,imx8ulp-fspi
+ - nxp,lx2160a-fspi
+ - items:
+ - enum:
+diff --git a/Documentation/driver-api/ipmi.rst b/Documentation/driver-api/ipmi.rst
+index e224e47b6b0944..dfa021eacd63c4 100644
+--- a/Documentation/driver-api/ipmi.rst
++++ b/Documentation/driver-api/ipmi.rst
+@@ -540,7 +540,7 @@ at module load time (for a module) with::
+ alerts_broken
+
+ The addresses are normal I2C addresses. The adapter is the string
+-name of the adapter, as shown in /sys/class/i2c-adapter/i2c-<n>/name.
++name of the adapter, as shown in /sys/bus/i2c/devices/i2c-<n>/name.
+ It is *NOT* i2c-<n> itself. Also, the comparison is done ignoring
+ spaces, so if the name is "This is an I2C chip" you can say
+ adapter_name=ThisisanI2cchip. This is because it's hard to pass in
+diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
+index 02880d5552d5fa..198a5a8f26c2da 100644
+--- a/Documentation/virt/kvm/locking.rst
++++ b/Documentation/virt/kvm/locking.rst
+@@ -9,7 +9,7 @@ KVM Lock Overview
+
+ The acquisition orders for mutexes are as follows:
+
+-- cpus_read_lock() is taken outside kvm_lock
++- cpus_read_lock() is taken outside kvm_lock and kvm_usage_lock
+
+ - kvm->lock is taken outside vcpu->mutex
+
+@@ -24,6 +24,13 @@ The acquisition orders for mutexes are as follows:
+ are taken on the waiting side when modifying memslots, so MMU notifiers
+ must not take either kvm->slots_lock or kvm->slots_arch_lock.
+
++cpus_read_lock() vs kvm_lock:
++
++- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
++ being the official ordering, as it is quite easy to unknowingly trigger
++ cpus_read_lock() while holding kvm_lock. Use caution when walking vm_list,
++ e.g. avoid complex operations when possible.
++
+ For SRCU:
+
+ - ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
+@@ -227,10 +234,17 @@ time it will be set using the Dirty tracking mechanism described above.
+ :Type: mutex
+ :Arch: any
+ :Protects: - vm_list
+- - kvm_usage_count
++
++``kvm_usage_lock``
++^^^^^^^^^^^^^^^^^^
++
++:Type: mutex
++:Arch: any
++:Protects: - kvm_usage_count
+ - hardware virtualization enable/disable
+-:Comment: KVM also disables CPU hotplug via cpus_read_lock() during
+- enable/disable.
++:Comment: Exists because using kvm_lock leads to deadlock (see earlier comment
++ on cpus_read_lock() vs kvm_lock). Note, KVM also disables CPU hotplug via
++ cpus_read_lock() when enabling/disabling virtualization.
+
+ ``kvm->mn_invalidate_lock``
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+@@ -290,11 +304,12 @@ time it will be set using the Dirty tracking mechanism described above.
+ wakeup.
+
+ ``vendor_module_lock``
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++^^^^^^^^^^^^^^^^^^^^^^
+ :Type: mutex
+ :Arch: x86
+ :Protects: loading a vendor module (kvm_amd or kvm_intel)
+-:Comment: Exists because using kvm_lock leads to deadlock. cpu_hotplug_lock is
+- taken outside of kvm_lock, e.g. in KVM's CPU online/offline callbacks, and
+- many operations need to take cpu_hotplug_lock when loading a vendor module,
+- e.g. updating static calls.
++:Comment: Exists because using kvm_lock leads to deadlock. kvm_lock is taken
++ in notifiers, e.g. __kvmclock_cpufreq_notifier(), that may be invoked while
++ cpu_hotplug_lock is held, e.g. from cpufreq_boost_trigger_state(), and many
++ operations need to take cpu_hotplug_lock when loading a vendor module, e.g.
++ updating static calls.
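+
+A minimal sketch of the acquisition order this file now documents (the
+function is hypothetical; only the lock names and their ordering come from
+the text above):
+
+	static void example_enable_virtualization(void)
+	{
+		cpus_read_lock();		/* taken outside both mutexes */
+		mutex_lock(&kvm_usage_lock);	/* protects kvm_usage_count */
+		/* ... enable hardware virtualization on online CPUs ... */
+		mutex_unlock(&kvm_usage_lock);
+		cpus_read_unlock();
+	}
+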
+diff --git a/Makefile b/Makefile
+index df9c90cdd1c575..7bcf0c32ea5eab 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+index 291540e5d81e76..d077afd5024db1 100644
+--- a/arch/arm/boot/dts/microchip/sam9x60.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+@@ -1312,7 +1312,7 @@ rtt: rtc@fffffe20 {
+ compatible = "microchip,sam9x60-rtt", "atmel,at91sam9260-rtt";
+ reg = <0xfffffe20 0x20>;
+ interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
+- clocks = <&clk32k 0>;
++ clocks = <&clk32k 1>;
+ };
+
+ pit: timer@fffffe40 {
+@@ -1338,7 +1338,7 @@ rtc: rtc@fffffea8 {
+ compatible = "microchip,sam9x60-rtc", "atmel,at91sam9x5-rtc";
+ reg = <0xfffffea8 0x100>;
+ interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
+- clocks = <&clk32k 0>;
++ clocks = <&clk32k 1>;
+ };
+
+ watchdog: watchdog@ffffff80 {
+diff --git a/arch/arm/boot/dts/microchip/sama7g5.dtsi b/arch/arm/boot/dts/microchip/sama7g5.dtsi
+index 75778be126a3d9..17bcdcf0cf4a05 100644
+--- a/arch/arm/boot/dts/microchip/sama7g5.dtsi
++++ b/arch/arm/boot/dts/microchip/sama7g5.dtsi
+@@ -272,7 +272,7 @@ rtt: rtc@e001d020 {
+ compatible = "microchip,sama7g5-rtt", "microchip,sam9x60-rtt", "atmel,at91sam9260-rtt";
+ reg = <0xe001d020 0x30>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk32k 0>;
++ clocks = <&clk32k 1>;
+ };
+
+ clk32k: clock-controller@e001d050 {
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts b/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
+index cdbb8c435cd6aa..601d89b904cdfb 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
+@@ -365,7 +365,7 @@ MX6UL_PAD_ENET1_RX_ER__PWM8_OUT 0x110b0
+ };
+
+ pinctrl_tsc: tscgrp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO01__GPIO1_IO01 0xb0
+ MX6UL_PAD_GPIO1_IO02__GPIO1_IO02 0xb0
+ MX6UL_PAD_GPIO1_IO03__GPIO1_IO03 0xb0
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi
+index 6bb12e0bbc7ec6..50654dbf62e02c 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi
+@@ -339,14 +339,14 @@ MX6UL_PAD_JTAG_TRST_B__SAI2_TX_DATA 0x120b0
+ };
+
+ pinctrl_uart1: uart1grp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_UART1_TX_DATA__UART1_DCE_TX 0x1b0b1
+ MX6UL_PAD_UART1_RX_DATA__UART1_DCE_RX 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart2: uart2grp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_UART2_TX_DATA__UART2_DCE_TX 0x1b0b1
+ MX6UL_PAD_UART2_RX_DATA__UART2_DCE_RX 0x1b0b1
+ MX6UL_PAD_UART2_CTS_B__UART2_DCE_CTS 0x1b0b1
+@@ -355,7 +355,7 @@ MX6UL_PAD_UART2_RTS_B__UART2_DCE_RTS 0x1b0b1
+ };
+
+ pinctrl_uart3: uart3grp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_UART3_TX_DATA__UART3_DCE_TX 0x1b0b1
+ MX6UL_PAD_UART3_RX_DATA__UART3_DCE_RX 0x1b0b1
+ MX6UL_PAD_UART3_CTS_B__UART3_DCE_CTS 0x1b0b1
+@@ -364,21 +364,21 @@ MX6UL_PAD_UART3_RTS_B__UART3_DCE_RTS 0x1b0b1
+ };
+
+ pinctrl_uart4: uart4grp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_UART4_TX_DATA__UART4_DCE_TX 0x1b0b1
+ MX6UL_PAD_UART4_RX_DATA__UART4_DCE_RX 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart5: uart5grp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_UART5_TX_DATA__UART5_DCE_TX 0x1b0b1
+ MX6UL_PAD_UART5_RX_DATA__UART5_DCE_RX 0x1b0b1
+ >;
+ };
+
+ pinctrl_usb_otg1_id: usbotg1idgrp {
+- fsl,pin = <
++ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO00__ANATOP_OTG1_ID 0x17059
+ >;
+ };
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts b/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts
+index 521493342fe972..8f5566027c25a2 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts
+@@ -350,7 +350,7 @@ MX7D_PAD_SD3_RESET_B__SD3_RESET_B 0x59
+
+ &iomuxc_lpsr {
+ pinctrl_enet1_phy_interrupt: enet1phyinterruptgrp {
+- fsl,phy = <
++ fsl,pins = <
+ MX7D_PAD_LPSR_GPIO1_IO02__GPIO1_IO2 0x08
+ >;
+ };
+diff --git a/arch/arm/mach-ep93xx/clock.c b/arch/arm/mach-ep93xx/clock.c
+index 85a496ddc6197e..e9f72a529b5089 100644
+--- a/arch/arm/mach-ep93xx/clock.c
++++ b/arch/arm/mach-ep93xx/clock.c
+@@ -359,7 +359,7 @@ static unsigned long ep93xx_div_recalc_rate(struct clk_hw *hw,
+ u32 val = __raw_readl(psc->reg);
+ u8 index = (val & psc->mask) >> psc->shift;
+
+- if (index > psc->num_div)
++ if (index >= psc->num_div)
+ return 0;
+
+ return DIV_ROUND_UP_ULL(parent_rate, psc->div[index]);
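+
+The change above is an off-by-one bound: psc->div has num_div entries, valid
+at indices 0 .. num_div - 1. A worked case, assuming num_div == 4:
+
+	index = 4;
+	if (index > psc->num_div)	/* old check: 4 > 4 is false ... */
+		return 0;
+	val = psc->div[index];		/* ... so div[4] is read out of bounds */
+
+	if (index >= psc->num_div)	/* new check rejects index == 4 */
+		return 0;
+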
+diff --git a/arch/arm/mach-versatile/platsmp-realview.c b/arch/arm/mach-versatile/platsmp-realview.c
+index 6965a1de727b07..d38b2e174257e8 100644
+--- a/arch/arm/mach-versatile/platsmp-realview.c
++++ b/arch/arm/mach-versatile/platsmp-realview.c
+@@ -70,6 +70,7 @@ static void __init realview_smp_prepare_cpus(unsigned int max_cpus)
+ return;
+ }
+ map = syscon_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(map)) {
+ pr_err("PLATSMP: No syscon regmap\n");
+ return;
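+
+The added of_node_put() follows the usual OF refcounting pattern: a node
+returned by of_find_*() carries a reference the caller owns and must drop
+once the node is no longer needed, even on success. Sketched (real API
+calls, surrounding code abbreviated):
+
+	np = of_find_matching_node(NULL, match);
+	if (!np)
+		return;
+	map = syscon_node_to_regmap(np);
+	of_node_put(np);	/* the regmap does not keep using the node */
+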
+diff --git a/arch/arm/vfp/vfpinstr.h b/arch/arm/vfp/vfpinstr.h
+index 3c7938fd40aad6..32090b0fb250b8 100644
+--- a/arch/arm/vfp/vfpinstr.h
++++ b/arch/arm/vfp/vfpinstr.h
+@@ -64,33 +64,37 @@
+
+ #ifdef CONFIG_AS_VFP_VMRS_FPINST
+
+-#define fmrx(_vfp_) ({ \
+- u32 __v; \
+- asm(".fpu vfpv2\n" \
+- "vmrs %0, " #_vfp_ \
+- : "=r" (__v) : : "cc"); \
+- __v; \
+- })
+-
+-#define fmxr(_vfp_,_var_) \
+- asm(".fpu vfpv2\n" \
+- "vmsr " #_vfp_ ", %0" \
+- : : "r" (_var_) : "cc")
++#define fmrx(_vfp_) ({ \
++ u32 __v; \
++ asm volatile (".fpu vfpv2\n" \
++ "vmrs %0, " #_vfp_ \
++ : "=r" (__v) : : "cc"); \
++ __v; \
++})
++
++#define fmxr(_vfp_, _var_) ({ \
++ asm volatile (".fpu vfpv2\n" \
++ "vmsr " #_vfp_ ", %0" \
++ : : "r" (_var_) : "cc"); \
++})
+
+ #else
+
+ #define vfpreg(_vfp_) #_vfp_
+
+-#define fmrx(_vfp_) ({ \
+- u32 __v; \
+- asm("mrc p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmrx %0, " #_vfp_ \
+- : "=r" (__v) : : "cc"); \
+- __v; \
+- })
+-
+-#define fmxr(_vfp_,_var_) \
+- asm("mcr p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmxr " #_vfp_ ", %0" \
+- : : "r" (_var_) : "cc")
++#define fmrx(_vfp_) ({ \
++ u32 __v; \
++ asm volatile ("mrc p10, 7, %0, " vfpreg(_vfp_) "," \
++ "cr0, 0 @ fmrx %0, " #_vfp_ \
++ : "=r" (__v) : : "cc"); \
++ __v; \
++})
++
++#define fmxr(_vfp_, _var_) ({ \
++ asm volatile ("mcr p10, 7, %0, " vfpreg(_vfp_) "," \
++ "cr0, 0 @ fmxr " #_vfp_ ", %0" \
++ : : "r" (_var_) : "cc"); \
++})
+
+ #endif
+
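+The functional change above is "asm volatile": without volatile, GCC treats
+an asm with outputs as pure and may delete or reorder it if the result looks
+unused. An illustrative (not from this patch) use of the same instruction:
+
+	void example(void)
+	{
+		u32 v;
+
+		/* non-volatile: dead output, the VMRS may be optimized away */
+		asm(".fpu vfpv2\n\tvmrs %0, fpscr" : "=r" (v));
+
+		/* volatile: emitted and kept in order even if v is unused */
+		asm volatile(".fpu vfpv2\n\tvmrs %0, fpscr" : "=r" (v));
+	}
+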
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index a2f8ff354ca670..c8cba20a4d11b2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -423,7 +423,7 @@ config AMPERE_ERRATUM_AC03_CPU_38
+ default y
+ help
+ This option adds an alternative code sequence to work around Ampere
+- erratum AC03_CPU_38 on AmpereOne.
++ errata AC03_CPU_38 and AC04_CPU_10 on AmpereOne.
+
+ The affected design reports FEAT_HAFDBS as not implemented in
+ ID_AA64MMFR1_EL1.HAFDBS, but (V)TCR_ELx.{HA,HD} are not RES0
+diff --git a/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts b/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts
+index 47a389d9ff7d71..9d74fa6bfed9fb 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts
+@@ -32,7 +32,7 @@ memory@80000000 {
+ device_type = "memory";
+ reg = <0x0 0x80000000 0x3da00000>,
+ <0x0 0xc0000000 0x40000000>,
+- <0x8 0x80000000 0x40000000>;
++ <0x8 0x80000000 0x80000000>;
+ };
+
+ gpio-keys {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index afdab5724eaaac..eccbd7b597bb11 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -353,7 +353,8 @@ &dpi {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&dpi_pins_default>;
+ pinctrl-1 = <&dpi_pins_sleep>;
+- status = "okay";
++ /* TODO Re-enable after DP to Type-C port muxing can be described */
++ status = "disabled";
+ };
+
+ &dpi_out {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index 4763ed5dc86cfb..d63a9defe73e17 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -731,7 +731,7 @@ opp-850000000 {
+ opp-900000000-3 {
+ opp-hz = /bits/ 64 <900000000>;
+ opp-microvolt = <850000>;
+- opp-supported-hw = <0x8>;
++ opp-supported-hw = <0xcf>;
+ };
+
+ opp-900000000-4 {
+@@ -743,13 +743,13 @@ opp-900000000-4 {
+ opp-900000000-5 {
+ opp-hz = /bits/ 64 <900000000>;
+ opp-microvolt = <825000>;
+- opp-supported-hw = <0x30>;
++ opp-supported-hw = <0x20>;
+ };
+
+ opp-950000000-3 {
+ opp-hz = /bits/ 64 <950000000>;
+ opp-microvolt = <900000>;
+- opp-supported-hw = <0x8>;
++ opp-supported-hw = <0xcf>;
+ };
+
+ opp-950000000-4 {
+@@ -761,13 +761,13 @@ opp-950000000-4 {
+ opp-950000000-5 {
+ opp-hz = /bits/ 64 <950000000>;
+ opp-microvolt = <850000>;
+- opp-supported-hw = <0x30>;
++ opp-supported-hw = <0x20>;
+ };
+
+ opp-1000000000-3 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <950000>;
+- opp-supported-hw = <0x8>;
++ opp-supported-hw = <0xcf>;
+ };
+
+ opp-1000000000-4 {
+@@ -779,7 +779,7 @@ opp-1000000000-4 {
+ opp-1000000000-5 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <875000>;
+- opp-supported-hw = <0x30>;
++ opp-supported-hw = <0x20>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index fe5400e17b0f43..d3a52acbe48a14 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -1404,6 +1404,7 @@ &xhci1 {
+ rx-fifo-depth = <3072>;
+ vusb33-supply = <&mt6359_vusb_ldo_reg>;
+ vbus-supply = <&usb_vbus>;
++ mediatek,u3p-dis-msk = <1>;
+ };
+
+ &xhci2 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index 2ee45752583c00..98c15eb68589a5 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -3251,10 +3251,10 @@ dp_intf0: dp-intf@1c015000 {
+ compatible = "mediatek,mt8195-dp-intf";
+ reg = <0 0x1c015000 0 0x1000>;
+ interrupts = <GIC_SPI 657 IRQ_TYPE_LEVEL_HIGH 0>;
+- clocks = <&vdosys0 CLK_VDO0_DP_INTF0>,
+- <&vdosys0 CLK_VDO0_DP_INTF0_DP_INTF>,
++ clocks = <&vdosys0 CLK_VDO0_DP_INTF0_DP_INTF>,
++ <&vdosys0 CLK_VDO0_DP_INTF0>,
+ <&apmixedsys CLK_APMIXED_TVDPLL1>;
+- clock-names = "engine", "pixel", "pll";
++ clock-names = "pixel", "engine", "pll";
+ status = "disabled";
+ };
+
+@@ -3521,10 +3521,10 @@ dp_intf1: dp-intf@1c113000 {
+ reg = <0 0x1c113000 0 0x1000>;
+ interrupts = <GIC_SPI 513 IRQ_TYPE_LEVEL_HIGH 0>;
+ power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+- clocks = <&vdosys1 CLK_VDO1_DP_INTF0_MM>,
+- <&vdosys1 CLK_VDO1_DPINTF>,
++ clocks = <&vdosys1 CLK_VDO1_DPINTF>,
++ <&vdosys1 CLK_VDO1_DP_INTF0_MM>,
+ <&apmixedsys CLK_APMIXED_TVDPLL2>;
+- clock-names = "engine", "pixel", "pll";
++ clock-names = "pixel", "engine", "pll";
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+index 4b5f6cf16f7076..096fa999aa59ec 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+@@ -898,6 +898,7 @@ &xhci1 {
+ usb2-lpm-disable;
+ vusb33-supply = <&mt6359_vusb_ldo_reg>;
+ vbus-supply = <&vsys>;
++ mediatek,u3p-dis-msk = <1>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi
+index 553fa4ba1cd48a..62c4fdad0b600b 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi
+@@ -44,39 +44,6 @@ i2c@c240000 {
+ status = "okay";
+ };
+
+- i2c@c250000 {
+- power-sensor@41 {
+- compatible = "ti,ina3221";
+- reg = <0x41>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- input@0 {
+- reg = <0x0>;
+- label = "CVB_ATX_12V";
+- shunt-resistor-micro-ohms = <2000>;
+- };
+-
+- input@1 {
+- reg = <0x1>;
+- label = "CVB_ATX_3V3";
+- shunt-resistor-micro-ohms = <2000>;
+- };
+-
+- input@2 {
+- reg = <0x2>;
+- label = "CVB_ATX_5V";
+- shunt-resistor-micro-ohms = <2000>;
+- };
+- };
+-
+- power-sensor@44 {
+- compatible = "ti,ina219";
+- reg = <0x44>;
+- shunt-resistor = <2000>;
+- };
+- };
+-
+ rtc@c2a0000 {
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi
+index 527f2f3aee3ad4..377f518bd3e57b 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi
+@@ -183,6 +183,39 @@ usb@3610000 {
+ phy-names = "usb2-0", "usb2-1", "usb2-2", "usb2-3",
+ "usb3-0", "usb3-1", "usb3-2";
+ };
++
++ i2c@c250000 {
++ power-sensor@41 {
++ compatible = "ti,ina3221";
++ reg = <0x41>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ input@0 {
++ reg = <0x0>;
++ label = "CVB_ATX_12V";
++ shunt-resistor-micro-ohms = <2000>;
++ };
++
++ input@1 {
++ reg = <0x1>;
++ label = "CVB_ATX_3V3";
++ shunt-resistor-micro-ohms = <2000>;
++ };
++
++ input@2 {
++ reg = <0x2>;
++ label = "CVB_ATX_5V";
++ shunt-resistor-micro-ohms = <2000>;
++ };
++ };
++
++ power-sensor@44 {
++ compatible = "ti,ina219";
++ reg = <0x44>;
++ shunt-resistor = <2000>;
++ };
++ };
+ };
+
+ vdd_3v3_dp: regulator-vdd-3v3-dp {
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 23f1b2e5e62471..95691ab58a23e5 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -3070,6 +3070,7 @@ apps_smmu: iommu@15000000 {
+ reg = <0x0 0x15000000 0x0 0x100000>;
+ #iommu-cells = <2>;
+ #global-interrupts = <2>;
++ dma-coherent;
+
+ interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+@@ -3208,6 +3209,7 @@ pcie_smmu: iommu@15200000 {
+ reg = <0x0 0x15200000 0x0 0x80000>;
+ #iommu-cells = <2>;
+ #global-interrupts = <2>;
++ dma-coherent;
+
+ interrupts = <GIC_SPI 920 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 921 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index cd732ef88cd8e0..6174160b56c49f 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -4402,14 +4402,14 @@ mdss_dp2: displayport-controller@ae9a000 {
+
+ assigned-clocks = <&dispcc DISP_CC_MDSS_DPTX2_LINK_CLK_SRC>,
+ <&dispcc DISP_CC_MDSS_DPTX2_PIXEL0_CLK_SRC>;
+- assigned-clock-parents = <&mdss_dp2_phy 0>,
+- <&mdss_dp2_phy 1>;
++ assigned-clock-parents = <&usb_1_ss2_qmpphy QMP_USB43DP_DP_LINK_CLK>,
++ <&usb_1_ss2_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>;
+
+ operating-points-v2 = <&mdss_dp2_opp_table>;
+
+ power-domains = <&rpmhpd RPMHPD_MMCX>;
+
+- phys = <&mdss_dp2_phy>;
++ phys = <&usb_1_ss2_qmpphy QMP_USB43DP_DP_PHY>;
+ phy-names = "dp";
+
+ #sound-dai-cells = <0>;
+@@ -4597,8 +4597,8 @@ dispcc: clock-controller@af00000 {
+ <&usb_1_ss0_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
+ <&usb_1_ss1_qmpphy QMP_USB43DP_DP_LINK_CLK>, /* dp1 */
+ <&usb_1_ss1_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
+- <&mdss_dp2_phy 0>, /* dp2 */
+- <&mdss_dp2_phy 1>,
++ <&usb_1_ss2_qmpphy QMP_USB43DP_DP_LINK_CLK>, /* dp2 */
++ <&usb_1_ss2_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
+ <&mdss_dp3_phy 0>, /* dp3 */
+ <&mdss_dp3_phy 1>;
+ power-domains = <&rpmhpd RPMHPD_MMCX>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+index 18ef297db93363..20fb5e41c5988c 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+@@ -210,8 +210,8 @@ gic: interrupt-controller@11900000 {
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+- reg = <0x0 0x11900000 0 0x40000>,
+- <0x0 0x11940000 0 0x60000>;
++ reg = <0x0 0x11900000 0 0x20000>,
++ <0x0 0x11940000 0 0x40000>;
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+index d3838e5820fca1..c9b9b60a3a36e4 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+@@ -1043,8 +1043,8 @@ gic: interrupt-controller@11900000 {
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+- reg = <0x0 0x11900000 0 0x40000>,
+- <0x0 0x11940000 0 0x60000>;
++ reg = <0x0 0x11900000 0 0x20000>,
++ <0x0 0x11940000 0 0x40000>;
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+index 1de2e5f0917d91..8a9b61bd759a79 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+@@ -1051,8 +1051,8 @@ gic: interrupt-controller@11900000 {
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+- reg = <0x0 0x11900000 0 0x40000>,
+- <0x0 0x11940000 0 0x60000>;
++ reg = <0x0 0x11900000 0 0x20000>,
++ <0x0 0x11940000 0 0x40000>;
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+diff --git a/arch/arm64/boot/dts/renesas/r9a08g045.dtsi b/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
+index 0d5c47a65e46c5..34e29463a672d1 100644
+--- a/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
+@@ -269,8 +269,8 @@ gic: interrupt-controller@12400000 {
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+- reg = <0x0 0x12400000 0 0x40000>,
+- <0x0 0x12440000 0 0x60000>;
++ reg = <0x0 0x12400000 0 0x20000>,
++ <0x0 0x12440000 0 0x40000>;
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 294eb2de263deb..f5e124b235c83c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -32,12 +32,12 @@ chosen {
+ backlight: edp-backlight {
+ compatible = "pwm-backlight";
+ power-supply = <&vcc_12v>;
+- pwms = <&pwm0 0 740740 0>;
++ pwms = <&pwm0 0 125000 0>;
+ };
+
+ bat: battery {
+ compatible = "simple-battery";
+- charge-full-design-microamp-hours = <9800000>;
++ charge-full-design-microamp-hours = <10000000>;
+ voltage-max-design-microvolt = <4350000>;
+ voltage-min-design-microvolt = <3000000>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts b/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
+index a337f547caf538..6a02db4f073f29 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
+@@ -13,7 +13,7 @@
+
+ / {
+ model = "Hardkernel ODROID-M1";
+- compatible = "rockchip,rk3568-odroid-m1", "rockchip,rk3568";
++ compatible = "hardkernel,odroid-m1", "rockchip,rk3568";
+
+ aliases {
+ ethernet0 = &gmac0;
+diff --git a/arch/arm64/boot/dts/ti/k3-am654-idk.dtso b/arch/arm64/boot/dts/ti/k3-am654-idk.dtso
+index 8bdb87fcbde007..1674ad564be1fd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am654-idk.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am654-idk.dtso
+@@ -58,9 +58,7 @@ icssg0_eth: icssg0-eth {
+ <&main_udmap 0xc107>, /* egress slice 1 */
+
+ <&main_udmap 0x4100>, /* ingress slice 0 */
+- <&main_udmap 0x4101>, /* ingress slice 1 */
+- <&main_udmap 0x4102>, /* mgmnt rsp slice 0 */
+- <&main_udmap 0x4103>; /* mgmnt rsp slice 1 */
++ <&main_udmap 0x4101>; /* ingress slice 1 */
+ dma-names = "tx0-0", "tx0-1", "tx0-2", "tx0-3",
+ "tx1-0", "tx1-1", "tx1-2", "tx1-3",
+ "rx0", "rx1";
+@@ -126,9 +124,7 @@ icssg1_eth: icssg1-eth {
+ <&main_udmap 0xc207>, /* egress slice 1 */
+
+ <&main_udmap 0x4200>, /* ingress slice 0 */
+- <&main_udmap 0x4201>, /* ingress slice 1 */
+- <&main_udmap 0x4202>, /* mgmnt rsp slice 0 */
+- <&main_udmap 0x4203>; /* mgmnt rsp slice 1 */
++ <&main_udmap 0x4201>; /* ingress slice 1 */
+ dma-names = "tx0-0", "tx0-1", "tx0-2", "tx0-3",
+ "tx1-0", "tx1-1", "tx1-2", "tx1-3",
+ "rx0", "rx1";
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+index a2925555fe8180..fb899c99753ecd 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+@@ -123,7 +123,7 @@ main_r5fss1_core1_memory_region: r5f-memory@a5100000 {
+ no-map;
+ };
+
+- c66_1_dma_memory_region: c66-dma-memory@a6000000 {
++ c66_0_dma_memory_region: c66-dma-memory@a6000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x00 0xa6000000 0x00 0x100000>;
+ no-map;
+@@ -135,7 +135,7 @@ c66_0_memory_region: c66-memory@a6100000 {
+ no-map;
+ };
+
+- c66_0_dma_memory_region: c66-dma-memory@a7000000 {
++ c66_1_dma_memory_region: c66-dma-memory@a7000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x00 0xa7000000 0x00 0x100000>;
+ no-map;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+index 89fbfb21e5d3b2..e709edeb95cf71 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+@@ -120,7 +120,7 @@ main_r5fss1_core1_memory_region: r5f-memory@a5100000 {
+ no-map;
+ };
+
+- c66_1_dma_memory_region: c66-dma-memory@a6000000 {
++ c66_0_dma_memory_region: c66-dma-memory@a6000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x00 0xa6000000 0x00 0x100000>;
+ no-map;
+@@ -132,7 +132,7 @@ c66_0_memory_region: c66-memory@a6100000 {
+ no-map;
+ };
+
+- c66_0_dma_memory_region: c66-dma-memory@a7000000 {
++ c66_1_dma_memory_region: c66-dma-memory@a7000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x00 0xa7000000 0x00 0x100000>;
+ no-map;
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 5fd7caea441936..5a7dfeb8e8eb55 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -143,6 +143,7 @@
+ #define APPLE_CPU_PART_M2_AVALANCHE_MAX 0x039
+
+ #define AMPERE_CPU_PART_AMPERE1 0xAC3
++#define AMPERE_CPU_PART_AMPERE1A 0xAC4
+
+ #define MICROSOFT_CPU_PART_AZURE_COBALT_100 0xD49 /* Based on r0p0 of ARM Neoverse N2 */
+
+@@ -212,6 +213,7 @@
+ #define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
+ #define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
+ #define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
++#define MIDR_AMPERE1A MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1A)
+ #define MIDR_MICROSOFT_AZURE_COBALT_100 MIDR_CPU_MODEL(ARM_CPU_IMP_MICROSOFT, MICROSOFT_CPU_PART_AZURE_COBALT_100)
+
+ /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
+diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
+index 56c148890daf5c..2f3d56857a9745 100644
+--- a/arch/arm64/include/asm/esr.h
++++ b/arch/arm64/include/asm/esr.h
+@@ -10,63 +10,63 @@
+ #include <asm/memory.h>
+ #include <asm/sysreg.h>
+
+-#define ESR_ELx_EC_UNKNOWN (0x00)
+-#define ESR_ELx_EC_WFx (0x01)
++#define ESR_ELx_EC_UNKNOWN UL(0x00)
++#define ESR_ELx_EC_WFx UL(0x01)
+ /* Unallocated EC: 0x02 */
+-#define ESR_ELx_EC_CP15_32 (0x03)
+-#define ESR_ELx_EC_CP15_64 (0x04)
+-#define ESR_ELx_EC_CP14_MR (0x05)
+-#define ESR_ELx_EC_CP14_LS (0x06)
+-#define ESR_ELx_EC_FP_ASIMD (0x07)
+-#define ESR_ELx_EC_CP10_ID (0x08) /* EL2 only */
+-#define ESR_ELx_EC_PAC (0x09) /* EL2 and above */
++#define ESR_ELx_EC_CP15_32 UL(0x03)
++#define ESR_ELx_EC_CP15_64 UL(0x04)
++#define ESR_ELx_EC_CP14_MR UL(0x05)
++#define ESR_ELx_EC_CP14_LS UL(0x06)
++#define ESR_ELx_EC_FP_ASIMD UL(0x07)
++#define ESR_ELx_EC_CP10_ID UL(0x08) /* EL2 only */
++#define ESR_ELx_EC_PAC UL(0x09) /* EL2 and above */
+ /* Unallocated EC: 0x0A - 0x0B */
+-#define ESR_ELx_EC_CP14_64 (0x0C)
+-#define ESR_ELx_EC_BTI (0x0D)
+-#define ESR_ELx_EC_ILL (0x0E)
++#define ESR_ELx_EC_CP14_64 UL(0x0C)
++#define ESR_ELx_EC_BTI UL(0x0D)
++#define ESR_ELx_EC_ILL UL(0x0E)
+ /* Unallocated EC: 0x0F - 0x10 */
+-#define ESR_ELx_EC_SVC32 (0x11)
+-#define ESR_ELx_EC_HVC32 (0x12) /* EL2 only */
+-#define ESR_ELx_EC_SMC32 (0x13) /* EL2 and above */
++#define ESR_ELx_EC_SVC32 UL(0x11)
++#define ESR_ELx_EC_HVC32 UL(0x12) /* EL2 only */
++#define ESR_ELx_EC_SMC32 UL(0x13) /* EL2 and above */
+ /* Unallocated EC: 0x14 */
+-#define ESR_ELx_EC_SVC64 (0x15)
+-#define ESR_ELx_EC_HVC64 (0x16) /* EL2 and above */
+-#define ESR_ELx_EC_SMC64 (0x17) /* EL2 and above */
+-#define ESR_ELx_EC_SYS64 (0x18)
+-#define ESR_ELx_EC_SVE (0x19)
+-#define ESR_ELx_EC_ERET (0x1a) /* EL2 only */
++#define ESR_ELx_EC_SVC64 UL(0x15)
++#define ESR_ELx_EC_HVC64 UL(0x16) /* EL2 and above */
++#define ESR_ELx_EC_SMC64 UL(0x17) /* EL2 and above */
++#define ESR_ELx_EC_SYS64 UL(0x18)
++#define ESR_ELx_EC_SVE UL(0x19)
++#define ESR_ELx_EC_ERET UL(0x1a) /* EL2 only */
+ /* Unallocated EC: 0x1B */
+-#define ESR_ELx_EC_FPAC (0x1C) /* EL1 and above */
+-#define ESR_ELx_EC_SME (0x1D)
++#define ESR_ELx_EC_FPAC UL(0x1C) /* EL1 and above */
++#define ESR_ELx_EC_SME UL(0x1D)
+ /* Unallocated EC: 0x1E */
+-#define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */
+-#define ESR_ELx_EC_IABT_LOW (0x20)
+-#define ESR_ELx_EC_IABT_CUR (0x21)
+-#define ESR_ELx_EC_PC_ALIGN (0x22)
++#define ESR_ELx_EC_IMP_DEF UL(0x1f) /* EL3 only */
++#define ESR_ELx_EC_IABT_LOW UL(0x20)
++#define ESR_ELx_EC_IABT_CUR UL(0x21)
++#define ESR_ELx_EC_PC_ALIGN UL(0x22)
+ /* Unallocated EC: 0x23 */
+-#define ESR_ELx_EC_DABT_LOW (0x24)
+-#define ESR_ELx_EC_DABT_CUR (0x25)
+-#define ESR_ELx_EC_SP_ALIGN (0x26)
+-#define ESR_ELx_EC_MOPS (0x27)
+-#define ESR_ELx_EC_FP_EXC32 (0x28)
++#define ESR_ELx_EC_DABT_LOW UL(0x24)
++#define ESR_ELx_EC_DABT_CUR UL(0x25)
++#define ESR_ELx_EC_SP_ALIGN UL(0x26)
++#define ESR_ELx_EC_MOPS UL(0x27)
++#define ESR_ELx_EC_FP_EXC32 UL(0x28)
+ /* Unallocated EC: 0x29 - 0x2B */
+-#define ESR_ELx_EC_FP_EXC64 (0x2C)
++#define ESR_ELx_EC_FP_EXC64 UL(0x2C)
+ /* Unallocated EC: 0x2D - 0x2E */
+-#define ESR_ELx_EC_SERROR (0x2F)
+-#define ESR_ELx_EC_BREAKPT_LOW (0x30)
+-#define ESR_ELx_EC_BREAKPT_CUR (0x31)
+-#define ESR_ELx_EC_SOFTSTP_LOW (0x32)
+-#define ESR_ELx_EC_SOFTSTP_CUR (0x33)
+-#define ESR_ELx_EC_WATCHPT_LOW (0x34)
+-#define ESR_ELx_EC_WATCHPT_CUR (0x35)
++#define ESR_ELx_EC_SERROR UL(0x2F)
++#define ESR_ELx_EC_BREAKPT_LOW UL(0x30)
++#define ESR_ELx_EC_BREAKPT_CUR UL(0x31)
++#define ESR_ELx_EC_SOFTSTP_LOW UL(0x32)
++#define ESR_ELx_EC_SOFTSTP_CUR UL(0x33)
++#define ESR_ELx_EC_WATCHPT_LOW UL(0x34)
++#define ESR_ELx_EC_WATCHPT_CUR UL(0x35)
+ /* Unallocated EC: 0x36 - 0x37 */
+-#define ESR_ELx_EC_BKPT32 (0x38)
++#define ESR_ELx_EC_BKPT32 UL(0x38)
+ /* Unallocated EC: 0x39 */
+-#define ESR_ELx_EC_VECTOR32 (0x3A) /* EL2 only */
++#define ESR_ELx_EC_VECTOR32 UL(0x3A) /* EL2 only */
+ /* Unallocated EC: 0x3B */
+-#define ESR_ELx_EC_BRK64 (0x3C)
++#define ESR_ELx_EC_BRK64 UL(0x3C)
+ /* Unallocated EC: 0x3D - 0x3F */
+-#define ESR_ELx_EC_MAX (0x3F)
++#define ESR_ELx_EC_MAX UL(0x3F)
+
+ #define ESR_ELx_EC_SHIFT (26)
+ #define ESR_ELx_EC_WIDTH (6)
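+
+The UL() wrapping matters once an EC constant is shifted toward bit 31: a
+plain int constant overflows and sign-extends when widened to 64 bits. A
+worked example with the values above:
+
+	unsigned long bad  = 0x3F << 26;	/* int arithmetic: widens to
+						 * 0xfffffffffc000000 */
+	unsigned long good = UL(0x3F) << 26;	/* 0x00000000fc000000 */
+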
+diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h
+index 8a45b7a411e045..57f76d82077ea5 100644
+--- a/arch/arm64/include/uapi/asm/sigcontext.h
++++ b/arch/arm64/include/uapi/asm/sigcontext.h
+@@ -320,10 +320,10 @@ struct zt_context {
+ ((sizeof(struct za_context) + (__SVE_VQ_BYTES - 1)) \
+ / __SVE_VQ_BYTES * __SVE_VQ_BYTES)
+
+-#define ZA_SIG_REGS_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES))
++#define ZA_SIG_REGS_SIZE(vq) (((vq) * __SVE_VQ_BYTES) * ((vq) * __SVE_VQ_BYTES))
+
+ #define ZA_SIG_ZAV_OFFSET(vq, n) (ZA_SIG_REGS_OFFSET + \
+- (SVE_SIG_ZREG_SIZE(vq) * n))
++ (SVE_SIG_ZREG_SIZE(vq) * (n)))
+
+ #define ZA_SIG_CONTEXT_SIZE(vq) \
+ (ZA_SIG_REGS_OFFSET + ZA_SIG_REGS_SIZE(vq))
+@@ -334,7 +334,7 @@ struct zt_context {
+
+ #define ZT_SIG_REGS_OFFSET sizeof(struct zt_context)
+
+-#define ZT_SIG_REGS_SIZE(n) (ZT_SIG_REG_BYTES * n)
++#define ZT_SIG_REGS_SIZE(n) (ZT_SIG_REG_BYTES * (n))
+
+ #define ZT_SIG_CONTEXT_SIZE(n) \
+ (sizeof(struct zt_context) + ZT_SIG_REGS_SIZE(n))
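+
+The added parentheses protect against expression arguments, where the
+multiplication otherwise binds wrongly. For instance:
+
+	ZT_SIG_REGS_SIZE(n + 1)
+		old: ZT_SIG_REG_BYTES * n + 1	/* adds a single byte   */
+		new: ZT_SIG_REG_BYTES * (n + 1)	/* adds a whole register */
+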
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index f6b6b450735715..dfefbdf4073a6a 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -456,6 +456,14 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
+ };
+ #endif
+
++#ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
++static const struct midr_range erratum_ac03_cpu_38_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
++ {},
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ {
+@@ -772,7 +780,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ {
+ .desc = "AmpereOne erratum AC03_CPU_38",
+ .capability = ARM64_WORKAROUND_AMPERE_AC03_CPU_38,
+- ERRATA_MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++ ERRATA_MIDR_RANGE_LIST(erratum_ac03_cpu_38_list),
+ },
+ #endif
+ {
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index f01f0fd7b7feb4..3b3f6b56e73303 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -68,7 +68,7 @@ enum ipi_msg_type {
+ IPI_RESCHEDULE,
+ IPI_CALL_FUNC,
+ IPI_CPU_STOP,
+- IPI_CPU_CRASH_STOP,
++ IPI_CPU_STOP_NMI,
+ IPI_TIMER,
+ IPI_IRQ_WORK,
+ NR_IPI,
+@@ -85,6 +85,8 @@ static int ipi_irq_base __ro_after_init;
+ static int nr_ipi __ro_after_init = NR_IPI;
+ static struct irq_desc *ipi_desc[MAX_IPI] __ro_after_init;
+
++static bool crash_stop;
++
+ static void ipi_setup(int cpu);
+
+ #ifdef CONFIG_HOTPLUG_CPU
+@@ -823,7 +825,7 @@ static const char *ipi_types[MAX_IPI] __tracepoint_string = {
+ [IPI_RESCHEDULE] = "Rescheduling interrupts",
+ [IPI_CALL_FUNC] = "Function call interrupts",
+ [IPI_CPU_STOP] = "CPU stop interrupts",
+- [IPI_CPU_CRASH_STOP] = "CPU stop (for crash dump) interrupts",
++ [IPI_CPU_STOP_NMI] = "CPU stop NMIs",
+ [IPI_TIMER] = "Timer broadcast interrupts",
+ [IPI_IRQ_WORK] = "IRQ work interrupts",
+ [IPI_CPU_BACKTRACE] = "CPU backtrace interrupts",
+@@ -867,9 +869,9 @@ void arch_irq_work_raise(void)
+ }
+ #endif
+
+-static void __noreturn local_cpu_stop(void)
++static void __noreturn local_cpu_stop(unsigned int cpu)
+ {
+- set_cpu_online(smp_processor_id(), false);
++ set_cpu_online(cpu, false);
+
+ local_daif_mask();
+ sdei_mask_local_cpu();
+@@ -883,21 +885,26 @@ static void __noreturn local_cpu_stop(void)
+ */
+ void __noreturn panic_smp_self_stop(void)
+ {
+- local_cpu_stop();
++ local_cpu_stop(smp_processor_id());
+ }
+
+-#ifdef CONFIG_KEXEC_CORE
+-static atomic_t waiting_for_crash_ipi = ATOMIC_INIT(0);
+-#endif
+-
+ static void __noreturn ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs)
+ {
+ #ifdef CONFIG_KEXEC_CORE
++ /*
++ * Use local_daif_mask() instead of local_irq_disable() to make sure
++ * that pseudo-NMIs are disabled. The "crash stop" code starts with
++ * an IRQ and falls back to NMI (which might be pseudo). If the IRQ
++ * finally goes through right as we're timing out then the NMI could
++ * interrupt us. It's better to prevent the NMI and let the IRQ
++ * finish since the pt_regs will be better.
++ */
++ local_daif_mask();
++
+ crash_save_cpu(regs, cpu);
+
+- atomic_dec(&waiting_for_crash_ipi);
++ set_cpu_online(cpu, false);
+
+- local_irq_disable();
+ sdei_mask_local_cpu();
+
+ if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
+@@ -962,14 +969,12 @@ static void do_handle_IPI(int ipinr)
+ break;
+
+ case IPI_CPU_STOP:
+- local_cpu_stop();
+- break;
+-
+- case IPI_CPU_CRASH_STOP:
+- if (IS_ENABLED(CONFIG_KEXEC_CORE)) {
++ case IPI_CPU_STOP_NMI:
++ if (IS_ENABLED(CONFIG_KEXEC_CORE) && crash_stop) {
+ ipi_cpu_crash_stop(cpu, get_irq_regs());
+-
+ unreachable();
++ } else {
++ local_cpu_stop(cpu);
+ }
+ break;
+
+@@ -1024,8 +1029,7 @@ static bool ipi_should_be_nmi(enum ipi_msg_type ipi)
+ return false;
+
+ switch (ipi) {
+- case IPI_CPU_STOP:
+- case IPI_CPU_CRASH_STOP:
++ case IPI_CPU_STOP_NMI:
+ case IPI_CPU_BACKTRACE:
+ case IPI_KGDB_ROUNDUP:
+ return true;
+@@ -1138,79 +1142,109 @@ static inline unsigned int num_other_online_cpus(void)
+
+ void smp_send_stop(void)
+ {
++ static unsigned long stop_in_progress;
++ cpumask_t mask;
+ unsigned long timeout;
+
+- if (num_other_online_cpus()) {
+- cpumask_t mask;
++ /*
++ * If this cpu is the only one alive at this point in time, online or
++ * not, there are no stop messages to be sent around, so just back out.
++ */
++ if (num_other_online_cpus() == 0)
++ goto skip_ipi;
+
+- cpumask_copy(&mask, cpu_online_mask);
+- cpumask_clear_cpu(smp_processor_id(), &mask);
++ /* Only proceed if this is the first CPU to reach this code */
++ if (test_and_set_bit(0, &stop_in_progress))
++ return;
+
+- if (system_state <= SYSTEM_RUNNING)
+- pr_crit("SMP: stopping secondary CPUs\n");
+- smp_cross_call(&mask, IPI_CPU_STOP);
+- }
++ /*
++ * Send an IPI to all currently online CPUs except the CPU running
++ * this code.
++ *
++ * NOTE: we don't do anything here to prevent other CPUs from coming
++ * online after we snapshot `cpu_online_mask`. Ideally, the calling code
++ * should do something to prevent other CPUs from coming up. This code
++ * can be called in the panic path and thus it doesn't seem wise to
++ * grab the CPU hotplug mutex ourselves. Worst case:
++ * - If a CPU comes online as we're running, we'll likely notice it
++ * during the 1 second wait below and then we'll catch it when we try
++ * with an NMI (assuming NMIs are enabled) since we re-snapshot the
++ * mask before sending an NMI.
++ * - If we leave the function and see that CPUs are still online we'll
++ * at least print a warning. Especially without NMIs this function
++ * isn't foolproof anyway so calling code will just have to accept
++ * the fact that there could be cases where a CPU can't be stopped.
++ */
++ cpumask_copy(&mask, cpu_online_mask);
++ cpumask_clear_cpu(smp_processor_id(), &mask);
+
+- /* Wait up to one second for other CPUs to stop */
++ if (system_state <= SYSTEM_RUNNING)
++ pr_crit("SMP: stopping secondary CPUs\n");
++
++ /*
++ * Start with a normal IPI and wait up to one second for other CPUs to
++ * stop. We do this first because it gives other processors a chance
++ * to exit critical sections / drop locks and makes the rest of the
++ * stop process (especially console flush) more robust.
++ */
++ smp_cross_call(&mask, IPI_CPU_STOP);
+ timeout = USEC_PER_SEC;
+ while (num_other_online_cpus() && timeout--)
+ udelay(1);
+
+- if (num_other_online_cpus())
++ /*
++ * If CPUs are still online, try an NMI. There's no excuse for this to
++ * be slow, so we only give them an extra 10 ms to respond.
++ */
++ if (num_other_online_cpus() && ipi_should_be_nmi(IPI_CPU_STOP_NMI)) {
++ smp_rmb();
++ cpumask_copy(&mask, cpu_online_mask);
++ cpumask_clear_cpu(smp_processor_id(), &mask);
++
++ pr_info("SMP: retry stop with NMI for CPUs %*pbl\n",
++ cpumask_pr_args(&mask));
++
++ smp_cross_call(&mask, IPI_CPU_STOP_NMI);
++ timeout = USEC_PER_MSEC * 10;
++ while (num_other_online_cpus() && timeout--)
++ udelay(1);
++ }
++
++ if (num_other_online_cpus()) {
++ smp_rmb();
++ cpumask_copy(&mask, cpu_online_mask);
++ cpumask_clear_cpu(smp_processor_id(), &mask);
++
+ pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
+- cpumask_pr_args(cpu_online_mask));
++ cpumask_pr_args(&mask));
++ }
+
++skip_ipi:
+ sdei_mask_local_cpu();
+ }
+
+ #ifdef CONFIG_KEXEC_CORE
+ void crash_smp_send_stop(void)
+ {
+- static int cpus_stopped;
+- cpumask_t mask;
+- unsigned long timeout;
+-
+ /*
+ * This function can be called twice in panic path, but obviously
+ * we execute this only once.
++ *
++ * We use this same boolean to tell whether the IPI we send was a
++ * stop or a "crash stop".
+ */
+- if (cpus_stopped)
++ if (crash_stop)
+ return;
++ crash_stop = 1;
+
+- cpus_stopped = 1;
++ smp_send_stop();
+
+- /*
+- * If this cpu is the only one alive at this point in time, online or
+- * not, there are no stop messages to be sent around, so just back out.
+- */
+- if (num_other_online_cpus() == 0)
+- goto skip_ipi;
+-
+- cpumask_copy(&mask, cpu_online_mask);
+- cpumask_clear_cpu(smp_processor_id(), &mask);
+-
+- atomic_set(&waiting_for_crash_ipi, num_other_online_cpus());
+-
+- pr_crit("SMP: stopping secondary CPUs\n");
+- smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
+-
+- /* Wait up to one second for other CPUs to stop */
+- timeout = USEC_PER_SEC;
+- while ((atomic_read(&waiting_for_crash_ipi) > 0) && timeout--)
+- udelay(1);
+-
+- if (atomic_read(&waiting_for_crash_ipi) > 0)
+- pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
+- cpumask_pr_args(&mask));
+-
+-skip_ipi:
+- sdei_mask_local_cpu();
+ sdei_handler_abort();
+ }
+
+ bool smp_crash_stop_failed(void)
+ {
+- return (atomic_read(&waiting_for_crash_ipi) > 0);
++ return num_other_online_cpus() != 0;
+ }
+ #endif
+
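+In summary, the reworked stop path escalates in two stages and the crash
+path now reuses it:
+
+	smp_send_stop()
+		IPI_CPU_STOP     -> wait up to 1 s for other CPUs to stop
+		IPI_CPU_STOP_NMI -> retry, waiting another 10 ms (if NMIs work)
+	crash_smp_send_stop()
+		sets crash_stop, then calls smp_send_stop(); receivers run
+		ipi_cpu_crash_stop() instead of local_cpu_stop()
+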
+diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
+index e715c157c2c43e..e433dfab882aa5 100644
+--- a/arch/arm64/kvm/hyp/nvhe/ffa.c
++++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
+@@ -426,9 +426,9 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_res *res,
+ return;
+ }
+
+-static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+- struct arm_smccc_res *res,
+- struct kvm_cpu_context *ctxt)
++static void __do_ffa_mem_xfer(const u64 func_id,
++ struct arm_smccc_res *res,
++ struct kvm_cpu_context *ctxt)
+ {
+ DECLARE_REG(u32, len, ctxt, 1);
+ DECLARE_REG(u32, fraglen, ctxt, 2);
+@@ -440,9 +440,6 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+ u32 offset, nr_ranges;
+ int ret = 0;
+
+- BUILD_BUG_ON(func_id != FFA_FN64_MEM_SHARE &&
+- func_id != FFA_FN64_MEM_LEND);
+-
+ if (addr_mbz || npages_mbz || fraglen > len ||
+ fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
+ ret = FFA_RET_INVALID_PARAMETERS;
+@@ -461,6 +458,11 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+ goto out_unlock;
+ }
+
++ if (len > ffa_desc_buf.len) {
++ ret = FFA_RET_NO_MEMORY;
++ goto out_unlock;
++ }
++
+ buf = hyp_buffers.tx;
+ memcpy(buf, host_buffers.tx, fraglen);
+
+@@ -512,6 +514,13 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+ goto out_unlock;
+ }
+
++#define do_ffa_mem_xfer(fid, res, ctxt) \
++ do { \
++ BUILD_BUG_ON((fid) != FFA_FN64_MEM_SHARE && \
++ (fid) != FFA_FN64_MEM_LEND); \
++ __do_ffa_mem_xfer((fid), (res), (ctxt)); \
++ } while (0);
++
+ static void do_ffa_mem_reclaim(struct arm_smccc_res *res,
+ struct kvm_cpu_context *ctxt)
+ {
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index dd0bb069df4bbe..59e05a7aea56a5 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -26,7 +26,7 @@
+
+ #define TMP_REG_1 (MAX_BPF_JIT_REG + 0)
+ #define TMP_REG_2 (MAX_BPF_JIT_REG + 1)
+-#define TCALL_CNT (MAX_BPF_JIT_REG + 2)
++#define TCCNT_PTR (MAX_BPF_JIT_REG + 2)
+ #define TMP_REG_3 (MAX_BPF_JIT_REG + 3)
+ #define FP_BOTTOM (MAX_BPF_JIT_REG + 4)
+ #define ARENA_VM_START (MAX_BPF_JIT_REG + 5)
+@@ -63,8 +63,8 @@ static const int bpf2a64[] = {
+ [TMP_REG_1] = A64_R(10),
+ [TMP_REG_2] = A64_R(11),
+ [TMP_REG_3] = A64_R(12),
+- /* tail_call_cnt */
+- [TCALL_CNT] = A64_R(26),
++ /* tail_call_cnt_ptr */
++ [TCCNT_PTR] = A64_R(26),
+ /* temporary register for blinding constants */
+ [BPF_REG_AX] = A64_R(9),
+ [FP_BOTTOM] = A64_R(27),
+@@ -282,13 +282,35 @@ static bool is_lsi_offset(int offset, int scale)
+ * mov x29, sp
+ * stp x19, x20, [sp, #-16]!
+ * stp x21, x22, [sp, #-16]!
+- * stp x25, x26, [sp, #-16]!
++ * stp x26, x25, [sp, #-16]!
++ * stp x26, x25, [sp, #-16]!
+ * stp x27, x28, [sp, #-16]!
+ * mov x25, sp
+ * mov tcc, #0
+ * // PROLOGUE_OFFSET
+ */
+
++static void prepare_bpf_tail_call_cnt(struct jit_ctx *ctx)
++{
++ const struct bpf_prog *prog = ctx->prog;
++ const bool is_main_prog = !bpf_is_subprog(prog);
++ const u8 ptr = bpf2a64[TCCNT_PTR];
++ const u8 fp = bpf2a64[BPF_REG_FP];
++ const u8 tcc = ptr;
++
++ emit(A64_PUSH(ptr, fp, A64_SP), ctx);
++ if (is_main_prog) {
++ /* Initialize tail_call_cnt. */
++ emit(A64_MOVZ(1, tcc, 0, 0), ctx);
++ emit(A64_PUSH(tcc, fp, A64_SP), ctx);
++ emit(A64_MOV(1, ptr, A64_SP), ctx);
++ } else {
++ emit(A64_PUSH(ptr, fp, A64_SP), ctx);
++ emit(A64_NOP, ctx);
++ emit(A64_NOP, ctx);
++ }
++}
++
+ #define BTI_INSNS (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) ? 1 : 0)
+ #define PAC_INSNS (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) ? 1 : 0)
+
+@@ -296,7 +318,7 @@ static bool is_lsi_offset(int offset, int scale)
+ #define POKE_OFFSET (BTI_INSNS + 1)
+
+ /* Tail call offset to jump into */
+-#define PROLOGUE_OFFSET (BTI_INSNS + 2 + PAC_INSNS + 8)
++#define PROLOGUE_OFFSET (BTI_INSNS + 2 + PAC_INSNS + 10)
+
+ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ bool is_exception_cb, u64 arena_vm_start)
+@@ -308,7 +330,6 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ const u8 r8 = bpf2a64[BPF_REG_8];
+ const u8 r9 = bpf2a64[BPF_REG_9];
+ const u8 fp = bpf2a64[BPF_REG_FP];
+- const u8 tcc = bpf2a64[TCALL_CNT];
+ const u8 fpb = bpf2a64[FP_BOTTOM];
+ const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
+ const int idx0 = ctx->idx;
+@@ -359,7 +380,7 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ /* Save callee-saved registers */
+ emit(A64_PUSH(r6, r7, A64_SP), ctx);
+ emit(A64_PUSH(r8, r9, A64_SP), ctx);
+- emit(A64_PUSH(fp, tcc, A64_SP), ctx);
++ prepare_bpf_tail_call_cnt(ctx);
+ emit(A64_PUSH(fpb, A64_R(28), A64_SP), ctx);
+ } else {
+ /*
+@@ -372,18 +393,15 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ * callee-saved registers. The exception callback will not push
+ * anything and re-use the main program's stack.
+ *
+- * 10 registers are on the stack
++ * 12 registers are on the stack
+ */
+- emit(A64_SUB_I(1, A64_SP, A64_FP, 80), ctx);
++ emit(A64_SUB_I(1, A64_SP, A64_FP, 96), ctx);
+ }
+
+ /* Set up BPF prog stack base register */
+ emit(A64_MOV(1, fp, A64_SP), ctx);
+
+ if (!ebpf_from_cbpf && is_main_prog) {
+- /* Initialize tail_call_cnt */
+- emit(A64_MOVZ(1, tcc, 0, 0), ctx);
+-
+ cur_offset = ctx->idx - idx0;
+ if (cur_offset != PROLOGUE_OFFSET) {
+ pr_err_once("PROLOGUE_OFFSET = %d, expected %d!\n",
+@@ -432,7 +450,8 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+
+ const u8 tmp = bpf2a64[TMP_REG_1];
+ const u8 prg = bpf2a64[TMP_REG_2];
+- const u8 tcc = bpf2a64[TCALL_CNT];
++ const u8 tcc = bpf2a64[TMP_REG_3];
++ const u8 ptr = bpf2a64[TCCNT_PTR];
+ const int idx0 = ctx->idx;
+ #define cur_offset (ctx->idx - idx0)
+ #define jmp_offset (out_offset - (cur_offset))
+@@ -449,11 +468,12 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+
+ /*
+- * if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
++ * if ((*tail_call_cnt_ptr) >= MAX_TAIL_CALL_CNT)
+ * goto out;
+- * tail_call_cnt++;
++ * (*tail_call_cnt_ptr)++;
+ */
+ emit_a64_mov_i64(tmp, MAX_TAIL_CALL_CNT, ctx);
++ emit(A64_LDR64I(tcc, ptr, 0), ctx);
+ emit(A64_CMP(1, tcc, tmp), ctx);
+ emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+ emit(A64_ADD_I(1, tcc, tcc, 1), ctx);
+@@ -469,6 +489,9 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ emit(A64_LDR64(prg, tmp, prg), ctx);
+ emit(A64_CBZ(1, prg, jmp_offset), ctx);
+
++ /* Update tail_call_cnt if the slot is populated. */
++ emit(A64_STR64I(tcc, ptr, 0), ctx);
++
+ /* goto *(prog->bpf_func + prologue_offset); */
+ off = offsetof(struct bpf_prog, bpf_func);
+ emit_a64_mov_i64(tmp, off, ctx);
+@@ -721,6 +744,7 @@ static void build_epilogue(struct jit_ctx *ctx, bool is_exception_cb)
+ const u8 r8 = bpf2a64[BPF_REG_8];
+ const u8 r9 = bpf2a64[BPF_REG_9];
+ const u8 fp = bpf2a64[BPF_REG_FP];
++ const u8 ptr = bpf2a64[TCCNT_PTR];
+ const u8 fpb = bpf2a64[FP_BOTTOM];
+
+ /* We're done with BPF stack */
+@@ -738,7 +762,8 @@ static void build_epilogue(struct jit_ctx *ctx, bool is_exception_cb)
+ /* Restore x27 and x28 */
+ emit(A64_POP(fpb, A64_R(28), A64_SP), ctx);
+ /* Restore fs (x25) and x26 */
+- emit(A64_POP(fp, A64_R(26), A64_SP), ctx);
++ emit(A64_POP(ptr, fp, A64_SP), ctx);
++ emit(A64_POP(ptr, fp, A64_SP), ctx);
+
+ /* Restore callee-saved register */
+ emit(A64_POP(r8, r9, A64_SP), ctx);
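+
+Summarizing the scheme in the hunks above: the main program now keeps
+tail_call_cnt in its own stack frame and stores only a pointer to it in x26
+(TCCNT_PTR); subprograms inherit that pointer, so the limit is enforced
+across subprogram boundaries. The emitted tail-call check, as pseudocode:
+
+	tcc = *tccnt_ptr;		/* A64_LDR64I(tcc, ptr, 0) */
+	if (tcc >= MAX_TAIL_CALL_CNT)
+		goto out;
+	tcc++;
+	prog = array->ptrs[index];
+	if (!prog)
+		goto out;
+	*tccnt_ptr = tcc;		/* A64_STR64I(tcc, ptr, 0) */
+	goto *(prog->bpf_func + prologue_offset);
+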
+diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
+index 2584e94e213468..fda7eac23f872d 100644
+--- a/arch/m68k/kernel/process.c
++++ b/arch/m68k/kernel/process.c
+@@ -117,7 +117,7 @@ asmlinkage int m68k_clone(struct pt_regs *regs)
+ {
+ /* regs will be equal to current_pt_regs() */
+ struct kernel_clone_args args = {
+- .flags = regs->d1 & ~CSIGNAL,
++ .flags = (u32)(regs->d1) & ~CSIGNAL,
+ .pidfd = (int __user *)regs->d3,
+ .child_tid = (int __user *)regs->d4,
+ .parent_tid = (int __user *)regs->d3,
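+
+The (u32) cast guards against sign extension: regs->d1 is a 32-bit long on
+m68k, while .flags is u64. A worked case with bit 31 set:
+
+	d1                         = 0x80000000
+	d1 & ~CSIGNAL        (old) = 0xffffffff80000000	/* sign-extended */
+	(u32)(d1) & ~CSIGNAL (new) = 0x0000000080000000	/* zero-extended */
+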
+diff --git a/arch/powerpc/crypto/Kconfig b/arch/powerpc/crypto/Kconfig
+index 09ebcbdfb34f6b..46a4c85e85e245 100644
+--- a/arch/powerpc/crypto/Kconfig
++++ b/arch/powerpc/crypto/Kconfig
+@@ -107,6 +107,7 @@ config CRYPTO_AES_PPC_SPE
+
+ config CRYPTO_AES_GCM_P10
+ tristate "Stitched AES/GCM acceleration support on P10 or later CPU (PPC)"
++ depends on BROKEN
+ depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
+ select CRYPTO_LIB_AES
+ select CRYPTO_ALGAPI
+diff --git a/arch/powerpc/include/asm/asm-compat.h b/arch/powerpc/include/asm/asm-compat.h
+index 2bc53c646ccd7d..83848b534cb171 100644
+--- a/arch/powerpc/include/asm/asm-compat.h
++++ b/arch/powerpc/include/asm/asm-compat.h
+@@ -39,6 +39,12 @@
+ #define STDX_BE stringify_in_c(stdbrx)
+ #endif
+
++#ifdef CONFIG_CC_IS_CLANG
++#define DS_FORM_CONSTRAINT "Z<>"
++#else
++#define DS_FORM_CONSTRAINT "YZ<>"
++#endif
++
+ #else /* 32-bit */
+
+ /* operations for longs and pointers */
+diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
+index 5bf6a4d49268c7..d1ea554c33ed7e 100644
+--- a/arch/powerpc/include/asm/atomic.h
++++ b/arch/powerpc/include/asm/atomic.h
+@@ -11,6 +11,7 @@
+ #include <asm/cmpxchg.h>
+ #include <asm/barrier.h>
+ #include <asm/asm-const.h>
++#include <asm/asm-compat.h>
+
+ /*
+ * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
+@@ -197,7 +198,7 @@ static __inline__ s64 arch_atomic64_read(const atomic64_t *v)
+ if (IS_ENABLED(CONFIG_PPC_KERNEL_PREFIXED))
+ __asm__ __volatile__("ld %0,0(%1)" : "=r"(t) : "b"(&v->counter));
+ else
+- __asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m<>"(v->counter));
++ __asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : DS_FORM_CONSTRAINT (v->counter));
+
+ return t;
+ }
+@@ -208,7 +209,7 @@ static __inline__ void arch_atomic64_set(atomic64_t *v, s64 i)
+ if (IS_ENABLED(CONFIG_PPC_KERNEL_PREFIXED))
+ __asm__ __volatile__("std %1,0(%2)" : "=m"(v->counter) : "r"(i), "b"(&v->counter));
+ else
+- __asm__ __volatile__("std%U0%X0 %1,%0" : "=m<>"(v->counter) : "r"(i));
++ __asm__ __volatile__("std%U0%X0 %1,%0" : "=" DS_FORM_CONSTRAINT (v->counter) : "r"(i));
+ }
+
+ #define ATOMIC64_OP(op, asm_op) \
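+
+For context on DS_FORM_CONSTRAINT, now shared via asm-compat.h: ld and std
+are DS-form instructions whose displacement must be a multiple of four, so
+the generic "m<>" constraint can hand them an offset they cannot encode.
+The "YZ<>" string restricts operands accordingly, with "Y" dropped for
+clang, which does not support that constraint (a reading of the hunks
+above, not a full statement of the ISA rules).
+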
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index fd594bf6c6a9c5..4f5a46a77fa2b6 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -6,6 +6,7 @@
+ #include <asm/page.h>
+ #include <asm/extable.h>
+ #include <asm/kup.h>
++#include <asm/asm-compat.h>
+
+ #ifdef __powerpc64__
+ /* We use TASK_SIZE_USER64 as TASK_SIZE is not constant */
+@@ -92,12 +93,6 @@ __pu_failed: \
+ : label)
+ #endif
+
+-#ifdef CONFIG_CC_IS_CLANG
+-#define DS_FORM_CONSTRAINT "Z<>"
+-#else
+-#define DS_FORM_CONSTRAINT "YZ<>"
+-#endif
+-
+ #ifdef __powerpc64__
+ #ifdef CONFIG_PPC_KERNEL_PREFIXED
+ #define __put_user_asm2_goto(x, ptr, label) \
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index ac74321b119287..c955a8196d55e1 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -41,12 +41,12 @@
+ #include "head_32.h"
+
+ .macro compare_to_kernel_boundary scratch, addr
+-#if CONFIG_TASK_SIZE <= 0x80000000 && CONFIG_PAGE_OFFSET >= 0x80000000
++#if CONFIG_TASK_SIZE <= 0x80000000 && MODULES_VADDR >= 0x80000000
+ /* By simply checking Address >= 0x80000000, we know if its a kernel address */
+ not. \scratch, \addr
+ #else
+ rlwinm \scratch, \addr, 16, 0xfff8
+- cmpli cr0, \scratch, PAGE_OFFSET@h
++ cmpli cr0, \scratch, TASK_SIZE@h
+ #endif
+ .endm
+
+@@ -404,7 +404,7 @@ FixupDAR:/* Entry point for dcbx workaround. */
+ mfspr r10, SPRN_SRR0
+ mtspr SPRN_MD_EPN, r10
+ rlwinm r11, r10, 16, 0xfff8
+- cmpli cr1, r11, PAGE_OFFSET@h
++ cmpli cr1, r11, TASK_SIZE@h
+ mfspr r11, SPRN_M_TWB /* Get level 1 table */
+ blt+ cr1, 3f
+
+diff --git a/arch/powerpc/kernel/vdso/gettimeofday.S b/arch/powerpc/kernel/vdso/gettimeofday.S
+index 48fc6658053aa4..894cb939cd2b31 100644
+--- a/arch/powerpc/kernel/vdso/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso/gettimeofday.S
+@@ -38,11 +38,7 @@
+ .else
+ addi r4, r5, VDSO_DATA_OFFSET
+ .endif
+-#ifdef __powerpc64__
+ bl CFUNC(DOTSYM(\funct))
+-#else
+- bl \funct
+-#endif
+ PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
+ #ifdef __powerpc64__
+ PPC_LL r2, PPC_MIN_STKFRM + STK_GOT(r1)
+diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
+index 388bba0ab3e7de..15d918dce27d0b 100644
+--- a/arch/powerpc/mm/nohash/8xx.c
++++ b/arch/powerpc/mm/nohash/8xx.c
+@@ -150,11 +150,11 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
+
+ mmu_mapin_immr();
+
+- mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, true);
++ mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_X, true);
+ if (debug_pagealloc_enabled_or_kfence()) {
+ top = boundary;
+ } else {
+- mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL_TEXT, true);
++ mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL_X, true);
+ mmu_mapin_ram_chunk(einittext8, top, PAGE_KERNEL, true);
+ }
+
+diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
+index fa0f535bbbf024..1d85b661750884 100644
+--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
++++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
+@@ -10,6 +10,7 @@
+ #define __KVM_VCPU_RISCV_PMU_H
+
+ #include <linux/perf/riscv_pmu.h>
++#include <asm/kvm_vcpu_insn.h>
+ #include <asm/sbi.h>
+
+ #ifdef CONFIG_RISCV_PMU_SBI
+@@ -64,11 +65,11 @@ struct kvm_pmu {
+
+ #if defined(CONFIG_32BIT)
+ #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+-{.base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
+-{.base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
++{.base = CSR_CYCLEH, .count = 32, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
++{.base = CSR_CYCLE, .count = 32, .func = kvm_riscv_vcpu_pmu_read_hpm },
+ #else
+ #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+-{.base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
++{.base = CSR_CYCLE, .count = 32, .func = kvm_riscv_vcpu_pmu_read_hpm },
+ #endif
+
+ int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid);
+@@ -104,8 +105,20 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
+ struct kvm_pmu {
+ };
+
++static inline int kvm_riscv_vcpu_pmu_read_legacy(struct kvm_vcpu *vcpu, unsigned int csr_num,
++ unsigned long *val, unsigned long new_val,
++ unsigned long wr_mask)
++{
++ if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
++ *val = 0;
++ return KVM_INSN_CONTINUE_NEXT_SEPC;
++ } else {
++ return KVM_INSN_ILLEGAL_TRAP;
++ }
++}
++
+ #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+-{.base = 0, .count = 0, .func = NULL },
++{.base = CSR_CYCLE, .count = 3, .func = kvm_riscv_vcpu_pmu_read_legacy },
+
+ static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {}
+ static inline int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid)
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index 3348a61de7d998..2932791e938821 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -62,7 +62,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ perf_callchain_store(entry, regs->epc);
+
+ fp = user_backtrace(entry, fp, regs->ra);
+- while (fp && !(fp & 0x3) && entry->nr < entry->max_stack)
++ while (fp && !(fp & 0x7) && entry->nr < entry->max_stack)
+ fp = user_backtrace(entry, fp, 0);
+ }
+
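+The widened mask reflects RV64 frame-pointer alignment: frames are 8-byte
+aligned, so all three low bits must be clear. Quick check with a misaligned
+value:
+
+	fp = 0x1c;
+	fp & 0x3	/* == 0: the old test wrongly accepts fp */
+	fp & 0x7	/* == 4: the new test rejects it         */
+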
+diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
+index bcf41d6e0df0e3..2707a51b082ca7 100644
+--- a/arch/riscv/kvm/vcpu_pmu.c
++++ b/arch/riscv/kvm/vcpu_pmu.c
+@@ -391,19 +391,9 @@ int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
+ static void kvm_pmu_clear_snapshot_area(struct kvm_vcpu *vcpu)
+ {
+ struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+- int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
+
+- if (kvpmu->sdata) {
+- if (kvpmu->snapshot_addr != INVALID_GPA) {
+- memset(kvpmu->sdata, 0, snapshot_area_size);
+- kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr,
+- kvpmu->sdata, snapshot_area_size);
+- } else {
+- pr_warn("snapshot address invalid\n");
+- }
+- kfree(kvpmu->sdata);
+- kvpmu->sdata = NULL;
+- }
++ kfree(kvpmu->sdata);
++ kvpmu->sdata = NULL;
+ kvpmu->snapshot_addr = INVALID_GPA;
+ }
+
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index 62f409d4176e41..7de128be8db9bc 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -127,8 +127,8 @@ void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ run->riscv_sbi.args[3] = cp->a3;
+ run->riscv_sbi.args[4] = cp->a4;
+ run->riscv_sbi.args[5] = cp->a5;
+- run->riscv_sbi.ret[0] = cp->a0;
+- run->riscv_sbi.ret[1] = cp->a1;
++ run->riscv_sbi.ret[0] = SBI_ERR_NOT_SUPPORTED;
++ run->riscv_sbi.ret[1] = 0;
+ }
+
+ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu,
+diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
+index fbadca645af79f..406746666eb782 100644
+--- a/arch/s390/include/asm/ftrace.h
++++ b/arch/s390/include/asm/ftrace.h
+@@ -6,8 +6,23 @@
+ #define MCOUNT_INSN_SIZE 6
+
+ #ifndef __ASSEMBLY__
++#include <asm/stacktrace.h>
+
+-unsigned long return_address(unsigned int n);
++static __always_inline unsigned long return_address(unsigned int n)
++{
++ struct stack_frame *sf;
++
++ if (!n)
++ return (unsigned long)__builtin_return_address(0);
++
++ sf = (struct stack_frame *)current_frame_address();
++ do {
++ sf = (struct stack_frame *)sf->back_chain;
++ if (!sf)
++ return 0;
++ } while (--n);
++ return sf->gprs[8];
++}
+ #define ftrace_return_address(n) return_address(n)
+
+ void ftrace_caller(void);
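+
+Making return_address() an inline back_chain walk is what lets the
+unwinder-based copy be deleted from stacktrace.c later in this patch;
+ftrace_return_address(n) now costs a few loads instead of a full unwind.
+Illustrative use:
+
+	/* return address one frame up the stack */
+	unsigned long addr = ftrace_return_address(1);
+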
+diff --git a/arch/s390/kernel/Makefile b/arch/s390/kernel/Makefile
+index e47a4be54ff8e2..a70f25e9c17da6 100644
+--- a/arch/s390/kernel/Makefile
++++ b/arch/s390/kernel/Makefile
+@@ -36,7 +36,7 @@ CFLAGS_stacktrace.o += -fno-optimize-sibling-calls
+ CFLAGS_dumpstack.o += -fno-optimize-sibling-calls
+ CFLAGS_unwind_bc.o += -fno-optimize-sibling-calls
+
+-obj-y := head64.o traps.o time.o process.o earlypgm.o early.o setup.o idle.o vtime.o
++obj-y := head64.o traps.o time.o process.o early.o setup.o idle.o vtime.o
+ obj-y += processor.o syscall.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o
+ obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o cpufeature.o
+ obj-y += sysinfo.o lgr.o os_info.o ctlreg.o
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index 14d324865e33fc..ee051ad81c7119 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -183,12 +183,15 @@ void __do_early_pgm_check(struct pt_regs *regs)
+
+ static noinline __init void setup_lowcore_early(void)
+ {
++ struct lowcore *lc = get_lowcore();
+ psw_t psw;
+
+ psw.addr = (unsigned long)early_pgm_check_handler;
+ psw.mask = PSW_KERNEL_BITS;
+- get_lowcore()->program_new_psw = psw;
+- get_lowcore()->preempt_count = INIT_PREEMPT_COUNT;
++ lc->program_new_psw = psw;
++ lc->preempt_count = INIT_PREEMPT_COUNT;
++ lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
++ lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
+ }
+
+ static __init void detect_diag9c(void)
+diff --git a/arch/s390/kernel/earlypgm.S b/arch/s390/kernel/earlypgm.S
+deleted file mode 100644
+index c634871f0d905b..00000000000000
+--- a/arch/s390/kernel/earlypgm.S
++++ /dev/null
+@@ -1,23 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * Copyright IBM Corp. 2006, 2007
+- * Author(s): Michael Holzheu <holzheu@de.ibm.com>
+- */
+-
+-#include <linux/linkage.h>
+-#include <asm/asm-offsets.h>
+-
+-SYM_CODE_START(early_pgm_check_handler)
+- stmg %r8,%r15,__LC_SAVE_AREA_SYNC
+- aghi %r15,-(STACK_FRAME_OVERHEAD+__PT_SIZE)
+- la %r11,STACK_FRAME_OVERHEAD(%r15)
+- xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+- stmg %r0,%r7,__PT_R0(%r11)
+- mvc __PT_PSW(16,%r11),__LC_PGM_OLD_PSW
+- mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
+- lgr %r2,%r11
+- brasl %r14,__do_early_pgm_check
+- mvc __LC_RETURN_PSW(16),STACK_FRAME_OVERHEAD+__PT_PSW(%r15)
+- lmg %r0,%r15,STACK_FRAME_OVERHEAD+__PT_R0(%r15)
+- lpswe __LC_RETURN_PSW
+-SYM_CODE_END(early_pgm_check_handler)
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 749410cfdbc078..6539ec4800cd1b 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -599,6 +599,22 @@ SYM_CODE_START(restart_int_handler)
+ 3: j 3b
+ SYM_CODE_END(restart_int_handler)
+
++SYM_CODE_START(early_pgm_check_handler)
++ STMG_LC %r8,%r15,__LC_SAVE_AREA_SYNC
++ GET_LC %r13
++ aghi %r15,-(STACK_FRAME_OVERHEAD+__PT_SIZE)
++ la %r11,STACK_FRAME_OVERHEAD(%r15)
++ xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
++ stmg %r0,%r7,__PT_R0(%r11)
++ mvc __PT_PSW(16,%r11),__LC_PGM_OLD_PSW(%r13)
++ mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC(%r13)
++ lgr %r2,%r11
++ brasl %r14,__do_early_pgm_check
++ mvc __LC_RETURN_PSW(16,%r13),STACK_FRAME_OVERHEAD+__PT_PSW(%r15)
++ lmg %r0,%r15,STACK_FRAME_OVERHEAD+__PT_R0(%r15)
++ LPSWEY __LC_RETURN_PSW,__LC_RETURN_LPSWE
++SYM_CODE_END(early_pgm_check_handler)
++
+ .section .kprobes.text, "ax"
+
+ #if defined(CONFIG_CHECK_STACK) || defined(CONFIG_VMAP_STACK)
+diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
+index 640363b2a1059e..9f59837d159e0c 100644
+--- a/arch/s390/kernel/stacktrace.c
++++ b/arch/s390/kernel/stacktrace.c
+@@ -162,22 +162,3 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+ {
+ arch_stack_walk_user_common(consume_entry, cookie, NULL, regs, false);
+ }
+-
+-unsigned long return_address(unsigned int n)
+-{
+- struct unwind_state state;
+- unsigned long addr;
+-
+- /* Increment to skip current stack entry */
+- n++;
+-
+- unwind_for_each_frame(&state, NULL, NULL, 0) {
+- addr = unwind_get_return_address(&state);
+- if (!addr)
+- break;
+- if (!n--)
+- return addr;
+- }
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(return_address);
+diff --git a/arch/um/Kconfig b/arch/um/Kconfig
+index dca84fd6d00a50..c89575d05021f1 100644
+--- a/arch/um/Kconfig
++++ b/arch/um/Kconfig
+@@ -11,7 +11,6 @@ config UML
+ select ARCH_HAS_KCOV
+ select ARCH_HAS_STRNCPY_FROM_USER
+ select ARCH_HAS_STRNLEN_USER
+- select ARCH_NO_PREEMPT_DYNAMIC
+ select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_ARCH_KASAN if X86_64
+ select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index da8b66dce0da5f..327c45c5013fea 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -16,6 +16,7 @@
+ #include <asm/insn-eval.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
++#include <asm/traps.h>
+
+ /* MMIO direction */
+ #define EPT_READ 0
+@@ -433,6 +434,11 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
+ return -EINVAL;
+ }
+
++ if (!fault_in_kernel_space(ve->gla)) {
++ WARN_ONCE(1, "Access to userspace address is not supported");
++ return -EINVAL;
++ }
++
+ /*
+ * Reject EPT violation #VEs that split pages.
+ *
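The new check rejects #VE MMIO emulation for guest linear addresses outside kernel space. fault_in_kernel_space() boils down to an address-range test against the user/kernel split; a rough userspace illustration of such a test (the boundary below is an invented stand-in, not the kernel's TASK_SIZE_MAX):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented boundary: on x86-64, user addresses lie below the kernel's
     * TASK_SIZE_MAX and kernel addresses at or above it. */
    #define FAKE_TASK_SIZE_MAX (1ULL << 47)

    static bool in_kernel_space(uint64_t addr)
    {
            return addr >= FAKE_TASK_SIZE_MAX;
    }

    int main(void)
    {
            printf("%d %d\n",
                   in_kernel_space(0x00007fffffffe000ULL),    /* user: 0 */
                   in_kernel_space(0xffff888000000000ULL));   /* kernel: 1 */
            return 0;
    }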
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index cd37de5ec40464..d63ba9eaba3e44 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -1366,6 +1366,8 @@ gcm_crypt(struct aead_request *req, int flags)
+ err = skcipher_walk_aead_encrypt(&walk, req, false);
+ else
+ err = skcipher_walk_aead_decrypt(&walk, req, false);
++ if (err)
++ return err;
+
+ /*
+ * Since the AES-GCM assembly code requires that at least three assembly
+@@ -1381,37 +1383,31 @@ gcm_crypt(struct aead_request *req, int flags)
+ gcm_process_assoc(key, ghash_acc, req->src, assoclen, flags);
+
+ /* En/decrypt the data and pass the ciphertext through GHASH. */
+- while ((nbytes = walk.nbytes) != 0) {
+- if (unlikely(nbytes < walk.total)) {
+- /*
+- * Non-last segment. In this case, the assembly
+- * function requires that the length be a multiple of 16
+- * (AES_BLOCK_SIZE) bytes. The needed buffering of up
+- * to 16 bytes is handled by the skcipher_walk. Here we
+- * just need to round down to a multiple of 16.
+- */
+- nbytes = round_down(nbytes, AES_BLOCK_SIZE);
+- aes_gcm_update(key, le_ctr, ghash_acc,
+- walk.src.virt.addr, walk.dst.virt.addr,
+- nbytes, flags);
+- le_ctr[0] += nbytes / AES_BLOCK_SIZE;
+- kernel_fpu_end();
+- err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+- kernel_fpu_begin();
+- } else {
+- /* Last segment: process all remaining data. */
+- aes_gcm_update(key, le_ctr, ghash_acc,
+- walk.src.virt.addr, walk.dst.virt.addr,
+- nbytes, flags);
+- err = skcipher_walk_done(&walk, 0);
+- /*
+- * The low word of the counter isn't used by the
+- * finalize, so there's no need to increment it here.
+- */
+- }
++ while (unlikely((nbytes = walk.nbytes) < walk.total)) {
++ /*
++ * Non-last segment. In this case, the assembly function
++ * requires that the length be a multiple of 16 (AES_BLOCK_SIZE)
++ * bytes. The needed buffering of up to 16 bytes is handled by
++ * the skcipher_walk. Here we just need to round down to a
++ * multiple of 16.
++ */
++ nbytes = round_down(nbytes, AES_BLOCK_SIZE);
++ aes_gcm_update(key, le_ctr, ghash_acc, walk.src.virt.addr,
++ walk.dst.virt.addr, nbytes, flags);
++ le_ctr[0] += nbytes / AES_BLOCK_SIZE;
++ kernel_fpu_end();
++ err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
++ if (err)
++ return err;
++ kernel_fpu_begin();
+ }
+- if (err)
+- goto out;
++ /* Last segment: process all remaining data. */
++ aes_gcm_update(key, le_ctr, ghash_acc, walk.src.virt.addr,
++ walk.dst.virt.addr, nbytes, flags);
++ /*
++ * The low word of the counter isn't used by the finalize, so there's no
++ * need to increment it here.
++ */
+
+ /* Finalize */
+ taglen = crypto_aead_authsize(tfm);
+@@ -1439,8 +1435,9 @@ gcm_crypt(struct aead_request *req, int flags)
+ datalen, tag, taglen, flags))
+ err = -EBADMSG;
+ }
+-out:
+ kernel_fpu_end();
++ if (nbytes)
++ skcipher_walk_done(&walk, 0);
+ return err;
+ }
+
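The restructured loop relies on a property of the skcipher walk: every segment except the last is trimmed to a whole number of AES blocks, and the cut-off tail is buffered into the next segment by skcipher_walk_done(). A small userspace sketch of that chunking discipline (segment sizes are invented; the explicit carry stands in for the walk's internal buffering):

    #include <stdio.h>
    #include <stddef.h>

    #define BLOCK_SIZE 16
    #define ROUND_DOWN(x, a) ((x) & ~((size_t)(a) - 1))

    static void walk(const size_t *segs, int nsegs)
    {
            size_t total = 0, done = 0, carry = 0;

            for (int i = 0; i < nsegs; i++)
                    total += segs[i];

            for (int i = 0; i < nsegs; i++) {
                    size_t nbytes = carry + segs[i];

                    if (done + nbytes < total) {    /* non-final segment */
                            size_t usable = ROUND_DOWN(nbytes, BLOCK_SIZE);

                            carry = nbytes - usable; /* deferred to next segment */
                            nbytes = usable;
                    } else {
                            carry = 0;      /* final segment: take everything */
                    }
                    done += nbytes;
                    printf("process %zu bytes\n", nbytes);
            }
    }

    int main(void)
    {
            const size_t segs[] = { 40, 23, 9 };    /* invented segment sizes */

            walk(segs, 3);                          /* prints 32, 16, 24 */
            return 0;
    }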
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 9e519d8a810a68..d879478db3f572 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ x86_pmu.pebs_aliases(event);
+ }
+
+- if (needs_branch_stack(event) && is_sampling_event(event))
+- event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
++ if (needs_branch_stack(event)) {
++ /* Avoid branch stack setup for counting events in SAMPLE READ */
++ if (is_sampling_event(event) ||
++ !(event->attr.sample_type & PERF_SAMPLE_READ))
++ event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
++ }
+
+ if (branch_sample_counters(event)) {
+ struct perf_event *leader, *sibling;
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index b4aa8daa47738f..2959970dd10eb1 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1606,6 +1606,7 @@ static void pt_event_stop(struct perf_event *event, int mode)
+ * see comment in intel_pt_interrupt().
+ */
+ WRITE_ONCE(pt->handle_nmi, 0);
++ barrier();
+
+ pt_config_stop(event);
+
+@@ -1657,11 +1658,10 @@ static long pt_event_snapshot_aux(struct perf_event *event,
+ return 0;
+
+ /*
+- * Here, handle_nmi tells us if the tracing is on
++ * There is no PT interrupt in this mode, so stop the trace and it will
++ * remain stopped while the buffer is copied.
+ */
+- if (READ_ONCE(pt->handle_nmi))
+- pt_config_stop(event);
+-
++ pt_config_stop(event);
+ pt_read_offset(buf);
+ pt_update_head(pt);
+
+@@ -1673,11 +1673,10 @@ static long pt_event_snapshot_aux(struct perf_event *event,
+ ret = perf_output_copy_aux(&pt->handle, handle, from, to);
+
+ /*
+- * If the tracing was on when we turned up, restart it.
+- * Compiler barrier not needed as we couldn't have been
+- * preempted by anything that touches pt->handle_nmi.
++ * Here, handle_nmi tells us if the tracing was on.
++ * If the tracing was on, restart it.
+ */
+- if (pt->handle_nmi)
++ if (READ_ONCE(pt->handle_nmi))
+ pt_config_start(event);
+
+ return ret;
+diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
+index 21bc53f5ed0c81..5ab1a4598d00bc 100644
+--- a/arch/x86/include/asm/acpi.h
++++ b/arch/x86/include/asm/acpi.h
+@@ -174,6 +174,14 @@ void acpi_generic_reduced_hw_init(void);
+ void x86_default_set_root_pointer(u64 addr);
+ u64 x86_default_get_root_pointer(void);
+
++#ifdef CONFIG_XEN_PV
++/* A Xen PV domain needs special acpi_os_ioremap() handling. */
++extern void __iomem * (*acpi_os_ioremap)(acpi_physical_address phys,
++ acpi_size size);
++void __iomem *x86_acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
++#define acpi_os_ioremap acpi_os_ioremap
++#endif
++
+ #else /* !CONFIG_ACPI */
+
+ #define acpi_lapic 0
+diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
+index c67fa6ad098aae..6ffa8b75f4cd33 100644
+--- a/arch/x86/include/asm/hardirq.h
++++ b/arch/x86/include/asm/hardirq.h
+@@ -69,7 +69,11 @@ extern u64 arch_irq_stat(void);
+ #define local_softirq_pending_ref pcpu_hot.softirq_pending
+
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+-static inline void kvm_set_cpu_l1tf_flush_l1d(void)
++/*
++ * This function is called from noinstr interrupt contexts
++ * and must be inlined so that it does not get instrumented.
++ */
++static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void)
+ {
+ __this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
+ }
+@@ -84,7 +88,7 @@ static __always_inline bool kvm_get_cpu_l1tf_flush_l1d(void)
+ return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
+ }
+ #else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
+-static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
++static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
+ #endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
+
+ #endif /* _ASM_X86_HARDIRQ_H */
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index d4f24499b256c8..ad5c68f0509d4d 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -212,8 +212,8 @@ __visible noinstr void func(struct pt_regs *regs, \
+ irqentry_state_t state = irqentry_enter(regs); \
+ u32 vector = (u32)(u8)error_code; \
+ \
++ kvm_set_cpu_l1tf_flush_l1d(); \
+ instrumentation_begin(); \
+- kvm_set_cpu_l1tf_flush_l1d(); \
+ run_irq_on_irqstack_cond(__##func, regs, vector); \
+ instrumentation_end(); \
+ irqentry_exit(regs, state); \
+@@ -250,7 +250,6 @@ static void __##func(struct pt_regs *regs); \
+ \
+ static __always_inline void instr_##func(struct pt_regs *regs) \
+ { \
+- kvm_set_cpu_l1tf_flush_l1d(); \
+ run_sysvec_on_irqstack_cond(__##func, regs); \
+ } \
+ \
+@@ -258,6 +257,7 @@ __visible noinstr void func(struct pt_regs *regs) \
+ { \
+ irqentry_state_t state = irqentry_enter(regs); \
+ \
++ kvm_set_cpu_l1tf_flush_l1d(); \
+ instrumentation_begin(); \
+ instr_##func (regs); \
+ instrumentation_end(); \
+@@ -288,7 +288,6 @@ static __always_inline void __##func(struct pt_regs *regs); \
+ static __always_inline void instr_##func(struct pt_regs *regs) \
+ { \
+ __irq_enter_raw(); \
+- kvm_set_cpu_l1tf_flush_l1d(); \
+ __##func (regs); \
+ __irq_exit_raw(); \
+ } \
+@@ -297,6 +296,7 @@ __visible noinstr void func(struct pt_regs *regs) \
+ { \
+ irqentry_state_t state = irqentry_enter(regs); \
+ \
++ kvm_set_cpu_l1tf_flush_l1d(); \
+ instrumentation_begin(); \
+ instr_##func (regs); \
+ instrumentation_end(); \
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 4a68cb3eba78f8..b4bcd5108079f0 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1727,6 +1727,8 @@ struct kvm_x86_ops {
+ void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
+ void (*enable_irq_window)(struct kvm_vcpu *vcpu);
+ void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
++
++ const bool x2apic_icr_is_split;
+ const unsigned long required_apicv_inhibits;
+ bool allow_apicv_in_x2apic_without_x2apic_virtualization;
+ void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 9f4618dcd70465..4efecac49863ec 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1778,3 +1778,14 @@ u64 x86_default_get_root_pointer(void)
+ {
+ return boot_params.acpi_rsdp_addr;
+ }
++
++#ifdef CONFIG_XEN_PV
++void __iomem *x86_acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
++{
++ return ioremap_cache(phys, size);
++}
++
++void __iomem * (*acpi_os_ioremap)(acpi_physical_address phys, acpi_size size) =
++ x86_acpi_os_ioremap;
++EXPORT_SYMBOL_GPL(acpi_os_ioremap);
++#endif
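The pattern above is a function pointer preloaded with a default implementation that a platform can swap out at runtime, which is how the Xen PV code further down installs its own acpi_os_ioremap() variant. A toy version of the hook mechanism, with invented names:

    #include <stdio.h>

    /* Default implementation and an overridable hook, mirroring how
     * acpi_os_ioremap defaults to x86_acpi_os_ioremap(). */
    static void *default_ioremap(unsigned long phys, unsigned long size)
    {
            printf("default map %#lx+%#lx\n", phys, size);
            return NULL;
    }

    static void *(*ioremap_hook)(unsigned long, unsigned long) = default_ioremap;

    static void *xen_ioremap(unsigned long phys, unsigned long size)
    {
            printf("xen map %#lx+%#lx\n", phys, size);
            return NULL;
    }

    int main(void)
    {
            ioremap_hook(0x1000, 0x100);    /* default path */
            ioremap_hook = xen_ioremap;     /* platform override at runtime */
            ioremap_hook(0x1000, 0x100);
            return 0;
    }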
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index 27892e57c4ef93..6aeeb43dd3f6a8 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -475,24 +475,25 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
+ {
+ struct sgx_epc_page *page;
+ int nid_of_current = numa_node_id();
+- int nid = nid_of_current;
++ int nid_start, nid;
+
+- if (node_isset(nid_of_current, sgx_numa_mask)) {
+- page = __sgx_alloc_epc_page_from_node(nid_of_current);
+- if (page)
+- return page;
+- }
+-
+- /* Fall back to the non-local NUMA nodes: */
+- while (true) {
+- nid = next_node_in(nid, sgx_numa_mask);
+- if (nid == nid_of_current)
+- break;
++ /*
++ * Try local node first. If it doesn't have an EPC section,
++ * fall back to the non-local NUMA nodes.
++ */
++ if (node_isset(nid_of_current, sgx_numa_mask))
++ nid_start = nid_of_current;
++ else
++ nid_start = next_node_in(nid_of_current, sgx_numa_mask);
+
++ nid = nid_start;
++ do {
+ page = __sgx_alloc_epc_page_from_node(nid);
+ if (page)
+ return page;
+- }
++
++ nid = next_node_in(nid, sgx_numa_mask);
++ } while (nid != nid_start);
+
+ return ERR_PTR(-ENOMEM);
+ }
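The do/while rewrite guarantees every node in the mask is tried exactly once: iteration starts at the local node when it has an EPC section, otherwise at its successor, and stops when it wraps back to the start. A compact userspace model of that round-robin fallback (the mask helpers and try_alloc() are simplified stand-ins for the kernel's nodemask API):

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_NODES 8

    static bool node_isset(int nid, unsigned int mask)
    {
            return mask & (1u << nid);
    }

    static int next_node_in(int nid, unsigned int mask)
    {
            for (int i = 1; i <= NR_NODES; i++) {
                    int n = (nid + i) % NR_NODES;

                    if (node_isset(n, mask))
                            return n;
            }
            return -1;      /* empty mask */
    }

    static bool try_alloc(int nid)
    {
            return nid == 5;        /* pretend only node 5 has free pages */
    }

    static int alloc_round_robin(int local, unsigned int mask)
    {
            int start, nid;

            /* Prefer the local node; otherwise start at its successor. */
            start = node_isset(local, mask) ? local : next_node_in(local, mask);
            if (start < 0)
                    return -1;

            nid = start;
            do {
                    if (try_alloc(nid))
                            return nid;
                    nid = next_node_in(nid, mask);
            } while (nid != start);

            return -1;      /* every node in the mask was tried once */
    }

    int main(void)
    {
            printf("allocated on node %d\n", alloc_round_robin(4, 0x2c));
            return 0;
    }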
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index a817ed0724d1e7..4b9d4557fc94a4 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -559,10 +559,11 @@ void early_setup_idt(void)
+ */
+ void __head startup_64_setup_gdt_idt(void)
+ {
++ struct desc_struct *gdt = (void *)(__force unsigned long)init_per_cpu_var(gdt_page.gdt);
+ void *handler = NULL;
+
+ struct desc_ptr startup_gdt_descr = {
+- .address = (unsigned long)&RIP_REL_REF(init_per_cpu_var(gdt_page.gdt)),
++ .address = (unsigned long)&RIP_REL_REF(*gdt),
+ .size = GDT_SIZE - 1,
+ };
+
+diff --git a/arch/x86/kernel/jailhouse.c b/arch/x86/kernel/jailhouse.c
+index df337860612d84..cd8ed1edbf9ee7 100644
+--- a/arch/x86/kernel/jailhouse.c
++++ b/arch/x86/kernel/jailhouse.c
+@@ -12,6 +12,7 @@
+ #include <linux/kernel.h>
+ #include <linux/reboot.h>
+ #include <linux/serial_8250.h>
++#include <linux/acpi.h>
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+ #include <asm/acpi.h>
+diff --git a/arch/x86/kernel/mmconf-fam10h_64.c b/arch/x86/kernel/mmconf-fam10h_64.c
+index c94dec6a18345a..1f54eedc3015e9 100644
+--- a/arch/x86/kernel/mmconf-fam10h_64.c
++++ b/arch/x86/kernel/mmconf-fam10h_64.c
+@@ -9,6 +9,7 @@
+ #include <linux/pci.h>
+ #include <linux/dmi.h>
+ #include <linux/range.h>
++#include <linux/acpi.h>
+
+ #include <asm/pci-direct.h>
+ #include <linux/sort.h>
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 6d3d20e3e43a9b..d8d582b750d4f6 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -798,6 +798,27 @@ static long prctl_map_vdso(const struct vdso_image *image, unsigned long addr)
+
+ #define LAM_U57_BITS 6
+
++static void enable_lam_func(void *__mm)
++{
++ struct mm_struct *mm = __mm;
++
++ if (this_cpu_read(cpu_tlbstate.loaded_mm) == mm) {
++ write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
++ set_tlbstate_lam_mode(mm);
++ }
++}
++
++static void mm_enable_lam(struct mm_struct *mm)
++{
++ /*
++ * Even though the process must still be single-threaded at this
++ * point, kernel threads may be using the mm. IPI those kernel
++ * threads if they exist.
++ */
++ on_each_cpu_mask(mm_cpumask(mm), enable_lam_func, mm, true);
++ set_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags);
++}
++
+ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
+ {
+ if (!cpu_feature_enabled(X86_FEATURE_LAM))
+@@ -814,6 +835,10 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
+ if (mmap_write_lock_killable(mm))
+ return -EINTR;
+
++ /*
++ * MM_CONTEXT_LOCK_LAM is set on clone. Prevent LAM from
++ * being enabled unless the process is single-threaded:
++ */
+ if (test_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags)) {
+ mmap_write_unlock(mm);
+ return -EBUSY;
+@@ -830,9 +855,7 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
+ return -EINVAL;
+ }
+
+- write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
+- set_tlbstate_lam_mode(mm);
+- set_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags);
++ mm_enable_lam(mm);
+
+ mmap_write_unlock(mm);
+
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 0c35207320cb4f..390e4fe7433ea3 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -60,6 +60,7 @@
+ #include <linux/stackprotector.h>
+ #include <linux/cpuhotplug.h>
+ #include <linux/mc146818rtc.h>
++#include <linux/acpi.h>
+
+ #include <asm/acpi.h>
+ #include <asm/cacheinfo.h>
+diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
+index 82b128d3f30970..0a2bbd674a6d99 100644
+--- a/arch/x86/kernel/x86_init.c
++++ b/arch/x86/kernel/x86_init.c
+@@ -8,6 +8,7 @@
+ #include <linux/ioport.h>
+ #include <linux/export.h>
+ #include <linux/pci.h>
++#include <linux/acpi.h>
+
+ #include <asm/acpi.h>
+ #include <asm/bios_ebda.h>
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 5bb481aefcbcdc..b1269e2bb76161 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2453,6 +2453,43 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
+
++#define X2APIC_ICR_RESERVED_BITS (GENMASK_ULL(31, 20) | GENMASK_ULL(17, 16) | BIT(13))
++
++int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
++{
++ if (data & X2APIC_ICR_RESERVED_BITS)
++ return 1;
++
++ /*
++ * The BUSY bit is reserved on both Intel and AMD in x2APIC mode, but
++ * only AMD requires it to be zero; Intel essentially just ignores the
++ * bit. And if IPI virtualization (Intel) or x2AVIC (AMD) is enabled,
++ * the CPU performs the reserved bits checks, i.e. the underlying CPU
++ * behavior will "win". Arbitrarily clear the BUSY bit, as there is no
++ * sane way to provide consistent behavior with respect to hardware.
++ */
++ data &= ~APIC_ICR_BUSY;
++
++ kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
++ if (kvm_x86_ops.x2apic_icr_is_split) {
++ kvm_lapic_set_reg(apic, APIC_ICR, data);
++ kvm_lapic_set_reg(apic, APIC_ICR2, data >> 32);
++ } else {
++ kvm_lapic_set_reg64(apic, APIC_ICR, data);
++ }
++ trace_kvm_apic_write(APIC_ICR, data);
++ return 0;
++}
++
++static u64 kvm_x2apic_icr_read(struct kvm_lapic *apic)
++{
++ if (kvm_x86_ops.x2apic_icr_is_split)
++ return (u64)kvm_lapic_get_reg(apic, APIC_ICR) |
++ (u64)kvm_lapic_get_reg(apic, APIC_ICR2) << 32;
++
++ return kvm_lapic_get_reg64(apic, APIC_ICR);
++}
++
+ /* emulate APIC access in a trap manner */
+ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
+ {
+@@ -2470,7 +2507,7 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
+ * maybe-unnecessary write, and both are in the noise anyway.
+ */
+ if (apic_x2apic_mode(apic) && offset == APIC_ICR)
+- kvm_x2apic_icr_write(apic, kvm_lapic_get_reg64(apic, APIC_ICR));
++ WARN_ON_ONCE(kvm_x2apic_icr_write(apic, kvm_x2apic_icr_read(apic)));
+ else
+ kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset));
+ }
+@@ -2990,18 +3027,22 @@ static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
+
+ /*
+ * In x2APIC mode, the LDR is fixed and based on the id. And
+- * ICR is internally a single 64-bit register, but needs to be
+- * split to ICR+ICR2 in userspace for backwards compatibility.
++ * if the ICR is _not_ split, ICR is internally a single 64-bit
++ * register, but needs to be split to ICR+ICR2 in userspace for
++ * backwards compatibility.
+ */
+- if (set) {
++ if (set)
+ *ldr = kvm_apic_calc_x2apic_ldr(x2apic_id);
+
+- icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
+- (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
+- __kvm_lapic_set_reg64(s->regs, APIC_ICR, icr);
+- } else {
+- icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR);
+- __kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32);
++ if (!kvm_x86_ops.x2apic_icr_is_split) {
++ if (set) {
++ icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
++ (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
++ __kvm_lapic_set_reg64(s->regs, APIC_ICR, icr);
++ } else {
++ icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR);
++ __kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32);
++ }
+ }
+ }
+
+@@ -3194,22 +3235,12 @@ int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
+ return 0;
+ }
+
+-int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
+-{
+- data &= ~APIC_ICR_BUSY;
+-
+- kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
+- kvm_lapic_set_reg64(apic, APIC_ICR, data);
+- trace_kvm_apic_write(APIC_ICR, data);
+- return 0;
+-}
+-
+ static int kvm_lapic_msr_read(struct kvm_lapic *apic, u32 reg, u64 *data)
+ {
+ u32 low;
+
+ if (reg == APIC_ICR) {
+- *data = kvm_lapic_get_reg64(apic, APIC_ICR);
++ *data = kvm_x2apic_icr_read(apic);
+ return 0;
+ }
+
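kvm_x2apic_icr_read() mirrors the write path: with a split layout the 64-bit value is reassembled from ICR (low half) and ICR2 (high half); otherwise it is read back as a single 64-bit register. A self-contained sketch of the two storage layouts (toy register file, not KVM's actual structures):

    #include <stdint.h>
    #include <stdio.h>

    /* The x2APIC ICR either lives as one 64-bit register or split across
     * ICR (low 32 bits) and ICR2 (high 32 bits), as the new
     * x2apic_icr_is_split flag selects. */
    struct regs {
            uint32_t icr, icr2;     /* split layout */
            uint64_t icr64;         /* single-register layout */
            int is_split;
    };

    static void icr_write(struct regs *r, uint64_t data)
    {
            if (r->is_split) {
                    r->icr  = (uint32_t)data;
                    r->icr2 = (uint32_t)(data >> 32);
            } else {
                    r->icr64 = data;
            }
    }

    static uint64_t icr_read(const struct regs *r)
    {
            if (r->is_split)
                    return (uint64_t)r->icr | ((uint64_t)r->icr2 << 32);
            return r->icr64;
    }

    int main(void)
    {
            struct regs r = { .is_split = 1 };

            icr_write(&r, 0x00000002000000fdULL);
            printf("%llx\n", (unsigned long long)icr_read(&r)); /* 2000000fd */
            return 0;
    }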
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 5ab2c92c7331de..22513133925e0c 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -5062,6 +5062,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ .enable_nmi_window = svm_enable_nmi_window,
+ .enable_irq_window = svm_enable_irq_window,
+ .update_cr8_intercept = svm_update_cr8_intercept,
++
++ .x2apic_icr_is_split = true,
+ .set_virtual_apic_mode = avic_refresh_virtual_apic_mode,
+ .refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
+ .apicv_post_state_restore = avic_apicv_post_state_restore,
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index 0bf35ebe8a1bdb..a70699665e1199 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -89,6 +89,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ .enable_nmi_window = vmx_enable_nmi_window,
+ .enable_irq_window = vmx_enable_irq_window,
+ .update_cr8_intercept = vmx_update_cr8_intercept,
++
++ .x2apic_icr_is_split = false,
+ .set_virtual_apic_mode = vmx_set_virtual_apic_mode,
+ .set_apic_access_page_addr = vmx_set_apic_access_page_addr,
+ .refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 44ac64f3a047ca..a041d2ecd83804 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -503,9 +503,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+ {
+ struct mm_struct *prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+ u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+- unsigned long new_lam = mm_lam_cr3_mask(next);
+ bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
+ unsigned cpu = smp_processor_id();
++ unsigned long new_lam;
+ u64 next_tlb_gen;
+ bool need_flush;
+ u16 new_asid;
+@@ -619,9 +619,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+ cpumask_clear_cpu(cpu, mm_cpumask(prev));
+ }
+
+- /*
+- * Start remote flushes and then read tlb_gen.
+- */
++ /* Start receiving IPIs and then read tlb_gen (and LAM below) */
+ if (next != &init_mm)
+ cpumask_set_cpu(cpu, mm_cpumask(next));
+ next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+@@ -633,6 +631,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+ barrier();
+ }
+
++ new_lam = mm_lam_cr3_mask(next);
+ set_tlbstate_lam_mode(next);
+ if (need_flush) {
+ this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index d25d81c8ecc009..074b41fafbe3ff 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -273,7 +273,7 @@ struct jit_context {
+ /* Number of bytes emit_patch() needs to generate instructions */
+ #define X86_PATCH_SIZE 5
+ /* Number of bytes that will be skipped on tailcall */
+-#define X86_TAIL_CALL_OFFSET (11 + ENDBR_INSN_SIZE)
++#define X86_TAIL_CALL_OFFSET (12 + ENDBR_INSN_SIZE)
+
+ static void push_r12(u8 **pprog)
+ {
+@@ -403,6 +403,37 @@ static void emit_cfi(u8 **pprog, u32 hash)
+ *pprog = prog;
+ }
+
++static void emit_prologue_tail_call(u8 **pprog, bool is_subprog)
++{
++ u8 *prog = *pprog;
++
++ if (!is_subprog) {
++ /* cmp rax, MAX_TAIL_CALL_CNT */
++ EMIT4(0x48, 0x83, 0xF8, MAX_TAIL_CALL_CNT);
++ EMIT2(X86_JA, 6); /* ja 6 */
++ /* rax is tail_call_cnt if <= MAX_TAIL_CALL_CNT.
++ * case1: entry of main prog.
++ * case2: tail callee of main prog.
++ */
++ EMIT1(0x50); /* push rax */
++ /* Make rax as tail_call_cnt_ptr. */
++ EMIT3(0x48, 0x89, 0xE0); /* mov rax, rsp */
++ EMIT2(0xEB, 1); /* jmp 1 */
++ /* rax is tail_call_cnt_ptr if > MAX_TAIL_CALL_CNT.
++ * case: tail callee of subprog.
++ */
++ EMIT1(0x50); /* push rax */
++ /* push tail_call_cnt_ptr */
++ EMIT1(0x50); /* push rax */
++ } else { /* is_subprog */
++ /* rax is tail_call_cnt_ptr. */
++ EMIT1(0x50); /* push rax */
++ EMIT1(0x50); /* push rax */
++ }
++
++ *pprog = prog;
++}
++
+ /*
+ * Emit x86-64 prologue code for BPF program.
+ * bpf_tail_call helper will skip the first X86_TAIL_CALL_OFFSET bytes
+@@ -424,10 +455,10 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
+ /* When it's the entry of the whole tailcall context,
+ * zeroing rax means initialising tail_call_cnt.
+ */
+- EMIT2(0x31, 0xC0); /* xor eax, eax */
++ EMIT3(0x48, 0x31, 0xC0); /* xor rax, rax */
+ else
+ /* Keep the same instruction layout. */
+- EMIT2(0x66, 0x90); /* nop2 */
++ emit_nops(&prog, 3); /* nop3 */
+ }
+ /* Exception callback receives FP as third parameter */
+ if (is_exception_cb) {
+@@ -453,7 +484,7 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
+ if (stack_depth)
+ EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
+ if (tail_call_reachable)
+- EMIT1(0x50); /* push rax */
++ emit_prologue_tail_call(&prog, is_subprog);
+ *pprog = prog;
+ }
+
+@@ -589,13 +620,15 @@ static void emit_return(u8 **pprog, u8 *ip)
+ *pprog = prog;
+ }
+
++#define BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack) (-16 - round_up(stack, 8))
++
+ /*
+ * Generate the following code:
+ *
+ * ... bpf_tail_call(void *ctx, struct bpf_array *array, u64 index) ...
+ * if (index >= array->map.max_entries)
+ * goto out;
+- * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
++ * if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
+ * goto out;
+ * prog = array->ptrs[index];
+ * if (prog == NULL)
+@@ -608,7 +641,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ u32 stack_depth, u8 *ip,
+ struct jit_context *ctx)
+ {
+- int tcc_off = -4 - round_up(stack_depth, 8);
++ int tcc_ptr_off = BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack_depth);
+ u8 *prog = *pprog, *start = *pprog;
+ int offset;
+
+@@ -630,16 +663,14 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ EMIT2(X86_JBE, offset); /* jbe out */
+
+ /*
+- * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
++ * if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
+ * goto out;
+ */
+- EMIT2_off32(0x8B, 0x85, tcc_off); /* mov eax, dword ptr [rbp - tcc_off] */
+- EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */
++ EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
++ EMIT4(0x48, 0x83, 0x38, MAX_TAIL_CALL_CNT); /* cmp qword ptr [rax], MAX_TAIL_CALL_CNT */
+
+ offset = ctx->tail_call_indirect_label - (prog + 2 - start);
+ EMIT2(X86_JAE, offset); /* jae out */
+- EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */
+- EMIT2_off32(0x89, 0x85, tcc_off); /* mov dword ptr [rbp - tcc_off], eax */
+
+ /* prog = array->ptrs[index]; */
+ EMIT4_off32(0x48, 0x8B, 0x8C, 0xD6, /* mov rcx, [rsi + rdx * 8 + offsetof(...)] */
+@@ -654,6 +685,9 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ offset = ctx->tail_call_indirect_label - (prog + 2 - start);
+ EMIT2(X86_JE, offset); /* je out */
+
++ /* Inc tail_call_cnt if the slot is populated. */
++ EMIT4(0x48, 0x83, 0x00, 0x01); /* add qword ptr [rax], 1 */
++
+ if (bpf_prog->aux->exception_boundary) {
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+@@ -663,6 +697,11 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ pop_r12(&prog);
+ }
+
++ /* Pop tail_call_cnt_ptr. */
++ EMIT1(0x58); /* pop rax */
++ /* Pop tail_call_cnt, if it's main prog.
++ * Pop tail_call_cnt_ptr, if it's subprog.
++ */
+ EMIT1(0x58); /* pop rax */
+ if (stack_depth)
+ EMIT3_off32(0x48, 0x81, 0xC4, /* add rsp, sd */
+@@ -691,21 +730,19 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ bool *callee_regs_used, u32 stack_depth,
+ struct jit_context *ctx)
+ {
+- int tcc_off = -4 - round_up(stack_depth, 8);
++ int tcc_ptr_off = BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack_depth);
+ u8 *prog = *pprog, *start = *pprog;
+ int offset;
+
+ /*
+- * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
++ * if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
+ * goto out;
+ */
+- EMIT2_off32(0x8B, 0x85, tcc_off); /* mov eax, dword ptr [rbp - tcc_off] */
+- EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */
++ EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
++ EMIT4(0x48, 0x83, 0x38, MAX_TAIL_CALL_CNT); /* cmp qword ptr [rax], MAX_TAIL_CALL_CNT */
+
+ offset = ctx->tail_call_direct_label - (prog + 2 - start);
+ EMIT2(X86_JAE, offset); /* jae out */
+- EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */
+- EMIT2_off32(0x89, 0x85, tcc_off); /* mov dword ptr [rbp - tcc_off], eax */
+
+ poke->tailcall_bypass = ip + (prog - start);
+ poke->adj_off = X86_TAIL_CALL_OFFSET;
+@@ -715,6 +752,9 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE,
+ poke->tailcall_bypass);
+
++ /* Inc tail_call_cnt if the slot is populated. */
++ EMIT4(0x48, 0x83, 0x00, 0x01); /* add qword ptr [rax], 1 */
++
+ if (bpf_prog->aux->exception_boundary) {
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+@@ -724,6 +764,11 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ pop_r12(&prog);
+ }
+
++ /* Pop tail_call_cnt_ptr. */
++ EMIT1(0x58); /* pop rax */
++ /* Pop tail_call_cnt, if it's main prog.
++ * Pop tail_call_cnt_ptr, if it's subprog.
++ */
+ EMIT1(0x58); /* pop rax */
+ if (stack_depth)
+ EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
+@@ -1311,9 +1356,11 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
+
+ #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
+
+-/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
+-#define RESTORE_TAIL_CALL_CNT(stack) \
+- EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
++#define __LOAD_TCC_PTR(off) \
++ EMIT3_off32(0x48, 0x8B, 0x85, off)
++/* mov rax, qword ptr [rbp - rounded_stack_depth - 16] */
++#define LOAD_TAIL_CALL_CNT_PTR(stack) \
++ __LOAD_TCC_PTR(BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack))
+
+ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
+ int oldproglen, struct jit_context *ctx, bool jmp_padding)
+@@ -2031,7 +2078,7 @@ st: if (is_imm8(insn->off))
+
+ func = (u8 *) __bpf_call_base + imm32;
+ if (tail_call_reachable) {
+- RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
++ LOAD_TAIL_CALL_CNT_PTR(bpf_prog->aux->stack_depth);
+ ip += 7;
+ }
+ if (!imm32)
+@@ -2706,6 +2753,10 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
+ return 0;
+ }
+
++/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
++#define LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack) \
++ __LOAD_TCC_PTR(-round_up(stack, 8) - 8)
++
+ /* Example:
+ * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
+ * its 'struct btf_func_model' will be nr_args=2
+@@ -2826,7 +2877,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ * [ ... ]
+ * [ stack_arg2 ]
+ * RBP - arg_stack_off [ stack_arg1 ]
+- * RSP [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
++ * RSP [ tail_call_cnt_ptr ] BPF_TRAMP_F_TAIL_CALL_CTX
+ */
+
+ /* room for return value of orig_call or fentry prog */
+@@ -2955,10 +3006,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ save_args(m, &prog, arg_stack_off, true);
+
+ if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
+- /* Before calling the original function, restore the
+- * tail_call_cnt from stack to rax.
++ /* Before calling the original function, load the
++ * tail_call_cnt_ptr from stack to rax.
+ */
+- RESTORE_TAIL_CALL_CNT(stack_size);
++ LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack_size);
+ }
+
+ if (flags & BPF_TRAMP_F_ORIG_STACK) {
+@@ -3017,10 +3068,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ goto cleanup;
+ }
+ } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
+- /* Before running the original function, restore the
+- * tail_call_cnt from stack to rax.
++ /* Before running the original function, load the
++ * tail_call_cnt_ptr from stack to rax.
+ */
+- RESTORE_TAIL_CALL_CNT(stack_size);
++ LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack_size);
+ }
+
+ /* restore return value of orig_call or fentry prog back into RAX */
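The net effect of the JIT changes is that the tail-call counter is stored once in the entry program's frame and every callee receives a pointer to it, so MAX_TAIL_CALL_CNT bounds the whole chain rather than each frame. A toy C model of that shared-counter idea (plain recursion standing in for the emitted tail calls):

    #include <stdio.h>

    #define MAX_TAIL_CALL_CNT 33

    /* Invented model: the entry frame owns tail_call_cnt; every "tail call"
     * receives a pointer to it, so the limit spans the whole chain. */
    static int tail_call(int depth, unsigned long *tcc_ptr)
    {
            if (*tcc_ptr >= MAX_TAIL_CALL_CNT)
                    return depth;                   /* out: limit reached */
            (*tcc_ptr)++;                           /* bump the shared counter */
            return tail_call(depth + 1, tcc_ptr);   /* chain the next program */
    }

    int main(void)
    {
            unsigned long tail_call_cnt = 0;        /* lives in the entry frame */

            tail_call(0, &tail_call_cnt);
            printf("stopped after %lu tail calls\n", tail_call_cnt);
            return 0;
    }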
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index b33afb240601b0..98a9bb92d75c88 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -980,7 +980,7 @@ static void amd_rp_pme_suspend(struct pci_dev *dev)
+ return;
+
+ rp = pcie_find_root_port(dev);
+- if (!rp->pm_cap)
++ if (!rp || !rp->pm_cap)
+ return;
+
+ rp->pme_support &= ~((PCI_PM_CAP_PME_D3hot|PCI_PM_CAP_PME_D3cold) >>
+@@ -994,7 +994,7 @@ static void amd_rp_pme_resume(struct pci_dev *dev)
+ u16 pmc;
+
+ rp = pcie_find_root_port(dev);
+- if (!rp->pm_cap)
++ if (!rp || !rp->pm_cap)
+ return;
+
+ pci_read_config_word(rp, rp->pm_cap + PCI_PM_PMC, &pmc);
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index f1ce39d6d32cbb..839e6613753dd7 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -2018,10 +2018,7 @@ void __init xen_reserve_special_pages(void)
+
+ void __init xen_pt_check_e820(void)
+ {
+- if (xen_is_e820_reserved(xen_pt_base, xen_pt_size)) {
+- xen_raw_console_write("Xen hypervisor allocated page table memory conflicts with E820 map\n");
+- BUG();
+- }
++ xen_chk_is_e820_usable(xen_pt_base, xen_pt_size, "page table");
+ }
+
+ static unsigned char dummy_mapping[PAGE_SIZE] __page_aligned_bss;
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index 7c735b730acd25..b52d3e17e2c152 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -70,6 +70,7 @@
+ #include <linux/memblock.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
++#include <linux/acpi.h>
+
+ #include <asm/cache.h>
+ #include <asm/setup.h>
+@@ -80,6 +81,7 @@
+ #include <asm/xen/hypervisor.h>
+ #include <xen/balloon.h>
+ #include <xen/grant_table.h>
++#include <xen/hvc-console.h>
+
+ #include "xen-ops.h"
+
+@@ -792,6 +794,102 @@ int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+ return ret;
+ }
+
++/* Remapped non-RAM areas */
++#define NR_NONRAM_REMAP 4
++static struct nonram_remap {
++ phys_addr_t maddr;
++ phys_addr_t paddr;
++ size_t size;
++} xen_nonram_remap[NR_NONRAM_REMAP] __ro_after_init;
++static unsigned int nr_nonram_remap __ro_after_init;
++
++/*
++ * Do the real remapping of non-RAM regions as specified in the
++ * xen_nonram_remap[] array.
++ * In case of an error just crash the system.
++ */
++void __init xen_do_remap_nonram(void)
++{
++ unsigned int i;
++ unsigned int remapped = 0;
++ const struct nonram_remap *remap = xen_nonram_remap;
++ unsigned long pfn, mfn, end_pfn;
++
++ for (i = 0; i < nr_nonram_remap; i++) {
++ end_pfn = PFN_UP(remap->paddr + remap->size);
++ pfn = PFN_DOWN(remap->paddr);
++ mfn = PFN_DOWN(remap->maddr);
++ while (pfn < end_pfn) {
++ if (!set_phys_to_machine(pfn, mfn))
++ panic("Failed to set p2m mapping for pfn=%lx mfn=%lx\n",
++ pfn, mfn);
++
++ pfn++;
++ mfn++;
++ remapped++;
++ }
++
++ remap++;
++ }
++
++ pr_info("Remapped %u non-RAM page(s)\n", remapped);
++}
++
++#ifdef CONFIG_ACPI
++/*
++ * Xen variant of acpi_os_ioremap() taking potentially remapped non-RAM
++ * regions into account.
++ * Any attempt to map an area crossing a remap boundary will produce a
++ * WARN() splat.
++ * phys is related to remap->maddr on input and will be rebased to remap->paddr.
++ */
++static void __iomem *xen_acpi_os_ioremap(acpi_physical_address phys,
++ acpi_size size)
++{
++ unsigned int i;
++ const struct nonram_remap *remap = xen_nonram_remap;
++
++ for (i = 0; i < nr_nonram_remap; i++) {
++ if (phys + size > remap->maddr &&
++ phys < remap->maddr + remap->size) {
++ WARN_ON(phys < remap->maddr ||
++ phys + size > remap->maddr + remap->size);
++ phys += remap->paddr - remap->maddr;
++ break;
++ }
++ }
++
++ return x86_acpi_os_ioremap(phys, size);
++}
++#endif /* CONFIG_ACPI */
++
++/*
++ * Add a new non-RAM remap entry.
++ * If no free entry is found, just crash the system.
++ */
++void __init xen_add_remap_nonram(phys_addr_t maddr, phys_addr_t paddr,
++ unsigned long size)
++{
++ BUG_ON((maddr & ~PAGE_MASK) != (paddr & ~PAGE_MASK));
++
++ if (nr_nonram_remap == NR_NONRAM_REMAP) {
++ xen_raw_console_write("Number of required E820 entry remapping actions exceeds maximum value\n");
++ BUG();
++ }
++
++#ifdef CONFIG_ACPI
++ /* Switch to the Xen acpi_os_ioremap() variant. */
++ if (nr_nonram_remap == 0)
++ acpi_os_ioremap = xen_acpi_os_ioremap;
++#endif
++
++ xen_nonram_remap[nr_nonram_remap].maddr = maddr;
++ xen_nonram_remap[nr_nonram_remap].paddr = paddr;
++ xen_nonram_remap[nr_nonram_remap].size = size;
++
++ nr_nonram_remap++;
++}
++
+ #ifdef CONFIG_XEN_DEBUG_FS
+ #include <linux/debugfs.h>
+ static int p2m_dump_show(struct seq_file *m, void *v)
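xen_acpi_os_ioremap() rebases an address by the per-entry delta paddr - maddr whenever it falls inside a remapped range, and passes it through unchanged otherwise. The same lookup in a tiny standalone form (table contents invented for the example):

    #include <stdint.h>
    #include <stdio.h>

    struct remap {
            uint64_t maddr, paddr, size;
    };

    /* Rebase addr into the relocated range when it overlaps a remap entry,
     * following the xen_acpi_os_ioremap() logic. */
    static uint64_t rebase(const struct remap *tbl, int n,
                           uint64_t addr, uint64_t len)
    {
            for (int i = 0; i < n; i++) {
                    if (addr + len > tbl[i].maddr &&
                        addr < tbl[i].maddr + tbl[i].size)
                            return addr + tbl[i].paddr - tbl[i].maddr;
            }
            return addr;    /* not remapped */
    }

    int main(void)
    {
            const struct remap tbl[] = {
                    { 0x80000000ULL, 0x200000000ULL, 0x10000ULL },
            };

            printf("%#llx\n", (unsigned long long)rebase(tbl, 1,
                   0x80004000ULL, 0x100ULL));       /* 0x200004000 */
            return 0;
    }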
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index 806ddb2391d9bd..c3db71d96c434a 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -15,12 +15,12 @@
+ #include <linux/cpuidle.h>
+ #include <linux/cpufreq.h>
+ #include <linux/memory_hotplug.h>
++#include <linux/acpi.h>
+
+ #include <asm/elf.h>
+ #include <asm/vdso.h>
+ #include <asm/e820/api.h>
+ #include <asm/setup.h>
+-#include <asm/acpi.h>
+ #include <asm/numa.h>
+ #include <asm/idtentry.h>
+ #include <asm/xen/hypervisor.h>
+@@ -46,6 +46,9 @@ bool xen_pv_pci_possible;
+ /* E820 map used during setting up memory. */
+ static struct e820_table xen_e820_table __initdata;
+
++/* Number of initially usable memory pages. */
++static unsigned long ini_nr_pages __initdata;
++
+ /*
+ * Buffer used to remap identity mapped pages. We only need the virtual space.
+ * The physical page behind this address is remapped as needed to different
+@@ -212,7 +215,7 @@ static int __init xen_free_mfn(unsigned long mfn)
+ * as a fallback if the remapping fails.
+ */
+ static void __init xen_set_identity_and_release_chunk(unsigned long start_pfn,
+- unsigned long end_pfn, unsigned long nr_pages)
++ unsigned long end_pfn)
+ {
+ unsigned long pfn, end;
+ int ret;
+@@ -220,7 +223,7 @@ static void __init xen_set_identity_and_release_chunk(unsigned long start_pfn,
+ WARN_ON(start_pfn > end_pfn);
+
+ /* Release pages first. */
+- end = min(end_pfn, nr_pages);
++ end = min(end_pfn, ini_nr_pages);
+ for (pfn = start_pfn; pfn < end; pfn++) {
+ unsigned long mfn = pfn_to_mfn(pfn);
+
+@@ -341,15 +344,14 @@ static void __init xen_do_set_identity_and_remap_chunk(
+ * to Xen and not remapped.
+ */
+ static unsigned long __init xen_set_identity_and_remap_chunk(
+- unsigned long start_pfn, unsigned long end_pfn, unsigned long nr_pages,
+- unsigned long remap_pfn)
++ unsigned long start_pfn, unsigned long end_pfn, unsigned long remap_pfn)
+ {
+ unsigned long pfn;
+ unsigned long i = 0;
+ unsigned long n = end_pfn - start_pfn;
+
+ if (remap_pfn == 0)
+- remap_pfn = nr_pages;
++ remap_pfn = ini_nr_pages;
+
+ while (i < n) {
+ unsigned long cur_pfn = start_pfn + i;
+@@ -358,19 +360,19 @@ static unsigned long __init xen_set_identity_and_remap_chunk(
+ unsigned long remap_range_size;
+
+ /* Do not remap pages beyond the current allocation */
+- if (cur_pfn >= nr_pages) {
++ if (cur_pfn >= ini_nr_pages) {
+ /* Identity map remaining pages */
+ set_phys_range_identity(cur_pfn, cur_pfn + size);
+ break;
+ }
+- if (cur_pfn + size > nr_pages)
+- size = nr_pages - cur_pfn;
++ if (cur_pfn + size > ini_nr_pages)
++ size = ini_nr_pages - cur_pfn;
+
+ remap_range_size = xen_find_pfn_range(&remap_pfn);
+ if (!remap_range_size) {
+ pr_warn("Unable to find available pfn range, not remapping identity pages\n");
+ xen_set_identity_and_release_chunk(cur_pfn,
+- cur_pfn + left, nr_pages);
++ cur_pfn + left);
+ break;
+ }
+ /* Adjust size to fit in current e820 RAM region */
+@@ -397,18 +399,18 @@ static unsigned long __init xen_set_identity_and_remap_chunk(
+ }
+
+ static unsigned long __init xen_count_remap_pages(
+- unsigned long start_pfn, unsigned long end_pfn, unsigned long nr_pages,
++ unsigned long start_pfn, unsigned long end_pfn,
+ unsigned long remap_pages)
+ {
+- if (start_pfn >= nr_pages)
++ if (start_pfn >= ini_nr_pages)
+ return remap_pages;
+
+- return remap_pages + min(end_pfn, nr_pages) - start_pfn;
++ return remap_pages + min(end_pfn, ini_nr_pages) - start_pfn;
+ }
+
+-static unsigned long __init xen_foreach_remap_area(unsigned long nr_pages,
++static unsigned long __init xen_foreach_remap_area(
+ unsigned long (*func)(unsigned long start_pfn, unsigned long end_pfn,
+- unsigned long nr_pages, unsigned long last_val))
++ unsigned long last_val))
+ {
+ phys_addr_t start = 0;
+ unsigned long ret_val = 0;
+@@ -436,8 +438,7 @@ static unsigned long __init xen_foreach_remap_area(unsigned long nr_pages,
+ end_pfn = PFN_UP(entry->addr);
+
+ if (start_pfn < end_pfn)
+- ret_val = func(start_pfn, end_pfn, nr_pages,
+- ret_val);
++ ret_val = func(start_pfn, end_pfn, ret_val);
+ start = end;
+ }
+ }
+@@ -494,6 +495,8 @@ void __init xen_remap_memory(void)
+ set_pte_mfn(buf, mfn_save, PAGE_KERNEL);
+
+ pr_info("Remapped %ld page(s)\n", remapped);
++
++ xen_do_remap_nonram();
+ }
+
+ static unsigned long __init xen_get_pages_limit(void)
+@@ -567,7 +570,7 @@ static void __init xen_ignore_unusable(void)
+ }
+ }
+
+-bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size)
++static bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size)
+ {
+ struct e820_entry *entry;
+ unsigned mapcnt;
+@@ -624,6 +627,111 @@ phys_addr_t __init xen_find_free_area(phys_addr_t size)
+ return 0;
+ }
+
++/*
++ * Swap a non-RAM E820 map entry with RAM above ini_nr_pages.
++ * Note that the E820 map is modified accordingly, but the P2M map isn't yet.
++ * The adaptation of the P2M must be deferred until page allocation is possible.
++ */
++static void __init xen_e820_swap_entry_with_ram(struct e820_entry *swap_entry)
++{
++ struct e820_entry *entry;
++ unsigned int mapcnt;
++ phys_addr_t mem_end = PFN_PHYS(ini_nr_pages);
++ phys_addr_t swap_addr, swap_size, entry_end;
++
++ swap_addr = PAGE_ALIGN_DOWN(swap_entry->addr);
++ swap_size = PAGE_ALIGN(swap_entry->addr - swap_addr + swap_entry->size);
++ entry = xen_e820_table.entries;
++
++ for (mapcnt = 0; mapcnt < xen_e820_table.nr_entries; mapcnt++) {
++ entry_end = entry->addr + entry->size;
++ if (entry->type == E820_TYPE_RAM && entry->size >= swap_size &&
++ entry_end - swap_size >= mem_end) {
++ /* Reduce RAM entry by needed space (whole pages). */
++ entry->size -= swap_size;
++
++ /* Add new entry at the end of E820 map. */
++ entry = xen_e820_table.entries +
++ xen_e820_table.nr_entries;
++ xen_e820_table.nr_entries++;
++
++ /* Fill new entry (keep size and page offset). */
++ entry->type = swap_entry->type;
++ entry->addr = entry_end - swap_size +
++ swap_addr - swap_entry->addr;
++ entry->size = swap_entry->size;
++
++ /* Convert old entry to RAM, align to pages. */
++ swap_entry->type = E820_TYPE_RAM;
++ swap_entry->addr = swap_addr;
++ swap_entry->size = swap_size;
++
++ /* Remember PFN<->MFN relation for P2M update. */
++ xen_add_remap_nonram(swap_addr, entry_end - swap_size,
++ swap_size);
++
++ /* Order E820 table and merge entries. */
++ e820__update_table(&xen_e820_table);
++
++ return;
++ }
++
++ entry++;
++ }
++
++ xen_raw_console_write("No suitable area found for required E820 entry remapping action\n");
++ BUG();
++}
++
++/*
++ * Look for non-RAM memory types in a specific guest physical area and move
++ * those away if possible (ACPI NVS only for now).
++ */
++static void __init xen_e820_resolve_conflicts(phys_addr_t start,
++ phys_addr_t size)
++{
++ struct e820_entry *entry;
++ unsigned int mapcnt;
++ phys_addr_t end;
++
++ if (!size)
++ return;
++
++ end = start + size;
++ entry = xen_e820_table.entries;
++
++ for (mapcnt = 0; mapcnt < xen_e820_table.nr_entries; mapcnt++) {
++ if (entry->addr >= end)
++ return;
++
++ if (entry->addr + entry->size > start &&
++ entry->type == E820_TYPE_NVS)
++ xen_e820_swap_entry_with_ram(entry);
++
++ entry++;
++ }
++}
++
++/*
++ * Check for an area in physical memory to be usable for non-movable purposes.
++ * An area is considered usable if the used E820 map lists it as RAM or
++ * some other type which can be moved to higher PFNs while keeping the MFNs.
++ * In case the area is not usable, crash the system with an error message.
++ */
++void __init xen_chk_is_e820_usable(phys_addr_t start, phys_addr_t size,
++ const char *component)
++{
++ xen_e820_resolve_conflicts(start, size);
++
++ if (!xen_is_e820_reserved(start, size))
++ return;
++
++ xen_raw_console_write("Xen hypervisor allocated ");
++ xen_raw_console_write(component);
++ xen_raw_console_write(" memory conflicts with E820 map\n");
++ BUG();
++}
++
+ /*
+ * Like memcpy, but with physical addresses for dest and src.
+ */
+@@ -683,7 +791,7 @@ static void __init xen_reserve_xen_mfnlist(void)
+ **/
+ char * __init xen_memory_setup(void)
+ {
+- unsigned long max_pfn, pfn_s, n_pfns;
++ unsigned long pfn_s, n_pfns;
+ phys_addr_t mem_end, addr, size, chunk_size;
+ u32 type;
+ int rc;
+@@ -695,9 +803,8 @@ char * __init xen_memory_setup(void)
+ int op;
+
+ xen_parse_512gb();
+- max_pfn = xen_get_pages_limit();
+- max_pfn = min(max_pfn, xen_start_info->nr_pages);
+- mem_end = PFN_PHYS(max_pfn);
++ ini_nr_pages = min(xen_get_pages_limit(), xen_start_info->nr_pages);
++ mem_end = PFN_PHYS(ini_nr_pages);
+
+ memmap.nr_entries = ARRAY_SIZE(xen_e820_table.entries);
+ set_xen_guest_handle(memmap.buffer, xen_e820_table.entries);
+@@ -747,13 +854,35 @@ char * __init xen_memory_setup(void)
+ /* Make sure the Xen-supplied memory map is well-ordered. */
+ e820__update_table(&xen_e820_table);
+
++ /*
++ * Check whether the kernel itself conflicts with the target E820 map.
++ * Failing now is better than running into weird problems later due
++ * to relocating (and even reusing) pages with kernel text or data.
++ */
++ xen_chk_is_e820_usable(__pa_symbol(_text),
++ __pa_symbol(_end) - __pa_symbol(_text),
++ "kernel");
++
++ /*
++ * Check for a conflict of the xen_start_info memory with the target
++ * E820 map.
++ */
++ xen_chk_is_e820_usable(__pa(xen_start_info), sizeof(*xen_start_info),
++ "xen_start_info");
++
++ /*
++ * Check for a conflict of the hypervisor supplied page tables with
++ * the target E820 map.
++ */
++ xen_pt_check_e820();
++
+ max_pages = xen_get_max_pages();
+
+ /* How many extra pages do we need due to remapping? */
+- max_pages += xen_foreach_remap_area(max_pfn, xen_count_remap_pages);
++ max_pages += xen_foreach_remap_area(xen_count_remap_pages);
+
+- if (max_pages > max_pfn)
+- extra_pages += max_pages - max_pfn;
++ if (max_pages > ini_nr_pages)
++ extra_pages += max_pages - ini_nr_pages;
+
+ /*
+ * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
+@@ -762,8 +891,8 @@ char * __init xen_memory_setup(void)
+ * Make sure we have no memory above max_pages, as this area
+ * isn't handled by the p2m management.
+ */
+- maxmem_pages = EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM));
+- extra_pages = min3(maxmem_pages, extra_pages, max_pages - max_pfn);
++ maxmem_pages = EXTRA_MEM_RATIO * min(ini_nr_pages, PFN_DOWN(MAXMEM));
++ extra_pages = min3(maxmem_pages, extra_pages, max_pages - ini_nr_pages);
+ i = 0;
+ addr = xen_e820_table.entries[0].addr;
+ size = xen_e820_table.entries[0].size;
+@@ -819,23 +948,6 @@ char * __init xen_memory_setup(void)
+
+ e820__update_table(e820_table);
+
+- /*
+- * Check whether the kernel itself conflicts with the target E820 map.
+- * Failing now is better than running into weird problems later due
+- * to relocating (and even reusing) pages with kernel text or data.
+- */
+- if (xen_is_e820_reserved(__pa_symbol(_text),
+- __pa_symbol(__bss_stop) - __pa_symbol(_text))) {
+- xen_raw_console_write("Xen hypervisor allocated kernel memory conflicts with E820 map\n");
+- BUG();
+- }
+-
+- /*
+- * Check for a conflict of the hypervisor supplied page tables with
+- * the target E820 map.
+- */
+- xen_pt_check_e820();
+-
+ xen_reserve_xen_mfnlist();
+
+ /* Check for a conflict of the initrd with the target E820 map. */
+@@ -863,7 +975,7 @@ char * __init xen_memory_setup(void)
+ * Set identity map on non-RAM pages and prepare remapping the
+ * underlying RAM.
+ */
+- xen_foreach_remap_area(max_pfn, xen_set_identity_and_remap_chunk);
++ xen_foreach_remap_area(xen_set_identity_and_remap_chunk);
+
+ pr_info("Released %ld page(s)\n", xen_released_pages);
+
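One detail of xen_e820_swap_entry_with_ram() worth spelling out: the entry to be moved is first expanded outward to whole pages, aligning the start down and the length up, so the later PFN<->MFN fixups stay page-granular. A standalone illustration of that alignment arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL
    #define PAGE_ALIGN_DOWN(x) ((x) & ~(PAGE_SIZE - 1))
    #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    int main(void)
    {
            /* Expand an arbitrary byte range outward to whole pages, as
             * xen_e820_swap_entry_with_ram() does before moving an entry. */
            uint64_t addr = 0x12345, size = 0x2345;
            uint64_t swap_addr = PAGE_ALIGN_DOWN(addr);
            uint64_t swap_size = PAGE_ALIGN(addr - swap_addr + size);

            printf("%#llx+%#llx -> %#llx+%#llx\n",
                   (unsigned long long)addr, (unsigned long long)size,
                   (unsigned long long)swap_addr,
                   (unsigned long long)swap_size);  /* 0x12000+0x3000 */
            return 0;
    }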
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 0cf16fc79e0bf0..e1b782e823e6b4 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -47,8 +47,12 @@ void xen_mm_unpin_all(void);
+ #ifdef CONFIG_X86_64
+ void __init xen_relocate_p2m(void);
+ #endif
++void __init xen_do_remap_nonram(void);
++void __init xen_add_remap_nonram(phys_addr_t maddr, phys_addr_t paddr,
++ unsigned long size);
+
+-bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size);
++void __init xen_chk_is_e820_usable(phys_addr_t start, phys_addr_t size,
++ const char *component);
+ unsigned long __ref xen_chk_extra_mem(unsigned long pfn);
+ void __init xen_inv_extra_mem(void);
+ void __init xen_remap_memory(void);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 36a4998c4b378b..1cc40a857fb858 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2911,8 +2911,12 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
+
+ /* if a merge has already been setup, then proceed with that first */
+- if (bfqq->new_bfqq)
+- return bfqq->new_bfqq;
++ new_bfqq = bfqq->new_bfqq;
++ if (new_bfqq) {
++ while (new_bfqq->new_bfqq)
++ new_bfqq = new_bfqq->new_bfqq;
++ return new_bfqq;
++ }
+
+ /*
+ * Check delayed stable merge for rotational or non-queueing
+@@ -3125,10 +3129,12 @@ void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_put_queue(bfqq);
+ }
+
+-static void
+-bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+- struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++static struct bfq_queue *bfq_merge_bfqqs(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic,
++ struct bfq_queue *bfqq)
+ {
++ struct bfq_queue *new_bfqq = bfqq->new_bfqq;
++
+ bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
+ (unsigned long)new_bfqq->pid);
+ /* Save weight raising and idle window of the merged queues */
+@@ -3222,6 +3228,8 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ bfq_reassign_last_bfqq(bfqq, new_bfqq);
+
+ bfq_release_process_ref(bfqd, bfqq);
++
++ return new_bfqq;
+ }
+
+ static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
+@@ -3257,14 +3265,8 @@ static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
+ * fulfilled, i.e., bic can be redirected to new_bfqq
+ * and bfqq can be put.
+ */
+- bfq_merge_bfqqs(bfqd, bfqd->bio_bic, bfqq,
+- new_bfqq);
+- /*
+- * If we get here, bio will be queued into new_queue,
+- * so use new_bfqq to decide whether bio and rq can be
+- * merged.
+- */
+- bfqq = new_bfqq;
++ while (bfqq != new_bfqq)
++ bfqq = bfq_merge_bfqqs(bfqd, bfqd->bio_bic, bfqq);
+
+ /*
+ * Change also bqfd->bio_bfqq, as
+@@ -5701,9 +5703,7 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ * state before killing it.
+ */
+ bfqq->bic = bic;
+- bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
+-
+- return new_bfqq;
++ return bfq_merge_bfqqs(bfqd, bic, bfqq);
+ }
+
+ /*
+@@ -6158,6 +6158,7 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
+ bool waiting, idle_timer_disabled = false;
+
+ if (new_bfqq) {
++ struct bfq_queue *old_bfqq = bfqq;
+ /*
+ * Release the request's reference to the old bfqq
+ * and make sure one is taken to the shared queue.
+@@ -6174,18 +6175,18 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
+ * new_bfqq.
+ */
+ if (bic_to_bfqq(RQ_BIC(rq), true,
+- bfq_actuator_index(bfqd, rq->bio)) == bfqq)
+- bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
+- bfqq, new_bfqq);
++ bfq_actuator_index(bfqd, rq->bio)) == bfqq) {
++ while (bfqq != new_bfqq)
++ bfqq = bfq_merge_bfqqs(bfqd, RQ_BIC(rq), bfqq);
++ }
+
+- bfq_clear_bfqq_just_created(bfqq);
++ bfq_clear_bfqq_just_created(old_bfqq);
+ /*
+ * rq is about to be enqueued into new_bfqq,
+ * release rq reference on bfqq
+ */
+- bfq_put_queue(bfqq);
++ bfq_put_queue(old_bfqq);
+ rq->elv.priv[1] = new_bfqq;
+- bfqq = new_bfqq;
+ }
+
+ bfq_update_io_thinktime(bfqd, bfqq);
+@@ -6723,7 +6724,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ {
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
+
+- if (bfqq_process_refs(bfqq) == 1) {
++ if (bfqq_process_refs(bfqq) == 1 && !bfqq->new_bfqq) {
+ bfqq->pid = current->pid;
+ bfq_clear_bfqq_coop(bfqq);
+ bfq_clear_bfqq_split_coop(bfqq);
+@@ -6821,6 +6822,31 @@ static void bfq_prepare_request(struct request *rq)
+ rq->elv.priv[0] = rq->elv.priv[1] = NULL;
+ }
+
++static struct bfq_queue *bfq_waker_bfqq(struct bfq_queue *bfqq)
++{
++ struct bfq_queue *new_bfqq = bfqq->new_bfqq;
++ struct bfq_queue *waker_bfqq = bfqq->waker_bfqq;
++
++ if (!waker_bfqq)
++ return NULL;
++
++ while (new_bfqq) {
++ if (new_bfqq == waker_bfqq) {
++ /*
++ * If waker_bfqq is in the merge chain, and current
++ * is the only process referencing it, do not use it
++ * as the waker.
++ */
++ if (bfqq_process_refs(waker_bfqq) == 1)
++ return NULL;
++ break;
++ }
++
++ new_bfqq = new_bfqq->new_bfqq;
++ }
++
++ return waker_bfqq;
++}
++
+ /*
+ * If needed, init rq, allocate bfq data structures associated with
+ * rq, and increment reference counters in the destination bfq_queue
+@@ -6882,7 +6908,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ /* If the queue was seeky for too long, break it apart. */
+ if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) &&
+ !bic->bfqq_data[a_idx].stably_merged) {
+- struct bfq_queue *old_bfqq = bfqq;
++ struct bfq_queue *waker_bfqq = bfq_waker_bfqq(bfqq);
+
+ /* Update bic before losing reference to bfqq */
+ if (bfq_bfqq_in_large_burst(bfqq))
+@@ -6902,7 +6928,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ bfqq_already_existing = true;
+
+ if (!bfqq_already_existing) {
+- bfqq->waker_bfqq = old_bfqq->waker_bfqq;
++ bfqq->waker_bfqq = waker_bfqq;
+ bfqq->tentative_waker_bfqq = NULL;
+
+ /*
+@@ -6912,7 +6938,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ * woken_list of the waker. See
+ * bfq_check_waker for details.
+ */
+- if (bfqq->waker_bfqq)
++ if (waker_bfqq)
+ hlist_add_head(&bfqq->woken_list_node,
+ &bfqq->waker_bfqq->woken_list);
+ }
+@@ -6934,7 +6960,8 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ * addition, if the queue has also just been split, we have to
+ * resume its state.
+ */
+- if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++ if (likely(bfqq != &bfqd->oom_bfqq) && !bfqq->new_bfqq &&
++ bfqq_process_refs(bfqq) == 1) {
+ bfqq->bic = bic;
+ if (split) {
+ /*
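Several of the hunks above replace a single bfqq->new_bfqq dereference with a loop; the invariant is that a merged queue is just a forwarding pointer, and the queue actually servicing I/O is the tail of the merge chain. A toy model of that chain walk:

    #include <stddef.h>
    #include <stdio.h>

    /* Toy queue with a "merged into" link, mirroring bfqq->new_bfqq. */
    struct queue {
            const char *name;
            struct queue *new_q;    /* set once this queue was merged away */
    };

    /* Walk to the end of the merge chain, as the new while loops do, so
     * callers always operate on the queue that actually receives I/O. */
    static struct queue *chain_tail(struct queue *q)
    {
            while (q->new_q)
                    q = q->new_q;
            return q;
    }

    int main(void)
    {
            struct queue c = { "c", NULL };
            struct queue b = { "b", &c };
            struct queue a = { "a", &b };

            printf("%s\n", chain_tail(&a)->name);   /* prints c */
            return 0;
    }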
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1217c2cd66dd88..bc5e8c5eaac9ff 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -799,6 +799,7 @@ void submit_bio_noacct(struct bio *bio)
+
+ switch (bio_op(bio)) {
+ case REQ_OP_READ:
++ break;
+ case REQ_OP_WRITE:
+ if (bio->bi_opf & REQ_ATOMIC) {
+ status = blk_validate_atomic_write_op_size(q, bio);
+diff --git a/block/elevator.c b/block/elevator.c
+index c355b55d010786..4122026b11f1a1 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -715,7 +715,9 @@ int elv_iosched_load_module(struct gendisk *disk, const char *buf,
+
+ strscpy(elevator_name, buf, sizeof(elevator_name));
+
+- return request_module("%s-iosched", strstrip(elevator_name));
++ request_module("%s-iosched", strstrip(elevator_name));
++
++ return 0;
+ }
+
+ ssize_t elv_iosched_store(struct gendisk *disk, const char *buf,
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index ab76e64f0f6c3b..5bd7a603092ea9 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -555,9 +555,11 @@ static bool blk_add_partition(struct gendisk *disk,
+
+ part = add_partition(disk, p, from, size, state->parts[p].flags,
+ &state->parts[p].info);
+- if (IS_ERR(part) && PTR_ERR(part) != -ENXIO) {
+- printk(KERN_ERR " %s: p%d could not be added: %pe\n",
+- disk->disk_name, p, part);
++ if (IS_ERR(part)) {
++ if (PTR_ERR(part) != -ENXIO) {
++ printk(KERN_ERR " %s: p%d could not be added: %pe\n",
++ disk->disk_name, p, part);
++ }
+ return true;
+ }
+
+diff --git a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c
+index a5da8ccd353ef7..43af5fa510c09f 100644
+--- a/crypto/asymmetric_keys/asymmetric_type.c
++++ b/crypto/asymmetric_keys/asymmetric_type.c
+@@ -60,17 +60,18 @@ struct key *find_asymmetric_key(struct key *keyring,
+ char *req, *p;
+ int len;
+
+- WARN_ON(!id_0 && !id_1 && !id_2);
+-
+ if (id_0) {
+ lookup = id_0->data;
+ len = id_0->len;
+ } else if (id_1) {
+ lookup = id_1->data;
+ len = id_1->len;
+- } else {
++ } else if (id_2) {
+ lookup = id_2->data;
+ len = id_2->len;
++ } else {
++ WARN_ON(1);
++ return ERR_PTR(-EINVAL);
+ }
+
+ /* Construct an identifier "id:<keyid>". */
+diff --git a/crypto/xor.c b/crypto/xor.c
+index a1363162978c77..f39621a57bb33c 100644
+--- a/crypto/xor.c
++++ b/crypto/xor.c
+@@ -83,33 +83,30 @@ static void __init
+ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
+ {
+ int speed;
+- int i, j;
+- ktime_t min, start, diff;
++ unsigned long reps;
++ ktime_t min, start, t0;
+
+ tmpl->next = template_list;
+ template_list = tmpl;
+
+ preempt_disable();
+
+- min = (ktime_t)S64_MAX;
+- for (i = 0; i < 3; i++) {
+- start = ktime_get();
+- for (j = 0; j < REPS; j++) {
+- mb(); /* prevent loop optimization */
+- tmpl->do_2(BENCH_SIZE, b1, b2);
+- mb();
+- }
+- diff = ktime_sub(ktime_get(), start);
+- if (diff < min)
+- min = diff;
+- }
++ reps = 0;
++ t0 = ktime_get();
++ /* delay start until time has advanced */
++ while ((start = ktime_get()) == t0)
++ cpu_relax();
++ do {
++ mb(); /* prevent loop optimization */
++ tmpl->do_2(BENCH_SIZE, b1, b2);
++ mb();
++ } while (reps++ < REPS || (t0 = ktime_get()) == start);
++ min = ktime_sub(t0, start);
+
+ preempt_enable();
+
+ // bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s]
+- if (!min)
+- min = 1;
+- speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
++ speed = (1000 * reps * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
+ tmpl->speed = speed;
+
+ pr_info(" %-16s: %5d MB/sec\n", tmpl->name, speed);
+diff --git a/drivers/acpi/acpica/exsystem.c b/drivers/acpi/acpica/exsystem.c
+index f665ffd9a396cf..2c384bd52b9c4b 100644
+--- a/drivers/acpi/acpica/exsystem.c
++++ b/drivers/acpi/acpica/exsystem.c
+@@ -133,14 +133,15 @@ acpi_status acpi_ex_system_do_stall(u32 how_long_us)
+ * (ACPI specifies 100 usec as max, but this gives some slack in
+ * order to support existing BIOSs)
+ */
+- ACPI_ERROR((AE_INFO,
+- "Time parameter is too large (%u)", how_long_us));
++ ACPI_ERROR_ONCE((AE_INFO,
++ "Time parameter is too large (%u)",
++ how_long_us));
+ status = AE_AML_OPERAND_VALUE;
+ } else {
+ if (how_long_us > 100) {
+- ACPI_WARNING((AE_INFO,
+- "Time parameter %u us > 100 us violating ACPI spec, please fix the firmware.",
+- how_long_us));
++ ACPI_WARNING_ONCE((AE_INFO,
++ "Time parameter %u us > 100 us violating ACPI spec, please fix the firmware.",
++ how_long_us));
+ }
+ acpi_os_stall(how_long_us);
+ }
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index dd3d3082c8c76a..28adea68e1cd61 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -171,8 +171,11 @@ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);
+ #define GET_BIT_WIDTH(reg) ((reg)->access_width ? (8 << ((reg)->access_width - 1)) : (reg)->bit_width)
+
+ /* Shift and apply the mask for CPC reads/writes */
+-#define MASK_VAL(reg, val) (((val) >> (reg)->bit_offset) & \
++#define MASK_VAL_READ(reg, val) (((val) >> (reg)->bit_offset) & \
+ GENMASK(((reg)->bit_width) - 1, 0))
++#define MASK_VAL_WRITE(reg, prev_val, val) \
++ ((((val) & GENMASK(((reg)->bit_width) - 1, 0)) << (reg)->bit_offset) | \
++ ((prev_val) & ~(GENMASK(((reg)->bit_width) - 1, 0) << (reg)->bit_offset))) \
+
+ static ssize_t show_feedback_ctrs(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+@@ -859,6 +862,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+
+ /* Store CPU Logical ID */
+ cpc_ptr->cpu_id = pr->id;
++ spin_lock_init(&cpc_ptr->rmw_lock);
+
+ /* Parse PSD data for this CPU */
+ ret = acpi_get_psd(cpc_ptr, handle);
+@@ -1064,7 +1068,7 @@ static int cpc_read(int cpu, struct cpc_register_resource *reg_res, u64 *val)
+ }
+
+ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+- *val = MASK_VAL(reg, *val);
++ *val = MASK_VAL_READ(reg, *val);
+
+ return 0;
+ }
+@@ -1073,9 +1077,11 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ {
+ int ret_val = 0;
+ int size;
++ u64 prev_val;
+ void __iomem *vaddr = NULL;
+ int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
+	struct cpc_reg *reg = &reg_res->cpc_entry.reg;
++ struct cpc_desc *cpc_desc;
+
+ size = GET_BIT_WIDTH(reg);
+
+@@ -1108,8 +1114,34 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ return acpi_os_write_memory((acpi_physical_address)reg->address,
+ val, size);
+
+- if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+- val = MASK_VAL(reg, val);
++ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
++ cpc_desc = per_cpu(cpc_desc_ptr, cpu);
++ if (!cpc_desc) {
++ pr_debug("No CPC descriptor for CPU:%d\n", cpu);
++ return -ENODEV;
++ }
++
++ spin_lock(&cpc_desc->rmw_lock);
++ switch (size) {
++ case 8:
++ prev_val = readb_relaxed(vaddr);
++ break;
++ case 16:
++ prev_val = readw_relaxed(vaddr);
++ break;
++ case 32:
++ prev_val = readl_relaxed(vaddr);
++ break;
++ case 64:
++ prev_val = readq_relaxed(vaddr);
++ break;
++ default:
++ spin_unlock(&cpc_desc->rmw_lock);
++ return -EFAULT;
++ }
++ val = MASK_VAL_WRITE(reg, prev_val, val);
++ val |= prev_val;
++ }
+
+ switch (size) {
+ case 8:
+@@ -1136,6 +1168,9 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ break;
+ }
+
++ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
++ spin_unlock(&cpc_desc->rmw_lock);
++
+ return ret_val;
+ }
+
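
The MASK_VAL_READ()/MASK_VAL_WRITE() macros above extract and re-insert a register field at bit_offset/bit_width while preserving the neighbouring bits, which is why the write path now needs a prior read under rmw_lock. A userspace sketch of the same arithmetic, with GENMASK() open-coded and purely illustrative values:

```c
#include <stdio.h>
#include <stdint.h>

#define GENMASK(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

struct reg { unsigned bit_width, bit_offset; };

static uint64_t mask_val_read(const struct reg *r, uint64_t v)
{
	return (v >> r->bit_offset) & GENMASK(r->bit_width - 1, 0);
}

static uint64_t mask_val_write(const struct reg *r, uint64_t prev, uint64_t v)
{
	uint64_t field = GENMASK(r->bit_width - 1, 0);

	/* new field shifted into place, old bits outside it preserved */
	return ((v & field) << r->bit_offset) |
	       (prev & ~(field << r->bit_offset));
}

int main(void)
{
	struct reg r = { .bit_width = 8, .bit_offset = 4 };
	uint64_t prev = 0xABCDULL;

	printf("read:  %#llx\n",
	       (unsigned long long)mask_val_read(&r, prev));        /* 0xbc */
	printf("write: %#llx\n",
	       (unsigned long long)mask_val_write(&r, prev, 0x12)); /* 0xa12d */
	return 0;
}
```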
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index 23373faa35ecd6..95a19e3569c83f 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -540,8 +540,9 @@ int acpi_device_setup_files(struct acpi_device *dev)
+ * If device has _STR, 'description' file is created
+ */
+ if (acpi_has_method(dev->handle, "_STR")) {
+- status = acpi_evaluate_object(dev->handle, "_STR",
+- NULL, &buffer);
++ status = acpi_evaluate_object_typed(dev->handle, "_STR",
++ NULL, &buffer,
++ ACPI_TYPE_BUFFER);
+ if (ACPI_FAILURE(status))
+ buffer.pointer = NULL;
+ dev->pnp.str_obj = buffer.pointer;
+diff --git a/drivers/acpi/pmic/tps68470_pmic.c b/drivers/acpi/pmic/tps68470_pmic.c
+index ebd03e4729555a..0d1a82eeb4b0b6 100644
+--- a/drivers/acpi/pmic/tps68470_pmic.c
++++ b/drivers/acpi/pmic/tps68470_pmic.c
+@@ -376,10 +376,8 @@ static int tps68470_pmic_opregion_probe(struct platform_device *pdev)
+ struct tps68470_pmic_opregion *opregion;
+ acpi_status status;
+
+- if (!dev || !tps68470_regmap) {
+- dev_warn(dev, "dev or regmap is NULL\n");
+- return -EINVAL;
+- }
++ if (!tps68470_regmap)
++ return dev_err_probe(dev, -EINVAL, "regmap is missing\n");
+
+ if (!handle) {
+ dev_warn(dev, "acpi handle is NULL\n");
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index df5d5a554b388b..cb2aacbb93357e 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -554,6 +554,12 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ * to have a working keyboard.
+ */
+ static const struct dmi_system_id irq1_edge_low_force_override[] = {
++ {
++ /* MECHREV Jiaolong17KS Series GM7XG0M */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "GM7XG0M"),
++ },
++ },
+ {
+ /* XMG APEX 17 (M23) */
+ .matches = {
+@@ -572,6 +578,12 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
+ },
+ },
++ {
++ /* TongFang GMxXGxX/TUXEDO Polaris 15 Gen5 AMD */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "GMxXGxX"),
++ },
++ },
+ {
+ /* TongFang GMxXGxx sold as Eluktronics Inc. RP-15 */
+ .matches = {
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 674b9db7a1ef8b..75a5f559402f87 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -549,6 +549,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir9,1"),
+ },
+ },
++ {
++ .callback = video_detect_force_native,
++ /* Apple MacBook Pro 9,2 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,2"),
++ },
++ },
+ {
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ .callback = video_detect_force_native,
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 214b935c2ced79..7df9ec9f924c44 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -630,6 +630,14 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
+ list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) {
+ struct ata_queued_cmd *qc;
+
++ /*
++	 * If the scmd was added to EH via ata_qc_schedule_eh() ->
++ * scsi_timeout() -> scsi_eh_scmd_add(), scsi_timeout() will
++ * have set DID_TIME_OUT (since libata does not have an abort
++ * handler). Thus, to clear DID_TIME_OUT, clear the host byte.
++ */
++ set_host_byte(scmd, DID_OK);
++
+ ata_qc_for_each_raw(ap, qc, i) {
+ if (qc->flags & ATA_QCFLAG_ACTIVE &&
+ qc->scsicmd == scmd)
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 473e00a58a8b0c..74f7053f01a138 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -1691,9 +1691,6 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
+ set_status_byte(qc->scsicmd, SAM_STAT_CHECK_CONDITION);
+ } else if (is_error && !have_sense) {
+ ata_gen_ata_sense(qc);
+- } else {
+- /* Keep the SCSI ML and status byte, clear host byte. */
+- cmd->result &= 0x0000ffff;
+ }
+
+ ata_qc_done(qc);
+@@ -2359,7 +2356,7 @@ static unsigned int ata_msense_control(struct ata_device *dev, u8 *buf,
+ case ALL_SUB_MPAGES:
+ n = ata_msense_control_spg0(dev, buf, changeable);
+ n += ata_msense_control_spgt2(dev, buf + n, CDL_T2A_SUB_MPAGE);
+- n += ata_msense_control_spgt2(dev, buf + n, CDL_T2A_SUB_MPAGE);
++ n += ata_msense_control_spgt2(dev, buf + n, CDL_T2B_SUB_MPAGE);
+ n += ata_msense_control_ata_feature(dev, buf + n);
+ return n;
+ default:
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 8c0733d3aad8e9..3b0f4b6153fc52 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -4515,9 +4515,11 @@ EXPORT_SYMBOL_GPL(device_destroy);
+ */
+ int device_rename(struct device *dev, const char *new_name)
+ {
++ struct subsys_private *sp = NULL;
+ struct kobject *kobj = &dev->kobj;
+ char *old_device_name = NULL;
+ int error;
++ bool is_link_renamed = false;
+
+ dev = get_device(dev);
+ if (!dev)
+@@ -4532,7 +4534,7 @@ int device_rename(struct device *dev, const char *new_name)
+ }
+
+ if (dev->class) {
+- struct subsys_private *sp = class_to_subsys(dev->class);
++ sp = class_to_subsys(dev->class);
+
+ if (!sp) {
+ error = -EINVAL;
+@@ -4541,16 +4543,19 @@ int device_rename(struct device *dev, const char *new_name)
+
+ error = sysfs_rename_link_ns(&sp->subsys.kobj, kobj, old_device_name,
+ new_name, kobject_namespace(kobj));
+- subsys_put(sp);
+ if (error)
+ goto out;
++
++ is_link_renamed = true;
+ }
+
+ error = kobject_rename(kobj, new_name);
+- if (error)
+- goto out;
+-
+ out:
++ if (error && is_link_renamed)
++ sysfs_rename_link_ns(&sp->subsys.kobj, kobj, new_name,
++ old_device_name, kobject_namespace(kobj));
++ subsys_put(sp);
++
+ put_device(dev);
+
+ kfree(old_device_name);
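
The device_rename() hunk above turns a one-way error path into a transaction: the sysfs link rename is recorded, and if the later kobject rename fails it is rolled back. A small userspace sketch of that rollback pattern; every helper name here is an illustrative stand-in.

```c
#include <stdio.h>
#include <stdbool.h>

static int rename_link(const char *from, const char *to)
{
	printf("link: %s -> %s\n", from, to);
	return 0;
}

static int rename_kobject(const char *name, bool fail)
{
	(void)name;
	return fail ? -22 : 0;  /* -EINVAL stand-in */
}

static int device_rename(const char *old, const char *new, bool fail)
{
	bool link_renamed = false;
	int err;

	err = rename_link(old, new);
	if (err)
		goto out;
	link_renamed = true;

	err = rename_kobject(new, fail);
out:
	if (err && link_renamed)
		rename_link(new, old);  /* roll the first step back */
	return err;
}

int main(void)
{
	device_rename("eth0", "lan0", false); /* success */
	device_rename("eth0", "lan0", true);  /* failure: rolled back */
	return 0;
}
```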
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index a03ee4b11134cf..324a9a3c087aa2 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -849,6 +849,26 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name,
+ {}
+ #endif
+
++/*
++ * Reject firmware file names with ".." path components.
++ * There are drivers that construct firmware file names from device-supplied
++ * strings, and we don't want some device to be able to tell us "I would like to
++ * be sent my firmware from ../../../etc/shadow, please".
++ *
++ * Search for ".." surrounded by either '/' or start/end of string.
++ *
++ * This intentionally only looks at the firmware name, not at the firmware base
++ * directory or at symlink contents.
++ */
++static bool name_contains_dotdot(const char *name)
++{
++ size_t name_len = strlen(name);
++
++ return strcmp(name, "..") == 0 || strncmp(name, "../", 3) == 0 ||
++ strstr(name, "/../") != NULL ||
++ (name_len >= 3 && strcmp(name+name_len-3, "/..") == 0);
++}
++
+ /* called from request_firmware() and request_firmware_work_func() */
+ static int
+ _request_firmware(const struct firmware **firmware_p, const char *name,
+@@ -869,6 +889,14 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ goto out;
+ }
+
++ if (name_contains_dotdot(name)) {
++ dev_warn(device,
++ "Firmware load for '%s' refused, path contains '..' component\n",
++ name);
++ ret = -EINVAL;
++ goto out;
++ }
++
+ ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ offset, opt_flags);
+ if (ret <= 0) /* error or already assigned */
+@@ -946,6 +974,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ * @name will be used as $FIRMWARE in the uevent environment and
+ * should be distinctive enough not to be confused with any other
+ * firmware image for this or any other device.
++ * It must not contain any ".." path components - "foo/bar..bin" is
++ * allowed, but "foo/../bar.bin" is not.
+ *
+ * Caller must hold the reference count of @device.
+ *
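
The ".." filter above is easy to exercise in isolation; the following userspace program re-implements name_contains_dotdot() with the same logic and checks the accept/reject cases the comments describe:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool name_contains_dotdot(const char *name)
{
	size_t name_len = strlen(name);

	return strcmp(name, "..") == 0 || strncmp(name, "../", 3) == 0 ||
	       strstr(name, "/../") != NULL ||
	       (name_len >= 3 && strcmp(name + name_len - 3, "/..") == 0);
}

int main(void)
{
	assert(name_contains_dotdot("../../../etc/shadow")); /* rejected */
	assert(name_contains_dotdot("foo/../bar.bin"));      /* rejected */
	assert(name_contains_dotdot("foo/.."));              /* rejected */
	assert(!name_contains_dotdot("foo/bar..bin"));       /* allowed  */
	assert(!name_contains_dotdot("..foo/bar.bin"));      /* allowed  */
	printf("all checks passed\n");
	return 0;
}
```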
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+index f742ad2a21da02..c4eaa1158d54ed 100644
+--- a/drivers/base/module.c
++++ b/drivers/base/module.c
+@@ -66,27 +66,31 @@ int module_add_driver(struct module *mod, const struct device_driver *drv)
+ driver_name = make_driver_name(drv);
+ if (!driver_name) {
+ ret = -ENOMEM;
+- goto out;
++ goto out_remove_kobj;
+ }
+
+ module_create_drivers_dir(mk);
+ if (!mk->drivers_dir) {
+ ret = -EINVAL;
+- goto out;
++ goto out_free_driver_name;
+ }
+
+ ret = sysfs_create_link(mk->drivers_dir, &drv->p->kobj, driver_name);
+ if (ret)
+- goto out;
++ goto out_remove_drivers_dir;
+
+ kfree(driver_name);
+
+ return 0;
+-out:
+- sysfs_remove_link(&drv->p->kobj, "module");
++
++out_remove_drivers_dir:
+ sysfs_remove_link(mk->drivers_dir, driver_name);
++
++out_free_driver_name:
+ kfree(driver_name);
+
++out_remove_kobj:
++ sysfs_remove_link(&drv->p->kobj, "module");
+ return ret;
+ }
+
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index a9e49b212341be..abafc4edf9ed45 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -3399,10 +3399,12 @@ void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
+ void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
+ {
+ unsigned long flags;
+- if (device->ldev->md.uuid[UI_BITMAP] == 0 && val == 0)
++ spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
++ if (device->ldev->md.uuid[UI_BITMAP] == 0 && val == 0) {
++ spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
+ return;
++ }
+
+- spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
+ if (val == 0) {
+ drbd_uuid_move_history(device);
+ device->ldev->md.uuid[UI_HISTORY_START] = device->ldev->md.uuid[UI_BITMAP];
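
The drbd change above moves the early-return test under uuid_lock so the check and the update are atomic with respect to concurrent writers. A generic sketch of that check-under-lock pattern, using a pthread mutex and illustrative names:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long uuid_bitmap;

static void set_bitmap_uuid(unsigned long val)
{
	pthread_mutex_lock(&lock);
	/* precondition is evaluated while holding the lock, so no
	 * writer can change uuid_bitmap between check and update */
	if (uuid_bitmap == 0 && val == 0) {
		pthread_mutex_unlock(&lock);
		return;
	}
	uuid_bitmap = val;
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	set_bitmap_uuid(0);  /* early return, decided under the lock */
	set_bitmap_uuid(42);
	return 0;
}
```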
+diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
+index e858e7e0383f26..c2b6c4d9729db9 100644
+--- a/drivers/block/drbd/drbd_state.c
++++ b/drivers/block/drbd/drbd_state.c
+@@ -876,7 +876,7 @@ is_valid_state(struct drbd_device *device, union drbd_state ns)
+ ns.disk == D_OUTDATED)
+ rv = SS_CONNECTED_OUTDATES;
+
+- else if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
++ else if (nc && (ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
+ (nc->verify_alg[0] == 0))
+ rv = SS_NO_VERIFY_ALG;
+
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 41a90150b501d7..8b243144fd64f9 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -181,6 +181,17 @@ static void nbd_requeue_cmd(struct nbd_cmd *cmd)
+ {
+ struct request *req = blk_mq_rq_from_pdu(cmd);
+
++ lockdep_assert_held(&cmd->lock);
++
++ /*
++	 * Clear the INFLIGHT flag so that this cmd won't be completed
++	 * via the normal completion path.
++	 *
++	 * The INFLIGHT flag will be set again when the cmd is queued to
++	 * nbd next time.
++ */
++ __clear_bit(NBD_CMD_INFLIGHT, &cmd->flags);
++
+ if (!test_and_set_bit(NBD_CMD_REQUEUED, &cmd->flags))
+ blk_mq_requeue_request(req, true);
+ }
+@@ -339,7 +350,7 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+
+ lim = queue_limits_start_update(nbd->disk->queue);
+ if (nbd->config->flags & NBD_FLAG_SEND_TRIM)
+- lim.max_hw_discard_sectors = UINT_MAX;
++ lim.max_hw_discard_sectors = UINT_MAX >> SECTOR_SHIFT;
+ else
+ lim.max_hw_discard_sectors = 0;
+ if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH)) {
+@@ -488,8 +499,8 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req)
+ nbd_mark_nsock_dead(nbd, nsock, 1);
+ mutex_unlock(&nsock->tx_lock);
+ }
+- mutex_unlock(&cmd->lock);
+ nbd_requeue_cmd(cmd);
++ mutex_unlock(&cmd->lock);
+ nbd_config_put(nbd);
+ return BLK_EH_DONE;
+ }
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 1d53a3f48a0eb9..bca06bfb4bc32f 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -71,9 +71,6 @@ struct ublk_rq_data {
+ struct llist_node node;
+
+ struct kref ref;
+- __u64 sector;
+- __u32 operation;
+- __u32 nr_zones;
+ };
+
+ struct ublk_uring_cmd_pdu {
+@@ -214,6 +211,33 @@ static inline bool ublk_queue_is_zoned(struct ublk_queue *ubq)
+
+ #ifdef CONFIG_BLK_DEV_ZONED
+
++struct ublk_zoned_report_desc {
++ __u64 sector;
++ __u32 operation;
++ __u32 nr_zones;
++};
++
++static DEFINE_XARRAY(ublk_zoned_report_descs);
++
++static int ublk_zoned_insert_report_desc(const struct request *req,
++ struct ublk_zoned_report_desc *desc)
++{
++ return xa_insert(&ublk_zoned_report_descs, (unsigned long)req,
++ desc, GFP_KERNEL);
++}
++
++static struct ublk_zoned_report_desc *ublk_zoned_erase_report_desc(
++ const struct request *req)
++{
++ return xa_erase(&ublk_zoned_report_descs, (unsigned long)req);
++}
++
++static struct ublk_zoned_report_desc *ublk_zoned_get_report_desc(
++ const struct request *req)
++{
++ return xa_load(&ublk_zoned_report_descs, (unsigned long)req);
++}
++
+ static int ublk_get_nr_zones(const struct ublk_device *ub)
+ {
+ const struct ublk_param_basic *p = &ub->params.basic;
+@@ -308,7 +332,7 @@ static int ublk_report_zones(struct gendisk *disk, sector_t sector,
+ unsigned int zones_in_request =
+ min_t(unsigned int, remaining_zones, max_zones_per_request);
+ struct request *req;
+- struct ublk_rq_data *pdu;
++ struct ublk_zoned_report_desc desc;
+ blk_status_t status;
+
+ memset(buffer, 0, buffer_length);
+@@ -319,20 +343,23 @@ static int ublk_report_zones(struct gendisk *disk, sector_t sector,
+ goto out;
+ }
+
+- pdu = blk_mq_rq_to_pdu(req);
+- pdu->operation = UBLK_IO_OP_REPORT_ZONES;
+- pdu->sector = sector;
+- pdu->nr_zones = zones_in_request;
++ desc.operation = UBLK_IO_OP_REPORT_ZONES;
++ desc.sector = sector;
++ desc.nr_zones = zones_in_request;
++ ret = ublk_zoned_insert_report_desc(req, &desc);
++ if (ret)
++ goto free_req;
+
+ ret = blk_rq_map_kern(disk->queue, req, buffer, buffer_length,
+ GFP_KERNEL);
+- if (ret) {
+- blk_mq_free_request(req);
+- goto out;
+- }
++ if (ret)
++ goto erase_desc;
+
+ status = blk_execute_rq(req, 0);
+ ret = blk_status_to_errno(status);
++erase_desc:
++ ublk_zoned_erase_report_desc(req);
++free_req:
+ blk_mq_free_request(req);
+ if (ret)
+ goto out;
+@@ -366,7 +393,7 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
+ {
+ struct ublksrv_io_desc *iod = ublk_get_iod(ubq, req->tag);
+ struct ublk_io *io = &ubq->ios[req->tag];
+- struct ublk_rq_data *pdu = blk_mq_rq_to_pdu(req);
++ struct ublk_zoned_report_desc *desc;
+ u32 ublk_op;
+
+ switch (req_op(req)) {
+@@ -389,12 +416,15 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
+ ublk_op = UBLK_IO_OP_ZONE_RESET_ALL;
+ break;
+ case REQ_OP_DRV_IN:
+- ublk_op = pdu->operation;
++ desc = ublk_zoned_get_report_desc(req);
++ if (!desc)
++ return BLK_STS_IOERR;
++ ublk_op = desc->operation;
+ switch (ublk_op) {
+ case UBLK_IO_OP_REPORT_ZONES:
+ iod->op_flags = ublk_op | ublk_req_build_flags(req);
+- iod->nr_zones = pdu->nr_zones;
+- iod->start_sector = pdu->sector;
++ iod->nr_zones = desc->nr_zones;
++ iod->start_sector = desc->sector;
+ return BLK_STS_OK;
+ default:
+ return BLK_STS_IOERR;
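
The ublk change above moves per-request report-zones parameters out of the request PDU and into a lookaside table keyed by the request pointer (an xarray in the kernel). A userspace analogue, with a fixed-size linear table standing in for the xarray:

```c
#include <stdio.h>

struct report_desc { unsigned long sector; unsigned op, nr_zones; };

#define MAX_REQS 16
static const void *keys[MAX_REQS];
static struct report_desc descs[MAX_REQS];

static int desc_insert(const void *req, const struct report_desc *d)
{
	for (int i = 0; i < MAX_REQS; i++)
		if (!keys[i]) {
			keys[i] = req;
			descs[i] = *d;
			return 0;
		}
	return -1;
}

static struct report_desc *desc_lookup(const void *req)
{
	for (int i = 0; i < MAX_REQS; i++)
		if (keys[i] == req)
			return &descs[i];
	return NULL;
}

static void desc_erase(const void *req)
{
	for (int i = 0; i < MAX_REQS; i++)
		if (keys[i] == req)
			keys[i] = NULL;
}

int main(void)
{
	int fake_req;
	struct report_desc d = { .sector = 128, .op = 1, .nr_zones = 4 };

	desc_insert(&fake_req, &d);
	printf("nr_zones = %u\n", desc_lookup(&fake_req)->nr_zones);
	desc_erase(&fake_req);
	return 0;
}
```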
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 51d9d4532dda4e..1ec71a2fb63eac 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1397,7 +1397,10 @@ static int btusb_submit_intr_urb(struct hci_dev *hdev, gfp_t mem_flags)
+ if (!urb)
+ return -ENOMEM;
+
+- size = le16_to_cpu(data->intr_ep->wMaxPacketSize);
++	/* Use the maximum HCI event size so the USB stack handles
++	 * ZLPs (zero-length packets) and short transfers automatically.
++ */
++ size = HCI_MAX_EVENT_SIZE;
+
+ buf = kmalloc(size, mem_flags);
+ if (!buf) {
+diff --git a/drivers/bus/arm-integrator-lm.c b/drivers/bus/arm-integrator-lm.c
+index b715c8ab36e8bd..a65c79b08804f4 100644
+--- a/drivers/bus/arm-integrator-lm.c
++++ b/drivers/bus/arm-integrator-lm.c
+@@ -85,6 +85,7 @@ static int integrator_ap_lm_probe(struct platform_device *pdev)
+ return -ENODEV;
+ }
+ map = syscon_node_to_regmap(syscon);
++ of_node_put(syscon);
+ if (IS_ERR(map)) {
+ dev_err(dev,
+ "could not find Integrator/AP system controller\n");
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 14a11880bceafb..71da8864841d42 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -433,8 +433,7 @@ static const struct mhi_controller_config modem_foxconn_sdx72_config = {
+
+ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
+ .name = "foxconn-sdx55",
+- .fw = "qcom/sdx55m/sbl1.mbn",
+- .edl = "qcom/sdx55m/edl.mbn",
++ .edl = "qcom/sdx55m/foxconn/prog_firehose_sdx55.mbn",
+ .config = &modem_foxconn_sdx55_config,
+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ .dma_data_width = 32,
+@@ -444,8 +443,7 @@ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
+
+ static const struct mhi_pci_dev_info mhi_foxconn_t99w175_info = {
+ .name = "foxconn-t99w175",
+- .fw = "qcom/sdx55m/sbl1.mbn",
+- .edl = "qcom/sdx55m/edl.mbn",
++ .edl = "qcom/sdx55m/foxconn/prog_firehose_sdx55.mbn",
+ .config = &modem_foxconn_sdx55_config,
+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ .dma_data_width = 32,
+@@ -455,8 +453,7 @@ static const struct mhi_pci_dev_info mhi_foxconn_t99w175_info = {
+
+ static const struct mhi_pci_dev_info mhi_foxconn_dw5930e_info = {
+ .name = "foxconn-dw5930e",
+- .fw = "qcom/sdx55m/sbl1.mbn",
+- .edl = "qcom/sdx55m/edl.mbn",
++ .edl = "qcom/sdx55m/foxconn/prog_firehose_sdx55.mbn",
+ .config = &modem_foxconn_sdx55_config,
+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ .dma_data_width = 32,
+@@ -502,7 +499,7 @@ static const struct mhi_pci_dev_info mhi_foxconn_dw5932e_info = {
+
+ static const struct mhi_pci_dev_info mhi_foxconn_t99w515_info = {
+ .name = "foxconn-t99w515",
+- .edl = "fox/sdx72m/edl.mbn",
++ .edl = "qcom/sdx72m/foxconn/edl.mbn",
+ .edl_trigger = true,
+ .config = &modem_foxconn_sdx72_config,
+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+@@ -513,7 +510,7 @@ static const struct mhi_pci_dev_info mhi_foxconn_t99w515_info = {
+
+ static const struct mhi_pci_dev_info mhi_foxconn_dw5934e_info = {
+ .name = "foxconn-dw5934e",
+- .edl = "fox/sdx72m/edl.mbn",
++ .edl = "qcom/sdx72m/foxconn/edl.mbn",
+ .edl_trigger = true,
+ .config = &modem_foxconn_sdx72_config,
+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+@@ -680,6 +677,15 @@ static const struct mhi_pci_dev_info mhi_telit_fn990_info = {
+ .mru_default = 32768,
+ };
+
++static const struct mhi_pci_dev_info mhi_telit_fe990a_info = {
++ .name = "telit-fe990a",
++ .config = &modem_telit_fn990_config,
++ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
++ .dma_data_width = 32,
++ .sideband_wake = false,
++ .mru_default = 32768,
++};
++
+ /* Keep the list sorted based on the PID. New VID should be added as the last entry */
+ static const struct pci_device_id mhi_pci_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
+@@ -697,9 +703,9 @@ static const struct pci_device_id mhi_pci_id_table[] = {
+ /* Telit FN990 */
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2010),
+ .driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
+- /* Telit FE990 */
++ /* Telit FE990A */
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2015),
+- .driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
++ .driver_data = (kernel_ulong_t) &mhi_telit_fe990a_info },
+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
+ .driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0309),
+diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
+index 01e2e1ef82cf9c..ae5f3a01f55423 100644
+--- a/drivers/char/hw_random/Kconfig
++++ b/drivers/char/hw_random/Kconfig
+@@ -555,6 +555,7 @@ config HW_RANDOM_ARM_SMCCC_TRNG
+ config HW_RANDOM_CN10K
+ tristate "Marvell CN10K Random Number Generator support"
+ depends on HW_RANDOM && PCI && (ARM64 || (64BIT && COMPILE_TEST))
++ default HW_RANDOM if ARCH_THUNDER
+ help
+ This driver provides support for the True Random Number
+ generator available in Marvell CN10K SoCs.
+diff --git a/drivers/char/hw_random/bcm2835-rng.c b/drivers/char/hw_random/bcm2835-rng.c
+index b03e8030062758..aa2b135e3ee230 100644
+--- a/drivers/char/hw_random/bcm2835-rng.c
++++ b/drivers/char/hw_random/bcm2835-rng.c
+@@ -94,8 +94,10 @@ static int bcm2835_rng_init(struct hwrng *rng)
+ return ret;
+
+ ret = reset_control_reset(priv->reset);
+- if (ret)
++ if (ret) {
++ clk_disable_unprepare(priv->clk);
+ return ret;
++ }
+
+ if (priv->mask_interrupts) {
+ /* mask the interrupt */
+diff --git a/drivers/char/hw_random/cctrng.c b/drivers/char/hw_random/cctrng.c
+index c0d2f824769f88..4c50efc464835b 100644
+--- a/drivers/char/hw_random/cctrng.c
++++ b/drivers/char/hw_random/cctrng.c
+@@ -622,6 +622,7 @@ static int __maybe_unused cctrng_resume(struct device *dev)
+ /* wait for Cryptocell reset completion */
+ if (!cctrng_wait_for_reset_completion(drvdata)) {
+ dev_err(dev, "Cryptocell reset not completed");
++ clk_disable_unprepare(drvdata->clk);
+ return -EBUSY;
+ }
+
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index aa993753ab120b..1e3048f2bb38f0 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -142,7 +142,7 @@ static int mtk_rng_probe(struct platform_device *pdev)
+ dev_set_drvdata(&pdev->dev, priv);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
+ pm_runtime_use_autosuspend(&pdev->dev);
+- pm_runtime_enable(&pdev->dev);
++ devm_pm_runtime_enable(&pdev->dev);
+
+ dev_info(&pdev->dev, "registered RNG driver\n");
+
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index 30b4c288c1bbc3..c3fbbf4d3db79a 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -47,6 +47,8 @@ static ssize_t tpm_dev_transmit(struct tpm_chip *chip, struct tpm_space *space,
+
+ if (!ret)
+ ret = tpm2_commit_space(chip, space, buf, &len);
++ else
++ tpm2_flush_space(chip);
+
+ out_rc:
+ return ret ? ret : len;
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index d3521aadd43eef..44f60730cff441 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -1362,4 +1362,5 @@ int tpm2_sessions_init(struct tpm_chip *chip)
+
+ return rc;
+ }
++EXPORT_SYMBOL(tpm2_sessions_init);
+ #endif /* CONFIG_TCG_TPM2_HMAC */
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 4892d491da8dae..25a66870c165c5 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -169,6 +169,9 @@ void tpm2_flush_space(struct tpm_chip *chip)
+ struct tpm_space *space = &chip->work_space;
+ int i;
+
++ if (!space)
++ return;
++
+ for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++)
+ if (space->context_tbl[i] && ~space->context_tbl[i])
+ tpm2_flush_context(chip, space->context_tbl[i]);
+diff --git a/drivers/clk/at91/sama7g5.c b/drivers/clk/at91/sama7g5.c
+index 91b5c6f1481964..4e9594714b1428 100644
+--- a/drivers/clk/at91/sama7g5.c
++++ b/drivers/clk/at91/sama7g5.c
+@@ -66,6 +66,7 @@ enum pll_component_id {
+ PLL_COMPID_FRAC,
+ PLL_COMPID_DIV0,
+ PLL_COMPID_DIV1,
++ PLL_COMPID_MAX,
+ };
+
+ /*
+@@ -165,7 +166,7 @@ static struct sama7g5_pll {
+ u8 t;
+ u8 eid;
+ u8 safe_div;
+-} sama7g5_plls[][PLL_ID_MAX] = {
++} sama7g5_plls[][PLL_COMPID_MAX] = {
+ [PLL_ID_CPU] = {
+ [PLL_COMPID_FRAC] = {
+ .n = "cpupll_fracck",
+@@ -1038,7 +1039,7 @@ static void __init sama7g5_pmc_setup(struct device_node *np)
+ sama7g5_pmc->chws[PMC_MAIN] = hw;
+
+ for (i = 0; i < PLL_ID_MAX; i++) {
+- for (j = 0; j < 3; j++) {
++ for (j = 0; j < PLL_COMPID_MAX; j++) {
+ struct clk_hw *parent_hw;
+
+ if (!sama7g5_plls[i][j].n)
+diff --git a/drivers/clk/imx/clk-composite-7ulp.c b/drivers/clk/imx/clk-composite-7ulp.c
+index e208ddc511339e..db7f40b07d1abf 100644
+--- a/drivers/clk/imx/clk-composite-7ulp.c
++++ b/drivers/clk/imx/clk-composite-7ulp.c
+@@ -14,6 +14,7 @@
+ #include "../clk-fractional-divider.h"
+ #include "clk.h"
+
++#define PCG_PR_MASK BIT(31)
+ #define PCG_PCS_SHIFT 24
+ #define PCG_PCS_MASK 0x7
+ #define PCG_CGC_SHIFT 30
+@@ -78,6 +79,12 @@ static struct clk_hw *imx_ulp_clk_hw_composite(const char *name,
+ struct clk_hw *hw;
+ u32 val;
+
++ val = readl(reg);
++ if (!(val & PCG_PR_MASK)) {
++ pr_info("PCC PR is 0 for clk:%s, bypass\n", name);
++ return 0;
++ }
++
+ if (mux_present) {
+ mux = kzalloc(sizeof(*mux), GFP_KERNEL);
+ if (!mux)
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 8cc07d056a8384..f187582ba49196 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -204,6 +204,34 @@ static const struct clk_ops imx8m_clk_composite_mux_ops = {
+ .determine_rate = imx8m_clk_composite_mux_determine_rate,
+ };
+
++static int imx8m_clk_composite_gate_enable(struct clk_hw *hw)
++{
++ struct clk_gate *gate = to_clk_gate(hw);
++ unsigned long flags;
++ u32 val;
++
++ spin_lock_irqsave(gate->lock, flags);
++
++ val = readl(gate->reg);
++ val |= BIT(gate->bit_idx);
++ writel(val, gate->reg);
++
++ spin_unlock_irqrestore(gate->lock, flags);
++
++ return 0;
++}
++
++static void imx8m_clk_composite_gate_disable(struct clk_hw *hw)
++{
++ /* composite clk requires the disable hook */
++}
++
++static const struct clk_ops imx8m_clk_composite_gate_ops = {
++ .enable = imx8m_clk_composite_gate_enable,
++ .disable = imx8m_clk_composite_gate_disable,
++ .is_enabled = clk_gate_is_enabled,
++};
++
+ struct clk_hw *__imx8m_clk_hw_composite(const char *name,
+ const char * const *parent_names,
+ int num_parents, void __iomem *reg,
+@@ -217,6 +245,7 @@ struct clk_hw *__imx8m_clk_hw_composite(const char *name,
+ struct clk_mux *mux;
+ const struct clk_ops *divider_ops;
+ const struct clk_ops *mux_ops;
++ const struct clk_ops *gate_ops;
+
+ mux = kzalloc(sizeof(*mux), GFP_KERNEL);
+ if (!mux)
+@@ -257,20 +286,22 @@ struct clk_hw *__imx8m_clk_hw_composite(const char *name,
+ div->flags = CLK_DIVIDER_ROUND_CLOSEST;
+
+ /* skip registering the gate ops if M4 is enabled */
+- if (!mcore_booted) {
+- gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+- if (!gate)
+- goto free_div;
+-
+- gate_hw = &gate->hw;
+- gate->reg = reg;
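
The one-line blk-core fix above adds the break that stops REQ_OP_READ from falling through into the write-only REQ_ATOMIC validation. A toy illustration of the fall-through being fixed:

```c
#include <stdio.h>

enum op { OP_READ, OP_WRITE };

static void submit(enum op op, int atomic)
{
	switch (op) {
	case OP_READ:
		break;          /* the added break: reads skip the check */
	case OP_WRITE:
		if (atomic)
			printf("validating atomic write size\n");
		break;
	}
	printf("op %d submitted\n", op);
}

int main(void)
{
	submit(OP_READ, 1);   /* no validation message */
	submit(OP_WRITE, 1);  /* validation message */
	return 0;
}
```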
+- gate->bit_idx = PCG_CGC_SHIFT;
+- gate->lock = &imx_ccm_lock;
+- }
++ gate = kzalloc(sizeof(*gate), GFP_KERNEL);
++ if (!gate)
++ goto free_div;
++
++ gate_hw = &gate->hw;
++ gate->reg = reg;
++ gate->bit_idx = PCG_CGC_SHIFT;
++ gate->lock = &imx_ccm_lock;
++ if (!mcore_booted)
++ gate_ops = &clk_gate_ops;
++ else
++ gate_ops = &imx8m_clk_composite_gate_ops;
+
+ hw = clk_hw_register_composite(NULL, name, parent_names, num_parents,
+ mux_hw, mux_ops, div_hw,
+- divider_ops, gate_hw, &clk_gate_ops, flags);
++ divider_ops, gate_hw, gate_ops, flags);
+ if (IS_ERR(hw))
+ goto free_gate;
+
+diff --git a/drivers/clk/imx/clk-composite-93.c b/drivers/clk/imx/clk-composite-93.c
+index 81164bdcd6cc9a..6c6c5a30f3282d 100644
+--- a/drivers/clk/imx/clk-composite-93.c
++++ b/drivers/clk/imx/clk-composite-93.c
+@@ -76,6 +76,13 @@ static int imx93_clk_composite_gate_enable(struct clk_hw *hw)
+
+ static void imx93_clk_composite_gate_disable(struct clk_hw *hw)
+ {
++ /*
++	 * Skip disabling the root clock gate if the mcore is enabled;
++	 * the root clock may still be in use by the mcore.
++ */
++ if (mcore_booted)
++ return;
++
+ imx93_clk_composite_gate_endisable(hw, 0);
+ }
+
+@@ -222,7 +229,7 @@ struct clk_hw *imx93_clk_composite_flags(const char *name, const char * const *p
+ hw = clk_hw_register_composite(NULL, name, parent_names, num_parents,
+ mux_hw, &clk_mux_ro_ops, div_hw,
+ &clk_divider_ro_ops, NULL, NULL, flags);
+- } else if (!mcore_booted) {
++ } else {
+ gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+ if (!gate)
+ goto fail;
+@@ -238,12 +245,6 @@ struct clk_hw *imx93_clk_composite_flags(const char *name, const char * const *p
+ &imx93_clk_composite_divider_ops, gate_hw,
+ &imx93_clk_composite_gate_ops,
+ flags | CLK_SET_RATE_NO_REPARENT);
+- } else {
+- hw = clk_hw_register_composite(NULL, name, parent_names, num_parents,
+- mux_hw, &imx93_clk_composite_mux_ops, div_hw,
+- &imx93_clk_composite_divider_ops, NULL,
+- &imx93_clk_composite_gate_ops,
+- flags | CLK_SET_RATE_NO_REPARENT);
+ }
+
+ if (IS_ERR(hw))
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index 44462ab50e513c..1becba2b62d0be 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -291,6 +291,10 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
+ if (val & POWERUP_MASK)
+ return 0;
+
++ if (pll->flags & CLK_FRACN_GPPLL_FRACN)
++ writel_relaxed(readl_relaxed(pll->base + PLL_NUMERATOR),
++ pll->base + PLL_NUMERATOR);
++
+ val |= CLKMUX_BYPASS;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index f9394e94f69d73..05c7a82b751f3c 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -542,8 +542,8 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+
+ clk_set_parent(hws[IMX6UL_CLK_ENFC_SEL]->clk, hws[IMX6UL_CLK_PLL2_PFD2]->clk);
+
+- clk_set_parent(hws[IMX6UL_CLK_ENET1_REF_SEL]->clk, hws[IMX6UL_CLK_ENET_REF]->clk);
+- clk_set_parent(hws[IMX6UL_CLK_ENET2_REF_SEL]->clk, hws[IMX6UL_CLK_ENET2_REF]->clk);
++ clk_set_parent(hws[IMX6UL_CLK_ENET1_REF_SEL]->clk, hws[IMX6UL_CLK_ENET1_REF_125M]->clk);
++ clk_set_parent(hws[IMX6UL_CLK_ENET2_REF_SEL]->clk, hws[IMX6UL_CLK_ENET2_REF_125M]->clk);
+
+ imx_register_uart_clocks();
+ }
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index b381d6f784c890..0767d613b68b00 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -154,6 +154,15 @@ static const struct clk_parent_data clk_imx8mp_audiomix_pll_bypass_sels[] = {
+ PDM_SEL, 2, 0 \
+ }
+
++#define CLK_GATE_PARENT(gname, cname, pname) \
++ { \
++ gname"_cg", \
++ IMX8MP_CLK_AUDIOMIX_##cname, \
++ { .fw_name = pname, .name = pname }, NULL, 1, \
++ CLKEN0 + 4 * !!(IMX8MP_CLK_AUDIOMIX_##cname / 32), \
++ 1, IMX8MP_CLK_AUDIOMIX_##cname % 32 \
++ }
++
+ struct clk_imx8mp_audiomix_sel {
+ const char *name;
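
The rewritten do_xor_speed() above aligns the measurement to a clock edge (spin until the timestamp changes), then runs at least REPS iterations and keeps going until the clock advances again, so reps/elapsed stays well defined even on coarse clocks. A userspace sketch of the same timing strategy, with a placeholder workload:

```c
#include <stdio.h>
#include <time.h>

#define REPS 800U

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	volatile unsigned long sink = 0;
	unsigned long reps = 0;
	long long t0, start;

	t0 = now_ns();
	/* delay start until the clock has advanced */
	while ((start = now_ns()) == t0)
		;
	do {
		sink += reps;           /* stand-in for tmpl->do_2() */
	} while (reps++ < REPS || (t0 = now_ns()) == start);

	printf("%lu reps in %lld ns\n", reps, t0 - start);
	return 0;
}
```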
+ int clkid;
+@@ -171,14 +180,14 @@ static struct clk_imx8mp_audiomix_sel sels[] = {
+ CLK_GATE("earc", EARC_IPG),
+ CLK_GATE("ocrama", OCRAMA_IPG),
+ CLK_GATE("aud2htx", AUD2HTX_IPG),
+- CLK_GATE("earc_phy", EARC_PHY),
++ CLK_GATE_PARENT("earc_phy", EARC_PHY, "sai_pll_out_div2"),
+ CLK_GATE("sdma2", SDMA2_ROOT),
+ CLK_GATE("sdma3", SDMA3_ROOT),
+ CLK_GATE("spba2", SPBA2_ROOT),
+ CLK_GATE("dsp", DSP_ROOT),
+ CLK_GATE("dspdbg", DSPDBG_ROOT),
+ CLK_GATE("edma", EDMA_ROOT),
+- CLK_GATE("audpll", AUDPLL_ROOT),
++ CLK_GATE_PARENT("audpll", AUDPLL_ROOT, "osc_24m"),
+ CLK_GATE("mu2", MU2_ROOT),
+ CLK_GATE("mu3", MU3_ROOT),
+ CLK_PDM,
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 670aa2bab3017e..e561ff7b135fb5 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -551,8 +551,8 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+
+ hws[IMX8MP_CLK_IPG_ROOT] = imx_clk_hw_divider2("ipg_root", "ahb_root", ccm_base + 0x9080, 0, 1);
+
+- hws[IMX8MP_CLK_DRAM_ALT] = imx8m_clk_hw_composite("dram_alt", imx8mp_dram_alt_sels, ccm_base + 0xa000);
+- hws[IMX8MP_CLK_DRAM_APB] = imx8m_clk_hw_composite_critical("dram_apb", imx8mp_dram_apb_sels, ccm_base + 0xa080);
++ hws[IMX8MP_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mp_dram_alt_sels, ccm_base + 0xa000);
++ hws[IMX8MP_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mp_dram_apb_sels, ccm_base + 0xa080);
+ hws[IMX8MP_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mp_vpu_g1_sels, ccm_base + 0xa100);
+ hws[IMX8MP_CLK_VPU_G2] = imx8m_clk_hw_composite("vpu_g2", imx8mp_vpu_g2_sels, ccm_base + 0xa180);
+ hws[IMX8MP_CLK_CAN1] = imx8m_clk_hw_composite("can1", imx8mp_can1_sels, ccm_base + 0xa200);
+diff --git a/drivers/clk/imx/clk-imx8qxp.c b/drivers/clk/imx/clk-imx8qxp.c
+index 7d8883916cacdd..83f2e8bd6d5062 100644
+--- a/drivers/clk/imx/clk-imx8qxp.c
++++ b/drivers/clk/imx/clk-imx8qxp.c
+@@ -170,8 +170,8 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
+ imx_clk_scu("pwm_clk", IMX_SC_R_LCD_0_PWM_0, IMX_SC_PM_CLK_PER);
+ imx_clk_scu("elcdif_pll", IMX_SC_R_ELCDIF_PLL, IMX_SC_PM_CLK_PLL);
+ imx_clk_scu2("lcd_clk", lcd_sels, ARRAY_SIZE(lcd_sels), IMX_SC_R_LCD_0, IMX_SC_PM_CLK_PER);
+- imx_clk_scu2("lcd_pxl_clk", lcd_pxl_sels, ARRAY_SIZE(lcd_pxl_sels), IMX_SC_R_LCD_0, IMX_SC_PM_CLK_MISC0);
+ imx_clk_scu("lcd_pxl_bypass_div_clk", IMX_SC_R_LCD_0, IMX_SC_PM_CLK_BYPASS);
++ imx_clk_scu2("lcd_pxl_clk", lcd_pxl_sels, ARRAY_SIZE(lcd_pxl_sels), IMX_SC_R_LCD_0, IMX_SC_PM_CLK_MISC0);
+
+ /* Audio SS */
+ imx_clk_scu("audio_pll0_clk", IMX_SC_R_AUDIO_PLL_0, IMX_SC_PM_CLK_PLL);
+@@ -206,18 +206,18 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
+ imx_clk_scu("usb3_lpm_div", IMX_SC_R_USB_2, IMX_SC_PM_CLK_MISC);
+
+ /* Display controller SS */
+- imx_clk_scu2("dc0_disp0_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC0);
+- imx_clk_scu2("dc0_disp1_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC1);
+ imx_clk_scu("dc0_pll0_clk", IMX_SC_R_DC_0_PLL_0, IMX_SC_PM_CLK_PLL);
+ imx_clk_scu("dc0_pll1_clk", IMX_SC_R_DC_0_PLL_1, IMX_SC_PM_CLK_PLL);
+ imx_clk_scu("dc0_bypass0_clk", IMX_SC_R_DC_0_VIDEO0, IMX_SC_PM_CLK_BYPASS);
++ imx_clk_scu2("dc0_disp0_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC0);
++ imx_clk_scu2("dc0_disp1_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC1);
+ imx_clk_scu("dc0_bypass1_clk", IMX_SC_R_DC_0_VIDEO1, IMX_SC_PM_CLK_BYPASS);
+
+- imx_clk_scu2("dc1_disp0_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC0);
+- imx_clk_scu2("dc1_disp1_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC1);
+ imx_clk_scu("dc1_pll0_clk", IMX_SC_R_DC_1_PLL_0, IMX_SC_PM_CLK_PLL);
+ imx_clk_scu("dc1_pll1_clk", IMX_SC_R_DC_1_PLL_1, IMX_SC_PM_CLK_PLL);
+ imx_clk_scu("dc1_bypass0_clk", IMX_SC_R_DC_1_VIDEO0, IMX_SC_PM_CLK_BYPASS);
++ imx_clk_scu2("dc1_disp0_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC0);
++ imx_clk_scu2("dc1_disp1_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC1);
+ imx_clk_scu("dc1_bypass1_clk", IMX_SC_R_DC_1_VIDEO1, IMX_SC_PM_CLK_BYPASS);
+
+ /* MIPI-LVDS SS */
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 31bf9d13f15419..b59317e234c615 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1832,6 +1832,58 @@ const struct clk_ops clk_alpha_pll_agera_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_agera_ops);
+
++/**
++ * clk_lucid_5lpe_pll_configure - configure the lucid 5lpe pll
++ *
++ * @pll: clk alpha pll
++ * @regmap: register map
++ * @config: configuration to apply for pll
++ */
++void clk_lucid_5lpe_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
++ const struct alpha_pll_config *config)
++{
++ /*
++ * If the bootloader left the PLL enabled it's likely that there are
++ * RCGs that will lock up if we disable the PLL below.
++ */
++ if (trion_pll_is_enabled(pll, regmap)) {
++ pr_debug("Lucid 5LPE PLL is already enabled, skipping configuration\n");
++ return;
++ }
++
++ clk_alpha_pll_write_config(regmap, PLL_L_VAL(pll), config->l);
++ regmap_write(regmap, PLL_CAL_L_VAL(pll), TRION_PLL_CAL_VAL);
++ clk_alpha_pll_write_config(regmap, PLL_ALPHA_VAL(pll), config->alpha);
++ clk_alpha_pll_write_config(regmap, PLL_CONFIG_CTL(pll),
++ config->config_ctl_val);
++ clk_alpha_pll_write_config(regmap, PLL_CONFIG_CTL_U(pll),
++ config->config_ctl_hi_val);
++ clk_alpha_pll_write_config(regmap, PLL_CONFIG_CTL_U1(pll),
++ config->config_ctl_hi1_val);
++ clk_alpha_pll_write_config(regmap, PLL_USER_CTL(pll),
++ config->user_ctl_val);
++ clk_alpha_pll_write_config(regmap, PLL_USER_CTL_U(pll),
++ config->user_ctl_hi_val);
++ clk_alpha_pll_write_config(regmap, PLL_USER_CTL_U1(pll),
++ config->user_ctl_hi1_val);
++ clk_alpha_pll_write_config(regmap, PLL_TEST_CTL(pll),
++ config->test_ctl_val);
++ clk_alpha_pll_write_config(regmap, PLL_TEST_CTL_U(pll),
++ config->test_ctl_hi_val);
++ clk_alpha_pll_write_config(regmap, PLL_TEST_CTL_U1(pll),
++ config->test_ctl_hi1_val);
++
++ /* Disable PLL output */
++ regmap_update_bits(regmap, PLL_MODE(pll), PLL_OUTCTRL, 0);
++
++ /* Set operation mode to OFF */
++ regmap_write(regmap, PLL_OPMODE(pll), PLL_STANDBY);
++
++ /* Place the PLL in STANDBY mode */
++ regmap_update_bits(regmap, PLL_MODE(pll), PLL_RESET_N, PLL_RESET_N);
++}
++EXPORT_SYMBOL_GPL(clk_lucid_5lpe_pll_configure);
++
+ static int alpha_pll_lucid_5lpe_enable(struct clk_hw *hw)
+ {
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h
+index df8f0fe155313e..e2cf5c7e501d00 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.h
++++ b/drivers/clk/qcom/clk-alpha-pll.h
+@@ -208,6 +208,8 @@ void clk_agera_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+
+ void clk_zonda_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ const struct alpha_pll_config *config);
++void clk_lucid_5lpe_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
++ const struct alpha_pll_config *config);
+ void clk_lucid_evo_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ const struct alpha_pll_config *config);
+ void clk_lucid_ole_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+diff --git a/drivers/clk/qcom/dispcc-sm8250.c b/drivers/clk/qcom/dispcc-sm8250.c
+index 5a09009b72891b..6665e064a55519 100644
+--- a/drivers/clk/qcom/dispcc-sm8250.c
++++ b/drivers/clk/qcom/dispcc-sm8250.c
+@@ -1357,8 +1357,13 @@ static int disp_cc_sm8250_probe(struct platform_device *pdev)
+ disp_cc_sm8250_clocks[DISP_CC_MDSS_EDP_GTC_CLK_SRC] = NULL;
+ }
+
+- clk_lucid_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
+- clk_lucid_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++ if (of_device_is_compatible(pdev->dev.of_node, "qcom,sm8350-dispcc")) {
++ clk_lucid_5lpe_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
++ clk_lucid_5lpe_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++ } else {
++ clk_lucid_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
++ clk_lucid_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++ }
+
+ /* Enable clock gating for MDP clocks */
+ regmap_update_bits(regmap, 0x8000, 0x10, 0x10);
+diff --git a/drivers/clk/qcom/dispcc-sm8550.c b/drivers/clk/qcom/dispcc-sm8550.c
+index 31ae46f180a5c2..19066ea58224e6 100644
+--- a/drivers/clk/qcom/dispcc-sm8550.c
++++ b/drivers/clk/qcom/dispcc-sm8550.c
+@@ -196,7 +196,7 @@ static const struct clk_parent_data disp_cc_parent_data_3[] = {
+ static const struct parent_map disp_cc_parent_map_4[] = {
+ { P_BI_TCXO, 0 },
+ { P_DP0_PHY_PLL_LINK_CLK, 1 },
+- { P_DP1_PHY_PLL_VCO_DIV_CLK, 2 },
++ { P_DP0_PHY_PLL_VCO_DIV_CLK, 2 },
+ { P_DP3_PHY_PLL_VCO_DIV_CLK, 3 },
+ { P_DP1_PHY_PLL_VCO_DIV_CLK, 4 },
+ { P_DP2_PHY_PLL_VCO_DIV_CLK, 6 },
+@@ -213,7 +213,7 @@ static const struct clk_parent_data disp_cc_parent_data_4[] = {
+
+ static const struct parent_map disp_cc_parent_map_5[] = {
+ { P_BI_TCXO, 0 },
+- { P_DSI0_PHY_PLL_OUT_BYTECLK, 4 },
++ { P_DSI0_PHY_PLL_OUT_BYTECLK, 2 },
+ { P_DSI1_PHY_PLL_OUT_BYTECLK, 4 },
+ };
+
+@@ -400,7 +400,7 @@ static struct clk_rcg2 disp_cc_mdss_dptx1_aux_clk_src = {
+ .parent_data = disp_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(disp_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_dp_ops,
++ .ops = &clk_rcg2_ops,
+ },
+ };
+
+@@ -562,7 +562,7 @@ static struct clk_rcg2 disp_cc_mdss_esc0_clk_src = {
+ .parent_data = disp_cc_parent_data_5,
+ .num_parents = ARRAY_SIZE(disp_cc_parent_data_5),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -577,7 +577,7 @@ static struct clk_rcg2 disp_cc_mdss_esc1_clk_src = {
+ .parent_data = disp_cc_parent_data_5,
+ .num_parents = ARRAY_SIZE(disp_cc_parent_data_5),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -1611,7 +1611,7 @@ static struct gdsc mdss_gdsc = {
+ .name = "mdss_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
+- .flags = HW_CTRL | RETAIN_FF_ENABLE,
++ .flags = POLL_CFG_GDSCR | HW_CTRL | RETAIN_FF_ENABLE,
+ };
+
+ static struct gdsc mdss_int2_gdsc = {
+@@ -1620,7 +1620,7 @@ static struct gdsc mdss_int2_gdsc = {
+ .name = "mdss_int2_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
+- .flags = HW_CTRL | RETAIN_FF_ENABLE,
++ .flags = POLL_CFG_GDSCR | HW_CTRL | RETAIN_FF_ENABLE,
+ };
+
+ static struct clk_regmap *disp_cc_sm8550_clocks[] = {
+diff --git a/drivers/clk/qcom/gcc-ipq5332.c b/drivers/clk/qcom/gcc-ipq5332.c
+index f98591148a9767..6a4877d8882946 100644
+--- a/drivers/clk/qcom/gcc-ipq5332.c
++++ b/drivers/clk/qcom/gcc-ipq5332.c
+@@ -3388,6 +3388,7 @@ static struct clk_regmap *gcc_ipq5332_clocks[] = {
+ [GCC_QDSS_DAP_DIV_CLK_SRC] = &gcc_qdss_dap_div_clk_src.clkr,
+ [GCC_QDSS_ETR_USB_CLK] = &gcc_qdss_etr_usb_clk.clkr,
+ [GCC_QDSS_EUD_AT_CLK] = &gcc_qdss_eud_at_clk.clkr,
++ [GCC_QDSS_TSCTR_CLK_SRC] = &gcc_qdss_tsctr_clk_src.clkr,
+ [GCC_QPIC_AHB_CLK] = &gcc_qpic_ahb_clk.clkr,
+ [GCC_QPIC_CLK] = &gcc_qpic_clk.clkr,
+ [GCC_QPIC_IO_MACRO_CLK] = &gcc_qpic_io_macro_clk.clkr,
+diff --git a/drivers/clk/rockchip/clk-rk3228.c b/drivers/clk/rockchip/clk-rk3228.c
+index a24a35553e1349..7343d2d7676bca 100644
+--- a/drivers/clk/rockchip/clk-rk3228.c
++++ b/drivers/clk/rockchip/clk-rk3228.c
+@@ -409,7 +409,7 @@ static struct rockchip_clk_branch rk3228_clk_branches[] __initdata = {
+ RK2928_CLKSEL_CON(29), 0, 3, DFLAGS),
+ DIV(0, "sclk_vop_pre", "sclk_vop_src", 0,
+ RK2928_CLKSEL_CON(27), 8, 8, DFLAGS),
+- MUX(DCLK_VOP, "dclk_vop", mux_dclk_vop_p, 0,
++ MUX(DCLK_VOP, "dclk_vop", mux_dclk_vop_p, CLK_SET_RATE_PARENT | CLK_SET_RATE_NO_REPARENT,
+ RK2928_CLKSEL_CON(27), 1, 1, MFLAGS),
+
+ FACTOR(0, "xin12m", "xin24m", 0, 1, 2),
+diff --git a/drivers/clk/rockchip/clk-rk3588.c b/drivers/clk/rockchip/clk-rk3588.c
+index b30279a96dc8af..3027379f2fdd11 100644
+--- a/drivers/clk/rockchip/clk-rk3588.c
++++ b/drivers/clk/rockchip/clk-rk3588.c
+@@ -526,7 +526,7 @@ PNAME(pmu_200m_100m_p) = { "clk_pmu1_200m_src", "clk_pmu1_100m_src" };
+ PNAME(pmu_300m_24m_p) = { "clk_300m_src", "xin24m" };
+ PNAME(pmu_400m_24m_p) = { "clk_400m_src", "xin24m" };
+ PNAME(pmu_100m_50m_24m_src_p) = { "clk_pmu1_100m_src", "clk_pmu1_50m_src", "xin24m" };
+-PNAME(pmu_24m_32k_100m_src_p) = { "xin24m", "32k", "clk_pmu1_100m_src" };
++PNAME(pmu_24m_32k_100m_src_p) = { "xin24m", "xin32k", "clk_pmu1_100m_src" };
+ PNAME(hclk_pmu1_root_p) = { "clk_pmu1_200m_src", "clk_pmu1_100m_src", "clk_pmu1_50m_src", "xin24m" };
+ PNAME(hclk_pmu_cm0_root_p) = { "clk_pmu1_400m_src", "clk_pmu1_200m_src", "clk_pmu1_100m_src", "xin24m" };
+ PNAME(mclk_pdm0_p) = { "clk_pmu1_300m_src", "clk_pmu1_200m_src" };
+diff --git a/drivers/clk/starfive/clk-starfive-jh7110-vout.c b/drivers/clk/starfive/clk-starfive-jh7110-vout.c
+index 53f7af234cc23e..aabd0484ac23f6 100644
+--- a/drivers/clk/starfive/clk-starfive-jh7110-vout.c
++++ b/drivers/clk/starfive/clk-starfive-jh7110-vout.c
+@@ -145,7 +145,7 @@ static int jh7110_voutcrg_probe(struct platform_device *pdev)
+
+ /* enable power domain and clocks */
+ pm_runtime_enable(priv->dev);
+- ret = pm_runtime_get_sync(priv->dev);
++ ret = pm_runtime_resume_and_get(priv->dev);
+ if (ret < 0)
+ return dev_err_probe(priv->dev, ret, "failed to turn on power\n");
+
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index d964e3affd42ce..0eab7f3e2eab9e 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -240,6 +240,7 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ }
+
+ clk = of_clk_get_from_provider(&clkspec);
++ of_node_put(clkspec.np);
+ if (IS_ERR(clk)) {
+ pr_err("%s: failed to get atl clock %d from provider\n",
+ __func__, i);
+diff --git a/drivers/clocksource/timer-qcom.c b/drivers/clocksource/timer-qcom.c
+index b4afe3a6758351..eac4c95c6127f2 100644
+--- a/drivers/clocksource/timer-qcom.c
++++ b/drivers/clocksource/timer-qcom.c
+@@ -233,6 +233,7 @@ static int __init msm_dt_timer_init(struct device_node *np)
+ }
+
+ if (of_property_read_u32(np, "clock-frequency", &freq)) {
++ iounmap(cpu0_base);
+ pr_err("Unknown frequency\n");
+ return -EINVAL;
+ }
+@@ -243,7 +244,11 @@ static int __init msm_dt_timer_init(struct device_node *np)
+ freq /= 4;
+ writel_relaxed(DGT_CLK_CTL_DIV_4, source_base + DGT_CLK_CTL);
+
+- return msm_timer_init(freq, 32, irq, !!percpu_offset);
++ ret = msm_timer_init(freq, 32, irq, !!percpu_offset);
++ if (ret)
++ iounmap(cpu0_base);
++
++ return ret;
+ }
+ TIMER_OF_DECLARE(kpss_timer, "qcom,kpss-timer", msm_dt_timer_init);
+ TIMER_OF_DECLARE(scss_timer, "qcom,scss-timer", msm_dt_timer_init);
+diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
+index 4d3f27958fbde9..62dfa42570e425 100644
+--- a/drivers/cpufreq/ti-cpufreq.c
++++ b/drivers/cpufreq/ti-cpufreq.c
+@@ -90,6 +90,9 @@ struct ti_cpufreq_soc_data {
+ unsigned long efuse_shift;
+ unsigned long rev_offset;
+ bool multi_regulator;
++/* Backward compatibility hack: Might have missing syscon */
++#define TI_QUIRK_SYSCON_MAY_BE_MISSING 0x1
++ u8 quirks;
+ };
+
+ struct ti_cpufreq_data {
+@@ -254,6 +257,7 @@ static struct ti_cpufreq_soc_data omap34xx_soc_data = {
+ .efuse_mask = BIT(3),
+ .rev_offset = OMAP3_CONTROL_IDCODE - OMAP3_SYSCON_BASE,
+ .multi_regulator = false,
++ .quirks = TI_QUIRK_SYSCON_MAY_BE_MISSING,
+ };
+
+ /*
+@@ -281,6 +285,7 @@ static struct ti_cpufreq_soc_data omap36xx_soc_data = {
+ .efuse_mask = BIT(9),
+ .rev_offset = OMAP3_CONTROL_IDCODE - OMAP3_SYSCON_BASE,
+ .multi_regulator = true,
++ .quirks = TI_QUIRK_SYSCON_MAY_BE_MISSING,
+ };
+
+ /*
+@@ -295,6 +300,7 @@ static struct ti_cpufreq_soc_data am3517_soc_data = {
+ .efuse_mask = 0,
+ .rev_offset = OMAP3_CONTROL_IDCODE - OMAP3_SYSCON_BASE,
+ .multi_regulator = false,
++ .quirks = TI_QUIRK_SYSCON_MAY_BE_MISSING,
+ };
+
+ static struct ti_cpufreq_soc_data am625_soc_data = {
+@@ -340,7 +346,7 @@ static int ti_cpufreq_get_efuse(struct ti_cpufreq_data *opp_data,
+
+ ret = regmap_read(opp_data->syscon, opp_data->soc_data->efuse_offset,
+ &efuse);
+- if (ret == -EIO) {
++ if (opp_data->soc_data->quirks & TI_QUIRK_SYSCON_MAY_BE_MISSING && ret == -EIO) {
+ /* not a syscon register! */
+ void __iomem *regs = ioremap(OMAP3_SYSCON_BASE +
+ opp_data->soc_data->efuse_offset, 4);
+@@ -381,7 +387,7 @@ static int ti_cpufreq_get_rev(struct ti_cpufreq_data *opp_data,
+
+ ret = regmap_read(opp_data->syscon, opp_data->soc_data->rev_offset,
+ &revision);
+- if (ret == -EIO) {
++ if (opp_data->soc_data->quirks & TI_QUIRK_SYSCON_MAY_BE_MISSING && ret == -EIO) {
+ /* not a syscon register! */
+ void __iomem *regs = ioremap(OMAP3_SYSCON_BASE +
+ opp_data->soc_data->rev_offset, 4);
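
The ti-cpufreq change above replaces a blanket -EIO fallback with an opt-in quirk bit, so only the OMAP3-era SoCs that may genuinely lack a syscon take the ioremap path. A minimal illustration of the quirk-flag pattern, with -EIO standing in for the regmap error:

```c
#include <errno.h>
#include <stdio.h>

#define QUIRK_SYSCON_MAY_BE_MISSING 0x1

struct soc_data { unsigned char quirks; };

static int read_efuse(const struct soc_data *soc, int regmap_ret)
{
	/* fallback is taken only when the SoC opted in via the quirk */
	if (soc->quirks & QUIRK_SYSCON_MAY_BE_MISSING && regmap_ret == -EIO) {
		printf("falling back to ioremap path\n");
		return 0;
	}
	return regmap_ret;
}

int main(void)
{
	struct soc_data omap34xx = { .quirks = QUIRK_SYSCON_MAY_BE_MISSING };
	struct soc_data am625 = { .quirks = 0 };

	read_efuse(&omap34xx, -EIO); /* quirk set: fallback taken   */
	read_efuse(&am625, -EIO);    /* quirk clear: error returned */
	return 0;
}
```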
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index a6e123dfe394d8..5bb3401220d296 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -8,6 +8,7 @@
+
+ #define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt
+
++#include <linux/cleanup.h>
+ #include <linux/cpuhotplug.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpumask.h>
+@@ -236,19 +237,16 @@ static int sbi_cpuidle_dt_init_states(struct device *dev,
+ {
+ struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
+ struct device_node *state_node;
+- struct device_node *cpu_node;
+ u32 *states;
+ int i, ret;
+
+- cpu_node = of_cpu_device_node_get(cpu);
++ struct device_node *cpu_node __free(device_node) = of_cpu_device_node_get(cpu);
+ if (!cpu_node)
+ return -ENODEV;
+
+ states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL);
+- if (!states) {
+- ret = -ENOMEM;
+- goto fail;
+- }
++ if (!states)
++ return -ENOMEM;
+
+ /* Parse SBI specific details from state DT nodes */
+ for (i = 1; i < state_count; i++) {
+@@ -264,10 +262,8 @@ static int sbi_cpuidle_dt_init_states(struct device *dev,
+
+ pr_debug("sbi-state %#x index %d\n", states[i], i);
+ }
+- if (i != state_count) {
+- ret = -ENODEV;
+- goto fail;
+- }
++ if (i != state_count)
++ return -ENODEV;
+
+ /* Initialize optional data, used for the hierarchical topology. */
+ ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
+@@ -277,10 +273,7 @@ static int sbi_cpuidle_dt_init_states(struct device *dev,
+ /* Store states in the per-cpu struct. */
+ data->states = states;
+
+-fail:
+- of_node_put(cpu_node);
+-
+- return ret;
++ return 0;
+ }
+
+ static void sbi_cpuidle_deinit_cpu(int cpu)
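
The cpuidle hunk above adopts scope-based cleanup: declaring cpu_node with __free(device_node) makes of_node_put() run on every return path, which is what lets the goto/fail ladder disappear. A userspace sketch of the underlying GCC/Clang cleanup attribute, with illustrative helper names:

```c
#include <stdio.h>
#include <stdlib.h>

/* runs when the annotated variable goes out of scope */
static void free_ptr(void *p)
{
	free(*(void **)p);
	puts("released");
}

#define __free_on_exit __attribute__((cleanup(free_ptr)))

static int parse(int fail_early)
{
	char *buf __free_on_exit = malloc(64);

	if (!buf)
		return -1;
	if (fail_early)
		return -2;      /* buf still freed automatically */
	snprintf(buf, 64, "ok");
	return 0;               /* ...and here too */
}

int main(void)
{
	printf("early: %d\n", parse(1));
	printf("full:  %d\n", parse(0));
	return 0;
}
```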
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index fdd724228c2fa8..25c02e26725858 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -708,6 +708,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
+ GFP_KERNEL : GFP_ATOMIC;
+ struct ahash_edesc *edesc;
+
++ sg_num = pad_sg_nents(sg_num);
+ edesc = kzalloc(struct_size(edesc, sec4_sg, sg_num), flags);
+ if (!edesc)
+ return NULL;
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 9810edbb272d2b..a2964fb310473a 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -910,7 +910,18 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+
+ sev->int_rcvd = 0;
+
+- reg = FIELD_PREP(SEV_CMDRESP_CMD, cmd) | SEV_CMDRESP_IOC;
++ reg = FIELD_PREP(SEV_CMDRESP_CMD, cmd);
++
++ /*
++ * If invoked during panic handling, local interrupts are disabled so
++ * the PSP command completion interrupt can't be used.
++ * sev_wait_cmd_ioc() already checks for interrupts disabled and
++ * polls for PSP command completion. Ensure we do not request an
++ * interrupt from the PSP if irqs disabled.
++	 * interrupt from the PSP if irqs are disabled.
++ if (!irqs_disabled())
++ reg |= SEV_CMDRESP_IOC;
++
+ iowrite32(reg, sev->io_regs + sev->vdata->cmdresp_reg);
+
+ /* wait for command completion */
+@@ -2410,6 +2421,8 @@ void sev_pci_init(void)
+ return;
+
+ err:
++ sev_dev_destroy(psp_master);
++
+ psp_master->sev_data = NULL;
+ }
+
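
A compact sketch of the decision the hunk above encodes: request a completion interrupt only when interrupts can actually fire, and leave the IOC bit clear otherwise so the caller polls instead. The register layout, CMDRESP_IOC and build_cmdresp are illustrative, not the ccp driver's API:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define CMDRESP_IOC (1u << 0)

static bool irqs_disabled_sim;	/* stands in for irqs_disabled() */

static uint32_t build_cmdresp(uint32_t cmd)
{
	uint32_t reg = cmd << 16;	/* illustrative command field */

	if (!irqs_disabled_sim)
		reg |= CMDRESP_IOC;	/* request completion interrupt */
	return reg;			/* else the caller must poll */
}

int main(void)
{
	irqs_disabled_sim = false;
	printf("normal: %#x\n", build_cmdresp(0x1));
	irqs_disabled_sim = true;	/* e.g. panic handling */
	printf("panic:  %#x\n", build_cmdresp(0x1));
	return 0;
}
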
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index 10aa4da93323f1..6b536ad2ada52a 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -13,9 +13,7 @@
+ #include <linux/uacce.h>
+ #include "hpre.h"
+
+-#define HPRE_QM_ABNML_INT_MASK 0x100004
+ #define HPRE_CTRL_CNT_CLR_CE_BIT BIT(0)
+-#define HPRE_COMM_CNT_CLR_CE 0x0
+ #define HPRE_CTRL_CNT_CLR_CE 0x301000
+ #define HPRE_FSM_MAX_CNT 0x301008
+ #define HPRE_VFG_AXQOS 0x30100c
+@@ -42,7 +40,6 @@
+ #define HPRE_HAC_INT_SET 0x301500
+ #define HPRE_RNG_TIMEOUT_NUM 0x301A34
+ #define HPRE_CORE_INT_ENABLE 0
+-#define HPRE_CORE_INT_DISABLE GENMASK(21, 0)
+ #define HPRE_RDCHN_INI_ST 0x301a00
+ #define HPRE_CLSTR_BASE 0x302000
+ #define HPRE_CORE_EN_OFFSET 0x04
+@@ -66,7 +63,6 @@
+ #define HPRE_CLSTR_ADDR_INTRVL 0x1000
+ #define HPRE_CLUSTER_INQURY 0x100
+ #define HPRE_CLSTR_ADDR_INQRY_RSLT 0x104
+-#define HPRE_TIMEOUT_ABNML_BIT 6
+ #define HPRE_PASID_EN_BIT 9
+ #define HPRE_REG_RD_INTVRL_US 10
+ #define HPRE_REG_RD_TMOUT_US 1000
+@@ -203,9 +199,9 @@ static const struct hisi_qm_cap_info hpre_basic_info[] = {
+ {HPRE_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC37, 0x6C37},
+ {HPRE_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C37},
+ {HPRE_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8},
+- {HPRE_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0x1FFFFFE},
+- {HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFFFE},
+- {HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFFFE},
++ {HPRE_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0x1FFFC3E},
++ {HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFC3E},
++ {HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFC3E},
+ {HPRE_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1},
+ {HPRE_CLUSTER_NUM_CAP, 0x313c, 20, GENMASK(3, 0), 0x0, 0x4, 0x1},
+ {HPRE_CORE_TYPE_NUM_CAP, 0x313c, 16, GENMASK(3, 0), 0x0, 0x2, 0x2},
+@@ -358,6 +354,8 @@ static struct dfx_diff_registers hpre_diff_regs[] = {
+ },
+ };
+
++static const struct hisi_qm_err_ini hpre_err_ini;
++
+ bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg)
+ {
+ u32 cap_val;
+@@ -654,11 +652,6 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
+ writel(HPRE_QM_USR_CFG_MASK, qm->io_base + QM_AWUSER_M_CFG_ENABLE);
+ writel_relaxed(HPRE_QM_AXI_CFG_MASK, qm->io_base + QM_AXI_M_CFG);
+
+- /* HPRE need more time, we close this interrupt */
+- val = readl_relaxed(qm->io_base + HPRE_QM_ABNML_INT_MASK);
+- val |= BIT(HPRE_TIMEOUT_ABNML_BIT);
+- writel_relaxed(val, qm->io_base + HPRE_QM_ABNML_INT_MASK);
+-
+ if (qm->ver >= QM_HW_V3)
+ writel(HPRE_RSA_ENB | HPRE_ECC_ENB,
+ qm->io_base + HPRE_TYPES_ENB);
+@@ -667,9 +660,7 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
+
+ writel(HPRE_QM_VFG_AX_MASK, qm->io_base + HPRE_VFG_AXCACHE);
+ writel(0x0, qm->io_base + HPRE_BD_ENDIAN);
+- writel(0x0, qm->io_base + HPRE_INT_MASK);
+ writel(0x0, qm->io_base + HPRE_POISON_BYPASS);
+- writel(0x0, qm->io_base + HPRE_COMM_CNT_CLR_CE);
+ writel(0x0, qm->io_base + HPRE_ECC_BYPASS);
+
+ writel(HPRE_BD_USR_MASK, qm->io_base + HPRE_BD_ARUSR_CFG);
+@@ -759,7 +750,7 @@ static void hpre_hw_error_disable(struct hisi_qm *qm)
+
+ static void hpre_hw_error_enable(struct hisi_qm *qm)
+ {
+- u32 ce, nfe;
++ u32 ce, nfe, err_en;
+
+ ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
+ nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+@@ -776,7 +767,8 @@ static void hpre_hw_error_enable(struct hisi_qm *qm)
+ hpre_master_ooo_ctrl(qm, true);
+
+ /* enable hpre hw error interrupts */
+- writel(HPRE_CORE_INT_ENABLE, qm->io_base + HPRE_INT_MASK);
++ err_en = ce | nfe | HPRE_HAC_RAS_FE_ENABLE;
++ writel(~err_en, qm->io_base + HPRE_INT_MASK);
+ }
+
+ static inline struct hisi_qm *hpre_file_to_qm(struct hpre_debugfs_file *file)
+@@ -1161,6 +1153,7 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ qm->qp_num = pf_q_num;
+ qm->debug.curr_qm_qp_num = pf_q_num;
+ qm->qm_list = &hpre_devices;
++ qm->err_ini = &hpre_err_ini;
+ if (pf_q_num_flag)
+ set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
+ }
+@@ -1350,8 +1343,6 @@ static int hpre_pf_probe_init(struct hpre *hpre)
+
+ hpre_open_sva_prefetch(qm);
+
+- qm->err_ini = &hpre_err_ini;
+- qm->err_ini->err_info_init(qm);
+ hisi_qm_dev_err_init(qm);
+ ret = hpre_show_last_regs_init(qm);
+ if (ret)
+@@ -1380,6 +1371,18 @@ static int hpre_probe_init(struct hpre *hpre)
+ return 0;
+ }
+
++static void hpre_probe_uninit(struct hisi_qm *qm)
++{
++ if (qm->fun_type == QM_HW_VF)
++ return;
++
++ hpre_cnt_regs_clear(qm);
++ qm->debug.curr_qm_qp_num = 0;
++ hpre_show_last_regs_uninit(qm);
++ hpre_close_sva_prefetch(qm);
++ hisi_qm_dev_err_uninit(qm);
++}
++
+ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct hisi_qm *qm;
+@@ -1405,7 +1408,7 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+
+ ret = hisi_qm_start(qm);
+ if (ret)
+- goto err_with_err_init;
++ goto err_with_probe_init;
+
+ ret = hpre_debugfs_init(qm);
+ if (ret)
+@@ -1444,9 +1447,8 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ hpre_debugfs_exit(qm);
+ hisi_qm_stop(qm, QM_NORMAL);
+
+-err_with_err_init:
+- hpre_show_last_regs_uninit(qm);
+- hisi_qm_dev_err_uninit(qm);
++err_with_probe_init:
++ hpre_probe_uninit(qm);
+
+ err_with_qm_init:
+ hisi_qm_uninit(qm);
+@@ -1468,13 +1470,7 @@ static void hpre_remove(struct pci_dev *pdev)
+ hpre_debugfs_exit(qm);
+ hisi_qm_stop(qm, QM_NORMAL);
+
+- if (qm->fun_type == QM_HW_PF) {
+- hpre_cnt_regs_clear(qm);
+- qm->debug.curr_qm_qp_num = 0;
+- hpre_show_last_regs_uninit(qm);
+- hisi_qm_dev_err_uninit(qm);
+- }
+-
++ hpre_probe_uninit(qm);
+ hisi_qm_uninit(qm);
+ }
+
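
One recurring pattern in the hunks above is deriving the interrupt mask from the capability table instead of writing a hardcoded enable-all constant; with an active-low mask register, the value written is the complement of the enabled sources. A tiny sketch with illustrative values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t ce  = 0x1;		/* correctable-error sources (illustrative) */
	uint32_t nfe = 0xBFFC3E;	/* non-fatal-error sources (illustrative) */
	uint32_t fe  = 0x0;		/* fatal-error enable bits (illustrative) */
	uint32_t err_en = ce | nfe | fe;

	/* active-low mask: a 0 bit in INT_MASK unmasks that source */
	printf("INT_MASK write: %#x\n", ~err_en);
	return 0;
}
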
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index f614fd228b5624..07983af9e3e229 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -450,6 +450,7 @@ static struct qm_typical_qos_table shaper_cbs_s[] = {
+ };
+
+ static void qm_irqs_unregister(struct hisi_qm *qm);
++static int qm_reset_device(struct hisi_qm *qm);
+
+ static u32 qm_get_hw_error_status(struct hisi_qm *qm)
+ {
+@@ -4014,6 +4015,28 @@ static int qm_set_vf_mse(struct hisi_qm *qm, bool set)
+ return -ETIMEDOUT;
+ }
+
++static void qm_dev_ecc_mbit_handle(struct hisi_qm *qm)
++{
++ u32 nfe_enb = 0;
++
++ /* Kunpeng930 hardware automatically closes master ooo when an NFE occurs */
++ if (qm->ver >= QM_HW_V3)
++ return;
++
++ if (!qm->err_status.is_dev_ecc_mbit &&
++ qm->err_status.is_qm_ecc_mbit &&
++ qm->err_ini->close_axi_master_ooo) {
++ qm->err_ini->close_axi_master_ooo(qm);
++ } else if (qm->err_status.is_dev_ecc_mbit &&
++ !qm->err_status.is_qm_ecc_mbit &&
++ !qm->err_ini->close_axi_master_ooo) {
++ nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
++ writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
++ qm->io_base + QM_RAS_NFE_ENABLE);
++ writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET);
++ }
++}
++
+ static int qm_vf_reset_prepare(struct hisi_qm *qm,
+ enum qm_stop_reason stop_reason)
+ {
+@@ -4078,6 +4101,8 @@ static int qm_controller_reset_prepare(struct hisi_qm *qm)
+ return ret;
+ }
+
++ qm_dev_ecc_mbit_handle(qm);
++
+ /* PF obtains the information of VF by querying the register. */
+ qm_cmd_uninit(qm);
+
+@@ -4108,33 +4133,26 @@ static int qm_controller_reset_prepare(struct hisi_qm *qm)
+ return 0;
+ }
+
+-static void qm_dev_ecc_mbit_handle(struct hisi_qm *qm)
++static int qm_master_ooo_check(struct hisi_qm *qm)
+ {
+- u32 nfe_enb = 0;
++ u32 val;
++ int ret;
+
+- /* Kunpeng930 hardware automatically close master ooo when NFE occurs */
+- if (qm->ver >= QM_HW_V3)
+- return;
++ /* Check the device's ooo register before resetting the device. */
++ writel(ACC_MASTER_GLOBAL_CTRL_SHUTDOWN, qm->io_base + ACC_MASTER_GLOBAL_CTRL);
++ ret = readl_relaxed_poll_timeout(qm->io_base + ACC_MASTER_TRANS_RETURN,
++ val, (val == ACC_MASTER_TRANS_RETURN_RW),
++ POLL_PERIOD, POLL_TIMEOUT);
++ if (ret)
++ pci_warn(qm->pdev, "Bus lock! Please reset system.\n");
+
+- if (!qm->err_status.is_dev_ecc_mbit &&
+- qm->err_status.is_qm_ecc_mbit &&
+- qm->err_ini->close_axi_master_ooo) {
+- qm->err_ini->close_axi_master_ooo(qm);
+- } else if (qm->err_status.is_dev_ecc_mbit &&
+- !qm->err_status.is_qm_ecc_mbit &&
+- !qm->err_ini->close_axi_master_ooo) {
+- nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
+- writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
+- qm->io_base + QM_RAS_NFE_ENABLE);
+- writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET);
+- }
++ return ret;
+ }
+
+-static int qm_soft_reset(struct hisi_qm *qm)
++static int qm_soft_reset_prepare(struct hisi_qm *qm)
+ {
+ struct pci_dev *pdev = qm->pdev;
+ int ret;
+- u32 val;
+
+ /* Ensure all doorbells and mailboxes received by QM */
+ ret = qm_check_req_recv(qm);
+@@ -4155,30 +4173,23 @@ static int qm_soft_reset(struct hisi_qm *qm)
+ return ret;
+ }
+
+- qm_dev_ecc_mbit_handle(qm);
+-
+- /* OOO register set and check */
+- writel(ACC_MASTER_GLOBAL_CTRL_SHUTDOWN,
+- qm->io_base + ACC_MASTER_GLOBAL_CTRL);
+-
+- /* If bus lock, reset chip */
+- ret = readl_relaxed_poll_timeout(qm->io_base + ACC_MASTER_TRANS_RETURN,
+- val,
+- (val == ACC_MASTER_TRANS_RETURN_RW),
+- POLL_PERIOD, POLL_TIMEOUT);
+- if (ret) {
+- pci_emerg(pdev, "Bus lock! Please reset system.\n");
++ ret = qm_master_ooo_check(qm);
++ if (ret)
+ return ret;
+- }
+
+ if (qm->err_ini->close_sva_prefetch)
+ qm->err_ini->close_sva_prefetch(qm);
+
+ ret = qm_set_pf_mse(qm, false);
+- if (ret) {
++ if (ret)
+ pci_err(pdev, "Fails to disable pf MSE bit.\n");
+- return ret;
+- }
++
++ return ret;
++}
++
++static int qm_reset_device(struct hisi_qm *qm)
++{
++ struct pci_dev *pdev = qm->pdev;
+
+ /* The reset related sub-control registers are not in PCI BAR */
+ if (ACPI_HANDLE(&pdev->dev)) {
+@@ -4197,12 +4208,23 @@ static int qm_soft_reset(struct hisi_qm *qm)
+ pci_err(pdev, "Reset step %llu failed!\n", value);
+ return -EIO;
+ }
+- } else {
+- pci_err(pdev, "No reset method!\n");
+- return -EINVAL;
++
++ return 0;
+ }
+
+- return 0;
++ pci_err(pdev, "No reset method!\n");
++ return -EINVAL;
++}
++
++static int qm_soft_reset(struct hisi_qm *qm)
++{
++ int ret;
++
++ ret = qm_soft_reset_prepare(qm);
++ if (ret)
++ return ret;
++
++ return qm_reset_device(qm);
+ }
+
+ static int qm_vf_reset_done(struct hisi_qm *qm)
+@@ -5155,6 +5177,35 @@ static int qm_get_pci_res(struct hisi_qm *qm)
+ return ret;
+ }
+
++static int qm_clear_device(struct hisi_qm *qm)
++{
++ acpi_handle handle = ACPI_HANDLE(&qm->pdev->dev);
++ int ret;
++
++ if (qm->fun_type == QM_HW_VF)
++ return 0;
++
++ /* Device does not support reset, return */
++ if (!qm->err_ini->err_info_init)
++ return 0;
++ qm->err_ini->err_info_init(qm);
++
++ if (!handle)
++ return 0;
++
++ /* No reset method, return */
++ if (!acpi_has_method(handle, qm->err_info.acpi_rst))
++ return 0;
++
++ ret = qm_master_ooo_check(qm);
++ if (ret) {
++ writel(0x0, qm->io_base + ACC_MASTER_GLOBAL_CTRL);
++ return ret;
++ }
++
++ return qm_reset_device(qm);
++}
++
+ static int hisi_qm_pci_init(struct hisi_qm *qm)
+ {
+ struct pci_dev *pdev = qm->pdev;
+@@ -5184,8 +5235,14 @@ static int hisi_qm_pci_init(struct hisi_qm *qm)
+ goto err_get_pci_res;
+ }
+
++ ret = qm_clear_device(qm);
++ if (ret)
++ goto err_free_vectors;
++
+ return 0;
+
++err_free_vectors:
++ pci_free_irq_vectors(pdev);
+ err_get_pci_res:
+ qm_put_pci_res(qm);
+ err_disable_pcidev:
+@@ -5486,7 +5543,6 @@ static int qm_prepare_for_suspend(struct hisi_qm *qm)
+ {
+ struct pci_dev *pdev = qm->pdev;
+ int ret;
+- u32 val;
+
+ ret = qm->ops->set_msi(qm, false);
+ if (ret) {
+@@ -5494,18 +5550,9 @@ static int qm_prepare_for_suspend(struct hisi_qm *qm)
+ return ret;
+ }
+
+- /* shutdown OOO register */
+- writel(ACC_MASTER_GLOBAL_CTRL_SHUTDOWN,
+- qm->io_base + ACC_MASTER_GLOBAL_CTRL);
+-
+- ret = readl_relaxed_poll_timeout(qm->io_base + ACC_MASTER_TRANS_RETURN,
+- val,
+- (val == ACC_MASTER_TRANS_RETURN_RW),
+- POLL_PERIOD, POLL_TIMEOUT);
+- if (ret) {
+- pci_emerg(pdev, "Bus lock! Please reset system.\n");
++ ret = qm_master_ooo_check(qm);
++ if (ret)
+ return ret;
+- }
+
+ ret = qm_set_pf_mse(qm, false);
+ if (ret)
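
qm_master_ooo_check() centralises a shutdown-then-poll sequence previously duplicated across the reset and suspend paths. A self-contained sketch of that sequence, with plain variables standing in for the MMIO registers and an artificial "hardware drained" event in place of real bus traffic:

#include <stdio.h>
#include <stdint.h>

#define SHUTDOWN        0x1
#define TRANS_RETURN_RW 0x3
#define POLL_LIMIT      1000

static uint32_t master_global_ctrl;
static uint32_t master_trans_return;

static int master_ooo_check(void)
{
	int i;

	master_global_ctrl = SHUTDOWN;		/* request OOO shutdown */
	for (i = 0; i < POLL_LIMIT; i++) {
		if (i == 3)			/* pretend hardware drained */
			master_trans_return = TRANS_RETURN_RW;
		if (master_trans_return == TRANS_RETURN_RW)
			return 0;		/* outstanding transactions done */
	}
	fprintf(stderr, "Bus lock! Please reset system.\n");
	return -1;
}

int main(void)
{
	printf("ooo check: %d\n", master_ooo_check());
	return 0;
}
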
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 75aad04ffe5eb9..c35533d8930b21 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -1065,9 +1065,6 @@ static int sec_pf_probe_init(struct sec_dev *sec)
+ struct hisi_qm *qm = &sec->qm;
+ int ret;
+
+- qm->err_ini = &sec_err_ini;
+- qm->err_ini->err_info_init(qm);
+-
+ ret = sec_set_user_domain_and_cache(qm);
+ if (ret)
+ return ret;
+@@ -1122,6 +1119,7 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ qm->qp_num = pf_q_num;
+ qm->debug.curr_qm_qp_num = pf_q_num;
+ qm->qm_list = &sec_devices;
++ qm->err_ini = &sec_err_ini;
+ if (pf_q_num_flag)
+ set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
+ } else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
+@@ -1186,6 +1184,12 @@ static int sec_probe_init(struct sec_dev *sec)
+
+ static void sec_probe_uninit(struct hisi_qm *qm)
+ {
++ if (qm->fun_type == QM_HW_VF)
++ return;
++
++ sec_debug_regs_clear(qm);
++ sec_show_last_regs_uninit(qm);
++ sec_close_sva_prefetch(qm);
+ hisi_qm_dev_err_uninit(qm);
+ }
+
+@@ -1274,7 +1278,6 @@ static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ sec_debugfs_exit(qm);
+ hisi_qm_stop(qm, QM_NORMAL);
+ err_probe_uninit:
+- sec_show_last_regs_uninit(qm);
+ sec_probe_uninit(qm);
+ err_qm_uninit:
+ sec_qm_uninit(qm);
+@@ -1296,11 +1299,6 @@ static void sec_remove(struct pci_dev *pdev)
+ sec_debugfs_exit(qm);
+
+ (void)hisi_qm_stop(qm, QM_NORMAL);
+-
+- if (qm->fun_type == QM_HW_PF)
+- sec_debug_regs_clear(qm);
+- sec_show_last_regs_uninit(qm);
+-
+ sec_probe_uninit(qm);
+
+ sec_qm_uninit(qm);
+diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
+index 7c2d803886fdea..d07e47b48be06a 100644
+--- a/drivers/crypto/hisilicon/zip/zip_main.c
++++ b/drivers/crypto/hisilicon/zip/zip_main.c
+@@ -1141,8 +1141,6 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+
+ hisi_zip->ctrl = ctrl;
+ ctrl->hisi_zip = hisi_zip;
+- qm->err_ini = &hisi_zip_err_ini;
+- qm->err_ini->err_info_init(qm);
+
+ ret = hisi_zip_set_user_domain_and_cache(qm);
+ if (ret)
+@@ -1203,6 +1201,7 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ qm->qp_num = pf_q_num;
+ qm->debug.curr_qm_qp_num = pf_q_num;
+ qm->qm_list = &zip_devices;
++ qm->err_ini = &hisi_zip_err_ini;
+ if (pf_q_num_flag)
+ set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
+ } else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
+@@ -1269,6 +1268,16 @@ static int hisi_zip_probe_init(struct hisi_zip *hisi_zip)
+ return 0;
+ }
+
++static void hisi_zip_probe_uninit(struct hisi_qm *qm)
++{
++ if (qm->fun_type == QM_HW_VF)
++ return;
++
++ hisi_zip_show_last_regs_uninit(qm);
++ hisi_zip_close_sva_prefetch(qm);
++ hisi_qm_dev_err_uninit(qm);
++}
++
+ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct hisi_zip *hisi_zip;
+@@ -1295,7 +1304,7 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+
+ ret = hisi_qm_start(qm);
+ if (ret)
+- goto err_dev_err_uninit;
++ goto err_probe_uninit;
+
+ ret = hisi_zip_debugfs_init(qm);
+ if (ret)
+@@ -1334,9 +1343,8 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ hisi_zip_debugfs_exit(qm);
+ hisi_qm_stop(qm, QM_NORMAL);
+
+-err_dev_err_uninit:
+- hisi_zip_show_last_regs_uninit(qm);
+- hisi_qm_dev_err_uninit(qm);
++err_probe_uninit:
++ hisi_zip_probe_uninit(qm);
+
+ err_qm_uninit:
+ hisi_zip_qm_uninit(qm);
+@@ -1358,8 +1366,7 @@ static void hisi_zip_remove(struct pci_dev *pdev)
+
+ hisi_zip_debugfs_exit(qm);
+ hisi_qm_stop(qm, QM_NORMAL);
+- hisi_zip_show_last_regs_uninit(qm);
+- hisi_qm_dev_err_uninit(qm);
++ hisi_zip_probe_uninit(qm);
+ hisi_zip_qm_uninit(qm);
+ }
+
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index e810d286ee8c42..237f8700007021 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -495,10 +495,10 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device)
+ if (!device_mode)
+ continue;
+
+- free_device_compression_mode(iaa_device, device_mode);
+- iaa_device->compression_modes[i] = NULL;
+ if (iaa_compression_modes[i]->free)
+ iaa_compression_modes[i]->free(device_mode);
++ free_device_compression_mode(iaa_device, device_mode);
++ iaa_device->compression_modes[i] = NULL;
+ }
+ }
+
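
The reorder above runs the mode-specific free() hook while the per-device state it may inspect is still intact, and only then releases and clears that state. A minimal sketch of that teardown order with made-up types:

#include <stdio.h>
#include <stdlib.h>

struct mode { void (*free)(struct mode *); int *priv; };

static void mode_free_hook(struct mode *m)
{
	printf("hook sees priv=%d\n", *m->priv);	/* still valid here */
}

int main(void)
{
	struct mode m = { .free = mode_free_hook, .priv = malloc(sizeof(int)) };

	*m.priv = 42;
	if (m.free)
		m.free(&m);	/* 1) driver-specific cleanup first */
	free(m.priv);		/* 2) then release the shared state */
	m.priv = NULL;
	return 0;
}
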
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+index 8b10926cedbac2..e8c53bd76f1bd2 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+@@ -83,7 +83,7 @@
+ #define ADF_WQM_CSR_RPRESETSTS(bank) (ADF_WQM_CSR_RPRESETCTL(bank) + 4)
+
+ /* Ring interrupt */
+-#define ADF_RP_INT_SRC_SEL_F_RISE_MASK BIT(2)
++#define ADF_RP_INT_SRC_SEL_F_RISE_MASK GENMASK(1, 0)
+ #define ADF_RP_INT_SRC_SEL_F_FALL_MASK GENMASK(2, 0)
+ #define ADF_RP_INT_SRC_SEL_RANGE_WIDTH 4
+ #define ADF_COALESCED_POLL_TIMEOUT_US (1 * USEC_PER_SEC)
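
The mask fix above matters because BIT(2) and GENMASK(1, 0) select disjoint bits. A standalone re-implementation of the two kernel macros, for illustration only:

#include <stdio.h>

#define BIT(n)		(1ul << (n))
#define GENMASK(h, l)	(((~0ul) << (l)) & (~0ul >> (8 * sizeof(long) - 1 - (h))))

int main(void)
{
	printf("BIT(2)        = %#lx\n", BIT(2));	/* 0x4 */
	printf("GENMASK(1, 0) = %#lx\n", GENMASK(1, 0));	/* 0x3 */
	printf("GENMASK(2, 0) = %#lx\n", GENMASK(2, 0));	/* 0x7 */
	return 0;
}
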
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_init.c b/drivers/crypto/intel/qat/qat_common/adf_init.c
+index 74f0818c070348..55f1ff1e0b3225 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_init.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_init.c
+@@ -323,6 +323,8 @@ static void adf_dev_stop(struct adf_accel_dev *accel_dev)
+ if (hw_data->stop_timer)
+ hw_data->stop_timer(accel_dev);
+
++ hw_data->disable_iov(accel_dev);
++
+ if (wait)
+ msleep(100);
+
+@@ -386,8 +388,6 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
+
+ adf_tl_shutdown(accel_dev);
+
+- hw_data->disable_iov(accel_dev);
+-
+ if (test_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status)) {
+ hw_data->free_irq(accel_dev);
+ clear_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
+index 0e31f4b41844e0..0cee3b23dee90b 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
+@@ -18,14 +18,17 @@ void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev)
+
+ dev_dbg(&GET_DEV(accel_dev), "pf2vf notify restarting\n");
+ for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) {
+- vf->restarting = false;
++ if (vf->init && vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK)
++ vf->restarting = true;
++ else
++ vf->restarting = false;
++
+ if (!vf->init)
+ continue;
++
+ if (adf_send_pf2vf_msg(accel_dev, i, msg))
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to send restarting msg to VF%d\n", i);
+- else if (vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK)
+- vf->restarting = true;
+ }
+ }
+
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c
+index 1141258db4b65a..10c91e56d6be3b 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c
+@@ -48,6 +48,20 @@ void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ }
+ EXPORT_SYMBOL_GPL(adf_vf2pf_notify_shutdown);
+
++void adf_vf2pf_notify_restart_complete(struct adf_accel_dev *accel_dev)
++{
++ struct pfvf_message msg = { .type = ADF_VF2PF_MSGTYPE_RESTARTING_COMPLETE };
++
++ /* Check compatibility version */
++ if (accel_dev->vf.pf_compat_ver < ADF_PFVF_COMPAT_FALLBACK)
++ return;
++
++ if (adf_send_vf2pf_msg(accel_dev, msg))
++ dev_err(&GET_DEV(accel_dev),
++ "Failed to send Restarting complete event to PF\n");
++}
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_restart_complete);
++
+ int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
+ {
+ u8 pf_version;
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h
+index 71bc0e3f1d9335..d79340ab3134ff 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h
+@@ -6,6 +6,7 @@
+ #if defined(CONFIG_PCI_IOV)
+ int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev);
+ void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev);
++void adf_vf2pf_notify_restart_complete(struct adf_accel_dev *accel_dev);
+ int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev);
+ int adf_vf2pf_get_capabilities(struct adf_accel_dev *accel_dev);
+ int adf_vf2pf_get_ring_to_svc(struct adf_accel_dev *accel_dev);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
+index cdbb2d687b1b0d..4ab9ac33151957 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
+@@ -13,6 +13,7 @@
+ #include "adf_cfg.h"
+ #include "adf_cfg_strings.h"
+ #include "adf_cfg_common.h"
++#include "adf_pfvf_vf_msg.h"
+ #include "adf_transport_access_macros.h"
+ #include "adf_transport_internal.h"
+
+@@ -75,6 +76,7 @@ static void adf_dev_stop_async(struct work_struct *work)
+
+ /* Re-enable PF2VF interrupts */
+ adf_enable_pf2vf_interrupts(accel_dev);
++ adf_vf2pf_notify_restart_complete(accel_dev);
+ kfree(stop_data);
+ }
+
+diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
+index 251e088a53dff8..b11545cc5cb795 100644
+--- a/drivers/crypto/n2_core.c
++++ b/drivers/crypto/n2_core.c
+@@ -1353,6 +1353,7 @@ static int __n2_register_one_hmac(struct n2_ahash_alg *n2ahash)
+ ahash->setkey = n2_hmac_async_setkey;
+
+ base = &ahash->halg.base;
++ err = -EINVAL;
+ if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "hmac(%s)",
+ p->child_alg) >= CRYPTO_MAX_ALG_NAME)
+ goto out_free_p;
+diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c
+index c670d7d0c11ea8..6496b075a48d38 100644
+--- a/drivers/crypto/qcom-rng.c
++++ b/drivers/crypto/qcom-rng.c
+@@ -196,7 +196,7 @@ static int qcom_rng_probe(struct platform_device *pdev)
+ if (IS_ERR(rng->clk))
+ return PTR_ERR(rng->clk);
+
+- rng->of_data = (struct qcom_rng_of_data *)of_device_get_match_data(&pdev->dev);
++ rng->of_data = (struct qcom_rng_of_data *)device_get_match_data(&pdev->dev);
+
+ qcom_rng_dev = rng;
+ ret = crypto_register_rng(&qcom_rng_alg);
+@@ -247,7 +247,7 @@ static struct qcom_rng_of_data qcom_trng_of_data = {
+ };
+
+ static const struct acpi_device_id __maybe_unused qcom_rng_acpi_match[] = {
+- { .id = "QCOM8160", .driver_data = 1 },
++ { .id = "QCOM8160", .driver_data = (kernel_ulong_t)&qcom_prng_ee_of_data },
+ {}
+ };
+ MODULE_DEVICE_TABLE(acpi, qcom_rng_acpi_match);
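
The ACPI-table change above stores a pointer to the shared match-data struct in driver_data, so a single device_get_match_data()-style lookup serves both the OF and ACPI probe paths. A simplified sketch with stand-in types; rng_of_data, acpi_id and get_match_data are illustrative:

#include <stdio.h>

struct rng_of_data { int skip_init; const char *desc; };

static const struct rng_of_data prng_ee_data = { 0, "EE PRNG" };

struct acpi_id { const char *id; const void *driver_data; };

static const struct acpi_id acpi_match[] = {
	{ "QCOM8160", &prng_ee_data },	/* pointer, not a magic integer */
	{ NULL, NULL }
};

static const struct rng_of_data *get_match_data(int use_acpi)
{
	/* both firmware paths hand back the same typed pointer */
	return use_acpi ? acpi_match[0].driver_data : &prng_ee_data;
}

int main(void)
{
	printf("%s\n", get_match_data(1)->desc);
	return 0;
}
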
+diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
+index 51132a575b2766..73b6498d5e5ca6 100644
+--- a/drivers/cxl/core/pci.c
++++ b/drivers/cxl/core/pci.c
+@@ -390,10 +390,6 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
+
+ size |= temp & CXL_DVSEC_MEM_SIZE_LOW_MASK;
+ if (!size) {
+- info->dvsec_range[i] = (struct range) {
+- .start = 0,
+- .end = CXL_RESOURCE_NONE,
+- };
+ continue;
+ }
+
+@@ -411,12 +407,10 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
+
+ base |= temp & CXL_DVSEC_MEM_BASE_LOW_MASK;
+
+- info->dvsec_range[i] = (struct range) {
++ info->dvsec_range[ranges++] = (struct range) {
+ .start = base,
+ .end = base + size - 1
+ };
+-
+- ranges++;
+ }
+
+ info->ranges = ranges;
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index 0fe75eed8973b2..189a2fc29e74f5 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -316,7 +316,7 @@ static u64 ehl_err_addr_to_imc_addr(u64 eaddr, int mc)
+ if (igen6_tom <= _4GB)
+ return eaddr + igen6_tolud - _4GB;
+
+- if (eaddr < _4GB)
++ if (eaddr >= igen6_tom)
+ return eaddr + igen6_tolud - igen6_tom;
+
+ return eaddr;
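
Worked example of the corrected translation above: addresses at or above TOM (top of memory) belong to the remapped DRAM region and shift down by (TOM - TOLUD), while addresses below TOLUD map 1:1. The example values are illustrative:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t tolud = 0x80000000ull;		/* 2 GiB: top of low usable DRAM */
	uint64_t tom   = 0x180000000ull;	/* 6 GiB: top of memory */
	uint64_t eaddr = 0x1C0000000ull;	/* 7 GiB: in the remapped region */

	if (eaddr >= tom)
		eaddr = eaddr + tolud - tom;	/* -> 3 GiB into DRAM */

	printf("imc addr: %#llx\n", (unsigned long long)eaddr);
	return 0;
}
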
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index ea7a9a342dd30b..d7416166fd8a42 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -10,6 +10,7 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/spinlock.h>
++#include <linux/sizes.h>
+ #include <linux/interrupt.h>
+ #include <linux/of.h>
+
+@@ -337,6 +338,7 @@ struct synps_edac_priv {
+ * @get_mtype: Get mtype.
+ * @get_dtype: Get dtype.
+ * @get_ecc_state: Get ECC state.
++ * @get_mem_info: Get EDAC memory info
+ * @quirks: To differentiate IPs.
+ */
+ struct synps_platform_data {
+@@ -344,6 +346,9 @@ struct synps_platform_data {
+ enum mem_type (*get_mtype)(const void __iomem *base);
+ enum dev_type (*get_dtype)(const void __iomem *base);
+ bool (*get_ecc_state)(void __iomem *base);
++#ifdef CONFIG_EDAC_DEBUG
++ u64 (*get_mem_info)(struct synps_edac_priv *priv);
++#endif
+ int quirks;
+ };
+
+@@ -402,6 +407,25 @@ static int zynq_get_error_info(struct synps_edac_priv *priv)
+ return 0;
+ }
+
++#ifdef CONFIG_EDAC_DEBUG
++/**
++ * zynqmp_get_mem_info - Get the current memory info.
++ * @priv: DDR memory controller private instance data.
++ *
++ * Return: host interface address.
++ */
++static u64 zynqmp_get_mem_info(struct synps_edac_priv *priv)
++{
++ u64 hif_addr = 0, linear_addr;
++
++ linear_addr = priv->poison_addr;
++ if (linear_addr >= SZ_32G)
++ linear_addr = linear_addr - SZ_32G + SZ_2G;
++ hif_addr = linear_addr >> 3;
++ return hif_addr;
++}
++#endif
++
+ /**
+ * zynqmp_get_error_info - Get the current ECC error info.
+ * @priv: DDR memory controller private instance data.
+@@ -922,6 +946,9 @@ static const struct synps_platform_data zynqmp_edac_def = {
+ .get_mtype = zynqmp_get_mtype,
+ .get_dtype = zynqmp_get_dtype,
+ .get_ecc_state = zynqmp_get_ecc_state,
++#ifdef CONFIG_EDAC_DEBUG
++ .get_mem_info = zynqmp_get_mem_info,
++#endif
+ .quirks = (DDR_ECC_INTR_SUPPORT
+ #ifdef CONFIG_EDAC_DEBUG
+ | DDR_ECC_DATA_POISON_SUPPORT
+@@ -975,10 +1002,16 @@ MODULE_DEVICE_TABLE(of, synps_edac_match);
+ static void ddr_poison_setup(struct synps_edac_priv *priv)
+ {
+ int col = 0, row = 0, bank = 0, bankgrp = 0, rank = 0, regval;
++ const struct synps_platform_data *p_data;
+ int index;
+ ulong hif_addr = 0;
+
+- hif_addr = priv->poison_addr >> 3;
++ p_data = priv->p_data;
++
++ if (p_data->get_mem_info)
++ hif_addr = p_data->get_mem_info(priv);
++ else
++ hif_addr = priv->poison_addr >> 3;
+
+ for (index = 0; index < DDR_MAX_ROW_SHIFT; index++) {
+ if (priv->row_shift[index])
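
Worked example of zynqmp_get_mem_info() as added above: linear addresses at or above 32 GiB fold back to the 2 GiB region before the shift-by-3 conversion to a host-interface (HIF) address:

#include <stdio.h>
#include <stdint.h>

#define SZ_2G	0x80000000ull
#define SZ_32G	0x800000000ull

static uint64_t linear_to_hif(uint64_t linear)
{
	if (linear >= SZ_32G)
		linear = linear - SZ_32G + SZ_2G;
	return linear >> 3;
}

int main(void)
{
	printf("hif(0x1000)      = %#llx\n",
	       (unsigned long long)linear_to_hif(0x1000));
	printf("hif(0x800001000) = %#llx\n",
	       (unsigned long long)linear_to_hif(0x800001000ull));
	return 0;
}
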
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index 9a7dc90330a351..a888a001bedb15 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -599,11 +599,11 @@ static void complete_transaction(struct fw_card *card, int rcode, u32 request_ts
+ queue_event(client, &e->event, rsp, sizeof(*rsp) + rsp->length, NULL, 0);
+
+ break;
++ }
+ default:
+ WARN_ON(1);
+ break;
+ }
+- }
+
+ /* Drop the idr's reference */
+ client_put(client);
+diff --git a/drivers/firmware/arm_scmi/optee.c b/drivers/firmware/arm_scmi/optee.c
+index 4e7944b91e3857..0c8908d3b1d678 100644
+--- a/drivers/firmware/arm_scmi/optee.c
++++ b/drivers/firmware/arm_scmi/optee.c
+@@ -473,6 +473,13 @@ static int scmi_optee_chan_free(int id, void *p, void *data)
+ struct scmi_chan_info *cinfo = p;
+ struct scmi_optee_channel *channel = cinfo->transport_info;
+
++ /*
++ * Different protocols might share the same chan info, so a previous
++ * call might have already freed the structure.
++ */
++ if (!channel)
++ return 0;
++
+ mutex_lock(&scmi_optee_private->mu);
+ list_del(&channel->link);
+ mutex_unlock(&scmi_optee_private->mu);
+diff --git a/drivers/firmware/efi/libstub/tpm.c b/drivers/firmware/efi/libstub/tpm.c
+index df3182f2e63a56..1fd6823248ab6e 100644
+--- a/drivers/firmware/efi/libstub/tpm.c
++++ b/drivers/firmware/efi/libstub/tpm.c
+@@ -96,7 +96,7 @@ static void efi_retrieve_tcg2_eventlog(int version, efi_physical_addr_t log_loca
+ }
+
+ /* Allocate space for the logs and copy them. */
+- status = efi_bs_call(allocate_pool, EFI_LOADER_DATA,
++ status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
+ sizeof(*log_tbl) + log_size, (void **)&log_tbl);
+
+ if (status != EFI_SUCCESS) {
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 00c379a3ccebeb..0f5ac346bda434 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -1954,14 +1954,12 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ * will cause the boot stages to enter download mode, unless
+ * disabled below by a clean shutdown/reboot.
+ */
+- if (download_mode)
+- qcom_scm_set_download_mode(true);
+-
++ qcom_scm_set_download_mode(download_mode);
+
+ /*
+ * Disable SDI if indicated by DT that it is enabled by default.
+ */
+- if (of_property_read_bool(pdev->dev.of_node, "qcom,sdi-enabled"))
++ if (of_property_read_bool(pdev->dev.of_node, "qcom,sdi-enabled") || !download_mode)
+ qcom_scm_disable_sdi();
+
+ ret = of_reserved_mem_device_init(__scm->dev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 094498a0964b51..e2382566af4454 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -117,9 +117,10 @@
+ * - 3.56.0 - Update IB start address and size alignment for decode and encode
+ * - 3.57.0 - Compute tunneling on GFX10+
+ * - 3.58.0 - Add GFX12 DCC support
++ * - 3.59.0 - Cleared VRAM
+ */
+ #define KMS_DRIVER_MAJOR 3
+-#define KMS_DRIVER_MINOR 58
++#define KMS_DRIVER_MINOR 59
+ #define KMS_DRIVER_PATCHLEVEL 0
+
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index a060c28f0877cd..119eebeb16b7fa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -902,10 +902,12 @@ amdgpu_vm_tlb_flush(struct amdgpu_vm_update_params *params,
+ {
+ struct amdgpu_vm *vm = params->vm;
+
+- if (!fence || !*fence)
++ tlb_cb->vm = vm;
++ if (!fence || !*fence) {
++ amdgpu_vm_tlb_seq_cb(NULL, &tlb_cb->cb);
+ return;
++ }
+
+- tlb_cb->vm = vm;
+ if (!dma_fence_add_callback(*fence, &tlb_cb->cb,
+ amdgpu_vm_tlb_seq_cb)) {
+ dma_fence_put(vm->last_tlb_flush);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+index fb2b394bb9c555..6e9eeaeb3de1dd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+@@ -213,7 +213,7 @@ struct amd_sriov_msg_pf2vf_info {
+ uint32_t gpu_capacity;
+ /* reserved */
+ uint32_t reserved[256 - AMD_SRIOV_MSG_PF2VF_INFO_FILLED_SIZE];
+-};
++} __packed;
+
+ struct amd_sriov_msg_vf2pf_info_header {
+ /* the total structure size in byte */
+@@ -273,7 +273,7 @@ struct amd_sriov_msg_vf2pf_info {
+ uint32_t mes_info_size;
+ /* reserved */
+ uint32_t reserved[256 - AMD_SRIOV_MSG_VF2PF_INFO_FILLED_SIZE];
+-};
++} __packed;
+
+ /* mailbox message send from guest to host */
+ enum amd_sriov_mailbox_request_message {
+diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
+index 25feab188dfe69..ebf83fee43bb99 100644
+--- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
++++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
+@@ -2065,26 +2065,29 @@ amdgpu_atombios_encoder_get_lcd_info(struct amdgpu_encoder *encoder)
+ fake_edid_record = (ATOM_FAKE_EDID_PATCH_RECORD *)record;
+ if (fake_edid_record->ucFakeEDIDLength) {
+ struct edid *edid;
+- int edid_size =
+- max((int)EDID_LENGTH, (int)fake_edid_record->ucFakeEDIDLength);
+- edid = kmalloc(edid_size, GFP_KERNEL);
++ int edid_size;
++
++ if (fake_edid_record->ucFakeEDIDLength == 128)
++ edid_size = fake_edid_record->ucFakeEDIDLength;
++ else
++ edid_size = fake_edid_record->ucFakeEDIDLength * 128;
++ edid = kmemdup(&fake_edid_record->ucFakeEDIDString[0],
++ edid_size, GFP_KERNEL);
+ if (edid) {
+- memcpy((u8 *)edid, (u8 *)&fake_edid_record->ucFakeEDIDString[0],
+- fake_edid_record->ucFakeEDIDLength);
+-
+ if (drm_edid_is_valid(edid)) {
+ adev->mode_info.bios_hardcoded_edid = edid;
+ adev->mode_info.bios_hardcoded_edid_size = edid_size;
+- } else
++ } else {
+ kfree(edid);
++ }
+ }
++ record += struct_size(fake_edid_record,
++ ucFakeEDIDString,
++ edid_size);
++ } else {
++ /* empty fake edid record must be 3 bytes long */
++ record += sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ }
+- record += fake_edid_record->ucFakeEDIDLength ?
+- struct_size(fake_edid_record,
+- ucFakeEDIDString,
+- fake_edid_record->ucFakeEDIDLength) :
+- /* empty fake edid record must be 3 bytes long */
+- sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ break;
+ case LCD_PANEL_RESOLUTION_RECORD_TYPE:
+ panel_res_record = (ATOM_PANEL_RESOLUTION_PATCH_RECORD *)record;
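
The EDID-size decode above treats a stored length of exactly 128 as a byte count (one EDID block) and any other value as a count of 128-byte blocks. Distilled into a runnable snippet:

#include <stdio.h>

static int fake_edid_size(unsigned char len)
{
	return len == 128 ? len : len * 128;
}

int main(void)
{
	printf("len 128 -> %d bytes\n", fake_edid_size(128));	/* 128 */
	printf("len 2   -> %d bytes\n", fake_edid_size(2));	/* 256 */
	return 0;
}
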
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index e45d23e8287888..34b95ca700b233 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -202,12 +202,16 @@ static const struct amdgpu_hwip_reg_entry gc_gfx_queue_reg_list_12[] = {
+ SOC15_REG_ENTRY_STR(GC, 0, regCP_IB1_BUFSZ)
+ };
+
+-static const struct soc15_reg_golden golden_settings_gc_12_0[] = {
++static const struct soc15_reg_golden golden_settings_gc_12_0_rev0[] = {
+ SOC15_REG_GOLDEN_VALUE(GC, 0, regDB_MEM_CONFIG, 0x0000000f, 0x0000000f),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, regCB_HW_CONTROL_1, 0x03000000, 0x03000000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL5, 0x00000070, 0x00000020)
+ };
+
++static const struct soc15_reg_golden golden_settings_gc_12_0[] = {
++ SOC15_REG_GOLDEN_VALUE(GC, 0, regDB_MEM_CONFIG, 0x00008000, 0x00008000),
++};
++
+ #define DEFAULT_SH_MEM_CONFIG \
+ ((SH_MEM_ADDRESS_MODE_64 << SH_MEM_CONFIG__ADDRESS_MODE__SHIFT) | \
+ (SH_MEM_ALIGNMENT_MODE_UNALIGNED << SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) | \
+@@ -3446,10 +3450,14 @@ static void gfx_v12_0_init_golden_registers(struct amdgpu_device *adev)
+ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+ case IP_VERSION(12, 0, 0):
+ case IP_VERSION(12, 0, 1):
++ soc15_program_register_sequence(adev,
++ golden_settings_gc_12_0,
++ (const u32)ARRAY_SIZE(golden_settings_gc_12_0));
++
+ if (adev->rev_id == 0)
+ soc15_program_register_sequence(adev,
+- golden_settings_gc_12_0,
+- (const u32)ARRAY_SIZE(golden_settings_gc_12_0));
++ golden_settings_gc_12_0_rev0,
++ (const u32)ARRAY_SIZE(golden_settings_gc_12_0_rev0));
+ break;
+ default:
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 8aded0a67037b4..18a0b0441457a9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -160,7 +160,7 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ int api_status_off)
+ {
+ union MESAPI__QUERY_MES_STATUS mes_status_pkt;
+- signed long timeout = 3000000; /* 3000 ms */
++ signed long timeout = 2100000; /* 2100 ms */
+ struct amdgpu_device *adev = mes->adev;
+ struct amdgpu_ring *ring = &mes->ring[0];
+ struct MES_API_STATUS *api_status;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index a79a8adc3aa5b1..48b3c4e4b1cad8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -146,7 +146,7 @@ static int mes_v12_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ int api_status_off)
+ {
+ union MESAPI__QUERY_MES_STATUS mes_status_pkt;
+- signed long timeout = 3000000; /* 3000 ms */
++ signed long timeout = 2100000; /* 2100 ms */
+ struct amdgpu_device *adev = mes->adev;
+ struct amdgpu_ring *ring = &mes->ring[pipe];
+ spinlock_t *ring_lock = &mes->ring_lock[pipe];
+@@ -453,6 +453,11 @@ static int mes_v12_0_misc_op(struct amdgpu_mes *mes,
+ union MESAPI__MISC misc_pkt;
+ int pipe;
+
++ if (mes->adev->enable_uni_mes)
++ pipe = AMDGPU_MES_KIQ_PIPE;
++ else
++ pipe = AMDGPU_MES_SCHED_PIPE;
++
+ memset(&misc_pkt, 0, sizeof(misc_pkt));
+
+ misc_pkt.header.type = MES_API_TYPE_SCHEDULER;
+@@ -487,6 +492,7 @@ static int mes_v12_0_misc_op(struct amdgpu_mes *mes,
+ misc_pkt.wait_reg_mem.reg_offset2 = input->wrm_reg.reg1;
+ break;
+ case MES_MISC_OP_SET_SHADER_DEBUGGER:
++ pipe = AMDGPU_MES_SCHED_PIPE;
+ misc_pkt.opcode = MESAPI_MISC__SET_SHADER_DEBUGGER;
+ misc_pkt.set_shader_debugger.process_context_addr =
+ input->set_shader_debugger.process_context_addr;
+@@ -504,11 +510,6 @@ static int mes_v12_0_misc_op(struct amdgpu_mes *mes,
+ return -EINVAL;
+ }
+
+- if (mes->adev->enable_uni_mes)
+- pipe = AMDGPU_MES_KIQ_PIPE;
+- else
+- pipe = AMDGPU_MES_SCHED_PIPE;
+-
+ return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe,
+ &misc_pkt, sizeof(misc_pkt),
+ offsetof(union MESAPI__MISC, api_status));
+@@ -582,6 +583,7 @@ static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe)
+ mes_set_hw_res_pkt.disable_mes_log = 1;
+ mes_set_hw_res_pkt.use_different_vmid_compute = 1;
+ mes_set_hw_res_pkt.enable_reg_active_poll = 1;
++ mes_set_hw_res_pkt.enable_level_process_quantum_check = 1;
+
+ /*
+ * Keep oversubscribe timer for sdma . When we have unmapped doorbell
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+index ecee9e7d7e4c6b..403c177f24349d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+@@ -1022,13 +1022,16 @@ static void sdma_v7_0_vm_copy_pte(struct amdgpu_ib *ib,
+ unsigned bytes = count * 8;
+
+ ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) |
+- SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR);
++ SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR) |
++ SDMA_PKT_COPY_LINEAR_HEADER_CPV(1);
++
+ ib->ptr[ib->length_dw++] = bytes - 1;
+ ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */
+ ib->ptr[ib->length_dw++] = lower_32_bits(src);
+ ib->ptr[ib->length_dw++] = upper_32_bits(src);
+ ib->ptr[ib->length_dw++] = lower_32_bits(pe);
+ ib->ptr[ib->length_dw++] = upper_32_bits(pe);
++ ib->ptr[ib->length_dw++] = 0;
+
+ }
+
+@@ -1631,7 +1634,7 @@ static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev)
+ }
+
+ static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
+- .copy_pte_num_dw = 7,
++ .copy_pte_num_dw = 8,
+ .copy_pte = sdma_v7_0_vm_copy_pte,
+ .write_pte = sdma_v7_0_vm_write_pte,
+ .set_pte_pde = sdma_v7_0_vm_set_pte_pde,
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc24.c b/drivers/gpu/drm/amd/amdgpu/soc24.c
+index b0c3678cfb31d8..fd4c3d4f838798 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc24.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc24.c
+@@ -250,13 +250,6 @@ static void soc24_program_aspm(struct amdgpu_device *adev)
+ adev->nbio.funcs->program_aspm(adev);
+ }
+
+-static void soc24_enable_doorbell_aperture(struct amdgpu_device *adev,
+- bool enable)
+-{
+- adev->nbio.funcs->enable_doorbell_aperture(adev, enable);
+- adev->nbio.funcs->enable_doorbell_selfring_aperture(adev, enable);
+-}
+-
+ const struct amdgpu_ip_block_version soc24_common_ip_block = {
+ .type = AMD_IP_BLOCK_TYPE_COMMON,
+ .major = 1,
+@@ -454,6 +447,11 @@ static int soc24_common_late_init(void *handle)
+ if (amdgpu_sriov_vf(adev))
+ xgpu_nv_mailbox_get_irq(adev);
+
++ /* Enable selfring doorbell aperture late because doorbell BAR
++ * aperture will change if the BAR is resized successfully in gmc sw_init.
++ */
++ adev->nbio.funcs->enable_doorbell_selfring_aperture(adev, true);
++
+ return 0;
+ }
+
+@@ -491,7 +489,7 @@ static int soc24_common_hw_init(void *handle)
+ adev->df.funcs->hw_init(adev);
+
+ /* enable the doorbell aperture */
+- soc24_enable_doorbell_aperture(adev, true);
++ adev->nbio.funcs->enable_doorbell_aperture(adev, true);
+
+ return 0;
+ }
+@@ -500,8 +498,13 @@ static int soc24_common_hw_fini(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+- /* disable the doorbell aperture */
+- soc24_enable_doorbell_aperture(adev, false);
++ /* Disable the doorbell aperture and selfring doorbell aperture
++ * separately in hw_fini because soc24_enable_doorbell_aperture
++ * has been removed and there is no need to delay disabling the
++ * selfring doorbell.
++ */
++ adev->nbio.funcs->enable_doorbell_aperture(adev, false);
++ adev->nbio.funcs->enable_doorbell_selfring_aperture(adev, false);
+
+ if (amdgpu_sriov_vf(adev))
+ xgpu_nv_mailbox_put_irq(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+index 8d75061f9f3847..4fc307ab0ff837 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+@@ -1347,170 +1347,6 @@ static void vcn_v4_0_5_unified_ring_set_wptr(struct amdgpu_ring *ring)
+ }
+ }
+
+-static int vcn_v4_0_5_limit_sched(struct amdgpu_cs_parser *p,
+- struct amdgpu_job *job)
+-{
+- struct drm_gpu_scheduler **scheds;
+-
+- /* The create msg must be in the first IB submitted */
+- if (atomic_read(&job->base.entity->fence_seq))
+- return -EINVAL;
+-
+- /* if VCN0 is harvested, we can't support AV1 */
+- if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+- return -EINVAL;
+-
+- scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
+- [AMDGPU_RING_PRIO_0].sched;
+- drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+- return 0;
+-}
+-
+-static int vcn_v4_0_5_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+- uint64_t addr)
+-{
+- struct ttm_operation_ctx ctx = { false, false };
+- struct amdgpu_bo_va_mapping *map;
+- uint32_t *msg, num_buffers;
+- struct amdgpu_bo *bo;
+- uint64_t start, end;
+- unsigned int i;
+- void *ptr;
+- int r;
+-
+- addr &= AMDGPU_GMC_HOLE_MASK;
+- r = amdgpu_cs_find_mapping(p, addr, &bo, &map);
+- if (r) {
+- DRM_ERROR("Can't find BO for addr 0x%08llx\n", addr);
+- return r;
+- }
+-
+- start = map->start * AMDGPU_GPU_PAGE_SIZE;
+- end = (map->last + 1) * AMDGPU_GPU_PAGE_SIZE;
+- if (addr & 0x7) {
+- DRM_ERROR("VCN messages must be 8 byte aligned!\n");
+- return -EINVAL;
+- }
+-
+- bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+- amdgpu_bo_placement_from_domain(bo, bo->allowed_domains);
+- r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+- if (r) {
+- DRM_ERROR("Failed validating the VCN message BO (%d)!\n", r);
+- return r;
+- }
+-
+- r = amdgpu_bo_kmap(bo, &ptr);
+- if (r) {
+- DRM_ERROR("Failed mapping the VCN message (%d)!\n", r);
+- return r;
+- }
+-
+- msg = ptr + addr - start;
+-
+- /* Check length */
+- if (msg[1] > end - addr) {
+- r = -EINVAL;
+- goto out;
+- }
+-
+- if (msg[3] != RDECODE_MSG_CREATE)
+- goto out;
+-
+- num_buffers = msg[2];
+- for (i = 0, msg = &msg[6]; i < num_buffers; ++i, msg += 4) {
+- uint32_t offset, size, *create;
+-
+- if (msg[0] != RDECODE_MESSAGE_CREATE)
+- continue;
+-
+- offset = msg[1];
+- size = msg[2];
+-
+- if (offset + size > end) {
+- r = -EINVAL;
+- goto out;
+- }
+-
+- create = ptr + addr + offset - start;
+-
+- /* H264, HEVC and VP9 can run on any instance */
+- if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11)
+- continue;
+-
+- r = vcn_v4_0_5_limit_sched(p, job);
+- if (r)
+- goto out;
+- }
+-
+-out:
+- amdgpu_bo_kunmap(bo);
+- return r;
+-}
+-
+-#define RADEON_VCN_ENGINE_TYPE_ENCODE (0x00000002)
+-#define RADEON_VCN_ENGINE_TYPE_DECODE (0x00000003)
+-
+-#define RADEON_VCN_ENGINE_INFO (0x30000001)
+-#define RADEON_VCN_ENGINE_INFO_MAX_OFFSET 16
+-
+-#define RENCODE_ENCODE_STANDARD_AV1 2
+-#define RENCODE_IB_PARAM_SESSION_INIT 0x00000003
+-#define RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET 64
+-
+-/* return the offset in ib if id is found, -1 otherwise
+- * to speed up the searching we only search upto max_offset
+- */
+-static int vcn_v4_0_5_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int max_offset)
+-{
+- int i;
+-
+- for (i = 0; i < ib->length_dw && i < max_offset && ib->ptr[i] >= 8; i += ib->ptr[i]/4) {
+- if (ib->ptr[i + 1] == id)
+- return i;
+- }
+- return -1;
+-}
+-
+-static int vcn_v4_0_5_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+- struct amdgpu_job *job,
+- struct amdgpu_ib *ib)
+-{
+- struct amdgpu_ring *ring = amdgpu_job_ring(job);
+- struct amdgpu_vcn_decode_buffer *decode_buffer;
+- uint64_t addr;
+- uint32_t val;
+- int idx;
+-
+- /* The first instance can decode anything */
+- if (!ring->me)
+- return 0;
+-
+- /* RADEON_VCN_ENGINE_INFO is at the top of ib block */
+- idx = vcn_v4_0_5_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO,
+- RADEON_VCN_ENGINE_INFO_MAX_OFFSET);
+- if (idx < 0) /* engine info is missing */
+- return 0;
+-
+- val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
+- if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
+- decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
+-
+- if (!(decode_buffer->valid_buf_flag & 0x1))
+- return 0;
+-
+- addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
+- decode_buffer->msg_buffer_address_lo;
+- return vcn_v4_0_5_dec_msg(p, job, addr);
+- } else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
+- idx = vcn_v4_0_5_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT,
+- RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET);
+- if (idx >= 0 && ib->ptr[idx + 2] == RENCODE_ENCODE_STANDARD_AV1)
+- return vcn_v4_0_5_limit_sched(p, job);
+- }
+- return 0;
+-}
+-
+ static const struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
+ .type = AMDGPU_RING_TYPE_VCN_ENC,
+ .align_mask = 0x3f,
+@@ -1518,7 +1354,6 @@ static const struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
+ .get_rptr = vcn_v4_0_5_unified_ring_get_rptr,
+ .get_wptr = vcn_v4_0_5_unified_ring_get_wptr,
+ .set_wptr = vcn_v4_0_5_unified_ring_set_wptr,
+- .patch_cs_in_place = vcn_v4_0_5_ring_patch_cs_in_place,
+ .emit_frame_size =
+ SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
+ SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 4 +
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
+index d163d92a692f67..2b72d5b4949b6c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
+@@ -341,6 +341,10 @@ static void update_mqd_sdma(struct mqd_manager *mm, void *mqd,
+ m->sdmax_rlcx_doorbell_offset =
+ q->doorbell_off << SDMA0_QUEUE0_DOORBELL_OFFSET__OFFSET__SHIFT;
+
++ m->sdmax_rlcx_sched_cntl = (amdgpu_sdma_phase_quantum
++ << SDMA0_QUEUE0_SCHEDULE_CNTL__CONTEXT_QUANTUM__SHIFT)
++ & SDMA0_QUEUE0_SCHEDULE_CNTL__CONTEXT_QUANTUM_MASK;
++
+ m->sdma_engine_id = q->sdma_engine_id;
+ m->sdma_queue_id = q->sdma_queue_id;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 1e069fa5211ee0..74bb1e0e913487 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -176,6 +176,7 @@ MODULE_FIRMWARE(FIRMWARE_DCN_401_DMUB);
+ static int amdgpu_dm_init(struct amdgpu_device *adev);
+ static void amdgpu_dm_fini(struct amdgpu_device *adev);
+ static bool is_freesync_video_mode(const struct drm_display_mode *mode, struct amdgpu_dm_connector *aconnector);
++static void reset_freesync_config_for_crtc(struct dm_crtc_state *new_crtc_state);
+
+ static enum drm_mode_subconnector get_subconnector_type(struct dc_link *link)
+ {
+@@ -1740,7 +1741,7 @@ static struct dml2_soc_bb *dm_dmub_get_vbios_bounding_box(struct amdgpu_device *
+ /* Send the chunk */
+ ret = dm_dmub_send_vbios_gpint_command(adev, send_addrs[i], chunk, 30000);
+ if (ret != DMUB_STATUS_OK)
+- /* No need to free bb here since it shall be done unconditionally <elsewhere> */
++ /* No need to free bb here since it shall be done in dm_sw_fini() */
+ return NULL;
+ }
+
+@@ -1755,25 +1756,41 @@ static struct dml2_soc_bb *dm_dmub_get_vbios_bounding_box(struct amdgpu_device *
+ static enum dmub_ips_disable_type dm_get_default_ips_mode(
+ struct amdgpu_device *adev)
+ {
+- /*
+- * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to
+- * cause a hard hang. A fix exists for newer PMFW.
+- *
+- * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest
+- * IPS state in all cases, except for s0ix and all displays off (DPMS),
+- * where IPS2 is allowed.
+- *
+- * When checking pmfw version, use the major and minor only.
+- */
+- if (amdgpu_ip_version(adev, DCE_HWIP, 0) == IP_VERSION(3, 5, 0) &&
+- (adev->pm.fw_version & 0x00FFFF00) < 0x005D6300)
+- return DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
++ enum dmub_ips_disable_type ret = DMUB_IPS_ENABLE;
+
+- if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 5, 0))
+- return DMUB_IPS_ENABLE;
++ switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
++ case IP_VERSION(3, 5, 0):
++ /*
++ * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to
++ * cause a hard hang. A fix exists for newer PMFW.
++ *
++ * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest
++ * IPS state in all cases, except for s0ix and all displays off (DPMS),
++ * where IPS2 is allowed.
++ *
++ * When checking pmfw version, use the major and minor only.
++ */
++ if ((adev->pm.fw_version & 0x00FFFF00) < 0x005D6300)
++ ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
++ else if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(11, 5, 0))
++ /*
++ * Other ASICs with DCN35 that have residency issues with
++ * Other ASICs with DCN35 have residency issues with
++ * IPS2 when idle.
++ * We want them to use IPS2 only in display-off cases.
++ ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
++ break;
++ case IP_VERSION(3, 5, 1):
++ ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
++ break;
++ default:
++ /* ASICs older than DCN35 do not have IPSs */
++ if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 5, 0))
++ ret = DMUB_IPS_DISABLE_ALL;
++ break;
++ }
+
+- /* ASICs older than DCN35 do not have IPSs */
+- return DMUB_IPS_DISABLE_ALL;
++ return ret;
+ }
+
+ static int amdgpu_dm_init(struct amdgpu_device *adev)
+@@ -2489,8 +2506,17 @@ static int dm_sw_init(void *handle)
+ static int dm_sw_fini(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
++ struct dal_allocation *da;
++
++ list_for_each_entry(da, &adev->dm.da_list, list) {
++ if (adev->dm.bb_from_dmub == (void *) da->cpu_ptr) {
++ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
++ list_del(&da->list);
++ kfree(da);
++ break;
++ }
++ }
+
+- kfree(adev->dm.bb_from_dmub);
+ adev->dm.bb_from_dmub = NULL;
+
+ kfree(adev->dm.dmub_fb_info);
+@@ -3230,8 +3256,11 @@ static int dm_resume(void *handle)
+ drm_connector_list_iter_end(&iter);
+
+ /* Force mode set in atomic commit */
+- for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i)
++ for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+ new_crtc_state->active_changed = true;
++ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++ reset_freesync_config_for_crtc(dm_new_crtc_state);
++ }
+
+ /*
+ * atomic_check is expected to create the dc states. We need to release
+@@ -4428,6 +4457,7 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
+
+ #define AMDGPU_DM_DEFAULT_MIN_BACKLIGHT 12
+ #define AMDGPU_DM_DEFAULT_MAX_BACKLIGHT 255
++#define AMDGPU_DM_MIN_SPREAD ((AMDGPU_DM_DEFAULT_MAX_BACKLIGHT - AMDGPU_DM_DEFAULT_MIN_BACKLIGHT) / 2)
+ #define AUX_BL_DEFAULT_TRANSITION_TIME_MS 50
+
+ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+@@ -4442,6 +4472,21 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+ return;
+
+ amdgpu_acpi_get_backlight_caps(&caps);
++
++ /* validate that the firmware-provided values are sane */
++ if (caps.caps_valid) {
++ int spread = caps.max_input_signal - caps.min_input_signal;
++
++ if (caps.max_input_signal > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
++ caps.min_input_signal < AMDGPU_DM_DEFAULT_MIN_BACKLIGHT ||
++ spread > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
++ spread < AMDGPU_DM_MIN_SPREAD) {
++ DRM_DEBUG_KMS("DM: Invalid backlight caps: min=%d, max=%d\n",
++ caps.min_input_signal, caps.max_input_signal);
++ caps.caps_valid = false;
++ }
++ }
++
+ if (caps.caps_valid) {
+ dm->backlight_caps[bl_idx].caps_valid = true;
+ if (caps.aux_support)
+@@ -6471,7 +6516,8 @@ static void apply_dsc_policy_for_stream(struct amdgpu_dm_connector *aconnector,
+ dc_link_get_highest_encoding_format(aconnector->dc_link),
+ &stream->timing.dsc_cfg)) {
+ stream->timing.flags.DSC = 1;
+- DRM_DEBUG_DRIVER("%s: [%s] DSC is selected from SST RX\n", __func__, drm_connector->name);
++ DRM_DEBUG_DRIVER("%s: SST_DSC [%s] DSC is selected from SST RX\n",
++ __func__, drm_connector->name);
+ }
+ } else if (sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER) {
+ timing_bw_in_kbps = dc_bandwidth_in_kbps_from_timing(&stream->timing,
+@@ -6490,7 +6536,7 @@ static void apply_dsc_policy_for_stream(struct amdgpu_dm_connector *aconnector,
+ dc_link_get_highest_encoding_format(aconnector->dc_link),
+ &stream->timing.dsc_cfg)) {
+ stream->timing.flags.DSC = 1;
+- DRM_DEBUG_DRIVER("%s: [%s] DSC is selected from DP-HDMI PCON\n",
++ DRM_DEBUG_DRIVER("%s: SST_DSC [%s] DSC is selected from DP-HDMI PCON\n",
+ __func__, drm_connector->name);
+ }
+ }
+@@ -11651,7 +11697,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ if (dc_resource_is_dsc_encoding_supported(dc)) {
+ ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
+ if (ret) {
+- drm_dbg_atomic(dev, "compute_mst_dsc_configs_for_state() failed\n");
++ drm_dbg_atomic(dev, "MST_DSC compute_mst_dsc_configs_for_state() failed\n");
+ ret = -EINVAL;
+ goto fail;
+ }
+@@ -11672,7 +11718,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ */
+ ret = drm_dp_mst_atomic_check(state);
+ if (ret) {
+- drm_dbg_atomic(dev, "drm_dp_mst_atomic_check() failed\n");
++ drm_dbg_atomic(dev, "MST drm_dp_mst_atomic_check() failed\n");
+ goto fail;
+ }
+ status = dc_validate_global_state(dc, dm_state->context, true);
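
The backlight-caps validation added above rejects firmware min/max pairs that fall outside the driver defaults or leave too narrow a usable spread. A standalone sketch using the same constants as the hunk:

#include <stdio.h>
#include <stdbool.h>

#define DEF_MIN		12
#define DEF_MAX		255
#define MIN_SPREAD	((DEF_MAX - DEF_MIN) / 2)

static bool caps_sane(int min_sig, int max_sig)
{
	int spread = max_sig - min_sig;

	if (max_sig > DEF_MAX || min_sig < DEF_MIN ||
	    spread > DEF_MAX || spread < MIN_SPREAD)
		return false;
	return true;
}

int main(void)
{
	printf("min=12 max=255 -> %d\n", caps_sane(12, 255));	/* 1: accepted */
	printf("min=0  max=10  -> %d\n", caps_sane(0, 10));	/* 0: rejected */
	return 0;
}
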
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index b490ae67b6beb1..8befc79478f870 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -759,7 +759,7 @@ static uint8_t write_dsc_enable_synaptics_non_virtual_dpcd_mst(
+ uint8_t ret = 0;
+
+ drm_dbg_dp(aux->drm_dev,
+- "Configure DSC to non-virtual dpcd synaptics\n");
++ "MST_DSC Configure DSC to non-virtual dpcd synaptics\n");
+
+ if (enable) {
+ /* When DSC is enabled on previous boot and reboot with the hub,
+@@ -772,7 +772,7 @@ static uint8_t write_dsc_enable_synaptics_non_virtual_dpcd_mst(
+ apply_synaptics_fifo_reset_wa(aux);
+
+ ret = drm_dp_dpcd_write(aux, DP_DSC_ENABLE, &enable, 1);
+- DRM_INFO("Send DSC enable to synaptics\n");
++ DRM_INFO("MST_DSC Send DSC enable to synaptics\n");
+
+ } else {
+ /* Synaptics hub not support virtual dpcd,
+@@ -781,7 +781,7 @@ static uint8_t write_dsc_enable_synaptics_non_virtual_dpcd_mst(
+ */
+ if (!stream->link->link_status.link_active) {
+ ret = drm_dp_dpcd_write(aux, DP_DSC_ENABLE, &enable, 1);
+- DRM_INFO("Send DSC disable to synaptics\n");
++ DRM_INFO("MST_DSC Send DSC disable to synaptics\n");
+ }
+ }
+
+@@ -823,14 +823,14 @@ bool dm_helpers_dp_write_dsc_enable(
+ DP_DSC_ENABLE,
+ &enable_passthrough, 1);
+ drm_dbg_dp(dev,
+- "Sent DSC pass-through enable to virtual dpcd port, ret = %u\n",
++ "MST_DSC Sent DSC pass-through enable to virtual dpcd port, ret = %u\n",
+ ret);
+ }
+
+ ret = drm_dp_dpcd_write(aconnector->dsc_aux,
+ DP_DSC_ENABLE, &enable_dsc, 1);
+ drm_dbg_dp(dev,
+- "Sent DSC decoding enable to %s port, ret = %u\n",
++ "MST_DSC Sent DSC decoding enable to %s port, ret = %u\n",
+ (port->passthrough_aux) ? "remote RX" :
+ "virtual dpcd",
+ ret);
+@@ -838,7 +838,7 @@ bool dm_helpers_dp_write_dsc_enable(
+ ret = drm_dp_dpcd_write(aconnector->dsc_aux,
+ DP_DSC_ENABLE, &enable_dsc, 1);
+ drm_dbg_dp(dev,
+- "Sent DSC decoding disable to %s port, ret = %u\n",
++ "MST_DSC Sent DSC decoding disable to %s port, ret = %u\n",
+ (port->passthrough_aux) ? "remote RX" :
+ "virtual dpcd",
+ ret);
+@@ -848,7 +848,7 @@ bool dm_helpers_dp_write_dsc_enable(
+ DP_DSC_ENABLE,
+ &enable_passthrough, 1);
+ drm_dbg_dp(dev,
+- "Sent DSC pass-through disable to virtual dpcd port, ret = %u\n",
++ "MST_DSC Sent DSC pass-through disable to virtual dpcd port, ret = %u\n",
+ ret);
+ }
+ }
+@@ -858,12 +858,12 @@ bool dm_helpers_dp_write_dsc_enable(
+ if (stream->sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_NONE) {
+ ret = dm_helpers_dp_write_dpcd(ctx, stream->link, DP_DSC_ENABLE, &enable_dsc, 1);
+ drm_dbg_dp(dev,
+- "Send DSC %s to SST RX\n",
++ "SST_DSC Send DSC %s to SST RX\n",
+ enable_dsc ? "enable" : "disable");
+ } else if (stream->sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER) {
+ ret = dm_helpers_dp_write_dpcd(ctx, stream->link, DP_DSC_ENABLE, &enable_dsc, 1);
+ drm_dbg_dp(dev,
+- "Send DSC %s to DP-HDMI PCON\n",
++ "SST_DSC Send DSC %s to DP-HDMI PCON\n",
+ enable_dsc ? "enable" : "disable");
+ }
+ }
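
The helper above keeps a fixed ordering for MST pass-through: the upstream virtual-DPCD branch device is enabled before the endpoint decoder and disabled after it. A standalone C sketch of that ordering (the stub write function and port labels are illustrative, not the DRM DPCD API):

#include <stdbool.h>
#include <stdio.h>

/* Stub standing in for the DPCD register write (illustrative only). */
static void dpcd_write_dsc_enable(const char *port, bool on)
{
	printf("DP_DSC_ENABLE <- %d on %s\n", on, port);
}

static void set_dsc(bool enable, bool has_passthrough)
{
	if (enable) {
		if (has_passthrough)	/* upstream branch first */
			dpcd_write_dsc_enable("virtual dpcd (pass-through)", true);
		dpcd_write_dsc_enable("endpoint decoder", true);
	} else {
		dpcd_write_dsc_enable("endpoint decoder", false);
		if (has_passthrough)	/* upstream branch last */
			dpcd_write_dsc_enable("virtual dpcd (pass-through)", false);
	}
}

int main(void)
{
	set_dsc(true, true);
	set_dsc(false, true);
	return 0;
}
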
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 2e9f6da1acdcad..6c72709aa25816 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -253,7 +253,7 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
+ aconnector->dsc_aux = &aconnector->mst_root->dm_dp_aux.aux;
+
+ /* synaptics cascaded MST hub case */
+- if (!aconnector->dsc_aux && is_synaptics_cascaded_panamera(aconnector->dc_link, port))
++ if (is_synaptics_cascaded_panamera(aconnector->dc_link, port))
+ aconnector->dsc_aux = port->mgr->aux;
+
+ if (!aconnector->dsc_aux)
+@@ -578,6 +578,8 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ if (!aconnector)
+ return NULL;
+
++ DRM_DEBUG_DRIVER("%s: Create aconnector 0x%p for port 0x%p\n", __func__, aconnector, port);
++
+ connector = &aconnector->base;
+ aconnector->mst_output_port = port;
+ aconnector->mst_root = master;
+@@ -872,11 +874,11 @@ static void set_dsc_configs_from_fairness_vars(struct dsc_mst_fairness_params *p
+ if (params[i].sink) {
+ if (params[i].sink->sink_signal != SIGNAL_TYPE_VIRTUAL &&
+ params[i].sink->sink_signal != SIGNAL_TYPE_NONE)
+- DRM_DEBUG_DRIVER("%s i=%d dispname=%s\n", __func__, i,
++ DRM_DEBUG_DRIVER("MST_DSC %s i=%d dispname=%s\n", __func__, i,
+ params[i].sink->edid_caps.display_name);
+ }
+
+- DRM_DEBUG_DRIVER("dsc=%d bits_per_pixel=%d pbn=%d\n",
++ DRM_DEBUG_DRIVER("MST_DSC dsc=%d bits_per_pixel=%d pbn=%d\n",
+ params[i].timing->flags.DSC,
+ params[i].timing->dsc_cfg.bits_per_pixel,
+ vars[i + k].pbn);
+@@ -1054,6 +1056,7 @@ static int try_disable_dsc(struct drm_atomic_state *state,
+ if (next_index == -1)
+ break;
+
++ DRM_DEBUG_DRIVER("MST_DSC index #%d, try no compression\n", next_index);
+ vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps, fec_overhead_multiplier_x1000);
+ ret = drm_dp_atomic_find_time_slots(state,
+ params[next_index].port->mgr,
+@@ -1064,10 +1067,12 @@ static int try_disable_dsc(struct drm_atomic_state *state,
+
+ ret = drm_dp_mst_atomic_check(state);
+ if (ret == 0) {
++ DRM_DEBUG_DRIVER("MST_DSC index #%d, greedily disable dsc\n", next_index);
+ vars[next_index].dsc_enabled = false;
+ vars[next_index].bpp_x16 = 0;
+ } else {
+- vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps, fec_overhead_multiplier_x1000);
++ DRM_DEBUG_DRIVER("MST_DSC index #%d, restore minimum compression\n", next_index);
++ vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps, fec_overhead_multiplier_x1000);
+ ret = drm_dp_atomic_find_time_slots(state,
+ params[next_index].port->mgr,
+ params[next_index].port,
+@@ -1082,6 +1087,15 @@ static int try_disable_dsc(struct drm_atomic_state *state,
+ return 0;
+ }
+
++static void log_dsc_params(int count, struct dsc_mst_fairness_vars *vars, int k)
++{
++ int i;
++
++ for (i = 0; i < count; i++)
++ DRM_DEBUG_DRIVER("MST_DSC DSC params: stream #%d --- dsc_enabled = %d, bpp_x16 = %d, pbn = %d\n",
++ i, vars[i + k].dsc_enabled, vars[i + k].bpp_x16, vars[i + k].pbn);
++}
++
+ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ struct dc_state *dc_state,
+ struct dc_link *dc_link,
+@@ -1104,6 +1118,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ return PTR_ERR(mst_state);
+
+ /* Set up params */
++ DRM_DEBUG_DRIVER("%s: MST_DSC Set up params for %d streams\n", __func__, dc_state->stream_count);
+ for (i = 0; i < dc_state->stream_count; i++) {
+ struct dc_dsc_policy dsc_policy = {0};
+
+@@ -1132,7 +1147,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ params[count].num_slices_v = aconnector->dsc_settings.dsc_num_slices_v;
+ params[count].bpp_overwrite = aconnector->dsc_settings.dsc_bits_per_pixel;
+ params[count].compression_possible = stream->sink->dsc_caps.dsc_dec_caps.is_dsc_supported;
+- dc_dsc_get_policy_for_timing(params[count].timing, 0, &dsc_policy);
++ dc_dsc_get_policy_for_timing(params[count].timing, 0, &dsc_policy, dc_link_get_highest_encoding_format(stream->link));
+ if (!dc_dsc_compute_bandwidth_range(
+ stream->sink->ctx->dc->res_pool->dscs[0],
+ stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
+@@ -1145,6 +1160,9 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ params[count].bw_range.stream_kbps = dc_bandwidth_in_kbps_from_timing(&stream->timing,
+ dc_link_get_highest_encoding_format(dc_link));
+
++ DRM_DEBUG_DRIVER("MST_DSC #%d stream 0x%p - max_kbps = %u, min_kbps = %u, uncompressed_kbps = %u\n",
++ count, stream, params[count].bw_range.max_kbps, params[count].bw_range.min_kbps,
++ params[count].bw_range.stream_kbps);
+ count++;
+ }
+
+@@ -1159,6 +1177,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ *link_vars_start_index += count;
+
+ /* Try no compression */
++ DRM_DEBUG_DRIVER("MST_DSC Try no compression\n");
+ for (i = 0; i < count; i++) {
+ vars[i + k].aconnector = params[i].aconnector;
+ vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps, fec_overhead_multiplier_x1000);
+@@ -1177,7 +1196,10 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ return ret;
+ }
+
++ log_dsc_params(count, vars, k);
++
+ /* Try max compression */
++ DRM_DEBUG_DRIVER("MST_DSC Try max compression\n");
+ for (i = 0; i < count; i++) {
+ if (params[i].compression_possible && params[i].clock_force_enable != DSC_CLK_FORCE_DISABLE) {
+ vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps, fec_overhead_multiplier_x1000);
+@@ -1201,14 +1223,26 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ if (ret != 0)
+ return ret;
+
++ log_dsc_params(count, vars, k);
++
+ /* Optimize degree of compression */
++ DRM_DEBUG_DRIVER("MST_DSC Try optimize compression\n");
+ ret = increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k);
+- if (ret < 0)
++ if (ret < 0) {
++ DRM_DEBUG_DRIVER("MST_DSC Failed to optimize compression\n");
+ return ret;
++ }
+
++ log_dsc_params(count, vars, k);
++
++ DRM_DEBUG_DRIVER("MST_DSC Try disable compression\n");
+ ret = try_disable_dsc(state, dc_link, params, vars, count, k);
+- if (ret < 0)
++ if (ret < 0) {
++ DRM_DEBUG_DRIVER("MST_DSC Failed to disable compression\n");
+ return ret;
++ }
++
++ log_dsc_params(count, vars, k);
+
+ set_dsc_configs_from_fairness_vars(params, vars, count, k);
+
+@@ -1230,17 +1264,19 @@ static bool is_dsc_need_re_compute(
+
+ /* only check phy used by dsc mst branch */
+ if (dc_link->type != dc_connection_mst_branch)
+- return false;
++ goto out;
+
+ /* add a check for older MST DSC with no virtual DPCDs */
+ if (needs_dsc_aux_workaround(dc_link) &&
+ (!(dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_SUPPORT ||
+ dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_PASSTHROUGH_SUPPORT)))
+- return false;
++ goto out;
+
+ for (i = 0; i < MAX_PIPES; i++)
+ stream_on_link[i] = NULL;
+
++ DRM_DEBUG_DRIVER("%s: MST_DSC check on %d streams in new dc_state\n", __func__, dc_state->stream_count);
++
+ /* check if there is mode change in new request */
+ for (i = 0; i < dc_state->stream_count; i++) {
+ struct drm_crtc_state *new_crtc_state;
+@@ -1250,6 +1286,8 @@ static bool is_dsc_need_re_compute(
+ if (!stream)
+ continue;
+
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC checking #%d stream 0x%p\n", __func__, __LINE__, i, stream);
++
+ /* check if stream using the same link for mst */
+ if (stream->link != dc_link)
+ continue;
+@@ -1262,8 +1300,11 @@ static bool is_dsc_need_re_compute(
+ new_stream_on_link_num++;
+
+ new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base);
+- if (!new_conn_state)
++ if (!new_conn_state) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC no new_conn_state for stream 0x%p, aconnector 0x%p\n",
++ __func__, __LINE__, stream, aconnector);
+ continue;
++ }
+
+ if (IS_ERR(new_conn_state))
+ continue;
+@@ -1272,19 +1313,37 @@ static bool is_dsc_need_re_compute(
+ continue;
+
+ new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc);
+- if (!new_crtc_state)
++ if (!new_crtc_state) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC no new_crtc_state for crtc of stream 0x%p, aconnector 0x%p\n",
++ __func__, __LINE__, stream, aconnector);
+ continue;
++ }
+
+ if (IS_ERR(new_crtc_state))
+ continue;
+
+ if (new_crtc_state->enable && new_crtc_state->active) {
+ if (new_crtc_state->mode_changed || new_crtc_state->active_changed ||
+- new_crtc_state->connectors_changed)
+- return true;
++ new_crtc_state->connectors_changed) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC dsc recompute required. "
++ "stream 0x%p in new dc_state\n",
++ __func__, __LINE__, stream);
++ is_dsc_need_re_compute = true;
++ goto out;
++ }
+ }
+ }
+
++ if (new_stream_on_link_num == 0) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC no mode change request for streams in new dc_state\n",
++ __func__, __LINE__);
++ is_dsc_need_re_compute = false;
++ goto out;
++ }
++
++ DRM_DEBUG_DRIVER("%s: MST_DSC check on %d streams in current dc_state\n",
++ __func__, dc->current_state->stream_count);
++
+ if (new_stream_on_link_num == 0)
+ return false;
+
+@@ -1310,11 +1369,18 @@ static bool is_dsc_need_re_compute(
+
+ if (j == new_stream_on_link_num) {
+ /* not in new state */
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC dsc recompute required. "
++ "stream 0x%p in current dc_state but not in new dc_state\n",
++ __func__, __LINE__, stream);
+ is_dsc_need_re_compute = true;
+ break;
+ }
+ }
+
++out:
++ DRM_DEBUG_DRIVER("%s: MST_DSC dsc recompute %s\n",
++ __func__, is_dsc_need_re_compute ? "required" : "not required");
++
+ return is_dsc_need_re_compute;
+ }
+
+@@ -1343,6 +1409,9 @@ int compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+
+ aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+
++ DRM_DEBUG_DRIVER("%s: MST_DSC compute mst dsc configs for stream 0x%p, aconnector 0x%p\n",
++ __func__, stream, aconnector);
++
+ if (!aconnector || !aconnector->dc_sink || !aconnector->mst_output_port)
+ continue;
+
+@@ -1375,8 +1444,11 @@ int compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+ stream = dc_state->streams[i];
+
+ if (stream->timing.flags.DSC == 1)
+- if (dc_stream_add_dsc_to_resource(stream->ctx->dc, dc_state, stream) != DC_OK)
++ if (dc_stream_add_dsc_to_resource(stream->ctx->dc, dc_state, stream) != DC_OK) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC Failed to request dsc hw resource for stream 0x%p\n",
++ __func__, __LINE__, stream);
+ return -EINVAL;
++ }
+ }
+
+ return ret;
+@@ -1405,6 +1477,9 @@ static int pre_compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+
+ aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+
++ DRM_DEBUG_DRIVER("MST_DSC pre compute mst dsc configs for #%d stream 0x%p, aconnector 0x%p\n",
++ i, stream, aconnector);
++
+ if (!aconnector || !aconnector->dc_sink || !aconnector->mst_output_port)
+ continue;
+
+@@ -1494,12 +1569,12 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ int ret = 0;
+
+ if (!is_dsc_precompute_needed(state)) {
+- DRM_INFO_ONCE("DSC precompute is not needed.\n");
++ DRM_INFO_ONCE("%s:%d MST_DSC dsc precompute is not needed\n", __func__, __LINE__);
+ return 0;
+ }
+ ret = dm_atomic_get_state(state, dm_state_ptr);
+ if (ret != 0) {
+- DRM_INFO_ONCE("dm_atomic_get_state() failed\n");
++ DRM_INFO_ONCE("%s:%d MST_DSC dm_atomic_get_state() failed\n", __func__, __LINE__);
+ return ret;
+ }
+ dm_state = *dm_state_ptr;
+@@ -1553,7 +1628,8 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+
+ ret = pre_compute_mst_dsc_configs_for_state(state, local_dc_state, vars);
+ if (ret != 0) {
+- DRM_INFO_ONCE("pre_compute_mst_dsc_configs_for_state() failed\n");
++ DRM_INFO_ONCE("%s:%d MST_DSC dsc pre_compute_mst_dsc_configs_for_state() failed\n",
++ __func__, __LINE__);
+ ret = -EINVAL;
+ goto clean_exit;
+ }
+@@ -1567,12 +1643,15 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+
+ if (local_dc_state->streams[i] &&
+ dc_is_timing_changed(stream, local_dc_state->streams[i])) {
+- DRM_INFO_ONCE("crtc[%d] needs mode_changed\n", i);
++ DRM_INFO_ONCE("%s:%d MST_DSC crtc[%d] needs mode_change\n", __func__, __LINE__, i);
+ } else {
+ int ind = find_crtc_index_in_state_by_stream(state, stream);
+
+- if (ind >= 0)
++ if (ind >= 0) {
++ DRM_INFO_ONCE("%s:%d MST_DSC no mode changed for stream 0x%p\n",
++ __func__, __LINE__, stream);
+ state->crtcs[ind].new_state->mode_changed = 0;
++ }
+ }
+ }
+ clean_exit:
+@@ -1605,7 +1684,7 @@ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
+ {
+ struct dc_dsc_policy dsc_policy = {0};
+
+- dc_dsc_get_policy_for_timing(&stream->timing, 0, &dsc_policy);
++ dc_dsc_get_policy_for_timing(&stream->timing, 0, &dsc_policy, dc_link_get_highest_encoding_format(stream->link));
+ dc_dsc_compute_bandwidth_range(stream->sink->ctx->dc->res_pool->dscs[0],
+ stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
+ dsc_policy.min_target_bpp * 16,
+@@ -1697,7 +1776,7 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ end_to_end_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
+
+ if (stream_kbps <= end_to_end_bw_in_kbps) {
+- DRM_DEBUG_DRIVER("No DSC needed. End-to-end bw sufficient.");
++ DRM_DEBUG_DRIVER("MST_DSC no dsc required. End-to-end bw sufficient\n");
+ return DC_OK;
+ }
+
+@@ -1710,7 +1789,8 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ /*capable of dsc passthough. dsc bitstream along the entire path*/
+ if (aconnector->mst_output_port->passthrough_aux) {
+ if (bw_range.min_kbps > end_to_end_bw_in_kbps) {
+- DRM_DEBUG_DRIVER("DSC passthrough. Max dsc compression can't fit into end-to-end bw\n");
++ DRM_DEBUG_DRIVER("MST_DSC dsc passthrough and decode at endpoint. "
++ "Max dsc compression bw can't fit into end-to-end bw\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ } else {
+@@ -1721,7 +1801,8 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ /*Get last DP link BW capability*/
+ if (dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw)) {
+ if (stream_kbps > end_link_bw) {
+- DRM_DEBUG_DRIVER("DSC decode at last link. Mode required bw can't fit into available bw\n");
++ DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link. "
++ "Mode required bw can't fit into last link\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ }
+@@ -1734,7 +1815,8 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ virtual_channel_bw_in_kbps = kbps_from_pbn(immediate_upstream_port->full_pbn);
+ virtual_channel_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
+ if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
+- DRM_DEBUG_DRIVER("DSC decode at last link. Max dsc compression can't fit into MST available bw\n");
++ DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link. "
++ "Max dsc compression can't fit into MST available bw\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ }
+@@ -1751,9 +1833,9 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ dc_link_get_highest_encoding_format(stream->link),
+ &stream->timing.dsc_cfg)) {
+ stream->timing.flags.DSC = 1;
+- DRM_DEBUG_DRIVER("Require dsc and dsc config found\n");
++ DRM_DEBUG_DRIVER("MST_DSC require dsc and dsc config found\n");
+ } else {
+- DRM_DEBUG_DRIVER("Require dsc but can't find appropriate dsc config\n");
++ DRM_DEBUG_DRIVER("MST_DSC require dsc but can't find appropriate dsc config\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+
+@@ -1775,11 +1857,11 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+
+ if (branch_max_throughput_mps != 0 &&
+ ((stream->timing.pix_clk_100hz / 10) > branch_max_throughput_mps * 1000)) {
+- DRM_DEBUG_DRIVER("DSC is required but max throughput mps fails");
++ DRM_DEBUG_DRIVER("MST_DSC require dsc but max throughput mps fails\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ } else {
+- DRM_DEBUG_DRIVER("DSC is required but can't find common dsc config.");
++ DRM_DEBUG_DRIVER("MST_DSC require dsc but can't find common dsc config\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ #endif
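
The compute path instrumented above walks a fixed bandwidth ladder: try every stream uncompressed, fall back to maximum compression, then give bandwidth back via increase_dsc_bpp() and try_disable_dsc(). A reduced standalone sketch of the first two rungs (the kbps figures and the fits() budget check are made up; the real code converts to PBN and consults the MST atomic checker):

#include <stdbool.h>
#include <stdio.h>

struct stream { int stream_kbps, min_kbps, pbn_kbps; bool dsc; };

static bool fits(const struct stream *s, int n, int budget_kbps)
{
	int total = 0;

	for (int i = 0; i < n; i++)
		total += s[i].pbn_kbps;
	return total <= budget_kbps;
}

int main(void)
{
	struct stream s[2] = {
		{ .stream_kbps = 8000, .min_kbps = 3000 },
		{ .stream_kbps = 8000, .min_kbps = 3000 },
	};
	int budget = 10000;	/* made-up link budget */

	/* rung 1: try every stream uncompressed */
	for (int i = 0; i < 2; i++) {
		s[i].pbn_kbps = s[i].stream_kbps;
		s[i].dsc = false;
	}
	if (!fits(s, 2, budget)) {
		/* rung 2: fall back to maximum compression everywhere;
		 * increase_dsc_bpp()/try_disable_dsc() then give quality
		 * back stream by stream while fits() keeps passing */
		for (int i = 0; i < 2; i++) {
			s[i].pbn_kbps = s[i].min_kbps;
			s[i].dsc = true;
		}
	}
	printf("dsc enabled: %d %d\n", s[0].dsc, s[1].dsc);
	return 0;
}
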
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 70ee0089a20dfe..a54d7358d9e9bd 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -1204,6 +1204,12 @@ void dcn35_clk_mgr_construct(
+ ctx->dc->debug.disable_dpp_power_gate = false;
+ ctx->dc->debug.disable_hubp_power_gate = false;
+ ctx->dc->debug.disable_dsc_power_gate = false;
++
++ /* Disable dynamic IPS2 in older PMFW (93.12) for Z8 interop. */
++ if (ctx->dc->config.disable_ips == DMUB_IPS_ENABLE &&
++ ctx->dce_version == DCN_VERSION_3_5 &&
++ ((clk_mgr->base.smu_ver & 0x00FFFFFF) <= 0x005d0c00))
++ ctx->dc->config.disable_ips = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
+ } else {
+ /*let's reset the config control flag*/
+ ctx->dc->config.disable_ips = DMUB_IPS_DISABLE_ALL; /*pmfw not support it, disable it all*/
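
The cutoff above encodes PMFW 93.12 in the masked version word; assuming the usual one-byte-per-field packing in the low 24 bits, it decodes as follows (standalone sketch):

#include <stdio.h>

int main(void)
{
	unsigned int cutoff = 0x005d0c00;	/* value from the check above */

	printf("PMFW %u.%u.%u\n",		/* prints: PMFW 93.12.0 */
	       (cutoff >> 16) & 0xff,
	       (cutoff >> 8) & 0xff,
	       cutoff & 0xff);
	return 0;
}
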
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dsc.h b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
+index fe3078b8789ef1..01c07545ef6b47 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dsc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
+@@ -100,7 +100,8 @@ uint32_t dc_dsc_stream_bandwidth_overhead_in_kbps(
+ */
+ void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing,
+ uint32_t max_target_bpp_limit_override_x16,
+- struct dc_dsc_policy *policy);
++ struct dc_dsc_policy *policy,
++ const enum dc_link_encoding_format link_encoding);
+
+ void dc_dsc_policy_set_max_target_bpp_limit(uint32_t limit);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 06387b8b0aee5e..2baaf602815ec9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -859,7 +859,9 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+
+ plane->immediate_flip = plane_state->flip_immediate;
+
+- plane->composition.rect_out_height_spans_vactive = plane_state->dst_rect.height >= stream->timing.v_addressable;
++ plane->composition.rect_out_height_spans_vactive =
++ plane_state->dst_rect.height >= stream->timing.v_addressable &&
++ stream->dst.height >= stream->timing.v_addressable;
+ }
+
+ //TODO : Could be possibly moved to a common helper layer.
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
+index 6547cc2c2a773b..a8bebc60f05939 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
+@@ -810,9 +810,11 @@ static void build_synchronized_timing_groups(
+ /* find synchronizable timing groups */
+ for (j = i + 1; j < display_config->display_config.num_streams; j++) {
+ if (memcmp(master_timing,
+- &display_config->display_config.stream_descriptors[j].timing,
+- sizeof(struct dml2_timing_cfg)) == 0 &&
+- display_config->display_config.stream_descriptors[i].output.output_encoder == display_config->display_config.stream_descriptors[j].output.output_encoder) {
++ &display_config->display_config.stream_descriptors[j].timing,
++ sizeof(struct dml2_timing_cfg)) == 0 &&
++ display_config->display_config.stream_descriptors[i].output.output_encoder == display_config->display_config.stream_descriptors[j].output.output_encoder &&
++ (display_config->display_config.stream_descriptors[i].output.output_encoder != dml2_hdmi || //hdmi requires formats match
++ display_config->display_config.stream_descriptors[i].output.output_format == display_config->display_config.stream_descriptors[j].output.output_format)) {
+ set_bit_in_bitfield(&pmo->scratch.pmo_dcn4.synchronized_timing_group_masks[timing_group_idx], j);
+ set_bit_in_bitfield(&stream_mapped_mask, j);
+ }
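
The added grouping condition uses the standard (!p || q) encoding of an implication: only HDMI streams are additionally required to have matching output formats. A standalone truth-table sketch:

#include <stdbool.h>
#include <stdio.h>

static bool can_group(bool is_hdmi, bool formats_match)
{
	return !is_hdmi || formats_match;	/* is_hdmi -> formats_match */
}

int main(void)
{
	printf("%d %d %d %d\n",
	       can_group(false, false),	/* 1: non-HDMI, format free */
	       can_group(false, true),	/* 1 */
	       can_group(true, false),	/* 0: HDMI needs same format */
	       can_group(true, true));	/* 1 */
	return 0;
}
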
+diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+index a1727e5bf02479..ff5c4236ae70ef 100644
+--- a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
++++ b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+@@ -882,7 +882,7 @@ static bool setup_dsc_config(
+
+ memset(dsc_cfg, 0, sizeof(struct dc_dsc_config));
+
+- dc_dsc_get_policy_for_timing(timing, options->max_target_bpp_limit_override_x16, &policy);
++ dc_dsc_get_policy_for_timing(timing, options->max_target_bpp_limit_override_x16, &policy, link_encoding);
+ pic_width = timing->h_addressable + timing->h_border_left + timing->h_border_right;
+ pic_height = timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
+
+@@ -1171,7 +1171,8 @@ uint32_t dc_dsc_stream_bandwidth_overhead_in_kbps(
+
+ void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing,
+ uint32_t max_target_bpp_limit_override_x16,
+- struct dc_dsc_policy *policy)
++ struct dc_dsc_policy *policy,
++ const enum dc_link_encoding_format link_encoding)
+ {
+ uint32_t bpc = 0;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 1f2eb2f727dc1e..4d6e90c49ad531 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -57,6 +57,7 @@
+ #include "panel_cntl.h"
+ #include "dc_state_priv.h"
+ #include "dpcd_defs.h"
++#include "dsc.h"
+ /* include DCE11 register header files */
+ #include "dce/dce_11_0_d.h"
+ #include "dce/dce_11_0_sh_mask.h"
+@@ -1815,6 +1816,48 @@ static void get_edp_links_with_sink(
+ }
+ }
+
++static void clean_up_dsc_blocks(struct dc *dc)
++{
++ struct display_stream_compressor *dsc = NULL;
++ struct timing_generator *tg = NULL;
++ struct stream_encoder *se = NULL;
++ struct dccg *dccg = dc->res_pool->dccg;
++ struct pg_cntl *pg_cntl = dc->res_pool->pg_cntl;
++ int i;
++
++ if (dc->ctx->dce_version != DCN_VERSION_3_5 &&
++ dc->ctx->dce_version != DCN_VERSION_3_51)
++ return;
++
++ for (i = 0; i < dc->res_pool->res_cap->num_dsc; i++) {
++ struct dcn_dsc_state s = {0};
++
++ dsc = dc->res_pool->dscs[i];
++ dsc->funcs->dsc_read_state(dsc, &s);
++ if (s.dsc_fw_en) {
++ /* disable DSC in OPTC */
++ if (i < dc->res_pool->timing_generator_count) {
++ tg = dc->res_pool->timing_generators[i];
++ tg->funcs->set_dsc_config(tg, OPTC_DSC_DISABLED, 0, 0);
++ }
++ /* disable DSC in stream encoder */
++ if (i < dc->res_pool->stream_enc_count) {
++ se = dc->res_pool->stream_enc[i];
++ se->funcs->dp_set_dsc_config(se, OPTC_DSC_DISABLED, 0, 0);
++ se->funcs->dp_set_dsc_pps_info_packet(se, false, NULL, true);
++ }
++ /* disable DSC block */
++ if (dccg->funcs->set_ref_dscclk)
++ dccg->funcs->set_ref_dscclk(dccg, dsc->inst);
++ dsc->funcs->dsc_disable(dsc);
++
++ /* power down DSC */
++ if (pg_cntl != NULL)
++ pg_cntl->funcs->dsc_pg_control(pg_cntl, dsc->inst, false);
++ }
++ }
++}
++
+ /*
+ * When ASIC goes from VBIOS/VGA mode to driver/accelerated mode we need:
+ * 1. Power down all DC HW blocks
+@@ -1917,6 +1960,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
+ clk_mgr_exit_optimized_pwr_state(dc, dc->clk_mgr);
+
+ power_down_all_hw_blocks(dc);
++
++ /* DSC could be enabled on eDP during VBIOS post.
++ * Clean up DSC blocks if eDP is in link but not active.
++ */
++ if (edp_link_with_sink && (edp_stream_num == 0))
++ clean_up_dsc_blocks(dc);
++
+ disable_vga_and_power_gate_all_controllers(dc);
+ if (edp_link_with_sink && !keep_edp_vdd_on)
+ dc->hwss.edp_power_control(edp_link_with_sink, false);
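
clean_up_dsc_blocks() above tears down in a deliberate order: detach DSC from the timing generator and stream encoder before disabling and power-gating the block itself. Reduced to a standalone sketch, with stub calls standing in for the hw function pointers:

#include <stdio.h>

static void optc_detach(int i)	  { printf("OPTC %d: DSC off\n", i); }
static void se_detach(int i)	  { printf("SE %d: DSC off\n", i); }
static void dsc_disable(int i)	  { printf("DSC %d: disable\n", i); }
static void dsc_power_gate(int i) { printf("DSC %d: power down\n", i); }

static void clean_up_dsc(int num_dsc)
{
	for (int i = 0; i < num_dsc; i++) {
		optc_detach(i);		/* stop feeding compressed pixels */
		se_detach(i);		/* stop signalling DSC on the link */
		dsc_disable(i);		/* then turn the block itself off */
		dsc_power_gate(i);	/* finally gate its power domain */
	}
}

int main(void)
{
	clean_up_dsc(2);
	return 0;
}
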
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index eaeeade31ed74f..2b4dae2d4b0c18 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -398,7 +398,11 @@ bool dcn30_set_output_transfer_func(struct dc *dc,
+ }
+ }
+
+- mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++ if (mpc->funcs->set_output_gamma)
++ mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++ else
++ DC_LOG_ERROR("%s: set_output_gamma function pointer is NULL.\n", __func__);
++
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index 05d8f81daa064d..df80072174b79d 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -982,8 +982,19 @@ void dcn32_init_hw(struct dc *dc)
+ dc->caps.dmub_caps.gecc_enable = dc->ctx->dmub_srv->dmub->feature_caps.gecc_enable;
+ dc->caps.dmub_caps.mclk_sw = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch_ver;
+
+- if (dc->ctx->dmub_srv->dmub->fw_version <
++ /* for DCN401 testing only */
++ dc->caps.dmub_caps.fams_ver = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch_ver;
++ if (dc->caps.dmub_caps.fams_ver == 2) {
++ /* FAMS2 is enabled */
++ dc->debug.fams2_config.bits.enable &= true;
++ } else if (dc->ctx->dmub_srv->dmub->fw_version <
+ DMUB_FW_VERSION(7, 0, 35)) {
++ /* FAMS2 is disabled */
++ dc->debug.fams2_config.bits.enable = false;
++ if (dc->debug.using_dml2 && dc->res_pool->funcs->update_bw_bounding_box) {
++ /* update bounding box if FAMS2 disabled */
++ dc->res_pool->funcs->update_bw_bounding_box(dc, dc->clk_mgr->bw_params);
++ }
+ dc->debug.force_disable_subvp = true;
+ dc->debug.disable_fpo_optimizations = true;
+ }
+@@ -1018,6 +1029,20 @@ void dcn32_update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ struct dsc_config dsc_cfg;
+ struct dsc_optc_config dsc_optc_cfg = {0};
+ enum optc_dsc_mode optc_dsc_mode;
++ struct dcn_dsc_state dsc_state = {0};
++
++ if (!dsc) {
++ DC_LOG_DSC("DSC is NULL for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++ return;
++ }
++
++ if (dsc->funcs->dsc_read_state) {
++ dsc->funcs->dsc_read_state(dsc, &dsc_state);
++ if (!dsc_state.dsc_fw_en) {
++ DC_LOG_DSC("DSC has been disabled for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++ return;
++ }
++ }
+
+ /* Enable DSC hw block */
+ dsc_cfg.pic_width = (stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right) / opp_cnt;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index d5e9aec52a0597..80db40787e0197 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -375,7 +375,20 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ struct dsc_config dsc_cfg;
+ struct dsc_optc_config dsc_optc_cfg = {0};
+ enum optc_dsc_mode optc_dsc_mode;
++ struct dcn_dsc_state dsc_state = {0};
+
++ if (!dsc) {
++ DC_LOG_DSC("DSC is NULL for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++ return;
++ }
++
++ if (dsc->funcs->dsc_read_state) {
++ dsc->funcs->dsc_read_state(dsc, &dsc_state);
++ if (!dsc_state.dsc_fw_en) {
++ DC_LOG_DSC("DSC has been disabled for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++ return;
++ }
++ }
+ /* Enable DSC hw block */
+ dsc_cfg.pic_width = (stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right) / opp_cnt;
+ dsc_cfg.pic_height = stream->timing.v_addressable + stream->timing.v_border_top + stream->timing.v_border_bottom;
+diff --git a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
+index e1257404357b11..cec68c5dba1322 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
+@@ -28,6 +28,8 @@
+ #include "dccg.h"
+ #include "clk_mgr.h"
+
++#define DC_LOGGER link->ctx->logger
++
+ void set_hpo_dp_throttled_vcp_size(struct pipe_ctx *pipe_ctx,
+ struct fixed31_32 throttled_vcp_size)
+ {
+@@ -108,6 +110,11 @@ void enable_hpo_dp_link_output(struct dc_link *link,
+ enum clock_source_id clock_source,
+ const struct dc_link_settings *link_settings)
+ {
++ if (!link_res->hpo_dp_link_enc) {
++ DC_LOG_ERROR("%s: invalid hpo_dp_link_enc\n", __func__);
++ return;
++ }
++
+ if (link->dc->res_pool->dccg->funcs->set_symclk32_le_root_clock_gating)
+ link->dc->res_pool->dccg->funcs->set_symclk32_le_root_clock_gating(
+ link->dc->res_pool->dccg,
+@@ -124,6 +131,11 @@ void disable_hpo_dp_link_output(struct dc_link *link,
+ const struct link_resource *link_res,
+ enum signal_type signal)
+ {
++ if (!link_res->hpo_dp_link_enc) {
++ DC_LOG_ERROR("%s: invalid hpo_dp_link_enc\n", __func__);
++ return;
++ }
++
+ link_res->hpo_dp_link_enc->funcs->link_disable(link_res->hpo_dp_link_enc);
+ link_res->hpo_dp_link_enc->funcs->disable_link_phy(
+ link_res->hpo_dp_link_enc, signal);
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index ddf251901fb331..c7e67e947f161d 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -2153,6 +2153,7 @@ static bool dcn35_resource_construct(
+
+ dc->dml2_options.max_segments_per_hubp = 24;
+ dc->dml2_options.det_segment_size = DCN3_2_DET_SEG_SIZE;/*todo*/
++ dc->dml2_options.override_det_buffer_size_kbytes = true;
+
+ if (dc->config.sdpif_request_limit_words_per_umc == 0)
+ dc->config.sdpif_request_limit_words_per_umc = 16;/*todo*/
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index 4c5e722baa3a68..da9101b83e8c1e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -736,7 +736,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .hdmichar = true,
+ .dpstream = true,
+ .symclk32_se = true,
+- .symclk32_le = true,
++ .symclk32_le = false,
+ .symclk_fe = true,
+ .physymclk = false,
+ .dpiasymclk = true,
+@@ -2133,6 +2133,7 @@ static bool dcn351_resource_construct(
+
+ dc->dml2_options.max_segments_per_hubp = 24;
+ dc->dml2_options.det_segment_size = DCN3_2_DET_SEG_SIZE;/*todo*/
++ dc->dml2_options.override_det_buffer_size_kbytes = true;
+
+ if (dc->config.sdpif_request_limit_words_per_umc == 0)
+ dc->config.sdpif_request_limit_words_per_umc = 16;/*todo*/
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index a40e6590215a64..bbd259cea4f4f6 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -134,7 +134,7 @@ unsigned int mod_freesync_calc_v_total_from_refresh(
+
+ v_total = div64_u64(div64_u64(((unsigned long long)(
+ frame_duration_in_ns) * (stream->timing.pix_clk_100hz / 10)),
+- stream->timing.h_total), 1000000);
++ stream->timing.h_total) + 500000, 1000000);
+
+ /* v_total cannot be less than nominal */
+ if (v_total < stream->timing.v_total) {
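
The freesync change switches the final division by 1000000 from truncation to round-to-nearest by pre-adding half the divisor, so v_total no longer loses almost a full unit of precision. A standalone illustration with a made-up numerator:

#include <stdio.h>

int main(void)
{
	unsigned long long x = 1349999999ULL;	/* ~1349.999999 after /1e6 */

	printf("truncated: %llu\n", x / 1000000);		/* 1349 */
	printf("rounded:   %llu\n", (x + 500000) / 1000000);	/* 1350 */
	return 0;
}
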
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index a887ab945dfa2f..1d024b122b0c02 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2569,10 +2569,14 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ }
+ }
+
+- return smu_cmn_send_smc_msg_with_param(smu,
++ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_SetWorkloadMask,
+ workload_mask,
+ NULL);
++ if (!ret)
++ smu->workload_mask = workload_mask;
++
++ return ret;
+ }
+
+ static bool smu_v13_0_0_is_mode1_reset_supported(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 7bc95c4043778d..b891a5e0a3969a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2501,8 +2501,11 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
+ return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+ 1 << workload_type, NULL);
++
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ else
++ smu->workload_mask = (1 << workload_type);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 2b45adecbed2e5..ba17d01e64396a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1570,10 +1570,14 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+ if (workload_type < 0)
+ return -EINVAL;
+
+- return smu_cmn_send_smc_msg_with_param(smu,
++ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_SetWorkloadMask,
+ 1 << workload_type,
+ NULL);
++ if (!ret)
++ smu->workload_mask = 1 << workload_type;
++
++ return ret;
+ }
+
+ static int smu_v14_0_2_baco_enter(struct smu_context *smu)
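
All three SMU changes above follow the same pattern: send SMU_MSG_SetWorkloadMask first and cache the mask in smu->workload_mask only when the message succeeds. A reduced sketch (the stub send function stands in for smu_cmn_send_smc_msg_with_param()):

#include <stdio.h>

struct smu_ctx { unsigned int workload_mask; };

/* Stub standing in for the firmware message (illustrative only). */
static int send_workload_mask(unsigned int mask)
{
	(void)mask;
	return 0;	/* 0 = firmware accepted the message */
}

static int set_power_profile(struct smu_ctx *smu, unsigned int mask)
{
	int ret = send_workload_mask(mask);

	if (!ret)	/* cache only after the firmware accepted it */
		smu->workload_mask = mask;
	return ret;
}

int main(void)
{
	struct smu_ctx smu = { 0 };

	set_power_profile(&smu, 1u << 3);
	printf("cached mask: 0x%x\n", smu.workload_mask);
	return 0;
}
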
+diff --git a/drivers/gpu/drm/bridge/lontium-lt8912b.c b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+index 1a9defa15663cf..e265ab3c8c9293 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c
++++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+@@ -422,22 +422,6 @@ static const struct drm_connector_funcs lt8912_connector_funcs = {
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ };
+
+-static enum drm_mode_status
+-lt8912_connector_mode_valid(struct drm_connector *connector,
+- struct drm_display_mode *mode)
+-{
+- if (mode->clock > 150000)
+- return MODE_CLOCK_HIGH;
+-
+- if (mode->hdisplay > 1920)
+- return MODE_BAD_HVALUE;
+-
+- if (mode->vdisplay > 1080)
+- return MODE_BAD_VVALUE;
+-
+- return MODE_OK;
+-}
+-
+ static int lt8912_connector_get_modes(struct drm_connector *connector)
+ {
+ const struct drm_edid *drm_edid;
+@@ -463,7 +447,6 @@ static int lt8912_connector_get_modes(struct drm_connector *connector)
+
+ static const struct drm_connector_helper_funcs lt8912_connector_helper_funcs = {
+ .get_modes = lt8912_connector_get_modes,
+- .mode_valid = lt8912_connector_mode_valid,
+ };
+
+ static void lt8912_bridge_mode_set(struct drm_bridge *bridge,
+@@ -605,6 +588,23 @@ static void lt8912_bridge_detach(struct drm_bridge *bridge)
+ drm_bridge_hpd_disable(lt->hdmi_port);
+ }
+
++static enum drm_mode_status
++lt8912_bridge_mode_valid(struct drm_bridge *bridge,
++ const struct drm_display_info *info,
++ const struct drm_display_mode *mode)
++{
++ if (mode->clock > 150000)
++ return MODE_CLOCK_HIGH;
++
++ if (mode->hdisplay > 1920)
++ return MODE_BAD_HVALUE;
++
++ if (mode->vdisplay > 1080)
++ return MODE_BAD_VVALUE;
++
++ return MODE_OK;
++}
++
+ static enum drm_connector_status
+ lt8912_bridge_detect(struct drm_bridge *bridge)
+ {
+@@ -635,6 +635,7 @@ static const struct drm_edid *lt8912_bridge_edid_read(struct drm_bridge *bridge,
+ static const struct drm_bridge_funcs lt8912_bridge_funcs = {
+ .attach = lt8912_bridge_attach,
+ .detach = lt8912_bridge_detach,
++ .mode_valid = lt8912_bridge_mode_valid,
+ .mode_set = lt8912_bridge_mode_set,
+ .enable = lt8912_bridge_enable,
+ .detect = lt8912_bridge_detect,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+index 1b111e2c33472b..752339d33f39a5 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+@@ -1174,7 +1174,7 @@ static int gsc_bind(struct device *dev, struct device *master, void *data)
+ struct exynos_drm_ipp *ipp = &ctx->ipp;
+
+ ctx->drm_dev = drm_dev;
+- ctx->drm_dev = drm_dev;
++ ipp->drm_dev = drm_dev;
+ exynos_drm_register_dma(drm_dev, dev, &ctx->dma_priv);
+
+ exynos_drm_ipp_register(dev, ipp, &ipp_funcs,
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index 6f34f573e127ec..a90504359e8d27 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -69,6 +69,8 @@ struct mtk_crtc {
+ /* lock for display hardware access */
+ struct mutex hw_lock;
+ bool config_updating;
++ /* lock for config_updating to cmd buffer */
++ spinlock_t config_lock;
+ };
+
+ struct mtk_crtc_state {
+@@ -106,11 +108,16 @@ static void mtk_crtc_finish_page_flip(struct mtk_crtc *mtk_crtc)
+
+ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
+ {
++ unsigned long flags;
++
+ drm_crtc_handle_vblank(&mtk_crtc->base);
++
++ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) {
+ mtk_crtc_finish_page_flip(mtk_crtc);
+ mtk_crtc->pending_needs_vblank = false;
+ }
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+ }
+
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+@@ -308,12 +315,19 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ struct mtk_crtc *mtk_crtc = container_of(cmdq_cl, struct mtk_crtc, cmdq_client);
+ struct mtk_crtc_state *state;
+ unsigned int i;
++ unsigned long flags;
+
+ if (data->sta < 0)
+ return;
+
+ state = to_mtk_crtc_state(mtk_crtc->base.state);
+
++ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
++ if (mtk_crtc->config_updating) {
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++ goto ddp_cmdq_cb_out;
++ }
++
+ state->pending_config = false;
+
+ if (mtk_crtc->pending_planes) {
+@@ -340,6 +354,10 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ mtk_crtc->pending_async_planes = false;
+ }
+
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
++ddp_cmdq_cb_out:
++
+ mtk_crtc->cmdq_vblank_cnt = 0;
+ wake_up(&mtk_crtc->cb_blocking_queue);
+ }
+@@ -449,6 +467,7 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_crtc *mtk_crtc)
+ {
+ struct drm_device *drm = mtk_crtc->base.dev;
+ struct drm_crtc *crtc = &mtk_crtc->base;
++ unsigned long flags;
+ int i;
+
+ for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
+@@ -480,10 +499,10 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_crtc *mtk_crtc)
+ pm_runtime_put(drm->dev);
+
+ if (crtc->state->event && !crtc->state->active) {
+- spin_lock_irq(&crtc->dev->event_lock);
++ spin_lock_irqsave(&crtc->dev->event_lock, flags);
+ drm_crtc_send_vblank_event(crtc, crtc->state->event);
+ crtc->state->event = NULL;
+- spin_unlock_irq(&crtc->dev->event_lock);
++ spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+ }
+ }
+
+@@ -569,9 +588,14 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+ struct mtk_drm_private *priv = crtc->dev->dev_private;
+ unsigned int pending_planes = 0, pending_async_planes = 0;
+ int i;
++ unsigned long flags;
+
+ mutex_lock(&mtk_crtc->hw_lock);
++
++ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ mtk_crtc->config_updating = true;
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ if (needs_vblank)
+ mtk_crtc->pending_needs_vblank = true;
+
+@@ -625,7 +649,10 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+ mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+ }
+ #endif
++ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ mtk_crtc->config_updating = false;
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ mutex_unlock(&mtk_crtc->hw_lock);
+ }
+
+@@ -1068,6 +1095,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
+ drm_mode_crtc_set_gamma_size(&mtk_crtc->base, gamma_lut_size);
+ drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, has_ctm, gamma_lut_size);
+ mutex_init(&mtk_crtc->hw_lock);
++ spin_lock_init(&mtk_crtc->config_lock);
+
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+ i = priv->mbox_index++;
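
The new config_lock pairs a writer that toggles config_updating under the lock with a command-queue callback that re-checks it under the same lock before consuming pending plane state. A userspace pthread sketch of the shape (the kernel code uses spin_lock_irqsave() because the callback runs in IRQ context):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;
static bool config_updating;

static void cmdq_cb(void)
{
	pthread_mutex_lock(&config_lock);
	if (config_updating) {		/* update in flight: leave state alone */
		pthread_mutex_unlock(&config_lock);
		return;
	}
	/* ... consume pending plane state here ... */
	pthread_mutex_unlock(&config_lock);
}

static void update_config(void)
{
	pthread_mutex_lock(&config_lock);
	config_updating = true;
	pthread_mutex_unlock(&config_lock);

	/* ... build and flush the command buffer ... */

	pthread_mutex_lock(&config_lock);
	config_updating = false;
	pthread_mutex_unlock(&config_lock);
}

int main(void)
{
	update_config();
	cmdq_cb();
	return 0;
}
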
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index c0b5373e90d713..7cfefb5e62218d 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -65,6 +65,8 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+
+ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ {
++ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++ struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+ struct msm_ringbuffer *ring = submit->ring;
+ struct drm_gem_object *obj;
+ uint32_t *ptr, dwords;
+@@ -109,6 +111,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit
+ }
+ }
+
++ a5xx_gpu->last_seqno[ring->id] = submit->seqno;
+ a5xx_flush(gpu, ring, true);
+ a5xx_preempt_trigger(gpu);
+
+@@ -150,9 +153,13 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
+ OUT_RING(ring, 1);
+
+- /* Enable local preemption for finegrain preemption */
++ /*
++ * Disable local preemption by default because it requires
++ * user-space to be aware of it and provide additional handling
++ * to restore rendering state or do various flushes on switch.
++ */
+ OUT_PKT7(ring, CP_PREEMPT_ENABLE_LOCAL, 1);
+- OUT_RING(ring, 0x1);
++ OUT_RING(ring, 0x0);
+
+ /* Allow CP_CONTEXT_SWITCH_YIELD packets in the IB2 */
+ OUT_PKT7(ring, CP_YIELD_ENABLE, 1);
+@@ -206,6 +213,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ /* Write the fence to the scratch register */
+ OUT_PKT4(ring, REG_A5XX_CP_SCRATCH_REG(2), 1);
+ OUT_RING(ring, submit->seqno);
++ a5xx_gpu->last_seqno[ring->id] = submit->seqno;
+
+ /*
+ * Execute a CACHE_FLUSH_TS event. This will ensure that the
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+index c7187bcc5e9082..9c0d701fe4b85b 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+@@ -34,8 +34,10 @@ struct a5xx_gpu {
+ struct drm_gem_object *preempt_counters_bo[MSM_GPU_MAX_RINGS];
+ struct a5xx_preempt_record *preempt[MSM_GPU_MAX_RINGS];
+ uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
++ uint32_t last_seqno[MSM_GPU_MAX_RINGS];
+
+ atomic_t preempt_state;
++ spinlock_t preempt_start_lock;
+ struct timer_list preempt_timer;
+
+ struct drm_gem_object *shadow_bo;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+index f58dd564d122ba..0469fea5501083 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+@@ -55,6 +55,8 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+ /* Return the highest priority ringbuffer with something in it */
+ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
+ {
++ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++ struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+ unsigned long flags;
+ int i;
+
+@@ -64,6 +66,8 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
+
+ spin_lock_irqsave(&ring->preempt_lock, flags);
+ empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
++ if (!empty && ring == a5xx_gpu->cur_ring)
++ empty = ring->memptrs->fence == a5xx_gpu->last_seqno[i];
+ spin_unlock_irqrestore(&ring->preempt_lock, flags);
+
+ if (!empty)
+@@ -97,12 +101,19 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+ if (gpu->nr_rings == 1)
+ return;
+
++ /*
++ * Serialize preemption start to ensure that we always make
++ * the decision on the latest state. Otherwise we can get stuck
++ * in a lower-priority or empty ring.
++ */
++ spin_lock_irqsave(&a5xx_gpu->preempt_start_lock, flags);
++
+ /*
+ * Try to start preemption by moving from NONE to START. If
+ * unsuccessful, a preemption is already in flight
+ */
+ if (!try_preempt_state(a5xx_gpu, PREEMPT_NONE, PREEMPT_START))
+- return;
++ goto out;
+
+ /* Get the next ring to preempt to */
+ ring = get_next_ring(gpu);
+@@ -127,9 +138,11 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+ set_preempt_state(a5xx_gpu, PREEMPT_ABORT);
+ update_wptr(gpu, a5xx_gpu->cur_ring);
+ set_preempt_state(a5xx_gpu, PREEMPT_NONE);
+- return;
++ goto out;
+ }
+
++ spin_unlock_irqrestore(&a5xx_gpu->preempt_start_lock, flags);
++
+ /* Make sure the wptr doesn't update while we're in motion */
+ spin_lock_irqsave(&ring->preempt_lock, flags);
+ a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring);
+@@ -152,6 +165,10 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+
+ /* And actually start the preemption */
+ gpu_write(gpu, REG_A5XX_CP_CONTEXT_SWITCH_CNTL, 1);
++ return;
++
++out:
++ spin_unlock_irqrestore(&a5xx_gpu->preempt_start_lock, flags);
+ }
+
+ void a5xx_preempt_irq(struct msm_gpu *gpu)
+@@ -188,6 +205,12 @@ void a5xx_preempt_irq(struct msm_gpu *gpu)
+ update_wptr(gpu, a5xx_gpu->cur_ring);
+
+ set_preempt_state(a5xx_gpu, PREEMPT_NONE);
++
++ /*
++ * Try to trigger preemption again in case there was a submit or
++ * retire during ring switch
++ */
++ a5xx_preempt_trigger(gpu);
+ }
+
+ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
+@@ -204,6 +227,8 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
+ return;
+
+ for (i = 0; i < gpu->nr_rings; i++) {
++ a5xx_gpu->preempt[i]->data = 0;
++ a5xx_gpu->preempt[i]->info = 0;
+ a5xx_gpu->preempt[i]->wptr = 0;
+ a5xx_gpu->preempt[i]->rptr = 0;
+ a5xx_gpu->preempt[i]->rbase = gpu->rb[i]->iova;
+@@ -298,5 +323,6 @@ void a5xx_preempt_init(struct msm_gpu *gpu)
+ }
+ }
+
++ spin_lock_init(&a5xx_gpu->preempt_start_lock);
+ timer_setup(&a5xx_gpu->preempt_timer, a5xx_preempt_timer, 0);
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 789a11416f7a45..0fcae53c0b140b 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -388,18 +388,18 @@ static void a7xx_get_debugbus_blocks(struct msm_gpu *gpu,
+ const u32 *debugbus_blocks, *gbif_debugbus_blocks;
+ int i;
+
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ debugbus_blocks = gen7_0_0_debugbus_blocks;
+ debugbus_blocks_count = ARRAY_SIZE(gen7_0_0_debugbus_blocks);
+ gbif_debugbus_blocks = a7xx_gbif_debugbus_blocks;
+ gbif_debugbus_blocks_count = ARRAY_SIZE(a7xx_gbif_debugbus_blocks);
+- } else if (adreno_is_a740_family(adreno_gpu)) {
++ } else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ debugbus_blocks = gen7_2_0_debugbus_blocks;
+ debugbus_blocks_count = ARRAY_SIZE(gen7_2_0_debugbus_blocks);
+ gbif_debugbus_blocks = a7xx_gbif_debugbus_blocks;
+ gbif_debugbus_blocks_count = ARRAY_SIZE(a7xx_gbif_debugbus_blocks);
+ } else {
+- BUG_ON(!adreno_is_a750(adreno_gpu));
++ BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ debugbus_blocks = gen7_9_0_debugbus_blocks;
+ debugbus_blocks_count = ARRAY_SIZE(gen7_9_0_debugbus_blocks);
+ gbif_debugbus_blocks = gen7_9_0_gbif_debugbus_blocks;
+@@ -509,7 +509,7 @@ static void a6xx_get_debugbus(struct msm_gpu *gpu,
+ const struct a6xx_debugbus_block *cx_debugbus_blocks;
+
+ if (adreno_is_a7xx(adreno_gpu)) {
+- BUG_ON(!(adreno_is_a730(adreno_gpu) || adreno_is_a740_family(adreno_gpu)));
++ BUG_ON(adreno_gpu->info->family > ADRENO_7XX_GEN3);
+ cx_debugbus_blocks = a7xx_cx_debugbus_blocks;
+ nr_cx_debugbus_blocks = ARRAY_SIZE(a7xx_cx_debugbus_blocks);
+ } else {
+@@ -660,13 +660,16 @@ static void a7xx_get_dbgahb_clusters(struct msm_gpu *gpu,
+ const struct gen7_sptp_cluster_registers *dbgahb_clusters;
+ unsigned dbgahb_clusters_size;
+
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ dbgahb_clusters = gen7_0_0_sptp_clusters;
+ dbgahb_clusters_size = ARRAY_SIZE(gen7_0_0_sptp_clusters);
+- } else {
+- BUG_ON(!adreno_is_a740_family(adreno_gpu));
++ } else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ dbgahb_clusters = gen7_2_0_sptp_clusters;
+ dbgahb_clusters_size = ARRAY_SIZE(gen7_2_0_sptp_clusters);
++ } else {
++ BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
++ dbgahb_clusters = gen7_9_0_sptp_clusters;
++ dbgahb_clusters_size = ARRAY_SIZE(gen7_9_0_sptp_clusters);
+ }
+
+ a6xx_state->dbgahb_clusters = state_kcalloc(a6xx_state,
+@@ -818,14 +821,14 @@ static void a7xx_get_clusters(struct msm_gpu *gpu,
+ const struct gen7_cluster_registers *clusters;
+ unsigned clusters_size;
+
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ clusters = gen7_0_0_clusters;
+ clusters_size = ARRAY_SIZE(gen7_0_0_clusters);
+- } else if (adreno_is_a740_family(adreno_gpu)) {
++ } else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ clusters = gen7_2_0_clusters;
+ clusters_size = ARRAY_SIZE(gen7_2_0_clusters);
+ } else {
+- BUG_ON(!adreno_is_a750(adreno_gpu));
++ BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ clusters = gen7_9_0_clusters;
+ clusters_size = ARRAY_SIZE(gen7_9_0_clusters);
+ }
+@@ -893,7 +896,7 @@ static void a7xx_get_shader_block(struct msm_gpu *gpu,
+ if (WARN_ON(datasize > A6XX_CD_DATA_SIZE))
+ return;
+
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ gpu_rmw(gpu, REG_A7XX_SP_DBG_CNTL, GENMASK(1, 0), 3);
+ }
+
+@@ -923,7 +926,7 @@ static void a7xx_get_shader_block(struct msm_gpu *gpu,
+ datasize);
+
+ out:
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ gpu_rmw(gpu, REG_A7XX_SP_DBG_CNTL, GENMASK(1, 0), 0);
+ }
+ }
+@@ -956,14 +959,14 @@ static void a7xx_get_shaders(struct msm_gpu *gpu,
+ unsigned num_shader_blocks;
+ int i;
+
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ shader_blocks = gen7_0_0_shader_blocks;
+ num_shader_blocks = ARRAY_SIZE(gen7_0_0_shader_blocks);
+- } else if (adreno_is_a740_family(adreno_gpu)) {
++ } else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ shader_blocks = gen7_2_0_shader_blocks;
+ num_shader_blocks = ARRAY_SIZE(gen7_2_0_shader_blocks);
+ } else {
+- BUG_ON(!adreno_is_a750(adreno_gpu));
++ BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ shader_blocks = gen7_9_0_shader_blocks;
+ num_shader_blocks = ARRAY_SIZE(gen7_9_0_shader_blocks);
+ }
+@@ -1348,14 +1351,14 @@ static void a7xx_get_registers(struct msm_gpu *gpu,
+ const u32 *pre_crashdumper_regs;
+ const struct gen7_reg_list *reglist;
+
+- if (adreno_is_a730(adreno_gpu)) {
++ if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ reglist = gen7_0_0_reg_list;
+ pre_crashdumper_regs = gen7_0_0_pre_crashdumper_gpu_registers;
+- } else if (adreno_is_a740_family(adreno_gpu)) {
++ } else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ reglist = gen7_2_0_reg_list;
+ pre_crashdumper_regs = gen7_0_0_pre_crashdumper_gpu_registers;
+ } else {
+- BUG_ON(!adreno_is_a750(adreno_gpu));
++ BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ reglist = gen7_9_0_reg_list;
+ pre_crashdumper_regs = gen7_9_0_pre_crashdumper_gpu_registers;
+ }
+@@ -1405,8 +1408,7 @@ static void a7xx_get_post_crashdumper_registers(struct msm_gpu *gpu,
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ const u32 *regs;
+
+- BUG_ON(!(adreno_is_a730(adreno_gpu) || adreno_is_a740_family(adreno_gpu) ||
+- adreno_is_a750(adreno_gpu)));
++ BUG_ON(adreno_gpu->info->family > ADRENO_7XX_GEN3);
+ regs = gen7_0_0_post_crashdumper_registers;
+
+ a7xx_get_ahb_gpu_registers(gpu,
+@@ -1514,11 +1516,11 @@ static void a7xx_get_indexed_registers(struct msm_gpu *gpu,
+ const struct a6xx_indexed_registers *indexed_regs;
+ int i, indexed_count, mempool_count;
+
+- if (adreno_is_a730(adreno_gpu) || adreno_is_a740_family(adreno_gpu)) {
++ if (adreno_gpu->info->family <= ADRENO_7XX_GEN2) {
+ indexed_regs = a7xx_indexed_reglist;
+ indexed_count = ARRAY_SIZE(a7xx_indexed_reglist);
+ } else {
+- BUG_ON(!adreno_is_a750(adreno_gpu));
++ BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ indexed_regs = gen7_9_0_cp_indexed_reg_list;
+ indexed_count = ARRAY_SIZE(gen7_9_0_cp_indexed_reg_list);
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+index 260d66eccfecbf..9a327d543f27de 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
++++ b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+@@ -1303,7 +1303,7 @@ static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
+ REG_A6XX_CP_ROQ_DBG_DATA, 0x00800},
+ { "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
+ REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x08000},
+- { "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
++ { "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
+ REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x00200},
+ { "CP_BV_ROQ_DBG_ADDR", REG_A7XX_CP_BV_ROQ_DBG_ADDR,
+ REG_A7XX_CP_BV_ROQ_DBG_DATA, 0x00800},
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index ecc3fc5cec2270..3896123ec51c96 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -478,7 +478,7 @@ adreno_request_fw(struct adreno_gpu *adreno_gpu, const char *fwname)
+ ret = request_firmware_direct(&fw, fwname, drm->dev);
+ if (!ret) {
+ DRM_DEV_INFO(drm->dev, "loaded %s from legacy location\n",
+- newname);
++ fwname);
+ adreno_gpu->fwloc = FW_LOCATION_LEGACY;
+ goto out;
+ } else if (adreno_gpu->fwloc != FW_LOCATION_UNKNOWN) {
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
+index 3a7f7edda96b27..500b7dc895d055 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
+@@ -351,7 +351,7 @@ void mdp5_smp_dump(struct mdp5_smp *smp, struct drm_printer *p,
+
+ drm_printf(p, "%s:%d\t%d\t%s\n",
+ pipe2name(pipe), j, inuse,
+- plane ? plane->name : NULL);
++ plane ? plane->name : "(null)");
+
+ total += inuse;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 9622e58dce3e7a..e1228fb093ee01 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -119,7 +119,7 @@ struct msm_dp_desc {
+ };
+
+ static const struct msm_dp_desc sc7180_dp_descs[] = {
+- { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 },
++ { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
+ {}
+ };
+
+@@ -130,9 +130,9 @@ static const struct msm_dp_desc sc7280_dp_descs[] = {
+ };
+
+ static const struct msm_dp_desc sc8180x_dp_descs[] = {
+- { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 },
+- { .io_start = 0x0ae98000, .id = MSM_DP_CONTROLLER_1 },
+- { .io_start = 0x0ae9a000, .id = MSM_DP_CONTROLLER_2 },
++ { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
++ { .io_start = 0x0ae98000, .id = MSM_DP_CONTROLLER_1, .wide_bus_supported = true },
++ { .io_start = 0x0ae9a000, .id = MSM_DP_CONTROLLER_2, .wide_bus_supported = true },
+ {}
+ };
+
+@@ -149,7 +149,7 @@ static const struct msm_dp_desc sc8280xp_dp_descs[] = {
+ };
+
+ static const struct msm_dp_desc sm8650_dp_descs[] = {
+- { .io_start = 0x0af54000, .id = MSM_DP_CONTROLLER_0 },
++ { .io_start = 0x0af54000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
+ {}
+ };
+
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 3b59137ca67437..031446c87daec0 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -135,7 +135,7 @@ static void dsi_pll_calc_dec_frac(struct dsi_pll_7nm *pll, struct dsi_pll_config
+ config->pll_clock_inverters = 0x00;
+ else
+ config->pll_clock_inverters = 0x40;
+- } else {
++ } else if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_1) {
+ if (pll_freq <= 1000000000ULL)
+ config->pll_clock_inverters = 0xa0;
+ else if (pll_freq <= 2500000000ULL)
+@@ -144,6 +144,16 @@ static void dsi_pll_calc_dec_frac(struct dsi_pll_7nm *pll, struct dsi_pll_config
+ config->pll_clock_inverters = 0x00;
+ else
+ config->pll_clock_inverters = 0x40;
++ } else {
++ /* 4.2, 4.3 */
++ if (pll_freq <= 1000000000ULL)
++ config->pll_clock_inverters = 0xa0;
++ else if (pll_freq <= 2500000000ULL)
++ config->pll_clock_inverters = 0x20;
++ else if (pll_freq <= 3500000000ULL)
++ config->pll_clock_inverters = 0x00;
++ else
++ config->pll_clock_inverters = 0x40;
+ }
+
+ config->decimal_div_start = dec;
+diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
+index e5577d2a19ef49..a4661328339361 100644
+--- a/drivers/gpu/drm/radeon/evergreen_cs.c
++++ b/drivers/gpu/drm/radeon/evergreen_cs.c
+@@ -397,7 +397,7 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ struct evergreen_cs_track *track = p->track;
+ struct eg_surface surf;
+ unsigned pitch, slice, mslice;
+- unsigned long offset;
++ u64 offset;
+ int r;
+
+ mslice = G_028C6C_SLICE_MAX(track->cb_color_view[id]) + 1;
+@@ -435,14 +435,14 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ return r;
+ }
+
+- offset = track->cb_color_bo_offset[id] << 8;
++ offset = (u64)track->cb_color_bo_offset[id] << 8;
+ if (offset & (surf.base_align - 1)) {
+- dev_warn(p->dev, "%s:%d cb[%d] bo base %ld not aligned with %ld\n",
++ dev_warn(p->dev, "%s:%d cb[%d] bo base %llu not aligned with %ld\n",
+ __func__, __LINE__, id, offset, surf.base_align);
+ return -EINVAL;
+ }
+
+- offset += surf.layer_size * mslice;
++ offset += (u64)surf.layer_size * mslice;
+ if (offset > radeon_bo_size(track->cb_color_bo[id])) {
+ /* old ddx are broken they allocate bo with w*h*bpp but
+ * program slice with ALIGN(h, 8), catch this and patch
+@@ -450,14 +450,14 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ */
+ if (!surf.mode) {
+ uint32_t *ib = p->ib.ptr;
+- unsigned long tmp, nby, bsize, size, min = 0;
++ u64 tmp, nby, bsize, size, min = 0;
+
+ /* find the height the ddx wants */
+ if (surf.nby > 8) {
+ min = surf.nby - 8;
+ }
+ bsize = radeon_bo_size(track->cb_color_bo[id]);
+- tmp = track->cb_color_bo_offset[id] << 8;
++ tmp = (u64)track->cb_color_bo_offset[id] << 8;
+ for (nby = surf.nby; nby > min; nby--) {
+ size = nby * surf.nbx * surf.bpe * surf.nsamples;
+ if ((tmp + size * mslice) <= bsize) {
+@@ -469,7 +469,7 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ slice = ((nby * surf.nbx) / 64) - 1;
+ if (!evergreen_surface_check(p, &surf, "cb")) {
+ /* check if this one works */
+- tmp += surf.layer_size * mslice;
++ tmp += (u64)surf.layer_size * mslice;
+ if (tmp <= bsize) {
+ ib[track->cb_color_slice_idx[id]] = slice;
+ goto old_ddx_ok;
+@@ -478,9 +478,9 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ }
+ }
+ dev_warn(p->dev, "%s:%d cb[%d] bo too small (layer size %d, "
+- "offset %d, max layer %d, bo size %ld, slice %d)\n",
++ "offset %llu, max layer %d, bo size %ld, slice %d)\n",
+ __func__, __LINE__, id, surf.layer_size,
+- track->cb_color_bo_offset[id] << 8, mslice,
++ (u64)track->cb_color_bo_offset[id] << 8, mslice,
+ radeon_bo_size(track->cb_color_bo[id]), slice);
+ dev_warn(p->dev, "%s:%d problematic surf: (%d %d) (%d %d %d %d %d %d %d)\n",
+ __func__, __LINE__, surf.nbx, surf.nby,
+@@ -564,7 +564,7 @@ static int evergreen_cs_track_validate_stencil(struct radeon_cs_parser *p)
+ struct evergreen_cs_track *track = p->track;
+ struct eg_surface surf;
+ unsigned pitch, slice, mslice;
+- unsigned long offset;
++ u64 offset;
+ int r;
+
+ mslice = G_028008_SLICE_MAX(track->db_depth_view) + 1;
+@@ -610,18 +610,18 @@ static int evergreen_cs_track_validate_stencil(struct radeon_cs_parser *p)
+ return r;
+ }
+
+- offset = track->db_s_read_offset << 8;
++ offset = (u64)track->db_s_read_offset << 8;
+ if (offset & (surf.base_align - 1)) {
+- dev_warn(p->dev, "%s:%d stencil read bo base %ld not aligned with %ld\n",
++ dev_warn(p->dev, "%s:%d stencil read bo base %llu not aligned with %ld\n",
+ __func__, __LINE__, offset, surf.base_align);
+ return -EINVAL;
+ }
+- offset += surf.layer_size * mslice;
++ offset += (u64)surf.layer_size * mslice;
+ if (offset > radeon_bo_size(track->db_s_read_bo)) {
+ dev_warn(p->dev, "%s:%d stencil read bo too small (layer size %d, "
+- "offset %ld, max layer %d, bo size %ld)\n",
++ "offset %llu, max layer %d, bo size %ld)\n",
+ __func__, __LINE__, surf.layer_size,
+- (unsigned long)track->db_s_read_offset << 8, mslice,
++ (u64)track->db_s_read_offset << 8, mslice,
+ radeon_bo_size(track->db_s_read_bo));
+ dev_warn(p->dev, "%s:%d stencil invalid (0x%08x 0x%08x 0x%08x 0x%08x)\n",
+ __func__, __LINE__, track->db_depth_size,
+@@ -629,18 +629,18 @@ static int evergreen_cs_track_validate_stencil(struct radeon_cs_parser *p)
+ return -EINVAL;
+ }
+
+- offset = track->db_s_write_offset << 8;
++ offset = (u64)track->db_s_write_offset << 8;
+ if (offset & (surf.base_align - 1)) {
+- dev_warn(p->dev, "%s:%d stencil write bo base %ld not aligned with %ld\n",
++ dev_warn(p->dev, "%s:%d stencil write bo base %llu not aligned with %ld\n",
+ __func__, __LINE__, offset, surf.base_align);
+ return -EINVAL;
+ }
+- offset += surf.layer_size * mslice;
++ offset += (u64)surf.layer_size * mslice;
+ if (offset > radeon_bo_size(track->db_s_write_bo)) {
+ dev_warn(p->dev, "%s:%d stencil write bo too small (layer size %d, "
+- "offset %ld, max layer %d, bo size %ld)\n",
++ "offset %llu, max layer %d, bo size %ld)\n",
+ __func__, __LINE__, surf.layer_size,
+- (unsigned long)track->db_s_write_offset << 8, mslice,
++ (u64)track->db_s_write_offset << 8, mslice,
+ radeon_bo_size(track->db_s_write_bo));
+ return -EINVAL;
+ }
+@@ -661,7 +661,7 @@ static int evergreen_cs_track_validate_depth(struct radeon_cs_parser *p)
+ struct evergreen_cs_track *track = p->track;
+ struct eg_surface surf;
+ unsigned pitch, slice, mslice;
+- unsigned long offset;
++ u64 offset;
+ int r;
+
+ mslice = G_028008_SLICE_MAX(track->db_depth_view) + 1;
+@@ -708,34 +708,34 @@ static int evergreen_cs_track_validate_depth(struct radeon_cs_parser *p)
+ return r;
+ }
+
+- offset = track->db_z_read_offset << 8;
++ offset = (u64)track->db_z_read_offset << 8;
+ if (offset & (surf.base_align - 1)) {
+- dev_warn(p->dev, "%s:%d stencil read bo base %ld not aligned with %ld\n",
++ dev_warn(p->dev, "%s:%d stencil read bo base %llu not aligned with %ld\n",
+ __func__, __LINE__, offset, surf.base_align);
+ return -EINVAL;
+ }
+- offset += surf.layer_size * mslice;
++ offset += (u64)surf.layer_size * mslice;
+ if (offset > radeon_bo_size(track->db_z_read_bo)) {
+ dev_warn(p->dev, "%s:%d depth read bo too small (layer size %d, "
+- "offset %ld, max layer %d, bo size %ld)\n",
++ "offset %llu, max layer %d, bo size %ld)\n",
+ __func__, __LINE__, surf.layer_size,
+- (unsigned long)track->db_z_read_offset << 8, mslice,
++ (u64)track->db_z_read_offset << 8, mslice,
+ radeon_bo_size(track->db_z_read_bo));
+ return -EINVAL;
+ }
+
+- offset = track->db_z_write_offset << 8;
++ offset = (u64)track->db_z_write_offset << 8;
+ if (offset & (surf.base_align - 1)) {
+- dev_warn(p->dev, "%s:%d stencil write bo base %ld not aligned with %ld\n",
++ dev_warn(p->dev, "%s:%d stencil write bo base %llu not aligned with %ld\n",
+ __func__, __LINE__, offset, surf.base_align);
+ return -EINVAL;
+ }
+- offset += surf.layer_size * mslice;
++ offset += (u64)surf.layer_size * mslice;
+ if (offset > radeon_bo_size(track->db_z_write_bo)) {
+ dev_warn(p->dev, "%s:%d depth write bo too small (layer size %d, "
+- "offset %ld, max layer %d, bo size %ld)\n",
++ "offset %llu, max layer %d, bo size %ld)\n",
+ __func__, __LINE__, surf.layer_size,
+- (unsigned long)track->db_z_write_offset << 8, mslice,
++ (u64)track->db_z_write_offset << 8, mslice,
+ radeon_bo_size(track->db_z_write_bo));
+ return -EINVAL;
+ }
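
All the evergreen_cs changes follow one pattern: the BO offset is a 32-bit value, and shifting it left by 8 in a 32-bit `unsigned long` (as on 32-bit architectures) silently drops the top bits once the offset exceeds 24 bits. Widening the left operand before the shift keeps the full result. A self-contained illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t bo_offset = 0x01000000;    /* offset in 256-byte units */

    /* Shift performed in 32 bits: the top bits are lost. */
    uint32_t narrow = bo_offset << 8;

    /* Widen first, then shift: the full 40-bit result is preserved. */
    uint64_t wide = (uint64_t)bo_offset << 8;

    printf("narrow: 0x%08x\n", (unsigned)narrow);             /* 0x00000000 */
    printf("wide:   0x%010llx\n", (unsigned long long)wide);  /* 0x0100000000 */
    return 0;
}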
+diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
+index 10793a433bf586..d698a899eaf4cf 100644
+--- a/drivers/gpu/drm/radeon/radeon_atombios.c
++++ b/drivers/gpu/drm/radeon/radeon_atombios.c
+@@ -1717,26 +1717,29 @@ struct radeon_encoder_atom_dig *radeon_atombios_get_lvds_info(struct
+ fake_edid_record = (ATOM_FAKE_EDID_PATCH_RECORD *)record;
+ if (fake_edid_record->ucFakeEDIDLength) {
+ struct edid *edid;
+- int edid_size =
+- max((int)EDID_LENGTH, (int)fake_edid_record->ucFakeEDIDLength);
+- edid = kmalloc(edid_size, GFP_KERNEL);
++ int edid_size;
++
++ if (fake_edid_record->ucFakeEDIDLength == 128)
++ edid_size = fake_edid_record->ucFakeEDIDLength;
++ else
++ edid_size = fake_edid_record->ucFakeEDIDLength * 128;
++ edid = kmemdup(&fake_edid_record->ucFakeEDIDString[0],
++ edid_size, GFP_KERNEL);
+ if (edid) {
+- memcpy((u8 *)edid, (u8 *)&fake_edid_record->ucFakeEDIDString[0],
+- fake_edid_record->ucFakeEDIDLength);
+-
+ if (drm_edid_is_valid(edid)) {
+ rdev->mode_info.bios_hardcoded_edid = edid;
+ rdev->mode_info.bios_hardcoded_edid_size = edid_size;
+- } else
++ } else {
+ kfree(edid);
++ }
+ }
++ record += struct_size(fake_edid_record,
++ ucFakeEDIDString,
++ edid_size);
++ } else {
++ /* empty fake edid record must be 3 bytes long */
++ record += sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ }
+- record += fake_edid_record->ucFakeEDIDLength ?
+- struct_size(fake_edid_record,
+- ucFakeEDIDString,
+- fake_edid_record->ucFakeEDIDLength) :
+- /* empty fake edid record must be 3 bytes long */
+- sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ break;
+ case LCD_PANEL_RESOLUTION_RECORD_TYPE:
+ panel_res_record = (ATOM_PANEL_RESOLUTION_PATCH_RECORD *)record;
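
The radeon_atombios fix hinges on the meaning of ucFakeEDIDLength: a value of exactly 128 is a literal byte count (one EDID block), while any other value counts 128-byte blocks. The old max(EDID_LENGTH, len) both under-allocated multi-block EDIDs and advanced the record pointer by the wrong stride. A sketch of the decode, assuming the semantics the patch encodes:

#include <stdio.h>

#define EDID_BLOCK_SIZE 128

/* ucFakeEDIDLength == 128 means 128 bytes; otherwise it counts blocks. */
static int fake_edid_size(unsigned char uc_fake_edid_length)
{
    if (uc_fake_edid_length == EDID_BLOCK_SIZE)
        return uc_fake_edid_length;
    return uc_fake_edid_length * EDID_BLOCK_SIZE;
}

int main(void)
{
    printf("%d\n", fake_edid_size(128)); /* 128: one block, literal length */
    printf("%d\n", fake_edid_size(2));   /* 256: two blocks */
    return 0;
}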
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+index fe33092abbe7d7..aae48e906af11b 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+@@ -434,6 +434,8 @@ static void dw_hdmi_rk3328_setup_hpd(struct dw_hdmi *dw_hdmi, void *data)
+ HIWORD_UPDATE(RK3328_HDMI_SDAIN_MSK | RK3328_HDMI_SCLIN_MSK,
+ RK3328_HDMI_SDAIN_MSK | RK3328_HDMI_SCLIN_MSK |
+ RK3328_HDMI_HPD_IOE));
++
++ dw_hdmi_rk3328_read_hpd(dw_hdmi, data);
+ }
+
+ static const struct dw_hdmi_phy_ops rk3228_hdmi_phy_ops = {
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index a13473b2d54c40..4a9c6ea7f15dc3 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -396,8 +396,8 @@ static void scl_vop_cal_scl_fac(struct vop *vop, const struct vop_win_data *win,
+ if (info->is_yuv)
+ is_yuv = true;
+
+- if (dst_w > 3840) {
+- DRM_DEV_ERROR(vop->dev, "Maximum dst width (3840) exceeded\n");
++ if (dst_w > 4096) {
++ DRM_DEV_ERROR(vop->dev, "Maximum dst width (4096) exceeded\n");
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/stm/drv.c b/drivers/gpu/drm/stm/drv.c
+index e8523abef27a50..4d2db079ad4ff3 100644
+--- a/drivers/gpu/drm/stm/drv.c
++++ b/drivers/gpu/drm/stm/drv.c
+@@ -203,12 +203,14 @@ static int stm_drm_platform_probe(struct platform_device *pdev)
+
+ ret = drm_dev_register(ddev, 0);
+ if (ret)
+- goto err_put;
++ goto err_unload;
+
+ drm_fbdev_dma_setup(ddev, 16);
+
+ return 0;
+
++err_unload:
++ drv_unload(ddev);
+ err_put:
+ drm_dev_put(ddev);
+
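
The stm/drv.c fix restores the usual kernel error-unwind convention: each failure jumps to a label that undoes exactly what was initialized so far, with labels listed in reverse order of setup, so drv_unload() runs before drm_dev_put() when registration fails. A generic sketch of the pattern (stub functions, illustrative only):

#include <stdio.h>

static int setup_a(void) { puts("setup a"); return 0; }
static int setup_b(void) { puts("setup b"); return -1; /* simulate failure */ }
static void undo_a(void) { puts("undo a"); }

static int probe(void)
{
    int ret;

    ret = setup_a();
    if (ret)
        goto err_out;

    ret = setup_b();
    if (ret)
        goto err_undo_a;    /* unwind only what succeeded */

    return 0;

err_undo_a:
    undo_a();
err_out:
    return ret;
}

int main(void)
{
    return probe() ? 1 : 0;
}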
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 5576fdae496233..5aec1e58c968c2 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -1580,6 +1580,8 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ ARRAY_SIZE(ltdc_drm_fmt_ycbcr_sp) +
+ ARRAY_SIZE(ltdc_drm_fmt_ycbcr_fp)) *
+ sizeof(*formats), GFP_KERNEL);
++ if (!formats)
++ return NULL;
+
+ for (i = 0; i < ldev->caps.pix_fmt_nb; i++) {
+ drm_fmt = ldev->caps.pix_fmt_drm[i];
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index d57c4a5948c89b..cb424604484f1c 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -429,6 +429,7 @@ static int vc4_hdmi_connector_detect_ctx(struct drm_connector *connector,
+ {
+ struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
+ enum drm_connector_status status = connector_status_disconnected;
++ int ret;
+
+ /*
+ * NOTE: This function should really take vc4_hdmi->mutex, but
+@@ -441,7 +442,12 @@ static int vc4_hdmi_connector_detect_ctx(struct drm_connector *connector,
+ * the lock for now.
+ */
+
+- WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++ ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++ if (ret) {
++ drm_err_once(connector->dev, "Failed to retain HDMI power domain: %d\n",
++ ret);
++ return connector_status_unknown;
++ }
+
+ if (vc4_hdmi->hpd_gpio) {
+ if (gpiod_get_value_cansleep(vc4_hdmi->hpd_gpio))
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index 9731dcd0b1bde9..d0bbb1d9b1ac15 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -166,7 +166,8 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
+
+ struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe_gt *gt,
+ struct xe_vm *vm,
+- enum xe_engine_class class, u32 flags)
++ enum xe_engine_class class,
++ u32 flags, u64 extensions)
+ {
+ struct xe_hw_engine *hwe, *hwe0 = NULL;
+ enum xe_hw_engine_id id;
+@@ -186,7 +187,56 @@ struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe
+ if (!logical_mask)
+ return ERR_PTR(-ENODEV);
+
+- return xe_exec_queue_create(xe, vm, logical_mask, 1, hwe0, flags, 0);
++ return xe_exec_queue_create(xe, vm, logical_mask, 1, hwe0, flags, extensions);
++}
++
++/**
++ * xe_exec_queue_create_bind() - Create bind exec queue.
++ * @xe: Xe device.
++ * @tile: tile which bind exec queue belongs to.
++ * @flags: exec queue creation flags
++ * @extensions: exec queue creation extensions
++ *
++ * Normalize bind exec queue creation. Bind exec queue is tied to migration VM
++ * for access to physical memory required for page table programming. On
++ * for access to physical memory required for page table programming. On
++ * faulting devices the reserved copy engine instance must be used to avoid
++ * deadlocking (user binds cannot get stuck behind faults as kernel binds which
++ * resolve faults depend on user binds). On non-faulting devices any copy engine
++ * can be used.
++ *
++ * Returns exec queue on success, ERR_PTR on failure
++ */
++struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
++ struct xe_tile *tile,
++ u32 flags, u64 extensions)
++{
++ struct xe_gt *gt = tile->primary_gt;
++ struct xe_exec_queue *q;
++ struct xe_vm *migrate_vm;
++
++ migrate_vm = xe_migrate_get_vm(tile->migrate);
++ if (xe->info.has_usm) {
++ struct xe_hw_engine *hwe = xe_gt_hw_engine(gt,
++ XE_ENGINE_CLASS_COPY,
++ gt->usm.reserved_bcs_instance,
++ false);
++
++ if (!hwe) {
++ xe_vm_put(migrate_vm);
++ return ERR_PTR(-EINVAL);
++ }
++
++ q = xe_exec_queue_create(xe, migrate_vm,
++ BIT(hwe->logical_instance), 1, hwe,
++ flags, extensions);
++ } else {
++ q = xe_exec_queue_create_class(xe, gt, migrate_vm,
++ XE_ENGINE_CLASS_COPY, flags,
++ extensions);
++ }
++ xe_vm_put(migrate_vm);
++
++ return q;
+ }
+
+ void xe_exec_queue_destroy(struct kref *ref)
+@@ -418,63 +468,6 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
+ return 0;
+ }
+
+-static const enum xe_engine_class user_to_xe_engine_class[] = {
+- [DRM_XE_ENGINE_CLASS_RENDER] = XE_ENGINE_CLASS_RENDER,
+- [DRM_XE_ENGINE_CLASS_COPY] = XE_ENGINE_CLASS_COPY,
+- [DRM_XE_ENGINE_CLASS_VIDEO_DECODE] = XE_ENGINE_CLASS_VIDEO_DECODE,
+- [DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE] = XE_ENGINE_CLASS_VIDEO_ENHANCE,
+- [DRM_XE_ENGINE_CLASS_COMPUTE] = XE_ENGINE_CLASS_COMPUTE,
+-};
+-
+-static struct xe_hw_engine *
+-find_hw_engine(struct xe_device *xe,
+- struct drm_xe_engine_class_instance eci)
+-{
+- u32 idx;
+-
+- if (eci.engine_class >= ARRAY_SIZE(user_to_xe_engine_class))
+- return NULL;
+-
+- if (eci.gt_id >= xe->info.gt_count)
+- return NULL;
+-
+- idx = array_index_nospec(eci.engine_class,
+- ARRAY_SIZE(user_to_xe_engine_class));
+-
+- return xe_gt_hw_engine(xe_device_get_gt(xe, eci.gt_id),
+- user_to_xe_engine_class[idx],
+- eci.engine_instance, true);
+-}
+-
+-static u32 bind_exec_queue_logical_mask(struct xe_device *xe, struct xe_gt *gt,
+- struct drm_xe_engine_class_instance *eci,
+- u16 width, u16 num_placements)
+-{
+- struct xe_hw_engine *hwe;
+- enum xe_hw_engine_id id;
+- u32 logical_mask = 0;
+-
+- if (XE_IOCTL_DBG(xe, width != 1))
+- return 0;
+- if (XE_IOCTL_DBG(xe, num_placements != 1))
+- return 0;
+- if (XE_IOCTL_DBG(xe, eci[0].engine_instance != 0))
+- return 0;
+-
+- eci[0].engine_class = DRM_XE_ENGINE_CLASS_COPY;
+-
+- for_each_hw_engine(hwe, gt, id) {
+- if (xe_hw_engine_is_reserved(hwe))
+- continue;
+-
+- if (hwe->class ==
+- user_to_xe_engine_class[DRM_XE_ENGINE_CLASS_COPY])
+- logical_mask |= BIT(hwe->logical_instance);
+- }
+-
+- return logical_mask;
+-}
+-
+ static u32 calc_validate_logical_mask(struct xe_device *xe, struct xe_gt *gt,
+ struct drm_xe_engine_class_instance *eci,
+ u16 width, u16 num_placements)
+@@ -497,7 +490,7 @@ static u32 calc_validate_logical_mask(struct xe_device *xe, struct xe_gt *gt,
+
+ n = j * width + i;
+
+- hwe = find_hw_engine(xe, eci[n]);
++ hwe = xe_hw_engine_lookup(xe, eci[n]);
+ if (XE_IOCTL_DBG(xe, !hwe))
+ return 0;
+
+@@ -536,8 +529,9 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
+ struct drm_xe_engine_class_instance __user *user_eci =
+ u64_to_user_ptr(args->instances);
+ struct xe_hw_engine *hwe;
+- struct xe_vm *vm, *migrate_vm;
++ struct xe_vm *vm;
+ struct xe_gt *gt;
++ struct xe_tile *tile;
+ struct xe_exec_queue *q = NULL;
+ u32 logical_mask;
+ u32 id;
+@@ -562,37 +556,20 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
+ return -EINVAL;
+
+ if (eci[0].engine_class == DRM_XE_ENGINE_CLASS_VM_BIND) {
+- for_each_gt(gt, xe, id) {
+- struct xe_exec_queue *new;
+- u32 flags;
+-
+- if (xe_gt_is_media_type(gt))
+- continue;
+-
+- eci[0].gt_id = gt->info.id;
+- logical_mask = bind_exec_queue_logical_mask(xe, gt, eci,
+- args->width,
+- args->num_placements);
+- if (XE_IOCTL_DBG(xe, !logical_mask))
+- return -EINVAL;
+-
+- hwe = find_hw_engine(xe, eci[0]);
+- if (XE_IOCTL_DBG(xe, !hwe))
+- return -EINVAL;
+-
+- /* The migration vm doesn't hold rpm ref */
+- xe_pm_runtime_get_noresume(xe);
+-
+- flags = EXEC_QUEUE_FLAG_VM | (id ? EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD : 0);
++ if (XE_IOCTL_DBG(xe, args->width != 1) ||
++ XE_IOCTL_DBG(xe, args->num_placements != 1) ||
++ XE_IOCTL_DBG(xe, eci[0].engine_instance != 0))
++ return -EINVAL;
+
+- migrate_vm = xe_migrate_get_vm(gt_to_tile(gt)->migrate);
+- new = xe_exec_queue_create(xe, migrate_vm, logical_mask,
+- args->width, hwe, flags,
+- args->extensions);
++ for_each_tile(tile, xe, id) {
++ struct xe_exec_queue *new;
++ u32 flags = EXEC_QUEUE_FLAG_VM;
+
+- xe_pm_runtime_put(xe); /* now held by engine */
++ if (id)
++ flags |= EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD;
+
+- xe_vm_put(migrate_vm);
++ new = xe_exec_queue_create_bind(xe, tile, flags,
++ args->extensions);
+ if (IS_ERR(new)) {
+ err = PTR_ERR(new);
+ if (q)
+@@ -613,7 +590,7 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
+ if (XE_IOCTL_DBG(xe, !logical_mask))
+ return -EINVAL;
+
+- hwe = find_hw_engine(xe, eci[0]);
++ hwe = xe_hw_engine_lookup(xe, eci[0]);
+ if (XE_IOCTL_DBG(xe, !hwe))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
+index 289a3a51d2a21b..f4ba8897763f8a 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.h
++++ b/drivers/gpu/drm/xe/xe_exec_queue.h
+@@ -20,7 +20,11 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
+ u64 extensions);
+ struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe_gt *gt,
+ struct xe_vm *vm,
+- enum xe_engine_class class, u32 flags);
++ enum xe_engine_class class,
++ u32 flags, u64 extensions);
++struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
++ struct xe_tile *tile,
++ u32 flags, u64 extensions);
+
+ void xe_exec_queue_fini(struct xe_exec_queue *q);
+ void xe_exec_queue_destroy(struct kref *ref);
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
+index 07ed9fd28f1956..5c0ca61ccda7c2 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine.c
+@@ -5,7 +5,10 @@
+
+ #include "xe_hw_engine.h"
+
++#include <linux/nospec.h>
++
+ #include <drm/drm_managed.h>
++#include <drm/xe_drm.h>
+
+ #include "regs/xe_engine_regs.h"
+ #include "regs/xe_gt_regs.h"
+@@ -1135,3 +1138,31 @@ enum xe_force_wake_domains xe_hw_engine_to_fw_domain(struct xe_hw_engine *hwe)
+ {
+ return engine_infos[hwe->engine_id].domain;
+ }
++
++static const enum xe_engine_class user_to_xe_engine_class[] = {
++ [DRM_XE_ENGINE_CLASS_RENDER] = XE_ENGINE_CLASS_RENDER,
++ [DRM_XE_ENGINE_CLASS_COPY] = XE_ENGINE_CLASS_COPY,
++ [DRM_XE_ENGINE_CLASS_VIDEO_DECODE] = XE_ENGINE_CLASS_VIDEO_DECODE,
++ [DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE] = XE_ENGINE_CLASS_VIDEO_ENHANCE,
++ [DRM_XE_ENGINE_CLASS_COMPUTE] = XE_ENGINE_CLASS_COMPUTE,
++};
++
++struct xe_hw_engine *
++xe_hw_engine_lookup(struct xe_device *xe,
++ struct drm_xe_engine_class_instance eci)
++{
++ unsigned int idx;
++
++ if (eci.engine_class >= ARRAY_SIZE(user_to_xe_engine_class))
++ return NULL;
++
++ if (eci.gt_id >= xe->info.gt_count)
++ return NULL;
++
++ idx = array_index_nospec(eci.engine_class,
++ ARRAY_SIZE(user_to_xe_engine_class));
++
++ return xe_gt_hw_engine(xe_device_get_gt(xe, eci.gt_id),
++ user_to_xe_engine_class[idx],
++ eci.engine_instance, true);
++}
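
xe_hw_engine_lookup() keeps the array_index_nospec() call from the helper it replaces: after the bounds check, the index is clamped in a way the CPU cannot speculate past, so a mistrained branch predictor cannot be used to read beyond user_to_xe_engine_class (Spectre-v1 hardening). Outside the kernel the same shape looks roughly like this; the modulo shown is only a portable stand-in, the kernel uses branch-free masking:

#include <stddef.h>
#include <stdio.h>

static const int class_map[] = { 10, 11, 12, 13, 14 };

static int lookup(size_t idx)
{
    if (idx >= sizeof(class_map) / sizeof(class_map[0]))
        return -1;

    /*
     * Kernel: idx = array_index_nospec(idx, ARRAY_SIZE(class_map));
     * which bounds idx without a speculatable branch. Plain C stand-in:
     */
    idx = idx % (sizeof(class_map) / sizeof(class_map[0]));

    return class_map[idx];
}

int main(void)
{
    printf("%d\n", lookup(2)); /* 12 */
    printf("%d\n", lookup(9)); /* -1 */
    return 0;
}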
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.h b/drivers/gpu/drm/xe/xe_hw_engine.h
+index 900c8c99143031..d227ffe557ebfc 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.h
++++ b/drivers/gpu/drm/xe/xe_hw_engine.h
+@@ -9,6 +9,8 @@
+ #include "xe_hw_engine_types.h"
+
+ struct drm_printer;
++struct drm_xe_engine_class_instance;
++struct xe_device;
+
+ #ifdef CONFIG_DRM_XE_JOB_TIMEOUT_MIN
+ #define XE_HW_ENGINE_JOB_TIMEOUT_MIN CONFIG_DRM_XE_JOB_TIMEOUT_MIN
+@@ -62,6 +64,11 @@ void xe_hw_engine_print(struct xe_hw_engine *hwe, struct drm_printer *p);
+ void xe_hw_engine_setup_default_lrc_state(struct xe_hw_engine *hwe);
+
+ bool xe_hw_engine_is_reserved(struct xe_hw_engine *hwe);
++
++struct xe_hw_engine *
++xe_hw_engine_lookup(struct xe_device *xe,
++ struct drm_xe_engine_class_instance eci);
++
+ static inline bool xe_hw_engine_is_valid(struct xe_hw_engine *hwe)
+ {
+ return hwe->name;
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index c9f5673353ee3e..a849c48d8ac90f 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -404,7 +404,7 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile)
+ m->q = xe_exec_queue_create_class(xe, primary_gt, vm,
+ XE_ENGINE_CLASS_COPY,
+ EXEC_QUEUE_FLAG_KERNEL |
+- EXEC_QUEUE_FLAG_PERMANENT);
++ EXEC_QUEUE_FLAG_PERMANENT, 0);
+ }
+ if (IS_ERR(m->q)) {
+ xe_vm_close_and_put(vm);
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 50e8fc49ba6c15..b49bee0dfac5da 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -1412,19 +1412,13 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ /* Kernel migration VM shouldn't have a circular loop.. */
+ if (!(flags & XE_VM_FLAG_MIGRATION)) {
+ for_each_tile(tile, xe, id) {
+- struct xe_gt *gt = tile->primary_gt;
+- struct xe_vm *migrate_vm;
+ struct xe_exec_queue *q;
+ u32 create_flags = EXEC_QUEUE_FLAG_VM;
+
+ if (!vm->pt_root[id])
+ continue;
+
+- migrate_vm = xe_migrate_get_vm(tile->migrate);
+- q = xe_exec_queue_create_class(xe, gt, migrate_vm,
+- XE_ENGINE_CLASS_COPY,
+- create_flags);
+- xe_vm_put(migrate_vm);
++ q = xe_exec_queue_create_bind(xe, tile, create_flags, 0);
+ if (IS_ERR(q)) {
+ err = PTR_ERR(q);
+ goto err_close;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 2541fa2e0fa3b1..e86a37c3cf9c3d 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2322,6 +2322,9 @@ static void wacom_wac_pen_usage_mapping(struct hid_device *hdev,
+ wacom_map_usage(input, usage, field, EV_KEY, BTN_STYLUS3, 0);
+ features->quirks &= ~WACOM_QUIRK_PEN_BUTTON3;
+ break;
++ case WACOM_HID_WD_SEQUENCENUMBER:
++ wacom_wac->hid_data.sequence_number = -1;
++ break;
+ }
+ }
+
+@@ -2446,9 +2449,15 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ wacom_wac->hid_data.barrelswitch3 = value;
+ return;
+ case WACOM_HID_WD_SEQUENCENUMBER:
+- if (wacom_wac->hid_data.sequence_number != value)
+- hid_warn(hdev, "Dropped %hu packets", (unsigned short)(value - wacom_wac->hid_data.sequence_number));
++ if (wacom_wac->hid_data.sequence_number != value &&
++ wacom_wac->hid_data.sequence_number >= 0) {
++ int sequence_size = field->logical_maximum - field->logical_minimum + 1;
++ int drop_count = (value - wacom_wac->hid_data.sequence_number) % sequence_size;
++ hid_warn(hdev, "Dropped %d packets", drop_count);
++ }
+ wacom_wac->hid_data.sequence_number = value + 1;
++ if (wacom_wac->hid_data.sequence_number > field->logical_maximum)
++ wacom_wac->hid_data.sequence_number = field->logical_minimum;
+ return;
+ }
+
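
The wacom change fixes two things at once: sequence_number becomes a signed int initialized to -1 so the very first report is never counted as a drop, and the drop count is computed modulo the field's logical range so a wrap from logical_maximum back to logical_minimum is not reported as thousands of lost packets. The arithmetic in isolation, with an extra normalization for C's signed %:

#include <stdio.h>

/* Drops between expected and received, on a counter spanning [min, max]. */
static int drop_count(int expected, int value, int log_min, int log_max)
{
    int sequence_size = log_max - log_min + 1;
    int d = (value - expected) % sequence_size;

    if (d < 0)          /* C's % keeps the dividend's sign */
        d += sequence_size;
    return d;
}

int main(void)
{
    /* 16-bit counter wraps 65535 -> 0: expected 0, got 0, no drops. */
    printf("%d\n", drop_count(0, 0, 0, 65535));     /* 0 */
    /* expected 65534, got 1: 3 packets lost across the wrap. */
    printf("%d\n", drop_count(65534, 1, 0, 65535)); /* 3 */
    return 0;
}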
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 6ec499841f7095..e6443740b462fd 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -324,7 +324,7 @@ struct hid_data {
+ int bat_connected;
+ int ps_connected;
+ bool pad_input_event_flag;
+- unsigned short sequence_number;
++ int sequence_number;
+ ktime_t time_delayed;
+ };
+
+diff --git a/drivers/hwmon/max16065.c b/drivers/hwmon/max16065.c
+index 7ce9a89f93a0d9..0ccb5eb596fc40 100644
+--- a/drivers/hwmon/max16065.c
++++ b/drivers/hwmon/max16065.c
+@@ -79,7 +79,7 @@ static const bool max16065_have_current[] = {
+ };
+
+ struct max16065_data {
+- enum chips type;
++ enum chips chip;
+ struct i2c_client *client;
+ const struct attribute_group *groups[4];
+ struct mutex update_lock;
+@@ -114,9 +114,10 @@ static inline int LIMIT_TO_MV(int limit, int range)
+ return limit * range / 256;
+ }
+
+-static inline int MV_TO_LIMIT(int mv, int range)
++static inline int MV_TO_LIMIT(unsigned long mv, int range)
+ {
+- return clamp_val(DIV_ROUND_CLOSEST(mv * 256, range), 0, 255);
++ mv = clamp_val(mv, 0, ULONG_MAX / 256);
++ return DIV_ROUND_CLOSEST(clamp_val(mv * 256, 0, range * 255), range);
+ }
+
+ static inline int ADC_TO_CURR(int adc, int gain)
+@@ -161,10 +162,17 @@ static struct max16065_data *max16065_update_device(struct device *dev)
+ MAX16065_CURR_SENSE);
+ }
+
+- for (i = 0; i < DIV_ROUND_UP(data->num_adc, 8); i++)
++ for (i = 0; i < 2; i++)
+ data->fault[i]
+ = i2c_smbus_read_byte_data(client, MAX16065_FAULT(i));
+
++ /*
++ * MAX16067 and MAX16068 have separate undervoltage and
++ * overvoltage alarm bits. Squash them together.
++ */
++ if (data->chip == max16067 || data->chip == max16068)
++ data->fault[0] |= data->fault[1];
++
+ data->last_updated = jiffies;
+ data->valid = true;
+ }
+@@ -513,6 +521,7 @@ static int max16065_probe(struct i2c_client *client)
+ if (unlikely(!data))
+ return -ENOMEM;
+
++ data->chip = chip;
+ data->client = client;
+ mutex_init(&data->update_lock);
+
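
MV_TO_LIMIT previously computed mv * 256 in a signed int, so a large user-supplied millivolt value could overflow before the clamp ever ran. The rewritten helper clamps mv first so the multiply cannot wrap, then clamps the scaled value into the 8-bit register range. A standalone version of the same arithmetic:

#include <limits.h>
#include <stdio.h>

#define CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))
#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

static int mv_to_limit(unsigned long mv, int range)
{
    /* Clamp before multiplying so mv * 256 cannot wrap. */
    mv = CLAMP(mv, 0UL, ULONG_MAX / 256);
    return DIV_ROUND_CLOSEST(CLAMP(mv * 256, 0UL, (unsigned long)range * 255),
                 (unsigned long)range);
}

int main(void)
{
    printf("%d\n", mv_to_limit(1000, 5743));      /* 45: mid-range value */
    printf("%d\n", mv_to_limit(ULONG_MAX, 5743)); /* 255: saturates */
    return 0;
}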
+diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c
+index ef75b63f5894e5..b5352900463fb9 100644
+--- a/drivers/hwmon/ntc_thermistor.c
++++ b/drivers/hwmon/ntc_thermistor.c
+@@ -62,6 +62,7 @@ static const struct platform_device_id ntc_thermistor_id[] = {
+ [NTC_SSG1404001221] = { "ssg1404_001221", TYPE_NCPXXWB473 },
+ [NTC_LAST] = { },
+ };
++MODULE_DEVICE_TABLE(platform, ntc_thermistor_id);
+
+ /*
+ * A compensation table should be sorted by the values of .ohm
+diff --git a/drivers/hwtracing/coresight/coresight-dummy.c b/drivers/hwtracing/coresight/coresight-dummy.c
+index ac70c0b491bebd..dab389a5507c11 100644
+--- a/drivers/hwtracing/coresight/coresight-dummy.c
++++ b/drivers/hwtracing/coresight/coresight-dummy.c
+@@ -23,6 +23,9 @@ DEFINE_CORESIGHT_DEVLIST(sink_devs, "dummy_sink");
+ static int dummy_source_enable(struct coresight_device *csdev,
+ struct perf_event *event, enum cs_mode mode)
+ {
++ if (!coresight_take_mode(csdev, mode))
++ return -EBUSY;
++
+ dev_dbg(csdev->dev.parent, "Dummy source enabled\n");
+
+ return 0;
+@@ -31,6 +34,7 @@ static int dummy_source_enable(struct coresight_device *csdev,
+ static void dummy_source_disable(struct coresight_device *csdev,
+ struct perf_event *event)
+ {
++ coresight_set_mode(csdev, CS_MODE_DISABLED);
+ dev_dbg(csdev->dev.parent, "Dummy source disabled\n");
+ }
+
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index e75428fa1592a7..610ad51cda656b 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -261,6 +261,7 @@ void tmc_free_sg_table(struct tmc_sg_table *sg_table)
+ {
+ tmc_free_table_pages(sg_table);
+ tmc_free_data_pages(sg_table);
++ kfree(sg_table);
+ }
+ EXPORT_SYMBOL_GPL(tmc_free_sg_table);
+
+@@ -342,7 +343,6 @@ struct tmc_sg_table *tmc_alloc_sg_table(struct device *dev,
+ rc = tmc_alloc_table_pages(sg_table);
+ if (rc) {
+ tmc_free_sg_table(sg_table);
+- kfree(sg_table);
+ return ERR_PTR(rc);
+ }
+
+diff --git a/drivers/hwtracing/coresight/coresight-tpdm.c b/drivers/hwtracing/coresight/coresight-tpdm.c
+index 0726f8842552c6..5c5a4b3fe6871c 100644
+--- a/drivers/hwtracing/coresight/coresight-tpdm.c
++++ b/drivers/hwtracing/coresight/coresight-tpdm.c
+@@ -449,6 +449,11 @@ static int tpdm_enable(struct coresight_device *csdev, struct perf_event *event,
+ return -EBUSY;
+ }
+
++ if (!coresight_take_mode(csdev, mode)) {
++ spin_unlock(&drvdata->spinlock);
++ return -EBUSY;
++ }
++
+ __tpdm_enable(drvdata);
+ drvdata->enable = true;
+ spin_unlock(&drvdata->spinlock);
+@@ -506,6 +511,7 @@ static void tpdm_disable(struct coresight_device *csdev,
+ }
+
+ __tpdm_disable(drvdata);
++ coresight_set_mode(csdev, CS_MODE_DISABLED);
+ drvdata->enable = false;
+ spin_unlock(&drvdata->spinlock);
+
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index ce8c4846b7fae4..2a03a221e2dd57 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -170,6 +170,13 @@ struct aspeed_i2c_bus {
+
+ static int aspeed_i2c_reset(struct aspeed_i2c_bus *bus);
+
++/* precondition: bus.lock has been acquired. */
++static void aspeed_i2c_do_stop(struct aspeed_i2c_bus *bus)
++{
++ bus->master_state = ASPEED_I2C_MASTER_STOP;
++ writel(ASPEED_I2CD_M_STOP_CMD, bus->base + ASPEED_I2C_CMD_REG);
++}
++
+ static int aspeed_i2c_recover_bus(struct aspeed_i2c_bus *bus)
+ {
+ unsigned long time_left, flags;
+@@ -187,7 +194,7 @@ static int aspeed_i2c_recover_bus(struct aspeed_i2c_bus *bus)
+ command);
+
+ reinit_completion(&bus->cmd_complete);
+- writel(ASPEED_I2CD_M_STOP_CMD, bus->base + ASPEED_I2C_CMD_REG);
++ aspeed_i2c_do_stop(bus);
+ spin_unlock_irqrestore(&bus->lock, flags);
+
+ time_left = wait_for_completion_timeout(
+@@ -390,13 +397,6 @@ static void aspeed_i2c_do_start(struct aspeed_i2c_bus *bus)
+ writel(command, bus->base + ASPEED_I2C_CMD_REG);
+ }
+
+-/* precondition: bus.lock has been acquired. */
+-static void aspeed_i2c_do_stop(struct aspeed_i2c_bus *bus)
+-{
+- bus->master_state = ASPEED_I2C_MASTER_STOP;
+- writel(ASPEED_I2CD_M_STOP_CMD, bus->base + ASPEED_I2C_CMD_REG);
+-}
+-
+ /* precondition: bus.lock has been acquired. */
+ static void aspeed_i2c_next_msg_or_stop(struct aspeed_i2c_bus *bus)
+ {
+diff --git a/drivers/i2c/busses/i2c-isch.c b/drivers/i2c/busses/i2c-isch.c
+index 33dbc19d3848dd..f59158489ad9fa 100644
+--- a/drivers/i2c/busses/i2c-isch.c
++++ b/drivers/i2c/busses/i2c-isch.c
+@@ -99,8 +99,7 @@ static int sch_transaction(void)
+ if (retries > MAX_RETRIES) {
+ dev_err(&sch_adapter.dev, "SMBus Timeout!\n");
+ result = -ETIMEDOUT;
+- }
+- if (temp & 0x04) {
++ } else if (temp & 0x04) {
+ result = -EIO;
+ dev_dbg(&sch_adapter.dev, "Bus collision! SMBus may be "
+ "locked until next hard reset. (sorry!)\n");
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 19468565120e1c..d3ca7d2f81a614 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -844,23 +844,11 @@ static int xiic_bus_busy(struct xiic_i2c *i2c)
+ return (sr & XIIC_SR_BUS_BUSY_MASK) ? -EBUSY : 0;
+ }
+
+-static int xiic_busy(struct xiic_i2c *i2c)
++static int xiic_wait_not_busy(struct xiic_i2c *i2c)
+ {
+ int tries = 3;
+ int err;
+
+- if (i2c->tx_msg || i2c->rx_msg)
+- return -EBUSY;
+-
+- /* In single master mode bus can only be busy, when in use by this
+- * driver. If the register indicates bus being busy for some reason we
+- * should ignore it, since bus will never be released and i2c will be
+- * stuck forever.
+- */
+- if (i2c->singlemaster) {
+- return 0;
+- }
+-
+ /* for instance if previous transfer was terminated due to TX error
+ * it might be that the bus is on its way to become available
+ * give it at most 3 ms to wake
+@@ -1104,13 +1092,36 @@ static int xiic_start_xfer(struct xiic_i2c *i2c, struct i2c_msg *msgs, int num)
+
+ mutex_lock(&i2c->lock);
+
+- ret = xiic_busy(i2c);
+- if (ret) {
++ if (i2c->tx_msg || i2c->rx_msg) {
+ dev_err(i2c->adap.dev.parent,
+ "cannot start a transfer while busy\n");
++ ret = -EBUSY;
+ goto out;
+ }
+
++ /* In single master mode bus can only be busy, when in use by this
++ * driver. If the register indicates bus being busy for some reason we
++ * should ignore it, since bus will never be released and i2c will be
++ * stuck forever.
++ */
++ if (!i2c->singlemaster) {
++ ret = xiic_wait_not_busy(i2c);
++ if (ret) {
++ /* If the bus is stuck in a busy state, such as due to spurious low
++ * pulses on the bus causing a false start condition to be detected,
++ * then try to recover by re-initializing the controller and check
++ * again if the bus is still busy.
++ */
++ dev_warn(i2c->adap.dev.parent, "I2C bus busy timeout, reinitializing\n");
++ ret = xiic_reinit(i2c);
++ if (ret)
++ goto out;
++ ret = xiic_wait_not_busy(i2c);
++ if (ret)
++ goto out;
++ }
++ }
++
+ i2c->tx_msg = msgs;
+ i2c->rx_msg = NULL;
+ i2c->nmsgs = num;
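
The xiic change turns a hard failure into one recovery attempt: if the bus-busy poll times out, the controller is reinitialized once and the poll repeated, on the theory that spurious line noise can latch a false busy condition. The control flow, reduced to stubs (function names below are placeholders, not the driver's):

#include <stdio.h>

static int attempts;

static int wait_not_busy(void) { return attempts++ < 1 ? -16 /* -EBUSY */ : 0; }
static int reinit(void) { puts("reinit controller"); return 0; }

static int start_xfer(void)
{
    int ret = wait_not_busy();

    if (ret) {
        /* One shot at recovery: reinit, then re-check the bus. */
        ret = reinit();
        if (ret)
            return ret;
        ret = wait_not_busy();
        if (ret)
            return ret;
    }
    puts("transfer started");
    return 0;
}

int main(void)
{
    return start_xfer() ? 1 : 0;
}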
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 9aab7abc2ae90e..88470602b789e1 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -120,6 +120,12 @@ static unsigned int mwait_substates __initdata;
+ */
+ #define CPUIDLE_FLAG_INIT_XSTATE BIT(17)
+
++/*
++ * Ignore the sub-state when matching mwait hints between the ACPI _CST and
++ * custom tables.
++ */
++#define CPUIDLE_FLAG_PARTIAL_HINT_MATCH BIT(18)
++
+ /*
+ * MWAIT takes an 8-bit "hint" in EAX "suggesting"
+ * the C-state (top nibble) and sub-state (bottom nibble)
+@@ -1022,6 +1028,47 @@ static struct cpuidle_state spr_cstates[] __initdata = {
+ .enter = NULL }
+ };
+
++static struct cpuidle_state gnr_cstates[] __initdata = {
++ {
++ .name = "C1",
++ .desc = "MWAIT 0x00",
++ .flags = MWAIT2flg(0x00),
++ .exit_latency = 1,
++ .target_residency = 1,
++ .enter = &intel_idle,
++ .enter_s2idle = intel_idle_s2idle, },
++ {
++ .name = "C1E",
++ .desc = "MWAIT 0x01",
++ .flags = MWAIT2flg(0x01) | CPUIDLE_FLAG_ALWAYS_ENABLE,
++ .exit_latency = 4,
++ .target_residency = 4,
++ .enter = &intel_idle,
++ .enter_s2idle = intel_idle_s2idle, },
++ {
++ .name = "C6",
++ .desc = "MWAIT 0x20",
++ .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED |
++ CPUIDLE_FLAG_INIT_XSTATE |
++ CPUIDLE_FLAG_PARTIAL_HINT_MATCH,
++ .exit_latency = 170,
++ .target_residency = 650,
++ .enter = &intel_idle,
++ .enter_s2idle = intel_idle_s2idle, },
++ {
++ .name = "C6P",
++ .desc = "MWAIT 0x21",
++ .flags = MWAIT2flg(0x21) | CPUIDLE_FLAG_TLB_FLUSHED |
++ CPUIDLE_FLAG_INIT_XSTATE |
++ CPUIDLE_FLAG_PARTIAL_HINT_MATCH,
++ .exit_latency = 210,
++ .target_residency = 1000,
++ .enter = &intel_idle,
++ .enter_s2idle = intel_idle_s2idle, },
++ {
++ .enter = NULL }
++};
++
+ static struct cpuidle_state atom_cstates[] __initdata = {
+ {
+ .name = "C1E",
+@@ -1315,7 +1362,8 @@ static struct cpuidle_state srf_cstates[] __initdata = {
+ {
+ .name = "C6S",
+ .desc = "MWAIT 0x22",
+- .flags = MWAIT2flg(0x22) | CPUIDLE_FLAG_TLB_FLUSHED,
++ .flags = MWAIT2flg(0x22) | CPUIDLE_FLAG_TLB_FLUSHED |
++ CPUIDLE_FLAG_PARTIAL_HINT_MATCH,
+ .exit_latency = 270,
+ .target_residency = 700,
+ .enter = &intel_idle,
+@@ -1323,7 +1371,8 @@ static struct cpuidle_state srf_cstates[] __initdata = {
+ {
+ .name = "C6SP",
+ .desc = "MWAIT 0x23",
+- .flags = MWAIT2flg(0x23) | CPUIDLE_FLAG_TLB_FLUSHED,
++ .flags = MWAIT2flg(0x23) | CPUIDLE_FLAG_TLB_FLUSHED |
++ CPUIDLE_FLAG_PARTIAL_HINT_MATCH,
+ .exit_latency = 310,
+ .target_residency = 900,
+ .enter = &intel_idle,
+@@ -1453,6 +1502,12 @@ static const struct idle_cpu idle_cpu_spr __initconst = {
+ .use_acpi = true,
+ };
+
++static const struct idle_cpu idle_cpu_gnr __initconst = {
++ .state_table = gnr_cstates,
++ .disable_promotion_to_c1e = true,
++ .use_acpi = true,
++};
++
+ static const struct idle_cpu idle_cpu_avn __initconst = {
+ .state_table = avn_cstates,
+ .disable_promotion_to_c1e = true,
+@@ -1533,6 +1588,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
+ X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &idle_cpu_gmt),
+ X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, &idle_cpu_spr),
+ X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, &idle_cpu_spr),
++ X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, &idle_cpu_gnr),
+ X86_MATCH_VFM(INTEL_XEON_PHI_KNL, &idle_cpu_knl),
+ X86_MATCH_VFM(INTEL_XEON_PHI_KNM, &idle_cpu_knl),
+ X86_MATCH_VFM(INTEL_ATOM_GOLDMONT, &idle_cpu_bxt),
+@@ -1692,7 +1748,7 @@ static void __init intel_idle_init_cstates_acpi(struct cpuidle_driver *drv)
+ }
+ }
+
+-static bool __init intel_idle_off_by_default(u32 mwait_hint)
++static bool __init intel_idle_off_by_default(unsigned int flags, u32 mwait_hint)
+ {
+ int cstate, limit;
+
+@@ -1709,7 +1765,15 @@ static bool __init intel_idle_off_by_default(u32 mwait_hint)
+ * the interesting states are ACPI_CSTATE_FFH.
+ */
+ for (cstate = 1; cstate < limit; cstate++) {
+- if (acpi_state_table.states[cstate].address == mwait_hint)
++ u32 acpi_hint = acpi_state_table.states[cstate].address;
++ u32 table_hint = mwait_hint;
++
++ if (flags & CPUIDLE_FLAG_PARTIAL_HINT_MATCH) {
++ acpi_hint &= ~MWAIT_SUBSTATE_MASK;
++ table_hint &= ~MWAIT_SUBSTATE_MASK;
++ }
++
++ if (acpi_hint == table_hint)
+ return false;
+ }
+ return true;
+@@ -1719,7 +1783,10 @@ static bool __init intel_idle_off_by_default(u32 mwait_hint)
+
+ static inline bool intel_idle_acpi_cst_extract(void) { return false; }
+ static inline void intel_idle_init_cstates_acpi(struct cpuidle_driver *drv) { }
+-static inline bool intel_idle_off_by_default(u32 mwait_hint) { return false; }
++static inline bool intel_idle_off_by_default(unsigned int flags, u32 mwait_hint)
++{
++ return false;
++}
+ #endif /* !CONFIG_ACPI_PROCESSOR_CSTATE */
+
+ /**
+@@ -2046,7 +2113,7 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
+
+ if ((disabled_states_mask & BIT(drv->state_count)) ||
+ ((icpu->use_acpi || force_use_acpi) &&
+- intel_idle_off_by_default(mwait_hint) &&
++ intel_idle_off_by_default(state->flags, mwait_hint) &&
+ !(state->flags & CPUIDLE_FLAG_ALWAYS_ENABLE)))
+ state->flags |= CPUIDLE_FLAG_OFF;
+
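
CPUIDLE_FLAG_PARTIAL_HINT_MATCH makes the ACPI _CST comparison ignore the sub-state nibble: an MWAIT hint encodes the C-state in the top nibble of the low byte and the sub-state in the bottom nibble, and on parts like Granite Rapids the firmware table and the driver table may agree on the state while differing in sub-state. The masking in isolation (MWAIT_SUBSTATE_MASK is 0xf in the kernel headers):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MWAIT_SUBSTATE_MASK 0x0f

static bool hints_match(uint32_t acpi_hint, uint32_t table_hint, bool partial)
{
    if (partial) {
        acpi_hint &= ~(uint32_t)MWAIT_SUBSTATE_MASK;
        table_hint &= ~(uint32_t)MWAIT_SUBSTATE_MASK;
    }
    return acpi_hint == table_hint;
}

int main(void)
{
    /* 0x20 vs 0x21: same C6 state, different sub-state. */
    printf("%d\n", hints_match(0x20, 0x21, false)); /* 0 */
    printf("%d\n", hints_match(0x20, 0x21, true));  /* 1 */
    return 0;
}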
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index c321c6ef48df41..8f9ee56c8bed21 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -212,9 +212,9 @@ static int ad7606_write_os_hw(struct iio_dev *indio_dev, int val)
+ struct ad7606_state *st = iio_priv(indio_dev);
+ DECLARE_BITMAP(values, 3);
+
+- values[0] = val;
++ values[0] = val & GENMASK(2, 0);
+
+- gpiod_set_array_value(ARRAY_SIZE(values), st->gpio_os->desc,
++ gpiod_set_array_value(st->gpio_os->ndescs, st->gpio_os->desc,
+ st->gpio_os->info, values);
+
+ /* AD7616 requires a reset to update value */
+@@ -419,7 +419,7 @@ static int ad7606_request_gpios(struct ad7606_state *st)
+ return PTR_ERR(st->gpio_range);
+
+ st->gpio_standby = devm_gpiod_get_optional(dev, "standby",
+- GPIOD_OUT_HIGH);
++ GPIOD_OUT_LOW);
+ if (IS_ERR(st->gpio_standby))
+ return PTR_ERR(st->gpio_standby);
+
+@@ -662,7 +662,7 @@ static int ad7606_suspend(struct device *dev)
+
+ if (st->gpio_standby) {
+ gpiod_set_value(st->gpio_range, 1);
+- gpiod_set_value(st->gpio_standby, 0);
++ gpiod_set_value(st->gpio_standby, 1);
+ }
+
+ return 0;
+diff --git a/drivers/iio/adc/ad7606_spi.c b/drivers/iio/adc/ad7606_spi.c
+index 263a778bcf2539..287a0591533b6a 100644
+--- a/drivers/iio/adc/ad7606_spi.c
++++ b/drivers/iio/adc/ad7606_spi.c
+@@ -249,8 +249,9 @@ static int ad7616_sw_mode_config(struct iio_dev *indio_dev)
+ static int ad7606B_sw_mode_config(struct iio_dev *indio_dev)
+ {
+ struct ad7606_state *st = iio_priv(indio_dev);
+- unsigned long os[3] = {1};
++ DECLARE_BITMAP(os, 3);
+
++ bitmap_fill(os, 3);
+ /*
+ * Software mode is enabled when all three oversampling
+ * pins are set to high. If oversampling gpios are defined
+@@ -258,7 +259,7 @@ static int ad7606B_sw_mode_config(struct iio_dev *indio_dev)
+ * otherwise, they must be hardwired to VDD
+ */
+ if (st->gpio_os) {
+- gpiod_set_array_value(ARRAY_SIZE(os),
++ gpiod_set_array_value(st->gpio_os->ndescs,
+ st->gpio_os->desc, st->gpio_os->info, os);
+ }
+ /* OS of 128 and 256 are available only in software mode */
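
The ad7606B_sw_mode_config change is subtle: "unsigned long os[3] = {1}" declares three whole longs with only the first set to 1, i.e. a bitmap with only bit 0 set, while the intent was three set bits for the three oversampling GPIOs. DECLARE_BITMAP(os, 3) sizes the array for 3 bits and bitmap_fill() sets them all. The equivalent in plain C:

#include <limits.h>
#include <stdio.h>

#define BITS_TO_LONGS(n) (((n) + (sizeof(long) * CHAR_BIT) - 1) / \
              (sizeof(long) * CHAR_BIT))
#define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

int main(void)
{
    DECLARE_BITMAP(os, 3);      /* one long holds all 3 bits */

    os[0] = (1UL << 3) - 1;     /* bitmap_fill(os, 3): bits 0..2 set */
    printf("0x%lx\n", os[0]);   /* 0x7 */
    return 0;
}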
+diff --git a/drivers/iio/chemical/bme680_core.c b/drivers/iio/chemical/bme680_core.c
+index 500f56834b01f6..a6bf689833dad7 100644
+--- a/drivers/iio/chemical/bme680_core.c
++++ b/drivers/iio/chemical/bme680_core.c
+@@ -10,6 +10,7 @@
+ */
+ #include <linux/acpi.h>
+ #include <linux/bitfield.h>
++#include <linux/cleanup.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/module.h>
+@@ -52,6 +53,7 @@ struct bme680_calib {
+ struct bme680_data {
+ struct regmap *regmap;
+ struct bme680_calib bme680;
++ struct mutex lock; /* Protect multiple serial R/W ops to device. */
+ u8 oversampling_temp;
+ u8 oversampling_press;
+ u8 oversampling_humid;
+@@ -827,6 +829,8 @@ static int bme680_read_raw(struct iio_dev *indio_dev,
+ {
+ struct bme680_data *data = iio_priv(indio_dev);
+
++ guard(mutex)(&data->lock);
++
+ switch (mask) {
+ case IIO_CHAN_INFO_PROCESSED:
+ switch (chan->type) {
+@@ -871,6 +875,8 @@ static int bme680_write_raw(struct iio_dev *indio_dev,
+ {
+ struct bme680_data *data = iio_priv(indio_dev);
+
++ guard(mutex)(&data->lock);
++
+ if (val2 != 0)
+ return -EINVAL;
+
+@@ -967,6 +973,7 @@ int bme680_core_probe(struct device *dev, struct regmap *regmap,
+ name = bme680_match_acpi_device(dev);
+
+ data = iio_priv(indio_dev);
++ mutex_init(&data->lock);
+ dev_set_drvdata(dev, indio_dev);
+ data->regmap = regmap;
+ indio_dev->name = name;
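
The bme680 change adopts linux/cleanup.h scope-based locking: guard(mutex)(&data->lock) takes the mutex and releases it automatically on every return path, removing a whole class of missed-unlock bugs in functions with many exits. The mechanism underneath is the compiler's cleanup attribute (a GCC/Clang extension); a rough userspace analogue with pthreads, assumptions and simplifications noted in the comments:

#include <pthread.h>
#include <stdio.h>

static void unlock_cleanup(pthread_mutex_t **m)
{
    pthread_mutex_unlock(*m);   /* runs when the guard leaves scope */
    puts("unlocked");
}

/* Rough analogue of the kernel's guard(mutex)(...); one guard per scope. */
#define GUARD(lock) \
    pthread_mutex_t *scope_guard \
        __attribute__((cleanup(unlock_cleanup), unused)) = \
        (pthread_mutex_lock(lock), (lock))

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int read_raw(int fail)
{
    GUARD(&lock);

    if (fail)
        return -1;      /* unlock still happens */
    puts("work done under lock");
    return 0;
}

int main(void)
{
    read_raw(0);
    read_raw(1);
    return 0;
}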
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index dd466c5fa6214f..ccbebe5b66cde2 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -1081,7 +1081,6 @@ static const struct of_device_id ak8975_of_match[] = {
+ { .compatible = "asahi-kasei,ak09912", .data = &ak_def_array[AK09912] },
+ { .compatible = "ak09912", .data = &ak_def_array[AK09912] },
+ { .compatible = "asahi-kasei,ak09916", .data = &ak_def_array[AK09916] },
+- { .compatible = "ak09916", .data = &ak_def_array[AK09916] },
+ {}
+ };
+ MODULE_DEVICE_TABLE(of, ak8975_of_match);
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index 6791df64a5fe05..b7c078b7f7cfd4 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -1640,8 +1640,10 @@ int ib_cache_setup_one(struct ib_device *device)
+
+ rdma_for_each_port (device, p) {
+ err = ib_cache_update(device, p, true, true, true);
+- if (err)
++ if (err) {
++ gid_table_cleanup_one(device);
+ return err;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index 1a6339f3a63fc9..7e3a55349e1070 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -1182,7 +1182,7 @@ static int __init iw_cm_init(void)
+ if (ret)
+ return ret;
+
+- iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", 0);
++ iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", WQ_MEM_RECLAIM);
+ if (!iwcm_wq)
+ goto err_alloc;
+
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 7c757351a0166f..982e85ba211bc6 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1042,6 +1042,8 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd,
+ qplib_qp->sq.max_wqe :
+ ((qplib_qp->sq.max_wqe * qplib_qp->sq.wqe_size) /
+ sizeof(struct bnxt_qplib_sge));
++ if (_is_host_msn_table(rdev->qplib_res.dattr->dev_cap_flags2))
++ psn_nume = roundup_pow_of_two(psn_nume);
+ bytes += (psn_nume * psn_sz);
+ }
+
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 040ba2224f9ff6..b3757c6a0457a1 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -1222,6 +1222,8 @@ static int act_establish(struct c4iw_dev *dev, struct sk_buff *skb)
+ int ret;
+
+ ep = lookup_atid(t, atid);
++ if (!ep)
++ return -EINVAL;
+
+ pr_debug("ep %p tid %u snd_isn %u rcv_isn %u\n", ep, tid,
+ be32_to_cpu(req->snd_isn), be32_to_cpu(req->rcv_isn));
+@@ -2279,6 +2281,9 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
+ int ret = 0;
+
+ ep = lookup_atid(t, atid);
++ if (!ep)
++ return -EINVAL;
++
+ la = (struct sockaddr_in *)&ep->com.local_addr;
+ ra = (struct sockaddr_in *)&ep->com.remote_addr;
+ la6 = (struct sockaddr_in6 *)&ep->com.local_addr;
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index d7e1cbf9f5c26b..7f6e3eaca3bbfc 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -1544,11 +1544,31 @@ int erdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+ return ret;
+ }
+
++static enum ib_qp_state query_qp_state(struct erdma_qp *qp)
++{
++ switch (qp->attrs.state) {
++ case ERDMA_QP_STATE_IDLE:
++ return IB_QPS_INIT;
++ case ERDMA_QP_STATE_RTR:
++ return IB_QPS_RTR;
++ case ERDMA_QP_STATE_RTS:
++ return IB_QPS_RTS;
++ case ERDMA_QP_STATE_CLOSING:
++ return IB_QPS_ERR;
++ case ERDMA_QP_STATE_TERMINATE:
++ return IB_QPS_ERR;
++ case ERDMA_QP_STATE_ERROR:
++ return IB_QPS_ERR;
++ default:
++ return IB_QPS_ERR;
++ }
++}
++
+ int erdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+ {
+- struct erdma_qp *qp;
+ struct erdma_dev *dev;
++ struct erdma_qp *qp;
+
+ if (ibqp && qp_attr && qp_init_attr) {
+ qp = to_eqp(ibqp);
+@@ -1575,6 +1595,9 @@ int erdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+
+ qp_init_attr->cap = qp_attr->cap;
+
++ qp_attr->qp_state = query_qp_state(qp);
++ qp_attr->cur_qp_state = query_qp_state(qp);
++
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index 3e02c474f59fec..4fc5b9d5fea87e 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -64,8 +64,10 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ u8 tc_mode = 0;
+ int ret;
+
+- if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 && udata)
+- return -EOPNOTSUPP;
++ if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 && udata) {
++ ret = -EOPNOTSUPP;
++ goto err_out;
++ }
+
+ ah->av.port = rdma_ah_get_port_num(ah_attr);
+ ah->av.gid_index = grh->sgid_index;
+@@ -83,7 +85,7 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ ret = 0;
+
+ if (ret && grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
+- return ret;
++ goto err_out;
+
+ if (tc_mode == HNAE3_TC_MAP_MODE_DSCP &&
+ grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
+@@ -91,8 +93,10 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ else
+ ah->av.sl = rdma_ah_get_sl(ah_attr);
+
+- if (!check_sl_valid(hr_dev, ah->av.sl))
+- return -EINVAL;
++ if (!check_sl_valid(hr_dev, ah->av.sl)) {
++ ret = -EINVAL;
++ goto err_out;
++ }
+
+ memcpy(ah->av.dgid, grh->dgid.raw, HNS_ROCE_GID_SIZE);
+ memcpy(ah->av.mac, ah_attr->roce.dmac, ETH_ALEN);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 02baa853a76c9b..c7c167e2a04513 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -1041,9 +1041,9 @@ static bool hem_list_is_bottom_bt(int hopnum, int bt_level)
+ * @bt_level: base address table level
+ * @unit: ba entries per bt page
+ */
+-static u32 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
++static u64 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
+ {
+- u32 step;
++ u64 step;
+ int max;
+ int i;
+
+@@ -1079,7 +1079,7 @@ int hns_roce_hem_list_calc_root_ba(const struct hns_roce_buf_region *regions,
+ {
+ struct hns_roce_buf_region *r;
+ int total = 0;
+- int step;
++ u64 step;
+ int i;
+
+ for (i = 0; i < region_cnt; i++) {
+@@ -1110,7 +1110,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+ int ret = 0;
+ int max_ofs;
+ int level;
+- u32 step;
++ u64 step;
+ int end;
+
+ if (hopnum <= 1)
+@@ -1134,10 +1134,12 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+
+ /* config L1 bt to last bt and link them to corresponding parent */
+ for (level = 1; level < hopnum; level++) {
+- cur = hem_list_search_item(&mid_bt[level], offset);
+- if (cur) {
+- hem_ptrs[level] = cur;
+- continue;
++ if (!hem_list_is_bottom_bt(hopnum, level)) {
++ cur = hem_list_search_item(&mid_bt[level], offset);
++ if (cur) {
++ hem_ptrs[level] = cur;
++ continue;
++ }
+ }
+
+ step = hem_list_calc_ba_range(hopnum, level, unit);
+@@ -1147,7 +1149,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+ }
+
+ start_aligned = (distance / step) * step + r->offset;
+- end = min_t(int, start_aligned + step - 1, max_ofs);
++ end = min_t(u64, start_aligned + step - 1, max_ofs);
+ cur = hem_list_alloc_item(hr_dev, start_aligned, end, unit,
+ true);
+ if (!cur) {
+@@ -1235,7 +1237,7 @@ static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
+ struct hns_roce_hem_item *hem, *temp_hem;
+ int total = 0;
+ int offset;
+- int step;
++ u64 step;
+
+ step = hem_list_calc_ba_range(r->hopnum, 1, unit);
+ if (step < 1)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 621b057fb9daa6..24e906b9d3ae13 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1681,8 +1681,8 @@ static int hns_roce_hw_v2_query_counter(struct hns_roce_dev *hr_dev,
+
+ for (i = 0; i < HNS_ROCE_HW_CNT_TOTAL && i < *num_counters; i++) {
+ bd_idx = i / CNT_PER_DESC;
+- if (!(desc[bd_idx].flag & HNS_ROCE_CMD_FLAG_NEXT) &&
+- bd_idx != HNS_ROCE_HW_CNT_TOTAL / CNT_PER_DESC)
++ if (bd_idx != HNS_ROCE_HW_CNT_TOTAL / CNT_PER_DESC &&
++ !(desc[bd_idx].flag & cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT)))
+ break;
+
+ cnt_data = (__le64 *)&desc[bd_idx].data[0];
+@@ -2972,6 +2972,9 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+
+ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
+ {
++ if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
++ free_mr_exit(hr_dev);
++
+ hns_roce_function_clear(hr_dev);
+
+ if (!hr_dev->is_vf)
+@@ -4423,12 +4426,14 @@ static int config_qp_rq_buf(struct hns_roce_dev *hr_dev,
+ upper_32_bits(to_hr_hw_page_addr(mtts[0])));
+ hr_reg_clear(qpc_mask, QPC_RQ_CUR_BLK_ADDR_H);
+
+- context->rq_nxt_blk_addr = cpu_to_le32(to_hr_hw_page_addr(mtts[1]));
+- qpc_mask->rq_nxt_blk_addr = 0;
+-
+- hr_reg_write(context, QPC_RQ_NXT_BLK_ADDR_H,
+- upper_32_bits(to_hr_hw_page_addr(mtts[1])));
+- hr_reg_clear(qpc_mask, QPC_RQ_NXT_BLK_ADDR_H);
++ if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
++ context->rq_nxt_blk_addr =
++ cpu_to_le32(to_hr_hw_page_addr(mtts[1]));
++ qpc_mask->rq_nxt_blk_addr = 0;
++ hr_reg_write(context, QPC_RQ_NXT_BLK_ADDR_H,
++ upper_32_bits(to_hr_hw_page_addr(mtts[1])));
++ hr_reg_clear(qpc_mask, QPC_RQ_NXT_BLK_ADDR_H);
++ }
+
+ return 0;
+ }
+@@ -6193,6 +6198,7 @@ static irqreturn_t abnormal_interrupt_basic(struct hns_roce_dev *hr_dev,
+ struct pci_dev *pdev = hr_dev->pci_dev;
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+ const struct hnae3_ae_ops *ops = ae_dev->ops;
++ enum hnae3_reset_type reset_type;
+ irqreturn_t int_work = IRQ_NONE;
+ u32 int_en;
+
+@@ -6204,10 +6210,12 @@ static irqreturn_t abnormal_interrupt_basic(struct hns_roce_dev *hr_dev,
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG,
+ 1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S);
+
++ reset_type = hr_dev->is_vf ?
++ HNAE3_VF_FUNC_RESET : HNAE3_FUNC_RESET;
++
+ /* Set reset level for reset_event() */
+ if (ops->set_default_reset_request)
+- ops->set_default_reset_request(ae_dev,
+- HNAE3_FUNC_RESET);
++ ops->set_default_reset_request(ae_dev, reset_type);
+ if (ops->reset_event)
+ ops->reset_event(pdev, NULL);
+
+@@ -6277,7 +6285,7 @@ static u64 fmea_get_ram_res_addr(u32 res_type, __le64 *data)
+ res_type == ECC_RESOURCE_SCCC)
+ return le64_to_cpu(*data);
+
+- return le64_to_cpu(*data) << PAGE_SHIFT;
++ return le64_to_cpu(*data) << HNS_HW_PAGE_SHIFT;
+ }
+
+ static int fmea_recover_others(struct hns_roce_dev *hr_dev, u32 res_type,
+@@ -6949,9 +6957,6 @@ static void __hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+ hr_dev->state = HNS_ROCE_DEVICE_STATE_UNINIT;
+ hns_roce_handle_device_err(hr_dev);
+
+- if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
+- free_mr_exit(hr_dev);
+-
+ hns_roce_exit(hr_dev);
+ kfree(hr_dev->priv);
+ ib_dealloc_device(&hr_dev->ib_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 1de384ce4d0e15..6b03ba671ff8f3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -1460,19 +1460,19 @@ void hns_roce_lock_cqs(struct hns_roce_cq *send_cq, struct hns_roce_cq *recv_cq)
+ __acquire(&send_cq->lock);
+ __acquire(&recv_cq->lock);
+ } else if (unlikely(send_cq != NULL && recv_cq == NULL)) {
+- spin_lock_irq(&send_cq->lock);
++ spin_lock(&send_cq->lock);
+ __acquire(&recv_cq->lock);
+ } else if (unlikely(send_cq == NULL && recv_cq != NULL)) {
+- spin_lock_irq(&recv_cq->lock);
++ spin_lock(&recv_cq->lock);
+ __acquire(&send_cq->lock);
+ } else if (send_cq == recv_cq) {
+- spin_lock_irq(&send_cq->lock);
++ spin_lock(&send_cq->lock);
+ __acquire(&recv_cq->lock);
+ } else if (send_cq->cqn < recv_cq->cqn) {
+- spin_lock_irq(&send_cq->lock);
++ spin_lock(&send_cq->lock);
+ spin_lock_nested(&recv_cq->lock, SINGLE_DEPTH_NESTING);
+ } else {
+- spin_lock_irq(&recv_cq->lock);
++ spin_lock(&recv_cq->lock);
+ spin_lock_nested(&send_cq->lock, SINGLE_DEPTH_NESTING);
+ }
+ }
+@@ -1492,13 +1492,13 @@ void hns_roce_unlock_cqs(struct hns_roce_cq *send_cq,
+ spin_unlock(&recv_cq->lock);
+ } else if (send_cq == recv_cq) {
+ __release(&recv_cq->lock);
+- spin_unlock_irq(&send_cq->lock);
++ spin_unlock(&send_cq->lock);
+ } else if (send_cq->cqn < recv_cq->cqn) {
+ spin_unlock(&recv_cq->lock);
+- spin_unlock_irq(&send_cq->lock);
++ spin_unlock(&send_cq->lock);
+ } else {
+ spin_unlock(&send_cq->lock);
+- spin_unlock_irq(&recv_cq->lock);
++ spin_unlock(&recv_cq->lock);
+ }
+ }
+
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index fc0ce35da14e64..c7475c925d1b40 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -1347,7 +1347,7 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ if (attr->max_dest_rd_atomic > dev->hw_attrs.max_hw_ird) {
+ ibdev_err(&iwdev->ibdev,
+ "rd_atomic = %d, above max_hw_ird=%d\n",
+- attr->max_rd_atomic,
++ attr->max_dest_rd_atomic,
+ dev->hw_attrs.max_hw_ird);
+ return -EINVAL;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 6048b9ad13bb40..5926fd07a60212 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -550,7 +550,7 @@ static int mlx5_query_port_roce(struct ib_device *device, u32 port_num,
+ if (!ndev)
+ goto out;
+
+- if (dev->lag_active) {
++ if (mlx5_lag_is_roce(mdev) || mlx5_lag_is_sriov(mdev)) {
+ rcu_read_lock();
+ upper = netdev_master_upper_dev_get_rcu(ndev);
+ if (upper) {
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index d5eb1b726675d2..85118b7cb63dbb 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -796,6 +796,7 @@ struct mlx5_cache_ent {
+ u8 is_tmp:1;
+ u8 disabled:1;
+ u8 fill_to_high_water:1;
++ u8 tmp_cleanup_scheduled:1;
+
+ /*
+ * - limit is the low water mark for stored mkeys, 2* limit is the
+@@ -827,7 +828,6 @@ struct mlx5_mkey_cache {
+ struct mutex rb_lock;
+ struct dentry *fs_root;
+ unsigned long last_add;
+- struct delayed_work remove_ent_dwork;
+ };
+
+ struct mlx5_ib_port_resources {
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 98bd8eaa393ef7..e4db3a9569c14a 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -48,6 +48,7 @@ enum {
+ MAX_PENDING_REG_MR = 8,
+ };
+
++#define MLX5_MR_CACHE_PERSISTENT_ENTRY_MIN_DESCS 4
+ #define MLX5_UMR_ALIGN 2048
+
+ static void
+@@ -211,9 +212,9 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context)
+
+ spin_lock_irqsave(&ent->mkeys_queue.lock, flags);
+ push_mkey_locked(ent, mkey_out->mkey);
++ ent->pending--;
+ /* If we are doing fill_to_high_water then keep going. */
+ queue_adjust_cache_locked(ent);
+- ent->pending--;
+ spin_unlock_irqrestore(&ent->mkeys_queue.lock, flags);
+ kfree(mkey_out);
+ }
+@@ -527,6 +528,21 @@ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent)
+ }
+ }
+
++static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent)
++{
++ u32 mkey;
++
++ spin_lock_irq(&ent->mkeys_queue.lock);
++ while (ent->mkeys_queue.ci) {
++ mkey = pop_mkey_locked(ent);
++ spin_unlock_irq(&ent->mkeys_queue.lock);
++ mlx5_core_destroy_mkey(dev->mdev, mkey);
++ spin_lock_irq(&ent->mkeys_queue.lock);
++ }
++ ent->tmp_cleanup_scheduled = false;
++ spin_unlock_irq(&ent->mkeys_queue.lock);
++}
++
+ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ {
+ struct mlx5_ib_dev *dev = ent->dev;
+@@ -598,7 +614,11 @@ static void delayed_cache_work_func(struct work_struct *work)
+ struct mlx5_cache_ent *ent;
+
+ ent = container_of(work, struct mlx5_cache_ent, dwork.work);
+- __cache_work_func(ent);
++ /* temp entries are never filled, only cleaned */
++ if (ent->is_tmp)
++ clean_keys(ent->dev, ent);
++ else
++ __cache_work_func(ent);
+ }
+
+ static int cache_ent_key_cmp(struct mlx5r_cache_rb_key key1,
+@@ -659,6 +679,7 @@ mkey_cache_ent_from_rb_key(struct mlx5_ib_dev *dev,
+ {
+ struct rb_node *node = dev->cache.rb_root.rb_node;
+ struct mlx5_cache_ent *cur, *smallest = NULL;
++ u64 ndescs_limit;
+ int cmp;
+
+ /*
+@@ -677,10 +698,18 @@ mkey_cache_ent_from_rb_key(struct mlx5_ib_dev *dev,
+ return cur;
+ }
+
++ /*
++ * Limit the usage of mkeys larger than twice the required size while
++ * also allowing the usage of smallest cache entry for small MRs.
++ */
++ ndescs_limit = max_t(u64, rb_key.ndescs * 2,
++ MLX5_MR_CACHE_PERSISTENT_ENTRY_MIN_DESCS);
++
+ return (smallest &&
+ smallest->rb_key.access_mode == rb_key.access_mode &&
+ smallest->rb_key.access_flags == rb_key.access_flags &&
+- smallest->rb_key.ats == rb_key.ats) ?
++ smallest->rb_key.ats == rb_key.ats &&
++ smallest->rb_key.ndescs <= ndescs_limit) ?
+ smallest :
+ NULL;
+ }
+@@ -765,21 +794,6 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
+ return _mlx5_mr_cache_alloc(dev, ent, access_flags);
+ }
+
+-static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent)
+-{
+- u32 mkey;
+-
+- cancel_delayed_work(&ent->dwork);
+- spin_lock_irq(&ent->mkeys_queue.lock);
+- while (ent->mkeys_queue.ci) {
+- mkey = pop_mkey_locked(ent);
+- spin_unlock_irq(&ent->mkeys_queue.lock);
+- mlx5_core_destroy_mkey(dev->mdev, mkey);
+- spin_lock_irq(&ent->mkeys_queue.lock);
+- }
+- spin_unlock_irq(&ent->mkeys_queue.lock);
+-}
+-
+ static void mlx5_mkey_cache_debugfs_cleanup(struct mlx5_ib_dev *dev)
+ {
+ if (!mlx5_debugfs_root || dev->is_rep)
+@@ -892,10 +906,6 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev,
+ ent->limit = 0;
+
+ mlx5_mkey_cache_debugfs_add_ent(dev, ent);
+- } else {
+- mod_delayed_work(ent->dev->cache.wq,
+- &ent->dev->cache.remove_ent_dwork,
+- msecs_to_jiffies(30 * 1000));
+ }
+
+ return ent;
+@@ -906,35 +916,6 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev,
+ return ERR_PTR(ret);
+ }
+
+-static void remove_ent_work_func(struct work_struct *work)
+-{
+- struct mlx5_mkey_cache *cache;
+- struct mlx5_cache_ent *ent;
+- struct rb_node *cur;
+-
+- cache = container_of(work, struct mlx5_mkey_cache,
+- remove_ent_dwork.work);
+- mutex_lock(&cache->rb_lock);
+- cur = rb_last(&cache->rb_root);
+- while (cur) {
+- ent = rb_entry(cur, struct mlx5_cache_ent, node);
+- cur = rb_prev(cur);
+- mutex_unlock(&cache->rb_lock);
+-
+- spin_lock_irq(&ent->mkeys_queue.lock);
+- if (!ent->is_tmp) {
+- spin_unlock_irq(&ent->mkeys_queue.lock);
+- mutex_lock(&cache->rb_lock);
+- continue;
+- }
+- spin_unlock_irq(&ent->mkeys_queue.lock);
+-
+- clean_keys(ent->dev, ent);
+- mutex_lock(&cache->rb_lock);
+- }
+- mutex_unlock(&cache->rb_lock);
+-}
+-
+ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_mkey_cache *cache = &dev->cache;
+@@ -950,7 +931,6 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ mutex_init(&dev->slow_path_mutex);
+ mutex_init(&dev->cache.rb_lock);
+ dev->cache.rb_root = RB_ROOT;
+- INIT_DELAYED_WORK(&dev->cache.remove_ent_dwork, remove_ent_work_func);
+ cache->wq = alloc_ordered_workqueue("mkey_cache", WQ_MEM_RECLAIM);
+ if (!cache->wq) {
+ mlx5_ib_warn(dev, "failed to create work queue\n");
+@@ -962,7 +942,7 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ mlx5_mkey_cache_debugfs_init(dev);
+ mutex_lock(&cache->rb_lock);
+ for (i = 0; i <= mkey_cache_max_order(dev); i++) {
+- rb_key.ndescs = 1 << (i + 2);
++ rb_key.ndescs = MLX5_MR_CACHE_PERSISTENT_ENTRY_MIN_DESCS << i;
+ ent = mlx5r_cache_create_ent_locked(dev, rb_key, true);
+ if (IS_ERR(ent)) {
+ ret = PTR_ERR(ent);
+@@ -1001,7 +981,6 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
+ return;
+
+ mutex_lock(&dev->cache.rb_lock);
+- cancel_delayed_work(&dev->cache.remove_ent_dwork);
+ for (node = rb_first(root); node; node = rb_next(node)) {
+ ent = rb_entry(node, struct mlx5_cache_ent, node);
+ spin_lock_irq(&ent->mkeys_queue.lock);
+@@ -1852,8 +1831,18 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+ struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
+
+- if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr))
++ if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
++ ent = mr->mmkey.cache_ent;
++ /* upon storing to a clean temp entry - schedule its cleanup */
++ spin_lock_irq(&ent->mkeys_queue.lock);
++ if (ent->is_tmp && !ent->tmp_cleanup_scheduled) {
++ mod_delayed_work(ent->dev->cache.wq, &ent->dwork,
++ msecs_to_jiffies(30 * 1000));
++ ent->tmp_cleanup_scheduled = true;
++ }
++ spin_unlock_irq(&ent->mkeys_queue.lock);
+ return 0;
++ }
+
+ if (ent) {
+ spin_lock_irq(&ent->mkeys_queue.lock);
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index ffc31b01f69051..8823ecc84e60ee 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -224,6 +224,9 @@ int mlx5r_umr_init(struct mlx5_ib_dev *dev)
+
+ void mlx5r_umr_cleanup(struct mlx5_ib_dev *dev)
+ {
++ if (!dev->umrc.pd)
++ return;
++
+ mutex_destroy(&dev->umrc.init_lock);
+ ib_dealloc_pd(dev->umrc.pd);
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 88106cf5ce550c..84d2dfcd20af6f 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -626,6 +626,7 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ */
+ if (WARN_ON(wc->wr_cqe->done != rtrs_clt_rdma_done))
+ return;
++ clt_path->s.hb_missed_cnt = 0;
+ rtrs_from_imm(be32_to_cpu(wc->ex.imm_data),
+ &imm_type, &imm_payload);
+ if (imm_type == RTRS_IO_RSP_IMM ||
+@@ -643,7 +644,6 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ return rtrs_clt_recv_done(con, wc);
+ } else if (imm_type == RTRS_HB_ACK_IMM) {
+ WARN_ON(con->c.cid);
+- clt_path->s.hb_missed_cnt = 0;
+ clt_path->s.hb_cur_latency =
+ ktime_sub(ktime_get(), clt_path->s.hb_last_sent);
+ if (clt_path->flags & RTRS_MSG_NEW_RKEY_F)
+@@ -670,6 +670,7 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ /*
+ * Key invalidations from server side
+ */
++ clt_path->s.hb_missed_cnt = 0;
+ WARN_ON(!(wc->wc_flags & IB_WC_WITH_INVALIDATE ||
+ wc->wc_flags & IB_WC_WITH_IMM));
+ WARN_ON(wc->wr_cqe->done != rtrs_clt_rdma_done);
+@@ -2346,6 +2347,12 @@ static int init_conns(struct rtrs_clt_path *clt_path)
+ if (err)
+ goto destroy;
+ }
++
++ /*
++ * Set the cid to con_num - 1, since if we fail later, we want to stay in bounds.
++ */
++ cid = clt_path->s.con_num - 1;
++
+ err = alloc_path_reqs(clt_path);
+ if (err)
+ goto destroy;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 1d33efb8fb03be..94ac99a4f696e7 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -1229,6 +1229,7 @@ static void rtrs_srv_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ */
+ if (WARN_ON(wc->wr_cqe != &io_comp_cqe))
+ return;
++ srv_path->s.hb_missed_cnt = 0;
+ err = rtrs_post_recv_empty(&con->c, &io_comp_cqe);
+ if (err) {
+ rtrs_err(s, "rtrs_post_recv(), err: %d\n", err);
+diff --git a/drivers/input/keyboard/adp5588-keys.c b/drivers/input/keyboard/adp5588-keys.c
+index 1b0279393df4bb..5acaffb7f6e11d 100644
+--- a/drivers/input/keyboard/adp5588-keys.c
++++ b/drivers/input/keyboard/adp5588-keys.c
+@@ -627,7 +627,7 @@ static int adp5588_setup(struct adp5588_kpad *kpad)
+
+ for (i = 0; i < KEYP_MAX_EVENT; i++) {
+ ret = adp5588_read(client, KEY_EVENTA);
+- if (ret)
++ if (ret < 0)
+ return ret;
+ }
+
+diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c
+index c086dadb45e3a7..058f3470b7ae2b 100644
+--- a/drivers/input/misc/ims-pcu.c
++++ b/drivers/input/misc/ims-pcu.c
+@@ -1067,7 +1067,7 @@ static ssize_t ims_pcu_attribute_store(struct device *dev,
+ if (data_len > attr->field_length)
+ return -EINVAL;
+
+- scoped_cond_guard(mutex, return -EINTR, &pcu->cmd_mutex) {
++ scoped_cond_guard(mutex_intr, return -EINTR, &pcu->cmd_mutex) {
+ memset(field, 0, attr->field_length);
+ memcpy(field, buf, data_len);
+
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index bad238f69a7afd..34d1f07ea4c304 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1120,6 +1120,43 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ },
+ .driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ },
++ /*
++ * Some TongFang barebones have touchpad and/or keyboard issues after
++ * suspend fixable with nomux + reset + noloop + nopnp. Luckily, none of
++ * them have an external PS/2 port so this can safely be set for all of
++ * them.
++ * TongFang barebones come with board_vendor and/or system_vendor set to
++ * a different value for each individual reseller. The only somewhat
++ * universal way to identify them is by board_name.
++ */
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "GM6XGxX"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "GMxXGxX"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ },
+ /*
+ * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+ * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
+diff --git a/drivers/input/touchscreen/ilitek_ts_i2c.c b/drivers/input/touchscreen/ilitek_ts_i2c.c
+index 3eb762896345b7..5a807ad723190d 100644
+--- a/drivers/input/touchscreen/ilitek_ts_i2c.c
++++ b/drivers/input/touchscreen/ilitek_ts_i2c.c
+@@ -37,6 +37,8 @@
+ #define ILITEK_TP_CMD_GET_MCU_VER 0x61
+ #define ILITEK_TP_CMD_GET_IC_MODE 0xC0
+
++#define ILITEK_TP_I2C_REPORT_ID 0x48
++
+ #define REPORT_COUNT_ADDRESS 61
+ #define ILITEK_SUPPORT_MAX_POINT 40
+
+@@ -160,15 +162,19 @@ static int ilitek_process_and_report_v6(struct ilitek_ts_data *ts)
+ error = ilitek_i2c_write_and_read(ts, NULL, 0, 0, buf, 64);
+ if (error) {
+ dev_err(dev, "get touch info failed, err:%d\n", error);
+- goto err_sync_frame;
++ return error;
++ }
++
++ if (buf[0] != ILITEK_TP_I2C_REPORT_ID) {
++ dev_err(dev, "get touch info failed. Wrong id: 0x%02X\n", buf[0]);
++ return -EINVAL;
+ }
+
+ report_max_point = buf[REPORT_COUNT_ADDRESS];
+ if (report_max_point > ts->max_tp) {
+ dev_err(dev, "FW report max point:%d > panel info. max:%d\n",
+ report_max_point, ts->max_tp);
+- error = -EINVAL;
+- goto err_sync_frame;
++ return -EINVAL;
+ }
+
+ count = DIV_ROUND_UP(report_max_point, packet_max_point);
+@@ -178,7 +184,7 @@ static int ilitek_process_and_report_v6(struct ilitek_ts_data *ts)
+ if (error) {
+ dev_err(dev, "get touch info. failed, cnt:%d, err:%d\n",
+ count, error);
+- goto err_sync_frame;
++ return error;
+ }
+ }
+
+@@ -203,10 +209,10 @@ static int ilitek_process_and_report_v6(struct ilitek_ts_data *ts)
+ ilitek_touch_down(ts, id, x, y);
+ }
+
+-err_sync_frame:
+ input_mt_sync_frame(input);
+ input_sync(input);
+- return error;
++
++ return 0;
+ }
+
+ /* APIs of cmds for ILITEK Touch IC */
+diff --git a/drivers/interconnect/icc-clk.c b/drivers/interconnect/icc-clk.c
+index f788db15cd76a9..b956e4050f3815 100644
+--- a/drivers/interconnect/icc-clk.c
++++ b/drivers/interconnect/icc-clk.c
+@@ -87,6 +87,7 @@ struct icc_provider *icc_clk_register(struct device *dev,
+ onecell = devm_kzalloc(dev, struct_size(onecell, nodes, 2 * num_clocks), GFP_KERNEL);
+ if (!onecell)
+ return ERR_PTR(-ENOMEM);
++ onecell->num_nodes = 2 * num_clocks;
+
+ qp = devm_kzalloc(dev, struct_size(qp, clocks, num_clocks), GFP_KERNEL);
+ if (!qp)
+@@ -133,8 +134,6 @@ struct icc_provider *icc_clk_register(struct device *dev,
+ onecell->nodes[j++] = node;
+ }
+
+- onecell->num_nodes = j;
+-
+ ret = icc_provider_register(provider);
+ if (ret)
+ goto err;
+diff --git a/drivers/interconnect/qcom/sm8350.c b/drivers/interconnect/qcom/sm8350.c
+index b321c3009acbac..885a9d3f92e4d1 100644
+--- a/drivers/interconnect/qcom/sm8350.c
++++ b/drivers/interconnect/qcom/sm8350.c
+@@ -1965,6 +1965,7 @@ static struct platform_driver qnoc_driver = {
+ .driver = {
+ .name = "qnoc-sm8350",
+ .of_match_table = qnoc_of_match,
++ .sync_state = icc_sync_state,
+ },
+ };
+ module_platform_driver(qnoc_driver);
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 1074ee25064d06..05aed3cb46f1bf 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -574,20 +574,24 @@ static void v1_free_pgtable(struct io_pgtable *iop)
+ pgtable->mode > PAGE_MODE_6_LEVEL);
+
+ free_sub_pt(pgtable->root, pgtable->mode, &freelist);
++ iommu_put_pages_list(&freelist);
+
+ /* Update data structure */
+ amd_iommu_domain_clr_pt_root(dom);
+
+ /* Make changes visible to IOMMUs */
+ amd_iommu_domain_update(dom);
+-
+- iommu_put_pages_list(&freelist);
+ }
+
+ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+ {
+ struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+
++ pgtable->root = iommu_alloc_page(GFP_KERNEL);
++ if (!pgtable->root)
++ return NULL;
++ pgtable->mode = PAGE_MODE_3_LEVEL;
++
+ cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES;
+ cfg->ias = IOMMU_IN_ADDR_BIT_SIZE;
+ cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
+diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
+index 664e91c88748ea..f9227cbf75dfe0 100644
+--- a/drivers/iommu/amd/io_pgtable_v2.c
++++ b/drivers/iommu/amd/io_pgtable_v2.c
+@@ -51,7 +51,7 @@ static inline u64 set_pgtable_attr(u64 *page)
+ u64 prot;
+
+ prot = IOMMU_PAGE_PRESENT | IOMMU_PAGE_RW | IOMMU_PAGE_USER;
+- prot |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
++ prot |= IOMMU_PAGE_ACCESS;
+
+ return (iommu_virt_to_phys(page) | prot);
+ }
+@@ -362,7 +362,7 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
+ struct protection_domain *pdom = (struct protection_domain *)cookie;
+ int ias = IOMMU_IN_ADDR_BIT_SIZE;
+
+- pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_ATOMIC);
++ pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_KERNEL);
+ if (!pgtable->pgd)
+ return NULL;
+
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index b19e8c0f48fa25..1a61f14459e4fe 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -52,8 +52,6 @@
+ #define HT_RANGE_START (0xfd00000000ULL)
+ #define HT_RANGE_END (0xffffffffffULL)
+
+-#define DEFAULT_PGTABLE_LEVEL PAGE_MODE_3_LEVEL
+-
+ static DEFINE_SPINLOCK(pd_bitmap_lock);
+
+ LIST_HEAD(ioapic_map);
+@@ -1552,8 +1550,8 @@ void amd_iommu_dev_flush_pasid_pages(struct iommu_dev_data *dev_data,
+ void amd_iommu_dev_flush_pasid_all(struct iommu_dev_data *dev_data,
+ ioasid_t pasid)
+ {
+- amd_iommu_dev_flush_pasid_pages(dev_data, 0,
+- CMD_INV_IOMMU_ALL_PAGES_ADDRESS, pasid);
++ amd_iommu_dev_flush_pasid_pages(dev_data, pasid, 0,
++ CMD_INV_IOMMU_ALL_PAGES_ADDRESS);
+ }
+
+ void amd_iommu_domain_flush_complete(struct protection_domain *domain)
+@@ -2185,11 +2183,12 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
+ dev_err(dev, "Failed to initialize - trying to proceed anyway\n");
+ iommu_dev = ERR_PTR(ret);
+ iommu_ignore_device(iommu, dev);
+- } else {
+- amd_iommu_set_pci_msi_domain(dev, iommu);
+- iommu_dev = &iommu->iommu;
++ goto out_err;
+ }
+
++ amd_iommu_set_pci_msi_domain(dev, iommu);
++ iommu_dev = &iommu->iommu;
++
+ /*
+ * If IOMMU and device supports PASID then it will contain max
+ * supported PASIDs, else it will be zero.
+@@ -2201,6 +2200,7 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
+ pci_max_pasids(to_pci_dev(dev)));
+ }
+
++out_err:
+ iommu_completion_wait(iommu);
+
+ return iommu_dev;
+@@ -2265,47 +2265,17 @@ void protection_domain_free(struct protection_domain *domain)
+ if (domain->iop.pgtbl_cfg.tlb)
+ free_io_pgtable_ops(&domain->iop.iop.ops);
+
+- if (domain->iop.root)
+- iommu_free_page(domain->iop.root);
+-
+ if (domain->id)
+ domain_id_free(domain->id);
+
+ kfree(domain);
+ }
+
+-static int protection_domain_init_v1(struct protection_domain *domain, int mode)
+-{
+- u64 *pt_root = NULL;
+-
+- BUG_ON(mode < PAGE_MODE_NONE || mode > PAGE_MODE_6_LEVEL);
+-
+- if (mode != PAGE_MODE_NONE) {
+- pt_root = iommu_alloc_page(GFP_KERNEL);
+- if (!pt_root)
+- return -ENOMEM;
+- }
+-
+- domain->pd_mode = PD_MODE_V1;
+- amd_iommu_domain_set_pgtable(domain, pt_root, mode);
+-
+- return 0;
+-}
+-
+-static int protection_domain_init_v2(struct protection_domain *pdom)
+-{
+- pdom->pd_mode = PD_MODE_V2;
+- pdom->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+-
+- return 0;
+-}
+-
+ struct protection_domain *protection_domain_alloc(unsigned int type)
+ {
+ struct io_pgtable_ops *pgtbl_ops;
+ struct protection_domain *domain;
+ int pgtable;
+- int ret;
+
+ domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+ if (!domain)
+@@ -2341,18 +2311,14 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
+
+ switch (pgtable) {
+ case AMD_IOMMU_V1:
+- ret = protection_domain_init_v1(domain, DEFAULT_PGTABLE_LEVEL);
++ domain->pd_mode = PD_MODE_V1;
+ break;
+ case AMD_IOMMU_V2:
+- ret = protection_domain_init_v2(domain);
++ domain->pd_mode = PD_MODE_V2;
+ break;
+ default:
+- ret = -EINVAL;
+- break;
+- }
+-
+- if (ret)
+ goto out_err;
++ }
+
+ pgtbl_ops = alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl_cfg, domain);
+ if (!pgtbl_ops)
+@@ -2405,10 +2371,10 @@ static struct iommu_domain *do_iommu_domain_alloc(unsigned int type,
+ domain->domain.geometry.aperture_start = 0;
+ domain->domain.geometry.aperture_end = dma_max_address();
+ domain->domain.geometry.force_aperture = true;
++ domain->domain.pgsize_bitmap = domain->iop.iop.cfg.pgsize_bitmap;
+
+ if (iommu) {
+ domain->domain.type = type;
+- domain->domain.pgsize_bitmap = iommu->iommu.ops->pgsize_bitmap;
+ domain->domain.ops = iommu->iommu.ops->default_domain_ops;
+
+ if (dirty_tracking)
+@@ -2867,7 +2833,6 @@ const struct iommu_ops amd_iommu_ops = {
+ .device_group = amd_iommu_device_group,
+ .get_resv_regions = amd_iommu_get_resv_regions,
+ .is_attach_deferred = amd_iommu_is_attach_deferred,
+- .pgsize_bitmap = AMD_IOMMU_PGSIZES,
+ .def_domain_type = amd_iommu_def_domain_type,
+ .dev_enable_feat = amd_iommu_dev_enable_feature,
+ .dev_disable_feat = amd_iommu_dev_disable_feature,
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index ed2b106e02dd10..f490385c13605c 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -3062,8 +3062,8 @@ arm_smmu_domain_alloc_user(struct device *dev, u32 flags,
+ return ERR_PTR(-EOPNOTSUPP);
+
+ smmu_domain = arm_smmu_domain_alloc();
+- if (!smmu_domain)
+- return ERR_PTR(-ENOMEM);
++ if (IS_ERR(smmu_domain))
++ return ERR_CAST(smmu_domain);
+
+ smmu_domain->domain.type = IOMMU_DOMAIN_UNMANAGED;
+ smmu_domain->domain.ops = arm_smmu_ops.default_domain_ops;
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 36c6b36ad4ff74..6372f3e25c4bc2 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -282,6 +282,20 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ u32 smr;
+ int i;
+
++ /*
++ * MSM8998 LPASS SMMU reports 13 context banks, but accessing
++ * the last context bank crashes the system.
++ */
++ if (of_device_is_compatible(smmu->dev->of_node, "qcom,msm8998-smmu-v2") &&
++ smmu->num_context_banks == 13) {
++ smmu->num_context_banks = 12;
++ } else if (of_device_is_compatible(smmu->dev->of_node, "qcom,sdm630-smmu-v2")) {
++ if (smmu->num_context_banks == 21) /* SDM630 / SDM660 A2NOC SMMU */
++ smmu->num_context_banks = 7;
++ else if (smmu->num_context_banks == 14) /* SDM630 / SDM660 LPASS SMMU */
++ smmu->num_context_banks = 13;
++ }
++
+ /*
+ * Some platforms support more than the Arm SMMU architected maximum of
+ * 128 stream matching groups. For unknown reasons, the additional
+@@ -338,6 +352,19 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ return 0;
+ }
+
++static int qcom_adreno_smmuv2_cfg_probe(struct arm_smmu_device *smmu)
++{
++ /* Support for 16K pages is advertised on some SoCs, but it doesn't seem to work */
++ smmu->features &= ~ARM_SMMU_FEAT_FMT_AARCH64_16K;
++
++ /* TZ protects several last context banks, hide them from Linux */
++ if (of_device_is_compatible(smmu->dev->of_node, "qcom,sdm630-smmu-v2") &&
++ smmu->num_context_banks == 5)
++ smmu->num_context_banks = 2;
++
++ return 0;
++}
++
+ static void qcom_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
+ {
+ struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
+@@ -436,6 +463,7 @@ static const struct arm_smmu_impl sdm845_smmu_500_impl = {
+
+ static const struct arm_smmu_impl qcom_adreno_smmu_v2_impl = {
+ .init_context = qcom_adreno_smmu_init_context,
++ .cfg_probe = qcom_adreno_smmuv2_cfg_probe,
+ .def_domain_type = qcom_smmu_def_domain_type,
+ .alloc_context_bank = qcom_adreno_smmu_alloc_context_bank,
+ .write_sctlr = qcom_adreno_smmu_write_sctlr,
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index 723273440c2118..8321962b37148b 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -417,7 +417,7 @@ void arm_smmu_read_context_fault_info(struct arm_smmu_device *smmu, int idx,
+ void arm_smmu_print_context_fault_info(struct arm_smmu_device *smmu, int idx,
+ const struct arm_smmu_context_fault_info *cfi)
+ {
+- dev_dbg(smmu->dev,
++ dev_err(smmu->dev,
+ "Unhandled context fault: fsr=0x%x, iova=0x%08lx, fsynr=0x%x, cbfrsynra=0x%x, cb=%d\n",
+ cfi->fsr, cfi->iova, cfi->fsynr, cfi->cbfrsynra, idx);
+
+diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
+index aefde4443671ed..d06bf6e6c19fd2 100644
+--- a/drivers/iommu/iommufd/hw_pagetable.c
++++ b/drivers/iommu/iommufd/hw_pagetable.c
+@@ -225,7 +225,8 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
+ if ((flags & ~IOMMU_HWPT_FAULT_ID_VALID) ||
+ !user_data->len || !ops->domain_alloc_user)
+ return ERR_PTR(-EOPNOTSUPP);
+- if (parent->auto_domain || !parent->nest_parent)
++ if (parent->auto_domain || !parent->nest_parent ||
++ parent->common.domain->owner != ops)
+ return ERR_PTR(-EINVAL);
+
+ hwpt_nested = __iommufd_object_alloc(
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index 05fd9d3abf1b80..9f193c933de6ef 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -112,6 +112,7 @@ static int iopt_alloc_iova(struct io_pagetable *iopt, unsigned long *iova,
+ unsigned long page_offset = uptr % PAGE_SIZE;
+ struct interval_tree_double_span_iter used_span;
+ struct interval_tree_span_iter allowed_span;
++ unsigned long max_alignment = PAGE_SIZE;
+ unsigned long iova_alignment;
+
+ lockdep_assert_held(&iopt->iova_rwsem);
+@@ -131,6 +132,13 @@ static int iopt_alloc_iova(struct io_pagetable *iopt, unsigned long *iova,
+ roundup_pow_of_two(length),
+ 1UL << __ffs64(uptr));
+
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
++ max_alignment = HPAGE_SIZE;
++#endif
++ /* Protect against ALIGN() overflow */
++ if (iova_alignment >= max_alignment)
++ iova_alignment = max_alignment;
++
+ if (iova_alignment < iopt->iova_alignment)
+ return -EINVAL;
+
+diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
+index 222cfc11ebfd00..4a279b8f02cb7e 100644
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -1342,7 +1342,7 @@ static int iommufd_test_dirty(struct iommufd_ucmd *ucmd, unsigned int mockpt_id,
+ unsigned long page_size, void __user *uptr,
+ u32 flags)
+ {
+- unsigned long bitmap_size, i, max;
++ unsigned long i, max;
+ struct iommu_test_cmd *cmd = ucmd->cmd;
+ struct iommufd_hw_pagetable *hwpt;
+ struct mock_iommu_domain *mock;
+@@ -1363,15 +1363,14 @@ static int iommufd_test_dirty(struct iommufd_ucmd *ucmd, unsigned int mockpt_id,
+ }
+
+ max = length / page_size;
+- bitmap_size = DIV_ROUND_UP(max, BITS_PER_BYTE);
+-
+- tmp = kvzalloc(bitmap_size, GFP_KERNEL_ACCOUNT);
++ tmp = kvzalloc(DIV_ROUND_UP(max, BITS_PER_LONG) * sizeof(unsigned long),
++ GFP_KERNEL_ACCOUNT);
+ if (!tmp) {
+ rc = -ENOMEM;
+ goto out_put;
+ }
+
+- if (copy_from_user(tmp, uptr, bitmap_size)) {
++ if (copy_from_user(tmp, uptr, DIV_ROUND_UP(max, BITS_PER_BYTE))) {
+ rc = -EFAULT;
+ goto out_free;
+ }
+diff --git a/drivers/leds/leds-bd2606mvv.c b/drivers/leds/leds-bd2606mvv.c
+index 3fda712d2f8095..c1181a35d0f762 100644
+--- a/drivers/leds/leds-bd2606mvv.c
++++ b/drivers/leds/leds-bd2606mvv.c
+@@ -69,16 +69,14 @@ static const struct regmap_config bd2606mvv_regmap = {
+
+ static int bd2606mvv_probe(struct i2c_client *client)
+ {
+- struct fwnode_handle *np, *child;
+ struct device *dev = &client->dev;
+ struct bd2606mvv_priv *priv;
+ struct fwnode_handle *led_fwnodes[BD2606_MAX_LEDS] = { 0 };
+ int active_pairs[BD2606_MAX_LEDS / 2] = { 0 };
+ int err, reg;
+- int i;
++ int i, j;
+
+- np = dev_fwnode(dev);
+- if (!np)
++ if (!dev_fwnode(dev))
+ return -ENODEV;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -94,20 +92,18 @@ static int bd2606mvv_probe(struct i2c_client *client)
+
+ i2c_set_clientdata(client, priv);
+
+- fwnode_for_each_available_child_node(np, child) {
++ device_for_each_child_node_scoped(dev, child) {
+ struct bd2606mvv_led *led;
+
+ err = fwnode_property_read_u32(child, "reg", ®);
+- if (err) {
+- fwnode_handle_put(child);
++ if (err)
+ return err;
+- }
+- if (reg < 0 || reg >= BD2606_MAX_LEDS || led_fwnodes[reg]) {
+- fwnode_handle_put(child);
++
++ if (reg < 0 || reg >= BD2606_MAX_LEDS || led_fwnodes[reg])
+ return -EINVAL;
+- }
++
+ led = &priv->leds[reg];
+- led_fwnodes[reg] = child;
++ led_fwnodes[reg] = fwnode_handle_get(child);
+ active_pairs[reg / 2]++;
+ led->priv = priv;
+ led->led_no = reg;
+@@ -130,7 +126,8 @@ static int bd2606mvv_probe(struct i2c_client *client)
+ &priv->leds[i].ldev,
+ &init_data);
+ if (err < 0) {
+- fwnode_handle_put(child);
++ for (j = i; j < BD2606_MAX_LEDS; j++)
++ fwnode_handle_put(led_fwnodes[j]);
+ return dev_err_probe(dev, err,
+ "couldn't register LED %s\n",
+ priv->leds[i].ldev.name);
+diff --git a/drivers/leds/leds-gpio.c b/drivers/leds/leds-gpio.c
+index 83fcd7b6afff76..4d1612d557c841 100644
+--- a/drivers/leds/leds-gpio.c
++++ b/drivers/leds/leds-gpio.c
+@@ -150,7 +150,7 @@ static struct gpio_leds_priv *gpio_leds_create(struct device *dev)
+ {
+ struct fwnode_handle *child;
+ struct gpio_leds_priv *priv;
+- int count, ret;
++ int count, used, ret;
+
+ count = device_get_child_node_count(dev);
+ if (!count)
+@@ -159,9 +159,11 @@ static struct gpio_leds_priv *gpio_leds_create(struct device *dev)
+ priv = devm_kzalloc(dev, struct_size(priv, leds, count), GFP_KERNEL);
+ if (!priv)
+ return ERR_PTR(-ENOMEM);
++ priv->num_leds = count;
++ used = 0;
+
+ device_for_each_child_node(dev, child) {
+- struct gpio_led_data *led_dat = &priv->leds[priv->num_leds];
++ struct gpio_led_data *led_dat = &priv->leds[used];
+ struct gpio_led led = {};
+
+ /*
+@@ -197,8 +199,9 @@ static struct gpio_leds_priv *gpio_leds_create(struct device *dev)
+ /* Set gpiod label to match the corresponding LED name. */
+ gpiod_set_consumer_name(led_dat->gpiod,
+ led_dat->cdev.dev->kobj.name);
+- priv->num_leds++;
++ used++;
+ }
++ priv->num_leds = used;
+
+ return priv;
+ }
+diff --git a/drivers/leds/leds-pca995x.c b/drivers/leds/leds-pca995x.c
+index 78215dff14997c..3e56ce90b4b927 100644
+--- a/drivers/leds/leds-pca995x.c
++++ b/drivers/leds/leds-pca995x.c
+@@ -102,16 +102,14 @@ static const struct regmap_config pca995x_regmap = {
+ static int pca995x_probe(struct i2c_client *client)
+ {
+ struct fwnode_handle *led_fwnodes[PCA995X_MAX_OUTPUTS] = { 0 };
+- struct fwnode_handle *np, *child;
+ struct device *dev = &client->dev;
+ struct pca995x_chip *chip;
+ struct pca995x_led *led;
+- int i, btype, reg, ret;
++ int i, j, btype, reg, ret;
+
+ btype = (unsigned long)device_get_match_data(&client->dev);
+
+- np = dev_fwnode(dev);
+- if (!np)
++ if (!dev_fwnode(dev))
+ return -ENODEV;
+
+ chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
+@@ -125,20 +123,16 @@ static int pca995x_probe(struct i2c_client *client)
+
+ i2c_set_clientdata(client, chip);
+
+- fwnode_for_each_available_child_node(np, child) {
++ device_for_each_child_node_scoped(dev, child) {
+ ret = fwnode_property_read_u32(child, "reg", ®);
+- if (ret) {
+- fwnode_handle_put(child);
++ if (ret)
+ return ret;
+- }
+
+- if (reg < 0 || reg >= PCA995X_MAX_OUTPUTS || led_fwnodes[reg]) {
+- fwnode_handle_put(child);
++ if (reg < 0 || reg >= PCA995X_MAX_OUTPUTS || led_fwnodes[reg])
+ return -EINVAL;
+- }
+
+ led = &chip->leds[reg];
+- led_fwnodes[reg] = child;
++ led_fwnodes[reg] = fwnode_handle_get(child);
+ led->chip = chip;
+ led->led_no = reg;
+ led->ldev.brightness_set_blocking = pca995x_brightness_set;
+@@ -157,7 +151,8 @@ static int pca995x_probe(struct i2c_client *client)
+ &chip->leds[i].ldev,
+ &init_data);
+ if (ret < 0) {
+- fwnode_handle_put(child);
++ for (j = i; j < PCA995X_MAX_OUTPUTS; j++)
++ fwnode_handle_put(led_fwnodes[j]);
+ return dev_err_probe(dev, ret,
+ "Could not register LED %s\n",
+ chip->leds[i].ldev.name);
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index acff2f64f251f9..4545d934f73d31 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4717,13 +4717,18 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
+ ti->error = "Block size doesn't match the information in superblock";
+ goto bad;
+ }
+- if (!le32_to_cpu(ic->sb->journal_sections) != (ic->mode == 'I')) {
+- r = -EINVAL;
+- if (ic->mode != 'I')
++ if (ic->mode != 'I') {
++ if (!le32_to_cpu(ic->sb->journal_sections)) {
++ r = -EINVAL;
+ ti->error = "Corrupted superblock, journal_sections is 0";
+- else
++ goto bad;
++ }
++ } else {
++ if (le32_to_cpu(ic->sb->journal_sections)) {
++ r = -EINVAL;
+ ti->error = "Corrupted superblock, journal_sections is not 0";
+- goto bad;
++ goto bad;
++ }
+ }
+ /* make sure that ti->max_io_len doesn't overflow */
+ if (!ic->meta_dev) {
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index f7e9a3632eb3d9..499f8cc8a39fbf 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -496,8 +496,10 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+
+ map = dm_get_live_table(md, &srcu_idx);
+ if (unlikely(!map)) {
++ DMERR_LIMIT("%s: mapping table unavailable, erroring io",
++ dm_device_name(md));
+ dm_put_live_table(md, srcu_idx);
+- return BLK_STS_RESOURCE;
++ return BLK_STS_IOERR;
+ }
+ ti = dm_table_find_target(map, 0);
+ dm_put_live_table(md, srcu_idx);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 87bb9030343582..ff4a6b570b7644 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2030,10 +2030,15 @@ static void dm_submit_bio(struct bio *bio)
+ struct dm_table *map;
+
+ map = dm_get_live_table(md, &srcu_idx);
++ if (unlikely(!map)) {
++ DMERR_LIMIT("%s: mapping table unavailable, erroring io",
++ dm_device_name(md));
++ bio_io_error(bio);
++ goto out;
++ }
+
+- /* If suspended, or map not yet available, queue this IO for later */
+- if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
+- unlikely(!map)) {
++ /* If suspended, queue this IO for later */
++ if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
+ if (bio->bi_opf & REQ_NOWAIT)
+ bio_wouldblock_error(bio);
+ else if (bio->bi_opf & REQ_RAHEAD)
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index d3a837506a36d8..23cc77d5167633 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8668,7 +8668,6 @@ void md_write_start(struct mddev *mddev, struct bio *bi)
+ BUG_ON(mddev->ro == MD_RDONLY);
+ if (mddev->ro == MD_AUTO_READ) {
+ /* need to switch to read/write */
+- flush_work(&mddev->sync_work);
+ mddev->ro = MD_RDWR;
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+diff --git a/drivers/media/dvb-frontends/rtl2830.c b/drivers/media/dvb-frontends/rtl2830.c
+index 30d10fe4b33e34..320aa2bf99d423 100644
+--- a/drivers/media/dvb-frontends/rtl2830.c
++++ b/drivers/media/dvb-frontends/rtl2830.c
+@@ -609,7 +609,7 @@ static int rtl2830_pid_filter(struct dvb_frontend *fe, u8 index, u16 pid, int on
+ index, pid, onoff);
+
+ /* skip invalid PIDs (0x2000) */
+- if (pid > 0x1fff || index > 32)
++ if (pid > 0x1fff || index >= 32)
+ return 0;
+
+ if (onoff)
+diff --git a/drivers/media/dvb-frontends/rtl2832.c b/drivers/media/dvb-frontends/rtl2832.c
+index 5142820b1b3d97..76c3f40443b2c9 100644
+--- a/drivers/media/dvb-frontends/rtl2832.c
++++ b/drivers/media/dvb-frontends/rtl2832.c
+@@ -983,7 +983,7 @@ static int rtl2832_pid_filter(struct dvb_frontend *fe, u8 index, u16 pid,
+ index, pid, onoff, dev->slave_ts);
+
+ /* skip invalid PIDs (0x2000) */
+- if (pid > 0x1fff || index > 32)
++ if (pid > 0x1fff || index >= 32)
+ return 0;
+
+ if (onoff)
+diff --git a/drivers/media/platform/imagination/Kconfig b/drivers/media/platform/imagination/Kconfig
+index 7139ae22219b43..a302c955483dca 100644
+--- a/drivers/media/platform/imagination/Kconfig
++++ b/drivers/media/platform/imagination/Kconfig
+@@ -2,6 +2,7 @@
+ config VIDEO_E5010_JPEG_ENC
+ tristate "Imagination E5010 JPEG Encoder Driver"
+ depends on VIDEO_DEV
++ depends on ARCH_K3 || COMPILE_TEST
+ select VIDEOBUF2_DMA_CONTIG
+ select VIDEOBUF2_VMALLOC
+ select V4L2_MEM2MEM_DEV
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c
+index 73d5cef33b2abf..1e1b32faac77bc 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c
+@@ -347,11 +347,16 @@ static int vdec_h264_slice_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ return vpu_dec_reset(vpu);
+
+ fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx);
++ if (!fb) {
++ mtk_vdec_err(inst->ctx, "fb buffer is NULL");
++ return -ENOMEM;
++ }
++
+ src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+ dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
+
+- y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
+- c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
++ y_fb_dma = fb->base_y.dma_addr;
++ c_fb_dma = fb->base_c.dma_addr;
+
+ mtk_vdec_debug(inst->ctx, "+ [%d] FB y_dma=%llx c_dma=%llx va=%p",
+ inst->num_nalu, y_fb_dma, c_fb_dma, fb);
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c
+index 2d4611e7fa0b2f..1ed0ccec56655e 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c
+@@ -724,11 +724,16 @@ static int vdec_h264_slice_single_decode(void *h_vdec, struct mtk_vcodec_mem *bs
+ return vpu_dec_reset(vpu);
+
+ fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx);
++ if (!fb) {
++ mtk_vdec_err(inst->ctx, "fb buffer is NULL");
++ return -ENOMEM;
++ }
++
+ src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+ dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
+
+- y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
+- c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
++ y_fb_dma = fb->base_y.dma_addr;
++ c_fb_dma = fb->base_c.dma_addr;
+ mtk_vdec_debug(inst->ctx, "[h264-dec] [%d] y_dma=%llx c_dma=%llx",
+ inst->ctx->decoded_frame_cnt, y_fb_dma, c_fb_dma);
+
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c
+index e27e728f392e6c..232ef3bd246a3d 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c
+@@ -334,14 +334,18 @@ static int vdec_vp8_slice_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+
+ fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx);
+- dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
++ if (!fb) {
++ mtk_vdec_err(inst->ctx, "fb buffer is NULL");
++ return -ENOMEM;
++ }
+
+- y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
++ dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
++ y_fb_dma = fb->base_y.dma_addr;
+ if (inst->ctx->q_data[MTK_Q_DATA_DST].fmt->num_planes == 1)
+ c_fb_dma = y_fb_dma +
+ inst->ctx->picinfo.buf_w * inst->ctx->picinfo.buf_h;
+ else
+- c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
++ c_fb_dma = fb->base_c.dma_addr;
+
+ inst->vsi->dec.bs_dma = (u64)bs->dma_addr;
+ inst->vsi->dec.bs_sz = bs->size;
+diff --git a/drivers/media/platform/raspberrypi/pisp_be/Kconfig b/drivers/media/platform/raspberrypi/pisp_be/Kconfig
+index 38c0f8305d620d..46765a2e4c4d15 100644
+--- a/drivers/media/platform/raspberrypi/pisp_be/Kconfig
++++ b/drivers/media/platform/raspberrypi/pisp_be/Kconfig
+@@ -2,6 +2,7 @@ config VIDEO_RASPBERRYPI_PISP_BE
+ tristate "Raspberry Pi PiSP Backend (BE) ISP driver"
+ depends on V4L_PLATFORM_DRIVERS
+ depends on VIDEO_DEV
++ depends on ARCH_BCM2835 || COMPILE_TEST
+ select VIDEO_V4L2_SUBDEV_API
+ select MEDIA_CONTROLLER
+ select VIDEOBUF2_DMA_CONTIG
+diff --git a/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c b/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
+index e68fcdaea207aa..c7fdee347ac8ae 100644
+--- a/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
++++ b/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
+@@ -865,6 +865,7 @@ static const struct of_device_id rzg2l_csi2_of_table[] = {
+ { .compatible = "renesas,rzg2l-csi2", },
+ { /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, rzg2l_csi2_of_table);
+
+ static struct platform_driver rzg2l_csi2_pdrv = {
+ .remove_new = rzg2l_csi2_remove,
+diff --git a/drivers/media/tuners/tuner-i2c.h b/drivers/media/tuners/tuner-i2c.h
+index 07aeead0644a31..724952e001cd13 100644
+--- a/drivers/media/tuners/tuner-i2c.h
++++ b/drivers/media/tuners/tuner-i2c.h
+@@ -133,10 +133,8 @@ static inline int tuner_i2c_xfer_send_recv(struct tuner_i2c_props *props,
+ } \
+ if (0 == __ret) { \
+ state = kzalloc(sizeof(type), GFP_KERNEL); \
+- if (!state) { \
+- __ret = -ENOMEM; \
++ if (NULL == state) \
+ goto __fail; \
+- } \
+ state->i2c_props.addr = i2caddr; \
+ state->i2c_props.adap = i2cadap; \
+ state->i2c_props.name = devname; \
+diff --git a/drivers/mtd/devices/powernv_flash.c b/drivers/mtd/devices/powernv_flash.c
+index 66044f4f5bade8..10cd1d9b48859d 100644
+--- a/drivers/mtd/devices/powernv_flash.c
++++ b/drivers/mtd/devices/powernv_flash.c
+@@ -207,6 +207,9 @@ static int powernv_flash_set_driver_info(struct device *dev,
+ * get them
+ */
+ mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
++ if (!mtd->name)
++ return -ENOMEM;
++
+ mtd->type = MTD_NORFLASH;
+ mtd->flags = MTD_WRITEABLE;
+ mtd->size = size;
+diff --git a/drivers/mtd/devices/slram.c b/drivers/mtd/devices/slram.c
+index 28131a127d065e..8297b366a06699 100644
+--- a/drivers/mtd/devices/slram.c
++++ b/drivers/mtd/devices/slram.c
+@@ -296,10 +296,12 @@ static int __init init_slram(void)
+ T("slram: devname = %s\n", devname);
+ if ((!map) || (!(devstart = strsep(&map, ",")))) {
+ E("slram: No devicestart specified.\n");
++ break;
+ }
+ T("slram: devstart = %s\n", devstart);
+ if ((!map) || (!(devlength = strsep(&map, ",")))) {
+ E("slram: No devicelength / -end specified.\n");
++ break;
+ }
+ T("slram: devlength = %s\n", devlength);
+ if (parse_cmdline(devname, devstart, devlength) != 0) {
+diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
+index 17477bb2d48ff0..586868b4139f51 100644
+--- a/drivers/mtd/nand/raw/mtk_nand.c
++++ b/drivers/mtd/nand/raw/mtk_nand.c
+@@ -1429,16 +1429,32 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
+ return 0;
+ }
+
++static void mtk_nfc_nand_chips_cleanup(struct mtk_nfc *nfc)
++{
++ struct mtk_nfc_nand_chip *mtk_chip;
++ struct nand_chip *chip;
++ int ret;
++
++ while (!list_empty(&nfc->chips)) {
++ mtk_chip = list_first_entry(&nfc->chips,
++ struct mtk_nfc_nand_chip, node);
++ chip = &mtk_chip->nand;
++ ret = mtd_device_unregister(nand_to_mtd(chip));
++ WARN_ON(ret);
++ nand_cleanup(chip);
++ list_del(&mtk_chip->node);
++ }
++}
++
+ static int mtk_nfc_nand_chips_init(struct device *dev, struct mtk_nfc *nfc)
+ {
+ struct device_node *np = dev->of_node;
+- struct device_node *nand_np;
+ int ret;
+
+- for_each_child_of_node(np, nand_np) {
++ for_each_child_of_node_scoped(np, nand_np) {
+ ret = mtk_nfc_nand_chip_init(dev, nfc, nand_np);
+ if (ret) {
+- of_node_put(nand_np);
++ mtk_nfc_nand_chips_cleanup(nfc);
+ return ret;
+ }
+ }
+@@ -1570,20 +1586,8 @@ static int mtk_nfc_probe(struct platform_device *pdev)
+ static void mtk_nfc_remove(struct platform_device *pdev)
+ {
+ struct mtk_nfc *nfc = platform_get_drvdata(pdev);
+- struct mtk_nfc_nand_chip *mtk_chip;
+- struct nand_chip *chip;
+- int ret;
+-
+- while (!list_empty(&nfc->chips)) {
+- mtk_chip = list_first_entry(&nfc->chips,
+- struct mtk_nfc_nand_chip, node);
+- chip = &mtk_chip->nand;
+- ret = mtd_device_unregister(nand_to_mtd(chip));
+- WARN_ON(ret);
+- nand_cleanup(chip);
+- list_del(&mtk_chip->node);
+- }
+
++ mtk_nfc_nand_chips_cleanup(nfc);
+ mtk_ecc_release(nfc->ecc);
+ }
+
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 7aca0544fb29c9..e80992b4f9de9a 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -68,6 +68,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ __be16 proto;
+ void *oiph;
+ int err;
++ int nh;
+
+ bareudp = rcu_dereference_sk_user_data(sk);
+ if (!bareudp)
+@@ -148,10 +149,25 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ }
+ skb_dst_set(skb, &tun_dst->dst);
+ skb->dev = bareudp->dev;
+- oiph = skb_network_header(skb);
+- skb_reset_network_header(skb);
+ skb_reset_mac_header(skb);
+
++ /* Save offset of outer header relative to skb->head,
++ * because we are going to reset the network header to the inner header
++ * and might change skb->head.
++ */
++ nh = skb_network_header(skb) - skb->head;
++
++ skb_reset_network_header(skb);
++
++ if (!pskb_inet_may_pull(skb)) {
++ DEV_STATS_INC(bareudp->dev, rx_length_errors);
++ DEV_STATS_INC(bareudp->dev, rx_errors);
++ goto drop;
++ }
++
++ /* Get the outer header. */
++ oiph = skb->head + nh;
++
+ if (!ipv6_mod_enabled() || family == AF_INET)
+ err = IP_ECN_decapsulate(oiph, skb);
+ else
+@@ -301,6 +317,9 @@ static int bareudp_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ __be32 saddr;
+ int err;
+
++ if (!skb_vlan_inet_prepare(skb, skb->protocol != htons(ETH_P_TEB)))
++ return -EINVAL;
++
+ if (!sock)
+ return -ESHUTDOWN;
+
+@@ -368,6 +387,9 @@ static int bareudp6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ __be16 sport;
+ int err;
+
++ if (!skb_vlan_inet_prepare(skb, skb->protocol != htons(ETH_P_TEB)))
++ return -EINVAL;
++
+ if (!sock)
+ return -ESHUTDOWN;
+
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index bb9c3d6ef43592..e20bee1bdffd72 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -5536,9 +5536,9 @@ bond_xdp_get_xmit_slave(struct net_device *bond_dev, struct xdp_buff *xdp)
+ break;
+
+ default:
+- /* Should never happen. Mode guarded by bond_xdp_check() */
+- netdev_err(bond_dev, "Unknown bonding mode %d for xdp xmit\n", BOND_MODE(bond));
+- WARN_ON_ONCE(1);
++ if (net_ratelimit())
++ netdev_err(bond_dev, "Unknown bonding mode %d for xdp xmit\n",
++ BOND_MODE(bond));
+ return NULL;
+ }
+
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 012c3d22b01dd3..7fec04b024d5b8 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1763,11 +1763,7 @@ static int m_can_close(struct net_device *dev)
+
+ netif_stop_queue(dev);
+
+- if (!cdev->is_peripheral)
+- napi_disable(&cdev->napi);
+-
+ m_can_stop(dev);
+- m_can_clk_stop(cdev);
+ free_irq(dev->irq, dev);
+
+ m_can_clean(dev);
+@@ -1776,10 +1772,13 @@ static int m_can_close(struct net_device *dev)
+ destroy_workqueue(cdev->tx_wq);
+ cdev->tx_wq = NULL;
+ can_rx_offload_disable(&cdev->offload);
++ } else {
++ napi_disable(&cdev->napi);
+ }
+
+ close_candev(dev);
+
++ m_can_clk_stop(cdev);
+ phy_power_off(cdev->transceiver);
+
+ return 0;
+@@ -2030,6 +2029,8 @@ static int m_can_open(struct net_device *dev)
+
+ if (cdev->is_peripheral)
+ can_rx_offload_enable(&cdev->offload);
++ else
++ napi_enable(&cdev->napi);
+
+ /* register interrupt handler */
+ if (cdev->is_peripheral) {
+@@ -2063,9 +2064,6 @@ static int m_can_open(struct net_device *dev)
+ if (err)
+ goto exit_start_fail;
+
+- if (!cdev->is_peripheral)
+- napi_enable(&cdev->napi);
+-
+ netif_start_queue(dev);
+
+ return 0;
+@@ -2079,6 +2077,8 @@ static int m_can_open(struct net_device *dev)
+ out_wq_fail:
+ if (cdev->is_peripheral)
+ can_rx_offload_disable(&cdev->offload);
++ else
++ napi_disable(&cdev->napi);
+ close_candev(dev);
+ exit_disable_clks:
+ m_can_clk_stop(cdev);
+diff --git a/drivers/net/can/usb/esd_usb.c b/drivers/net/can/usb/esd_usb.c
+index 41a0e4261d15e9..03ad10b01867d8 100644
+--- a/drivers/net/can/usb/esd_usb.c
++++ b/drivers/net/can/usb/esd_usb.c
+@@ -3,7 +3,7 @@
+ * CAN driver for esd electronics gmbh CAN-USB/2, CAN-USB/3 and CAN-USB/Micro
+ *
+ * Copyright (C) 2010-2012 esd electronic system design gmbh, Matthias Fuchs <socketcan@esd.eu>
+- * Copyright (C) 2022-2023 esd electronics gmbh, Frank Jungclaus <frank.jungclaus@esd.eu>
++ * Copyright (C) 2022-2024 esd electronics gmbh, Frank Jungclaus <frank.jungclaus@esd.eu>
+ */
+
+ #include <linux/can.h>
+@@ -1116,9 +1116,6 @@ static int esd_usb_3_set_bittiming(struct net_device *netdev)
+ if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
+ flags |= ESD_USB_3_BAUDRATE_FLAG_LOM;
+
+- if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
+- flags |= ESD_USB_3_BAUDRATE_FLAG_TRS;
+-
+ baud_x->nom.brp = cpu_to_le16(nom_bt->brp & (nom_btc->brp_max - 1));
+ baud_x->nom.sjw = cpu_to_le16(nom_bt->sjw & (nom_btc->sjw_max - 1));
+ baud_x->nom.tseg1 = cpu_to_le16((nom_bt->prop_seg + nom_bt->phase_seg1)
+@@ -1219,7 +1216,6 @@ static int esd_usb_probe_one_net(struct usb_interface *intf, int index)
+ switch (le16_to_cpu(dev->udev->descriptor.idProduct)) {
+ case ESD_USB_CANUSB3_PRODUCT_ID:
+ priv->can.clock.freq = ESD_USB_3_CAN_CLOCK;
+- priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
+ priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
+ priv->can.bittiming_const = &esd_usb_3_nom_bittiming_const;
+ priv->can.data_bittiming_const = &esd_usb_3_data_bittiming_const;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 5c45f42232d326..f04f42ea60c0f7 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -2305,12 +2305,11 @@ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+
+ snprintf(v->name, sizeof(v->name), "%s-rxtx%d",
+ priv->ndev->name, i);
+- err = request_irq(irq, enetc_msix, 0, v->name, v);
++ err = request_irq(irq, enetc_msix, IRQF_NO_AUTOEN, v->name, v);
+ if (err) {
+ dev_err(priv->dev, "request_irq() failed!\n");
+ goto irq_err;
+ }
+- disable_irq(irq);
+
+ v->tbier_base = hw->reg + ENETC_BDR(TX, 0, ENETC_TBIER);
+ v->rbier = hw->reg + ENETC_BDR(RX, i, ENETC_RBIER);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+index fe64febf7436f4..991b477c873f5c 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+@@ -371,6 +371,10 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ IDPF_TX_DESCS_FOR_CTX)) {
+ idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+
++ u64_stats_update_begin(&tx_q->stats_sync);
++ u64_stats_inc(&tx_q->q_stats.q_busy);
++ u64_stats_update_end(&tx_q->stats_sync);
++
+ return NETDEV_TX_BUSY;
+ }
+
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 585c3dadd9bfac..fd2d14ba754510 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -2157,29 +2157,6 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+ }
+
+-/**
+- * idpf_tx_maybe_stop_common - 1st level check for common Tx stop conditions
+- * @tx_q: the queue to be checked
+- * @size: number of descriptors we want to assure is available
+- *
+- * Returns 0 if stop is not needed
+- */
+-int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size)
+-{
+- struct netdev_queue *nq;
+-
+- if (likely(IDPF_DESC_UNUSED(tx_q) >= size))
+- return 0;
+-
+- u64_stats_update_begin(&tx_q->stats_sync);
+- u64_stats_inc(&tx_q->q_stats.q_busy);
+- u64_stats_update_end(&tx_q->stats_sync);
+-
+- nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+-
+- return netif_txq_maybe_stop(nq, IDPF_DESC_UNUSED(tx_q), size, size);
+-}
+-
+ /**
+ * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
+ * @tx_q: the queue to be checked
+@@ -2191,7 +2168,7 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
+ unsigned int descs_needed)
+ {
+ if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
+- goto splitq_stop;
++ goto out;
+
+ /* If there are too many outstanding completions expected on the
+ * completion queue, stop the TX queue to give the device some time to
+@@ -2210,10 +2187,12 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
+ return 0;
+
+ splitq_stop:
++ netif_stop_subqueue(tx_q->netdev, tx_q->idx);
++
++out:
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_inc(&tx_q->q_stats.q_busy);
+ u64_stats_update_end(&tx_q->stats_sync);
+- netif_stop_subqueue(tx_q->netdev, tx_q->idx);
+
+ return -EBUSY;
+ }
+@@ -2236,7 +2215,11 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ tx_q->next_to_use = val;
+
+- idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED);
++ if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
++ u64_stats_update_begin(&tx_q->stats_sync);
++ u64_stats_inc(&tx_q->q_stats.q_busy);
++ u64_stats_update_end(&tx_q->stats_sync);
++ }
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+index 6215dbee554651..90415eb309cc9c 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+@@ -1064,7 +1064,6 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
+ struct idpf_tx_buf *first, u16 ring_idx);
+ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+ struct sk_buff *skb);
+-int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size);
+ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
+ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ struct idpf_tx_queue *tx_q);
+@@ -1073,4 +1072,12 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
+ u16 cleaned_count);
+ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
+
++static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
++ u32 needed)
++{
++ return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
++ IDPF_DESC_UNUSED(tx_q),
++ needed, needed);
++}
++
+ #endif /* !_IDPF_TXRX_H_ */
+diff --git a/drivers/net/ethernet/meta/Kconfig b/drivers/net/ethernet/meta/Kconfig
+index c002ede3640202..85519690b83778 100644
+--- a/drivers/net/ethernet/meta/Kconfig
++++ b/drivers/net/ethernet/meta/Kconfig
+@@ -23,6 +23,8 @@ config FBNIC
+ depends on !S390
+ depends on MAX_SKB_FRAGS < 22
+ depends on PCI_MSI
++ select NET_DEVLINK
++ select PAGE_POOL
+ select PHYLINK
+ help
+ This driver supports Meta Platforms Host Network Interface.
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+index 0ed4c9fff5d807..72f88ae7815f44 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+@@ -1012,14 +1012,14 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
+ nv->fbd = fbd;
+ nv->v_idx = v_idx;
+
+- /* Record IRQ to NAPI struct */
+- netif_napi_set_irq(&nv->napi,
+- pci_irq_vector(to_pci_dev(fbd->dev), nv->v_idx));
+-
+ /* Tie napi to netdev */
+ list_add(&nv->napis, &fbn->napis);
+ netif_napi_add(fbn->netdev, &nv->napi, fbnic_poll);
+
++ /* Record IRQ to NAPI struct */
++ netif_napi_set_irq(&nv->napi,
++ pci_irq_vector(to_pci_dev(fbd->dev), nv->v_idx));
++
+ /* Tie nv back to PCIe dev */
+ nv->dev = fbd->dev;
+
+diff --git a/drivers/net/ethernet/realtek/r8169_phy_config.c b/drivers/net/ethernet/realtek/r8169_phy_config.c
+index 1f74317beb8878..e1e5d9672ae44b 100644
+--- a/drivers/net/ethernet/realtek/r8169_phy_config.c
++++ b/drivers/net/ethernet/realtek/r8169_phy_config.c
+@@ -1060,6 +1060,7 @@ static void rtl8125a_2_hw_phy_config(struct rtl8169_private *tp,
+ phy_modify_paged(phydev, 0xa86, 0x15, 0x0001, 0x0000);
+ rtl8168g_enable_gphy_10m(phydev);
+
++ rtl8168g_disable_aldps(phydev);
+ rtl8125a_config_eee_phy(phydev);
+ }
+
+@@ -1099,6 +1100,7 @@ static void rtl8125b_hw_phy_config(struct rtl8169_private *tp,
+ phy_modify_paged(phydev, 0xbf8, 0x12, 0xe000, 0xa000);
+
+ rtl8125_legacy_force_mode(phydev);
++ rtl8168g_disable_aldps(phydev);
+ rtl8125b_config_eee_phy(phydev);
+ }
+
+diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
+index 9893c91af1050f..a7de5cf6b31743 100644
+--- a/drivers/net/ethernet/renesas/ravb.h
++++ b/drivers/net/ethernet/renesas/ravb.h
+@@ -1052,6 +1052,7 @@ struct ravb_hw_info {
+ netdev_features_t net_features;
+ int stats_len;
+ u32 tccr_mask;
++ u32 tx_max_frame_size;
+ u32 rx_max_frame_size;
+ u32 rx_buffer_size;
+ u32 rx_desc_size;
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index c02fb296bf7d7a..6b82df11fe8d09 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -555,8 +555,16 @@ static void ravb_emac_init_gbeth(struct net_device *ndev)
+
+ static void ravb_emac_init_rcar(struct net_device *ndev)
+ {
+- /* Receive frame limit set register */
+- ravb_write(ndev, ndev->mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN, RFLR);
++ struct ravb_private *priv = netdev_priv(ndev);
++
++ /* Set receive frame length
++ *
++ * The length set here describes the frame from the destination address
++ * up to and including the CRC data. However only the frame data,
++ * excluding the CRC, are transferred to memory. To allow for the
++ * largest frames add the CRC length to the maximum Rx descriptor size.
++ */
++ ravb_write(ndev, priv->info->rx_max_frame_size + ETH_FCS_LEN, RFLR);
+
+ /* EMAC Mode: PAUSE prohibition; Duplex; RX Checksum; TX; RX */
+ ravb_write(ndev, ECMR_ZPF | ECMR_DM |
+@@ -2674,6 +2682,7 @@ static const struct ravb_hw_info ravb_gen2_hw_info = {
+ .net_features = NETIF_F_RXCSUM,
+ .stats_len = ARRAY_SIZE(ravb_gstrings_stats),
+ .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
++ .tx_max_frame_size = SZ_2K,
+ .rx_max_frame_size = SZ_2K,
+ .rx_buffer_size = SZ_2K +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+@@ -2696,6 +2705,7 @@ static const struct ravb_hw_info ravb_gen3_hw_info = {
+ .net_features = NETIF_F_RXCSUM,
+ .stats_len = ARRAY_SIZE(ravb_gstrings_stats),
+ .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
++ .tx_max_frame_size = SZ_2K,
+ .rx_max_frame_size = SZ_2K,
+ .rx_buffer_size = SZ_2K +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+@@ -2721,6 +2731,7 @@ static const struct ravb_hw_info ravb_gen4_hw_info = {
+ .net_features = NETIF_F_RXCSUM,
+ .stats_len = ARRAY_SIZE(ravb_gstrings_stats),
+ .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
++ .tx_max_frame_size = SZ_2K,
+ .rx_max_frame_size = SZ_2K,
+ .rx_buffer_size = SZ_2K +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+@@ -2770,6 +2781,7 @@ static const struct ravb_hw_info gbeth_hw_info = {
+ .net_features = NETIF_F_RXCSUM | NETIF_F_HW_CSUM,
+ .stats_len = ARRAY_SIZE(ravb_gstrings_stats_gbeth),
+ .tccr_mask = TCCR_TSRQ0,
++ .tx_max_frame_size = 1522,
+ .rx_max_frame_size = SZ_8K,
+ .rx_buffer_size = SZ_2K,
+ .rx_desc_size = sizeof(struct ravb_rx_desc),
+@@ -2981,7 +2993,7 @@ static int ravb_probe(struct platform_device *pdev)
+ priv->avb_link_active_low =
+ of_property_read_bool(np, "renesas,ether-link-active-low");
+
+- ndev->max_mtu = info->rx_max_frame_size -
++ ndev->max_mtu = info->tx_max_frame_size -
+ (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN);
+ ndev->min_mtu = ETH_MIN_MTU;
+
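With tx_max_frame_size in place, max_mtu follows from subtracting the Ethernet overhead. A quick standalone check of the arithmetic, using the kernel's constants (ETH_HLEN = 14, VLAN_HLEN = 4, ETH_FCS_LEN = 4, SZ_2K = 2048):

    #include <assert.h>

    #define ETH_HLEN    14
    #define VLAN_HLEN    4
    #define ETH_FCS_LEN  4

    int main(void)
    {
        /* GbEth: 1522-byte Tx frames yield the classic 1500-byte MTU */
        assert(1522 - (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN) == 1500);
        /* R-Car Gen2..4: SZ_2K Tx frames yield a 2026-byte max MTU */
        assert(2048 - (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN) == 2026);
        return 0;
    }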
+diff --git a/drivers/net/ethernet/seeq/ether3.c b/drivers/net/ethernet/seeq/ether3.c
+index c672f92d65e976..9319a2675e7b65 100644
+--- a/drivers/net/ethernet/seeq/ether3.c
++++ b/drivers/net/ethernet/seeq/ether3.c
+@@ -847,9 +847,11 @@ static void ether3_remove(struct expansion_card *ec)
+ {
+ struct net_device *dev = ecard_get_drvdata(ec);
+
++ ether3_outw(priv(dev)->regs.config2 |= CFG2_CTRLO, REG_CONFIG2);
+ ecard_set_drvdata(ec, NULL);
+
+ unregister_netdev(dev);
++ del_timer_sync(&priv(dev)->timer);
+ free_netdev(dev);
+ ecard_release_resources(ec);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index 9e40c28d453ab1..ee3604f58def52 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -35,6 +35,9 @@ static int loongson_default_data(struct plat_stmmacenet_data *plat)
+ /* Disable RX queues routing by default */
+ plat->rx_queues_cfg[0].pkt_route = 0x0;
+
++ plat->clk_ref_rate = 125000000;
++ plat->clk_ptp_rate = 125000000;
++
+ /* Default to phy auto-detection */
+ plat->phy_addr = -1;
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index f3a1b179aaeaca..95d3d1081727fa 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2022,7 +2022,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
+ rx_q->queue_index = queue;
+ rx_q->priv_data = priv;
+
+- pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
++ pp_params.flags = PP_FLAG_DMA_MAP | (xdp_prog ? PP_FLAG_DMA_SYNC_DEV : 0);
+ pp_params.pool_size = dma_conf->dma_rx_size;
+ num_pages = DIV_ROUND_UP(dma_conf->dma_buf_sz, PAGE_SIZE);
+ pp_params.order = ilog2(num_pages);
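PP_FLAG_DMA_SYNC_DEV asks the page pool to dma-sync buffers for the device when they are recycled, which is only needed when the CPU may have dirtied them, as an XDP program can. A hedged sketch of the resulting pool setup, following include/net/page_pool/types.h (ring_size and the device pointer are placeholders, and error checking is elided):

    struct page_pool_params pp_params = {
        .flags     = PP_FLAG_DMA_MAP |
                     (xdp_prog ? PP_FLAG_DMA_SYNC_DEV : 0),
        .pool_size = ring_size,                 /* assumption: ring depth */
        .dev       = priv->device,
        .dma_dir   = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
    };
    struct page_pool *pool = page_pool_create(&pp_params);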
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 9eb300fc359096..5dbfee4aee43ce 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -674,15 +674,15 @@ static int axienet_device_reset(struct net_device *ndev)
+ *
+ * Would either be called after a successful transmit operation, or after
+ * there was an error when setting up the chain.
+- * Returns the number of descriptors handled.
++ * Returns the number of packets handled.
+ */
+ static int axienet_free_tx_chain(struct axienet_local *lp, u32 first_bd,
+ int nr_bds, bool force, u32 *sizep, int budget)
+ {
+ struct axidma_bd *cur_p;
+ unsigned int status;
++ int i, packets = 0;
+ dma_addr_t phys;
+- int i;
+
+ for (i = 0; i < nr_bds; i++) {
+ cur_p = &lp->tx_bd_v[(first_bd + i) % lp->tx_bd_num];
+@@ -701,8 +701,10 @@ static int axienet_free_tx_chain(struct axienet_local *lp, u32 first_bd,
+ (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
+ DMA_TO_DEVICE);
+
+- if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK))
++ if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+ napi_consume_skb(cur_p->skb, budget);
++ packets++;
++ }
+
+ cur_p->app0 = 0;
+ cur_p->app1 = 0;
+@@ -718,7 +720,13 @@ static int axienet_free_tx_chain(struct axienet_local *lp, u32 first_bd,
+ *sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
+ }
+
+- return i;
++ if (!force) {
++ lp->tx_bd_ci += i;
++ if (lp->tx_bd_ci >= lp->tx_bd_num)
++ lp->tx_bd_ci %= lp->tx_bd_num;
++ }
++
++ return packets;
+ }
+
+ /**
+@@ -891,13 +899,10 @@ static int axienet_tx_poll(struct napi_struct *napi, int budget)
+ u32 size = 0;
+ int packets;
+
+- packets = axienet_free_tx_chain(lp, lp->tx_bd_ci, budget, false, &size, budget);
++ packets = axienet_free_tx_chain(lp, lp->tx_bd_ci, lp->tx_bd_num, false,
++ &size, budget);
+
+ if (packets) {
+- lp->tx_bd_ci += packets;
+- if (lp->tx_bd_ci >= lp->tx_bd_num)
+- lp->tx_bd_ci %= lp->tx_bd_num;
+-
+ u64_stats_update_begin(&lp->tx_stat_sync);
+ u64_stats_add(&lp->tx_packets, packets);
+ u64_stats_add(&lp->tx_bytes, size);
+@@ -1222,9 +1227,10 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
+ u32 cr = lp->tx_dma_cr;
+
+ cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
+- axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
+-
+- napi_schedule(&lp->napi_tx);
++ if (napi_schedule_prep(&lp->napi_tx)) {
++ axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
++ __napi_schedule(&lp->napi_tx);
++ }
+ }
+
+ return IRQ_HANDLED;
+@@ -1266,9 +1272,10 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
+ u32 cr = lp->rx_dma_cr;
+
+ cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
+- axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
+-
+- napi_schedule(&lp->napi_rx);
++ if (napi_schedule_prep(&lp->napi_rx)) {
++ axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
++ __napi_schedule(&lp->napi_rx);
++ }
+ }
+
+ return IRQ_HANDLED;
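Both axienet IRQ handlers now follow the usual NAPI idiom: claim the instance first, and only then mask the device interrupt. If napi_schedule_prep() fails, the poller is already scheduled and will re-enable interrupts itself, so masking unconditionally could leave the device silenced with nobody committed to polling. Generic form of the pattern, with example_mask_irqs() as a hypothetical stand-in:

    static irqreturn_t example_irq(int irq, void *data)
    {
        struct example_priv *lp = data;

        if (napi_schedule_prep(&lp->napi)) {
            example_mask_irqs(lp);          /* hypothetical helper */
            __napi_schedule(&lp->napi);
        }
        return IRQ_HANDLED;
    }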
+diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
+index 16789cd446e9e4..3f4187102e7737 100644
+--- a/drivers/net/netkit.c
++++ b/drivers/net/netkit.c
+@@ -65,6 +65,7 @@ static struct netkit *netkit_priv(const struct net_device *dev)
+
+ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
++ struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
+ struct netkit *nk = netkit_priv(dev);
+ enum netkit_action ret = READ_ONCE(nk->policy);
+ netdev_tx_t ret_dev = NET_XMIT_SUCCESS;
+@@ -72,6 +73,7 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
+ struct net_device *peer;
+ int len = skb->len;
+
++ bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+ rcu_read_lock();
+ peer = rcu_dereference(nk->peer);
+ if (unlikely(!peer || !(peer->flags & IFF_UP) ||
+@@ -110,6 +112,7 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
+ break;
+ }
+ rcu_read_unlock();
++ bpf_net_ctx_clear(bpf_net_ctx);
+ return ret_dev;
+ }
+
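The netkit hunk brackets the transmit path with bpf_net_ctx_set()/bpf_net_ctx_clear(), giving BPF programs a per-invocation context even on paths entered outside the normal softirq entry points. The pairing, reduced to its skeleton:

    struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;

    bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
    /* ... run BPF programs / forward the skb ... */
    bpf_net_ctx_clear(bpf_net_ctx);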
+diff --git a/drivers/net/phy/aquantia/aquantia_firmware.c b/drivers/net/phy/aquantia/aquantia_firmware.c
+index 524627a36c6fc2..dac6464b5fe2e3 100644
+--- a/drivers/net/phy/aquantia/aquantia_firmware.c
++++ b/drivers/net/phy/aquantia/aquantia_firmware.c
+@@ -353,26 +353,32 @@ int aqr_firmware_load(struct phy_device *phydev)
+ {
+ int ret;
+
+- ret = aqr_wait_reset_complete(phydev);
+- if (ret)
+- return ret;
+-
+- /* Check if the firmware is not already loaded by pooling
+- * the current version returned by the PHY. If 0 is returned,
+- * no firmware is loaded.
++ /* Check if the firmware is not already loaded by polling
++ * the current version returned by the PHY.
+ */
+- ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VEND1_GLOBAL_FW_ID);
+- if (ret > 0)
+- goto exit;
+-
+- ret = aqr_firmware_load_nvmem(phydev);
+- if (!ret)
+- goto exit;
+-
+- ret = aqr_firmware_load_fs(phydev);
+- if (ret)
++ ret = aqr_wait_reset_complete(phydev);
++ switch (ret) {
++ case 0:
++ /* Some firmware is loaded => do nothing */
++ return 0;
++ case -ETIMEDOUT:
++ /* VEND1_GLOBAL_FW_ID still reads 0 after 2 seconds of polling.
++ * We don't have full confidence that no firmware is loaded (in
++ * theory it might just not have loaded yet), but we will
++ * assume that, and load a new image.
++ */
++ ret = aqr_firmware_load_nvmem(phydev);
++ if (!ret)
++ return ret;
++
++ ret = aqr_firmware_load_fs(phydev);
++ if (ret)
++ return ret;
++ break;
++ default:
++ /* PHY read error, propagate it to the caller */
+ return ret;
++ }
+
+-exit:
+ return 0;
+ }
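The reworked flow gives aqr_wait_reset_complete() a three-way contract: 0 (the firmware ID became non-zero), -ETIMEDOUT (it stayed zero for the full two seconds of polling), or another negative errno (an MDIO read failure). Distilled, with load_image() as a hypothetical stand-in for the nvmem/filesystem loaders:

    ret = aqr_wait_reset_complete(phydev);
    switch (ret) {
    case 0:
        return 0;                       /* firmware already running */
    case -ETIMEDOUT:
        return load_image(phydev);      /* assume empty flash, load one */
    default:
        return ret;                     /* propagate the bus error */
    }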
+diff --git a/drivers/net/phy/aquantia/aquantia_leds.c b/drivers/net/phy/aquantia/aquantia_leds.c
+index 0516ac02c3f81e..201c8df93fad94 100644
+--- a/drivers/net/phy/aquantia/aquantia_leds.c
++++ b/drivers/net/phy/aquantia/aquantia_leds.c
+@@ -120,7 +120,8 @@ int aqr_phy_led_hw_control_set(struct phy_device *phydev, u8 index,
+ int aqr_phy_led_active_low_set(struct phy_device *phydev, int index, bool enable)
+ {
+ return phy_modify_mmd(phydev, MDIO_MMD_VEND1, AQR_LED_DRIVE(index),
+- VEND1_GLOBAL_LED_DRIVE_VDD, enable);
++ VEND1_GLOBAL_LED_DRIVE_VDD,
++ enable ? VEND1_GLOBAL_LED_DRIVE_VDD : 0);
+ }
+
+ int aqr_phy_led_polarity_set(struct phy_device *phydev, int index, unsigned long modes)
+diff --git a/drivers/net/phy/aquantia/aquantia_main.c b/drivers/net/phy/aquantia/aquantia_main.c
+index e982e9ce44a596..4d156d406bab9a 100644
+--- a/drivers/net/phy/aquantia/aquantia_main.c
++++ b/drivers/net/phy/aquantia/aquantia_main.c
+@@ -435,6 +435,9 @@ static int aqr107_set_tunable(struct phy_device *phydev,
+ }
+ }
+
++#define AQR_FW_WAIT_SLEEP_US 20000
++#define AQR_FW_WAIT_TIMEOUT_US 2000000
++
+ /* If we configure settings whilst firmware is still initializing the chip,
+ * then these settings may be overwritten. Therefore make sure chip
+ * initialization has completed. Use presence of the firmware ID as
+@@ -444,11 +447,19 @@ static int aqr107_set_tunable(struct phy_device *phydev,
+ */
+ int aqr_wait_reset_complete(struct phy_device *phydev)
+ {
+- int val;
++ int ret, val;
++
++ ret = read_poll_timeout(phy_read_mmd, val, val != 0,
++ AQR_FW_WAIT_SLEEP_US, AQR_FW_WAIT_TIMEOUT_US,
++ false, phydev, MDIO_MMD_VEND1,
++ VEND1_GLOBAL_FW_ID);
++ if (val < 0) {
++ phydev_err(phydev, "Failed to read VEND1_GLOBAL_FW_ID: %pe\n",
++ ERR_PTR(val));
++ return val;
++ }
+
+- return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
+- VEND1_GLOBAL_FW_ID, val, val != 0,
+- 20000, 2000000, false);
++ return ret;
+ }
+
+ static void aqr107_chip_info(struct phy_device *phydev)
+@@ -478,7 +489,7 @@ static int aqr107_config_init(struct phy_device *phydev)
+ {
+ struct aqr107_priv *priv = phydev->priv;
+ u32 led_active_low;
+- int ret, index = 0;
++ int ret;
+
+ /* Check that the PHY interface type is compatible */
+ if (phydev->interface != PHY_INTERFACE_MODE_SGMII &&
+@@ -505,10 +516,9 @@ static int aqr107_config_init(struct phy_device *phydev)
+
+ /* Restore LED polarity state after reset */
+ for_each_set_bit(led_active_low, &priv->leds_active_low, AQR_MAX_LEDS) {
+- ret = aqr_phy_led_active_low_set(phydev, index, led_active_low);
++ ret = aqr_phy_led_active_low_set(phydev, led_active_low, true);
+ if (ret)
+ return ret;
+- index++;
+ }
+
+ return 0;
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 18eb5ba436df69..2506aa8c603ec0 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -464,10 +464,15 @@ static enum skb_state defer_bh(struct usbnet *dev, struct sk_buff *skb,
+ void usbnet_defer_kevent (struct usbnet *dev, int work)
+ {
+ set_bit (work, &dev->flags);
+- if (!schedule_work (&dev->kevent))
+- netdev_dbg(dev->net, "kevent %s may have been dropped\n", usbnet_event_names[work]);
+- else
+- netdev_dbg(dev->net, "kevent %s scheduled\n", usbnet_event_names[work]);
++ if (!usbnet_going_away(dev)) {
++ if (!schedule_work(&dev->kevent))
++ netdev_dbg(dev->net,
++ "kevent %s may have been dropped\n",
++ usbnet_event_names[work]);
++ else
++ netdev_dbg(dev->net,
++ "kevent %s scheduled\n", usbnet_event_names[work]);
++ }
+ }
+ EXPORT_SYMBOL_GPL(usbnet_defer_kevent);
+
+@@ -535,7 +540,8 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ tasklet_schedule (&dev->bh);
+ break;
+ case 0:
+- __usbnet_queue_skb(&dev->rxq, skb, rx_start);
++ if (!usbnet_going_away(dev))
++ __usbnet_queue_skb(&dev->rxq, skb, rx_start);
+ }
+ } else {
+ netif_dbg(dev, ifdown, dev->net, "rx: stopped\n");
+@@ -843,9 +849,18 @@ int usbnet_stop (struct net_device *net)
+
+ /* deferred work (timer, softirq, task) must also stop */
+ dev->flags = 0;
+- del_timer_sync (&dev->delay);
+- tasklet_kill (&dev->bh);
++ del_timer_sync(&dev->delay);
++ tasklet_kill(&dev->bh);
+ cancel_work_sync(&dev->kevent);
++
++ /* The bh tasklet, the delay timer and the deferred work can
++ * re-arm one another, so cancel each of them a second time to
++ * break the cycle; the flags cleared above prevent re-queueing.
++ */
++ tasklet_kill(&dev->bh);
++ del_timer_sync(&dev->delay);
++ cancel_work_sync(&dev->kevent);
++
+ if (!pm)
+ usb_autopm_put_interface(dev->intf);
+
+@@ -1171,7 +1186,8 @@ usbnet_deferred_kevent (struct work_struct *work)
+ status);
+ } else {
+ clear_bit (EVENT_RX_HALT, &dev->flags);
+- tasklet_schedule (&dev->bh);
++ if (!usbnet_going_away(dev))
++ tasklet_schedule(&dev->bh);
+ }
+ }
+
+@@ -1196,7 +1212,8 @@ usbnet_deferred_kevent (struct work_struct *work)
+ usb_autopm_put_interface(dev->intf);
+ fail_lowmem:
+ if (resched)
+- tasklet_schedule (&dev->bh);
++ if (!usbnet_going_away(dev))
++ tasklet_schedule(&dev->bh);
+ }
+ }
+
+@@ -1559,6 +1576,7 @@ static void usbnet_bh (struct timer_list *t)
+ } else if (netif_running (dev->net) &&
+ netif_device_present (dev->net) &&
+ netif_carrier_ok(dev->net) &&
++ !usbnet_going_away(dev) &&
+ !timer_pending(&dev->delay) &&
+ !test_bit(EVENT_RX_PAUSED, &dev->flags) &&
+ !test_bit(EVENT_RX_HALT, &dev->flags)) {
+@@ -1606,6 +1624,7 @@ void usbnet_disconnect (struct usb_interface *intf)
+ usb_set_intfdata(intf, NULL);
+ if (!dev)
+ return;
++ usbnet_mark_going_away(dev);
+
+ xdev = interface_to_usbdev (intf);
+
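The usbnet_going_away() checks (the helper itself is introduced elsewhere in this patch) follow a common teardown pattern: set a one-way flag at disconnect, test it before any self-rearming work is queued, and cancel each rearming source twice so a cancel-then-requeue race cannot survive. In outline, with illustrative names rather than the usbnet API:

    static void example_disconnect(struct example_dev *dev)
    {
        set_bit(EXAMPLE_GOING_AWAY, &dev->flags);   /* hypothetical bit */
        tasklet_kill(&dev->bh);
        del_timer_sync(&dev->delay);
        cancel_work_sync(&dev->kevent);
        /* second round: catch anything re-armed before the flag was seen */
        tasklet_kill(&dev->bh);
        del_timer_sync(&dev->delay);
        cancel_work_sync(&dev->kevent);
    }

    static void example_rearm(struct example_dev *dev)
    {
        if (!test_bit(EXAMPLE_GOING_AWAY, &dev->flags))
            schedule_work(&dev->kevent);
    }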
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 5a1c1ec5a64b8e..f8131f92a39288 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1807,6 +1807,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ struct page *page = virt_to_head_page(buf);
+ struct sk_buff *skb;
+
++ /* We passed the address of virtnet header to virtio-core,
++ * so truncate the padding.
++ */
++ buf -= VIRTNET_RX_PAD + xdp_headroom;
++
+ len -= vi->hdr_len;
+ u64_stats_add(&stats->bytes, len);
+
+@@ -2422,8 +2427,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
+ if (unlikely(!buf))
+ return -ENOMEM;
+
+- virtnet_rq_init_one_sg(rq, buf + VIRTNET_RX_PAD + xdp_headroom,
+- vi->hdr_len + GOOD_PACKET_LEN);
++ buf += VIRTNET_RX_PAD + xdp_headroom;
++
++ virtnet_rq_init_one_sg(rq, buf, vi->hdr_len + GOOD_PACKET_LEN);
+
+ err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
+ if (err < 0) {
+@@ -2884,6 +2890,25 @@ static void virtnet_cancel_dim(struct virtnet_info *vi, struct dim *dim)
+ net_dim_work_cancel(dim);
+ }
+
++static void virtnet_update_settings(struct virtnet_info *vi)
++{
++ u32 speed;
++ u8 duplex;
++
++ if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_SPEED_DUPLEX))
++ return;
++
++ virtio_cread_le(vi->vdev, struct virtio_net_config, speed, &speed);
++
++ if (ethtool_validate_speed(speed))
++ vi->speed = speed;
++
++ virtio_cread_le(vi->vdev, struct virtio_net_config, duplex, &duplex);
++
++ if (ethtool_validate_duplex(duplex))
++ vi->duplex = duplex;
++}
++
+ static int virtnet_open(struct net_device *dev)
+ {
+ struct virtnet_info *vi = netdev_priv(dev);
+@@ -2902,6 +2927,15 @@ static int virtnet_open(struct net_device *dev)
+ goto err_enable_qp;
+ }
+
++ if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
++ if (vi->status & VIRTIO_NET_S_LINK_UP)
++ netif_carrier_on(vi->dev);
++ virtio_config_driver_enable(vi->vdev);
++ } else {
++ vi->status = VIRTIO_NET_S_LINK_UP;
++ netif_carrier_on(dev);
++ }
++
+ return 0;
+
+ err_enable_qp:
+@@ -3380,12 +3414,22 @@ static int virtnet_close(struct net_device *dev)
+ disable_delayed_refill(vi);
+ /* Make sure refill_work doesn't re-enable napi! */
+ cancel_delayed_work_sync(&vi->refill);
++ /* Prevent the config change callback from changing carrier
++ * after close
++ */
++ virtio_config_driver_disable(vi->vdev);
++ /* Stop getting status/speed updates: we don't care until next
++ * open
++ */
++ cancel_work_sync(&vi->config_work);
+
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ virtnet_disable_queue_pair(vi, i);
+ virtnet_cancel_dim(vi, &vi->rq[i].dim);
+ }
+
++ netif_carrier_off(dev);
++
+ return 0;
+ }
+
+@@ -5094,25 +5138,6 @@ static void virtnet_init_settings(struct net_device *dev)
+ vi->duplex = DUPLEX_UNKNOWN;
+ }
+
+-static void virtnet_update_settings(struct virtnet_info *vi)
+-{
+- u32 speed;
+- u8 duplex;
+-
+- if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_SPEED_DUPLEX))
+- return;
+-
+- virtio_cread_le(vi->vdev, struct virtio_net_config, speed, &speed);
+-
+- if (ethtool_validate_speed(speed))
+- vi->speed = speed;
+-
+- virtio_cread_le(vi->vdev, struct virtio_net_config, duplex, &duplex);
+-
+- if (ethtool_validate_duplex(duplex))
+- vi->duplex = duplex;
+-}
+-
+ static u32 virtnet_get_rxfh_key_size(struct net_device *dev)
+ {
+ return ((struct virtnet_info *)netdev_priv(dev))->rss_key_size;
+@@ -6521,6 +6546,9 @@ static int virtnet_probe(struct virtio_device *vdev)
+ goto free_failover;
+ }
+
++ /* Disable config change notification until ndo_open. */
++ virtio_config_driver_disable(vi->vdev);
++
+ virtio_device_ready(vdev);
+
+ virtnet_set_queues(vi, vi->curr_queue_pairs);
+@@ -6570,19 +6598,11 @@ static int virtnet_probe(struct virtio_device *vdev)
+ vi->device_stats_cap = le64_to_cpu(v);
+ }
+
+- rtnl_unlock();
+-
+- err = virtnet_cpu_notif_add(vi);
+- if (err) {
+- pr_debug("virtio_net: registering cpu notifier failed\n");
+- goto free_unregister_netdev;
+- }
+-
+ /* Assume link up if device can't report link status,
+ otherwise get link status from config. */
+ netif_carrier_off(dev);
+ if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
+- schedule_work(&vi->config_work);
++ virtnet_config_changed_work(&vi->config_work);
+ } else {
+ vi->status = VIRTIO_NET_S_LINK_UP;
+ virtnet_update_settings(vi);
+@@ -6594,6 +6614,14 @@ static int virtnet_probe(struct virtio_device *vdev)
+ set_bit(guest_offloads[i], &vi->guest_offloads);
+ vi->guest_offloads_capable = vi->guest_offloads;
+
++ rtnl_unlock();
++
++ err = virtnet_cpu_notif_add(vi);
++ if (err) {
++ pr_debug("virtio_net: registering cpu notifier failed\n");
++ goto free_unregister_netdev;
++ }
++
+ pr_debug("virtnet: registered device %s with %d RX and TX vq's\n",
+ dev->name, max_queue_pairs);
+
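Taken together, the virtio_net hunks keep config-change notifications disabled from probe until ndo_open and drain them again in ndo_close, so the config work can never race with carrier state the driver has already torn down. The pairing, in sketch form (error handling elided):

    /* ndo_open: allow link reports again; in the hunk above this is
     * done only when VIRTIO_NET_F_STATUS was negotiated
     */
    virtio_config_driver_enable(vi->vdev);

    /* ndo_close: stop notifications, then drain any in-flight work */
    virtio_config_driver_disable(vi->vdev);
    cancel_work_sync(&vi->config_work);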
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index b655967a465bba..76faa55fd0f3ab 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -399,6 +399,7 @@ struct ath11k_vif {
+ u8 bssid[ETH_ALEN];
+ struct cfg80211_bitrate_mask bitrate_mask;
+ struct delayed_work connection_loss_work;
++ struct work_struct bcn_tx_work;
+ int num_legacy_stations;
+ int rtscts_prot_mode;
+ int txpower;
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 7c0ef6916dd258..f8068d2e848c33 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6599,6 +6599,16 @@ static int ath11k_mac_vdev_delete(struct ath11k *ar, struct ath11k_vif *arvif)
+ return ret;
+ }
+
++static void ath11k_mac_bcn_tx_work(struct work_struct *work)
++{
++ struct ath11k_vif *arvif = container_of(work, struct ath11k_vif,
++ bcn_tx_work);
++
++ mutex_lock(&arvif->ar->conf_mutex);
++ ath11k_mac_bcn_tx_event(arvif);
++ mutex_unlock(&arvif->ar->conf_mutex);
++}
++
+ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+@@ -6637,6 +6647,7 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ arvif->vif = vif;
+
+ INIT_LIST_HEAD(&arvif->list);
++ INIT_WORK(&arvif->bcn_tx_work, ath11k_mac_bcn_tx_work);
+ INIT_DELAYED_WORK(&arvif->connection_loss_work,
+ ath11k_mac_vif_sta_connection_loss_work);
+
+@@ -6879,6 +6890,7 @@ static void ath11k_mac_op_remove_interface(struct ieee80211_hw *hw,
+ int i;
+
+ cancel_delayed_work_sync(&arvif->connection_loss_work);
++ cancel_work_sync(&arvif->bcn_tx_work);
+
+ mutex_lock(&ar->conf_mutex);
+
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 38f175dd15578a..2662092ee00a9c 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -7404,7 +7404,9 @@ static void ath11k_bcn_tx_status_event(struct ath11k_base *ab, struct sk_buff *s
+ rcu_read_unlock();
+ return;
+ }
+- ath11k_mac_bcn_tx_event(arvif);
++
++ queue_work(ab->workqueue, &arvif->bcn_tx_work);
++
+ rcu_read_unlock();
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index ce41c8153080cd..a3248d97753298 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -2196,9 +2196,8 @@ static void ath12k_peer_assoc_h_he(struct ath12k *ar,
+ * request, then use MAX_AMPDU_LEN_FACTOR as 16 to calculate max_ampdu
+ * length.
+ */
+- ampdu_factor = (he_cap->he_cap_elem.mac_cap_info[3] &
+- IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK) >>
+- IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK;
++ ampdu_factor = u8_get_bits(he_cap->he_cap_elem.mac_cap_info[3],
++ IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);
+
+ if (ampdu_factor) {
+ if (sta->deflink.vht_cap.vht_supported)
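The broken ath12k expression masked with IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK and then shifted right by the mask value itself rather than by the field offset; u8_get_bits() derives the shift from the mask. A worked example of the bitfield.h helper, with the arithmetic spelled out:

    #include <linux/bitfield.h>

    /* for reg = 0x6c and the field GENMASK(4, 2), u8_get_bits() computes
     * (0x6c & 0x1c) >> 2 == 3, which the old shift-by-mask could not
     */
    static u8 example_get(u8 reg)
    {
        return u8_get_bits(reg, GENMASK(4, 2));
    }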
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 9f6be557365e90..a76413320dbf2f 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -1538,6 +1538,7 @@ int ath12k_wmi_pdev_bss_chan_info_request(struct ath12k *ar,
+ cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_BSS_CHAN_INFO_REQUEST,
+ sizeof(*cmd));
+ cmd->req_type = cpu_to_le32(type);
++ cmd->pdev_id = cpu_to_le32(ar->pdev->pdev_id);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI bss chan info req type %d\n", type);
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index f1f52175a52b33..6a913f9b831580 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3121,6 +3121,7 @@ struct wmi_pdev_bss_chan_info_req_cmd {
+ __le32 tlv_header;
+ /* ref wmi_bss_chan_info_req_type */
+ __le32 req_type;
++ __le32 pdev_id;
+ } __packed;
+
+ struct wmi_ap_ps_peer_cmd {
+@@ -4085,7 +4086,6 @@ struct wmi_vdev_stopped_event {
+ } __packed;
+
+ struct wmi_pdev_bss_chan_info_event {
+- __le32 pdev_id;
+ __le32 freq; /* Units in MHz */
+ __le32 noise_floor; /* units are dBm */
+ /* rx clear - how often the channel was unused */
+@@ -4103,6 +4103,7 @@ struct wmi_pdev_bss_chan_info_event {
+ /*rx_cycle cnt for my bss in 64bits format */
+ __le32 rx_bss_cycle_count_low;
+ __le32 rx_bss_cycle_count_high;
++ __le32 pdev_id;
+ } __packed;
+
+ #define WMI_VDEV_INSTALL_KEY_COMPL_STATUS_SUCCESS 0
+diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
+index d84e3ee7b5d902..bf3da631c69fda 100644
+--- a/drivers/net/wireless/ath/ath9k/debug.c
++++ b/drivers/net/wireless/ath/ath9k/debug.c
+@@ -1380,8 +1380,6 @@ int ath9k_init_debug(struct ath_hw *ah)
+
+ sc->debug.debugfs_phy = debugfs_create_dir("ath9k",
+ sc->hw->wiphy->debugfsdir);
+- if (IS_ERR(sc->debug.debugfs_phy))
+- return -ENOMEM;
+
+ #ifdef CONFIG_ATH_DEBUG
+ debugfs_create_file("debug", 0600, sc->debug.debugfs_phy,
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+index f7c6d9bc931196..9437d69877cc56 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+@@ -486,8 +486,6 @@ int ath9k_htc_init_debug(struct ath_hw *ah)
+
+ priv->debug.debugfs_phy = debugfs_create_dir(KBUILD_MODNAME,
+ priv->hw->wiphy->debugfsdir);
+- if (IS_ERR(priv->debug.debugfs_phy))
+- return -ENOMEM;
+
+ ath9k_cmn_spectral_init_debug(&priv->spec_priv, priv->debug.debugfs_phy);
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+index 0c3d119d12199c..1e8495f50c16ae 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+@@ -123,7 +123,7 @@ static s32 brcmf_btcoex_params_read(struct brcmf_if *ifp, u32 addr, u32 *data)
+ {
+ *data = addr;
+
+- return brcmf_fil_iovar_int_get(ifp, "btc_params", data);
++ return brcmf_fil_iovar_int_query(ifp, "btc_params", data);
+ }
+
+ /**
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index d4cc5fa92341d5..815f6b3c79fc0f 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -663,8 +663,8 @@ static int brcmf_cfg80211_request_sta_if(struct brcmf_if *ifp, u8 *macaddr)
+ /* interface_create version 3+ */
+ /* get supported version from firmware side */
+ iface_create_ver = 0;
+- err = brcmf_fil_bsscfg_int_get(ifp, "interface_create",
+- &iface_create_ver);
++ err = brcmf_fil_bsscfg_int_query(ifp, "interface_create",
++ &iface_create_ver);
+ if (err) {
+ brcmf_err("fail to get supported version, err=%d\n", err);
+ return -EOPNOTSUPP;
+@@ -756,8 +756,8 @@ static int brcmf_cfg80211_request_ap_if(struct brcmf_if *ifp)
+ /* interface_create version 3+ */
+ /* get supported version from firmware side */
+ iface_create_ver = 0;
+- err = brcmf_fil_bsscfg_int_get(ifp, "interface_create",
+- &iface_create_ver);
++ err = brcmf_fil_bsscfg_int_query(ifp, "interface_create",
++ &iface_create_ver);
+ if (err) {
+ brcmf_err("fail to get supported version, err=%d\n", err);
+ return -EOPNOTSUPP;
+@@ -2101,7 +2101,8 @@ brcmf_set_key_mgmt(struct net_device *ndev, struct cfg80211_connect_params *sme)
+ if (!sme->crypto.n_akm_suites)
+ return 0;
+
+- err = brcmf_fil_bsscfg_int_get(netdev_priv(ndev), "wpa_auth", &val);
++ err = brcmf_fil_bsscfg_int_get(netdev_priv(ndev),
++ "wpa_auth", &val);
+ if (err) {
+ bphy_err(drvr, "could not get wpa_auth (%d)\n", err);
+ return err;
+@@ -2680,7 +2681,7 @@ brcmf_cfg80211_get_tx_power(struct wiphy *wiphy, struct wireless_dev *wdev,
+ struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
+ struct brcmf_cfg80211_vif *vif = wdev_to_vif(wdev);
+ struct brcmf_pub *drvr = cfg->pub;
+- s32 qdbm = 0;
++ s32 qdbm;
+ s32 err;
+
+ brcmf_dbg(TRACE, "Enter\n");
+@@ -3067,7 +3068,7 @@ brcmf_cfg80211_get_station_ibss(struct brcmf_if *ifp,
+ struct brcmf_scb_val_le scbval;
+ struct brcmf_pktcnt_le pktcnt;
+ s32 err;
+- u32 rate = 0;
++ u32 rate;
+ u32 rssi;
+
+ /* Get the current tx rate */
+@@ -7046,8 +7047,8 @@ static int brcmf_construct_chaninfo(struct brcmf_cfg80211_info *cfg,
+ ch.bw = BRCMU_CHAN_BW_20;
+ cfg->d11inf.encchspec(&ch);
+ chaninfo = ch.chspec;
+- err = brcmf_fil_bsscfg_int_get(ifp, "per_chan_info",
+- &chaninfo);
++ err = brcmf_fil_bsscfg_int_query(ifp, "per_chan_info",
++ &chaninfo);
+ if (!err) {
+ if (chaninfo & WL_CHAN_RADAR)
+ channel->flags |=
+@@ -7081,7 +7082,7 @@ static int brcmf_enable_bw40_2g(struct brcmf_cfg80211_info *cfg)
+
+ /* verify support for bw_cap command */
+ val = WLC_BAND_5G;
+- err = brcmf_fil_iovar_int_get(ifp, "bw_cap", &val);
++ err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &val);
+
+ if (!err) {
+ /* only set 2G bandwidth using bw_cap command */
+@@ -7157,11 +7158,11 @@ static void brcmf_get_bwcap(struct brcmf_if *ifp, u32 bw_cap[])
+ int err;
+
+ band = WLC_BAND_2G;
+- err = brcmf_fil_iovar_int_get(ifp, "bw_cap", &band);
++ err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &band);
+ if (!err) {
+ bw_cap[NL80211_BAND_2GHZ] = band;
+ band = WLC_BAND_5G;
+- err = brcmf_fil_iovar_int_get(ifp, "bw_cap", &band);
++ err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &band);
+ if (!err) {
+ bw_cap[NL80211_BAND_5GHZ] = band;
+ return;
+@@ -7170,7 +7171,6 @@ static void brcmf_get_bwcap(struct brcmf_if *ifp, u32 bw_cap[])
+ return;
+ }
+ brcmf_dbg(INFO, "fallback to mimo_bw_cap info\n");
+- mimo_bwcap = 0;
+ err = brcmf_fil_iovar_int_get(ifp, "mimo_bw_cap", &mimo_bwcap);
+ if (err)
+ /* assume 20MHz if firmware does not give a clue */
+@@ -7266,10 +7266,10 @@ static int brcmf_setup_wiphybands(struct brcmf_cfg80211_info *cfg)
+ struct brcmf_pub *drvr = cfg->pub;
+ struct brcmf_if *ifp = brcmf_get_ifp(drvr, 0);
+ struct wiphy *wiphy = cfg_to_wiphy(cfg);
+- u32 nmode = 0;
++ u32 nmode;
+ u32 vhtmode = 0;
+ u32 bw_cap[2] = { WLC_BW_20MHZ_BIT, WLC_BW_20MHZ_BIT };
+- u32 rxchain = 0;
++ u32 rxchain;
+ u32 nchain;
+ int err;
+ s32 i;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index bf91b1e1368f03..df53dd1d7e748a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -691,7 +691,7 @@ static int brcmf_net_mon_open(struct net_device *ndev)
+ {
+ struct brcmf_if *ifp = netdev_priv(ndev);
+ struct brcmf_pub *drvr = ifp->drvr;
+- u32 monitor = 0;
++ u32 monitor;
+ int err;
+
+ brcmf_dbg(TRACE, "Enter\n");
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+index f23310a77a5d11..0d9ae197fa1ec3 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+@@ -184,7 +184,7 @@ static void brcmf_feat_wlc_version_overrides(struct brcmf_pub *drv)
+ static void brcmf_feat_iovar_int_get(struct brcmf_if *ifp,
+ enum brcmf_feat_id id, char *name)
+ {
+- u32 data = 0;
++ u32 data;
+ int err;
+
+ /* we need to know firmware error */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+index a315a7fac6a06f..31e080e4da6697 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+@@ -96,15 +96,22 @@ static inline
+ s32 brcmf_fil_cmd_int_get(struct brcmf_if *ifp, u32 cmd, u32 *data)
+ {
+ s32 err;
+- __le32 data_le = cpu_to_le32(*data);
+
+- err = brcmf_fil_cmd_data_get(ifp, cmd, &data_le, sizeof(data_le));
++ err = brcmf_fil_cmd_data_get(ifp, cmd, data, sizeof(*data));
+ if (err == 0)
+- *data = le32_to_cpu(data_le);
++ *data = le32_to_cpu(*(__le32 *)data);
+ brcmf_dbg(FIL, "ifidx=%d, cmd=%d, value=%d\n", ifp->ifidx, cmd, *data);
+
+ return err;
+ }
++static inline
++s32 brcmf_fil_cmd_int_query(struct brcmf_if *ifp, u32 cmd, u32 *data)
++{
++ __le32 *data_le = (__le32 *)data;
++
++ *data_le = cpu_to_le32(*data);
++ return brcmf_fil_cmd_int_get(ifp, cmd, data);
++}
+
+ s32 brcmf_fil_iovar_data_set(struct brcmf_if *ifp, const char *name,
+ const void *data, u32 len);
+@@ -120,14 +127,21 @@ s32 brcmf_fil_iovar_int_set(struct brcmf_if *ifp, const char *name, u32 data)
+ static inline
+ s32 brcmf_fil_iovar_int_get(struct brcmf_if *ifp, const char *name, u32 *data)
+ {
+- __le32 data_le = cpu_to_le32(*data);
+ s32 err;
+
+- err = brcmf_fil_iovar_data_get(ifp, name, &data_le, sizeof(data_le));
++ err = brcmf_fil_iovar_data_get(ifp, name, data, sizeof(*data));
+ if (err == 0)
+- *data = le32_to_cpu(data_le);
++ *data = le32_to_cpu(*(__le32 *)data);
+ return err;
+ }
++static inline
++s32 brcmf_fil_iovar_int_query(struct brcmf_if *ifp, const char *name, u32 *data)
++{
++ __le32 *data_le = (__le32 *)data;
++
++ *data_le = cpu_to_le32(*data);
++ return brcmf_fil_iovar_int_get(ifp, name, data);
++}
+
+
+ s32 brcmf_fil_bsscfg_data_set(struct brcmf_if *ifp, const char *name,
+@@ -145,15 +159,21 @@ s32 brcmf_fil_bsscfg_int_set(struct brcmf_if *ifp, const char *name, u32 data)
+ static inline
+ s32 brcmf_fil_bsscfg_int_get(struct brcmf_if *ifp, const char *name, u32 *data)
+ {
+- __le32 data_le = cpu_to_le32(*data);
+ s32 err;
+
+- err = brcmf_fil_bsscfg_data_get(ifp, name, &data_le,
+- sizeof(data_le));
++ err = brcmf_fil_bsscfg_data_get(ifp, name, data, sizeof(*data));
+ if (err == 0)
+- *data = le32_to_cpu(data_le);
++ *data = le32_to_cpu(*(__le32 *)data);
+ return err;
+ }
++static inline
++s32 brcmf_fil_bsscfg_int_query(struct brcmf_if *ifp, const char *name, u32 *data)
++{
++ __le32 *data_le = (__le32 *)data;
++
++ *data_le = cpu_to_le32(*data);
++ return brcmf_fil_bsscfg_int_get(ifp, name, data);
++}
+
+ s32 brcmf_fil_xtlv_data_set(struct brcmf_if *ifp, const char *name, u16 id,
+ void *data, u32 len);
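The split makes the brcmfmac calling convention explicit: the *_int_get() variants are now pure reads, while the new *_int_query() variants first send the caller's value as a selector and then read back the result. Usage as in the cfg80211.c hunks above, side by side:

    u32 band = WLC_BAND_2G;   /* input selector, overwritten with result */
    u32 wpa_auth;             /* pure read, no input needed */
    s32 err;

    err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &band);
    err = brcmf_fil_bsscfg_int_get(ifp, "wpa_auth", &wpa_auth);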
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+index 3b6b8b410be580..b230441abc16ad 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+@@ -148,6 +148,17 @@ const struct iwl_cfg_trans_params iwl_bz_trans_cfg = {
+ .ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
+ };
+
++const struct iwl_cfg_trans_params iwl_gl_trans_cfg = {
++ .device_family = IWL_DEVICE_FAMILY_BZ,
++ .base_params = &iwl_bz_base_params,
++ .mq_rx_supported = true,
++ .rf_id = true,
++ .gen2 = true,
++ .umac_prph_offset = 0x300000,
++ .xtal_latency = 12000,
++ .low_latency_xtal = true,
++};
++
+ const char iwl_bz_name[] = "Intel(R) TBD Bz device";
+ const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz";
+ const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz";
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index b2abd4fd194449..34c91deca57b1b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -504,6 +504,7 @@ extern const struct iwl_cfg_trans_params iwl_so_long_latency_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_so_long_latency_imr_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_ma_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_bz_trans_cfg;
++extern const struct iwl_cfg_trans_params iwl_gl_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_sc_trans_cfg;
+ extern const char iwl9162_name[];
+ extern const char iwl9260_name[];
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+index c4c1e67b9ac76e..8f63cbe9e50daa 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+@@ -109,7 +109,7 @@
+ #define IWL_MVM_FTM_INITIATOR_SECURE_LTF false
+ #define IWL_MVM_FTM_RESP_NDP_SUPPORT true
+ #define IWL_MVM_FTM_RESP_LMR_FEEDBACK_SUPPORT true
+-#define IWL_MVM_FTM_NON_TB_MIN_TIME_BETWEEN_MSR 5
++#define IWL_MVM_FTM_NON_TB_MIN_TIME_BETWEEN_MSR 7
+ #define IWL_MVM_FTM_NON_TB_MAX_TIME_BETWEEN_MSR 1000
+ #define IWL_MVM_D3_DEBUG false
+ #define IWL_MVM_USE_TWT true
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+index afd90a52d4ecbe..55245f913286b9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+@@ -772,6 +772,7 @@ iwl_mvm_ftm_set_secured_ranging(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct iwl_mvm_ftm_iter_data target;
+
+ target.bssid = bssid;
++ target.cipher = cipher;
+ ieee80211_iter_keys(mvm->hw, vif, iter, &target);
+ } else {
+ memcpy(tk, entry->tk, sizeof(entry->tk));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index a8c42ce3b63008..72fa7ac86516cd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -114,16 +114,14 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
+ iwl_mvm_flush_sta(mvm, mvm->aux_sta.sta_id,
+ mvm->aux_sta.tfd_queue_msk);
+
+- if (mvm->mld_api_is_used) {
+- iwl_mvm_mld_rm_aux_sta(mvm);
+- mutex_unlock(&mvm->mutex);
+- return;
+- }
+-
+ /* In newer version of this command an aux station is added only
+ * in cases of dedicated tx queue and need to be removed in end
+- * of use */
+- if (iwl_mvm_has_new_station_api(mvm->fw))
++ * of use. For the even newer mld api, use the appropriate
++ * function.
++ */
++ if (mvm->mld_api_is_used)
++ iwl_mvm_mld_rm_aux_sta(mvm);
++ else if (iwl_mvm_has_new_station_api(mvm->fw))
+ iwl_mvm_rm_aux_sta(mvm);
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 84fd93278450b0..805fb249a0c6a2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -500,9 +500,7 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x7E40, PCI_ANY_ID, iwl_ma_trans_cfg)},
+
+ /* Bz devices */
+- {IWL_PCI_DEVICE(0x2727, PCI_ANY_ID, iwl_bz_trans_cfg)},
+- {IWL_PCI_DEVICE(0x272D, PCI_ANY_ID, iwl_bz_trans_cfg)},
+- {IWL_PCI_DEVICE(0x272b, PCI_ANY_ID, iwl_bz_trans_cfg)},
++ {IWL_PCI_DEVICE(0x272b, PCI_ANY_ID, iwl_gl_trans_cfg)},
+ {IWL_PCI_DEVICE(0xA840, 0x0000, iwl_bz_trans_cfg)},
+ {IWL_PCI_DEVICE(0xA840, 0x0090, iwl_bz_trans_cfg)},
+ {IWL_PCI_DEVICE(0xA840, 0x0094, iwl_bz_trans_cfg)},
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index bb291fe314fb44..d96ee759828ed2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -1529,7 +1529,7 @@ EXPORT_SYMBOL_GPL(mt76_wcid_init);
+
+ void mt76_wcid_cleanup(struct mt76_dev *dev, struct mt76_wcid *wcid)
+ {
+- struct mt76_phy *phy = dev->phys[wcid->phy_idx];
++ struct mt76_phy *phy = mt76_dev_phy(dev, wcid->phy_idx);
+ struct ieee80211_hw *hw;
+ struct sk_buff_head list;
+ struct sk_buff *skb;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/dma.c b/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
+index ea017f22fff22c..863e5770df51d5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
+@@ -29,7 +29,7 @@ mt7603_rx_loopback_skb(struct mt7603_dev *dev, struct sk_buff *skb)
+ struct ieee80211_sta *sta;
+ struct mt7603_sta *msta;
+ struct mt76_wcid *wcid;
+- u8 tid = 0, hwq = 0;
++ u8 qid, tid = 0, hwq = 0;
+ void *priv;
+ int idx;
+ u32 val;
+@@ -57,7 +57,7 @@ mt7603_rx_loopback_skb(struct mt7603_dev *dev, struct sk_buff *skb)
+ if (ieee80211_is_data_qos(hdr->frame_control)) {
+ tid = *ieee80211_get_qos_ctl(hdr) &
+ IEEE80211_QOS_CTL_TAG1D_MASK;
+- u8 qid = tid_to_ac[tid];
++ qid = tid_to_ac[tid];
+ hwq = wmm_queue_map[qid];
+ skb_set_queue_mapping(skb, qid);
+ } else if (ieee80211_is_data(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+index f7722f67db576f..0b9ebdcda221e4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+@@ -56,6 +56,9 @@ int mt7615_thermal_init(struct mt7615_dev *dev)
+
+ name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7615_%s",
+ wiphy_name(wiphy));
++ if (!name)
++ return -ENOMEM;
++
+ hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, dev,
+ mt7615_hwmon_groups);
+ return PTR_ERR_OR_ZERO(hwmon);
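devm_kasprintf() returns NULL when the allocation fails, and the same missing check is added to the mt7915 and mt7921 hwmon paths below. The pattern is simply:

    name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "prefix_%s",
                          wiphy_name(wiphy));
    if (!name)
        return -ENOMEM;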
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+index 353e6606984093..ef003d1620a5b7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+@@ -28,8 +28,6 @@ enum {
+ #define MT_RXD0_MESH BIT(18)
+ #define MT_RXD0_MHCP BIT(19)
+ #define MT_RXD0_NORMAL_ETH_TYPE_OFS GENMASK(22, 16)
+-#define MT_RXD0_NORMAL_IP_SUM BIT(23)
+-#define MT_RXD0_NORMAL_UDP_TCP_SUM BIT(24)
+
+ #define MT_RXD0_SW_PKT_TYPE_MASK GENMASK(31, 16)
+ #define MT_RXD0_SW_PKT_TYPE_MAP 0x380F
+@@ -80,6 +78,8 @@ enum {
+ #define MT_RXD3_NORMAL_BEACON_UC BIT(21)
+ #define MT_RXD3_NORMAL_CO_ANT BIT(22)
+ #define MT_RXD3_NORMAL_FCS_ERR BIT(24)
++#define MT_RXD3_NORMAL_IP_SUM BIT(26)
++#define MT_RXD3_NORMAL_UDP_TCP_SUM BIT(27)
+ #define MT_RXD3_NORMAL_VLAN2ETH BIT(31)
+
+ /* RXD DW4 */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index a978f434dc5e64..7bc3b4cd359255 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -194,6 +194,8 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+
+ name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7915_%s",
+ wiphy_name(wiphy));
++ if (!name)
++ return -ENOMEM;
+
+ cdev = thermal_cooling_device_register(name, phy, &mt7915_thermal_ops);
+ if (!IS_ERR(cdev)) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index 049223df9beb15..efbb8b23e4719b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -564,8 +564,7 @@ static void mt7915_configure_filter(struct ieee80211_hw *hw,
+
+ MT76_FILTER(CONTROL, MT_WF_RFCR_DROP_CTS |
+ MT_WF_RFCR_DROP_RTS |
+- MT_WF_RFCR_DROP_CTL_RSV |
+- MT_WF_RFCR_DROP_NDPA);
++ MT_WF_RFCR_DROP_CTL_RSV);
+
+ *total_flags = flags;
+ rxfilter = phy->rxfilter;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index ef0c721d26e332..d1d64fa7d35d02 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -52,6 +52,8 @@ static int mt7921_thermal_init(struct mt792x_phy *phy)
+
+ name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7921_%s",
+ wiphy_name(wiphy));
++ if (!name)
++ return -ENOMEM;
+
+ hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy,
+ mt7921_hwmon_groups);
+@@ -83,7 +85,7 @@ mt7921_regd_channel_update(struct wiphy *wiphy, struct mt792x_dev *dev)
+ }
+
+ /* UNII-4 */
+- if (IS_UNII_INVALID(0, 5850, 5925))
++ if (IS_UNII_INVALID(0, 5845, 5925))
+ ch->flags |= IEEE80211_CHAN_DISABLED;
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+index cf36750cf70923..634c42bbf23f67 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+@@ -352,7 +352,7 @@ mt7925_mac_fill_rx_rate(struct mt792x_dev *dev,
+ static int
+ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+ {
+- u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
++ u32 csum_mask = MT_RXD3_NORMAL_IP_SUM | MT_RXD3_NORMAL_UDP_TCP_SUM;
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+ bool hdr_trans, unicast, insert_ccmp_hdr = false;
+ u8 chfreq, qos_ctl = 0, remove_pad, amsdu_info;
+@@ -362,7 +362,6 @@ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+ struct mt792x_phy *phy = &dev->phy;
+ struct ieee80211_supported_band *sband;
+ u32 csum_status = *(u32 *)skb->cb;
+- u32 rxd0 = le32_to_cpu(rxd[0]);
+ u32 rxd1 = le32_to_cpu(rxd[1]);
+ u32 rxd2 = le32_to_cpu(rxd[2]);
+ u32 rxd3 = le32_to_cpu(rxd[3]);
+@@ -420,7 +419,7 @@ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+ if (!sband->channels)
+ return -EINVAL;
+
+- if (mt76_is_mmio(&dev->mt76) && (rxd0 & csum_mask) == csum_mask &&
++ if (mt76_is_mmio(&dev->mt76) && (rxd3 & csum_mask) == csum_mask &&
+ !(csum_status & (BIT(0) | BIT(2) | BIT(3))))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index 8c0768bf9343b3..7c05aebd3c20d1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -439,6 +439,19 @@ static void mt7925_roc_iter(void *priv, u8 *mac,
+ mt7925_mcu_abort_roc(phy, &mvif->bss_conf, phy->roc_token_id);
+ }
+
++void mt7925_roc_abort_sync(struct mt792x_dev *dev)
++{
++ struct mt792x_phy *phy = &dev->phy;
++
++ del_timer_sync(&phy->roc_timer);
++ cancel_work_sync(&phy->roc_work);
++ if (test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
++ ieee80211_iterate_interfaces(mt76_hw(dev),
++ IEEE80211_IFACE_ITER_RESUME_ALL,
++ mt7925_roc_iter, (void *)phy);
++}
++EXPORT_SYMBOL_GPL(mt7925_roc_abort_sync);
++
+ void mt7925_roc_work(struct work_struct *work)
+ {
+ struct mt792x_phy *phy;
+@@ -1109,6 +1122,8 @@ static void mt7925_mac_link_sta_remove(struct mt76_dev *mdev,
+ msta = (struct mt792x_sta *)link_sta->sta->drv_priv;
+ mlink = mt792x_sta_to_link(msta, link_id);
+
++ mt7925_roc_abort_sync(dev);
++
+ mt76_connac_free_pending_tx_skbs(&dev->pm, &mlink->wcid);
+ mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 9dc22fbe25d335..c6c380571fd86f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -638,6 +638,9 @@ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ for (offset = 0; offset < len; offset += le32_to_cpu(clc->len)) {
+ clc = (const struct mt7925_clc *)(clc_base + offset);
+
++ if (clc->idx > ARRAY_SIZE(phy->clc))
++ break;
++
+ /* do not init buf again if chip reset triggered */
+ if (phy->clc[clc->idx])
+ continue;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+index 669f3a079d0485..c5ea431998e06d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+@@ -307,6 +307,7 @@ int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+ enum mt7925_roc_req type, u8 token_id);
+ int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+ u8 token_id);
++void mt7925_roc_abort_sync(struct mt792x_dev *dev);
+ int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ int cmd, int *wait_seq);
+ int mt7925_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
+index 6e4f4e78c3505d..39305c39362f5c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
+@@ -449,6 +449,8 @@ static int mt7925_pci_suspend(struct device *device)
+ cancel_delayed_work_sync(&pm->ps_work);
+ cancel_work_sync(&pm->wake_work);
+
++ mt7925_roc_abort_sync(dev);
++
+ err = mt792x_mcu_drv_pmctrl(dev);
+ if (err < 0)
+ goto restore_suspend;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 283df84f1b4335..a98dcb40490bf8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -1011,8 +1011,6 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ return;
+
+ elem->phy_cap_info[3] |= IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER;
+- if (vif == NL80211_IFTYPE_AP)
+- elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
+
+ c = FIELD_PREP(IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK,
+ sts - 1) |
+@@ -1020,6 +1018,11 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ sts - 1);
+ elem->phy_cap_info[5] |= c;
+
++ if (vif != NL80211_IFTYPE_AP)
++ return;
++
++ elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
++
+ c = IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMING_FB |
+ IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMING_PARTIAL_BW_FB;
+ elem->phy_cap_info[6] |= c;
+@@ -1179,7 +1182,6 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ IEEE80211_EHT_MAC_CAP0_OM_CONTROL;
+
+ eht_cap_elem->phy_cap_info[0] =
+- IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ |
+ IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMEE;
+@@ -1193,30 +1195,36 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ u8_encode_bits(u8_get_bits(val, GENMASK(2, 1)),
+ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_80MHZ_MASK) |
+ u8_encode_bits(val,
+- IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK) |
+- u8_encode_bits(val,
+- IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK);
++ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[2] =
+ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_80MHZ_MASK) |
+- u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_160MHZ_MASK) |
+- u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK);
++ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_160MHZ_MASK);
++
++ if (band == NL80211_BAND_6GHZ) {
++ eht_cap_elem->phy_cap_info[0] |=
++ IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
++
++ eht_cap_elem->phy_cap_info[1] |=
++ u8_encode_bits(val,
++ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK);
++
++ eht_cap_elem->phy_cap_info[2] |=
++ u8_encode_bits(sts - 1,
++ IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK);
++ }
+
+ eht_cap_elem->phy_cap_info[3] =
+ IEEE80211_EHT_PHY_CAP3_NG_16_SU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_4_2_SU_FDBK |
+- IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK |
+- IEEE80211_EHT_PHY_CAP3_TRIG_SU_BF_FDBK |
+- IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK |
+- IEEE80211_EHT_PHY_CAP3_TRIG_CQI_FDBK;
++ IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK;
+
+ eht_cap_elem->phy_cap_info[4] =
+ u8_encode_bits(min_t(int, sts - 1, 2),
+ IEEE80211_EHT_PHY_CAP4_MAX_NC_MASK);
+
+ eht_cap_elem->phy_cap_info[5] =
+- IEEE80211_EHT_PHY_CAP5_NON_TRIG_CQI_FEEDBACK |
+ u8_encode_bits(IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_16US,
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK) |
+ u8_encode_bits(u8_get_bits(0x11, GENMASK(1, 0)),
+@@ -1230,14 +1238,6 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK) |
+ u8_encode_bits(val, IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK);
+
+- eht_cap_elem->phy_cap_info[7] =
+- IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
+- IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
+- IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_320MHZ |
+- IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
+- IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ |
+- IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ;
+-
+ val = u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_RX) |
+ u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_TX);
+ #define SET_EHT_MAX_NSS(_bw, _val) do { \
+@@ -1248,8 +1248,29 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+
+ SET_EHT_MAX_NSS(80, val);
+ SET_EHT_MAX_NSS(160, val);
+- SET_EHT_MAX_NSS(320, val);
++ if (band == NL80211_BAND_6GHZ)
++ SET_EHT_MAX_NSS(320, val);
+ #undef SET_EHT_MAX_NSS
++
++ if (iftype != NL80211_IFTYPE_AP)
++ return;
++
++ eht_cap_elem->phy_cap_info[3] |=
++ IEEE80211_EHT_PHY_CAP3_TRIG_SU_BF_FDBK |
++ IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK;
++
++ eht_cap_elem->phy_cap_info[7] =
++ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
++ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
++ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
++ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ;
++
++ if (band != NL80211_BAND_6GHZ)
++ return;
++
++ eht_cap_elem->phy_cap_info[7] |=
++ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_320MHZ |
++ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ;
+ }
+
+ static void
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index bc7111a71f98ca..fd5fe96c32e3d1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -435,7 +435,7 @@ mt7996_mac_fill_rx(struct mt7996_dev *dev, enum mt76_rxq_id q,
+ u32 rxd2 = le32_to_cpu(rxd[2]);
+ u32 rxd3 = le32_to_cpu(rxd[3]);
+ u32 rxd4 = le32_to_cpu(rxd[4]);
+- u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
++ u32 csum_mask = MT_RXD3_NORMAL_IP_SUM | MT_RXD3_NORMAL_UDP_TCP_SUM;
+ u32 csum_status = *(u32 *)skb->cb;
+ u32 mesh_mask = MT_RXD0_MESH | MT_RXD0_MHCP;
+ bool is_mesh = (rxd0 & mesh_mask) == mesh_mask;
+@@ -497,7 +497,7 @@ mt7996_mac_fill_rx(struct mt7996_dev *dev, enum mt76_rxq_id q,
+ if (!sband->channels)
+ return -EINVAL;
+
+- if ((rxd0 & csum_mask) == csum_mask &&
++ if ((rxd3 & csum_mask) == csum_mask &&
+ !(csum_status & (BIT(0) | BIT(2) | BIT(3))))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index bce0820382194d..1ab2fb2922662f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -206,7 +206,7 @@ static int mt7996_add_interface(struct ieee80211_hw *hw,
+ mvif->mt76.omac_idx = idx;
+ mvif->phy = phy;
+ mvif->mt76.band_idx = band_idx;
+- mvif->mt76.wmm_idx = vif->type != NL80211_IFTYPE_AP;
++ mvif->mt76.wmm_idx = vif->type == NL80211_IFTYPE_AP ? 0 : 3;
+
+ ret = mt7996_mcu_add_dev_info(phy, vif, true);
+ if (ret)
+@@ -307,6 +307,10 @@ int mt7996_set_channel(struct mt7996_phy *phy)
+ if (ret)
+ goto out;
+
++ ret = mt7996_mcu_set_chan_info(phy, UNI_CHANNEL_RX_PATH);
++ if (ret)
++ goto out;
++
+ ret = mt7996_dfs_init_radar_detector(phy);
+ mt7996_mac_cca_stats_reset(phy);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 2e4fa9f48dfbee..7c5ec135c685f5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -735,7 +735,7 @@ void mt7996_mcu_rx_event(struct mt7996_dev *dev, struct sk_buff *skb)
+ static struct tlv *
+ mt7996_mcu_add_uni_tlv(struct sk_buff *skb, u16 tag, u16 len)
+ {
+- struct tlv *ptlv = skb_put(skb, len);
++ struct tlv *ptlv = skb_put_zero(skb, len);
+
+ ptlv->tag = cpu_to_le16(tag);
+ ptlv->len = cpu_to_le16(len);
+@@ -822,7 +822,7 @@ mt7996_mcu_bss_mbssid_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ struct bss_info_uni_mbssid *mbssid;
+ struct tlv *tlv;
+
+- if (!vif->bss_conf.bssid_indicator)
++ if (!vif->bss_conf.bssid_indicator && enable)
+ return;
+
+ tlv = mt7996_mcu_add_uni_tlv(skb, UNI_BSS_INFO_11V_MBSSID, sizeof(*mbssid));
+@@ -1429,10 +1429,10 @@ mt7996_is_ebf_supported(struct mt7996_phy *phy, struct ieee80211_vif *vif,
+
+ if (bfee)
+ return vif->bss_conf.eht_su_beamformee &&
+- EHT_PHY(CAP0_SU_BEAMFORMEE, pe->phy_cap_info[0]);
++ EHT_PHY(CAP0_SU_BEAMFORMER, pe->phy_cap_info[0]);
+ else
+ return vif->bss_conf.eht_su_beamformer &&
+- EHT_PHY(CAP0_SU_BEAMFORMER, pe->phy_cap_info[0]);
++ EHT_PHY(CAP0_SU_BEAMFORMEE, pe->phy_cap_info[0]);
+ }
+
+ if (sta->deflink.he_cap.has_he) {
+@@ -1544,6 +1544,9 @@ mt7996_mcu_sta_bfer_he(struct ieee80211_sta *sta, struct ieee80211_vif *vif,
+ u8 nss_mcs = mt7996_mcu_get_sta_nss(mcs_map);
+ u8 snd_dim, sts;
+
++ if (!vc)
++ return;
++
+ bf->tx_mode = MT_PHY_TYPE_HE_SU;
+
+ mt7996_mcu_sta_sounding_rate(bf);
+@@ -1653,7 +1656,7 @@ mt7996_mcu_sta_bfer_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
+ {
+ struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+ struct mt7996_phy *phy = mvif->phy;
+- int tx_ant = hweight8(phy->mt76->chainmask) - 1;
++ int tx_ant = hweight16(phy->mt76->chainmask) - 1;
+ struct sta_rec_bf *bf;
+ struct tlv *tlv;
+ static const u8 matrix[4][4] = {
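The skb_put() -> skb_put_zero() switch above matters because several TLV builders fill in only some fields of the reserved area; skb_put() hands back uninitialized tailroom, so any field a caller skips would leak stale bytes to the firmware. A minimal user-space sketch of the same pattern follows; buf_put(), buf_put_zero() and the tlv layout are illustrative stand-ins, not the driver's types.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for skb tailroom handling. */
struct buf {
	uint8_t data[256];
	size_t len;
};

static void *buf_put(struct buf *b, size_t len)
{
	void *p = b->data + b->len;	/* tail may hold stale bytes */

	b->len += len;
	return p;
}

static void *buf_put_zero(struct buf *b, size_t len)
{
	return memset(buf_put(b, len), 0, len);
}

struct tlv {
	uint16_t tag;
	uint16_t len;
	uint8_t body[12];	/* callers may fill only part of this */
};

int main(void)
{
	struct buf b = { .len = 0 };
	struct tlv *t;

	memset(b.data, 0xAA, sizeof(b.data));	/* pretend the tail is dirty */

	/* With plain buf_put() the unfilled body bytes would stay 0xAA;
	 * buf_put_zero() guarantees a clean TLV body. */
	t = buf_put_zero(&b, sizeof(*t));
	t->tag = 0x11;
	t->len = sizeof(*t);

	printf("body[0] = %#x\n", t->body[0]);	/* prints 0, not 0xaa */
	return 0;
}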
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index 3c48e1a57b24cd..bba53307b960d9 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -384,6 +384,7 @@ wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ struct wilc_join_bss_param *param;
+ u8 rates_len = 0;
+ int ies_len;
++ u64 ies_tsf;
+ int ret;
+
+ param = kzalloc(sizeof(*param), GFP_KERNEL);
+@@ -399,6 +400,7 @@ wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ return NULL;
+ }
+ ies_len = ies->len;
++ ies_tsf = ies->tsf;
+ rcu_read_unlock();
+
+ param->beacon_period = cpu_to_le16(bss->beacon_interval);
+@@ -454,7 +456,7 @@ wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ IEEE80211_P2P_ATTR_ABSENCE_NOTICE,
+ (u8 *)&noa_attr, sizeof(noa_attr));
+ if (ret > 0) {
+- param->tsf_lo = cpu_to_le32(ies->tsf);
++ param->tsf_lo = cpu_to_le32(ies_tsf);
+ param->noa_enabled = 1;
+ param->idx = noa_attr.index;
+ if (noa_attr.oppps_ctwindow & IEEE80211_P2P_OPPPS_ENABLE_BIT) {
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index de3332eb7a227a..a99776af56c27f 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -2194,7 +2194,6 @@ static void rtw_coex_action_bt_a2dp_pan(struct rtw_dev *rtwdev)
+ struct rtw_coex_stat *coex_stat = &coex->stat;
+ struct rtw_efuse *efuse = &rtwdev->efuse;
+ u8 table_case, tdma_case;
+- bool wl_cpt_test = false, bt_cpt_test = false;
+
+ rtw_dbg(rtwdev, RTW_DBG_COEX, "[BTCoex], %s()\n", __func__);
+
+@@ -2202,29 +2201,16 @@ static void rtw_coex_action_bt_a2dp_pan(struct rtw_dev *rtwdev)
+ rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[0]);
+ if (efuse->share_ant) {
+ /* Shared-Ant */
+- if (wl_cpt_test) {
+- if (coex_stat->wl_gl_busy) {
+- table_case = 20;
+- tdma_case = 17;
+- } else {
+- table_case = 10;
+- tdma_case = 15;
+- }
+- } else if (bt_cpt_test) {
+- table_case = 26;
+- tdma_case = 26;
+- } else {
+- if (coex_stat->wl_gl_busy &&
+- coex_stat->wl_noisy_level == 0)
+- table_case = 14;
+- else
+- table_case = 10;
++ if (coex_stat->wl_gl_busy &&
++ coex_stat->wl_noisy_level == 0)
++ table_case = 14;
++ else
++ table_case = 10;
+
+- if (coex_stat->wl_gl_busy)
+- tdma_case = 15;
+- else
+- tdma_case = 20;
+- }
++ if (coex_stat->wl_gl_busy)
++ tdma_case = 15;
++ else
++ tdma_case = 20;
+ } else {
+ /* Non-Shared-Ant */
+ table_case = 112;
+@@ -2235,11 +2221,7 @@ static void rtw_coex_action_bt_a2dp_pan(struct rtw_dev *rtwdev)
+ tdma_case = 120;
+ }
+
+- if (wl_cpt_test)
+- rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[1]);
+- else
+- rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[0]);
+-
++ rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[0]);
+ rtw_coex_table(rtwdev, false, table_case);
+ rtw_coex_tdma(rtwdev, false, tdma_case);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index ab7d414d0ba679..b9b0114e253b43 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -1468,10 +1468,12 @@ int rtw_fw_write_data_rsvd_page(struct rtw_dev *rtwdev, u16 pg_addr,
+ val |= BIT_ENSWBCN >> 8;
+ rtw_write8(rtwdev, REG_CR + 1, val);
+
+- val = rtw_read8(rtwdev, REG_FWHW_TXQ_CTRL + 2);
+- bckp[1] = val;
+- val &= ~(BIT_EN_BCNQ_DL >> 16);
+- rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, val);
++ if (rtw_hci_type(rtwdev) == RTW_HCI_TYPE_PCIE) {
++ val = rtw_read8(rtwdev, REG_FWHW_TXQ_CTRL + 2);
++ bckp[1] = val;
++ val &= ~(BIT_EN_BCNQ_DL >> 16);
++ rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, val);
++ }
+
+ ret = rtw_hci_write_data_rsvd_page(rtwdev, buf, size);
+ if (ret) {
+@@ -1496,7 +1498,8 @@ int rtw_fw_write_data_rsvd_page(struct rtw_dev *rtwdev, u16 pg_addr,
+ rsvd_pg_head = rtwdev->fifo.rsvd_boundary;
+ rtw_write16(rtwdev, REG_FIFOPAGE_CTRL_2,
+ rsvd_pg_head | BIT_BCN_VALID_V1);
+- rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, bckp[1]);
++ if (rtw_hci_type(rtwdev) == RTW_HCI_TYPE_PCIE)
++ rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, bckp[1]);
+ rtw_write8(rtwdev, REG_CR + 1, bckp[0]);
+
+ return ret;
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 7ab7a988b123f1..33a7577557a568 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1313,20 +1313,21 @@ static int rtw_wait_firmware_completion(struct rtw_dev *rtwdev)
+ {
+ const struct rtw_chip_info *chip = rtwdev->chip;
+ struct rtw_fw_state *fw;
++ int ret = 0;
+
+ fw = &rtwdev->fw;
+ wait_for_completion(&fw->completion);
+ if (!fw->firmware)
+- return -EINVAL;
++ ret = -EINVAL;
+
+ if (chip->wow_fw_name) {
+ fw = &rtwdev->wow_fw;
+ wait_for_completion(&fw->completion);
+ if (!fw->firmware)
+- return -EINVAL;
++ ret = -EINVAL;
+ }
+
+- return 0;
++ return ret;
+ }
+
+ static enum rtw_lps_deep_mode rtw_update_lps_deep_mode(struct rtw_dev *rtwdev,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+index e2c7d9f876836a..a019f4085e7389 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+@@ -31,8 +31,6 @@ static const struct usb_device_id rtw_8821cu_id_table[] = {
+ .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ { USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82b, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+- { USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82c, 0xff, 0xff, 0xff),
+- .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x331d, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* D-Link */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xc811, 0xff, 0xff, 0xff),
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 62376d1cca22fc..732d787652208b 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -2611,12 +2611,14 @@ static void query_phy_status_page1(struct rtw_dev *rtwdev, u8 *phy_status,
+ else
+ rxsc = GET_PHY_STAT_P1_HT_RXSC(phy_status);
+
+- if (rxsc >= 9 && rxsc <= 12)
++ if (rxsc == 0)
++ bw = rtwdev->hal.current_band_width;
++ else if (rxsc >= 1 && rxsc <= 8)
++ bw = RTW_CHANNEL_WIDTH_20;
++ else if (rxsc >= 9 && rxsc <= 12)
+ bw = RTW_CHANNEL_WIDTH_40;
+- else if (rxsc >= 13)
+- bw = RTW_CHANNEL_WIDTH_80;
+ else
+- bw = RTW_CHANNEL_WIDTH_20;
++ bw = RTW_CHANNEL_WIDTH_80;
+
+ channel = GET_PHY_STAT_P1_CHANNEL(phy_status);
+ rtw_set_rx_freq_band(pkt_stat, channel);
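For reference, the reordered branches above implement the sub-channel (rxsc) ranges this family documents: 0 means the PPDU spans the full current bandwidth, 1-8 are 20 MHz sub-channels, 9-12 are 40 MHz, and larger values are 80 MHz. A standalone sketch of the same decision table, with illustrative enum and function names:

#include <stdio.h>

enum bw { BW_20, BW_40, BW_80, BW_CURRENT };

/* Mirrors the fixed mapping: 0 -> current bandwidth, 1..8 -> 20 MHz,
 * 9..12 -> 40 MHz, 13 and up -> 80 MHz. */
static enum bw rxsc_to_bw(unsigned int rxsc)
{
	if (rxsc == 0)
		return BW_CURRENT;
	if (rxsc <= 8)
		return BW_20;
	if (rxsc <= 12)
		return BW_40;
	return BW_80;
}

int main(void)
{
	for (unsigned int rxsc = 0; rxsc <= 14; rxsc++)
		printf("rxsc %2u -> bw %d\n", rxsc, rxsc_to_bw(rxsc));
	return 0;
}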
+diff --git a/drivers/net/wireless/realtek/rtw88/rx.h b/drivers/net/wireless/realtek/rtw88/rx.h
+index d3668c4efc24d5..8a072dd3d73ce4 100644
+--- a/drivers/net/wireless/realtek/rtw88/rx.h
++++ b/drivers/net/wireless/realtek/rtw88/rx.h
+@@ -41,7 +41,7 @@ enum rtw_rx_desc_enc {
+ #define GET_RX_DESC_TSFL(rxdesc) \
+ le32_get_bits(*((__le32 *)(rxdesc) + 0x05), GENMASK(31, 0))
+ #define GET_RX_DESC_BW(rxdesc) \
+- (le32_get_bits(*((__le32 *)(rxdesc) + 0x04), GENMASK(31, 24)))
++ (le32_get_bits(*((__le32 *)(rxdesc) + 0x04), GENMASK(5, 4)))
+
+ void rtw_rx_stats(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
+ struct sk_buff *skb);
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 7019f7d482a881..fc172964349bdd 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -4278,6 +4278,7 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
+
+ rtw89_init_wait(&rtwdev->mcc.wait);
+ rtw89_init_wait(&rtwdev->mac.fw_ofld_wait);
++ rtw89_init_wait(&rtwdev->wow.wait);
+
+ INIT_WORK(&rtwdev->c2h_work, rtw89_fw_c2h_work);
+ INIT_WORK(&rtwdev->ips_work, rtw89_ips_work);
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 11fa003a9788c4..9c282d84743b9e 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -5293,6 +5293,9 @@ struct rtw89_wow_param {
+ u8 gtk_alg;
+ u8 ptk_keyidx;
+ u8 akm;
++
++ /* see RTW89_WOW_WAIT_COND series for wait condition */
++ struct rtw89_wait_info wait;
+ };
+
+ struct rtw89_mcc_limit {
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index fbe08c162b93de..c322148d6daf59 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6825,11 +6825,10 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+
+ int rtw89_fw_h2c_wow_request_aoac(struct rtw89_dev *rtwdev)
+ {
+- struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait;
++ struct rtw89_wait_info *wait = &rtwdev->wow.wait;
+ struct rtw89_h2c_wow_aoac *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+- unsigned int cond;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+ if (!skb) {
+@@ -6848,8 +6847,7 @@ int rtw89_fw_h2c_wow_request_aoac(struct rtw89_dev *rtwdev)
+ H2C_FUNC_AOAC_REPORT_REQ, 1, 0,
+ len);
+
+- cond = RTW89_WOW_WAIT_COND(H2C_FUNC_AOAC_REPORT_REQ);
+- return rtw89_h2c_tx_and_wait(rtwdev, skb, wait, cond);
++ return rtw89_h2c_tx_and_wait(rtwdev, skb, wait, RTW89_WOW_WAIT_COND_AOAC);
+ }
+
+ /* Return < 0 if failures happen while waiting for the condition.
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index c3b4324c621c16..df6f2424fa9113 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -3942,8 +3942,11 @@ enum rtw89_wow_h2c_func {
+ NUM_OF_RTW89_WOW_H2C_FUNC,
+ };
+
+-#define RTW89_WOW_WAIT_COND(func) \
+- (NUM_OF_RTW89_WOW_H2C_FUNC + (func))
++#define RTW89_WOW_WAIT_COND(tag, func) \
++ ((tag) * NUM_OF_RTW89_WOW_H2C_FUNC + (func))
++
++#define RTW89_WOW_WAIT_COND_AOAC \
++ RTW89_WOW_WAIT_COND(0 /* don't care */, H2C_FUNC_AOAC_REPORT_REQ)
+
+ /* CLASS 2 - PS */
+ #define H2C_CL_MAC_PS 0x2
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index e2399796aeb1e0..9a4f23d83bf2a8 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -2728,6 +2728,7 @@ bool rtw89_mac_is_qta_dbcc(struct rtw89_dev *rtwdev, enum rtw89_qta_mode mode)
+
+ static int ptcl_init_ax(struct rtw89_dev *rtwdev, u8 mac_idx)
+ {
++ enum rtw89_core_chip_id chip_id = rtwdev->chip->chip_id;
+ u32 val, reg;
+ int ret;
+
+@@ -2766,6 +2767,12 @@ static int ptcl_init_ax(struct rtw89_dev *rtwdev, u8 mac_idx)
+ B_AX_SPE_RPT_PATH_MASK, FWD_TO_WLCPU);
+ }
+
++ if (chip_id == RTL8852A || rtw89_is_rtl885xb(rtwdev)) {
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_AGG_LEN_VHT_0, mac_idx);
++ rtw89_write32_mask(rtwdev, reg,
++ B_AX_AMPDU_MAX_LEN_VHT_MASK, 0x3FF80);
++ }
++
+ return 0;
+ }
+
+@@ -5144,11 +5151,10 @@ rtw89_mac_c2h_wow_aoac_rpt(struct rtw89_dev *rtwdev, struct sk_buff *skb, u32 le
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt;
+- struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait;
++ struct rtw89_wait_info *wait = &rtw_wow->wait;
+ const struct rtw89_c2h_wow_aoac_report *c2h =
+ (const struct rtw89_c2h_wow_aoac_report *)skb->data;
+ struct rtw89_completion_data data = {};
+- unsigned int cond;
+
+ aoac_rpt->rpt_ver = c2h->rpt_ver;
+ aoac_rpt->sec_type = c2h->sec_type;
+@@ -5166,8 +5172,7 @@ rtw89_mac_c2h_wow_aoac_rpt(struct rtw89_dev *rtwdev, struct sk_buff *skb, u32 le
+ aoac_rpt->igtk_ipn = le64_to_cpu(c2h->igtk_ipn);
+ memcpy(aoac_rpt->igtk, c2h->igtk, sizeof(aoac_rpt->igtk));
+
+- cond = RTW89_WOW_WAIT_COND(H2C_FUNC_AOAC_REPORT_REQ);
+- rtw89_complete_cond(wait, cond, &data);
++ rtw89_complete_cond(wait, RTW89_WOW_WAIT_COND_AOAC, &data);
+ }
+
+ static void
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index d5895516b3ed52..4c619cec602fcc 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -421,7 +421,6 @@ enum rtw89_mac_c2h_mrc_func {
+
+ enum rtw89_mac_c2h_wow_func {
+ RTW89_MAC_C2H_FUNC_AOAC_REPORT,
+- RTW89_MAC_C2H_FUNC_READ_WOW_CAM,
+
+ NUM_OF_RTW89_MAC_C2H_FUNC_WOW,
+ };
+diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
+index 7df36f3bff0b01..b1c24eedc7e080 100644
+--- a/drivers/net/wireless/realtek/rtw89/reg.h
++++ b/drivers/net/wireless/realtek/rtw89/reg.h
+@@ -2440,6 +2440,10 @@
+ #define B_AX_RTS_TXTIME_TH_MASK GENMASK(15, 8)
+ #define B_AX_RTS_LEN_TH_MASK GENMASK(7, 0)
+
++#define R_AX_AGG_LEN_VHT_0 0xC618
++#define R_AX_AGG_LEN_VHT_0_C1 0xE618
++#define B_AX_AMPDU_MAX_LEN_VHT_MASK GENMASK(19, 0)
++
+ #define S_AX_CTS2S_TH_SEC_256B 1
+ #define R_AX_SIFS_SETTING 0xC624
+ #define R_AX_SIFS_SETTING_C1 0xE624
+diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
+index d86e6ff4523db4..5fe9e4e261429d 100644
+--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
++++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
+@@ -71,7 +71,7 @@ MODULE_PARM_DESC(mlo, "Support MLO");
+
+ static bool multi_radio;
+ module_param(multi_radio, bool, 0444);
+-MODULE_PARM_DESC(mlo, "Support Multiple Radios per wiphy");
++MODULE_PARM_DESC(multi_radio, "Support Multiple Radios per wiphy");
+
+ /**
+ * enum hwsim_regtest - the type of regulatory tests we offer
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen1.c b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+index 9ab836d0d4f12d..079b8cd7978573 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen1.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+@@ -778,7 +778,7 @@ static void ndev_init_debugfs(struct intel_ntb_dev *ndev)
+ ndev->debugfs_dir =
+ debugfs_create_dir(pci_name(ndev->ntb.pdev),
+ debugfs_dir);
+- if (!ndev->debugfs_dir)
++ if (IS_ERR(ndev->debugfs_dir))
+ ndev->debugfs_info = NULL;
+ else
+ ndev->debugfs_info =
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index 77e55debeed61b..ef2855946a9921 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -807,16 +807,29 @@ static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw)
+ }
+
+ static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
+- struct device *dma_dev, size_t align)
++ struct device *ntb_dev, size_t align)
+ {
+ dma_addr_t dma_addr;
+ void *alloc_addr, *virt_addr;
+ int rc;
+
+- alloc_addr = dma_alloc_coherent(dma_dev, mw->alloc_size,
+- &dma_addr, GFP_KERNEL);
++ /*
++ * The buffer here is allocated against the NTB device. The reason to
++ * use a dma_alloc_*() call is to allocate a large IOVA-contiguous buffer
++ * backing the NTB BAR for the remote host to write to. During receive
++ * processing, the data is copied out of the receive buffer to the
++ * kernel skbuff. When a DMA device is being used, dma_map_page() is
++ * called on the kvaddr of the receive buffer (from dma_alloc_*()) and
++ * remapped against the DMA device. It appears to be a double DMA
++ * mapping of buffers, but the first is mapped to the NTB device and
++ * the second to the DMA device. DMA_ATTR_FORCE_CONTIGUOUS is necessary
++ * in order for the later dma_map_page() not to fail.
++ */
++ alloc_addr = dma_alloc_attrs(ntb_dev, mw->alloc_size,
++ &dma_addr, GFP_KERNEL,
++ DMA_ATTR_FORCE_CONTIGUOUS);
+ if (!alloc_addr) {
+- dev_err(dma_dev, "Unable to alloc MW buff of size %zu\n",
++ dev_err(ntb_dev, "Unable to alloc MW buff of size %zu\n",
+ mw->alloc_size);
+ return -ENOMEM;
+ }
+@@ -845,7 +858,7 @@ static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
+ return 0;
+
+ err:
+- dma_free_coherent(dma_dev, mw->alloc_size, alloc_addr, dma_addr);
++ dma_free_coherent(ntb_dev, mw->alloc_size, alloc_addr, dma_addr);
+
+ return rc;
+ }
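The new comment above is the heart of this change: the memory-window buffer is effectively mapped twice, first against the NTB device at allocation and later against a separate DMA engine via dma_map_page(), so the backing pages must be physically contiguous, which a plain coherent allocation no longer guarantees behind an IOMMU. A kernel-style sketch of the allocation pairing, assuming a module context; the helper names are illustrative, not the driver's:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Sketch: allocate a physically contiguous memory-window buffer against
 * the NTB device, so a later dma_map_page() of the same memory against
 * a different DMA engine cannot fail on scattered backing pages.
 */
static void *mw_alloc(struct device *ntb_dev, size_t size, dma_addr_t *dma)
{
	return dma_alloc_attrs(ntb_dev, size, dma, GFP_KERNEL,
			       DMA_ATTR_FORCE_CONTIGUOUS);
}

static void mw_free(struct device *ntb_dev, size_t size, void *va,
		    dma_addr_t dma)
{
	/* Passing the attr back keeps the pair symmetric; the driver
	 * itself frees with dma_free_coherent(), i.e. attrs == 0. */
	dma_free_attrs(ntb_dev, size, va, dma, DMA_ATTR_FORCE_CONTIGUOUS);
}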
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 553f1f46bc664f..72bc1d017a46ee 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -1227,7 +1227,7 @@ static ssize_t perf_dbgfs_read_info(struct file *filep, char __user *ubuf,
+ "\tOut buffer addr 0x%pK\n", peer->outbuf);
+
+ pos += scnprintf(buf + pos, buf_size - pos,
+- "\tOut buff phys addr %pa[p]\n", &peer->out_phys_addr);
++ "\tOut buff phys addr %pap\n", &peer->out_phys_addr);
+
+ pos += scnprintf(buf + pos, buf_size - pos,
+ "\tOut buffer size %pa\n", &peer->outbuf_size);
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index d6d558f94d6bb2..35d9f3cc2efabf 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1937,12 +1937,16 @@ static int cmp_dpa(const void *a, const void *b)
+ static struct device **scan_labels(struct nd_region *nd_region)
+ {
+ int i, count = 0;
+- struct device *dev, **devs = NULL;
++ struct device *dev, **devs;
+ struct nd_label_ent *label_ent, *e;
+ struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+ struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+ resource_size_t map_end = nd_mapping->start + nd_mapping->size - 1;
+
++ devs = kcalloc(2, sizeof(dev), GFP_KERNEL);
++ if (!devs)
++ return NULL;
++
+ /* "safe" because create_namespace_pmem() might list_move() label_ent */
+ list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) {
+ struct nd_namespace_label *nd_label = label_ent->label;
+@@ -1961,12 +1965,14 @@ static struct device **scan_labels(struct nd_region *nd_region)
+ goto err;
+ if (i < count)
+ continue;
+- __devs = kcalloc(count + 2, sizeof(dev), GFP_KERNEL);
+- if (!__devs)
+- goto err;
+- memcpy(__devs, devs, sizeof(dev) * count);
+- kfree(devs);
+- devs = __devs;
++ if (count) {
++ __devs = kcalloc(count + 2, sizeof(dev), GFP_KERNEL);
++ if (!__devs)
++ goto err;
++ memcpy(__devs, devs, sizeof(dev) * count);
++ kfree(devs);
++ devs = __devs;
++ }
+
+ dev = create_namespace_pmem(nd_region, nd_mapping, nd_label);
+ if (IS_ERR(dev)) {
+@@ -1993,11 +1999,6 @@ static struct device **scan_labels(struct nd_region *nd_region)
+
+ /* Publish a zero-sized namespace for userspace to configure. */
+ nd_mapping_free_labels(nd_mapping);
+-
+- devs = kcalloc(2, sizeof(dev), GFP_KERNEL);
+- if (!devs)
+- goto err;
+-
+ nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
+ if (!nspm)
+ goto err;
+@@ -2036,11 +2037,10 @@ static struct device **scan_labels(struct nd_region *nd_region)
+ return devs;
+
+ err:
+- if (devs) {
+- for (i = 0; devs[i]; i++)
+- namespace_pmem_release(devs[i]);
+- kfree(devs);
+- }
++ for (i = 0; devs[i]; i++)
++ namespace_pmem_release(devs[i]);
++ kfree(devs);
++
+ return NULL;
+ }
+
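The scan_labels() rework above preallocates a two-slot, NULL-terminated array so the error path can unconditionally walk and free it, and grows the array by count + 2 (new entry plus terminator) only once entries exist. A user-space sketch of that pattern, with calloc()/free() standing in for kcalloc()/kfree():

#include <stdlib.h>
#include <string.h>

/* Grow a NULL-terminated pointer array to hold one more entry. */
static void **grow(void **vec, size_t count)
{
	void **nvec = calloc(count + 2, sizeof(*vec));

	if (!nvec)
		return NULL;
	memcpy(nvec, vec, sizeof(*vec) * count);
	free(vec);
	return nvec;
}

int main(void)
{
	void **vec = calloc(2, sizeof(*vec));	/* 1 slot + terminator */
	size_t count = 0;
	int items[3] = { 1, 2, 3 };

	if (!vec)
		return 1;
	for (size_t i = 0; i < 3; i++) {
		if (count) {	/* first slot is preallocated */
			void **nvec = grow(vec, count);

			if (!nvec)
				break;
			vec = nvec;
		}
		vec[count++] = &items[i];
	}
	free(vec);
	return 0;
}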
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 518e22dd4f9bea..6d97058cde7a11 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -648,7 +648,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ rc = device_add_disk(&head->subsys->dev, head->disk,
+ nvme_ns_attr_groups);
+ if (rc) {
+- clear_bit(NVME_NSHEAD_DISK_LIVE, &ns->flags);
++ clear_bit(NVME_NSHEAD_DISK_LIVE, &head->flags);
+ return;
+ }
+ nvme_add_ns_head_cdev(head);
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index 4fe3b0cb72ec51..5c62e1a3ba5291 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -850,14 +850,21 @@ static int dra7xx_pcie_probe(struct platform_device *pdev)
+ dra7xx->mode = mode;
+
+ ret = devm_request_threaded_irq(dev, irq, NULL, dra7xx_pcie_irq_handler,
+- IRQF_SHARED, "dra7xx-pcie-main", dra7xx);
++ IRQF_SHARED | IRQF_ONESHOT,
++ "dra7xx-pcie-main", dra7xx);
+ if (ret) {
+ dev_err(dev, "failed to request irq\n");
+- goto err_gpio;
++ goto err_deinit;
+ }
+
+ return 0;
+
++err_deinit:
++ if (dra7xx->mode == DW_PCIE_RC_TYPE)
++ dw_pcie_host_deinit(&dra7xx->pci->pp);
++ else
++ dw_pcie_ep_deinit(&dra7xx->pci->ep);
++
+ err_gpio:
+ err_get_sync:
+ pm_runtime_put(dev);
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 964d67756eb2be..eaec471c462346 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -953,7 +953,7 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
+ ret = phy_power_on(imx6_pcie->phy);
+ if (ret) {
+ dev_err(dev, "waiting for PHY ready timeout!\n");
+- goto err_phy_off;
++ goto err_phy_exit;
+ }
+ }
+
+@@ -968,8 +968,9 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
+ return 0;
+
+ err_phy_off:
+- if (imx6_pcie->phy)
+- phy_exit(imx6_pcie->phy);
++ phy_power_off(imx6_pcie->phy);
++err_phy_exit:
++ phy_exit(imx6_pcie->phy);
+ err_clk_disable:
+ imx6_pcie_clk_disable(imx6_pcie);
+ err_reg_disable:
+@@ -1113,6 +1114,8 @@ static int imx6_add_pcie_ep(struct imx6_pcie *imx6_pcie,
+ if (imx6_check_flag(imx6_pcie, IMX6_PCIE_FLAG_SUPPORT_64BIT))
+ dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+
++ ep->page_size = imx6_pcie->drvdata->epc_features->align;
++
+ ret = dw_pcie_ep_init(ep);
+ if (ret) {
+ dev_err(dev, "failed to initialize endpoint\n");
+@@ -1562,7 +1565,8 @@ static const struct imx6_pcie_drvdata drvdata[] = {
+ },
+ [IMX8MM_EP] = {
+ .variant = IMX8MM_EP,
+- .flags = IMX6_PCIE_FLAG_HAS_PHYDRV,
++ .flags = IMX6_PCIE_FLAG_HAS_APP_RESET |
++ IMX6_PCIE_FLAG_HAS_PHYDRV,
+ .mode = DW_PCIE_EP_TYPE,
+ .gpr = "fsl,imx8mm-iomuxc-gpr",
+ .clk_names = imx8mm_clks,
+@@ -1573,7 +1577,8 @@ static const struct imx6_pcie_drvdata drvdata[] = {
+ },
+ [IMX8MP_EP] = {
+ .variant = IMX8MP_EP,
+- .flags = IMX6_PCIE_FLAG_HAS_PHYDRV,
++ .flags = IMX6_PCIE_FLAG_HAS_APP_RESET |
++ IMX6_PCIE_FLAG_HAS_PHYDRV,
+ .mode = DW_PCIE_EP_TYPE,
+ .gpr = "fsl,imx8mp-iomuxc-gpr",
+ .clk_names = imx8mm_clks,
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 52c6420ae2003c..95a471d6a586d9 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -577,7 +577,7 @@ static void ks_pcie_quirk(struct pci_dev *dev)
+ */
+ if (pci_match_id(am6_pci_devids, bridge)) {
+ bridge_dev = pci_get_host_bridge_device(dev);
+- if (!bridge_dev && !bridge_dev->parent)
++ if (!bridge_dev || !bridge_dev->parent)
+ return;
+
+ ks_pcie = dev_get_drvdata(bridge_dev->parent);
+diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
+index 0a29136491b891..85a2c77b1835af 100644
+--- a/drivers/pci/controller/dwc/pcie-kirin.c
++++ b/drivers/pci/controller/dwc/pcie-kirin.c
+@@ -420,11 +420,11 @@ static int kirin_pcie_parse_port(struct kirin_pcie *pcie,
+ "unable to get a valid reset gpio\n");
+ }
+
+- pcie->num_slots++;
+- if (pcie->num_slots > MAX_PCI_SLOTS) {
++ if (pcie->num_slots + 1 >= MAX_PCI_SLOTS) {
+ dev_err(dev, "Too many PCI slots!\n");
+ return -EINVAL;
+ }
++ pcie->num_slots++;
+
+ ret = of_pci_get_devfn(child);
+ if (ret < 0) {
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index a9b263f749b6aa..c8bb7cd89678f5 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -858,21 +858,15 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- ret = qcom_pcie_enable_resources(pcie_ep);
+- if (ret) {
+- dev_err(dev, "Failed to enable resources: %d\n", ret);
+- return ret;
+- }
+-
+ ret = dw_pcie_ep_init(&pcie_ep->pci.ep);
+ if (ret) {
+ dev_err(dev, "Failed to initialize endpoint: %d\n", ret);
+- goto err_disable_resources;
++ return ret;
+ }
+
+ ret = qcom_pcie_ep_enable_irq_resources(pdev, pcie_ep);
+ if (ret)
+- goto err_disable_resources;
++ goto err_ep_deinit;
+
+ name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
+ if (!name) {
+@@ -889,8 +883,8 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
+ disable_irq(pcie_ep->global_irq);
+ disable_irq(pcie_ep->perst_irq);
+
+-err_disable_resources:
+- qcom_pcie_disable_resources(pcie_ep);
++err_ep_deinit:
++ dw_pcie_ep_deinit(&pcie_ep->pci.ep);
+
+ return ret;
+ }
+diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
+index 0408f4d612b5af..7417993f8cff76 100644
+--- a/drivers/pci/controller/pcie-xilinx-nwl.c
++++ b/drivers/pci/controller/pcie-xilinx-nwl.c
+@@ -80,8 +80,8 @@
+ #define MSGF_MISC_SR_NON_FATAL_DEV BIT(22)
+ #define MSGF_MISC_SR_FATAL_DEV BIT(23)
+ #define MSGF_MISC_SR_LINK_DOWN BIT(24)
+-#define MSGF_MSIC_SR_LINK_AUTO_BWIDTH BIT(25)
+-#define MSGF_MSIC_SR_LINK_BWIDTH BIT(26)
++#define MSGF_MISC_SR_LINK_AUTO_BWIDTH BIT(25)
++#define MSGF_MISC_SR_LINK_BWIDTH BIT(26)
+
+ #define MSGF_MISC_SR_MASKALL (MSGF_MISC_SR_RXMSG_AVAIL | \
+ MSGF_MISC_SR_RXMSG_OVER | \
+@@ -96,8 +96,8 @@
+ MSGF_MISC_SR_NON_FATAL_DEV | \
+ MSGF_MISC_SR_FATAL_DEV | \
+ MSGF_MISC_SR_LINK_DOWN | \
+- MSGF_MSIC_SR_LINK_AUTO_BWIDTH | \
+- MSGF_MSIC_SR_LINK_BWIDTH)
++ MSGF_MISC_SR_LINK_AUTO_BWIDTH | \
++ MSGF_MISC_SR_LINK_BWIDTH)
+
+ /* Legacy interrupt status mask bits */
+ #define MSGF_LEG_SR_INTA BIT(0)
+@@ -299,10 +299,10 @@ static irqreturn_t nwl_pcie_misc_handler(int irq, void *data)
+ if (misc_stat & MSGF_MISC_SR_FATAL_DEV)
+ dev_err(dev, "Fatal Error Detected\n");
+
+- if (misc_stat & MSGF_MSIC_SR_LINK_AUTO_BWIDTH)
++ if (misc_stat & MSGF_MISC_SR_LINK_AUTO_BWIDTH)
+ dev_info(dev, "Link Autonomous Bandwidth Management Status bit set\n");
+
+- if (misc_stat & MSGF_MSIC_SR_LINK_BWIDTH)
++ if (misc_stat & MSGF_MISC_SR_LINK_BWIDTH)
+ dev_info(dev, "Link Bandwidth Management Status bit set\n");
+
+ /* Clear misc interrupt status */
+@@ -371,7 +371,7 @@ static void nwl_mask_intx_irq(struct irq_data *data)
+ u32 mask;
+ u32 val;
+
+- mask = 1 << (data->hwirq - 1);
++ mask = 1 << data->hwirq;
+ raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
+ val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
+ nwl_bridge_writel(pcie, (val & (~mask)), MSGF_LEG_MASK);
+@@ -385,7 +385,7 @@ static void nwl_unmask_intx_irq(struct irq_data *data)
+ u32 mask;
+ u32 val;
+
+- mask = 1 << (data->hwirq - 1);
++ mask = 1 << data->hwirq;
+ raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
+ val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
+ nwl_bridge_writel(pcie, (val | mask), MSGF_LEG_MASK);
+@@ -779,6 +779,7 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ return -ENODEV;
+
+ pcie = pci_host_bridge_priv(bridge);
++ platform_set_drvdata(pdev, pcie);
+
+ pcie->dev = dev;
+
+@@ -801,13 +802,13 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ err = nwl_pcie_bridge_init(pcie);
+ if (err) {
+ dev_err(dev, "HW Initialization failed\n");
+- return err;
++ goto err_clk;
+ }
+
+ err = nwl_pcie_init_irq_domain(pcie);
+ if (err) {
+ dev_err(dev, "Failed creating IRQ Domain\n");
+- return err;
++ goto err_clk;
+ }
+
+ bridge->sysdata = pcie;
+@@ -817,11 +818,24 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ err = nwl_pcie_enable_msi(pcie);
+ if (err < 0) {
+ dev_err(dev, "failed to enable MSI support: %d\n", err);
+- return err;
++ goto err_clk;
+ }
+ }
+
+- return pci_host_probe(bridge);
++ err = pci_host_probe(bridge);
++ if (!err)
++ return 0;
++
++err_clk:
++ clk_disable_unprepare(pcie->clk);
++ return err;
++}
++
++static void nwl_pcie_remove(struct platform_device *pdev)
++{
++ struct nwl_pcie *pcie = platform_get_drvdata(pdev);
++
++ clk_disable_unprepare(pcie->clk);
+ }
+
+ static struct platform_driver nwl_pcie_driver = {
+@@ -831,5 +845,6 @@ static struct platform_driver nwl_pcie_driver = {
+ .of_match_table = nwl_pcie_of_match,
+ },
+ .probe = nwl_pcie_probe,
++ .remove_new = nwl_pcie_remove,
+ };
+ builtin_platform_driver(nwl_pcie_driver);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index ffaaca0978cbc9..ad2d571ccbc1f7 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1324,7 +1324,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+ if (delay > PCI_RESET_WAIT) {
+ if (retrain) {
+ retrain = false;
+- if (pcie_failed_link_retrain(bridge)) {
++ if (pcie_failed_link_retrain(bridge) == 0) {
+ delay = 1;
+ continue;
+ }
+@@ -4718,7 +4718,15 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
+ pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL);
+ }
+
+- return pcie_wait_for_link_status(pdev, use_lt, !use_lt);
++ rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt);
++
++ /*
++ * Clear LBMS after a manual retrain so that the bit can be used
++ * to track link speed or width changes made by hardware itself
++ * in an attempt to correct unreliable link operation.
++ */
++ pcie_capability_write_word(pdev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
++ return rc;
+ }
+
+ /**
+@@ -5672,8 +5680,10 @@ static void pci_bus_restore_locked(struct pci_bus *bus)
+
+ list_for_each_entry(dev, &bus->devices, bus_list) {
+ pci_dev_restore(dev);
+- if (dev->subordinate)
++ if (dev->subordinate) {
++ pci_bridge_wait_for_secondary_bus(dev, "bus reset");
+ pci_bus_restore_locked(dev->subordinate);
++ }
+ }
+ }
+
+@@ -5707,8 +5717,10 @@ static void pci_slot_restore_locked(struct pci_slot *slot)
+ if (!dev->slot || dev->slot != slot)
+ continue;
+ pci_dev_restore(dev);
+- if (dev->subordinate)
++ if (dev->subordinate) {
++ pci_bridge_wait_for_secondary_bus(dev, "slot reset");
+ pci_bus_restore_locked(dev->subordinate);
++ }
+ }
+ }
+
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 79c8398f39384c..7c06e55c5072e1 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -606,7 +606,7 @@ void pci_acs_init(struct pci_dev *dev);
+ int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
+ int pci_dev_specific_enable_acs(struct pci_dev *dev);
+ int pci_dev_specific_disable_acs_redir(struct pci_dev *dev);
+-bool pcie_failed_link_retrain(struct pci_dev *dev);
++int pcie_failed_link_retrain(struct pci_dev *dev);
+ #else
+ static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev,
+ u16 acs_flags)
+@@ -621,9 +621,9 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
+ {
+ return -ENOTTY;
+ }
+-static inline bool pcie_failed_link_retrain(struct pci_dev *dev)
++static inline int pcie_failed_link_retrain(struct pci_dev *dev)
+ {
+- return false;
++ return -ENOTTY;
+ }
+ #endif
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index a2ce4e08edf5a3..5d57ea27dbc42a 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -66,7 +66,7 @@
+ * apply this erratum workaround to any downstream ports as long as they
+ * support Link Active reporting and have the Link Control 2 register.
+ * Restrict the speed to 2.5GT/s then with the Target Link Speed field,
+- * request a retrain and wait 200ms for the data link to go up.
++ * request a retrain and check the result.
+ *
+ * If this turns out successful and we know by the Vendor:Device ID it is
+ * safe to do so, then lift the restriction, letting the devices negotiate
+@@ -74,33 +74,45 @@
+ * firmware may have already arranged and lift it with ports that already
+ * report their data link being up.
+ *
+- * Return TRUE if the link has been successfully retrained, otherwise FALSE.
++ * Otherwise revert the speed to the original setting and request a retrain
++ * again to remove any residual state, ignoring the result as it's supposed
++ * to fail anyway.
++ *
++ * Return 0 if the link has been successfully retrained. Return an error
++ * if retraining was not needed or we attempted a retrain and it failed.
+ */
+-bool pcie_failed_link_retrain(struct pci_dev *dev)
++int pcie_failed_link_retrain(struct pci_dev *dev)
+ {
+ static const struct pci_device_id ids[] = {
+ { PCI_VDEVICE(ASMEDIA, 0x2824) }, /* ASMedia ASM2824 */
+ {}
+ };
+ u16 lnksta, lnkctl2;
++ int ret = -ENOTTY;
+
+ if (!pci_is_pcie(dev) || !pcie_downstream_port(dev) ||
+ !pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
+- return false;
++ return ret;
+
+ pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
+ pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+ if ((lnksta & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_DLLLA)) ==
+ PCI_EXP_LNKSTA_LBMS) {
++ u16 oldlnkctl2 = lnkctl2;
++
+ pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
+
+ lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
+ lnkctl2 |= PCI_EXP_LNKCTL2_TLS_2_5GT;
+ pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
+
+- if (pcie_retrain_link(dev, false)) {
++ ret = pcie_retrain_link(dev, false);
++ if (ret) {
+ pci_info(dev, "retraining failed\n");
+- return false;
++ pcie_capability_write_word(dev, PCI_EXP_LNKCTL2,
++ oldlnkctl2);
++ pcie_retrain_link(dev, true);
++ return ret;
+ }
+
+ pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+@@ -117,13 +129,14 @@ bool pcie_failed_link_retrain(struct pci_dev *dev)
+ lnkctl2 |= lnkcap & PCI_EXP_LNKCAP_SLS;
+ pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
+
+- if (pcie_retrain_link(dev, false)) {
++ ret = pcie_retrain_link(dev, false);
++ if (ret) {
+ pci_info(dev, "retraining failed\n");
+- return false;
++ return ret;
+ }
+ }
+
+- return true;
++ return ret;
+ }
+
+ static ktime_t fixup_debug_start(struct pci_dev *dev,
+diff --git a/drivers/perf/alibaba_uncore_drw_pmu.c b/drivers/perf/alibaba_uncore_drw_pmu.c
+index 38a2947ae8130c..c6ff1bc7d336b8 100644
+--- a/drivers/perf/alibaba_uncore_drw_pmu.c
++++ b/drivers/perf/alibaba_uncore_drw_pmu.c
+@@ -400,7 +400,7 @@ static irqreturn_t ali_drw_pmu_isr(int irq_num, void *data)
+ }
+
+ /* clear common counter intr status */
+- clr_status = FIELD_PREP(ALI_DRW_PMCOM_CNT_OV_INTR_MASK, 1);
++ clr_status = FIELD_PREP(ALI_DRW_PMCOM_CNT_OV_INTR_MASK, status);
+ writel(clr_status,
+ drw_pmu->cfg_base + ALI_DRW_PMU_OV_INTR_CLR);
+ }
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index c932d9d355cf0b..48863b31ccfb12 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -24,14 +24,6 @@
+ #define CMN_NI_NODE_ID GENMASK_ULL(31, 16)
+ #define CMN_NI_LOGICAL_ID GENMASK_ULL(47, 32)
+
+-#define CMN_NODEID_DEVID(reg) ((reg) & 3)
+-#define CMN_NODEID_EXT_DEVID(reg) ((reg) & 1)
+-#define CMN_NODEID_PID(reg) (((reg) >> 2) & 1)
+-#define CMN_NODEID_EXT_PID(reg) (((reg) >> 1) & 3)
+-#define CMN_NODEID_1x1_PID(reg) (((reg) >> 2) & 7)
+-#define CMN_NODEID_X(reg, bits) ((reg) >> (3 + (bits)))
+-#define CMN_NODEID_Y(reg, bits) (((reg) >> 3) & ((1U << (bits)) - 1))
+-
+ #define CMN_CHILD_INFO 0x0080
+ #define CMN_CI_CHILD_COUNT GENMASK_ULL(15, 0)
+ #define CMN_CI_CHILD_PTR_OFFSET GENMASK_ULL(31, 16)
+@@ -43,6 +35,9 @@
+ #define CMN_MAX_XPS (CMN_MAX_DIMENSION * CMN_MAX_DIMENSION)
+ #define CMN_MAX_DTMS (CMN_MAX_XPS + (CMN_MAX_DIMENSION - 1) * 4)
+
++/* Currently XPs are the node type we can have most of; others top out at 128 */
++#define CMN_MAX_NODES_PER_EVENT CMN_MAX_XPS
++
+ /* The CFG node has various info besides the discovery tree */
+ #define CMN_CFGM_PERIPH_ID_01 0x0008
+ #define CMN_CFGM_PID0_PART_0 GENMASK_ULL(7, 0)
+@@ -78,7 +73,8 @@
+ /* Technically this is 4 bits wide on DNs, but we only use 2 there anyway */
+ #define CMN__PMU_OCCUP1_ID GENMASK_ULL(34, 32)
+
+-/* HN-Ps are weird... */
++/* Some types are designed to coexist with another device in the same node */
++#define CMN_CCLA_PMU_EVENT_SEL 0x008
+ #define CMN_HNP_PMU_EVENT_SEL 0x008
+
+ /* DTMs live in the PMU space of XP registers */
+@@ -280,8 +276,11 @@ struct arm_cmn_node {
+ u16 id, logid;
+ enum cmn_node_type type;
+
++ /* XP properties really, but replicated to children for convenience */
+ u8 dtm;
+ s8 dtc;
++ u8 portid_bits:4;
++ u8 deviceid_bits:4;
+ /* DN/HN-F/CXHA */
+ struct {
+ u8 val : 4;
+@@ -357,49 +356,33 @@ struct arm_cmn {
+ static int arm_cmn_hp_state;
+
+ struct arm_cmn_nodeid {
+- u8 x;
+- u8 y;
+ u8 port;
+ u8 dev;
+ };
+
+ static int arm_cmn_xyidbits(const struct arm_cmn *cmn)
+ {
+- return fls((cmn->mesh_x - 1) | (cmn->mesh_y - 1) | 2);
++ return fls((cmn->mesh_x - 1) | (cmn->mesh_y - 1));
+ }
+
+-static struct arm_cmn_nodeid arm_cmn_nid(const struct arm_cmn *cmn, u16 id)
++static struct arm_cmn_nodeid arm_cmn_nid(const struct arm_cmn_node *dn)
+ {
+ struct arm_cmn_nodeid nid;
+
+- if (cmn->num_xps == 1) {
+- nid.x = 0;
+- nid.y = 0;
+- nid.port = CMN_NODEID_1x1_PID(id);
+- nid.dev = CMN_NODEID_DEVID(id);
+- } else {
+- int bits = arm_cmn_xyidbits(cmn);
+-
+- nid.x = CMN_NODEID_X(id, bits);
+- nid.y = CMN_NODEID_Y(id, bits);
+- if (cmn->ports_used & 0xc) {
+- nid.port = CMN_NODEID_EXT_PID(id);
+- nid.dev = CMN_NODEID_EXT_DEVID(id);
+- } else {
+- nid.port = CMN_NODEID_PID(id);
+- nid.dev = CMN_NODEID_DEVID(id);
+- }
+- }
++ nid.dev = dn->id & ((1U << dn->deviceid_bits) - 1);
++ nid.port = (dn->id >> dn->deviceid_bits) & ((1U << dn->portid_bits) - 1);
+ return nid;
+ }
+
+ static struct arm_cmn_node *arm_cmn_node_to_xp(const struct arm_cmn *cmn,
+ const struct arm_cmn_node *dn)
+ {
+- struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);
+- int xp_idx = cmn->mesh_x * nid.y + nid.x;
++ int id = dn->id >> (dn->portid_bits + dn->deviceid_bits);
++ int bits = arm_cmn_xyidbits(cmn);
++ int x = id >> bits;
++ int y = id & ((1U << bits) - 1);
+
+- return cmn->xps + xp_idx;
++ return cmn->xps + cmn->mesh_x * y + x;
+ }
+ static struct arm_cmn_node *arm_cmn_node(const struct arm_cmn *cmn,
+ enum cmn_node_type type)
+@@ -485,13 +468,13 @@ static const char *arm_cmn_device_type(u8 type)
+ }
+ }
+
+-static void arm_cmn_show_logid(struct seq_file *s, int x, int y, int p, int d)
++static void arm_cmn_show_logid(struct seq_file *s, const struct arm_cmn_node *xp, int p, int d)
+ {
+ struct arm_cmn *cmn = s->private;
+ struct arm_cmn_node *dn;
++ u16 id = xp->id | d | (p << xp->deviceid_bits);
+
+ for (dn = cmn->dns; dn->type; dn++) {
+- struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);
+ int pad = dn->logid < 10;
+
+ if (dn->type == CMN_TYPE_XP)
+@@ -500,7 +483,7 @@ static void arm_cmn_show_logid(struct seq_file *s, int x, int y, int p, int d)
+ if (dn->type < CMN_TYPE_HNI)
+ continue;
+
+- if (nid.x != x || nid.y != y || nid.port != p || nid.dev != d)
++ if (dn->id != id)
+ continue;
+
+ seq_printf(s, " %*c#%-*d |", pad + 1, ' ', 3 - pad, dn->logid);
+@@ -521,6 +504,7 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ y = cmn->mesh_y;
+ while (y--) {
+ int xp_base = cmn->mesh_x * y;
++ struct arm_cmn_node *xp = cmn->xps + xp_base;
+ u8 port[CMN_MAX_PORTS][CMN_MAX_DIMENSION];
+
+ for (x = 0; x < cmn->mesh_x; x++)
+@@ -528,16 +512,14 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+
+ seq_printf(s, "\n%-2d |", y);
+ for (x = 0; x < cmn->mesh_x; x++) {
+- struct arm_cmn_node *xp = cmn->xps + xp_base + x;
+-
+ for (p = 0; p < CMN_MAX_PORTS; p++)
+- port[p][x] = arm_cmn_device_connect_info(cmn, xp, p);
++ port[p][x] = arm_cmn_device_connect_info(cmn, xp + x, p);
+ seq_printf(s, " XP #%-3d|", xp_base + x);
+ }
+
+ seq_puts(s, "\n |");
+ for (x = 0; x < cmn->mesh_x; x++) {
+- s8 dtc = cmn->xps[xp_base + x].dtc;
++ s8 dtc = xp[x].dtc;
+
+ if (dtc < 0)
+ seq_puts(s, " DTC ?? |");
+@@ -554,10 +536,10 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ seq_puts(s, arm_cmn_device_type(port[p][x]));
+ seq_puts(s, "\n 0|");
+ for (x = 0; x < cmn->mesh_x; x++)
+- arm_cmn_show_logid(s, x, y, p, 0);
++ arm_cmn_show_logid(s, xp + x, p, 0);
+ seq_puts(s, "\n 1|");
+ for (x = 0; x < cmn->mesh_x; x++)
+- arm_cmn_show_logid(s, x, y, p, 1);
++ arm_cmn_show_logid(s, xp + x, p, 1);
+ }
+ seq_puts(s, "\n-----+");
+ }
+@@ -585,7 +567,7 @@ static void arm_cmn_debugfs_init(struct arm_cmn *cmn, int id) {}
+
+ struct arm_cmn_hw_event {
+ struct arm_cmn_node *dn;
+- u64 dtm_idx[4];
++ u64 dtm_idx[DIV_ROUND_UP(CMN_MAX_NODES_PER_EVENT * 2, 64)];
+ s8 dtc_idx[CMN_MAX_DTCS];
+ u8 num_dns;
+ u8 dtm_offset;
+@@ -1815,10 +1797,7 @@ static int arm_cmn_event_init(struct perf_event *event)
+ }
+
+ if (!hw->num_dns) {
+- struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, nodeid);
+-
+- dev_dbg(cmn->dev, "invalid node 0x%x (%d,%d,%d,%d) type 0x%x\n",
+- nodeid, nid.x, nid.y, nid.port, nid.dev, type);
++ dev_dbg(cmn->dev, "invalid node 0x%x type 0x%x\n", nodeid, type);
+ return -EINVAL;
+ }
+
+@@ -1921,7 +1900,7 @@ static int arm_cmn_event_add(struct perf_event *event, int flags)
+ arm_cmn_claim_wp_idx(dtm, event, d, wp_idx, i);
+ writel_relaxed(cfg, dtm->base + CMN_DTM_WPn_CONFIG(wp_idx));
+ } else {
+- struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);
++ struct arm_cmn_nodeid nid = arm_cmn_nid(dn);
+
+ if (cmn->multi_dtm)
+ nid.port %= 2;
+@@ -2168,10 +2147,12 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ continue;
+
+ xp = arm_cmn_node_to_xp(cmn, dn);
++ dn->portid_bits = xp->portid_bits;
++ dn->deviceid_bits = xp->deviceid_bits;
+ dn->dtc = xp->dtc;
+ dn->dtm = xp->dtm;
+ if (cmn->multi_dtm)
+- dn->dtm += arm_cmn_nid(cmn, dn->id).port / 2;
++ dn->dtm += arm_cmn_nid(dn).port / 2;
+
+ if (dn->type == CMN_TYPE_DTC) {
+ int err = arm_cmn_init_dtc(cmn, dn, dtc_idx++);
+@@ -2341,18 +2322,27 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ arm_cmn_init_dtm(dtm++, xp, 0);
+ /*
+ * Keeping track of connected ports will let us filter out
+- * unnecessary XP events easily. We can also reliably infer the
+- * "extra device ports" configuration for the node ID format
+- * from this, since in that case we will see at least one XP
+- * with port 2 connected, for the HN-D.
++ * unnecessary XP events easily, and also infer the per-XP
++ * part of the node ID format.
+ */
+ for (int p = 0; p < CMN_MAX_PORTS; p++)
+ if (arm_cmn_device_connect_info(cmn, xp, p))
+ xp_ports |= BIT(p);
+
+- if (cmn->multi_dtm && (xp_ports & 0xc))
++ if (cmn->num_xps == 1) {
++ xp->portid_bits = 3;
++ xp->deviceid_bits = 2;
++ } else if (xp_ports > 0x3) {
++ xp->portid_bits = 2;
++ xp->deviceid_bits = 1;
++ } else {
++ xp->portid_bits = 1;
++ xp->deviceid_bits = 2;
++ }
++
++ if (cmn->multi_dtm && (xp_ports > 0x3))
+ arm_cmn_init_dtm(dtm++, xp, 1);
+- if (cmn->multi_dtm && (xp_ports & 0x30))
++ if (cmn->multi_dtm && (xp_ports > 0xf))
+ arm_cmn_init_dtm(dtm++, xp, 2);
+
+ cmn->ports_used |= xp_ports;
+@@ -2407,10 +2397,13 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ case CMN_TYPE_CXHA:
+ case CMN_TYPE_CCRA:
+ case CMN_TYPE_CCHA:
+- case CMN_TYPE_CCLA:
+ case CMN_TYPE_HNS:
+ dn++;
+ break;
++ case CMN_TYPE_CCLA:
++ dn->pmu_base += CMN_CCLA_PMU_EVENT_SEL;
++ dn++;
++ break;
+ /* Nothing to see here */
+ case CMN_TYPE_MPAM_S:
+ case CMN_TYPE_MPAM_NS:
+@@ -2428,7 +2421,7 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ case CMN_TYPE_HNP:
+ case CMN_TYPE_CCLA_RNI:
+ dn[1] = dn[0];
+- dn[0].pmu_base += CMN_HNP_PMU_EVENT_SEL;
++ dn[0].pmu_base += CMN_CCLA_PMU_EVENT_SEL;
+ dn[1].type = arm_cmn_subtype(dn->type);
+ dn += 2;
+ break;
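The refactor above drops the global CMN_NODEID_* macros in favour of per-XP portid_bits/deviceid_bits fields: a node ID packs, from the low end, a device index, a port index, and the owning XP's x/y grid position. A standalone sketch of that decode; the field widths passed in main() are example values, not taken from real hardware:

#include <stdio.h>

struct nid { unsigned int x, y, port, dev; };

/* Decode a node ID using the per-XP field widths, as in the rework. */
static struct nid decode(unsigned int id, unsigned int portid_bits,
			 unsigned int deviceid_bits, unsigned int xy_bits)
{
	struct nid n;
	unsigned int xy;

	n.dev = id & ((1U << deviceid_bits) - 1);
	n.port = (id >> deviceid_bits) & ((1U << portid_bits) - 1);
	xy = id >> (portid_bits + deviceid_bits);
	n.x = xy >> xy_bits;
	n.y = xy & ((1U << xy_bits) - 1);
	return n;
}

int main(void)
{
	/* e.g. a mesh using 2-bit port and 1-bit device fields */
	struct nid n = decode(0x2b, 2, 1, 2);

	printf("x=%u y=%u port=%u dev=%u\n", n.x, n.y, n.port, n.dev);
	return 0;
}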
+diff --git a/drivers/perf/dwc_pcie_pmu.c b/drivers/perf/dwc_pcie_pmu.c
+index c5e328f2384194..f205ecad2e4c06 100644
+--- a/drivers/perf/dwc_pcie_pmu.c
++++ b/drivers/perf/dwc_pcie_pmu.c
+@@ -556,10 +556,10 @@ static int dwc_pcie_register_dev(struct pci_dev *pdev)
+ {
+ struct platform_device *plat_dev;
+ struct dwc_pcie_dev_info *dev_info;
+- u32 bdf;
++ u32 sbdf;
+
+- bdf = PCI_DEVID(pdev->bus->number, pdev->devfn);
+- plat_dev = platform_device_register_data(NULL, "dwc_pcie_pmu", bdf,
++ sbdf = (pci_domain_nr(pdev->bus) << 16) | PCI_DEVID(pdev->bus->number, pdev->devfn);
++ plat_dev = platform_device_register_data(NULL, "dwc_pcie_pmu", sbdf,
+ pdev, sizeof(*pdev));
+
+ if (IS_ERR(plat_dev))
+@@ -611,15 +611,15 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ struct pci_dev *pdev = plat_dev->dev.platform_data;
+ struct dwc_pcie_pmu *pcie_pmu;
+ char *name;
+- u32 bdf, val;
++ u32 sbdf, val;
+ u16 vsec;
+ int ret;
+
+ vsec = pci_find_vsec_capability(pdev, pdev->vendor,
+ DWC_PCIE_VSEC_RAS_DES_ID);
+ pci_read_config_dword(pdev, vsec + PCI_VNDR_HEADER, &val);
+- bdf = PCI_DEVID(pdev->bus->number, pdev->devfn);
+- name = devm_kasprintf(&plat_dev->dev, GFP_KERNEL, "dwc_rootport_%x", bdf);
++ sbdf = plat_dev->id;
++ name = devm_kasprintf(&plat_dev->dev, GFP_KERNEL, "dwc_rootport_%x", sbdf);
+ if (!name)
+ return -ENOMEM;
+
+@@ -650,7 +650,7 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ ret = cpuhp_state_add_instance(dwc_pcie_pmu_hp_state,
+ &pcie_pmu->cpuhp_node);
+ if (ret) {
+- pci_err(pdev, "Error %d registering hotplug @%x\n", ret, bdf);
++ pci_err(pdev, "Error %d registering hotplug @%x\n", ret, sbdf);
+ return ret;
+ }
+
+@@ -663,7 +663,7 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+
+ ret = perf_pmu_register(&pcie_pmu->pmu, name, -1);
+ if (ret) {
+- pci_err(pdev, "Error %d registering PMU @%x\n", ret, bdf);
++ pci_err(pdev, "Error %d registering PMU @%x\n", ret, sbdf);
+ return ret;
+ }
+ ret = devm_add_action_or_reset(&plat_dev->dev, dwc_pcie_unregister_pmu,
+@@ -726,7 +726,6 @@ static struct platform_driver dwc_pcie_pmu_driver = {
+ static int __init dwc_pcie_pmu_init(void)
+ {
+ struct pci_dev *pdev = NULL;
+- bool found = false;
+ int ret;
+
+ for_each_pci_dev(pdev) {
+@@ -738,11 +737,7 @@ static int __init dwc_pcie_pmu_init(void)
+ pci_dev_put(pdev);
+ return ret;
+ }
+-
+- found = true;
+ }
+- if (!found)
+- return -ENODEV;
+
+ ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
+ "perf/dwc_pcie_pmu:online",
+diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+index f06027574a241a..f7d6c59d993016 100644
+--- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
++++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+@@ -208,7 +208,7 @@ static void hisi_pcie_pmu_writeq(struct hisi_pcie_pmu *pcie_pmu, u32 reg_offset,
+ static u64 hisi_pcie_pmu_get_event_ctrl_val(struct perf_event *event)
+ {
+ u64 port, trig_len, thr_len, len_mode;
+- u64 reg = HISI_PCIE_INIT_SET;
++ u64 reg = 0;
+
+ /* Config HISI_PCIE_EVENT_CTRL according to event. */
+ reg |= FIELD_PREP(HISI_PCIE_EVENT_M, hisi_pcie_get_real_event(event));
+@@ -452,10 +452,24 @@ static void hisi_pcie_pmu_set_period(struct perf_event *event)
+ struct hisi_pcie_pmu *pcie_pmu = to_pcie_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+ int idx = hwc->idx;
++ u64 orig_cnt, cnt;
++
++ orig_cnt = hisi_pcie_pmu_read_counter(event);
+
+ local64_set(&hwc->prev_count, HISI_PCIE_INIT_VAL);
+ hisi_pcie_pmu_writeq(pcie_pmu, HISI_PCIE_CNT, idx, HISI_PCIE_INIT_VAL);
+ hisi_pcie_pmu_writeq(pcie_pmu, HISI_PCIE_EXT_CNT, idx, HISI_PCIE_INIT_VAL);
++
++ /*
++ * The counter may be unwritable if the target event is unsupported.
++ * Check this by comparing the counts before and after setting the
++ * period. If the counts stay unchanged after setting the period,
++ * update hwc->prev_count to match; otherwise the final counts the
++ * user gets may be totally wrong.
++ */
++ cnt = hisi_pcie_pmu_read_counter(event);
++ if (orig_cnt == cnt)
++ local64_set(&hwc->prev_count, cnt);
+ }
+
+ static void hisi_pcie_pmu_enable_counter(struct hisi_pcie_pmu *pcie_pmu, struct hw_perf_event *hwc)
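The read-back added to hisi_pcie_pmu_set_period() is a general defence for counters that silently ignore writes when an event is unsupported: seed the counter, read it back, and if it did not budge, rebase prev_count on the real value so later deltas are not skewed by a bogus baseline. A user-space sketch of the idea; the simulated register is illustrative:

#include <stdint.h>
#include <stdio.h>

#define INIT_VAL 0x100ULL

/* Simulated counter register: the writable flag models hardware that
 * silently drops writes for unsupported events. */
struct counter {
	uint64_t val;
	int writable;
};

static void cnt_write(struct counter *c, uint64_t v)
{
	if (c->writable)
		c->val = v;
}

/* Mirror of the set_period logic: assume the write took effect, then
 * read back and repair the baseline if the counter did not change. */
static uint64_t set_period(struct counter *c)
{
	uint64_t orig = c->val, now, prev = INIT_VAL;

	cnt_write(c, INIT_VAL);
	now = c->val;
	if (now == orig)	/* write was ignored: use the real value */
		prev = now;
	return prev;
}

int main(void)
{
	struct counter ok = { .val = 42, .writable = 1 };
	struct counter ro = { .val = 42, .writable = 0 };

	printf("writable:  prev=%#llx\n", (unsigned long long)set_period(&ok));
	printf("read-only: prev=%#llx\n", (unsigned long long)set_period(&ro));
	return 0;
}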
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 946c01210ac8c0..3bd9b62b23dcc2 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -15,6 +15,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/phy/phy.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/rational.h>
+ #include <linux/regmap.h>
+ #include <linux/reset.h>
+diff --git a/drivers/pinctrl/mvebu/pinctrl-dove.c b/drivers/pinctrl/mvebu/pinctrl-dove.c
+index 1947da73e51210..dce601d993728c 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-dove.c
++++ b/drivers/pinctrl/mvebu/pinctrl-dove.c
+@@ -767,7 +767,7 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ struct resource fb_res;
+ struct mvebu_mpp_ctrl_data *mpp_data;
+ void __iomem *base;
+- int i;
++ int i, ret;
+
+ pdev->dev.platform_data = (void *)device_get_match_data(&pdev->dev);
+
+@@ -783,13 +783,17 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ clk_prepare_enable(clk);
+
+ base = devm_platform_get_and_ioremap_resource(pdev, 0, &mpp_res);
+- if (IS_ERR(base))
+- return PTR_ERR(base);
++ if (IS_ERR(base)) {
++ ret = PTR_ERR(base);
++ goto err_probe;
++ }
+
+ mpp_data = devm_kcalloc(&pdev->dev, dove_pinctrl_info.ncontrols,
+ sizeof(*mpp_data), GFP_KERNEL);
+- if (!mpp_data)
+- return -ENOMEM;
++ if (!mpp_data) {
++ ret = -ENOMEM;
++ goto err_probe;
++ }
+
+ dove_pinctrl_info.control_data = mpp_data;
+ for (i = 0; i < ARRAY_SIZE(dove_mpp_controls); i++)
+@@ -808,8 +812,10 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ }
+
+ mpp4_base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(mpp4_base))
+- return PTR_ERR(mpp4_base);
++ if (IS_ERR(mpp4_base)) {
++ ret = PTR_ERR(mpp4_base);
++ goto err_probe;
++ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+ if (!res) {
+@@ -820,8 +826,10 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ }
+
+ pmu_base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(pmu_base))
+- return PTR_ERR(pmu_base);
++ if (IS_ERR(pmu_base)) {
++ ret = PTR_ERR(pmu_base);
++ goto err_probe;
++ }
+
+ gconfmap = syscon_regmap_lookup_by_compatible("marvell,dove-global-config");
+ if (IS_ERR(gconfmap)) {
+@@ -831,12 +839,17 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ adjust_resource(&fb_res,
+ (mpp_res->start & INT_REGS_MASK) + GC_REGS_OFFS, 0x14);
+ gc_base = devm_ioremap_resource(&pdev->dev, &fb_res);
+- if (IS_ERR(gc_base))
+- return PTR_ERR(gc_base);
++ if (IS_ERR(gc_base)) {
++ ret = PTR_ERR(gc_base);
++ goto err_probe;
++ }
++
+ gconfmap = devm_regmap_init_mmio(&pdev->dev,
+ gc_base, &gc_regmap_config);
+- if (IS_ERR(gconfmap))
+- return PTR_ERR(gconfmap);
++ if (IS_ERR(gconfmap)) {
++ ret = PTR_ERR(gconfmap);
++ goto err_probe;
++ }
+ }
+
+ /* Warn on any missing DT resource */
+@@ -844,6 +857,9 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ dev_warn(&pdev->dev, FW_BUG "Missing pinctrl regs in DTB. Please update your firmware.\n");
+
+ return mvebu_pinctrl_probe(pdev);
++err_probe:
++ clk_disable_unprepare(clk);
++ return ret;
+ }
+
+ static struct platform_driver dove_pinctrl_driver = {
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 4da3c3f422b691..2ec599e383e4b2 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1913,7 +1913,8 @@ static int pcs_probe(struct platform_device *pdev)
+
+ dev_info(pcs->dev, "%i pins, size %u\n", pcs->desc.npins, pcs->size);
+
+- if (pinctrl_enable(pcs->pctl))
++ ret = pinctrl_enable(pcs->pctl);
++ if (ret)
+ goto free;
+
+ return 0;
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 632180570b7048..3ef20f2fa88e47 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -1261,7 +1261,9 @@ static int rzg2l_pinctrl_pinconf_get(struct pinctrl_dev *pctldev,
+ break;
+
+ case PIN_CONFIG_OUTPUT_ENABLE:
+- if (!pctrl->data->oen_read || !(cfg & PIN_CFG_OEN))
++ if (!(cfg & PIN_CFG_OEN))
++ return -EINVAL;
++ if (!pctrl->data->oen_read)
+ return -EOPNOTSUPP;
+ arg = pctrl->data->oen_read(pctrl, _pin);
+ if (!arg)
+@@ -1402,7 +1404,9 @@ static int rzg2l_pinctrl_pinconf_set(struct pinctrl_dev *pctldev,
+
+ case PIN_CONFIG_OUTPUT_ENABLE:
+ arg = pinconf_to_config_argument(_configs[i]);
+- if (!pctrl->data->oen_write || !(cfg & PIN_CFG_OEN))
++ if (!(cfg & PIN_CFG_OEN))
++ return -EINVAL;
++ if (!pctrl->data->oen_write)
+ return -EOPNOTSUPP;
+ ret = pctrl->data->oen_write(pctrl, _pin, !!arg);
+ if (ret)
+diff --git a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+index f5e5a23d222600..451801acdc4038 100644
+--- a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
++++ b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+@@ -273,6 +273,22 @@ static int ti_iodelay_pinconf_set(struct ti_iodelay_device *iod,
+ return r;
+ }
+
++/**
++ * ti_iodelay_pinconf_deinit_dev() - deinit the iodelay device
++ * @data: IODelay device
++ *
++ * Deinitialize the IODelay device (basically just lock the region back up).
++ */
++static void ti_iodelay_pinconf_deinit_dev(void *data)
++{
++ struct ti_iodelay_device *iod = data;
++ const struct ti_iodelay_reg_data *reg = iod->reg_data;
++
++ /* lock the iodelay region back again */
++ regmap_update_bits(iod->regmap, reg->reg_global_lock_offset,
++ reg->global_lock_mask, reg->global_lock_val);
++}
++
+ /**
+ * ti_iodelay_pinconf_init_dev() - Initialize IODelay device
+ * @iod: iodelay device
+@@ -295,6 +311,11 @@ static int ti_iodelay_pinconf_init_dev(struct ti_iodelay_device *iod)
+ if (r)
+ return r;
+
++ r = devm_add_action_or_reset(iod->dev, ti_iodelay_pinconf_deinit_dev,
++ iod);
++ if (r)
++ return r;
++
+ /* Read up Recalibration sequence done by bootloader */
+ r = regmap_read(iod->regmap, reg->reg_refclk_offset, &val);
+ if (r)
+@@ -353,21 +374,6 @@ static int ti_iodelay_pinconf_init_dev(struct ti_iodelay_device *iod)
+ return 0;
+ }
+
+-/**
+- * ti_iodelay_pinconf_deinit_dev() - deinit the iodelay device
+- * @iod: IODelay device
+- *
+- * Deinitialize the IODelay device (basically just lock the region back up.
+- */
+-static void ti_iodelay_pinconf_deinit_dev(struct ti_iodelay_device *iod)
+-{
+- const struct ti_iodelay_reg_data *reg = iod->reg_data;
+-
+- /* lock the iodelay region back again */
+- regmap_update_bits(iod->regmap, reg->reg_global_lock_offset,
+- reg->global_lock_mask, reg->global_lock_val);
+-}
+-
+ /**
+ * ti_iodelay_get_pingroup() - Find the group mapped by a group selector
+ * @iod: iodelay device
+@@ -877,27 +883,11 @@ static int ti_iodelay_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- platform_set_drvdata(pdev, iod);
+-
+ return pinctrl_enable(iod->pctl);
+ }
+
+-/**
+- * ti_iodelay_remove() - standard remove
+- * @pdev: platform device
+- */
+-static void ti_iodelay_remove(struct platform_device *pdev)
+-{
+- struct ti_iodelay_device *iod = platform_get_drvdata(pdev);
+-
+- ti_iodelay_pinconf_deinit_dev(iod);
+-
+- /* Expect other allocations to be freed by devm */
+-}
+-
+ static struct platform_driver ti_iodelay_driver = {
+ .probe = ti_iodelay_probe,
+- .remove_new = ti_iodelay_remove,
+ .driver = {
+ .name = DRIVER_NAME,
+ .of_match_table = ti_iodelay_of_match,
+diff --git a/drivers/platform/cznic/turris-omnia-mcu-trng.c b/drivers/platform/cznic/turris-omnia-mcu-trng.c
+index ad953fb3c37afd..9a1d9292dc9ad3 100644
+--- a/drivers/platform/cznic/turris-omnia-mcu-trng.c
++++ b/drivers/platform/cznic/turris-omnia-mcu-trng.c
+@@ -70,8 +70,8 @@ int omnia_mcu_register_trng(struct omnia_mcu *mcu)
+
+ irq_idx = omnia_int_to_gpio_idx[__bf_shf(OMNIA_INT_TRNG)];
+ irq = gpiod_to_irq(gpio_device_get_desc(mcu->gc.gpiodev, irq_idx));
+- if (!irq)
+- return dev_err_probe(dev, -ENXIO, "Cannot get TRNG IRQ\n");
++ if (irq < 0)
++ return dev_err_probe(dev, irq, "Cannot get TRNG IRQ\n");
+
+ /*
+ * If someone else cleared the TRNG interrupt but did not read the
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 98ec30fce9fdd3..b58df617d4fda7 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -419,13 +419,14 @@ static ssize_t camera_power_show(struct device *dev,
+ char *buf)
+ {
+ struct ideapad_private *priv = dev_get_drvdata(dev);
+- unsigned long result;
++ unsigned long result = 0;
+ int err;
+
+- scoped_guard(mutex, &priv->vpc_mutex)
++ scoped_guard(mutex, &priv->vpc_mutex) {
+ err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result);
+- if (err)
+- return err;
++ if (err)
++ return err;
++ }
+
+ return sysfs_emit(buf, "%d\n", !!result);
+ }
+@@ -442,10 +443,11 @@ static ssize_t camera_power_store(struct device *dev,
+ if (err)
+ return err;
+
+- scoped_guard(mutex, &priv->vpc_mutex)
++ scoped_guard(mutex, &priv->vpc_mutex) {
+ err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state);
+- if (err)
+- return err;
++ if (err)
++ return err;
++ }
+
+ return count;
+ }
+@@ -493,13 +495,14 @@ static ssize_t fan_mode_show(struct device *dev,
+ char *buf)
+ {
+ struct ideapad_private *priv = dev_get_drvdata(dev);
+- unsigned long result;
++ unsigned long result = 0;
+ int err;
+
+- scoped_guard(mutex, &priv->vpc_mutex)
++ scoped_guard(mutex, &priv->vpc_mutex) {
+ err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result);
+- if (err)
+- return err;
++ if (err)
++ return err;
++ }
+
+ return sysfs_emit(buf, "%lu\n", result);
+ }
+@@ -519,10 +522,11 @@ static ssize_t fan_mode_store(struct device *dev,
+ if (state > 4 || state == 3)
+ return -EINVAL;
+
+- scoped_guard(mutex, &priv->vpc_mutex)
++ scoped_guard(mutex, &priv->vpc_mutex) {
+ err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state);
+- if (err)
+- return err;
++ if (err)
++ return err;
++ }
+
+ return count;
+ }
+@@ -602,13 +606,14 @@ static ssize_t touchpad_show(struct device *dev,
+ char *buf)
+ {
+ struct ideapad_private *priv = dev_get_drvdata(dev);
+- unsigned long result;
++ unsigned long result = 0;
+ int err;
+
+- scoped_guard(mutex, &priv->vpc_mutex)
++ scoped_guard(mutex, &priv->vpc_mutex) {
+ err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result);
+- if (err)
+- return err;
++ if (err)
++ return err;
++ }
+
+ priv->r_touchpad_val = result;
+
+@@ -627,10 +632,11 @@ static ssize_t touchpad_store(struct device *dev,
+ if (err)
+ return err;
+
+- scoped_guard(mutex, &priv->vpc_mutex)
++ scoped_guard(mutex, &priv->vpc_mutex) {
+ err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state);
+- if (err)
+- return err;
++ if (err)
++ return err;
++ }
+
+ priv->r_touchpad_val = state;
+
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 7a61aa88c0614a..acdc3e7b2eae2a 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -3190,7 +3190,7 @@ static void mode_status_str(struct seq_file *s, struct device *dev)
+
+ gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+
+- seq_printf(s, "%20s", gpd_data->hw_mode ? "HW" : "SW");
++ seq_printf(s, "%9s", gpd_data->hw_mode ? "HW" : "SW");
+ }
+
+ static void perf_status_str(struct seq_file *s, struct device *dev)
+@@ -3198,7 +3198,7 @@ static void perf_status_str(struct seq_file *s, struct device *dev)
+ struct generic_pm_domain_data *gpd_data;
+
+ gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+- seq_put_decimal_ull(s, "", gpd_data->performance_state);
++ seq_printf(s, "%-10u ", gpd_data->performance_state);
+ }
+
+ static int genpd_summary_one(struct seq_file *s,
+@@ -3226,7 +3226,7 @@ static int genpd_summary_one(struct seq_file *s,
+ else
+ snprintf(state, sizeof(state), "%s",
+ status_lookup[genpd->status]);
+- seq_printf(s, "%-30s %-50s %u", genpd->name, state, genpd->performance_state);
++ seq_printf(s, "%-30s %-49s %u", genpd->name, state, genpd->performance_state);
+
+ /*
+ * Modifications on the list require holding locks on both
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index 6ac5c80cfda214..7520b599eb3d17 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -303,11 +303,11 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ val->intval = reg & AXP209_FG_PERCENT;
+ break;
+
+- case POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN:
++ case POWER_SUPPLY_PROP_VOLTAGE_MAX:
+ return axp20x_batt->data->get_max_voltage(axp20x_batt,
+ &val->intval);
+
+- case POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN:
++ case POWER_SUPPLY_PROP_VOLTAGE_MIN:
+	ret = regmap_read(axp20x_batt->regmap, AXP20X_V_OFF, &reg);
+ if (ret)
+ return ret;
+@@ -455,10 +455,10 @@ static int axp20x_battery_set_prop(struct power_supply *psy,
+ struct axp20x_batt_ps *axp20x_batt = power_supply_get_drvdata(psy);
+
+ switch (psp) {
+- case POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN:
++ case POWER_SUPPLY_PROP_VOLTAGE_MIN:
+ return axp20x_set_voltage_min_design(axp20x_batt, val->intval);
+
+- case POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN:
++ case POWER_SUPPLY_PROP_VOLTAGE_MAX:
+ return axp20x_batt->data->set_max_voltage(axp20x_batt, val->intval);
+
+ case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
+@@ -493,8 +493,8 @@ static enum power_supply_property axp20x_battery_props[] = {
+ POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT,
+ POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX,
+ POWER_SUPPLY_PROP_HEALTH,
+- POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN,
+- POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN,
++ POWER_SUPPLY_PROP_VOLTAGE_MAX,
++ POWER_SUPPLY_PROP_VOLTAGE_MIN,
+ POWER_SUPPLY_PROP_CAPACITY,
+ };
+
+@@ -502,8 +502,8 @@ static int axp20x_battery_prop_writeable(struct power_supply *psy,
+ enum power_supply_property psp)
+ {
+ return psp == POWER_SUPPLY_PROP_STATUS ||
+- psp == POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN ||
+- psp == POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN ||
++ psp == POWER_SUPPLY_PROP_VOLTAGE_MIN ||
++ psp == POWER_SUPPLY_PROP_VOLTAGE_MAX ||
+ psp == POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT ||
+ psp == POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX;
+ }
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index e7d37e422c3f6e..496c3e1f2ee6d6 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -853,7 +853,10 @@ static void max17042_set_soc_threshold(struct max17042_chip *chip, u16 off)
+ /* program interrupt thresholds such that we should
+	 * get an interrupt for every 'off' percent change in the SOC
+ */
+- regmap_read(map, MAX17042_RepSOC, &soc);
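++	/* Without current sensing, the fuel gauge reports SOC in VFSOC rather than RepSOC */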
++ if (chip->pdata->enable_current_sense)
++ regmap_read(map, MAX17042_RepSOC, &soc);
++ else
++ regmap_read(map, MAX17042_VFSOC, &soc);
+ soc >>= 8;
+ soc_tr = (soc + off) << 8;
+ if (off < soc)
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 7c0cea2c828d99..b6f682dac42e77 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -740,7 +740,7 @@ static struct rapl_primitive_info *get_rpi(struct rapl_package *rp, int prim)
+ {
+ struct rapl_primitive_info *rpi = rp->priv->rpi;
+
+- if (prim < 0 || prim > NR_RAPL_PRIMITIVES || !rpi)
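++	/* prim indexes the rpi array, so it must be strictly less than NR_RAPL_PRIMITIVES */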
++ if (prim < 0 || prim >= NR_RAPL_PRIMITIVES || !rpi)
+ return NULL;
+
+ return &rpi[prim];
+diff --git a/drivers/pps/clients/pps_parport.c b/drivers/pps/clients/pps_parport.c
+index 63d03a0df5cce9..abaffb4e1c1ce9 100644
+--- a/drivers/pps/clients/pps_parport.c
++++ b/drivers/pps/clients/pps_parport.c
+@@ -149,6 +149,9 @@ static void parport_attach(struct parport *port)
+ }
+
+ index = ida_alloc(&pps_client_index, GFP_KERNEL);
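++	/* ida_alloc() returns a negative errno on failure */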
++ if (index < 0)
++ goto err_free_device;
++
+ memset(&pps_client_cb, 0, sizeof(pps_client_cb));
+ pps_client_cb.private = device;
+ pps_client_cb.irq_func = parport_irq;
+@@ -159,7 +162,7 @@ static void parport_attach(struct parport *port)
+ index);
+ if (!device->pardev) {
+ pr_err("couldn't register with %s\n", port->name);
+- goto err_free;
++ goto err_free_ida;
+ }
+
+ if (parport_claim_or_block(device->pardev) < 0) {
+@@ -187,8 +190,9 @@ static void parport_attach(struct parport *port)
+ parport_release(device->pardev);
+ err_unregister_dev:
+ parport_unregister_device(device->pardev);
+-err_free:
++err_free_ida:
+ ida_free(&pps_client_index, index);
++err_free_device:
+ kfree(device);
+ }
+
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 03afc160fc72ce..86b680adbf01c7 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -777,7 +777,7 @@ int of_regulator_bulk_get_all(struct device *dev, struct device_node *np,
+ name[i] = '\0';
+ tmp = regulator_get(dev, name);
+ if (IS_ERR(tmp)) {
+- ret = -EINVAL;
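++			/* Preserve the actual error (e.g. -EPROBE_DEFER) from regulator_get() */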
++ ret = PTR_ERR(tmp);
+ goto error;
+ }
+ (*consumers)[n].consumer = tmp;
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 144c8e9a642e8b..448b9a5438e0b8 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -210,7 +210,7 @@ static const struct imx_rproc_att imx_rproc_att_imx8mq[] = {
+ /* QSPI Code - alias */
+ { 0x08000000, 0x08000000, 0x08000000, 0 },
+ /* DDR (Code) - alias */
+- { 0x10000000, 0x80000000, 0x0FFE0000, 0 },
++ { 0x10000000, 0x40000000, 0x0FFE0000, 0 },
+ /* TCML */
+ { 0x1FFE0000, 0x007E0000, 0x00020000, ATT_OWN | ATT_IOMEM},
+ /* TCMU */
+@@ -1076,6 +1076,8 @@ static int imx_rproc_probe(struct platform_device *pdev)
+ return -ENOMEM;
+ }
+
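++	/* Initialize the work item before the mailbox, whose callback may queue it */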
++ INIT_WORK(&priv->rproc_work, imx_rproc_vq_work);
++
+ ret = imx_rproc_xtr_mbox_init(rproc);
+ if (ret)
+ goto err_put_wkq;
+@@ -1094,8 +1096,6 @@ static int imx_rproc_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_put_scu;
+
+- INIT_WORK(&priv->rproc_work, imx_rproc_vq_work);
+-
+ if (rproc->state != RPROC_DETACHED)
+ rproc->auto_boot = of_property_read_bool(np, "fsl,auto-boot");
+
+diff --git a/drivers/reset/reset-berlin.c b/drivers/reset/reset-berlin.c
+index 2537ec05eceefd..578fe867080ce0 100644
+--- a/drivers/reset/reset-berlin.c
++++ b/drivers/reset/reset-berlin.c
+@@ -68,13 +68,14 @@ static int berlin_reset_xlate(struct reset_controller_dev *rcdev,
+
+ static int berlin2_reset_probe(struct platform_device *pdev)
+ {
+- struct device_node *parent_np = of_get_parent(pdev->dev.of_node);
++ struct device_node *parent_np;
+ struct berlin_reset_priv *priv;
+
+ priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
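++	/* Take the parent node reference only after allocation has succeeded, so it cannot leak on -ENOMEM */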
++ parent_np = of_get_parent(pdev->dev.of_node);
+ priv->regmap = syscon_node_to_regmap(parent_np);
+ of_node_put(parent_np);
+ if (IS_ERR(priv->regmap))
+diff --git a/drivers/reset/reset-k210.c b/drivers/reset/reset-k210.c
+index b62a2fd44e4e42..e77e4cca377dca 100644
+--- a/drivers/reset/reset-k210.c
++++ b/drivers/reset/reset-k210.c
+@@ -90,7 +90,7 @@ static const struct reset_control_ops k210_rst_ops = {
+ static int k210_rst_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+- struct device_node *parent_np = of_get_parent(dev->of_node);
++ struct device_node *parent_np;
+ struct k210_rst *ksr;
+
+ dev_info(dev, "K210 reset controller\n");
+@@ -99,6 +99,7 @@ static int k210_rst_probe(struct platform_device *pdev)
+ if (!ksr)
+ return -ENOMEM;
+
++ parent_np = of_get_parent(dev->of_node);
+ ksr->map = syscon_node_to_regmap(parent_np);
+ of_node_put(parent_np);
+ if (IS_ERR(ksr->map))
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index f9f682f194154d..3ba4e1c5e15dfe 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -107,6 +107,7 @@ debug_info_t *ap_dbf_info;
+ static bool ap_scan_bus(void);
+ static bool ap_scan_bus_result; /* result of last ap_scan_bus() */
+ static DEFINE_MUTEX(ap_scan_bus_mutex); /* mutex ap_scan_bus() invocations */
++static struct task_struct *ap_scan_bus_task; /* thread holding the scan mutex */
+ static atomic64_t ap_scan_bus_count; /* counter ap_scan_bus() invocations */
+ static int ap_scan_bus_time = AP_CONFIG_TIME;
+ static struct timer_list ap_scan_bus_timer;
+@@ -1006,11 +1007,25 @@ bool ap_bus_force_rescan(void)
+ if (scan_counter <= 0)
+ goto out;
+
++ /*
++	 * There is one unlikely but nevertheless valid scenario where the
++	 * thread holding the mutex tries to send some crypto load while
++	 * all cards are offline, so a rescan is triggered, which causes
++	 * a recursive call of ap_bus_force_rescan(). Simply returning when
++	 * the mutex is already held by this thread solves this.
++ */
++ if (mutex_is_locked(&ap_scan_bus_mutex)) {
++ if (ap_scan_bus_task == current)
++ goto out;
++ }
++
+ /* Try to acquire the AP scan bus mutex */
+ if (mutex_trylock(&ap_scan_bus_mutex)) {
+ /* mutex acquired, run the AP bus scan */
++ ap_scan_bus_task = current;
+ ap_scan_bus_result = ap_scan_bus();
+ rc = ap_scan_bus_result;
++ ap_scan_bus_task = NULL;
+ mutex_unlock(&ap_scan_bus_mutex);
+ goto out;
+ }
+@@ -2284,7 +2299,9 @@ static void ap_scan_bus_wq_callback(struct work_struct *unused)
+ * system_long_wq which invokes this function here again.
+ */
+ if (mutex_trylock(&ap_scan_bus_mutex)) {
++ ap_scan_bus_task = current;
+ ap_scan_bus_result = ap_scan_bus();
++ ap_scan_bus_task = NULL;
+ mutex_unlock(&ap_scan_bus_mutex);
+ }
+ }
+diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
+index cea3a79d538e4b..00e245173320c3 100644
+--- a/drivers/scsi/NCR5380.c
++++ b/drivers/scsi/NCR5380.c
+@@ -1485,6 +1485,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
+ unsigned char **data)
+ {
+ struct NCR5380_hostdata *hostdata = shost_priv(instance);
++ struct NCR5380_cmd *ncmd = NCR5380_to_ncmd(hostdata->connected);
+ int c = *count;
+ unsigned char p = *phase;
+ unsigned char *d = *data;
+@@ -1496,7 +1497,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
+ return -1;
+ }
+
+- NCR5380_to_ncmd(hostdata->connected)->phase = p;
++ ncmd->phase = p;
+
+ if (p & SR_IO) {
+ if (hostdata->read_overruns)
+@@ -1608,45 +1609,44 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
+ * request.
+ */
+
+- if (hostdata->flags & FLAG_DMA_FIXUP) {
+- if (p & SR_IO) {
+- /*
+- * The workaround was to transfer fewer bytes than we
+- * intended to with the pseudo-DMA read function, wait for
+- * the chip to latch the last byte, read it, and then disable
+- * pseudo-DMA mode.
+- *
+- * After REQ is asserted, the NCR5380 asserts DRQ and ACK.
+- * REQ is deasserted when ACK is asserted, and not reasserted
+- * until ACK goes false. Since the NCR5380 won't lower ACK
+- * until DACK is asserted, which won't happen unless we twiddle
+- * the DMA port or we take the NCR5380 out of DMA mode, we
+- * can guarantee that we won't handshake another extra
+- * byte.
+- */
+-
+- if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+- BASR_DRQ, BASR_DRQ, 0) < 0) {
+- result = -1;
+- shost_printk(KERN_ERR, instance, "PDMA read: DRQ timeout\n");
+- }
+- if (NCR5380_poll_politely(hostdata, STATUS_REG,
+- SR_REQ, 0, 0) < 0) {
+- result = -1;
+- shost_printk(KERN_ERR, instance, "PDMA read: !REQ timeout\n");
+- }
+- d[*count - 1] = NCR5380_read(INPUT_DATA_REG);
+- } else {
+- /*
+- * Wait for the last byte to be sent. If REQ is being asserted for
+- * the byte we're interested, we'll ACK it and it will go false.
+- */
+- if (NCR5380_poll_politely2(hostdata,
+- BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
+- BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, 0) < 0) {
+- result = -1;
+- shost_printk(KERN_ERR, instance, "PDMA write: DRQ and phase timeout\n");
++ if ((hostdata->flags & FLAG_DMA_FIXUP) &&
++ (NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) {
++ /*
++ * The workaround was to transfer fewer bytes than we
++ * intended to with the pseudo-DMA receive function, wait for
++ * the chip to latch the last byte, read it, and then disable
++ * DMA mode.
++ *
++ * After REQ is asserted, the NCR5380 asserts DRQ and ACK.
++ * REQ is deasserted when ACK is asserted, and not reasserted
++ * until ACK goes false. Since the NCR5380 won't lower ACK
++ * until DACK is asserted, which won't happen unless we twiddle
++ * the DMA port or we take the NCR5380 out of DMA mode, we
++ * can guarantee that we won't handshake another extra
++ * byte.
++ *
++	 * If sending, wait for the last byte to be sent. If REQ is
++	 * being asserted for the byte we're interested in, we'll ACK
++	 * it and it will go false.
++ */
++ if (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
++ BASR_DRQ, BASR_DRQ, 0)) {
++ if ((p & SR_IO) &&
++ (NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) {
++ if (!NCR5380_poll_politely(hostdata, STATUS_REG,
++ SR_REQ, 0, 0)) {
++ d[c] = NCR5380_read(INPUT_DATA_REG);
++ --ncmd->this_residual;
++ } else {
++ result = -1;
++ scmd_printk(KERN_ERR, hostdata->connected,
++ "PDMA fixup: !REQ timeout\n");
++ }
+ }
++ } else if (NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH) {
++ result = -1;
++ scmd_printk(KERN_ERR, hostdata->connected,
++ "PDMA fixup: DRQ timeout\n");
+ }
+ }
+
+diff --git a/drivers/scsi/elx/libefc/efc_nport.c b/drivers/scsi/elx/libefc/efc_nport.c
+index 2e83a667901fec..1a7437f4328e87 100644
+--- a/drivers/scsi/elx/libefc/efc_nport.c
++++ b/drivers/scsi/elx/libefc/efc_nport.c
+@@ -705,9 +705,9 @@ efc_nport_vport_del(struct efc *efc, struct efc_domain *domain,
+ spin_lock_irqsave(&efc->lock, flags);
+ list_for_each_entry(nport, &domain->nport_list, list_entry) {
+ if (nport->wwpn == wwpn && nport->wwnn == wwnn) {
+- kref_put(&nport->ref, nport->release);
+ /* Shutdown this NPORT */
+ efc_sm_post_event(&nport->sm, EFC_EVT_SHUTDOWN, NULL);
++ kref_put(&nport->ref, nport->release);
+ break;
+ }
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 500253007b1dc8..26e1313ebb21fc 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -4847,6 +4847,7 @@ struct fcp_iwrite64_wqe {
+ #define cmd_buff_len_SHIFT 16
+ #define cmd_buff_len_MASK 0x00000ffff
+ #define cmd_buff_len_WORD word3
++/* Note: payload_offset_len field depends on ASIC support */
+ #define payload_offset_len_SHIFT 0
+ #define payload_offset_len_MASK 0x0000ffff
+ #define payload_offset_len_WORD word3
+@@ -4863,6 +4864,7 @@ struct fcp_iread64_wqe {
+ #define cmd_buff_len_SHIFT 16
+ #define cmd_buff_len_MASK 0x00000ffff
+ #define cmd_buff_len_WORD word3
++/* Note: payload_offset_len field depends on ASIC support */
+ #define payload_offset_len_SHIFT 0
+ #define payload_offset_len_MASK 0x0000ffff
+ #define payload_offset_len_WORD word3
+@@ -4879,6 +4881,7 @@ struct fcp_icmnd64_wqe {
+ #define cmd_buff_len_SHIFT 16
+ #define cmd_buff_len_MASK 0x00000ffff
+ #define cmd_buff_len_WORD word3
++/* Note: payload_offset_len field depends on ASIC support */
+ #define payload_offset_len_SHIFT 0
+ #define payload_offset_len_MASK 0x0000ffff
+ #define payload_offset_len_WORD word3
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index e1dfa96c2a553a..0c1404dc5f3bdb 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -4699,6 +4699,7 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
+ uint64_t wwn;
+ bool use_no_reset_hba = false;
+ int rc;
++ u8 if_type;
+
+ if (lpfc_no_hba_reset_cnt) {
+ if (phba->sli_rev < LPFC_SLI_REV4 &&
+@@ -4773,10 +4774,24 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
+ shost->max_id = LPFC_MAX_TARGET;
+ shost->max_lun = vport->cfg_max_luns;
+ shost->this_id = -1;
+- if (phba->sli_rev == LPFC_SLI_REV4)
+- shost->max_cmd_len = LPFC_FCP_CDB_LEN_32;
+- else
++
++	/* Set max_cmd_len according to ASIC support */
++ if (phba->sli_rev == LPFC_SLI_REV4) {
++ if_type = bf_get(lpfc_sli_intf_if_type,
++ &phba->sli4_hba.sli_intf);
++ switch (if_type) {
++ case LPFC_SLI_INTF_IF_TYPE_2:
++ fallthrough;
++ case LPFC_SLI_INTF_IF_TYPE_6:
++ shost->max_cmd_len = LPFC_FCP_CDB_LEN_32;
++ break;
++ default:
++ shost->max_cmd_len = LPFC_FCP_CDB_LEN;
++ break;
++ }
++ } else {
+ shost->max_cmd_len = LPFC_FCP_CDB_LEN;
++ }
+
+ if (phba->sli_rev == LPFC_SLI_REV4) {
+ if (!phba->cfg_fcp_mq_threshold ||
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 98ce9d97a22570..9f0b59672e1915 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -4760,7 +4760,7 @@ static int lpfc_scsi_prep_cmnd_buf_s4(struct lpfc_vport *vport,
+
+ /* Word 3 */
+ bf_set(payload_offset_len, &wqe->fcp_icmd,
+- sizeof(struct fcp_cmnd32) + sizeof(struct fcp_rsp));
++ sizeof(struct fcp_cmnd) + sizeof(struct fcp_rsp));
+
+ /* Word 6 */
+ bf_set(wqe_ctxt_tag, &wqe->generic.wqe_com,
+diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c
+index 53ee8f84d094cf..3958f7dc679f0a 100644
+--- a/drivers/scsi/mac_scsi.c
++++ b/drivers/scsi/mac_scsi.c
+@@ -102,11 +102,15 @@ __setup("mac5380=", mac_scsi_setup);
+ * Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets
+ * so bus errors are unavoidable.
+ *
+- * If a MOVE.B instruction faults, we assume that zero bytes were transferred
+- * and simply retry. That assumption probably depends on target behaviour but
+- * seems to hold up okay. The NOP provides synchronization: without it the
+- * fault can sometimes occur after the program counter has moved past the
+- * offending instruction. Post-increment addressing can't be used.
++ * If a MOVE.B instruction faults during a receive operation, we assume the
++ * target sent nothing and try again. That assumption probably depends on
++ * target firmware, but it seems to hold up okay. If a fault happens during
++ * a send operation, the target may or may not have seen /ACK and got the
++ * byte. It's uncertain, so the whole SCSI command gets retried.
++ *
++ * The NOP is needed for synchronization because the fault address in the
++ * exception stack frame may not be that of the instruction that actually
++ * caused the bus error. Post-increment addressing can't be used.
+ */
+
+ #define MOVE_BYTE(operands) \
+@@ -208,8 +212,6 @@ __setup("mac5380=", mac_scsi_setup);
+ ".previous \n" \
+ : "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
+
+-#define MAC_PDMA_DELAY 32
+-
+ static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n)
+ {
+ unsigned char *addr = start;
+@@ -245,22 +247,21 @@ static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n)
+ if (n >= 1) {
+ MOVE_BYTE("%0@,%3@");
+ if (result)
+- goto out;
++ return -1;
+ }
+ if (n >= 1 && ((unsigned long)addr & 1)) {
+ MOVE_BYTE("%0@,%3@");
+ if (result)
+- goto out;
++ return -2;
+ }
+ while (n >= 32)
+ MOVE_16_WORDS("%0@+,%3@");
+ while (n >= 2)
+ MOVE_WORD("%0@+,%3@");
+ if (result)
+- return start - addr; /* Negated to indicate uncertain length */
++ return start - addr - 1; /* Negated to indicate uncertain length */
+ if (n == 1)
+ MOVE_BYTE("%0@,%3@");
+-out:
+ return addr - start;
+ }
+
+@@ -274,25 +275,56 @@ static inline void write_ctrl_reg(struct NCR5380_hostdata *hostdata, u32 value)
+ out_be32(hostdata->io + (CTRL_REG << 4), value);
+ }
+
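++/* Poll until DRQ is asserted: returns 0 on DRQ, 1 on phase mismatch, -1 on interrupt or timeout */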
++static inline int macscsi_wait_for_drq(struct NCR5380_hostdata *hostdata)
++{
++ unsigned int n = 1; /* effectively multiplies NCR5380_REG_POLL_TIME */
++ unsigned char basr;
++
++again:
++ basr = NCR5380_read(BUS_AND_STATUS_REG);
++
++ if (!(basr & BASR_PHASE_MATCH))
++ return 1;
++
++ if (basr & BASR_IRQ)
++ return -1;
++
++ if (basr & BASR_DRQ)
++ return 0;
++
++ if (n-- == 0) {
++ NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
++ dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
++ "%s: DRQ timeout\n", __func__);
++ return -1;
++ }
++
++ NCR5380_poll_politely2(hostdata,
++ BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
++ BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, 0);
++ goto again;
++}
++
+ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+ unsigned char *dst, int len)
+ {
+ u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4);
+ unsigned char *d = dst;
+- int result = 0;
+
+ hostdata->pdma_residual = len;
+
+- while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+- BASR_DRQ | BASR_PHASE_MATCH,
+- BASR_DRQ | BASR_PHASE_MATCH, 0)) {
+- int bytes;
++ while (macscsi_wait_for_drq(hostdata) == 0) {
++ int bytes, chunk_bytes;
+
+ if (macintosh_config->ident == MAC_MODEL_IIFX)
+ write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
+ CTRL_INTERRUPTS_ENABLE);
+
+- bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512));
++ chunk_bytes = min(hostdata->pdma_residual, 512);
++ bytes = mac_pdma_recv(s, d, chunk_bytes);
++
++ if (macintosh_config->ident == MAC_MODEL_IIFX)
++ write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+
+ if (bytes > 0) {
+ d += bytes;
+@@ -300,37 +332,25 @@ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+ }
+
+ if (hostdata->pdma_residual == 0)
+- goto out;
++ break;
+
+- if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+- BUS_AND_STATUS_REG, BASR_ACK,
+- BASR_ACK, 0) < 0)
+- scmd_printk(KERN_DEBUG, hostdata->connected,
+- "%s: !REQ and !ACK\n", __func__);
+- if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
+- goto out;
++ if (bytes > 0)
++ continue;
+
+- if (bytes == 0)
+- udelay(MAC_PDMA_DELAY);
++ NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
++ dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
++ "%s: bus error [%d/%d] (%d/%d)\n",
++ __func__, d - dst, len, bytes, chunk_bytes);
+
+- if (bytes >= 0)
++ if (bytes == 0)
+ continue;
+
+- dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
+- "%s: bus error (%d/%d)\n", __func__, d - dst, len);
+- NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+- result = -1;
+- goto out;
++ if (macscsi_wait_for_drq(hostdata) <= 0)
++ set_host_byte(hostdata->connected, DID_ERROR);
++ break;
+ }
+
+- scmd_printk(KERN_ERR, hostdata->connected,
+- "%s: phase mismatch or !DRQ\n", __func__);
+- NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+- result = -1;
+-out:
+- if (macintosh_config->ident == MAC_MODEL_IIFX)
+- write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+- return result;
++ return 0;
+ }
+
+ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+@@ -338,67 +358,47 @@ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+ {
+ unsigned char *s = src;
+ u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4);
+- int result = 0;
+
+ hostdata->pdma_residual = len;
+
+- while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+- BASR_DRQ | BASR_PHASE_MATCH,
+- BASR_DRQ | BASR_PHASE_MATCH, 0)) {
+- int bytes;
++ while (macscsi_wait_for_drq(hostdata) == 0) {
++ int bytes, chunk_bytes;
+
+ if (macintosh_config->ident == MAC_MODEL_IIFX)
+ write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
+ CTRL_INTERRUPTS_ENABLE);
+
+- bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512));
++ chunk_bytes = min(hostdata->pdma_residual, 512);
++ bytes = mac_pdma_send(s, d, chunk_bytes);
++
++ if (macintosh_config->ident == MAC_MODEL_IIFX)
++ write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+
+ if (bytes > 0) {
+ s += bytes;
+ hostdata->pdma_residual -= bytes;
+ }
+
+- if (hostdata->pdma_residual == 0) {
+- if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
+- TCR_LAST_BYTE_SENT,
+- TCR_LAST_BYTE_SENT,
+- 0) < 0) {
+- scmd_printk(KERN_ERR, hostdata->connected,
+- "%s: Last Byte Sent timeout\n", __func__);
+- result = -1;
+- }
+- goto out;
+- }
++ if (hostdata->pdma_residual == 0)
++ break;
+
+- if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+- BUS_AND_STATUS_REG, BASR_ACK,
+- BASR_ACK, 0) < 0)
+- scmd_printk(KERN_DEBUG, hostdata->connected,
+- "%s: !REQ and !ACK\n", __func__);
+- if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
+- goto out;
++ if (bytes > 0)
++ continue;
+
+- if (bytes == 0)
+- udelay(MAC_PDMA_DELAY);
++ NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
++ dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
++ "%s: bus error [%d/%d] (%d/%d)\n",
++ __func__, s - src, len, bytes, chunk_bytes);
+
+- if (bytes >= 0)
++ if (bytes == 0)
+ continue;
+
+- dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
+- "%s: bus error (%d/%d)\n", __func__, s - src, len);
+- NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+- result = -1;
+- goto out;
++ if (macscsi_wait_for_drq(hostdata) <= 0)
++ set_host_byte(hostdata->connected, DID_ERROR);
++ break;
+ }
+
+- scmd_printk(KERN_ERR, hostdata->connected,
+- "%s: phase mismatch or !DRQ\n", __func__);
+- NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+- result = -1;
+-out:
+- if (macintosh_config->ident == MAC_MODEL_IIFX)
+- write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+- return result;
++ return 0;
+ }
+
+ static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 9db86943d04cf5..53896df7ec2bf3 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1382,7 +1382,7 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
+ if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
+ ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks,
+ protect | fua, dld);
+- } else if (rq->cmd_flags & REQ_ATOMIC && write) {
++ } else if (rq->cmd_flags & REQ_ATOMIC) {
+ ret = sd_setup_atomic_cmnd(cmd, lba, nr_blocks,
+ sdkp->use_atomic_write_boundary,
+ protect | fua);
+@@ -3404,7 +3404,7 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
+ rcu_read_lock();
+ vpd = rcu_dereference(sdkp->device->vpd_pgb1);
+
+- if (!vpd || vpd->len < 8) {
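++	/* The zoned field is read at offset 8, so an 8-byte page is too short */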
++ if (!vpd || vpd->len <= 8) {
+ rcu_read_unlock();
+ return;
+ }
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 24c7cb285dca0f..c1524fb334eb5c 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -2354,14 +2354,6 @@ static inline void pqi_mask_device(u8 *scsi3addr)
+ scsi3addr[3] |= 0xc0;
+ }
+
+-static inline bool pqi_is_multipath_device(struct pqi_scsi_dev *device)
+-{
+- if (pqi_is_logical_device(device))
+- return false;
+-
+- return (device->path_map & (device->path_map - 1)) != 0;
+-}
+-
+ static inline bool pqi_expose_device(struct pqi_scsi_dev *device)
+ {
+ return !device->is_physical_device || !pqi_skip_device(device->scsi3addr);
+@@ -3258,14 +3250,12 @@ static void pqi_process_aio_io_error(struct pqi_io_request *io_request)
+ int residual_count;
+ int xfer_count;
+ bool device_offline;
+- struct pqi_scsi_dev *device;
+
+ scmd = io_request->scmd;
+ error_info = io_request->error_info;
+ host_byte = DID_OK;
+ sense_data_length = 0;
+ device_offline = false;
+- device = scmd->device->hostdata;
+
+ switch (error_info->service_response) {
+ case PQI_AIO_SERV_RESPONSE_COMPLETE:
+@@ -3290,14 +3280,8 @@ static void pqi_process_aio_io_error(struct pqi_io_request *io_request)
+ break;
+ case PQI_AIO_STATUS_AIO_PATH_DISABLED:
+ pqi_aio_path_disabled(io_request);
+- if (pqi_is_multipath_device(device)) {
+- pqi_device_remove_start(device);
+- host_byte = DID_NO_CONNECT;
+- scsi_status = SAM_STAT_CHECK_CONDITION;
+- } else {
+- scsi_status = SAM_STAT_GOOD;
+- io_request->status = -EAGAIN;
+- }
++ scsi_status = SAM_STAT_GOOD;
++ io_request->status = -EAGAIN;
+ break;
+ case PQI_AIO_STATUS_NO_PATH_TO_DEVICE:
+ case PQI_AIO_STATUS_INVALID_DEVICE:
+diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
+index 76bb496305a0e0..bacabf731dcb00 100644
+--- a/drivers/soc/fsl/qe/qmc.c
++++ b/drivers/soc/fsl/qe/qmc.c
+@@ -940,11 +940,13 @@ static int qmc_chan_start_rx(struct qmc_chan *chan)
+ goto end;
+ }
+
+- ret = qmc_setup_chan_trnsync(chan->qmc, chan);
+- if (ret) {
+- dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
+- chan->id, ret);
+- goto end;
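++	/* TRNSYNC setup only applies to transparent-mode channels */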
++ if (chan->mode == QMC_TRANSPARENT) {
++ ret = qmc_setup_chan_trnsync(chan->qmc, chan);
++ if (ret) {
++ dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
++ chan->id, ret);
++ goto end;
++ }
+ }
+
+ /* Restart the receiver */
+@@ -982,11 +984,13 @@ static int qmc_chan_start_tx(struct qmc_chan *chan)
+ goto end;
+ }
+
+- ret = qmc_setup_chan_trnsync(chan->qmc, chan);
+- if (ret) {
+- dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
+- chan->id, ret);
+- goto end;
++ if (chan->mode == QMC_TRANSPARENT) {
++ ret = qmc_setup_chan_trnsync(chan->qmc, chan);
++ if (ret) {
++ dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
++ chan->id, ret);
++ goto end;
++ }
+ }
+
+ /*
+diff --git a/drivers/soc/fsl/qe/tsa.c b/drivers/soc/fsl/qe/tsa.c
+index 6c5741cf5e9d2a..53968ea84c8872 100644
+--- a/drivers/soc/fsl/qe/tsa.c
++++ b/drivers/soc/fsl/qe/tsa.c
+@@ -140,7 +140,7 @@ static inline void tsa_write32(void __iomem *addr, u32 val)
+ iowrite32be(val, addr);
+ }
+
+-static inline void tsa_write8(void __iomem *addr, u32 val)
++static inline void tsa_write8(void __iomem *addr, u8 val)
+ {
+ iowrite8(val, addr);
+ }
+diff --git a/drivers/soc/qcom/smd-rpm.c b/drivers/soc/qcom/smd-rpm.c
+index b7056aed4c7d10..9d64283d212549 100644
+--- a/drivers/soc/qcom/smd-rpm.c
++++ b/drivers/soc/qcom/smd-rpm.c
+@@ -196,9 +196,6 @@ static int qcom_smd_rpm_probe(struct rpmsg_device *rpdev)
+ {
+ struct qcom_smd_rpm *rpm;
+
+- if (!rpdev->dev.of_node)
+- return -EINVAL;
+-
+ rpm = devm_kzalloc(&rpdev->dev, sizeof(*rpm), GFP_KERNEL);
+ if (!rpm)
+ return -ENOMEM;
+@@ -218,18 +215,38 @@ static void qcom_smd_rpm_remove(struct rpmsg_device *rpdev)
+ of_platform_depopulate(&rpdev->dev);
+ }
+
+-static const struct rpmsg_device_id qcom_smd_rpm_id_table[] = {
+- { .name = "rpm_requests", },
+- { /* sentinel */ }
++static const struct of_device_id qcom_smd_rpm_of_match[] = {
++ { .compatible = "qcom,rpm-apq8084" },
++ { .compatible = "qcom,rpm-ipq6018" },
++ { .compatible = "qcom,rpm-ipq9574" },
++ { .compatible = "qcom,rpm-msm8226" },
++ { .compatible = "qcom,rpm-msm8909" },
++ { .compatible = "qcom,rpm-msm8916" },
++ { .compatible = "qcom,rpm-msm8936" },
++ { .compatible = "qcom,rpm-msm8953" },
++ { .compatible = "qcom,rpm-msm8974" },
++ { .compatible = "qcom,rpm-msm8976" },
++ { .compatible = "qcom,rpm-msm8994" },
++ { .compatible = "qcom,rpm-msm8996" },
++ { .compatible = "qcom,rpm-msm8998" },
++ { .compatible = "qcom,rpm-sdm660" },
++ { .compatible = "qcom,rpm-sm6115" },
++ { .compatible = "qcom,rpm-sm6125" },
++ { .compatible = "qcom,rpm-sm6375" },
++ { .compatible = "qcom,rpm-qcm2290" },
++ { .compatible = "qcom,rpm-qcs404" },
++ {}
+ };
+-MODULE_DEVICE_TABLE(rpmsg, qcom_smd_rpm_id_table);
++MODULE_DEVICE_TABLE(of, qcom_smd_rpm_of_match);
+
+ static struct rpmsg_driver qcom_smd_rpm_driver = {
+ .probe = qcom_smd_rpm_probe,
+ .remove = qcom_smd_rpm_remove,
+ .callback = qcom_smd_rpm_callback,
+- .id_table = qcom_smd_rpm_id_table,
+- .drv.name = "qcom_smd_rpm",
++ .drv = {
++ .name = "qcom_smd_rpm",
++ .of_match_table = qcom_smd_rpm_of_match,
++ },
+ };
+
+ static int __init qcom_smd_rpm_init(void)
+diff --git a/drivers/soc/versatile/soc-integrator.c b/drivers/soc/versatile/soc-integrator.c
+index bab4ad87aa7500..d5099a3386b4fc 100644
+--- a/drivers/soc/versatile/soc-integrator.c
++++ b/drivers/soc/versatile/soc-integrator.c
+@@ -113,6 +113,7 @@ static int __init integrator_soc_init(void)
+ return -ENODEV;
+
+ syscon_regmap = syscon_node_to_regmap(np);
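++	/* Drop the reference taken when the node was looked up */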
++ of_node_put(np);
+ if (IS_ERR(syscon_regmap))
+ return PTR_ERR(syscon_regmap);
+
+diff --git a/drivers/soc/versatile/soc-realview.c b/drivers/soc/versatile/soc-realview.c
+index c6876d232d8fd6..cf91abe07d38d0 100644
+--- a/drivers/soc/versatile/soc-realview.c
++++ b/drivers/soc/versatile/soc-realview.c
+@@ -4,6 +4,7 @@
+ *
+ * Author: Linus Walleij <linus.walleij@linaro.org>
+ */
++#include <linux/device.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/slab.h>
+@@ -81,6 +82,13 @@ static struct attribute *realview_attrs[] = {
+
+ ATTRIBUTE_GROUPS(realview);
+
++static void realview_soc_socdev_release(void *data)
++{
++ struct soc_device *soc_dev = data;
++
++ soc_device_unregister(soc_dev);
++}
++
+ static int realview_soc_probe(struct platform_device *pdev)
+ {
+ struct regmap *syscon_regmap;
+@@ -93,7 +101,7 @@ static int realview_soc_probe(struct platform_device *pdev)
+ if (IS_ERR(syscon_regmap))
+ return PTR_ERR(syscon_regmap);
+
+- soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
++ soc_dev_attr = devm_kzalloc(&pdev->dev, sizeof(*soc_dev_attr), GFP_KERNEL);
+ if (!soc_dev_attr)
+ return -ENOMEM;
+
+@@ -106,10 +114,14 @@ static int realview_soc_probe(struct platform_device *pdev)
+ soc_dev_attr->family = "Versatile";
+ soc_dev_attr->custom_attr_group = realview_groups[0];
+ soc_dev = soc_device_register(soc_dev_attr);
+- if (IS_ERR(soc_dev)) {
+- kfree(soc_dev_attr);
++ if (IS_ERR(soc_dev))
+ return -ENODEV;
+- }
++
++ ret = devm_add_action_or_reset(&pdev->dev, realview_soc_socdev_release,
++ soc_dev);
++ if (ret)
++ return ret;
++
+ ret = regmap_read(syscon_regmap, REALVIEW_SYS_ID_OFFSET,
+ &realview_coreid);
+ if (ret)
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 5aaff3bee1b78f..9ea91432c11d81 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -375,9 +375,9 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ * If the QSPI controller is set in regular SPI mode, set it in
+ * Serial Memory Mode (SMM).
+ */
+- if (aq->mr != QSPI_MR_SMM) {
+- atmel_qspi_write(QSPI_MR_SMM, aq, QSPI_MR);
+- aq->mr = QSPI_MR_SMM;
++ if (!(aq->mr & QSPI_MR_SMM)) {
++ aq->mr |= QSPI_MR_SMM;
++ atmel_qspi_write(aq->mr, aq, QSPI_MR);
+ }
+
+ /* Clear pending interrupts */
+@@ -501,7 +501,8 @@ static int atmel_qspi_setup(struct spi_device *spi)
+ if (ret < 0)
+ return ret;
+
+- aq->scr = QSPI_SCR_SCBR(scbr);
++ aq->scr &= ~QSPI_SCR_SCBR_MASK;
++ aq->scr |= QSPI_SCR_SCBR(scbr);
+ atmel_qspi_write(aq->scr, aq, QSPI_SCR);
+
+ pm_runtime_mark_last_busy(ctrl->dev.parent);
+@@ -534,6 +535,7 @@ static int atmel_qspi_set_cs_timing(struct spi_device *spi)
+ if (ret < 0)
+ return ret;
+
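++	/* Clear the old DLYBS field before programming the new delay */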
++ aq->scr &= ~QSPI_SCR_DLYBS_MASK;
+ aq->scr |= QSPI_SCR_DLYBS(cs_setup);
+ atmel_qspi_write(aq->scr, aq, QSPI_SCR);
+
+@@ -549,8 +551,8 @@ static void atmel_qspi_init(struct atmel_qspi *aq)
+ atmel_qspi_write(QSPI_CR_SWRST, aq, QSPI_CR);
+
+ /* Set the QSPI controller by default in Serial Memory Mode */
+- atmel_qspi_write(QSPI_MR_SMM, aq, QSPI_MR);
+- aq->mr = QSPI_MR_SMM;
++ aq->mr |= QSPI_MR_SMM;
++ atmel_qspi_write(aq->mr, aq, QSPI_MR);
+
+ /* Enable the QSPI controller */
+ atmel_qspi_write(QSPI_CR_QSPIEN, aq, QSPI_CR);
+@@ -726,6 +728,7 @@ static void atmel_qspi_remove(struct platform_device *pdev)
+ clk_unprepare(aq->pclk);
+
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ }
+
+diff --git a/drivers/spi/spi-airoha-snfi.c b/drivers/spi/spi-airoha-snfi.c
+index 9d97ec98881cc9..94458df53eae23 100644
+--- a/drivers/spi/spi-airoha-snfi.c
++++ b/drivers/spi/spi-airoha-snfi.c
+@@ -211,9 +211,6 @@ struct airoha_snand_dev {
+
+ u8 *txrx_buf;
+ dma_addr_t dma_addr;
+-
+- u64 cur_page_num;
+- bool data_need_update;
+ };
+
+ struct airoha_snand_ctrl {
+@@ -405,7 +402,7 @@ static int airoha_snand_write_data(struct airoha_snand_ctrl *as_ctrl, u8 cmd,
+ for (i = 0; i < len; i += data_len) {
+ int err;
+
+- data_len = min(len, SPI_MAX_TRANSFER_SIZE);
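++		/* Cap each chunk at SPI_MAX_TRANSFER_SIZE of the remaining bytes */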
++ data_len = min(len - i, SPI_MAX_TRANSFER_SIZE);
+ err = airoha_snand_set_fifo_op(as_ctrl, cmd, data_len);
+ if (err)
+ return err;
+@@ -427,7 +424,7 @@ static int airoha_snand_read_data(struct airoha_snand_ctrl *as_ctrl, u8 *data,
+ for (i = 0; i < len; i += data_len) {
+ int err;
+
+- data_len = min(len, SPI_MAX_TRANSFER_SIZE);
++ data_len = min(len - i, SPI_MAX_TRANSFER_SIZE);
+ err = airoha_snand_set_fifo_op(as_ctrl, 0xc, data_len);
+ if (err)
+ return err;
+@@ -644,11 +641,6 @@ static ssize_t airoha_snand_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ u32 val, rd_mode;
+ int err;
+
+- if (!as_dev->data_need_update)
+- return len;
+-
+- as_dev->data_need_update = false;
+-
+ switch (op->cmd.opcode) {
+ case SPI_NAND_OP_READ_FROM_CACHE_DUAL:
+ rd_mode = 1;
+@@ -739,8 +731,13 @@ static ssize_t airoha_snand_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ if (err)
+ return err;
+
+- err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
+- SPI_NFI_READ_FROM_CACHE_DONE);
++ /*
++	 * The SPI_NFI_READ_FROM_CACHE_DONE bit must be written at the
++	 * end of the dirmap_read operation even if it is already set.
++ */
++ err = regmap_write_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
++ SPI_NFI_READ_FROM_CACHE_DONE,
++ SPI_NFI_READ_FROM_CACHE_DONE);
+ if (err)
+ return err;
+
+@@ -870,8 +867,13 @@ static ssize_t airoha_snand_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ if (err)
+ return err;
+
+- err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
+- SPI_NFI_LOAD_TO_CACHE_DONE);
++ /*
++	 * The SPI_NFI_LOAD_TO_CACHE_DONE bit must be written at the
++	 * end of the dirmap_write operation even if it is already set.
++ */
++ err = regmap_write_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
++ SPI_NFI_LOAD_TO_CACHE_DONE,
++ SPI_NFI_LOAD_TO_CACHE_DONE);
+ if (err)
+ return err;
+
+@@ -885,23 +887,11 @@ static ssize_t airoha_snand_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ static int airoha_snand_exec_op(struct spi_mem *mem,
+ const struct spi_mem_op *op)
+ {
+- struct airoha_snand_dev *as_dev = spi_get_ctldata(mem->spi);
+ u8 data[8], cmd, opcode = op->cmd.opcode;
+ struct airoha_snand_ctrl *as_ctrl;
+ int i, err;
+
+ as_ctrl = spi_controller_get_devdata(mem->spi->controller);
+- if (opcode == SPI_NAND_OP_PROGRAM_EXECUTE &&
+- op->addr.val == as_dev->cur_page_num) {
+- as_dev->data_need_update = true;
+- } else if (opcode == SPI_NAND_OP_PAGE_READ) {
+- if (!as_dev->data_need_update &&
+- op->addr.val == as_dev->cur_page_num)
+- return 0;
+-
+- as_dev->data_need_update = true;
+- as_dev->cur_page_num = op->addr.val;
+- }
+
+ /* switch to manual mode */
+ err = airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL);
+@@ -986,7 +976,6 @@ static int airoha_snand_setup(struct spi_device *spi)
+ if (dma_mapping_error(as_ctrl->dev, as_dev->dma_addr))
+ return -ENOMEM;
+
+- as_dev->data_need_update = true;
+ spi_set_ctldata(spi, as_dev);
+
+ return 0;
+diff --git a/drivers/spi/spi-bcmbca-hsspi.c b/drivers/spi/spi-bcmbca-hsspi.c
+index 9f64afd8164ea9..4965bc86d7f52a 100644
+--- a/drivers/spi/spi-bcmbca-hsspi.c
++++ b/drivers/spi/spi-bcmbca-hsspi.c
+@@ -546,12 +546,14 @@ static int bcmbca_hsspi_probe(struct platform_device *pdev)
+ goto out_put_host;
+ }
+
+- pm_runtime_enable(&pdev->dev);
++ ret = devm_pm_runtime_enable(&pdev->dev);
++ if (ret)
++ goto out_put_host;
+
+ ret = sysfs_create_group(&pdev->dev.kobj, &bcmbca_hsspi_group);
+ if (ret) {
+ dev_err(&pdev->dev, "couldn't register sysfs group\n");
+- goto out_pm_disable;
++ goto out_put_host;
+ }
+
+ /* register and we are done */
+@@ -565,8 +567,6 @@ static int bcmbca_hsspi_probe(struct platform_device *pdev)
+
+ out_sysgroup_disable:
+ sysfs_remove_group(&pdev->dev.kobj, &bcmbca_hsspi_group);
+-out_pm_disable:
+- pm_runtime_disable(&pdev->dev);
+ out_put_host:
+ spi_controller_put(host);
+ out_disable_pll_clk:
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 8ecb426be45c75..977e8b55c82b7d 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -986,6 +986,7 @@ static void fsl_lpspi_remove(struct platform_device *pdev)
+
+ fsl_lpspi_dma_exit(controller);
+
++ pm_runtime_dont_use_autosuspend(fsl_lpspi->dev);
+ pm_runtime_disable(fsl_lpspi->dev);
+ }
+
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index 6585b19a48662d..a961785724cdf2 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -57,13 +57,6 @@
+ #include <linux/spi/spi.h>
+ #include <linux/spi/spi-mem.h>
+
+-/*
+- * The driver only uses one single LUT entry, that is updated on
+- * each call of exec_op(). Index 0 is preset at boot with a basic
+- * read operation, so let's use the last entry (31).
+- */
+-#define SEQID_LUT 31
+-
+ /* Registers used by the driver */
+ #define FSPI_MCR0 0x00
+ #define FSPI_MCR0_AHB_TIMEOUT(x) ((x) << 24)
+@@ -263,9 +256,6 @@
+ #define FSPI_TFDR 0x180
+
+ #define FSPI_LUT_BASE 0x200
+-#define FSPI_LUT_OFFSET (SEQID_LUT * 4 * 4)
+-#define FSPI_LUT_REG(idx) \
+- (FSPI_LUT_BASE + FSPI_LUT_OFFSET + (idx) * 4)
+
+ /* register map end */
+
+@@ -341,6 +331,7 @@ struct nxp_fspi_devtype_data {
+ unsigned int txfifo;
+ unsigned int ahb_buf_size;
+ unsigned int quirks;
++ unsigned int lut_num;
+ bool little_endian;
+ };
+
+@@ -349,6 +340,7 @@ static struct nxp_fspi_devtype_data lx2160a_data = {
+ .txfifo = SZ_1K, /* (128 * 64 bits) */
+ .ahb_buf_size = SZ_2K, /* (256 * 64 bits) */
+ .quirks = 0,
++ .lut_num = 32,
+ .little_endian = true, /* little-endian */
+ };
+
+@@ -357,6 +349,7 @@ static struct nxp_fspi_devtype_data imx8mm_data = {
+ .txfifo = SZ_1K, /* (128 * 64 bits) */
+ .ahb_buf_size = SZ_2K, /* (256 * 64 bits) */
+ .quirks = 0,
++ .lut_num = 32,
+ .little_endian = true, /* little-endian */
+ };
+
+@@ -365,6 +358,7 @@ static struct nxp_fspi_devtype_data imx8qxp_data = {
+ .txfifo = SZ_1K, /* (128 * 64 bits) */
+ .ahb_buf_size = SZ_2K, /* (256 * 64 bits) */
+ .quirks = 0,
++ .lut_num = 32,
+ .little_endian = true, /* little-endian */
+ };
+
+@@ -373,6 +367,16 @@ static struct nxp_fspi_devtype_data imx8dxl_data = {
+ .txfifo = SZ_1K, /* (128 * 64 bits) */
+ .ahb_buf_size = SZ_2K, /* (256 * 64 bits) */
+ .quirks = FSPI_QUIRK_USE_IP_ONLY,
++ .lut_num = 32,
++ .little_endian = true, /* little-endian */
++};
++
++static struct nxp_fspi_devtype_data imx8ulp_data = {
++ .rxfifo = SZ_512, /* (64 * 64 bits) */
++ .txfifo = SZ_1K, /* (128 * 64 bits) */
++ .ahb_buf_size = SZ_2K, /* (256 * 64 bits) */
++ .quirks = 0,
++ .lut_num = 16,
+ .little_endian = true, /* little-endian */
+ };
+
+@@ -544,6 +548,8 @@ static void nxp_fspi_prepare_lut(struct nxp_fspi *f,
+ void __iomem *base = f->iobase;
+ u32 lutval[4] = {};
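++	/* Each LUT entry spans four 32-bit registers, i.e. 16 bytes */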
+ int lutidx = 1, i;
++ u32 lut_offset = (f->devtype_data->lut_num - 1) * 4 * 4;
++ u32 target_lut_reg;
+
+ /* cmd */
+ lutval[0] |= LUT_DEF(0, LUT_CMD, LUT_PAD(op->cmd.buswidth),
+@@ -588,8 +594,10 @@ static void nxp_fspi_prepare_lut(struct nxp_fspi *f,
+ fspi_writel(f, FSPI_LCKER_UNLOCK, f->iobase + FSPI_LCKCR);
+
+ /* fill LUT */
+- for (i = 0; i < ARRAY_SIZE(lutval); i++)
+- fspi_writel(f, lutval[i], base + FSPI_LUT_REG(i));
++ for (i = 0; i < ARRAY_SIZE(lutval); i++) {
++ target_lut_reg = FSPI_LUT_BASE + lut_offset + i * 4;
++ fspi_writel(f, lutval[i], base + target_lut_reg);
++ }
+
+ dev_dbg(f->dev, "CMD[%02x] lutval[0:%08x 1:%08x 2:%08x 3:%08x], size: 0x%08x\n",
+ op->cmd.opcode, lutval[0], lutval[1], lutval[2], lutval[3], op->data.nbytes);
+@@ -876,7 +884,7 @@ static int nxp_fspi_do_op(struct nxp_fspi *f, const struct spi_mem_op *op)
+ void __iomem *base = f->iobase;
+ int seqnum = 0;
+ int err = 0;
+- u32 reg;
++ u32 reg, seqid_lut;
+
+ reg = fspi_readl(f, base + FSPI_IPRXFCR);
+ /* invalid RXFIFO first */
+@@ -892,8 +900,9 @@ static int nxp_fspi_do_op(struct nxp_fspi *f, const struct spi_mem_op *op)
+ * the LUT at each exec_op() call. And also specify the DATA
+	 * length, since it has not been specified in the LUT.
+ */
++ seqid_lut = f->devtype_data->lut_num - 1;
+ fspi_writel(f, op->data.nbytes |
+- (SEQID_LUT << FSPI_IPCR1_SEQID_SHIFT) |
++ (seqid_lut << FSPI_IPCR1_SEQID_SHIFT) |
+ (seqnum << FSPI_IPCR1_SEQNUM_SHIFT),
+ base + FSPI_IPCR1);
+
+@@ -1017,7 +1026,7 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f)
+ {
+ void __iomem *base = f->iobase;
+ int ret, i;
+- u32 reg;
++ u32 reg, seqid_lut;
+
+ /* disable and unprepare clock to avoid glitch pass to controller */
+ nxp_fspi_clk_disable_unprep(f);
+@@ -1092,11 +1101,17 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f)
+ fspi_writel(f, reg, base + FSPI_FLSHB1CR1);
+ fspi_writel(f, reg, base + FSPI_FLSHB2CR1);
+
++ /*
++	 * The driver only uses a single LUT entry, which is updated on
++	 * each call of exec_op(). Index 0 is preset at boot with a basic
++	 * read operation, so let's use the last entry.
++ */
++ seqid_lut = f->devtype_data->lut_num - 1;
+ /* AHB Read - Set lut sequence ID for all CS. */
+- fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA1CR2);
+- fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA2CR2);
+- fspi_writel(f, SEQID_LUT, base + FSPI_FLSHB1CR2);
+- fspi_writel(f, SEQID_LUT, base + FSPI_FLSHB2CR2);
++ fspi_writel(f, seqid_lut, base + FSPI_FLSHA1CR2);
++ fspi_writel(f, seqid_lut, base + FSPI_FLSHA2CR2);
++ fspi_writel(f, seqid_lut, base + FSPI_FLSHB1CR2);
++ fspi_writel(f, seqid_lut, base + FSPI_FLSHB2CR2);
+
+ f->selected = -1;
+
+@@ -1291,6 +1306,7 @@ static const struct of_device_id nxp_fspi_dt_ids[] = {
+ { .compatible = "nxp,imx8mp-fspi", .data = (void *)&imx8mm_data, },
+ { .compatible = "nxp,imx8qxp-fspi", .data = (void *)&imx8qxp_data, },
+ { .compatible = "nxp,imx8dxl-fspi", .data = (void *)&imx8dxl_data, },
++ { .compatible = "nxp,imx8ulp-fspi", .data = (void *)&imx8ulp_data, },
+ { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, nxp_fspi_dt_ids);
+diff --git a/drivers/spi/spi-ppc4xx.c b/drivers/spi/spi-ppc4xx.c
+index 942c3117ab3a90..8f6309f32de0b5 100644
+--- a/drivers/spi/spi-ppc4xx.c
++++ b/drivers/spi/spi-ppc4xx.c
+@@ -27,7 +27,6 @@
+ #include <linux/wait.h>
+ #include <linux/platform_device.h>
+ #include <linux/of_address.h>
+-#include <linux/of_irq.h>
+ #include <linux/of_platform.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+@@ -412,7 +411,11 @@ static int spi_ppc4xx_of_probe(struct platform_device *op)
+ }
+
+ /* Request IRQ */
+- hw->irqnum = irq_of_parse_and_map(np, 0);
++ ret = platform_get_irq(op, 0);
++ if (ret < 0)
++ goto free_host;
++ hw->irqnum = ret;
++
+ ret = request_irq(hw->irqnum, spi_ppc4xx_int,
+ 0, "spi_ppc4xx_of", (void *)hw);
+ if (ret) {
+diff --git a/drivers/staging/media/starfive/camss/stf-camss.c b/drivers/staging/media/starfive/camss/stf-camss.c
+index fecd3e67c7a1d5..b6d34145bc191e 100644
+--- a/drivers/staging/media/starfive/camss/stf-camss.c
++++ b/drivers/staging/media/starfive/camss/stf-camss.c
+@@ -358,8 +358,6 @@ static int stfcamss_probe(struct platform_device *pdev)
+ /*
+ * stfcamss_remove - Remove STFCAMSS platform device
+ * @pdev: Pointer to STFCAMSS platform device
+- *
+- * Always returns 0.
+ */
+ static void stfcamss_remove(struct platform_device *pdev)
+ {
+diff --git a/drivers/thermal/gov_bang_bang.c b/drivers/thermal/gov_bang_bang.c
+index daed67d19efb81..863e7a4272e66f 100644
+--- a/drivers/thermal/gov_bang_bang.c
++++ b/drivers/thermal/gov_bang_bang.c
+@@ -92,23 +92,21 @@ static void bang_bang_manage(struct thermal_zone_device *tz)
+
+ for_each_trip_desc(tz, td) {
+ const struct thermal_trip *trip = &td->trip;
++ bool turn_on;
+
+- if (tz->temperature >= td->threshold ||
+- trip->temperature == THERMAL_TEMP_INVALID ||
++ if (trip->temperature == THERMAL_TEMP_INVALID ||
+ trip->type == THERMAL_TRIP_CRITICAL ||
+ trip->type == THERMAL_TRIP_HOT)
+ continue;
+
+ /*
+- * If the initial cooling device state is "on", but the zone
+- * temperature is not above the trip point, the core will not
+- * call bang_bang_control() until the zone temperature reaches
+- * the trip point temperature which may be never. In those
+- * cases, set the initial state of the cooling device to 0.
++ * Adjust the target states for uninitialized thermal instances
++ * to the thermal zone temperature and the trip point threshold.
+ */
++ turn_on = tz->temperature >= td->threshold;
+ list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+ if (!instance->initialized && instance->trip == trip)
+- bang_bang_set_instance_target(instance, 0);
++ bang_bang_set_instance_target(instance, turn_on);
+ }
+ }
+
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index e6669aeda1fff0..93c16aff0df209 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -323,11 +323,15 @@ static void thermal_zone_broken_disable(struct thermal_zone_device *tz)
+ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz,
+ unsigned long delay)
+ {
+- if (delay)
+- mod_delayed_work(system_freezable_power_efficient_wq,
+- &tz->poll_queue, delay);
+- else
++ if (!delay) {
+ cancel_delayed_work(&tz->poll_queue);
++ return;
++ }
++
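++	/* Round delays longer than one second so timer wakeups can be batched */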
++ if (delay > HZ)
++ delay = round_jiffies_relative(delay);
++
++ mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, delay);
+ }
+
+ static void thermal_zone_recheck(struct thermal_zone_device *tz, int error)
+@@ -991,20 +995,6 @@ void print_bind_err_msg(struct thermal_zone_device *tz,
+ tz->type, cdev->type, ret);
+ }
+
+-static void bind_cdev(struct thermal_cooling_device *cdev)
+-{
+- int ret;
+- struct thermal_zone_device *pos = NULL;
+-
+- list_for_each_entry(pos, &thermal_tz_list, node) {
+- if (pos->ops.bind) {
+- ret = pos->ops.bind(pos, cdev);
+- if (ret)
+- print_bind_err_msg(pos, cdev, ret);
+- }
+- }
+-}
+-
+ /**
+ * __thermal_cooling_device_register() - register a new thermal cooling device
+ * @np: a pointer to a device tree node.
+@@ -1100,7 +1090,13 @@ __thermal_cooling_device_register(struct device_node *np,
+ list_add(&cdev->node, &thermal_cdev_list);
+
+ /* Update binding information for 'this' new cdev */
+- bind_cdev(cdev);
++ list_for_each_entry(pos, &thermal_tz_list, node) {
++ if (pos->ops.bind) {
++ ret = pos->ops.bind(pos, cdev);
++ if (ret)
++ print_bind_err_msg(pos, cdev, ret);
++ }
++ }
+
+ list_for_each_entry(pos, &thermal_tz_list, node)
+ if (atomic_cmpxchg(&pos->need_update, 1, 0))
+@@ -1338,32 +1334,6 @@ void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev)
+ }
+ EXPORT_SYMBOL_GPL(thermal_cooling_device_unregister);
+
+-static void bind_tz(struct thermal_zone_device *tz)
+-{
+- int ret;
+- struct thermal_cooling_device *pos = NULL;
+-
+- if (!tz->ops.bind)
+- return;
+-
+- mutex_lock(&thermal_list_lock);
+-
+- list_for_each_entry(pos, &thermal_cdev_list, node) {
+- ret = tz->ops.bind(tz, pos);
+- if (ret)
+- print_bind_err_msg(tz, pos, ret);
+- }
+-
+- mutex_unlock(&thermal_list_lock);
+-}
+-
+-static void thermal_set_delay_jiffies(unsigned long *delay_jiffies, int delay_ms)
+-{
+- *delay_jiffies = msecs_to_jiffies(delay_ms);
+- if (delay_ms > 1000)
+- *delay_jiffies = round_jiffies(*delay_jiffies);
+-}
+-
+ int thermal_zone_get_crit_temp(struct thermal_zone_device *tz, int *temp)
+ {
+ const struct thermal_trip_desc *td;
+@@ -1509,8 +1479,8 @@ thermal_zone_device_register_with_trips(const char *type,
+ td->threshold = INT_MAX;
+ }
+
+- thermal_set_delay_jiffies(&tz->passive_delay_jiffies, passive_delay);
+- thermal_set_delay_jiffies(&tz->polling_delay_jiffies, polling_delay);
++ tz->polling_delay_jiffies = msecs_to_jiffies(polling_delay);
++ tz->passive_delay_jiffies = msecs_to_jiffies(passive_delay);
+ tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+
+ /* sys I/F */
+@@ -1554,13 +1524,23 @@ thermal_zone_device_register_with_trips(const char *type,
+ }
+
+ mutex_lock(&thermal_list_lock);
++
+ mutex_lock(&tz->lock);
+ list_add_tail(&tz->node, &thermal_tz_list);
+ mutex_unlock(&tz->lock);
+- mutex_unlock(&thermal_list_lock);
+
+ /* Bind cooling devices for this zone */
+- bind_tz(tz);
++ if (tz->ops.bind) {
++ struct thermal_cooling_device *cdev;
++
++ list_for_each_entry(cdev, &thermal_cdev_list, node) {
++ result = tz->ops.bind(tz, cdev);
++ if (result)
++ print_bind_err_msg(tz, cdev, result);
++ }
++ }
++
++ mutex_unlock(&thermal_list_lock);
+
+ thermal_zone_device_init(tz);
+ /* Update the new thermal zone and mark it as already updated. */
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 4cf2b7230d04bb..3c8e2bca87f2d9 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -15,8 +15,20 @@
+ #include "thermal_netlink.h"
+ #include "thermal_debugfs.h"
+
++struct thermal_attr {
++ struct device_attribute attr;
++ char name[THERMAL_NAME_LENGTH];
++};
++
++struct thermal_trip_attrs {
++ struct thermal_attr type;
++ struct thermal_attr temp;
++ struct thermal_attr hyst;
++};
++
+ struct thermal_trip_desc {
+ struct thermal_trip trip;
++ struct thermal_trip_attrs trip_attrs;
+ struct list_head notify_list_node;
+ int notify_temp;
+ int threshold;
+@@ -56,9 +68,6 @@ struct thermal_governor {
+ * @device: &struct device for this thermal zone
+ * @removal: removal completion
+ * @resume: resume completion
+- * @trip_temp_attrs: attributes for trip points for sysfs: trip temperature
+- * @trip_type_attrs: attributes for trip points for sysfs: trip type
+- * @trip_hyst_attrs: attributes for trip points for sysfs: trip hysteresis
+ * @mode: current mode of this thermal zone
+ * @devdata: private pointer for device private data
+ * @num_trips: number of trip points the thermal zone supports
+@@ -102,9 +111,6 @@ struct thermal_zone_device {
+ struct completion removal;
+ struct completion resume;
+ struct attribute_group trips_attribute_group;
+- struct thermal_attr *trip_temp_attrs;
+- struct thermal_attr *trip_type_attrs;
+- struct thermal_attr *trip_hyst_attrs;
+ enum thermal_device_mode mode;
+ void *devdata;
+ int num_trips;
+@@ -188,11 +194,6 @@ int for_each_thermal_governor(int (*cb)(struct thermal_governor *, void *),
+
+ struct thermal_zone_device *thermal_zone_get_by_id(int id);
+
+-struct thermal_attr {
+- struct device_attribute attr;
+- char name[THERMAL_NAME_LENGTH];
+-};
+-
+ static inline bool cdev_is_power_actor(struct thermal_cooling_device *cdev)
+ {
+ return cdev->ops->get_requested_power && cdev->ops->state2power &&
+@@ -262,11 +263,11 @@ const char *thermal_trip_type_name(enum thermal_trip_type trip_type);
+ void thermal_zone_set_trips(struct thermal_zone_device *tz);
+ int thermal_zone_trip_id(const struct thermal_zone_device *tz,
+ const struct thermal_trip *trip);
+-void thermal_zone_trip_updated(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip);
+ int __thermal_zone_get_temp(struct thermal_zone_device *tz, int *temp);
+ void thermal_zone_trip_down(struct thermal_zone_device *tz,
+ const struct thermal_trip *trip);
++void thermal_zone_set_trip_hyst(struct thermal_zone_device *tz,
++ struct thermal_trip *trip, int hyst);
+
+ /* sysfs I/F */
+ int thermal_zone_create_device_groups(struct thermal_zone_device *tz);
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 72b302bf914e31..d628dd67be5ccf 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -12,6 +12,7 @@
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#include <linux/container_of.h>
+ #include <linux/sysfs.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+@@ -78,51 +79,58 @@ mode_store(struct device *dev, struct device_attribute *attr,
+ return count;
+ }
+
++#define thermal_trip_of_attr(_ptr_, _attr_) \
++ ({ \
++ struct thermal_trip_desc *td; \
++ \
++ td = container_of(_ptr_, struct thermal_trip_desc, \
++ trip_attrs._attr_.attr); \
++ &td->trip; \
++ })
++
+ static ssize_t
+ trip_point_type_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- struct thermal_zone_device *tz = to_thermal_zone(dev);
+- int trip_id;
++ struct thermal_trip *trip = thermal_trip_of_attr(attr, type);
+
+- if (sscanf(attr->attr.name, "trip_point_%d_type", &trip_id) != 1)
+- return -EINVAL;
+-
+- return sprintf(buf, "%s\n", thermal_trip_type_name(tz->trips[trip_id].trip.type));
++ return sprintf(buf, "%s\n", thermal_trip_type_name(trip->type));
+ }
+
+ static ssize_t
+ trip_point_temp_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
++ struct thermal_trip *trip = thermal_trip_of_attr(attr, temp);
+ struct thermal_zone_device *tz = to_thermal_zone(dev);
+- struct thermal_trip *trip;
+- int trip_id, ret;
+- int temp;
++ int ret, temp;
+
+ ret = kstrtoint(buf, 10, &temp);
+ if (ret)
+ return -EINVAL;
+
+- if (sscanf(attr->attr.name, "trip_point_%d_temp", &trip_id) != 1)
+- return -EINVAL;
+-
+ mutex_lock(&tz->lock);
+
+- trip = &tz->trips[trip_id].trip;
+-
+- if (temp != trip->temperature) {
+- if (tz->ops.set_trip_temp) {
+- ret = tz->ops.set_trip_temp(tz, trip, temp);
+- if (ret)
+- goto unlock;
+- }
++ if (temp == trip->temperature)
++ goto unlock;
+
+- thermal_zone_set_trip_temp(tz, trip, temp);
++ /* Arrange the condition to avoid integer overflows. */
++ if (temp != THERMAL_TEMP_INVALID &&
++ temp <= trip->hysteresis + THERMAL_TEMP_INVALID) {
++ ret = -EINVAL;
++ goto unlock;
++ }
+
+- __thermal_zone_device_update(tz, THERMAL_TRIP_CHANGED);
++ if (tz->ops.set_trip_temp) {
++ ret = tz->ops.set_trip_temp(tz, trip, temp);
++ if (ret)
++ goto unlock;
+ }
+
++ thermal_zone_set_trip_temp(tz, trip, temp);
++
++ __thermal_zone_device_update(tz, THERMAL_TRIP_CHANGED);
++
+ unlock:
+ mutex_unlock(&tz->lock);
+
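The temperature bound in the hunk above is deliberately written as "temp <= trip->hysteresis + THERMAL_TEMP_INVALID" rather than "temp - hysteresis <= THERMAL_TEMP_INVALID", per the "avoid integer overflows" comment. A small standalone C sketch of why, assuming THERMAL_TEMP_INVALID is INT_MIN as in the kernel headers; the helper below is illustrative, not kernel code:

#include <limits.h>
#include <stdio.h>

#define THERMAL_TEMP_INVALID	INT_MIN

/* hyst has already been validated to be >= 0, as in the store path */
static int temp_in_range(int temp, int hyst)
{
	/* safe: INT_MIN + hyst cannot overflow for hyst in [0, INT_MAX],
	 * while "temp - hyst" can wrap for temp values near INT_MIN,
	 * which is undefined behaviour for signed int */
	return temp == THERMAL_TEMP_INVALID ||
	       temp > hyst + THERMAL_TEMP_INVALID;
}

int main(void)
{
	printf("%d\n", temp_in_range(INT_MIN + 5, 10));	/* 0: rejected */
	printf("%d\n", temp_in_range(45000, 2000));	/* 1: accepted */
	return 0;
}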
+@@ -133,57 +141,61 @@ static ssize_t
+ trip_point_temp_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- struct thermal_zone_device *tz = to_thermal_zone(dev);
+- int trip_id;
++ struct thermal_trip *trip = thermal_trip_of_attr(attr, temp);
+
+- if (sscanf(attr->attr.name, "trip_point_%d_temp", &trip_id) != 1)
+- return -EINVAL;
+-
+- return sprintf(buf, "%d\n", READ_ONCE(tz->trips[trip_id].trip.temperature));
++ return sprintf(buf, "%d\n", READ_ONCE(trip->temperature));
+ }
+
+ static ssize_t
+ trip_point_hyst_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
++ struct thermal_trip *trip = thermal_trip_of_attr(attr, hyst);
+ struct thermal_zone_device *tz = to_thermal_zone(dev);
+- struct thermal_trip *trip;
+- int trip_id, ret;
+- int hyst;
++ int ret, hyst;
+
+ ret = kstrtoint(buf, 10, &hyst);
+ if (ret || hyst < 0)
+ return -EINVAL;
+
+- if (sscanf(attr->attr.name, "trip_point_%d_hyst", &trip_id) != 1)
+- return -EINVAL;
+-
+ mutex_lock(&tz->lock);
+
+- trip = &tz->trips[trip_id].trip;
++ if (hyst == trip->hysteresis)
++ goto unlock;
+
+- if (hyst != trip->hysteresis) {
++ /*
++ * Allow the hysteresis to be updated while the temperature is
++ * invalid, so that user space does not have to adjust it again
++ * after a valid temperature has been set, but in that case just
++ * change the value and do nothing else.
++ */
++ if (trip->temperature == THERMAL_TEMP_INVALID) {
+ WRITE_ONCE(trip->hysteresis, hyst);
++ goto unlock;
++ }
+
+- thermal_zone_trip_updated(tz, trip);
++ if (trip->temperature - hyst <= THERMAL_TEMP_INVALID) {
++ ret = -EINVAL;
++ goto unlock;
+ }
+
++ thermal_zone_set_trip_hyst(tz, trip, hyst);
++
++ __thermal_zone_device_update(tz, THERMAL_TRIP_CHANGED);
++
++unlock:
+ mutex_unlock(&tz->lock);
+
+- return count;
++ return ret ? ret : count;
+ }
+
+ static ssize_t
+ trip_point_hyst_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- struct thermal_zone_device *tz = to_thermal_zone(dev);
+- int trip_id;
+-
+- if (sscanf(attr->attr.name, "trip_point_%d_hyst", &trip_id) != 1)
+- return -EINVAL;
++ struct thermal_trip *trip = thermal_trip_of_attr(attr, hyst);
+
+- return sprintf(buf, "%d\n", READ_ONCE(tz->trips[trip_id].trip.hysteresis));
++ return sprintf(buf, "%d\n", READ_ONCE(trip->hysteresis));
+ }
+
+ static ssize_t
+@@ -382,87 +394,55 @@ static const struct attribute_group *thermal_zone_attribute_groups[] = {
+ */
+ static int create_trip_attrs(struct thermal_zone_device *tz)
+ {
+- const struct thermal_trip_desc *td;
++ struct thermal_trip_desc *td;
+ struct attribute **attrs;
+-
+- /* This function works only for zones with at least one trip */
+- if (tz->num_trips <= 0)
+- return -EINVAL;
+-
+- tz->trip_type_attrs = kcalloc(tz->num_trips, sizeof(*tz->trip_type_attrs),
+- GFP_KERNEL);
+- if (!tz->trip_type_attrs)
+- return -ENOMEM;
+-
+- tz->trip_temp_attrs = kcalloc(tz->num_trips, sizeof(*tz->trip_temp_attrs),
+- GFP_KERNEL);
+- if (!tz->trip_temp_attrs) {
+- kfree(tz->trip_type_attrs);
+- return -ENOMEM;
+- }
+-
+- tz->trip_hyst_attrs = kcalloc(tz->num_trips,
+- sizeof(*tz->trip_hyst_attrs),
+- GFP_KERNEL);
+- if (!tz->trip_hyst_attrs) {
+- kfree(tz->trip_type_attrs);
+- kfree(tz->trip_temp_attrs);
+- return -ENOMEM;
+- }
++ int i;
+
+ attrs = kcalloc(tz->num_trips * 3 + 1, sizeof(*attrs), GFP_KERNEL);
+- if (!attrs) {
+- kfree(tz->trip_type_attrs);
+- kfree(tz->trip_temp_attrs);
+- kfree(tz->trip_hyst_attrs);
++ if (!attrs)
+ return -ENOMEM;
+- }
+
++ i = 0;
+ for_each_trip_desc(tz, td) {
+- int indx = thermal_zone_trip_id(tz, &td->trip);
++ struct thermal_trip_attrs *trip_attrs = &td->trip_attrs;
+
+ /* create trip type attribute */
+- snprintf(tz->trip_type_attrs[indx].name, THERMAL_NAME_LENGTH,
+- "trip_point_%d_type", indx);
++ snprintf(trip_attrs->type.name, THERMAL_NAME_LENGTH,
++ "trip_point_%d_type", i);
+
+- sysfs_attr_init(&tz->trip_type_attrs[indx].attr.attr);
+- tz->trip_type_attrs[indx].attr.attr.name =
+- tz->trip_type_attrs[indx].name;
+- tz->trip_type_attrs[indx].attr.attr.mode = S_IRUGO;
+- tz->trip_type_attrs[indx].attr.show = trip_point_type_show;
+- attrs[indx] = &tz->trip_type_attrs[indx].attr.attr;
++ sysfs_attr_init(&trip_attrs->type.attr.attr);
++ trip_attrs->type.attr.attr.name = trip_attrs->type.name;
++ trip_attrs->type.attr.attr.mode = S_IRUGO;
++ trip_attrs->type.attr.show = trip_point_type_show;
++ attrs[i] = &trip_attrs->type.attr.attr;
+
+ /* create trip temp attribute */
+- snprintf(tz->trip_temp_attrs[indx].name, THERMAL_NAME_LENGTH,
+- "trip_point_%d_temp", indx);
+-
+- sysfs_attr_init(&tz->trip_temp_attrs[indx].attr.attr);
+- tz->trip_temp_attrs[indx].attr.attr.name =
+- tz->trip_temp_attrs[indx].name;
+- tz->trip_temp_attrs[indx].attr.attr.mode = S_IRUGO;
+- tz->trip_temp_attrs[indx].attr.show = trip_point_temp_show;
++ snprintf(trip_attrs->temp.name, THERMAL_NAME_LENGTH,
++ "trip_point_%d_temp", i);
++
++ sysfs_attr_init(&trip_attrs->temp.attr.attr);
++ trip_attrs->temp.attr.attr.name = trip_attrs->temp.name;
++ trip_attrs->temp.attr.attr.mode = S_IRUGO;
++ trip_attrs->temp.attr.show = trip_point_temp_show;
+ if (td->trip.flags & THERMAL_TRIP_FLAG_RW_TEMP) {
+- tz->trip_temp_attrs[indx].attr.attr.mode |= S_IWUSR;
+- tz->trip_temp_attrs[indx].attr.store =
+- trip_point_temp_store;
++ trip_attrs->temp.attr.attr.mode |= S_IWUSR;
++ trip_attrs->temp.attr.store = trip_point_temp_store;
+ }
+- attrs[indx + tz->num_trips] = &tz->trip_temp_attrs[indx].attr.attr;
++ attrs[i + tz->num_trips] = &trip_attrs->temp.attr.attr;
+
+- snprintf(tz->trip_hyst_attrs[indx].name, THERMAL_NAME_LENGTH,
+- "trip_point_%d_hyst", indx);
++ snprintf(trip_attrs->hyst.name, THERMAL_NAME_LENGTH,
++ "trip_point_%d_hyst", i);
+
+- sysfs_attr_init(&tz->trip_hyst_attrs[indx].attr.attr);
+- tz->trip_hyst_attrs[indx].attr.attr.name =
+- tz->trip_hyst_attrs[indx].name;
+- tz->trip_hyst_attrs[indx].attr.attr.mode = S_IRUGO;
+- tz->trip_hyst_attrs[indx].attr.show = trip_point_hyst_show;
++ sysfs_attr_init(&trip_attrs->hyst.attr.attr);
++ trip_attrs->hyst.attr.attr.name = trip_attrs->hyst.name;
++ trip_attrs->hyst.attr.attr.mode = S_IRUGO;
++ trip_attrs->hyst.attr.show = trip_point_hyst_show;
+ if (td->trip.flags & THERMAL_TRIP_FLAG_RW_HYST) {
+- tz->trip_hyst_attrs[indx].attr.attr.mode |= S_IWUSR;
+- tz->trip_hyst_attrs[indx].attr.store =
+- trip_point_hyst_store;
++ trip_attrs->hyst.attr.attr.mode |= S_IWUSR;
++ trip_attrs->hyst.attr.store = trip_point_hyst_store;
+ }
+- attrs[indx + tz->num_trips * 2] =
+- &tz->trip_hyst_attrs[indx].attr.attr;
++ attrs[i + 2 * tz->num_trips] = &trip_attrs->hyst.attr.attr;
++ i++;
+ }
+ attrs[tz->num_trips * 3] = NULL;
+
+@@ -479,13 +459,8 @@ static int create_trip_attrs(struct thermal_zone_device *tz)
+ */
+ static void destroy_trip_attrs(struct thermal_zone_device *tz)
+ {
+- if (!tz)
+- return;
+-
+- kfree(tz->trip_type_attrs);
+- kfree(tz->trip_temp_attrs);
+- kfree(tz->trip_hyst_attrs);
+- kfree(tz->trips_attribute_group.attrs);
++ if (tz)
++ kfree(tz->trips_attribute_group.attrs);
+ }
+
+ int thermal_zone_create_device_groups(struct thermal_zone_device *tz)
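The thermal_sysfs.c rework above stops parsing trip indices out of attribute names with sscanf() and instead embeds the attributes in each trip descriptor, recovering the owner via container_of() in the new thermal_trip_of_attr() macro. A minimal userspace sketch of that pattern, with illustrative structure names rather than the real kernel types:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct attr { const char *name; };

struct trip_desc {
	int temperature;
	struct attr temp_attr;	/* embedded, like trip_attrs.temp */
};

int main(void)
{
	struct trip_desc td = { .temperature = 55000,
				.temp_attr = { .name = "trip_point_0_temp" } };
	struct attr *a = &td.temp_attr;	/* what a show() callback receives */
	struct trip_desc *owner = container_of(a, struct trip_desc, temp_attr);

	/* pointer arithmetic recovers the owning trip, no name parsing */
	printf("%s -> %d\n", a->name, owner->temperature);	/* 55000 */
	return 0;
}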
+diff --git a/drivers/thermal/thermal_trip.c b/drivers/thermal/thermal_trip.c
+index 06a0554ddc3891..fa5ac9b512312d 100644
+--- a/drivers/thermal/thermal_trip.c
++++ b/drivers/thermal/thermal_trip.c
+@@ -138,11 +138,11 @@ int thermal_zone_trip_id(const struct thermal_zone_device *tz,
+ return trip_to_trip_desc(trip) - tz->trips;
+ }
+
+-void thermal_zone_trip_updated(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip)
++void thermal_zone_set_trip_hyst(struct thermal_zone_device *tz,
++ struct thermal_trip *trip, int hyst)
+ {
++ WRITE_ONCE(trip->hysteresis, hyst);
+ thermal_notify_tz_trip_change(tz, trip);
+- __thermal_zone_device_update(tz, THERMAL_TRIP_CHANGED);
+ }
+
+ void thermal_zone_set_trip_temp(struct thermal_zone_device *tz,
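thermal_zone_set_trip_hyst() pairs a WRITE_ONCE() on the writer side with the READ_ONCE() in trip_point_hyst_show(), so lockless readers never see a torn value. A rough userspace analogue of that idiom using C11 relaxed atomics (an illustration only, not the kernel macros):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int hysteresis;

/* like thermal_zone_set_trip_hyst(): one marked write */
static void set_trip_hyst(int hyst)
{
	atomic_store_explicit(&hysteresis, hyst, memory_order_relaxed);
}

/* like trip_point_hyst_show(): one marked read, no tearing */
static int trip_hyst_show(void)
{
	return atomic_load_explicit(&hysteresis, memory_order_relaxed);
}

int main(void)
{
	set_trip_hyst(2000);
	printf("%d\n", trip_hyst_show());	/* 2000 */
	return 0;
}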
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index afef1dd4ddf49c..fca5f25d693a72 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1581,7 +1581,7 @@ static int omap8250_probe(struct platform_device *pdev)
+ ret = devm_request_irq(&pdev->dev, up.port.irq, omap8250_irq, 0,
+ dev_name(&pdev->dev), priv);
+ if (ret < 0)
+- return ret;
++ goto err;
+
+ priv->wakeirq = irq_of_parse_and_map(np, 1);
+
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 69a632fefc41f9..f8f6e9466b400d 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -124,13 +124,14 @@ struct qcom_geni_serial_port {
+ dma_addr_t tx_dma_addr;
+ dma_addr_t rx_dma_addr;
+ bool setup;
+- unsigned int baud;
++ unsigned long poll_timeout_us;
+ unsigned long clk_rate;
+ void *rx_buf;
+ u32 loopback;
+ bool brk;
+
+ unsigned int tx_remaining;
++ unsigned int tx_queued;
+ int wakeup_irq;
+ bool rx_tx_swap;
+ bool cts_rts_swap;
+@@ -144,6 +145,8 @@ static const struct uart_ops qcom_geni_uart_pops;
+ static struct uart_driver qcom_geni_console_driver;
+ static struct uart_driver qcom_geni_uart_driver;
+
++static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport);
++
+ static inline struct qcom_geni_serial_port *to_dev_port(struct uart_port *uport)
+ {
+ return container_of(uport, struct qcom_geni_serial_port, uport);
+@@ -265,27 +268,18 @@ static bool qcom_geni_serial_secondary_active(struct uart_port *uport)
+ return readl(uport->membase + SE_GENI_STATUS) & S_GENI_CMD_ACTIVE;
+ }
+
+-static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
+- int offset, int field, bool set)
++static bool qcom_geni_serial_poll_bitfield(struct uart_port *uport,
++ unsigned int offset, u32 field, u32 val)
+ {
+ u32 reg;
+ struct qcom_geni_serial_port *port;
+- unsigned int baud;
+- unsigned int fifo_bits;
+ unsigned long timeout_us = 20000;
+ struct qcom_geni_private_data *private_data = uport->private_data;
+
+ if (private_data->drv) {
+ port = to_dev_port(uport);
+- baud = port->baud;
+- if (!baud)
+- baud = 115200;
+- fifo_bits = port->tx_fifo_depth * port->tx_fifo_width;
+- /*
+- * Total polling iterations based on FIFO worth of bytes to be
+- * sent at current baud. Add a little fluff to the wait.
+- */
+- timeout_us = ((fifo_bits * USEC_PER_SEC) / baud) + 500;
++ if (port->poll_timeout_us)
++ timeout_us = port->poll_timeout_us;
+ }
+
+ /*
+@@ -295,7 +289,7 @@ static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
+ timeout_us = DIV_ROUND_UP(timeout_us, 10) * 10;
+ while (timeout_us) {
+ reg = readl(uport->membase + offset);
+- if ((bool)(reg & field) == set)
++ if ((reg & field) == val)
+ return true;
+ udelay(10);
+ timeout_us -= 10;
+@@ -303,6 +297,12 @@ static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
+ return false;
+ }
+
++static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
++ unsigned int offset, u32 field, bool set)
++{
++ return qcom_geni_serial_poll_bitfield(uport, offset, field, set ? field : 0);
++}
++
+ static void qcom_geni_serial_setup_tx(struct uart_port *uport, u32 xmit_size)
+ {
+ u32 m_cmd;
+@@ -315,18 +315,16 @@ static void qcom_geni_serial_setup_tx(struct uart_port *uport, u32 xmit_size)
+ static void qcom_geni_serial_poll_tx_done(struct uart_port *uport)
+ {
+ int done;
+- u32 irq_clear = M_CMD_DONE_EN;
+
+ done = qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+ M_CMD_DONE_EN, true);
+ if (!done) {
+ writel(M_GENI_CMD_ABORT, uport->membase +
+ SE_GENI_M_CMD_CTRL_REG);
+- irq_clear |= M_CMD_ABORT_EN;
+ qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+ M_CMD_ABORT_EN, true);
++ writel(M_CMD_ABORT_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ }
+- writel(irq_clear, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ }
+
+ static void qcom_geni_serial_abort_rx(struct uart_port *uport)
+@@ -387,6 +385,7 @@ static void qcom_geni_serial_poll_put_char(struct uart_port *uport,
+ unsigned char c)
+ {
+ writel(DEF_TX_WM, uport->membase + SE_GENI_TX_WATERMARK_REG);
++ writel(M_CMD_DONE_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ qcom_geni_serial_setup_tx(uport, 1);
+ WARN_ON(!qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+ M_TX_FIFO_WATERMARK_EN, true));
+@@ -397,6 +396,14 @@ static void qcom_geni_serial_poll_put_char(struct uart_port *uport,
+ #endif
+
+ #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
++static void qcom_geni_serial_drain_fifo(struct uart_port *uport)
++{
++ struct qcom_geni_serial_port *port = to_dev_port(uport);
++
++ qcom_geni_serial_poll_bitfield(uport, SE_GENI_M_GP_LENGTH, GP_LENGTH,
++ port->tx_queued);
++}
++
+ static void qcom_geni_serial_wr_char(struct uart_port *uport, unsigned char ch)
+ {
+ struct qcom_geni_private_data *private_data = uport->private_data;
+@@ -431,6 +438,7 @@ __qcom_geni_serial_console_write(struct uart_port *uport, const char *s,
+ }
+
+ writel(DEF_TX_WM, uport->membase + SE_GENI_TX_WATERMARK_REG);
++ writel(M_CMD_DONE_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ qcom_geni_serial_setup_tx(uport, bytes_to_send);
+ for (i = 0; i < count; ) {
+ size_t chars_to_write = 0;
+@@ -471,8 +479,6 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s,
+ struct qcom_geni_serial_port *port;
+ bool locked = true;
+ unsigned long flags;
+- u32 geni_status;
+- u32 irq_en;
+
+ WARN_ON(co->index < 0 || co->index >= GENI_UART_CONS_PORTS);
+
+@@ -486,40 +492,20 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s,
+ else
+ uart_port_lock_irqsave(uport, &flags);
+
+- geni_status = readl(uport->membase + SE_GENI_STATUS);
++ if (qcom_geni_serial_main_active(uport)) {
++ /* Wait for completion or drain FIFO */
++ if (!locked || port->tx_remaining == 0)
++ qcom_geni_serial_poll_tx_done(uport);
++ else
++ qcom_geni_serial_drain_fifo(uport);
+
+- if (!locked) {
+- /*
+- * We can only get here if an oops is in progress then we were
+- * unable to get the lock. This means we can't safely access
+- * our state variables like tx_remaining. About the best we
+- * can do is wait for the FIFO to be empty before we start our
+- * transfer, so we'll do that.
+- */
+- qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+- M_TX_FIFO_NOT_EMPTY_EN, false);
+- } else if ((geni_status & M_GENI_CMD_ACTIVE) && !port->tx_remaining) {
+- /*
+- * It seems we can't interrupt existing transfers if all data
+- * has been sent, in which case we need to look for done first.
+- */
+- qcom_geni_serial_poll_tx_done(uport);
+-
+- if (!kfifo_is_empty(&uport->state->port.xmit_fifo)) {
+- irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN);
+- writel(irq_en | M_TX_FIFO_WATERMARK_EN,
+- uport->membase + SE_GENI_M_IRQ_EN);
+- }
++ qcom_geni_serial_cancel_tx_cmd(uport);
+ }
+
+ __qcom_geni_serial_console_write(uport, s, count);
+
+-
+- if (locked) {
+- if (port->tx_remaining)
+- qcom_geni_serial_setup_tx(uport, port->tx_remaining);
++ if (locked)
+ uart_port_unlock_irqrestore(uport, flags);
+- }
+ }
+
+ static void handle_rx_console(struct uart_port *uport, u32 bytes, bool drop)
+@@ -700,6 +686,7 @@ static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport)
+ writel(M_CMD_CANCEL_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+
+ port->tx_remaining = 0;
++ port->tx_queued = 0;
+ }
+
+ static void qcom_geni_serial_handle_rx_fifo(struct uart_port *uport, bool drop)
+@@ -926,6 +913,7 @@ static void qcom_geni_serial_handle_tx_fifo(struct uart_port *uport,
+ if (!port->tx_remaining) {
+ qcom_geni_serial_setup_tx(uport, pending);
+ port->tx_remaining = pending;
++ port->tx_queued = 0;
+
+ irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN);
+ if (!(irq_en & M_TX_FIFO_WATERMARK_EN))
+@@ -934,6 +922,7 @@ static void qcom_geni_serial_handle_tx_fifo(struct uart_port *uport,
+ }
+
+ qcom_geni_serial_send_chunk_fifo(uport, chunk);
++ port->tx_queued += chunk;
+
+ /*
+ * The tx fifo watermark is level triggered and latched. Though we had
+@@ -1244,11 +1233,11 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ unsigned long clk_rate;
+ u32 ver, sampling_rate;
+ unsigned int avg_bw_core;
++ unsigned long timeout;
+
+ qcom_geni_serial_stop_rx(uport);
+ /* baud rate */
+ baud = uart_get_baud_rate(uport, termios, old, 300, 4000000);
+- port->baud = baud;
+
+ sampling_rate = UART_OVERSAMPLING;
+ /* Sampling rate is halved for IP versions >= 2.5 */
+@@ -1326,9 +1315,21 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ else
+ tx_trans_cfg |= UART_CTS_MASK;
+
+- if (baud)
++ if (baud) {
+ uart_update_timeout(uport, termios->c_cflag, baud);
+
++ /*
++ * Make sure that qcom_geni_serial_poll_bitfield() waits for
++ * the FIFO, two-word intermediate transfer register and shift
++ * register to clear.
++ *
++ * Note that uart_fifo_timeout() also adds a 20 ms margin.
++ */
++ timeout = jiffies_to_usecs(uart_fifo_timeout(uport));
++ timeout += 3 * timeout / port->tx_fifo_depth;
++ WRITE_ONCE(port->poll_timeout_us, timeout);
++ }
++
+ if (!uart_console(uport))
+ writel(port->loopback,
+ uport->membase + SE_UART_LOOPBACK_CFG);
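The qcom_geni change above replaces the per-poll timeout computation with a value precomputed in set_termios(). A back-of-the-envelope userspace sketch of the arithmetic, assuming 10 bits per character on the wire (8N1) and a 16-word FIFO, and omitting the extra 20 ms margin that uart_fifo_timeout() adds in the kernel:

#include <stdio.h>

int main(void)
{
	const unsigned int baud = 115200;
	const unsigned int fifo_depth = 16;
	const unsigned int bits_per_char = 10;	/* 8N1: start + 8 data + stop */

	/* time to drain a full FIFO, in microseconds */
	unsigned long timeout = (unsigned long)fifo_depth * bits_per_char *
				1000000UL / baud;
	/* margin for the two-word intermediate and shift registers,
	 * mirroring "timeout += 3 * timeout / port->tx_fifo_depth" */
	timeout += 3 * timeout / fifo_depth;

	printf("poll timeout ~%lu us\n", timeout);	/* ~1648 us */
	return 0;
}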
+diff --git a/drivers/tty/serial/rp2.c b/drivers/tty/serial/rp2.c
+index 4132fcff7d4e29..8bab2aedc4991f 100644
+--- a/drivers/tty/serial/rp2.c
++++ b/drivers/tty/serial/rp2.c
+@@ -577,8 +577,8 @@ static void rp2_reset_asic(struct rp2_card *card, unsigned int asic_id)
+ u32 clk_cfg;
+
+ writew(1, base + RP2_GLOBAL_CMD);
+- readw(base + RP2_GLOBAL_CMD);
+ msleep(100);
++ readw(base + RP2_GLOBAL_CMD);
+ writel(0, base + RP2_CLK_PRESCALER);
+
+ /* TDM clock configuration */
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 5bea3af46abcef..20e797c9e13ab8 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -2696,14 +2696,13 @@ static int uart_poll_init(struct tty_driver *driver, int line, char *options)
+ int ret = 0;
+
+ tport = &state->port;
+- mutex_lock(&tport->mutex);
++
++ guard(mutex)(&tport->mutex);
+
+ port = uart_port_check(state);
+ if (!port || port->type == PORT_UNKNOWN ||
+- !(port->ops->poll_get_char && port->ops->poll_put_char)) {
+- ret = -1;
+- goto out;
+- }
++ !(port->ops->poll_get_char && port->ops->poll_put_char))
++ return -1;
+
+ pm_state = state->pm_state;
+ uart_change_pm(state, UART_PM_STATE_ON);
+@@ -2723,10 +2722,10 @@ static int uart_poll_init(struct tty_driver *driver, int line, char *options)
+ ret = uart_set_options(port, NULL, baud, parity, bits, flow);
+ console_list_unlock();
+ }
+-out:
++
+ if (ret)
+ uart_change_pm(state, pm_state);
+- mutex_unlock(&tport->mutex);
++
+ return ret;
+ }
+
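The serial_core cleanup swaps manual mutex_lock()/mutex_unlock() plus a "goto out" label for the kernel's scope-based guard(mutex)() from linux/cleanup.h. A rough userspace analogue built on the same compiler cleanup attribute, assuming GCC or Clang and pthreads; the macro below is a stand-in, not the kernel's:

#include <pthread.h>
#include <stdio.h>

static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* declare a pointer whose scope exit unlocks the mutex */
#define guard_mutex(name, m) \
	pthread_mutex_t *name __attribute__((cleanup(unlock_cleanup))) = (m); \
	pthread_mutex_lock(name)

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int poll_init(int ok)
{
	guard_mutex(g, &lock);

	if (!ok)
		return -1;	/* early return still unlocks */

	printf("initialized\n");
	return 0;		/* unlock runs here too */
}

int main(void)
{
	poll_init(0);
	return poll_init(1);
}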
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index c87fdc849c627e..ecdfff2456e31d 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -93,7 +93,7 @@ static const struct __ufs_qcom_bw_table {
+ [MODE_HS_RB][UFS_HS_G3][UFS_LANE_2] = { 1492582, 204800 },
+ [MODE_HS_RB][UFS_HS_G4][UFS_LANE_2] = { 2915200, 409600 },
+ [MODE_HS_RB][UFS_HS_G5][UFS_LANE_2] = { 5836800, 819200 },
+- [MODE_MAX][0][0] = { 7643136, 307200 },
++ [MODE_MAX][0][0] = { 7643136, 819200 },
+ };
+
+ static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index dbd83d321bca01..46852529499d16 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -718,7 +718,8 @@ int cdnsp_remove_request(struct cdnsp_device *pdev,
+ seg = cdnsp_trb_in_td(pdev, cur_td->start_seg, cur_td->first_trb,
+ cur_td->last_trb, hw_deq);
+
+- if (seg && (pep->ep_state & EP_ENABLED))
++ if (seg && (pep->ep_state & EP_ENABLED) &&
++ !(pep->ep_state & EP_DIS_IN_RROGRESS))
+ cdnsp_find_new_dequeue_state(pdev, pep, preq->request.stream_id,
+ cur_td, &deq_state);
+ else
+@@ -736,7 +737,8 @@ int cdnsp_remove_request(struct cdnsp_device *pdev,
+ * During disconnecting, all endpoints will be disabled, so we don't
+ * have to worry about updating the dequeue pointer.
+ */
+- if (pdev->cdnsp_state & CDNSP_STATE_DISCONNECT_PENDING) {
++ if (pdev->cdnsp_state & CDNSP_STATE_DISCONNECT_PENDING ||
++ pep->ep_state & EP_DIS_IN_RROGRESS) {
+ status = -ESHUTDOWN;
+ ret = cdnsp_cmd_set_deq(pdev, pep, &deq_state);
+ }
+diff --git a/drivers/usb/cdns3/host.c b/drivers/usb/cdns3/host.c
+index ceca4d839dfd42..7ba760ee62e331 100644
+--- a/drivers/usb/cdns3/host.c
++++ b/drivers/usb/cdns3/host.c
+@@ -62,7 +62,9 @@ static const struct xhci_plat_priv xhci_plat_cdns3_xhci = {
+ .resume_quirk = xhci_cdns3_resume_quirk,
+ };
+
+-static const struct xhci_plat_priv xhci_plat_cdnsp_xhci;
++static const struct xhci_plat_priv xhci_plat_cdnsp_xhci = {
++ .quirks = XHCI_CDNS_SCTX_QUIRK,
++};
+
+ static int __cdns_host_init(struct cdns *cdns)
+ {
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 0c1b69d944ca45..605fea4611029b 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -962,10 +962,12 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ struct acm *acm = tty->driver_data;
+
+ ss->line = acm->minor;
++ mutex_lock(&acm->port.mutex);
+ ss->close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
+ ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+ ASYNC_CLOSING_WAIT_NONE :
+ jiffies_to_msecs(acm->port.closing_wait) / 10;
++ mutex_unlock(&acm->port.mutex);
+ return 0;
+ }
+
+diff --git a/drivers/usb/dwc2/drd.c b/drivers/usb/dwc2/drd.c
+index a8605b02115b1c..1ad8fa3f862a15 100644
+--- a/drivers/usb/dwc2/drd.c
++++ b/drivers/usb/dwc2/drd.c
+@@ -127,6 +127,15 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ role = USB_ROLE_DEVICE;
+ }
+
++ if ((IS_ENABLED(CONFIG_USB_DWC2_PERIPHERAL) ||
++ IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)) &&
++ dwc2_is_device_mode(hsotg) &&
++ hsotg->lx_state == DWC2_L2 &&
++ hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
++ hsotg->bus_suspended &&
++ !hsotg->params.no_clock_gating)
++ dwc2_gadget_exit_clock_gating(hsotg, 0);
++
+ if (role == USB_ROLE_HOST) {
+ already = dwc2_ovr_avalid(hsotg, true);
+ } else if (role == USB_ROLE_DEVICE) {
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index f37b0d8386c1a9..ff7bee78bcc492 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -1304,7 +1304,8 @@ static int dummy_urb_enqueue(
+
+ /* kick the scheduler, it'll do the rest */
+ if (!hrtimer_active(&dum_hcd->timer))
+- hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL);
++ hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
++ HRTIMER_MODE_REL_SOFT);
+
+ done:
+ spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);
+@@ -1325,7 +1326,7 @@ static int dummy_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ rc = usb_hcd_check_unlink_urb(hcd, urb, status);
+ if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING &&
+ !list_empty(&dum_hcd->urbp_list))
+- hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
++ hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
+
+ spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);
+ return rc;
+@@ -1995,7 +1996,8 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t)
+ dum_hcd->udev = NULL;
+ } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) {
+ /* want a 1 msec delay here */
+- hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL);
++ hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
++ HRTIMER_MODE_REL_SOFT);
+ }
+
+ spin_unlock_irqrestore(&dum->lock, flags);
+@@ -2389,7 +2391,7 @@ static int dummy_bus_resume(struct usb_hcd *hcd)
+ dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ set_link_state(dum_hcd);
+ if (!list_empty(&dum_hcd->urbp_list))
+- hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
++ hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
+ hcd->state = HC_STATE_RUNNING;
+ }
+ spin_unlock_irq(&dum_hcd->dum->lock);
+@@ -2467,7 +2469,7 @@ static DEVICE_ATTR_RO(urbs);
+
+ static int dummy_start_ss(struct dummy_hcd *dum_hcd)
+ {
+- hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ dum_hcd->timer.function = dummy_timer;
+ dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ dum_hcd->stream_en_ep = 0;
+@@ -2497,7 +2499,7 @@ static int dummy_start(struct usb_hcd *hcd)
+ return dummy_start_ss(dum_hcd);
+
+ spin_lock_init(&dum_hcd->dum->lock);
+- hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ dum_hcd->timer.function = dummy_timer;
+ dum_hcd->rh_state = DUMMY_RH_RUNNING;
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index dc1e345ab67ea8..994fd8b38bd015 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -55,6 +55,9 @@
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI 0x54ed
+
++#define PCI_VENDOR_ID_PHYTIUM 0x1db7
++#define PCI_DEVICE_ID_PHYTIUM_XHCI 0xdc27
++
+ /* Thunderbolt */
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI 0x15b5
+@@ -78,6 +81,9 @@
+ #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI 0x2142
+ #define PCI_DEVICE_ID_ASMEDIA_3242_XHCI 0x3242
+
++#define PCI_DEVICE_ID_CADENCE 0x17CD
++#define PCI_DEVICE_ID_CADENCE_SSP 0x0200
++
+ static const char hcd_name[] = "xhci_hcd";
+
+ static struct hc_driver __read_mostly xhci_pci_hc_driver;
+@@ -416,6 +422,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ if (pdev->vendor == PCI_VENDOR_ID_VIA)
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+
++ if (pdev->vendor == PCI_VENDOR_ID_PHYTIUM &&
++ pdev->device == PCI_DEVICE_ID_PHYTIUM_XHCI)
++ xhci->quirks |= XHCI_RESET_ON_RESUME;
++
+ /* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
+ if (pdev->vendor == PCI_VENDOR_ID_VIA &&
+ pdev->device == 0x3432)
+@@ -473,6 +483,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
+ }
+
++ if (pdev->vendor == PCI_DEVICE_ID_CADENCE &&
++ pdev->device == PCI_DEVICE_ID_CADENCE_SSP)
++ xhci->quirks |= XHCI_CDNS_SCTX_QUIRK;
++
+ /* xHC spec requires PCI devices to support D3hot and D3cold */
+ if (xhci->hci_version >= 0x120)
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+@@ -655,8 +669,10 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ static void xhci_pci_remove(struct pci_dev *dev)
+ {
+ struct xhci_hcd *xhci;
++ bool set_power_d3;
+
+ xhci = hcd_to_xhci(pci_get_drvdata(dev));
++ set_power_d3 = xhci->quirks & XHCI_SPURIOUS_WAKEUP;
+
+ xhci->xhc_state |= XHCI_STATE_REMOVING;
+
+@@ -669,11 +685,11 @@ static void xhci_pci_remove(struct pci_dev *dev)
+ xhci->shared_hcd = NULL;
+ }
+
++ usb_hcd_pci_remove(dev);
++
+ /* Workaround for spurious wakeups at shutdown with HSW */
+- if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
++ if (set_power_d3)
+ pci_set_power_state(dev, PCI_D3hot);
+-
+- usb_hcd_pci_remove(dev);
+ }
+
+ /*
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 4ea2c3e072a9e3..90726899bc5bcc 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1399,6 +1399,20 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ struct xhci_stream_ctx *ctx =
+ &ep->stream_info->stream_ctx_array[stream_id];
+ deq = le64_to_cpu(ctx->stream_ring) & SCTX_DEQ_MASK;
++
++ /*
++ * Cadence xHCI controllers store some endpoint state
++ * information within the Rsvd0 fields of the Stream
++ * Endpoint context. These fields are not cleared by the
++ * Set TR Dequeue Pointer command, which causes the XDMA
++ * to skip over the transfer ring and leads to data loss
++ * on the stream pipe.
++ * To fix this issue, the driver must clear the Rsvd0
++ * fields.
++ */
++ if (xhci->quirks & XHCI_CDNS_SCTX_QUIRK) {
++ ctx->reserved[0] = 0;
++ ctx->reserved[1] = 0;
++ }
+ } else {
+ deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK;
+ }
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ebd0afd59a60be..0ea95ad4cb9022 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1628,6 +1628,7 @@ struct xhci_hcd {
+ #define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(45)
+ #define XHCI_ZHAOXIN_HOST BIT_ULL(46)
+ #define XHCI_WRITE_64_HI_LO BIT_ULL(47)
++#define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
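XHCI_CDNS_SCTX_QUIRK follows the existing quirk convention: one bit in a 64-bit field, set once at probe time and tested with a plain mask at the point of use. A tiny standalone sketch of the pattern, with a stand-in struct rather than the real struct xhci_hcd:

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)		(1ULL << (n))
#define XHCI_CDNS_SCTX_QUIRK	BIT_ULL(48)	/* value mirrors the diff */

struct hcd { uint64_t quirks; };	/* stand-in, not struct xhci_hcd */

int main(void)
{
	struct hcd xhci = { 0 };

	xhci.quirks |= XHCI_CDNS_SCTX_QUIRK;	/* set in xhci_pci_quirks() */

	if (xhci.quirks & XHCI_CDNS_SCTX_QUIRK)	/* tested in Set TR Deq path */
		printf("clear Rsvd0 stream-context fields\n");
	return 0;
}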
+diff --git a/drivers/usb/misc/appledisplay.c b/drivers/usb/misc/appledisplay.c
+index c8098e9b432e13..62b5a30edc4267 100644
+--- a/drivers/usb/misc/appledisplay.c
++++ b/drivers/usb/misc/appledisplay.c
+@@ -107,7 +107,12 @@ static void appledisplay_complete(struct urb *urb)
+ case ACD_BTN_BRIGHT_UP:
+ case ACD_BTN_BRIGHT_DOWN:
+ pdata->button_pressed = 1;
+- schedule_delayed_work(&pdata->work, 0);
++ /*
++ * there is a window during probe in which
++ * the backlight device is not registered yet
++ */
++ if (pdata->bd)
++ schedule_delayed_work(&pdata->work, 0);
+ break;
+ case ACD_BTN_NONE:
+ default:
+@@ -202,6 +207,7 @@ static int appledisplay_probe(struct usb_interface *iface,
+ const struct usb_device_id *id)
+ {
+ struct backlight_properties props;
++ struct backlight_device *backlight;
+ struct appledisplay *pdata;
+ struct usb_device *udev = interface_to_usbdev(iface);
+ struct usb_endpoint_descriptor *endpoint;
+@@ -272,13 +278,14 @@ static int appledisplay_probe(struct usb_interface *iface,
+ memset(&props, 0, sizeof(struct backlight_properties));
+ props.type = BACKLIGHT_RAW;
+ props.max_brightness = 0xff;
+- pdata->bd = backlight_device_register(bl_name, NULL, pdata,
++ backlight = backlight_device_register(bl_name, NULL, pdata,
+ &appledisplay_bl_data, &props);
+- if (IS_ERR(pdata->bd)) {
++ if (IS_ERR(backlight)) {
+ dev_err(&iface->dev, "Backlight registration failed\n");
+- retval = PTR_ERR(pdata->bd);
++ retval = PTR_ERR(backlight);
+ goto error;
+ }
++ pdata->bd = backlight;
+
+ /* Try to get brightness */
+ brightness = appledisplay_bl_get_brightness(pdata->bd);
+diff --git a/drivers/usb/misc/cypress_cy7c63.c b/drivers/usb/misc/cypress_cy7c63.c
+index cecd7693b7413c..75f5a740cba397 100644
+--- a/drivers/usb/misc/cypress_cy7c63.c
++++ b/drivers/usb/misc/cypress_cy7c63.c
+@@ -88,6 +88,9 @@ static int vendor_command(struct cypress *dev, unsigned char request,
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+ address, data, iobuf, CYPRESS_MAX_REQSIZE,
+ USB_CTRL_GET_TIMEOUT);
++ /* we must not process garbage */
++ if (retval < 2)
++ goto err_buf;
+
+ /* store returned data (more READs to be added) */
+ switch (request) {
+@@ -107,6 +110,7 @@ static int vendor_command(struct cypress *dev, unsigned char request,
+ break;
+ }
+
++err_buf:
+ kfree(iobuf);
+ error:
+ return retval;
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 4745a320eae453..4a9859e03f6b4c 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -404,7 +404,6 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ struct usb_yurex *dev;
+ int len = 0;
+ char in_buffer[MAX_S64_STRLEN];
+- unsigned long flags;
+
+ dev = file->private_data;
+
+@@ -419,9 +418,9 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ return -EIO;
+ }
+
+- spin_lock_irqsave(&dev->lock, flags);
++ spin_lock_irq(&dev->lock);
+ scnprintf(in_buffer, MAX_S64_STRLEN, "%lld\n", dev->bbu);
+- spin_unlock_irqrestore(&dev->lock, flags);
++ spin_unlock_irq(&dev->lock);
+ mutex_unlock(&dev->io_mutex);
+
+ return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+@@ -511,8 +510,11 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ __func__, retval);
+ goto error;
+ }
+- if (set && timeout)
++ if (set && timeout) {
++ spin_lock_irq(&dev->lock);
+ dev->bbu = c2;
++ spin_unlock_irq(&dev->lock);
++ }
+ return timeout ? count : -EIO;
+
+ error:
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 17155ed17fdf84..8cc43c866130a3 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -38,6 +38,10 @@
+
+ void ucsi_notify_common(struct ucsi *ucsi, u32 cci)
+ {
++ /* Ignore bogus data in CCI if busy indicator is set. */
++ if (cci & UCSI_CCI_BUSY)
++ return;
++
+ if (UCSI_CCI_CONNECTOR(cci))
+ ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci));
+
+@@ -107,16 +111,14 @@ static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci,
+ size = clamp(size, 0, 16);
+
+ ret = ucsi->ops->sync_control(ucsi, command);
+- if (ret)
+- return ret;
++ if (ucsi->ops->read_cci(ucsi, cci))
++ return -EIO;
+
+- ret = ucsi->ops->read_cci(ucsi, cci);
++ if (*cci & UCSI_CCI_BUSY)
++ return ucsi_run_command(ucsi, UCSI_CANCEL, cci, NULL, 0, false) ?: -EBUSY;
+ if (ret)
+ return ret;
+
+- if (*cci & UCSI_CCI_BUSY)
+- return -EBUSY;
+-
+ if (!(*cci & UCSI_CCI_COMMAND_COMPLETE))
+ return -EIO;
+
+@@ -148,15 +150,7 @@ static int ucsi_read_error(struct ucsi *ucsi, u8 connector_num)
+ int ret;
+
+ command = UCSI_GET_ERROR_STATUS | UCSI_CONNECTOR_NUMBER(connector_num);
+- ret = ucsi_run_command(ucsi, command, &cci,
+- &error, sizeof(error), false);
+-
+- if (cci & UCSI_CCI_BUSY) {
+- ret = ucsi_run_command(ucsi, UCSI_CANCEL, &cci, NULL, 0, false);
+-
+- return ret ? ret : -EBUSY;
+- }
+-
++ ret = ucsi_run_command(ucsi, command, &cci, &error, sizeof(error), false);
+ if (ret < 0)
+ return ret;
+
+@@ -238,9 +232,8 @@ static int ucsi_send_command_common(struct ucsi *ucsi, u64 cmd,
+ mutex_lock(&ucsi->ppm_lock);
+
+ ret = ucsi_run_command(ucsi, cmd, &cci, data, size, conn_ack);
+- if (cci & UCSI_CCI_BUSY)
+- ret = ucsi_run_command(ucsi, UCSI_CANCEL, &cci, NULL, 0, false) ?: -EBUSY;
+- else if (cci & UCSI_CCI_ERROR)
++
++ if (cci & UCSI_CCI_ERROR)
+ ret = ucsi_read_error(ucsi, connector_num);
+
+ mutex_unlock(&ucsi->ppm_lock);
+@@ -1249,6 +1242,10 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+
+ mutex_lock(&con->lock);
+
++ if (!test_and_set_bit(EVENT_PENDING, &ucsi->flags))
++ dev_err_once(ucsi->dev, "%s entered without EVENT_PENDING\n",
++ __func__);
++
+ command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
+
+ ret = ucsi_send_command_common(ucsi, command, &con->status,
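The ucsi.c rework moves BUSY handling into ucsi_run_command() itself, returning "cancel-result ?: -EBUSY" so a failed UCSI_CANCEL reports its own error and a successful one reports -EBUSY. A userspace sketch of that GNU "a ?: b" idiom (assumes GCC or Clang; names and error values are illustrative):

#include <stdio.h>

#define EBUSY 16

static int cancel_cmd(int cancel_fails)
{
	return cancel_fails ? -5 : 0;	/* -EIO stand-in, or success */
}

static int run_command(int busy, int cancel_fails)
{
	if (busy)
		/* cancel's own error wins; otherwise report -EBUSY */
		return cancel_cmd(cancel_fails) ?: -EBUSY;
	return 0;
}

int main(void)
{
	printf("%d\n", run_command(1, 0));	/* -16: cancelled, -EBUSY */
	printf("%d\n", run_command(1, 1));	/* -5: cancel itself failed */
	printf("%d\n", run_command(0, 0));	/* 0 */
	return 0;
}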
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 4758914ccf8608..bf56f3d6962533 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -581,6 +581,9 @@ static void mlx5_vdpa_show_mr_leaks(struct mlx5_vdpa_dev *mvdev)
+
+ void mlx5_vdpa_destroy_mr_resources(struct mlx5_vdpa_dev *mvdev)
+ {
++ if (!mvdev->res.valid)
++ return;
++
+ for (int i = 0; i < MLX5_VDPA_NUM_AS; i++)
+ mlx5_vdpa_update_mr(mvdev, NULL, i);
+
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 478cd46a49ede2..5a49b5a6d49641 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -209,11 +209,9 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
+ if (irq < 0)
+ return;
+
+- irq_bypass_unregister_producer(&vq->call_ctx.producer);
+ if (!vq->call_ctx.ctx)
+ return;
+
+- vq->call_ctx.producer.token = vq->call_ctx.ctx;
+ vq->call_ctx.producer.irq = irq;
+ ret = irq_bypass_register_producer(&vq->call_ctx.producer);
+ if (unlikely(ret))
+@@ -709,6 +707,14 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
+ vq->last_avail_idx = vq_state.split.avail_index;
+ }
+ break;
++ case VHOST_SET_VRING_CALL:
++ if (vq->call_ctx.ctx) {
++ if (ops->get_status(vdpa) &
++ VIRTIO_CONFIG_S_DRIVER_OK)
++ vhost_vdpa_unsetup_vq_irq(v, idx);
++ vq->call_ctx.producer.token = NULL;
++ }
++ break;
+ }
+
+ r = vhost_vring_ioctl(&v->vdev, cmd, argp);
+@@ -747,13 +753,16 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
+ cb.callback = vhost_vdpa_virtqueue_cb;
+ cb.private = vq;
+ cb.trigger = vq->call_ctx.ctx;
++ vq->call_ctx.producer.token = vq->call_ctx.ctx;
++ if (ops->get_status(vdpa) &
++ VIRTIO_CONFIG_S_DRIVER_OK)
++ vhost_vdpa_setup_vq_irq(v, idx);
+ } else {
+ cb.callback = NULL;
+ cb.private = NULL;
+ cb.trigger = NULL;
+ }
+ ops->set_vq_cb(vdpa, idx, &cb);
+- vhost_vdpa_setup_vq_irq(v, idx);
+ break;
+
+ case VHOST_SET_VRING_NUM:
+@@ -1419,6 +1428,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
+ for (i = 0; i < nvqs; i++) {
+ vqs[i] = &v->vqs[i];
+ vqs[i]->handle_kick = handle_vq_kick;
++ vqs[i]->call_ctx.ctx = NULL;
+ }
+ vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false,
+ vhost_vdpa_process_iotlb_msg);
+diff --git a/drivers/video/fbdev/hpfb.c b/drivers/video/fbdev/hpfb.c
+index 66fac8e5393e08..a1144b15098266 100644
+--- a/drivers/video/fbdev/hpfb.c
++++ b/drivers/video/fbdev/hpfb.c
+@@ -345,6 +345,7 @@ static int hpfb_dio_probe(struct dio_dev *d, const struct dio_device_id *ent)
+ if (hpfb_init_one(paddr, vaddr)) {
+ if (d->scode >= DIOII_SCBASE)
+ iounmap((void *)vaddr);
++ release_mem_region(d->resource.start, resource_size(&d->resource));
+ return -ENOMEM;
+ }
+ return 0;
+diff --git a/drivers/video/fbdev/xen-fbfront.c b/drivers/video/fbdev/xen-fbfront.c
+index 66d4628a96ae04..c90f48ebb15e3f 100644
+--- a/drivers/video/fbdev/xen-fbfront.c
++++ b/drivers/video/fbdev/xen-fbfront.c
+@@ -407,6 +407,7 @@ static int xenfb_probe(struct xenbus_device *dev,
+ /* complete the abuse: */
+ fb_info->pseudo_palette = fb_info->par;
+ fb_info->par = info;
++ fb_info->device = &dev->dev;
+
+ fb_info->screen_buffer = info->fb;
+
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index bc1f962e483b98..b9095751e43bb7 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -127,10 +127,12 @@ static void __virtio_config_changed(struct virtio_device *dev)
+ {
+ struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
+
+- if (!dev->config_enabled)
++ if (!dev->config_core_enabled || dev->config_driver_disabled)
+ dev->config_change_pending = true;
+- else if (drv && drv->config_changed)
++ else if (drv && drv->config_changed) {
+ drv->config_changed(dev);
++ dev->config_change_pending = false;
++ }
+ }
+
+ void virtio_config_changed(struct virtio_device *dev)
+@@ -143,20 +145,51 @@ void virtio_config_changed(struct virtio_device *dev)
+ }
+ EXPORT_SYMBOL_GPL(virtio_config_changed);
+
+-static void virtio_config_disable(struct virtio_device *dev)
++/**
++ * virtio_config_driver_disable - disable config change reporting by drivers
++ * @dev: the virtio device
++ *
++ * This is only allowed to be called by a driver and disabling can't
++ * be nested.
++ */
++void virtio_config_driver_disable(struct virtio_device *dev)
+ {
+ spin_lock_irq(&dev->config_lock);
+- dev->config_enabled = false;
++ dev->config_driver_disabled = true;
+ spin_unlock_irq(&dev->config_lock);
+ }
++EXPORT_SYMBOL_GPL(virtio_config_driver_disable);
+
+-static void virtio_config_enable(struct virtio_device *dev)
++/**
++ * virtio_config_driver_enable - enable config change reporting by drivers
++ * @dev: the virtio device
++ *
++ * This is only allowed to be called by a driver and enabling can't
++ * be nested.
++ */
++void virtio_config_driver_enable(struct virtio_device *dev)
+ {
+ spin_lock_irq(&dev->config_lock);
+- dev->config_enabled = true;
++ dev->config_driver_disabled = false;
++ if (dev->config_change_pending)
++ __virtio_config_changed(dev);
++ spin_unlock_irq(&dev->config_lock);
++}
++EXPORT_SYMBOL_GPL(virtio_config_driver_enable);
++
++static void virtio_config_core_disable(struct virtio_device *dev)
++{
++ spin_lock_irq(&dev->config_lock);
++ dev->config_core_enabled = false;
++ spin_unlock_irq(&dev->config_lock);
++}
++
++static void virtio_config_core_enable(struct virtio_device *dev)
++{
++ spin_lock_irq(&dev->config_lock);
++ dev->config_core_enabled = true;
+ if (dev->config_change_pending)
+ __virtio_config_changed(dev);
+- dev->config_change_pending = false;
+ spin_unlock_irq(&dev->config_lock);
+ }
+
+@@ -316,7 +349,7 @@ static int virtio_dev_probe(struct device *_d)
+ if (drv->scan)
+ drv->scan(dev);
+
+- virtio_config_enable(dev);
++ virtio_config_core_enable(dev);
+
+ return 0;
+
+@@ -331,7 +364,7 @@ static void virtio_dev_remove(struct device *_d)
+ struct virtio_device *dev = dev_to_virtio(_d);
+ struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
+
+- virtio_config_disable(dev);
++ virtio_config_core_disable(dev);
+
+ drv->remove(dev);
+
+@@ -443,7 +476,7 @@ int register_virtio_device(struct virtio_device *dev)
+ goto out_ida_remove;
+
+ spin_lock_init(&dev->config_lock);
+- dev->config_enabled = false;
++ dev->config_core_enabled = false;
+ dev->config_change_pending = false;
+
+ INIT_LIST_HEAD(&dev->vqs);
+@@ -500,14 +533,14 @@ int virtio_device_freeze(struct virtio_device *dev)
+ struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
+ int ret;
+
+- virtio_config_disable(dev);
++ virtio_config_core_disable(dev);
+
+ dev->failed = dev->config->get_status(dev) & VIRTIO_CONFIG_S_FAILED;
+
+ if (drv && drv->freeze) {
+ ret = drv->freeze(dev);
+ if (ret) {
+- virtio_config_enable(dev);
++ virtio_config_core_enable(dev);
+ return ret;
+ }
+ }
+@@ -557,7 +590,7 @@ int virtio_device_restore(struct virtio_device *dev)
+ if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
+ virtio_device_ready(dev);
+
+- virtio_config_enable(dev);
++ virtio_config_core_enable(dev);
+
+ return 0;
+
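The virtio change splits the single config_enabled gate into a core gate and a driver gate; a notification arriving while either gate is closed is latched in config_change_pending and replayed when the gate reopens, and the flag is now cleared only where the callback actually runs. A userspace sketch of that state machine (locking elided, names illustrative):

#include <stdbool.h>
#include <stdio.h>

struct dev {
	bool core_enabled;
	bool driver_disabled;
	bool change_pending;
};

static void config_changed(struct dev *d)
{
	if (!d->core_enabled || d->driver_disabled) {
		d->change_pending = true;	/* latch while gated */
	} else {
		printf("driver->config_changed()\n");
		d->change_pending = false;	/* cleared only on delivery */
	}
}

static void driver_enable(struct dev *d)
{
	d->driver_disabled = false;
	if (d->change_pending)
		config_changed(d);	/* replay the missed event */
}

int main(void)
{
	struct dev d = { .core_enabled = true, .driver_disabled = true };

	config_changed(&d);	/* driver gate closed: latched only */
	driver_enable(&d);	/* replayed here */
	return 0;
}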
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index e51fe1b78518f4..d73076b686d8ca 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -216,29 +216,6 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ return devm_watchdog_register_device(dev, wdog);
+ }
+
+-static int __maybe_unused imx_sc_wdt_suspend(struct device *dev)
+-{
+- struct imx_sc_wdt_device *imx_sc_wdd = dev_get_drvdata(dev);
+-
+- if (watchdog_active(&imx_sc_wdd->wdd))
+- imx_sc_wdt_stop(&imx_sc_wdd->wdd);
+-
+- return 0;
+-}
+-
+-static int __maybe_unused imx_sc_wdt_resume(struct device *dev)
+-{
+- struct imx_sc_wdt_device *imx_sc_wdd = dev_get_drvdata(dev);
+-
+- if (watchdog_active(&imx_sc_wdd->wdd))
+- imx_sc_wdt_start(&imx_sc_wdd->wdd);
+-
+- return 0;
+-}
+-
+-static SIMPLE_DEV_PM_OPS(imx_sc_wdt_pm_ops,
+- imx_sc_wdt_suspend, imx_sc_wdt_resume);
+-
+ static const struct of_device_id imx_sc_wdt_dt_ids[] = {
+ { .compatible = "fsl,imx-sc-wdt", },
+ { /* sentinel */ }
+@@ -250,7 +227,6 @@ static struct platform_driver imx_sc_wdt_driver = {
+ .driver = {
+ .name = "imx-sc-wdt",
+ .of_match_table = imx_sc_wdt_dt_ids,
+- .pm = &imx_sc_wdt_pm_ops,
+ },
+ };
+ module_platform_driver(imx_sc_wdt_driver);
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index 35155258a7e2da..a337edcf8faf71 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
+ {
+ unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
+ unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
++ phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
+
+ next_bfn = pfn_to_bfn(xen_pfn);
+
++ /* If buffer is physically aligned, ensure DMA alignment. */
++ if (IS_ALIGNED(p, algn) &&
++ !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))
++ return 1;
++
+ for (i = 1; i < nr_pages; i++)
+ if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
+ return 1;
+@@ -141,7 +147,7 @@ xen_swiotlb_alloc_coherent(struct device *dev, size_t size,
+ void *ret;
+
+ /* Align the allocation to the Xen page size */
+- size = 1UL << (order + XEN_PAGE_SHIFT);
++ size = ALIGN(size, XEN_PAGE_SIZE);
+
+ ret = (void *)__get_free_pages(flags, get_order(size));
+ if (!ret)
+@@ -173,7 +179,7 @@ xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
+ int order = get_order(size);
+
+ /* Convert the size to actually allocated. */
+- size = 1UL << (order + XEN_PAGE_SHIFT);
++ size = ALIGN(size, XEN_PAGE_SIZE);
+
+ if (WARN_ON_ONCE(dma_handle + size - 1 > dev->coherent_dma_mask) ||
+ WARN_ON_ONCE(range_straddles_page_boundary(phys, size)))
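The new check in range_straddles_page_boundary() forces bouncing when a buffer that is naturally aligned in guest-physical space maps to a machine address that is not. A worked userspace example of the alignment test, assuming 4 KiB pages and a simplified get_order():

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12

/* log2 of the page count, rounded up, like the kernel helper */
static unsigned int get_order(uint64_t size)
{
	unsigned int order = 0;

	while ((4096ULL << order) < size)
		order++;
	return order;
}

int main(void)
{
	uint64_t size  = 16384;		/* 4 pages */
	uint64_t p     = 0x40000;	/* guest addr, 16 KiB-aligned */
	uint64_t maddr = 0x1000;	/* machine addr, only 4 KiB-aligned */
	uint64_t algn  = 1ULL << (get_order(size) + PAGE_SHIFT);

	if ((p % algn) == 0 && (maddr % algn) != 0)
		printf("misaligned for DMA: bounce via swiotlb\n");
	return 0;
}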
+diff --git a/fs/autofs/inode.c b/fs/autofs/inode.c
+index cf792d4de4f1b3..64faa6c51f60a5 100644
+--- a/fs/autofs/inode.c
++++ b/fs/autofs/inode.c
+@@ -172,8 +172,7 @@ static int autofs_parse_fd(struct fs_context *fc, struct autofs_sb_info *sbi,
+ ret = autofs_check_pipe(pipe);
+ if (ret < 0) {
+ errorf(fc, "Invalid/unusable pipe");
+- if (param->type != fs_value_is_file)
+- fput(pipe);
++ fput(pipe);
+ return -EBADF;
+ }
+
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index 3056c8aed8ef4f..2bc074035db4ad 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -152,6 +152,7 @@ struct btrfs_inode {
+ * logged_trans), to access/update delalloc_bytes, new_delalloc_bytes,
+ * defrag_bytes, disk_i_size, outstanding_extents, csum_bytes and to
+ * update the VFS' inode number of bytes used.
++ * Also protects setting struct file::private_data.
+ */
+ spinlock_t lock;
+
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index c8568b1a61c432..18554f34f2d30c 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -459,6 +459,8 @@ struct btrfs_file_private {
+ void *filldir_buf;
+ u64 last_index;
+ struct extent_state *llseek_cached_state;
++ /* Task that allocated this structure. */
++ struct task_struct *owner_task;
+ };
+
+ static inline u32 BTRFS_LEAF_DATA_SIZE(const struct btrfs_fs_info *info)
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index feec49e6f9c801..a5966324607d49 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6551,13 +6551,13 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ continue;
+
+ ret = btrfs_trim_free_extents(device, &group_trimmed);
++
++ trimmed += group_trimmed;
+ if (ret) {
+ dev_failed++;
+ dev_ret = ret;
+ break;
+ }
+-
+- trimmed += group_trimmed;
+ }
+ mutex_unlock(&fs_devices->device_list_mutex);
+
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 2aeb8116549ca9..d386f643079b58 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -3485,7 +3485,7 @@ static bool find_desired_extent_in_hole(struct btrfs_inode *inode, int whence,
+ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence)
+ {
+ struct btrfs_inode *inode = BTRFS_I(file->f_mapping->host);
+- struct btrfs_file_private *private = file->private_data;
++ struct btrfs_file_private *private;
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct extent_state *cached_state = NULL;
+ struct extent_state **delalloc_cached_state;
+@@ -3513,7 +3513,19 @@ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence)
+ inode_get_bytes(&inode->vfs_inode) == i_size)
+ return i_size;
+
+- if (!private) {
++ spin_lock(&inode->lock);
++ private = file->private_data;
++ spin_unlock(&inode->lock);
++
++ if (private && private->owner_task != current) {
++ /*
++ * Not allocated by us, don't use it as its cached state is used
++ * by the task that allocated it, and we want neither to mess
++ * with it nor to get incorrect results, because it reflects an
++ * invalid state for the current task.
++ */
++ private = NULL;
++ } else if (!private) {
+ private = kzalloc(sizeof(*private), GFP_KERNEL);
+ /*
+ * No worries if memory allocation failed.
+@@ -3521,7 +3533,23 @@ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence)
+ * lseek SEEK_HOLE/DATA calls to a file when there's delalloc,
+ * so everything will still be correct.
+ */
+- file->private_data = private;
++ if (private) {
++ bool free = false;
++
++ private->owner_task = current;
++
++ spin_lock(&inode->lock);
++ if (file->private_data)
++ free = true;
++ else
++ file->private_data = private;
++ spin_unlock(&inode->lock);
++
++ if (free) {
++ kfree(private);
++ private = NULL;
++ }
++ }
+ }
+
+ if (private)
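The btrfs fix publishes file->private_data under inode->lock: allocate outside the lock, install only if the slot is still empty, and free the copy that lost the race. A compact pthread sketch of that publication scheme (illustrative, not the kernel code):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct private { pthread_t owner; };	/* mirrors owner_task */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct private *slot;	/* stands in for file->private_data */

static void *install(void *arg)
{
	struct private *mine = calloc(1, sizeof(*mine));
	int lost = 0;

	mine->owner = pthread_self();
	pthread_mutex_lock(&lock);
	if (slot)
		lost = 1;	/* another racer already published */
	else
		slot = mine;
	pthread_mutex_unlock(&lock);

	if (lost)
		free(mine);	/* loser frees its own copy */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, install, NULL);
	pthread_create(&b, NULL, install, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("exactly one winner published: %p\n", (void *)slot);
	free(slot);
	return 0;
}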
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index e0a664b8a46a86..94d8f29b04c5b6 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -543,13 +543,11 @@ static noinline int btrfs_ioctl_fitrim(struct btrfs_fs_info *fs_info,
+
+ range.minlen = max(range.minlen, minlen);
+ ret = btrfs_trim_fs(fs_info, &range);
+- if (ret < 0)
+- return ret;
+
+ if (copy_to_user(arg, &range, sizeof(range)))
+ return -EFAULT;
+
+- return 0;
++ return ret;
+ }
+
+ int __pure btrfs_is_empty_uuid(const u8 *uuid)
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index 8ddd5fcbeb9391..5941b366270957 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -900,8 +900,14 @@ void btrfs_folio_end_all_writers(const struct btrfs_fs_info *fs_info, struct fol
+ }
+
+ #define GET_SUBPAGE_BITMAP(subpage, subpage_info, name, dst) \
+- bitmap_cut(dst, subpage->bitmaps, 0, \
+- subpage_info->name##_offset, subpage_info->bitmap_nr_bits)
++{ \
++ const int bitmap_nr_bits = subpage_info->bitmap_nr_bits; \
++ \
++ ASSERT(bitmap_nr_bits < BITS_PER_LONG); \
++ *dst = bitmap_read(subpage->bitmaps, \
++ subpage_info->name##_offset, \
++ bitmap_nr_bits); \
++}
+
+ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ struct folio *folio, u64 start, u32 len)
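GET_SUBPAGE_BITMAP now uses bitmap_read() to pull a range of bits into a single word instead of bitmap_cut(). A userspace sketch with a simplified one-word bitmap_read() stand-in, matching the new ASSERT that the range fits in one unsigned long:

#include <assert.h>
#include <stdio.h>

/* simplified stand-in for the kernel helper: single-word ranges only */
static unsigned long bitmap_read(const unsigned long *map,
				 unsigned int start, unsigned int nbits)
{
	assert(nbits < sizeof(unsigned long) * 8);
	assert(start + nbits <= sizeof(unsigned long) * 8);
	return (map[0] >> start) & ((1UL << nbits) - 1);
}

int main(void)
{
	/* six bitmap bits placed at offset 8, as if name##_offset == 8 */
	unsigned long bitmaps[1] = { 0x2dUL << 8 };
	unsigned long dst = bitmap_read(bitmaps, 8, 6);

	printf("0x%lx\n", dst);	/* 0x2d */
	return 0;
}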
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 634d69964fe4c1..7b50263723bc1a 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1517,7 +1517,7 @@ static int check_extent_item(struct extent_buffer *leaf,
+ dref_objectid > BTRFS_LAST_FREE_OBJECTID)) {
+ extent_err(leaf, slot,
+ "invalid data ref objectid value %llu",
+- dref_root);
++ dref_objectid);
+ return -EUCLEAN;
+ }
+ if (unlikely(!IS_ALIGNED(dref_offset,
+diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
+index 4dd8a993c60a8b..7c6f260a3be567 100644
+--- a/fs/cachefiles/xattr.c
++++ b/fs/cachefiles/xattr.c
+@@ -64,9 +64,15 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
+ memcpy(buf->data, fscache_get_aux(object->cookie), len);
+
+ ret = cachefiles_inject_write_error();
+- if (ret == 0)
+- ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache,
+- buf, sizeof(struct cachefiles_xattr) + len, 0);
++ if (ret == 0) {
++ ret = mnt_want_write_file(file);
++ if (ret == 0) {
++ ret = vfs_setxattr(&nop_mnt_idmap, dentry,
++ cachefiles_xattr_cache, buf,
++ sizeof(struct cachefiles_xattr) + len, 0);
++ mnt_drop_write_file(file);
++ }
++ }
+ if (ret < 0) {
+ trace_cachefiles_vfs_error(object, file_inode(file), ret,
+ cachefiles_trace_setxattr_error);
+@@ -151,8 +157,14 @@ int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
+ int ret;
+
+ ret = cachefiles_inject_remove_error();
+- if (ret == 0)
+- ret = vfs_removexattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache);
++ if (ret == 0) {
++ ret = mnt_want_write(cache->mnt);
++ if (ret == 0) {
++ ret = vfs_removexattr(&nop_mnt_idmap, dentry,
++ cachefiles_xattr_cache);
++ mnt_drop_write(cache->mnt);
++ }
++ }
+ if (ret < 0) {
+ trace_cachefiles_vfs_error(object, d_inode(dentry), ret,
+ cachefiles_trace_remxattr_error);
+@@ -208,9 +220,15 @@ bool cachefiles_set_volume_xattr(struct cachefiles_volume *volume)
+ memcpy(buf->data, p, volume->vcookie->coherency_len);
+
+ ret = cachefiles_inject_write_error();
+- if (ret == 0)
+- ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache,
+- buf, len, 0);
++ if (ret == 0) {
++ ret = mnt_want_write(volume->cache->mnt);
++ if (ret == 0) {
++ ret = vfs_setxattr(&nop_mnt_idmap, dentry,
++ cachefiles_xattr_cache,
++ buf, len, 0);
++ mnt_drop_write(volume->cache->mnt);
++ }
++ }
+ if (ret < 0) {
+ trace_cachefiles_vfs_error(NULL, d_inode(dentry), ret,
+ cachefiles_trace_setxattr_error);
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 91521576f5003e..66d9b3b4c5881d 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -89,12 +89,14 @@ enum {
+ Opt_uid,
+ Opt_gid,
+ Opt_mode,
++ Opt_source,
+ };
+
+ static const struct fs_parameter_spec debugfs_param_specs[] = {
+ fsparam_gid ("gid", Opt_gid),
+ fsparam_u32oct ("mode", Opt_mode),
+ fsparam_uid ("uid", Opt_uid),
++ fsparam_string ("source", Opt_source),
+ {}
+ };
+
+@@ -126,6 +128,12 @@ static int debugfs_parse_param(struct fs_context *fc, struct fs_parameter *param
+ case Opt_mode:
+ opts->mode = result.uint_32 & S_IALLUGO;
+ break;
++ case Opt_source:
++ if (fc->source)
++ return invalfc(fc, "Multiple sources specified");
++ fc->source = param->string;
++ param->string = NULL;
++ break;
+ /*
+ * We might like to report bad mount options here;
+ * but traditionally debugfs has ignored all mount options
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index c2253b6a54164d..eb318c7ddd80ed 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -539,7 +539,7 @@ int __init z_erofs_init_decompressor(void)
+ for (i = 0; i < Z_EROFS_COMPRESSION_MAX; ++i) {
+ err = z_erofs_decomp[i] ? z_erofs_decomp[i]->init() : 0;
+ if (err) {
+- while (--i)
++ while (i--)
+ if (z_erofs_decomp[i])
+ z_erofs_decomp[i]->exit();
+ return err;
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 419432be3223b7..4e3ea6689cb4ea 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -178,12 +178,14 @@ static int erofs_fill_symlink(struct inode *inode, void *kaddr,
+ unsigned int m_pofs)
+ {
+ struct erofs_inode *vi = EROFS_I(inode);
+- unsigned int bsz = i_blocksize(inode);
++ loff_t off;
+ char *lnk;
+
+- /* if it cannot be handled with fast symlink scheme */
+- if (vi->datalayout != EROFS_INODE_FLAT_INLINE ||
+- inode->i_size >= bsz || inode->i_size < 0) {
++ m_pofs += vi->xattr_isize;
++ /* check if it cannot be handled with fast symlink scheme */
++ if (vi->datalayout != EROFS_INODE_FLAT_INLINE || inode->i_size < 0 ||
++ check_add_overflow(m_pofs, inode->i_size, &off) ||
++ off > i_blocksize(inode)) {
+ inode->i_op = &erofs_symlink_iops;
+ return 0;
+ }
+@@ -192,16 +194,6 @@ static int erofs_fill_symlink(struct inode *inode, void *kaddr,
+ if (!lnk)
+ return -ENOMEM;
+
+- m_pofs += vi->xattr_isize;
+- /* inline symlink data shouldn't cross block boundary */
+- if (m_pofs + inode->i_size > bsz) {
+- kfree(lnk);
+- erofs_err(inode->i_sb,
+- "inline data cross block boundary @ nid %llu",
+- vi->nid);
+- DBG_BUGON(1);
+- return -EFSCORRUPTED;
+- }
+ memcpy(lnk, kaddr + m_pofs, inode->i_size);
+ lnk[inode->i_size] = '\0';
+
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 424f656cd765e2..a0bae499c5ff65 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1428,6 +1428,7 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ struct z_erofs_bvec zbv;
+ struct address_space *mapping;
+ struct folio *folio;
++ struct page *page;
+ int bs = i_blocksize(f->inode);
+
+ /* Except for inplace folios, the entire folio can be used for I/Os */
+@@ -1450,7 +1451,6 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ * file-backed folios will be used instead.
+ */
+ if (folio->private == (void *)Z_EROFS_PREALLOCATED_PAGE) {
+- folio->private = 0;
+ tocache = true;
+ goto out_tocache;
+ }
+@@ -1468,7 +1468,7 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ }
+
+ folio_lock(folio);
+- if (folio->mapping == mc) {
++ if (likely(folio->mapping == mc)) {
+ /*
+ * The cached folio is still in managed cache but without
+ * a valid `->private` pcluster hint. Let's reconnect them.
+@@ -1478,41 +1478,44 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ /* compressed_bvecs[] already takes a ref before */
+ folio_put(folio);
+ }
+-
+- /* no need to submit if it is already up-to-date */
+- if (folio_test_uptodate(folio)) {
+- folio_unlock(folio);
+- bvec->bv_page = NULL;
++ if (likely(folio->private == pcl)) {
++ /* don't submit cache I/Os again if already uptodate */
++ if (folio_test_uptodate(folio)) {
++ folio_unlock(folio);
++ bvec->bv_page = NULL;
++ }
++ return;
+ }
+- return;
++ /*
++ * Already linked with another pcluster, which only appears in
++ * crafted images by fuzzers for now. But handle this anyway.
++ */
++ tocache = false; /* use temporary short-lived pages */
++ } else {
++ DBG_BUGON(1); /* referenced managed folios can't be truncated */
++ tocache = true;
+ }
+-
+- /*
+- * It has been truncated, so it's unsafe to reuse this one. Let's
+- * allocate a new page for compressed data.
+- */
+- DBG_BUGON(folio->mapping);
+- tocache = true;
+ folio_unlock(folio);
+ folio_put(folio);
+ out_allocfolio:
+- zbv.page = erofs_allocpage(&f->pagepool, gfp | __GFP_NOFAIL);
++ page = erofs_allocpage(&f->pagepool, gfp | __GFP_NOFAIL);
+ spin_lock(&pcl->obj.lockref.lock);
+- if (pcl->compressed_bvecs[nr].page) {
+- erofs_pagepool_add(&f->pagepool, zbv.page);
++ if (unlikely(pcl->compressed_bvecs[nr].page != zbv.page)) {
++ erofs_pagepool_add(&f->pagepool, page);
+ spin_unlock(&pcl->obj.lockref.lock);
+ cond_resched();
+ goto repeat;
+ }
+- bvec->bv_page = pcl->compressed_bvecs[nr].page = zbv.page;
+- folio = page_folio(zbv.page);
+- /* first mark it as a temporary shortlived folio (now 1 ref) */
+- folio->private = (void *)Z_EROFS_SHORTLIVED_PAGE;
++ bvec->bv_page = pcl->compressed_bvecs[nr].page = page;
++ folio = page_folio(page);
+ spin_unlock(&pcl->obj.lockref.lock);
+ out_tocache:
+ if (!tocache || bs != PAGE_SIZE ||
+- filemap_add_folio(mc, folio, pcl->obj.index + nr, gfp))
++ filemap_add_folio(mc, folio, pcl->obj.index + nr, gfp)) {
++ /* turn into a temporary shortlived folio (1 ref) */
++ folio->private = (void *)Z_EROFS_SHORTLIVED_PAGE;
+ return;
++ }
+ folio_attach_private(folio, pcl);
+ /* drop a refcount added by allocpage (then 2 refs in total here) */
+ folio_put(folio);
+@@ -1647,13 +1650,10 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ cur = mdev.m_pa;
+ end = cur + pcl->pclustersize;
+ do {
+- z_erofs_fill_bio_vec(&bvec, f, pcl, i++, mc);
+- if (!bvec.bv_page)
+- continue;
+-
++ bvec.bv_page = NULL;
+ if (bio && (cur != last_pa ||
+ bio->bi_bdev != mdev.m_bdev)) {
+-io_retry:
++drain_io:
+ if (!erofs_is_fscache_mode(sb))
+ submit_bio(bio);
+ else
+@@ -1666,6 +1666,15 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ bio = NULL;
+ }
+
++ if (!bvec.bv_page) {
++ z_erofs_fill_bio_vec(&bvec, f, pcl, i++, mc);
++ if (!bvec.bv_page)
++ continue;
++ if (cur + bvec.bv_len > end)
++ bvec.bv_len = end - cur;
++ DBG_BUGON(bvec.bv_len < sb->s_blocksize);
++ }
++
+ if (unlikely(PageWorkingset(bvec.bv_page)) &&
+ !memstall) {
+ psi_memstall_enter(&pflags);
+@@ -1685,13 +1694,9 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ ++nr_bios;
+ }
+
+- if (cur + bvec.bv_len > end)
+- bvec.bv_len = end - cur;
+- DBG_BUGON(bvec.bv_len < sb->s_blocksize);
+ if (!bio_add_page(bio, bvec.bv_page, bvec.bv_len,
+ bvec.bv_offset))
+- goto io_retry;
+-
++ goto drain_io;
+ last_pa = cur + bvec.bv_len;
+ bypass = false;
+ } while ((cur += bvec.bv_len) < end);
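+Two ordering changes in the submission loop above are easy to miss: the
+bio vector is only filled after any incompatible in-flight bio has been
+drained (io_retry is renamed drain_io to reflect that), and the length
+clamp now happens when the vector is first filled, before the initial
+bio_add_page() attempt:
+
+	if (cur + bvec.bv_len > end)
+		bvec.bv_len = end - cur;	/* never read past the pcluster */
+	DBG_BUGON(bvec.bv_len < sb->s_blocksize);
+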
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index f53ca4f7fceddd..6d0e2f547ae7d1 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -420,7 +420,7 @@ static bool busy_loop_ep_timeout(unsigned long start_time,
+
+ static bool ep_busy_loop_on(struct eventpoll *ep)
+ {
+- return !!ep->busy_poll_usecs || net_busy_loop_on();
++ return !!READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on();
+ }
+
+ static bool ep_busy_loop_end(void *p, unsigned long start_time)
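+ep->busy_poll_usecs can be updated while other threads poll, so the
+lockless test needs READ_ONCE() to avoid a torn or repeated load. The
+writer side would pair with WRITE_ONCE(); a sketch of the pairing
+(assuming the update comes from the epoll ioctl path):
+
+	/* writer */
+	WRITE_ONCE(ep->busy_poll_usecs, usecs);
+
+	/* lockless reader */
+	if (READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on())
+		busy_poll();	/* hypothetical stand-in for the busy loop */
+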
+diff --git a/fs/exfat/nls.c b/fs/exfat/nls.c
+index afdf13c34ff526..1ac011088ce76e 100644
+--- a/fs/exfat/nls.c
++++ b/fs/exfat/nls.c
+@@ -779,8 +779,11 @@ int exfat_create_upcase_table(struct super_block *sb)
+ le32_to_cpu(ep->dentry.upcase.checksum));
+
+ brelse(bh);
+- if (ret && ret != -EIO)
++ if (ret && ret != -EIO) {
++ /* free memory from exfat_load_upcase_table call */
++ exfat_free_upcase_table(sbi);
+ goto load_default;
++ }
+
+ /* load successfully */
+ return ret;
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 9dfd768ed9f8d9..81641be38c0e8b 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -514,6 +514,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ if (min_inodes < 1)
+ min_inodes = 1;
+ min_clusters = avefreec - EXT4_CLUSTERS_PER_GROUP(sb)*flex_size / 4;
++ if (min_clusters < 0)
++ min_clusters = 0;
+
+ /*
+ * Start looking in the flex group where we last allocated an
+@@ -755,10 +757,10 @@ int ext4_mark_inode_used(struct super_block *sb, int ino)
+ struct ext4_group_desc *gdp;
+ ext4_group_t group;
+ int bit;
+- int err = -EFSCORRUPTED;
++ int err;
+
+ if (ino < EXT4_FIRST_INO(sb) || ino > max_ino)
+- goto out;
++ return -EFSCORRUPTED;
+
+ group = (ino - 1) / EXT4_INODES_PER_GROUP(sb);
+ bit = (ino - 1) % EXT4_INODES_PER_GROUP(sb);
+@@ -860,6 +862,7 @@ int ext4_mark_inode_used(struct super_block *sb, int ino)
+ err = ext4_handle_dirty_metadata(NULL, NULL, group_desc_bh);
+ sync_dirty_buffer(group_desc_bh);
+ out:
++ brelse(inode_bitmap_bh);
+ return err;
+ }
+
+@@ -1053,12 +1056,13 @@ struct inode *__ext4_new_inode(struct mnt_idmap *idmap,
+ brelse(inode_bitmap_bh);
+ inode_bitmap_bh = ext4_read_inode_bitmap(sb, group);
+ /* Skip groups with suspicious inode tables */
+- if (((!(sbi->s_mount_state & EXT4_FC_REPLAY))
+- && EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) ||
+- IS_ERR(inode_bitmap_bh)) {
++ if (IS_ERR(inode_bitmap_bh)) {
+ inode_bitmap_bh = NULL;
+ goto next_group;
+ }
++ if (!(sbi->s_mount_state & EXT4_FC_REPLAY) &&
++ EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
++ goto next_group;
+
+ repeat_in_this_group:
+ ret2 = find_inode_bit(sb, group, inode_bitmap_bh, &ino);
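+Three related fixes in the ialloc hunks: min_clusters is clamped so the
+subtraction cannot go negative on a nearly full filesystem, the early
+-EFSCORRUPTED case returns directly instead of jumping to the out: label
+(which now does brelse() on a buffer head that would be uninitialized at
+that point), and the corrupt-group case no longer overwrites, and thus
+leaks, a successfully read bitmap buffer. The clamp is the usual
+signed-underflow guard:
+
+	min_clusters = avefreec - EXT4_CLUSTERS_PER_GROUP(sb) * flex_size / 4;
+	if (min_clusters < 0)
+		min_clusters = 0;	/* never require negative free space */
+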
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index e7a09a99837b96..44a5f6df59ecda 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1664,24 +1664,36 @@ struct buffer_head *ext4_find_inline_entry(struct inode *dir,
+ struct ext4_dir_entry_2 **res_dir,
+ int *has_inline_data)
+ {
++ struct ext4_xattr_ibody_find is = {
++ .s = { .not_found = -ENODATA, },
++ };
++ struct ext4_xattr_info i = {
++ .name_index = EXT4_XATTR_INDEX_SYSTEM,
++ .name = EXT4_XATTR_SYSTEM_DATA,
++ };
+ int ret;
+- struct ext4_iloc iloc;
+ void *inline_start;
+ int inline_size;
+
+- if (ext4_get_inode_loc(dir, &iloc))
+- return NULL;
++ ret = ext4_get_inode_loc(dir, &is.iloc);
++ if (ret)
++ return ERR_PTR(ret);
+
+ down_read(&EXT4_I(dir)->xattr_sem);
++
++ ret = ext4_xattr_ibody_find(dir, &i, &is);
++ if (ret)
++ goto out;
++
+ if (!ext4_has_inline_data(dir)) {
+ *has_inline_data = 0;
+ goto out;
+ }
+
+- inline_start = (void *)ext4_raw_inode(&iloc)->i_block +
++ inline_start = (void *)ext4_raw_inode(&is.iloc)->i_block +
+ EXT4_INLINE_DOTDOT_SIZE;
+ inline_size = EXT4_MIN_INLINE_DATA_SIZE - EXT4_INLINE_DOTDOT_SIZE;
+- ret = ext4_search_dir(iloc.bh, inline_start, inline_size,
++ ret = ext4_search_dir(is.iloc.bh, inline_start, inline_size,
+ dir, fname, 0, res_dir);
+ if (ret == 1)
+ goto out_find;
+@@ -1691,20 +1703,23 @@ struct buffer_head *ext4_find_inline_entry(struct inode *dir,
+ if (ext4_get_inline_size(dir) == EXT4_MIN_INLINE_DATA_SIZE)
+ goto out;
+
+- inline_start = ext4_get_inline_xattr_pos(dir, &iloc);
++ inline_start = ext4_get_inline_xattr_pos(dir, &is.iloc);
+ inline_size = ext4_get_inline_size(dir) - EXT4_MIN_INLINE_DATA_SIZE;
+
+- ret = ext4_search_dir(iloc.bh, inline_start, inline_size,
++ ret = ext4_search_dir(is.iloc.bh, inline_start, inline_size,
+ dir, fname, 0, res_dir);
+ if (ret == 1)
+ goto out_find;
+
+ out:
+- brelse(iloc.bh);
+- iloc.bh = NULL;
++ brelse(is.iloc.bh);
++ if (ret < 0)
++ is.iloc.bh = ERR_PTR(ret);
++ else
++ is.iloc.bh = NULL;
+ out_find:
+ up_read(&EXT4_I(dir)->xattr_sem);
+- return iloc.bh;
++ return is.iloc.bh;
+ }
+
+ int ext4_delete_inline_entry(handle_t *handle,
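+ext4_find_inline_entry() now distinguishes hard failures from "not
+found" by returning ERR_PTR(err) instead of collapsing everything into
+NULL. Callers are expected to follow the usual convention (the caller
+shape below is an assumption):
+
+	bh = ext4_find_inline_entry(dir, fname, res_dir, &has_inline_data);
+	if (IS_ERR(bh))
+		return PTR_ERR(bh);	/* I/O or xattr lookup error */
+	if (bh == NULL)
+		goto not_found;		/* inline area searched, no match */
+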
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 9dda9cd68ab2f5..dfecd25cee4eae 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3887,11 +3887,8 @@ static void ext4_free_data_in_buddy(struct super_block *sb,
+ /*
+ * Clear the trimmed flag for the group so that the next
+ * ext4_trim_fs can trim it.
+- * If the volume is mounted with -o discard, online discard
+- * is supported and the free blocks will be trimmed online.
+ */
+- if (!test_opt(sb, DISCARD))
+- EXT4_MB_GRP_CLEAR_TRIMMED(db);
++ EXT4_MB_GRP_CLEAR_TRIMMED(db);
+
+ if (!db->bb_free_root.rb_node) {
+ /* No more items in the per group rb tree
+@@ -6515,8 +6512,9 @@ static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+ " group:%u block:%d count:%lu failed"
+ " with %d", block_group, bit, count,
+ err);
+- } else
+- EXT4_MB_GRP_CLEAR_TRIMMED(e4b.bd_info);
++ }
++
++ EXT4_MB_GRP_CLEAR_TRIMMED(e4b.bd_info);
+
+ ext4_lock_group(sb, block_group);
+ mb_free_blocks(inode, &e4b, bit, count_clusters);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index e72145c4ae5a07..7e73e13741d1e2 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5165,6 +5165,18 @@ static int ext4_block_group_meta_init(struct super_block *sb, int silent)
+ return 0;
+ }
+
++/*
++ * It's hard to get stripe-aligned blocks if the stripe is not aligned
++ * with the cluster size, so just disable the stripe and alert the user,
++ * simplifying the code and avoiding allocations that will rarely succeed.
++ */
++static bool ext4_is_stripe_incompatible(struct super_block *sb, unsigned long stripe)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ return (stripe > 0 && sbi->s_cluster_ratio > 1 &&
++ stripe % sbi->s_cluster_ratio != 0);
++}
++
+ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ {
+ struct ext4_super_block *es = NULL;
+@@ -5272,13 +5284,7 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ goto failed_mount3;
+
+ sbi->s_stripe = ext4_get_stripe_size(sbi);
+- /*
+- * It's hard to get stripe aligned blocks if stripe is not aligned with
+- * cluster, just disable stripe and alert user to simpfy code and avoid
+- * stripe aligned allocation which will rarely successes.
+- */
+- if (sbi->s_stripe > 0 && sbi->s_cluster_ratio > 1 &&
+- sbi->s_stripe % sbi->s_cluster_ratio != 0) {
++ if (ext4_is_stripe_incompatible(sb, sbi->s_stripe)) {
+ ext4_msg(sb, KERN_WARNING,
+ "stripe (%lu) is not aligned with cluster size (%u), "
+ "stripe is disabled",
+@@ -6441,6 +6447,15 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+
+ }
+
++ if ((ctx->spec & EXT4_SPEC_s_stripe) &&
++ ext4_is_stripe_incompatible(sb, ctx->s_stripe)) {
++ ext4_msg(sb, KERN_WARNING,
++ "stripe (%lu) is not aligned with cluster size (%u), "
++ "stripe is disabled",
++ ctx->s_stripe, sbi->s_cluster_ratio);
++ ctx->s_stripe = 0;
++ }
++
+ /*
+ * Changing the DIOREAD_NOLOCK or DELALLOC mount options may cause
+ * two calls to ext4_should_dioread_nolock() to return inconsistent
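+With the alignment test factored into ext4_is_stripe_incompatible(),
+remount now enforces the same rule as the initial mount: an s_stripe
+that is not a multiple of the cluster ratio is zeroed with a warning on
+both paths instead of silently surviving a remount. The call shape:
+
+	if (ext4_is_stripe_incompatible(sb, stripe)) {
+		ext4_msg(sb, KERN_WARNING, "stripe disabled: not cluster aligned");
+		stripe = 0;
+	}
+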
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 990b93689b460b..f55d54bb12f422 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -945,7 +945,7 @@ static int __f2fs_get_cluster_blocks(struct inode *inode,
+ unsigned int cluster_size = F2FS_I(inode)->i_cluster_size;
+ int count, i;
+
+- for (i = 1, count = 1; i < cluster_size; i++) {
++ for (i = 0, count = 0; i < cluster_size; i++) {
+ block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
+ dn->ofs_in_node + i);
+
+@@ -956,8 +956,8 @@ static int __f2fs_get_cluster_blocks(struct inode *inode,
+ return count;
+ }
+
+-static int __f2fs_cluster_blocks(struct inode *inode,
+- unsigned int cluster_idx, bool compr_blks)
++static int __f2fs_cluster_blocks(struct inode *inode, unsigned int cluster_idx,
++ enum cluster_check_type type)
+ {
+ struct dnode_of_data dn;
+ unsigned int start_idx = cluster_idx <<
+@@ -978,10 +978,12 @@ static int __f2fs_cluster_blocks(struct inode *inode,
+ }
+
+ if (dn.data_blkaddr == COMPRESS_ADDR) {
+- if (compr_blks)
+- ret = __f2fs_get_cluster_blocks(inode, &dn);
+- else
++ if (type == CLUSTER_COMPR_BLKS)
++ ret = 1 + __f2fs_get_cluster_blocks(inode, &dn);
++ else if (type == CLUSTER_IS_COMPR)
+ ret = 1;
++ } else if (type == CLUSTER_RAW_BLKS) {
++ ret = __f2fs_get_cluster_blocks(inode, &dn);
+ }
+ fail:
+ f2fs_put_dnode(&dn);
+@@ -991,7 +993,16 @@ static int __f2fs_cluster_blocks(struct inode *inode,
+ /* return # of compressed blocks in compressed cluster */
+ static int f2fs_compressed_blocks(struct compress_ctx *cc)
+ {
+- return __f2fs_cluster_blocks(cc->inode, cc->cluster_idx, true);
++ return __f2fs_cluster_blocks(cc->inode, cc->cluster_idx,
++ CLUSTER_COMPR_BLKS);
++}
++
++/* return # of raw blocks in non-compressed cluster */
++static int f2fs_decompressed_blocks(struct inode *inode,
++ unsigned int cluster_idx)
++{
++ return __f2fs_cluster_blocks(inode, cluster_idx,
++ CLUSTER_RAW_BLKS);
+ }
+
+ /* return whether cluster is compressed one or not */
+@@ -999,7 +1010,16 @@ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index)
+ {
+ return __f2fs_cluster_blocks(inode,
+ index >> F2FS_I(inode)->i_log_cluster_size,
+- false);
++ CLUSTER_IS_COMPR);
++}
++
++/* return whether cluster contains non raw blocks or not */
++bool f2fs_is_sparse_cluster(struct inode *inode, pgoff_t index)
++{
++ unsigned int cluster_idx = index >> F2FS_I(inode)->i_log_cluster_size;
++
++ return f2fs_decompressed_blocks(inode, cluster_idx) !=
++ F2FS_I(inode)->i_cluster_size;
+ }
+
+ static bool cluster_may_compress(struct compress_ctx *cc)
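+The bool flag of __f2fs_cluster_blocks() becomes a three-way enum:
+CLUSTER_IS_COMPR answers "is this cluster compressed at all",
+CLUSTER_COMPR_BLKS counts valid blocks in a compressed cluster (the
+"1 +" re-adds the COMPRESS_ADDR slot now that the inner loop starts at
+0), and the new CLUSTER_RAW_BLKS counts raw blocks so sparseness can be
+detected:
+
+	/* a cluster is sparse if any of its slots lacks a raw block */
+	return f2fs_decompressed_blocks(inode, cluster_idx) !=
+	       F2FS_I(inode)->i_cluster_size;
+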
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 6457e5bca9c9e7..be66b3a0e793f6 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2650,10 +2650,13 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ struct dnode_of_data dn;
+ struct node_info ni;
+ bool ipu_force = false;
++ bool atomic_commit;
+ int err = 0;
+
+ /* Use COW inode to make dnode_of_data for atomic write */
+- if (f2fs_is_atomic_file(inode))
++ atomic_commit = f2fs_is_atomic_file(inode) &&
++ page_private_atomic(fio->page);
++ if (atomic_commit)
+ set_new_dnode(&dn, F2FS_I(inode)->cow_inode, NULL, NULL, 0);
+ else
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+@@ -2752,6 +2755,8 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ f2fs_outplace_write_data(&dn, fio);
+ trace_f2fs_do_write_data_page(page_folio(page), OPU);
+ set_inode_flag(inode, FI_APPEND_WRITE);
++ if (atomic_commit)
++ clear_page_private_atomic(page);
+ out_writepage:
+ f2fs_put_dnode(&dn);
+ out:
+@@ -3721,6 +3726,9 @@ static int f2fs_write_end(struct file *file,
+
+ set_page_dirty(page);
+
++ if (f2fs_is_atomic_file(inode))
++ set_page_private_atomic(page);
++
+ if (pos + copied > i_size_read(inode) &&
+ !f2fs_verity_in_progress(inode)) {
+ f2fs_i_size_write(inode, pos + copied);
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index cbd7a5e96a371e..14900ca8a9ffbd 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -166,7 +166,8 @@ static unsigned long dir_block_index(unsigned int level,
+ unsigned long bidx = 0;
+
+ for (i = 0; i < level; i++)
+- bidx += dir_buckets(i, dir_level) * bucket_blocks(i);
++ bidx += mul_u32_u32(dir_buckets(i, dir_level),
++ bucket_blocks(i));
+ bidx += idx * bucket_blocks(level);
+ return bidx;
+ }
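+dir_buckets() and bucket_blocks() both yield 32-bit values, so their
+product could wrap in 32-bit arithmetic before being accumulated into
+the wider bidx. mul_u32_u32() performs the multiply in 64 bits:
+
+	bidx += mul_u32_u32(dir_buckets(i, dir_level),
+			    bucket_blocks(i));	/* 32x32 -> 64, no wrap */
+
+The same widening, via an explicit (pgoff_t) cast, appears in the
+extent_cache.c hunks that follow.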
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index fd1fc06359eea3..62ac440d94168a 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -366,7 +366,7 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+ static void __drop_largest_extent(struct extent_tree *et,
+ pgoff_t fofs, unsigned int len)
+ {
+- if (fofs < et->largest.fofs + et->largest.len &&
++ if (fofs < (pgoff_t)et->largest.fofs + et->largest.len &&
+ fofs + len > et->largest.fofs) {
+ et->largest.len = 0;
+ et->largest_updated = true;
+@@ -456,7 +456,7 @@ static bool __lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
+
+ if (type == EX_READ &&
+ et->largest.fofs <= pgofs &&
+- et->largest.fofs + et->largest.len > pgofs) {
++ (pgoff_t)et->largest.fofs + et->largest.len > pgofs) {
+ *ei = et->largest;
+ ret = true;
+ stat_inc_largest_node_hit(sbi);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index ac19c61f0c3ec4..40e96d577982c0 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -285,6 +285,7 @@ enum {
+ APPEND_INO, /* for append ino list */
+ UPDATE_INO, /* for update ino list */
+ TRANS_DIR_INO, /* for transactions dir ino list */
++ XATTR_DIR_INO, /* for xattr updated dir ino list */
+ FLUSH_INO, /* for multiple device flushing */
+ MAX_INO_ENTRY, /* max. list */
+ };
+@@ -784,7 +785,6 @@ enum {
+ FI_NEED_IPU, /* used for ipu per file */
+ FI_ATOMIC_FILE, /* indicate atomic file */
+ FI_DATA_EXIST, /* indicate data exists */
+- FI_INLINE_DOTS, /* indicate inline dot dentries */
+ FI_SKIP_WRITES, /* should skip data page writeback */
+ FI_OPU_WRITE, /* used for opu per file */
+ FI_DIRTY_FILE, /* indicate regular/symlink has dirty pages */
+@@ -802,6 +802,7 @@ enum {
+ FI_ALIGNED_WRITE, /* enable aligned write */
+ FI_COW_FILE, /* indicate COW file */
+ FI_ATOMIC_COMMITTED, /* indicate atomic commit completed except disk sync */
++ FI_ATOMIC_DIRTIED, /* indicate atomic file is dirtied */
+ FI_ATOMIC_REPLACE, /* indicate atomic replace */
+ FI_OPENED_FILE, /* indicate file has been opened */
+ FI_MAX, /* max flag, never be used */
+@@ -1155,6 +1156,7 @@ enum cp_reason_type {
+ CP_FASTBOOT_MODE,
+ CP_SPEC_LOG_NUM,
+ CP_RECOVER_DIR,
++ CP_XATTR_DIR,
+ };
+
+ enum iostat_type {
+@@ -1418,7 +1420,8 @@ static inline void f2fs_clear_bit(unsigned int nr, char *addr);
+ * bit 1 PAGE_PRIVATE_ONGOING_MIGRATION
+ * bit 2 PAGE_PRIVATE_INLINE_INODE
+ * bit 3 PAGE_PRIVATE_REF_RESOURCE
+- * bit 4- f2fs private data
++ * bit 4 PAGE_PRIVATE_ATOMIC_WRITE
++ * bit 5- f2fs private data
+ *
+ * Layout B: lowest bit should be 0
+ * page.private is a wrapped pointer.
+@@ -1428,6 +1431,7 @@ enum {
+ PAGE_PRIVATE_ONGOING_MIGRATION, /* data page which is on-going migrating */
+ PAGE_PRIVATE_INLINE_INODE, /* inode page contains inline data */
+ PAGE_PRIVATE_REF_RESOURCE, /* dirty page has referenced resources */
++ PAGE_PRIVATE_ATOMIC_WRITE, /* data page from atomic write path */
+ PAGE_PRIVATE_MAX
+ };
+
+@@ -2396,14 +2400,17 @@ static inline void clear_page_private_##name(struct page *page) \
+ PAGE_PRIVATE_GET_FUNC(nonpointer, NOT_POINTER);
+ PAGE_PRIVATE_GET_FUNC(inline, INLINE_INODE);
+ PAGE_PRIVATE_GET_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_GET_FUNC(atomic, ATOMIC_WRITE);
+
+ PAGE_PRIVATE_SET_FUNC(reference, REF_RESOURCE);
+ PAGE_PRIVATE_SET_FUNC(inline, INLINE_INODE);
+ PAGE_PRIVATE_SET_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_SET_FUNC(atomic, ATOMIC_WRITE);
+
+ PAGE_PRIVATE_CLEAR_FUNC(reference, REF_RESOURCE);
+ PAGE_PRIVATE_CLEAR_FUNC(inline, INLINE_INODE);
+ PAGE_PRIVATE_CLEAR_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_CLEAR_FUNC(atomic, ATOMIC_WRITE);
+
+ static inline unsigned long get_page_private_data(struct page *page)
+ {
+@@ -2435,6 +2442,7 @@ static inline void clear_page_private_all(struct page *page)
+ clear_page_private_reference(page);
+ clear_page_private_gcing(page);
+ clear_page_private_inline(page);
++ clear_page_private_atomic(page);
+
+ f2fs_bug_on(F2FS_P_SB(page), page_private(page));
+ }
+@@ -3038,10 +3046,8 @@ static inline void __mark_inode_dirty_flag(struct inode *inode,
+ return;
+ fallthrough;
+ case FI_DATA_EXIST:
+- case FI_INLINE_DOTS:
+ case FI_PIN_FILE:
+ case FI_COMPRESS_RELEASED:
+- case FI_ATOMIC_COMMITTED:
+ f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ }
+@@ -3163,8 +3169,6 @@ static inline void get_inline_info(struct inode *inode, struct f2fs_inode *ri)
+ set_bit(FI_INLINE_DENTRY, fi->flags);
+ if (ri->i_inline & F2FS_DATA_EXIST)
+ set_bit(FI_DATA_EXIST, fi->flags);
+- if (ri->i_inline & F2FS_INLINE_DOTS)
+- set_bit(FI_INLINE_DOTS, fi->flags);
+ if (ri->i_inline & F2FS_EXTRA_ATTR)
+ set_bit(FI_EXTRA_ATTR, fi->flags);
+ if (ri->i_inline & F2FS_PIN_FILE)
+@@ -3185,8 +3189,6 @@ static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
+ ri->i_inline |= F2FS_INLINE_DENTRY;
+ if (is_inode_flag_set(inode, FI_DATA_EXIST))
+ ri->i_inline |= F2FS_DATA_EXIST;
+- if (is_inode_flag_set(inode, FI_INLINE_DOTS))
+- ri->i_inline |= F2FS_INLINE_DOTS;
+ if (is_inode_flag_set(inode, FI_EXTRA_ATTR))
+ ri->i_inline |= F2FS_EXTRA_ATTR;
+ if (is_inode_flag_set(inode, FI_PIN_FILE))
+@@ -3267,11 +3269,6 @@ static inline int f2fs_exist_data(struct inode *inode)
+ return is_inode_flag_set(inode, FI_DATA_EXIST);
+ }
+
+-static inline int f2fs_has_inline_dots(struct inode *inode)
+-{
+- return is_inode_flag_set(inode, FI_INLINE_DOTS);
+-}
+-
+ static inline int f2fs_is_mmap_file(struct inode *inode)
+ {
+ return is_inode_flag_set(inode, FI_MMAP_FILE);
+@@ -3495,7 +3492,7 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end);
+ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count);
+ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+- bool readonly);
++ bool readonly, bool need_lock);
+ int f2fs_precache_extents(struct inode *inode);
+ int f2fs_fileattr_get(struct dentry *dentry, struct fileattr *fa);
+ int f2fs_fileattr_set(struct mnt_idmap *idmap,
+@@ -4289,6 +4286,11 @@ static inline bool f2fs_meta_inode_gc_required(struct inode *inode)
+ * compress.c
+ */
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
++enum cluster_check_type {
++ CLUSTER_IS_COMPR, /* check only if compressed cluster */
++ CLUSTER_COMPR_BLKS, /* return # of compressed blocks in a cluster */
++ CLUSTER_RAW_BLKS /* return # of raw blocks in a cluster */
++};
+ bool f2fs_is_compressed_page(struct page *page);
+ struct page *f2fs_compress_control_page(struct page *page);
+ int f2fs_prepare_compress_overwrite(struct inode *inode,
+@@ -4315,6 +4317,7 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
+ struct writeback_control *wbc,
+ enum iostat_type io_type);
+ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index);
++bool f2fs_is_sparse_cluster(struct inode *inode, pgoff_t index);
+ void f2fs_update_read_extent_tree_range_compressed(struct inode *inode,
+ pgoff_t fofs, block_t blkaddr,
+ unsigned int llen, unsigned int c_len);
+@@ -4401,6 +4404,12 @@ static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
+ static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
+ nid_t ino) { }
+ #define inc_compr_inode_stat(inode) do { } while (0)
++static inline int f2fs_is_compressed_cluster(
++ struct inode *inode,
++ pgoff_t index) { return 0; }
++static inline bool f2fs_is_sparse_cluster(
++ struct inode *inode,
++ pgoff_t index) { return true; }
+ static inline void f2fs_update_read_extent_tree_range_compressed(
+ struct inode *inode,
+ pgoff_t fofs, block_t blkaddr,
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 168f085070046e..80ebc13ccb35b6 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -218,6 +218,9 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
+ f2fs_exist_written_data(sbi, F2FS_I(inode)->i_pino,
+ TRANS_DIR_INO))
+ cp_reason = CP_RECOVER_DIR;
++ else if (f2fs_exist_written_data(sbi, F2FS_I(inode)->i_pino,
++ XATTR_DIR_INO))
++ cp_reason = CP_XATTR_DIR;
+
+ return cp_reason;
+ }
+@@ -1052,6 +1055,13 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ return err;
+ }
+
++ /*
++ * wait for in-flight dio; blocks should be removed only after
++ * I/O completion.
++ */
++ if (attr->ia_size < old_size)
++ inode_dio_wait(inode);
++
+ f2fs_down_write(&fi->i_gc_rwsem[WRITE]);
+ filemap_invalidate_lock(inode->i_mapping);
+
+@@ -1888,6 +1898,12 @@ static long f2fs_fallocate(struct file *file, int mode,
+ if (ret)
+ goto out;
+
++ /*
++ * wait for in-flight dio; blocks should be removed only after
++ * I/O completion.
++ */
++ inode_dio_wait(inode);
++
+ if (mode & FALLOC_FL_PUNCH_HOLE) {
+ if (offset >= inode->i_size)
+ goto out;
+@@ -2116,10 +2132,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+- struct inode *pinode;
+ loff_t isize;
+ int ret;
+
++ if (!(filp->f_mode & FMODE_WRITE))
++ return -EBADF;
++
+ if (!inode_owner_or_capable(idmap, inode))
+ return -EACCES;
+
+@@ -2166,15 +2184,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ /* Check if the inode already has a COW inode */
+ if (fi->cow_inode == NULL) {
+ /* Create a COW inode for atomic write */
+- pinode = f2fs_iget(inode->i_sb, fi->i_pino);
+- if (IS_ERR(pinode)) {
+- f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+- ret = PTR_ERR(pinode);
+- goto out;
+- }
++ struct dentry *dentry = file_dentry(filp);
++ struct inode *dir = d_inode(dentry->d_parent);
+
+- ret = f2fs_get_tmpfile(idmap, pinode, &fi->cow_inode);
+- iput(pinode);
++ ret = f2fs_get_tmpfile(idmap, dir, &fi->cow_inode);
+ if (ret) {
+ f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+ goto out;
+@@ -2187,6 +2200,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ F2FS_I(fi->cow_inode)->atomic_inode = inode;
+ } else {
+ /* Reuse the already created COW inode */
++ f2fs_bug_on(sbi, get_dirty_pages(fi->cow_inode));
++
++ invalidate_mapping_pages(fi->cow_inode->i_mapping, 0, -1);
++
+ ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
+ if (ret) {
+ f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+@@ -2228,6 +2245,9 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
+ struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ int ret;
+
++ if (!(filp->f_mode & FMODE_WRITE))
++ return -EBADF;
++
+ if (!inode_owner_or_capable(idmap, inode))
+ return -EACCES;
+
+@@ -2260,6 +2280,9 @@ static int f2fs_ioc_abort_atomic_write(struct file *filp)
+ struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ int ret;
+
++ if (!(filp->f_mode & FMODE_WRITE))
++ return -EBADF;
++
+ if (!inode_owner_or_capable(idmap, inode))
+ return -EACCES;
+
+@@ -2279,7 +2302,7 @@ static int f2fs_ioc_abort_atomic_write(struct file *filp)
+ }
+
+ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+- bool readonly)
++ bool readonly, bool need_lock)
+ {
+ struct super_block *sb = sbi->sb;
+ int ret = 0;
+@@ -2326,12 +2349,19 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ if (readonly)
+ goto out;
+
++ /* grab sb->s_umount to avoid racing w/ remount() */
++ if (need_lock)
++ down_read(&sbi->sb->s_umount);
++
+ f2fs_stop_gc_thread(sbi);
+ f2fs_stop_discard_thread(sbi);
+
+ f2fs_drop_discard_cmd(sbi);
+ clear_opt(sbi, DISCARD);
+
++ if (need_lock)
++ up_read(&sbi->sb->s_umount);
++
+ f2fs_update_time(sbi, REQ_TIME);
+ out:
+
+@@ -2368,7 +2398,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ }
+ }
+
+- ret = f2fs_do_shutdown(sbi, in, readonly);
++ ret = f2fs_do_shutdown(sbi, in, readonly, true);
+
+ if (need_drop)
+ mnt_drop_write_file(filp);
+@@ -2686,7 +2716,8 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ (range->start + range->len) >> PAGE_SHIFT,
+ DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
+
+- if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++ if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED) ||
++ f2fs_is_atomic_file(inode)) {
+ err = -EINVAL;
+ goto unlock_out;
+ }
+@@ -2710,7 +2741,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ * block addresses are continuous.
+ */
+ if (f2fs_lookup_read_extent_cache(inode, pg_start, &ei)) {
+- if (ei.fofs + ei.len >= pg_end)
++ if ((pgoff_t)ei.fofs + ei.len >= pg_end)
+ goto out;
+ }
+
+@@ -2793,6 +2824,8 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ goto clear_out;
+ }
+
++ f2fs_wait_on_page_writeback(page, DATA, true, true);
++
+ set_page_dirty(page);
+ set_page_private_gcing(page);
+ f2fs_put_page(page, 1);
+@@ -2917,6 +2950,11 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
+ goto out_unlock;
+ }
+
++ if (f2fs_is_atomic_file(src) || f2fs_is_atomic_file(dst)) {
++ ret = -EINVAL;
++ goto out_unlock;
++ }
++
+ ret = -EINVAL;
+ if (pos_in + len > src->i_size || pos_in + len < pos_in)
+ goto out_unlock;
+@@ -3300,6 +3338,11 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
+
+ inode_lock(inode);
+
++ if (f2fs_is_atomic_file(inode)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ if (!pin) {
+ clear_inode_flag(inode, FI_PIN_FILE);
+ f2fs_i_gc_failures_write(inode, 0);
+@@ -4193,6 +4236,8 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
+ /* It will never fail, when page has pinned above */
+ f2fs_bug_on(F2FS_I_SB(inode), !page);
+
++ f2fs_wait_on_page_writeback(page, DATA, true, true);
++
+ set_page_dirty(page);
+ set_page_private_gcing(page);
+ f2fs_put_page(page, 1);
+@@ -4207,9 +4252,8 @@ static int f2fs_ioc_decompress_file(struct file *filp)
+ struct inode *inode = file_inode(filp);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+- pgoff_t page_idx = 0, last_idx;
+- int cluster_size = fi->i_cluster_size;
+- int count, ret;
++ pgoff_t page_idx = 0, last_idx, cluster_idx;
++ int ret;
+
+ if (!f2fs_sb_has_compression(sbi) ||
+ F2FS_OPTION(sbi).compress_mode != COMPR_MODE_USER)
+@@ -4244,10 +4288,15 @@ static int f2fs_ioc_decompress_file(struct file *filp)
+ goto out;
+
+ last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
++ last_idx >>= fi->i_log_cluster_size;
++
++ for (cluster_idx = 0; cluster_idx < last_idx; cluster_idx++) {
++ page_idx = cluster_idx << fi->i_log_cluster_size;
+
+- count = last_idx - page_idx;
+- while (count && count >= cluster_size) {
+- ret = redirty_blocks(inode, page_idx, cluster_size);
++ if (!f2fs_is_compressed_cluster(inode, page_idx))
++ continue;
++
++ ret = redirty_blocks(inode, page_idx, fi->i_cluster_size);
+ if (ret < 0)
+ break;
+
+@@ -4257,9 +4306,6 @@ static int f2fs_ioc_decompress_file(struct file *filp)
+ break;
+ }
+
+- count -= cluster_size;
+- page_idx += cluster_size;
+-
+ cond_resched();
+ if (fatal_signal_pending(current)) {
+ ret = -EINTR;
+@@ -4286,9 +4332,9 @@ static int f2fs_ioc_compress_file(struct file *filp)
+ {
+ struct inode *inode = file_inode(filp);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+- pgoff_t page_idx = 0, last_idx;
+- int cluster_size = F2FS_I(inode)->i_cluster_size;
+- int count, ret;
++ struct f2fs_inode_info *fi = F2FS_I(inode);
++ pgoff_t page_idx = 0, last_idx, cluster_idx;
++ int ret;
+
+ if (!f2fs_sb_has_compression(sbi) ||
+ F2FS_OPTION(sbi).compress_mode != COMPR_MODE_USER)
+@@ -4322,10 +4368,15 @@ static int f2fs_ioc_compress_file(struct file *filp)
+ set_inode_flag(inode, FI_ENABLE_COMPRESS);
+
+ last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
++ last_idx >>= fi->i_log_cluster_size;
+
+- count = last_idx - page_idx;
+- while (count && count >= cluster_size) {
+- ret = redirty_blocks(inode, page_idx, cluster_size);
++ for (cluster_idx = 0; cluster_idx < last_idx; cluster_idx++) {
++ page_idx = cluster_idx << fi->i_log_cluster_size;
++
++ if (f2fs_is_sparse_cluster(inode, page_idx))
++ continue;
++
++ ret = redirty_blocks(inode, page_idx, fi->i_cluster_size);
+ if (ret < 0)
+ break;
+
+@@ -4335,9 +4386,6 @@ static int f2fs_ioc_compress_file(struct file *filp)
+ break;
+ }
+
+- count -= cluster_size;
+- page_idx += cluster_size;
+-
+ cond_resched();
+ if (fatal_signal_pending(current)) {
+ ret = -EINTR;
+@@ -4597,6 +4645,10 @@ static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ f2fs_trace_rw_file_path(iocb->ki_filp, iocb->ki_pos,
+ iov_iter_count(to), READ);
+
++ /* In LFS mode, if there is inflight dio, wait for its completion */
++ if (f2fs_lfs_mode(F2FS_I_SB(inode)))
++ inode_dio_wait(inode);
++
+ if (f2fs_should_use_dio(inode, iocb, to)) {
+ ret = f2fs_dio_read_iter(iocb, to);
+ } else {
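+A recurring hardening across the file.c hunks: the atomic-write ioctls
+(start, commit, abort) now reject file descriptors that were not opened
+for writing before doing anything else. The guard is the standard one:
+
+	if (!(filp->f_mode & FMODE_WRITE))
+		return -EBADF;	/* ioctl mutates the file; need a writable fd */
+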
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index aef57172014fae..4729c49bf6d7e6 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -35,6 +35,11 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ if (f2fs_inode_dirtied(inode, sync))
+ return;
+
++ if (f2fs_is_atomic_file(inode)) {
++ set_inode_flag(inode, FI_ATOMIC_DIRTIED);
++ return;
++ }
++
+ mark_inode_dirty_sync(inode);
+ }
+
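+Dirtying an atomic file is now deferred: instead of calling
+mark_inode_dirty_sync() in the middle of an atomic write, the inode just
+records FI_ATOMIC_DIRTIED, and the commit/abort paths in segment.c
+(further down in this patch) resolve it once the outcome is known:
+
+	if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
+		clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
+		f2fs_mark_inode_dirty_sync(inode, true);	/* deferred dirtying */
+	}
+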
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 38b4750475db65..57d46e1439dedf 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -457,62 +457,6 @@ struct dentry *f2fs_get_parent(struct dentry *child)
+ return d_obtain_alias(f2fs_iget(child->d_sb, ino));
+ }
+
+-static int __recover_dot_dentries(struct inode *dir, nid_t pino)
+-{
+- struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+- struct qstr dot = QSTR_INIT(".", 1);
+- struct f2fs_dir_entry *de;
+- struct page *page;
+- int err = 0;
+-
+- if (f2fs_readonly(sbi->sb)) {
+- f2fs_info(sbi, "skip recovering inline_dots inode (ino:%lu, pino:%u) in readonly mountpoint",
+- dir->i_ino, pino);
+- return 0;
+- }
+-
+- if (!S_ISDIR(dir->i_mode)) {
+- f2fs_err(sbi, "inconsistent inode status, skip recovering inline_dots inode (ino:%lu, i_mode:%u, pino:%u)",
+- dir->i_ino, dir->i_mode, pino);
+- set_sbi_flag(sbi, SBI_NEED_FSCK);
+- return -ENOTDIR;
+- }
+-
+- err = f2fs_dquot_initialize(dir);
+- if (err)
+- return err;
+-
+- f2fs_balance_fs(sbi, true);
+-
+- f2fs_lock_op(sbi);
+-
+- de = f2fs_find_entry(dir, &dot, &page);
+- if (de) {
+- f2fs_put_page(page, 0);
+- } else if (IS_ERR(page)) {
+- err = PTR_ERR(page);
+- goto out;
+- } else {
+- err = f2fs_do_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
+- if (err)
+- goto out;
+- }
+-
+- de = f2fs_find_entry(dir, &dotdot_name, &page);
+- if (de)
+- f2fs_put_page(page, 0);
+- else if (IS_ERR(page))
+- err = PTR_ERR(page);
+- else
+- err = f2fs_do_add_link(dir, &dotdot_name, NULL, pino, S_IFDIR);
+-out:
+- if (!err)
+- clear_inode_flag(dir, FI_INLINE_DOTS);
+-
+- f2fs_unlock_op(sbi);
+- return err;
+-}
+-
+ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ unsigned int flags)
+ {
+@@ -522,7 +466,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ struct dentry *new;
+ nid_t ino = -1;
+ int err = 0;
+- unsigned int root_ino = F2FS_ROOT_INO(F2FS_I_SB(dir));
+ struct f2fs_filename fname;
+
+ trace_f2fs_lookup_start(dir, dentry, flags);
+@@ -558,17 +501,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ goto out;
+ }
+
+- if ((dir->i_ino == root_ino) && f2fs_has_inline_dots(dir)) {
+- err = __recover_dot_dentries(dir, root_ino);
+- if (err)
+- goto out_iput;
+- }
+-
+- if (f2fs_has_inline_dots(inode)) {
+- err = __recover_dot_dentries(inode, dir->i_ino);
+- if (err)
+- goto out_iput;
+- }
+ if (IS_ENCRYPTED(dir) &&
+ (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) &&
+ !fscrypt_has_permitted_context(dir, inode)) {
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 78c3198a6308f5..418f2e663f6acb 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -199,6 +199,10 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean)
+ clear_inode_flag(inode, FI_ATOMIC_COMMITTED);
+ clear_inode_flag(inode, FI_ATOMIC_REPLACE);
+ clear_inode_flag(inode, FI_ATOMIC_FILE);
++ if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
++ clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
++ f2fs_mark_inode_dirty_sync(inode, true);
++ }
+ stat_dec_atomic_inode(inode);
+
+ F2FS_I(inode)->atomic_write_task = NULL;
+@@ -366,6 +370,10 @@ static int __f2fs_commit_atomic_write(struct inode *inode)
+ } else {
+ sbi->committed_atomic_block += fi->atomic_write_cnt;
+ set_inode_flag(inode, FI_ATOMIC_COMMITTED);
++ if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
++ clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
++ f2fs_mark_inode_dirty_sync(inode, true);
++ }
+ }
+
+ __complete_revoke_list(inode, &revoke_list, ret ? true : false);
+@@ -1282,6 +1290,13 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ wait_list, issued);
+ return 0;
+ }
++
++ /*
++ * Issue discard for conventional zones only if the device
++ * supports discard.
++ */
++ if (!bdev_max_discard_sectors(bdev))
++ return -EOPNOTSUPP;
+ }
+ #endif
+
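+On zoned devices, discard against conventional zones is only sensible
+when the block device actually advertises discard support, hence the
+capability probe before issuing the command:
+
+	if (!bdev_max_discard_sectors(bdev))
+		return -EOPNOTSUPP;	/* device reports no discard capability */
+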
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 3959fd137cc9b1..29754dc50fa47a 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2561,7 +2561,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+
+ static void f2fs_shutdown(struct super_block *sb)
+ {
+- f2fs_do_shutdown(F2FS_SB(sb), F2FS_GOING_DOWN_NOSYNC, false);
++ f2fs_do_shutdown(F2FS_SB(sb), F2FS_GOING_DOWN_NOSYNC, false, false);
+ }
+
+ #ifdef CONFIG_QUOTA
+@@ -3356,9 +3356,9 @@ static inline bool sanity_check_area_boundary(struct f2fs_sb_info *sbi,
+ u32 segment_count = le32_to_cpu(raw_super->segment_count);
+ u32 log_blocks_per_seg = le32_to_cpu(raw_super->log_blocks_per_seg);
+ u64 main_end_blkaddr = main_blkaddr +
+- (segment_count_main << log_blocks_per_seg);
++ ((u64)segment_count_main << log_blocks_per_seg);
+ u64 seg_end_blkaddr = segment0_blkaddr +
+- (segment_count << log_blocks_per_seg);
++ ((u64)segment_count << log_blocks_per_seg);
+
+ if (segment0_blkaddr != cp_blkaddr) {
+ f2fs_info(sbi, "Mismatch start address, segment0(%u) cp_blkaddr(%u)",
+@@ -4173,12 +4173,14 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+ }
+
+ f2fs_warn(sbi, "Remounting filesystem read-only");
++
+ /*
+- * Make sure updated value of ->s_mount_flags will be visible before
+- * ->s_flags update
++ * We have already set CP_ERROR_FLAG to stop all updates to the
++ * filesystem, so there is no need to set SB_RDONLY here: that flag
++ * should only be set under the sb->s_umount semaphore via the remount
++ * procedure. Otherwise it will confuse code such as freeze_super(),
++ * leading to deadlocks and other problems.
+ */
+- smp_wmb();
+- sb->s_flags |= SB_RDONLY;
+ }
+
+ static void f2fs_record_error_work(struct work_struct *work)
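+Without the (u64) casts, segment_count << log_blocks_per_seg is
+evaluated as a 32-bit shift and wraps for large images, letting a
+corrupted superblock pass the boundary sanity check. The safe form
+widens before shifting:
+
+	u64 main_end_blkaddr = main_blkaddr +
+		((u64)segment_count_main << log_blocks_per_seg);
+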
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index f290fe9327c498..3f387494367963 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -629,6 +629,7 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ const char *name, const void *value, size_t size,
+ struct page *ipage, int flags)
+ {
++ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_xattr_entry *here, *last;
+ void *base_addr, *last_base_addr;
+ int found, newsize;
+@@ -772,9 +773,18 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ if (index == F2FS_XATTR_INDEX_ENCRYPTION &&
+ !strcmp(name, F2FS_XATTR_NAME_ENCRYPTION_CONTEXT))
+ f2fs_set_encrypted_inode(inode);
+- if (S_ISDIR(inode->i_mode))
+- set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_CP);
+
++ if (!S_ISDIR(inode->i_mode))
++ goto same;
++ /*
++ * In strict mode, fsync() always triggers a checkpoint for full
++ * metadata consistency; in other modes, it triggers a checkpoint only
++ * when the parent's xattr metadata was updated.
++ */
++ if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
++ set_sbi_flag(sbi, SBI_NEED_CP);
++ else
++ f2fs_add_ino_entry(sbi, inode->i_ino, XATTR_DIR_INO);
+ same:
+ if (is_inode_flag_set(inode, FI_ACL_MODE)) {
+ inode->i_mode = F2FS_I(inode)->i_acl_mode;
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 300e5d9ad913b5..c28dc6c005f174 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -87,8 +87,8 @@ static int setfl(int fd, struct file * filp, unsigned int arg)
+ return error;
+ }
+
+-static void f_modown(struct file *filp, struct pid *pid, enum pid_type type,
+- int force)
++void __f_setown(struct file *filp, struct pid *pid, enum pid_type type,
++ int force)
+ {
+ write_lock_irq(&filp->f_owner.lock);
+ if (force || !filp->f_owner.pid) {
+@@ -98,19 +98,13 @@ static void f_modown(struct file *filp, struct pid *pid, enum pid_type type,
+
+ if (pid) {
+ const struct cred *cred = current_cred();
++ security_file_set_fowner(filp);
+ filp->f_owner.uid = cred->uid;
+ filp->f_owner.euid = cred->euid;
+ }
+ }
+ write_unlock_irq(&filp->f_owner.lock);
+ }
+-
+-void __f_setown(struct file *filp, struct pid *pid, enum pid_type type,
+- int force)
+-{
+- security_file_set_fowner(filp);
+- f_modown(filp, pid, type, force);
+-}
+ EXPORT_SYMBOL(__f_setown);
+
+ int f_setown(struct file *filp, int who, int force)
+@@ -146,7 +140,7 @@ EXPORT_SYMBOL(f_setown);
+
+ void f_delown(struct file *filp)
+ {
+- f_modown(filp, NULL, PIDTYPE_TGID, 1);
++ __f_setown(filp, NULL, PIDTYPE_TGID, 1);
+ }
+
+ pid_t f_getown(struct file *filp)
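+Folding f_modown() into __f_setown() moves security_file_set_fowner()
+inside the branch that actually records an owner, so the LSM hook fires
+under f_owner.lock and only when a pid is being set; f_delown(), which
+passes a NULL pid, no longer invokes it at all:
+
+	if (pid) {
+		const struct cred *cred = current_cred();
+		security_file_set_fowner(filp);	/* only when an owner is recorded */
+		filp->f_owner.uid = cred->uid;
+		filp->f_owner.euid = cred->euid;
+	}
+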
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index ed76121f73f2e0..08f7d538ca98f4 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1345,7 +1345,7 @@ static bool fuse_dio_wr_exclusive_lock(struct kiocb *iocb, struct iov_iter *from
+
+ /* shared locks are not allowed with parallel page cache IO */
+ if (test_bit(FUSE_I_CACHE_IO_MODE, &fi->state))
+- return false;
++ return true;
+
+ /* Parallel dio beyond EOF is not supported, at least for now. */
+ if (fuse_io_past_eof(iocb, from))
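+The fuse hunk flips an inverted boolean: fuse_dio_wr_exclusive_lock()
+answers whether the write needs the *exclusive* inode lock, and since
+shared locking is forbidden while parallel page-cache I/O is active, the
+correct answer in that state is true:
+
+	/* shared locks are not allowed with parallel page cache IO */
+	if (test_bit(FUSE_I_CACHE_IO_MODE, &fi->state))
+		return true;	/* force the exclusive lock */
+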
+diff --git a/fs/inode.c b/fs/inode.c
+index 10c4619faeef8c..7125b73b536753 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -770,6 +770,10 @@ void evict_inodes(struct super_block *sb)
+ continue;
+
+ spin_lock(&inode->i_lock);
++ if (atomic_read(&inode->i_count)) {
++ spin_unlock(&inode->i_lock);
++ continue;
++ }
+ if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
+ spin_unlock(&inode->i_lock);
+ continue;
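+evict_inodes() now rechecks the reference count under i_lock: an inode
+that gained a user between the unlocked scan and the locked test must be
+skipped rather than evicted out from under that user:
+
+	spin_lock(&inode->i_lock);
+	if (atomic_read(&inode->i_count)) {
+		spin_unlock(&inode->i_lock);
+		continue;	/* raced with a new reference; leave it alone */
+	}
+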
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 5713994328cbcb..0625d1c0d0649a 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -187,7 +187,7 @@ int dbMount(struct inode *ipbmap)
+ }
+
+ bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+- if (!bmp->db_numag) {
++ if (!bmp->db_numag || bmp->db_numag >= MAXAG) {
+ err = -EINVAL;
+ goto err_release_metapage;
+ }
+@@ -652,7 +652,7 @@ int dbNextAG(struct inode *ipbmap)
+ * average free space.
+ */
+ for (i = 0 ; i < bmp->db_numag; i++, agpref++) {
+- if (agpref == bmp->db_numag)
++ if (agpref >= bmp->db_numag)
+ agpref = 0;
+
+ if (atomic_read(&bmp->db_active[agpref]))
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 1407feccbc2d05..a360b24ed320c0 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -1360,7 +1360,7 @@ int diAlloc(struct inode *pip, bool dir, struct inode *ip)
+ /* get the ag number of this iag */
+ agno = BLKTOAG(JFS_IP(pip)->agstart, JFS_SBI(pip->i_sb));
+ dn_numag = JFS_SBI(pip->i_sb)->bmap->db_numag;
+- if (agno < 0 || agno > dn_numag)
++ if (agno < 0 || agno > dn_numag || agno >= MAXAG)
+ return -EIO;
+
+ if (atomic_read(&JFS_SBI(pip->i_sb)->bmap->db_active[agno])) {
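+Both jfs hunks bound the allocation-group index against MAXAG, the size
+of the db_active[] array, so a crafted superblock value can no longer
+index past the end of that array. Condensed from dbMount():
+
+	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+	if (!bmp->db_numag || bmp->db_numag >= MAXAG) {
+		err = -EINVAL;	/* reject an out-of-range AG count */
+		goto err_release_metapage;
+	}
+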
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 328087a4df8a6e..155c6abda71da6 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2921,8 +2921,15 @@ static void mnt_warn_timestamp_expiry(struct path *mountpoint, struct vfsmount *
+ if (!__mnt_is_readonly(mnt) &&
+ (!(sb->s_iflags & SB_I_TS_EXPIRY_WARNED)) &&
+ (ktime_get_real_seconds() + TIME_UPTIME_SEC_MAX > sb->s_time_max)) {
+- char *buf = (char *)__get_free_page(GFP_KERNEL);
+- char *mntpath = buf ? d_path(mountpoint, buf, PAGE_SIZE) : ERR_PTR(-ENOMEM);
++ char *buf, *mntpath;
++
++ buf = (char *)__get_free_page(GFP_KERNEL);
++ if (buf)
++ mntpath = d_path(mountpoint, buf, PAGE_SIZE);
++ else
++ mntpath = ERR_PTR(-ENOMEM);
++ if (IS_ERR(mntpath))
++ mntpath = "(unknown)";
+
+ pr_warn("%s filesystem being %s at %s supports timestamps until %ptTd (0x%llx)\n",
+ sb->s_type->name,
+@@ -2930,8 +2937,9 @@ static void mnt_warn_timestamp_expiry(struct path *mountpoint, struct vfsmount *
+ mntpath, &sb->s_time_max,
+ (unsigned long long)sb->s_time_max);
+
+- free_page((unsigned long)buf);
+ sb->s_iflags |= SB_I_TS_EXPIRY_WARNED;
++ if (buf)
++ free_page((unsigned long)buf);
+ }
+ }
+
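+The timestamp-expiry warning could previously feed an ERR_PTR from
+d_path() (or from a failed page allocation) straight into pr_warn()'s
+%s. The rework substitutes a printable fallback and frees the page only
+when one was allocated:
+
+	buf = (char *)__get_free_page(GFP_KERNEL);
+	mntpath = buf ? d_path(mountpoint, buf, PAGE_SIZE) : ERR_PTR(-ENOMEM);
+	if (IS_ERR(mntpath))
+		mntpath = "(unknown)";	/* never hand an ERR_PTR to printk */
+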
+diff --git a/fs/netfs/main.c b/fs/netfs/main.c
+index 5f0f438e5d211d..9d6b49dc66945f 100644
+--- a/fs/netfs/main.c
++++ b/fs/netfs/main.c
+@@ -142,7 +142,7 @@ static int __init netfs_init(void)
+
+ error_fscache:
+ error_procfile:
+- remove_proc_entry("fs/netfs", NULL);
++ remove_proc_subtree("fs/netfs", NULL);
+ error_proc:
+ mempool_exit(&netfs_subrequest_pool);
+ error_subreqpool:
+@@ -159,7 +159,7 @@ fs_initcall(netfs_init);
+ static void __exit netfs_exit(void)
+ {
+ fscache_exit();
+- remove_proc_entry("fs/netfs", NULL);
++ remove_proc_subtree("fs/netfs", NULL);
+ mempool_exit(&netfs_subrequest_pool);
+ kmem_cache_destroy(netfs_subrequest_slab);
+ mempool_exit(&netfs_request_pool);
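+"fs/netfs" is created as a procfs directory with entries underneath it,
+so teardown must use remove_proc_subtree(), which removes the directory
+recursively; remove_proc_entry() expects an empty entry and would leave
+the children behind. The paired calls:
+
+	proc_mkdir("fs/netfs", NULL);		/* setup */
+	remove_proc_subtree("fs/netfs", NULL);	/* teardown, recursive */
+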
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b8ffbe52ba15a5..cd2fbde2e6d72e 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3904,6 +3904,18 @@ static void nfs4_close_context(struct nfs_open_context *ctx, int is_sync)
+ #define FATTR4_WORD2_NFS41_MASK (2*FATTR4_WORD2_SUPPATTR_EXCLCREAT - 1UL)
+ #define FATTR4_WORD2_NFS42_MASK (2*FATTR4_WORD2_OPEN_ARGUMENTS - 1UL)
+
++#define FATTR4_WORD2_NFS42_TIME_DELEG_MASK \
++ (FATTR4_WORD2_TIME_DELEG_MODIFY|FATTR4_WORD2_TIME_DELEG_ACCESS)
++static bool nfs4_server_delegtime_capable(struct nfs4_server_caps_res *res)
++{
++ u32 share_access_want = res->open_caps.oa_share_access_want[0];
++ u32 attr_bitmask = res->attr_bitmask[2];
++
++ return (share_access_want & NFS4_SHARE_WANT_DELEG_TIMESTAMPS) &&
++ ((attr_bitmask & FATTR4_WORD2_NFS42_TIME_DELEG_MASK) ==
++ FATTR4_WORD2_NFS42_TIME_DELEG_MASK);
++}
++
+ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ {
+ u32 minorversion = server->nfs_client->cl_minorversion;
+@@ -3982,8 +3994,6 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ #endif
+ if (res.attr_bitmask[0] & FATTR4_WORD0_FS_LOCATIONS)
+ server->caps |= NFS_CAP_FS_LOCATIONS;
+- if (res.attr_bitmask[2] & FATTR4_WORD2_TIME_DELEG_MODIFY)
+- server->caps |= NFS_CAP_DELEGTIME;
+ if (!(res.attr_bitmask[0] & FATTR4_WORD0_FILEID))
+ server->fattr_valid &= ~NFS_ATTR_FATTR_FILEID;
+ if (!(res.attr_bitmask[1] & FATTR4_WORD1_MODE))
+@@ -4011,6 +4021,8 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ if (res.open_caps.oa_share_access_want[0] &
+ NFS4_SHARE_WANT_OPEN_XOR_DELEGATION)
+ server->caps |= NFS_CAP_OPEN_XOR;
++ if (nfs4_server_delegtime_capable(&res))
++ server->caps |= NFS_CAP_DELEGTIME;
+
+ memcpy(server->cache_consistency_bitmask, res.attr_bitmask, sizeof(server->cache_consistency_bitmask));
+ server->cache_consistency_bitmask[0] &= FATTR4_WORD0_CHANGE|FATTR4_WORD0_SIZE;
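+NFS_CAP_DELEGTIME is now claimed only when the server advertises both
+delegated timestamp attributes and the WANT_DELEG_TIMESTAMPS share
+flag, rather than keying off the modify bit alone. The combined test,
+from the helper above:
+
+	return (share_access_want & NFS4_SHARE_WANT_DELEG_TIMESTAMPS) &&
+	       ((attr_bitmask & FATTR4_WORD2_NFS42_TIME_DELEG_MASK) ==
+		FATTR4_WORD2_NFS42_TIME_DELEG_MASK);
+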
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 877f682b45f2d0..30aba1dedaba6c 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1957,6 +1957,7 @@ static int nfs4_do_reclaim(struct nfs_client *clp, const struct nfs4_state_recov
+ set_bit(ops->owner_flag_bit, &sp->so_flags);
+ nfs4_put_state_owner(sp);
+ status = nfs4_recovery_handle_error(clp, status);
++ nfs4_free_state_owners(&freeme);
+ return (status != 0) ? status : -EAGAIN;
+ }
+
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index f4704f5d408675..e2e248032bfd04 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -1035,8 +1035,6 @@ nfsd_file_do_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ if (likely(ret == 0))
+ goto open_file;
+
+- if (ret == -EEXIST)
+- goto retry;
+ trace_nfsd_file_insert_err(rqstp, inode, may_flags, ret);
+ status = nfserr_jukebox;
+ goto construction_err;
+@@ -1051,6 +1049,7 @@ nfsd_file_do_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ status = nfserr_jukebox;
+ goto construction_err;
+ }
++ nfsd_file_put(nf);
+ open_retry = false;
+ fh_put(fhp);
+ goto retry;
+diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
+index 7a806ac13e317e..8cca1329f3485c 100644
+--- a/fs/nfsd/nfs4idmap.c
++++ b/fs/nfsd/nfs4idmap.c
+@@ -581,6 +581,7 @@ static __be32 idmap_id_to_name(struct xdr_stream *xdr,
+ .id = id,
+ .type = type,
+ };
++ __be32 status = nfs_ok;
+ __be32 *p;
+ int ret;
+ struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+@@ -593,12 +594,16 @@ static __be32 idmap_id_to_name(struct xdr_stream *xdr,
+ return nfserrno(ret);
+ ret = strlen(item->name);
+ WARN_ON_ONCE(ret > IDMAP_NAMESZ);
++
+ p = xdr_reserve_space(xdr, ret + 4);
+- if (!p)
+- return nfserr_resource;
+- p = xdr_encode_opaque(p, item->name, ret);
++ if (unlikely(!p)) {
++ status = nfserr_resource;
++ goto out_put;
++ }
++ xdr_encode_opaque(p, item->name, ret);
++out_put:
+ cache_put(&item->h, nn->idtoname_cache);
+- return 0;
++ return status;
+ }
+
+ static bool
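+idmap_id_to_name() used to return early on xdr_reserve_space() failure
+while still holding the cache item's reference. Funnelling every exit
+through one cache_put() is the usual shape for reference-counted cache
+lookups:
+
+	p = xdr_reserve_space(xdr, ret + 4);
+	if (unlikely(!p)) {
+		status = nfserr_resource;
+		goto out_put;	/* error path still drops the reference */
+	}
+	xdr_encode_opaque(p, item->name, ret);
+out_put:
+	cache_put(&item->h, nn->idtoname_cache);
+	return status;
+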
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 67d8673a9391c7..69a3a84e159e62 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -809,6 +809,10 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ ci = &cmsg->cm_u.cm_clntinfo;
+ if (get_user(namelen, &ci->cc_name.cn_len))
+ return -EFAULT;
++ if (!namelen) {
++ dprintk("%s: namelen should not be zero", __func__);
++ return -EINVAL;
++ }
+ name.data = memdup_user(&ci->cc_name.cn_id, namelen);
+ if (IS_ERR(name.data))
+ return PTR_ERR(name.data);
+@@ -831,6 +835,10 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ cnm = &cmsg->cm_u.cm_name;
+ if (get_user(namelen, &cnm->cn_len))
+ return -EFAULT;
++ if (!namelen) {
++ dprintk("%s: namelen should not be zero", __func__);
++ return -EINVAL;
++ }
+ name.data = memdup_user(&cnm->cn_id, namelen);
+ if (IS_ERR(name.data))
+ return PTR_ERR(name.data);
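+memdup_user() with a zero length does not fail: kmalloc(0) yields a
+ZERO_SIZE_PTR-style allocation and the zero-byte copy succeeds, so a
+zero cn_len from userspace would have produced an empty "name" that
+later code treats as valid. Both downcall variants now reject it early:
+
+	if (get_user(namelen, &ci->cc_name.cn_len))
+		return -EFAULT;
+	if (!namelen)
+		return -EINVAL;	/* a client record name cannot be empty */
+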
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index a366fb1c1b9b4f..fe06779ea527a1 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5912,6 +5912,28 @@ static void nfsd4_open_deleg_none_ext(struct nfsd4_open *open, int status)
+ }
+ }
+
++static bool
++nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
++ struct kstat *stat)
++{
++ struct nfsd_file *nf = find_rw_file(dp->dl_stid.sc_file);
++ struct path path;
++ int rc;
++
++ if (!nf)
++ return false;
++
++ path.mnt = currentfh->fh_export->ex_path.mnt;
++ path.dentry = file_dentry(nf->nf_file);
++
++ rc = vfs_getattr(&path, stat,
++ (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
++ AT_STATX_SYNC_AS_STAT);
++
++ nfsd_file_put(nf);
++ return rc == 0;
++}
++
+ /*
+ * The Linux NFS server does not offer write delegations to NFSv4.0
+ * clients in order to avoid conflicts between write delegations and
+@@ -5947,7 +5969,6 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ int cb_up;
+ int status = 0;
+ struct kstat stat;
+- struct path path;
+
+ cb_up = nfsd4_cb_channel_good(oo->oo_owner.so_client);
+ open->op_recall = false;
+@@ -5983,20 +6004,16 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ memcpy(&open->op_delegate_stateid, &dp->dl_stid.sc_stateid, sizeof(dp->dl_stid.sc_stateid));
+
+ if (open->op_share_access & NFS4_SHARE_ACCESS_WRITE) {
+- open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+- trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+- path.mnt = currentfh->fh_export->ex_path.mnt;
+- path.dentry = currentfh->fh_dentry;
+- if (vfs_getattr(&path, &stat,
+- (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
+- AT_STATX_SYNC_AS_STAT)) {
++ if (!nfs4_delegation_stat(dp, currentfh, &stat)) {
+ nfs4_put_stid(&dp->dl_stid);
+ destroy_delegation(dp);
+ goto out_no_deleg;
+ }
++ open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+ dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
+ dp->dl_cb_fattr.ncf_initial_cinfo =
+ nfsd4_change_attribute(&stat, d_inode(currentfh->fh_dentry));
++ trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+ } else {
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
+ trace_nfsd_deleg_read(&dp->dl_stid.sc_stateid);
+@@ -8836,6 +8853,7 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct dentry *dentry,
+ __be32 status;
+ struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ struct file_lock_context *ctx;
++ struct nfs4_delegation *dp = NULL;
+ struct file_lease *fl;
+ struct iattr attrs;
+ struct nfs4_cb_fattr *ncf;
+@@ -8845,84 +8863,76 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct dentry *dentry,
+ ctx = locks_inode_context(inode);
+ if (!ctx)
+ return 0;
++
++#define NON_NFSD_LEASE ((void *)1)
++
+ spin_lock(&ctx->flc_lock);
+ for_each_file_lock(fl, &ctx->flc_lease) {
+- unsigned char type = fl->c.flc_type;
+-
+ if (fl->c.flc_flags == FL_LAYOUT)
+ continue;
+- if (fl->fl_lmops != &nfsd_lease_mng_ops) {
+- /*
+- * non-nfs lease, if it's a lease with F_RDLCK then
+- * we are done; there isn't any write delegation
+- * on this inode
+- */
+- if (type == F_RDLCK)
+- break;
+-
+- nfsd_stats_wdeleg_getattr_inc(nn);
+- spin_unlock(&ctx->flc_lock);
+-
+- status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
++ if (fl->c.flc_type == F_WRLCK) {
++ if (fl->fl_lmops == &nfsd_lease_mng_ops)
++ dp = fl->c.flc_owner;
++ else
++ dp = NON_NFSD_LEASE;
++ }
++ break;
++ }
++ if (dp == NULL || dp == NON_NFSD_LEASE ||
++ dp->dl_recall.cb_clp == *(rqstp->rq_lease_breaker)) {
++ spin_unlock(&ctx->flc_lock);
++ if (dp == NON_NFSD_LEASE) {
++ status = nfserrno(nfsd_open_break_lease(inode,
++ NFSD_MAY_READ));
+ if (status != nfserr_jukebox ||
+ !nfsd_wait_for_delegreturn(rqstp, inode))
+ return status;
+- return 0;
+ }
+- if (type == F_WRLCK) {
+- struct nfs4_delegation *dp = fl->c.flc_owner;
++ return 0;
++ }
+
+- if (dp->dl_recall.cb_clp == *(rqstp->rq_lease_breaker)) {
+- spin_unlock(&ctx->flc_lock);
+- return 0;
+- }
+- nfsd_stats_wdeleg_getattr_inc(nn);
+- dp = fl->c.flc_owner;
+- refcount_inc(&dp->dl_stid.sc_count);
+- ncf = &dp->dl_cb_fattr;
+- nfs4_cb_getattr(&dp->dl_cb_fattr);
+- spin_unlock(&ctx->flc_lock);
+- wait_on_bit_timeout(&ncf->ncf_cb_flags, CB_GETATTR_BUSY,
+- TASK_INTERRUPTIBLE, NFSD_CB_GETATTR_TIMEOUT);
+- if (ncf->ncf_cb_status) {
+- /* Recall delegation only if client didn't respond */
+- status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
+- if (status != nfserr_jukebox ||
+- !nfsd_wait_for_delegreturn(rqstp, inode)) {
+- nfs4_put_stid(&dp->dl_stid);
+- return status;
+- }
+- }
+- if (!ncf->ncf_file_modified &&
+- (ncf->ncf_initial_cinfo != ncf->ncf_cb_change ||
+- ncf->ncf_cur_fsize != ncf->ncf_cb_fsize))
+- ncf->ncf_file_modified = true;
+- if (ncf->ncf_file_modified) {
+- int err;
+-
+- /*
+- * Per section 10.4.3 of RFC 8881, the server would
+- * not update the file's metadata with the client's
+- * modified size
+- */
+- attrs.ia_mtime = attrs.ia_ctime = current_time(inode);
+- attrs.ia_valid = ATTR_MTIME | ATTR_CTIME | ATTR_DELEG;
+- inode_lock(inode);
+- err = notify_change(&nop_mnt_idmap, dentry, &attrs, NULL);
+- inode_unlock(inode);
+- if (err) {
+- nfs4_put_stid(&dp->dl_stid);
+- return nfserrno(err);
+- }
+- ncf->ncf_cur_fsize = ncf->ncf_cb_fsize;
+- *size = ncf->ncf_cur_fsize;
+- *modified = true;
+- }
+- nfs4_put_stid(&dp->dl_stid);
+- return 0;
++ nfsd_stats_wdeleg_getattr_inc(nn);
++ refcount_inc(&dp->dl_stid.sc_count);
++ ncf = &dp->dl_cb_fattr;
++ nfs4_cb_getattr(&dp->dl_cb_fattr);
++ spin_unlock(&ctx->flc_lock);
++
++ wait_on_bit_timeout(&ncf->ncf_cb_flags, CB_GETATTR_BUSY,
++ TASK_INTERRUPTIBLE, NFSD_CB_GETATTR_TIMEOUT);
++ if (ncf->ncf_cb_status) {
++ /* Recall delegation only if client didn't respond */
++ status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
++ if (status != nfserr_jukebox ||
++ !nfsd_wait_for_delegreturn(rqstp, inode))
++ goto out_status;
++ }
++ if (!ncf->ncf_file_modified &&
++ (ncf->ncf_initial_cinfo != ncf->ncf_cb_change ||
++ ncf->ncf_cur_fsize != ncf->ncf_cb_fsize))
++ ncf->ncf_file_modified = true;
++ if (ncf->ncf_file_modified) {
++ int err;
++
++ /*
++ * Per section 10.4.3 of RFC 8881, the server would
++ * not update the file's metadata with the client's
++ * modified size
++ */
++ attrs.ia_mtime = attrs.ia_ctime = current_time(inode);
++ attrs.ia_valid = ATTR_MTIME | ATTR_CTIME | ATTR_DELEG;
++ inode_lock(inode);
++ err = notify_change(&nop_mnt_idmap, dentry, &attrs, NULL);
++ inode_unlock(inode);
++ if (err) {
++ status = nfserrno(err);
++ goto out_status;
+ }
+- break;
++ ncf->ncf_cur_fsize = ncf->ncf_cb_fsize;
++ *size = ncf->ncf_cur_fsize;
++ *modified = true;
+ }
+- spin_unlock(&ctx->flc_lock);
+- return 0;
++ status = 0;
++out_status:
++ nfs4_put_stid(&dp->dl_stid);
++ return status;
+ }
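+The nfsd4_deleg_getattr_conflict() rewrite hoists all the slow work
+(callback, timeout wait, attribute fixup) out of the flc_lock-protected
+lease walk: the loop now only classifies the first relevant lease,
+using a sentinel pointer to mark a write lease owned by something other
+than nfsd. The sentinel trick:
+
+	#define NON_NFSD_LEASE ((void *)1)	/* never a valid kernel pointer */
+
+	if (fl->c.flc_type == F_WRLCK)
+		dp = (fl->fl_lmops == &nfsd_lease_mng_ops) ?
+			fl->c.flc_owner : NON_NFSD_LEASE;
+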
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 862bdf23120e8a..ef5061bb56da1e 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -350,7 +350,7 @@ static int nilfs_btree_node_broken(const struct nilfs_btree_node *node,
+ if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
+ level >= NILFS_BTREE_LEVEL_MAX ||
+ (flags & NILFS_BTREE_NODE_ROOT) ||
+- nchildren < 0 ||
++ nchildren <= 0 ||
+ nchildren > NILFS_BTREE_NODE_NCHILDREN_MAX(size))) {
+ nilfs_crit(inode->i_sb,
+ "bad btree node (ino=%lu, blocknr=%llu): level = %d, flags = 0x%x, nchildren = %d",
+@@ -381,7 +381,8 @@ static int nilfs_btree_root_broken(const struct nilfs_btree_node *node,
+ if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
+ level >= NILFS_BTREE_LEVEL_MAX ||
+ nchildren < 0 ||
+- nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) {
++ nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX ||
++ (nchildren == 0 && level > NILFS_BTREE_LEVEL_NODE_MIN))) {
+ nilfs_crit(inode->i_sb,
+ "bad btree root (ino=%lu): level = %d, flags = 0x%x, nchildren = %d",
+ inode->i_ino, level, flags, nchildren);
+@@ -1658,13 +1659,16 @@ static int nilfs_btree_check_delete(struct nilfs_bmap *btree, __u64 key)
+ int nchildren, ret;
+
+ root = nilfs_btree_get_root(btree);
++ nchildren = nilfs_btree_node_get_nchildren(root);
++ if (unlikely(nchildren == 0))
++ return 0;
++
+ switch (nilfs_btree_height(btree)) {
+ case 2:
+ bh = NULL;
+ node = root;
+ break;
+ case 3:
+- nchildren = nilfs_btree_node_get_nchildren(root);
+ if (nchildren > 1)
+ return 0;
+ ptr = nilfs_btree_node_get_ptr(root, nchildren - 1,
+@@ -1673,12 +1677,12 @@ static int nilfs_btree_check_delete(struct nilfs_bmap *btree, __u64 key)
+ if (ret < 0)
+ return ret;
+ node = (struct nilfs_btree_node *)bh->b_data;
++ nchildren = nilfs_btree_node_get_nchildren(node);
+ break;
+ default:
+ return 0;
+ }
+
+- nchildren = nilfs_btree_node_get_nchildren(node);
+ maxkey = nilfs_btree_node_get_key(node, nchildren - 1);
+ nextmaxkey = (nchildren > 1) ?
+ nilfs_btree_node_get_key(node, nchildren - 2) : 0;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 7ae885e6d5d739..d533b58e21c285 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -2406,7 +2406,7 @@ static int vfs_setup_quota_inode(struct inode *inode, int type)
+ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ unsigned int flags)
+ {
+- struct quota_format_type *fmt = find_quota_format(format_id);
++ struct quota_format_type *fmt;
+ struct quota_info *dqopt = sb_dqopt(sb);
+ int error;
+
+@@ -2416,6 +2416,7 @@ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ if (WARN_ON_ONCE(flags & DQUOT_SUSPENDED))
+ return -EINVAL;
+
++ fmt = find_quota_format(format_id);
+ if (!fmt)
+ return -ESRCH;
+ if (!sb->dq_op || !sb->s_qcop ||
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 9e859ba010cf12..7cbd580120d129 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -496,7 +496,7 @@ int ksmbd_vfs_write(struct ksmbd_work *work, struct ksmbd_file *fp,
+ int err = 0;
+
+ if (work->conn->connection_type) {
+- if (!(fp->daccess & FILE_WRITE_DATA_LE)) {
++ if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE))) {
+ pr_err("no right to write(%pD)\n", fp->filp);
+ err = -EACCES;
+ goto out;
+@@ -1115,9 +1115,10 @@ static bool __dir_empty(struct dir_context *ctx, const char *name, int namlen,
+ struct ksmbd_readdir_data *buf;
+
+ buf = container_of(ctx, struct ksmbd_readdir_data, ctx);
+- buf->dirent_count++;
++ if (!is_dot_dotdot(name, namlen))
++ buf->dirent_count++;
+
+- return buf->dirent_count <= 2;
++ return !buf->dirent_count;
+ }
+
+ /**
+@@ -1137,7 +1138,7 @@ int ksmbd_vfs_empty_dir(struct ksmbd_file *fp)
+ readdir_data.dirent_count = 0;
+
+ err = iterate_dir(fp->filp, &readdir_data.ctx);
+- if (readdir_data.dirent_count > 2)
++ if (readdir_data.dirent_count)
+ err = -ENOTEMPTY;
+ else
+ err = 0;
+@@ -1166,7 +1167,7 @@ static bool __caseless_lookup(struct dir_context *ctx, const char *name,
+ if (cmp < 0)
+ cmp = strncasecmp((char *)buf->private, name, namlen);
+ if (!cmp) {
+- memcpy((char *)buf->private, name, namlen);
++ memcpy((char *)buf->private, name, buf->used);
+ buf->dirent_count = 1;
+ return false;
+ }
+@@ -1234,10 +1235,7 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ char *filepath;
+ size_t path_len, remain_len;
+
+- filepath = kstrdup(name, GFP_KERNEL);
+- if (!filepath)
+- return -ENOMEM;
+-
++ filepath = name;
+ path_len = strlen(filepath);
+ remain_len = path_len;
+
+@@ -1280,10 +1278,9 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ err = -EINVAL;
+ out2:
+ path_put(parent_path);
+-out1:
+- kfree(filepath);
+ }
+
++out1:
+ if (!err) {
+ err = mnt_want_write(parent_path->mnt);
+ if (err) {
+diff --git a/include/acpi/acoutput.h b/include/acpi/acoutput.h
+index b1571dd96310af..5e0346142f983d 100644
+--- a/include/acpi/acoutput.h
++++ b/include/acpi/acoutput.h
+@@ -193,6 +193,7 @@
+ */
+ #ifndef ACPI_NO_ERROR_MESSAGES
+ #define AE_INFO _acpi_module_name, __LINE__
++#define ACPI_ONCE(_fn, _plist) { static char _done; if (!_done) { _done = 1; _fn _plist; } }
+
+ /*
+ * Error reporting. Callers module and line number are inserted by AE_INFO,
+@@ -201,8 +202,10 @@
+ */
+ #define ACPI_INFO(plist) acpi_info plist
+ #define ACPI_WARNING(plist) acpi_warning plist
++#define ACPI_WARNING_ONCE(plist) ACPI_ONCE(acpi_warning, plist)
+ #define ACPI_EXCEPTION(plist) acpi_exception plist
+ #define ACPI_ERROR(plist) acpi_error plist
++#define ACPI_ERROR_ONCE(plist) ACPI_ONCE(acpi_error, plist)
+ #define ACPI_BIOS_WARNING(plist) acpi_bios_warning plist
+ #define ACPI_BIOS_EXCEPTION(plist) acpi_bios_exception plist
+ #define ACPI_BIOS_ERROR(plist) acpi_bios_error plist
+@@ -214,8 +217,10 @@
+
+ #define ACPI_INFO(plist)
+ #define ACPI_WARNING(plist)
++#define ACPI_WARNING_ONCE(plist)
+ #define ACPI_EXCEPTION(plist)
+ #define ACPI_ERROR(plist)
++#define ACPI_ERROR_ONCE(plist)
+ #define ACPI_BIOS_WARNING(plist)
+ #define ACPI_BIOS_EXCEPTION(plist)
+ #define ACPI_BIOS_ERROR(plist)
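For illustration only (not part of the patch): the new ACPI_ONCE() wrapper latches on a per-call-site static flag, so each ACPI_ERROR_ONCE()/ACPI_WARNING_ONCE() site reports at most once; the flag is a plain char, so in principle a first-call race can print twice, which is acceptable for diagnostics. A minimal stand-alone sketch of the same pattern, with every name other than the macro shape invented for the example:

#include <stdio.h>

/* same shape as ACPI_ONCE(); report() and probe() are made up */
#define ONCE(_fn, _plist) \
	{ static char _done; if (!_done) { _done = 1; _fn _plist; } }

static void report(const char *msg)
{
	printf("error: %s\n", msg);
}

static void probe(void)
{
	/* each expansion gets its own static _done, so this fires
	 * once no matter how many times probe() runs */
	ONCE(report, ("bad register width"));
}

int main(void)
{
	for (int i = 0; i < 3; i++)
		probe();
	return 0;
}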
+diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
+index 930b6afba6f4d4..e1720d93066695 100644
+--- a/include/acpi/cppc_acpi.h
++++ b/include/acpi/cppc_acpi.h
+@@ -64,6 +64,8 @@ struct cpc_desc {
+ int cpu_id;
+ int write_cmd_status;
+ int write_cmd_id;
++ /* Lock used for RMW operations in cpc_write() */
++ spinlock_t rmw_lock;
+ struct cpc_register_resource cpc_regs[MAX_CPC_REG_ENT];
+ struct acpi_psd_package domain_info;
+ struct kobject kobj;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 3b94ec161e8ccf..70fa4ffc3879f5 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -694,6 +694,11 @@ enum bpf_type_flag {
+ /* DYNPTR points to xdp_buff */
+ DYNPTR_TYPE_XDP = BIT(16 + BPF_BASE_TYPE_BITS),
+
++ /* Memory must be aligned on some architectures, used in combination with
++ * MEM_FIXED_SIZE.
++ */
++ MEM_ALIGNED = BIT(17 + BPF_BASE_TYPE_BITS),
++
+ __BPF_TYPE_FLAG_MAX,
+ __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1,
+ };
+@@ -731,8 +736,6 @@ enum bpf_arg_type {
+ ARG_ANYTHING, /* any (initialized) argument is ok */
+ ARG_PTR_TO_SPIN_LOCK, /* pointer to bpf_spin_lock */
+ ARG_PTR_TO_SOCK_COMMON, /* pointer to sock_common */
+- ARG_PTR_TO_INT, /* pointer to int */
+- ARG_PTR_TO_LONG, /* pointer to long */
+ ARG_PTR_TO_SOCKET, /* pointer to bpf_sock (fullsock) */
+ ARG_PTR_TO_BTF_ID, /* pointer to in-kernel struct */
+ ARG_PTR_TO_RINGBUF_MEM, /* pointer to dynamically reserved ringbuf memory */
+@@ -919,6 +922,7 @@ static_assert(__BPF_REG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
+ */
+ struct bpf_insn_access_aux {
+ enum bpf_reg_type reg_type;
++ bool is_ldsx;
+ union {
+ int ctx_field_size;
+ struct {
+@@ -927,6 +931,7 @@ struct bpf_insn_access_aux {
+ };
+ };
+ struct bpf_verifier_log *log; /* for verbose logs */
++ bool is_retval; /* is accessing the function return value? */
+ };
+
+ static inline void
+diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
+index 1de7ece5d36d43..aefcd656425126 100644
+--- a/include/linux/bpf_lsm.h
++++ b/include/linux/bpf_lsm.h
+@@ -9,6 +9,7 @@
+
+ #include <linux/sched.h>
+ #include <linux/bpf.h>
++#include <linux/bpf_verifier.h>
+ #include <linux/lsm_hooks.h>
+
+ #ifdef CONFIG_BPF_LSM
+@@ -45,6 +46,8 @@ void bpf_inode_storage_free(struct inode *inode);
+
+ void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog, bpf_func_t *bpf_func);
+
++int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
++ struct bpf_retval_range *range);
+ #else /* !CONFIG_BPF_LSM */
+
+ static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
+@@ -78,6 +81,11 @@ static inline void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
+ {
+ }
+
++static inline int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
++ struct bpf_retval_range *range)
++{
++ return -EOPNOTSUPP;
++}
+ #endif /* CONFIG_BPF_LSM */
+
+ #endif /* _LINUX_BPF_LSM_H */
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 2df665fa2964d3..1b7f02fd759911 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -133,7 +133,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ #define annotate_unreachable() __annotate_unreachable(__COUNTER__)
+
+ /* Annotate a C jump table to allow objtool to follow the code flow */
+-#define __annotate_jump_table __section(".rodata..c_jump_table")
++#define __annotate_jump_table __section(".rodata..c_jump_table,\"a\",@progbits #")
+
+ #else /* !CONFIG_OBJTOOL */
+ #define annotate_reachable()
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index 01bee2b289c2a7..c68e37201a12ad 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -278,7 +278,7 @@ struct node_footer {
+ #define F2FS_INLINE_DATA 0x02 /* file inline data flag */
+ #define F2FS_INLINE_DENTRY 0x04 /* file inline dentry flag */
+ #define F2FS_DATA_EXIST 0x08 /* file inline data exist flag */
+-#define F2FS_INLINE_DOTS 0x10 /* file having implicit dot dentries */
++#define F2FS_INLINE_DOTS 0x10 /* file having implicit dot dentries (obsolete) */
+ #define F2FS_EXTRA_ATTR 0x20 /* file having extra attribute */
+ #define F2FS_PIN_FILE 0x40 /* file should not be gced */
+ #define F2FS_COMPRESS_RELEASED 0x80 /* file released compressed blocks */
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 855db460e08bc6..19c333fafe1138 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -114,6 +114,7 @@ LSM_HOOK(int, 0, path_notify, const struct path *path, u64 mask,
+ unsigned int obj_type)
+ LSM_HOOK(int, 0, inode_alloc_security, struct inode *inode)
+ LSM_HOOK(void, LSM_RET_VOID, inode_free_security, struct inode *inode)
++LSM_HOOK(void, LSM_RET_VOID, inode_free_security_rcu, void *inode_security)
+ LSM_HOOK(int, -EOPNOTSUPP, inode_init_security, struct inode *inode,
+ struct inode *dir, const struct qstr *qstr, struct xattr *xattrs,
+ int *xattr_count)
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index a2ade0ffe9e7d2..efd4a0655159cc 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -73,6 +73,7 @@ struct lsm_blob_sizes {
+ int lbs_cred;
+ int lbs_file;
+ int lbs_inode;
++ int lbs_sock;
+ int lbs_superblock;
+ int lbs_ipc;
+ int lbs_msg_msg;
+diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
+index c09cdcc99471e1..189140bf11fc40 100644
+--- a/include/linux/sbitmap.h
++++ b/include/linux/sbitmap.h
+@@ -40,7 +40,7 @@ struct sbitmap_word {
+ /**
+ * @swap_lock: serializes simultaneous updates of ->word and ->cleared
+ */
+- spinlock_t swap_lock;
++ raw_spinlock_t swap_lock;
+ } ____cacheline_aligned_in_smp;
+
+ /**
+diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
+index 0f038a1a033099..c3bca9c0bf2cf5 100644
+--- a/include/linux/soc/qcom/geni-se.h
++++ b/include/linux/soc/qcom/geni-se.h
+@@ -88,11 +88,15 @@ struct geni_se {
+ #define SE_GENI_M_IRQ_STATUS 0x610
+ #define SE_GENI_M_IRQ_EN 0x614
+ #define SE_GENI_M_IRQ_CLEAR 0x618
++#define SE_GENI_M_IRQ_EN_SET 0x61c
++#define SE_GENI_M_IRQ_EN_CLEAR 0x620
+ #define SE_GENI_S_CMD0 0x630
+ #define SE_GENI_S_CMD_CTRL_REG 0x634
+ #define SE_GENI_S_IRQ_STATUS 0x640
+ #define SE_GENI_S_IRQ_EN 0x644
+ #define SE_GENI_S_IRQ_CLEAR 0x648
++#define SE_GENI_S_IRQ_EN_SET 0x64c
++#define SE_GENI_S_IRQ_EN_CLEAR 0x650
+ #define SE_GENI_TX_FIFOn 0x700
+ #define SE_GENI_RX_FIFOn 0x780
+ #define SE_GENI_TX_FIFO_STATUS 0x800
+@@ -101,6 +105,8 @@ struct geni_se {
+ #define SE_GENI_RX_WATERMARK_REG 0x810
+ #define SE_GENI_RX_RFR_WATERMARK_REG 0x814
+ #define SE_GENI_IOS 0x908
++#define SE_GENI_M_GP_LENGTH 0x910
++#define SE_GENI_S_GP_LENGTH 0x914
+ #define SE_DMA_TX_IRQ_STAT 0xc40
+ #define SE_DMA_TX_IRQ_CLR 0xc44
+ #define SE_DMA_TX_FSM_RST 0xc58
+@@ -234,6 +240,9 @@ struct geni_se {
+ #define IO2_DATA_IN BIT(1)
+ #define RX_DATA_IN BIT(0)
+
++/* SE_GENI_M_GP_LENGTH and SE_GENI_S_GP_LENGTH fields */
++#define GP_LENGTH GENMASK(31, 0)
++
+ /* SE_DMA_TX_IRQ_STAT Register fields */
+ #define TX_DMA_DONE BIT(0)
+ #define TX_EOT BIT(1)
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 9f08a584d70785..0b9f1e598e3a6b 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -76,8 +76,23 @@ struct usbnet {
+ # define EVENT_LINK_CHANGE 11
+ # define EVENT_SET_RX_MODE 12
+ # define EVENT_NO_IP_ALIGN 13
++/* This one is special, as it indicates that the device is going away;
++ * there are cyclic dependencies between the tasklet, timer and bh
++ * that must be broken.
++ */
++# define EVENT_UNPLUG 31
+ };
+
++static inline bool usbnet_going_away(struct usbnet *ubn)
++{
++ return test_bit(EVENT_UNPLUG, &ubn->flags);
++}
++
++static inline void usbnet_mark_going_away(struct usbnet *ubn)
++{
++ set_bit(EVENT_UNPLUG, &ubn->flags);
++}
++
+ static inline struct usb_driver *driver_of(struct usb_interface *intf)
+ {
+ return to_usb_driver(intf->dev.driver);
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 4b16844c6bc29a..306137a15d0753 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -118,7 +118,9 @@ struct virtio_admin_cmd {
+ * struct virtio_device - representation of a device using virtio
+ * @index: unique position on the virtio bus
+ * @failed: saved value for VIRTIO_CONFIG_S_FAILED bit (for restore)
+- * @config_enabled: configuration change reporting enabled
++ * @config_core_enabled: configuration change reporting enabled by core
++ * @config_driver_disabled: configuration change reporting disabled by
++ * a driver
+ * @config_change_pending: configuration change reported while disabled
+ * @config_lock: protects configuration change reporting
+ * @vqs_list_lock: protects @vqs.
+@@ -135,7 +137,8 @@ struct virtio_admin_cmd {
+ struct virtio_device {
+ int index;
+ bool failed;
+- bool config_enabled;
++ bool config_core_enabled;
++ bool config_driver_disabled;
+ bool config_change_pending;
+ spinlock_t config_lock;
+ spinlock_t vqs_list_lock;
+@@ -166,6 +169,10 @@ void __virtqueue_break(struct virtqueue *_vq);
+ void __virtqueue_unbreak(struct virtqueue *_vq);
+
+ void virtio_config_changed(struct virtio_device *dev);
++
++void virtio_config_driver_disable(struct virtio_device *dev);
++void virtio_config_driver_enable(struct virtio_device *dev);
++
+ #ifdef CONFIG_PM_SLEEP
+ int virtio_device_freeze(struct virtio_device *dev);
+ int virtio_device_restore(struct virtio_device *dev);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 1a32e602630e39..88265d37aa72e3 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -2257,8 +2257,8 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ bool mgmt_connected);
+ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ u8 link_type, u8 addr_type, u8 status);
+-void mgmt_connect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+- u8 addr_type, u8 status);
++void mgmt_connect_failed(struct hci_dev *hdev, struct hci_conn *conn,
++ u8 status);
+ void mgmt_pin_code_request(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 secure);
+ void mgmt_pin_code_reply_complete(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ u8 status);
+diff --git a/include/net/ip.h b/include/net/ip.h
+index c5606cadb1a552..82248813619e3f 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -795,6 +795,8 @@ static inline void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb)
+ }
+
+ bool icmp_global_allow(void);
++void icmp_global_consume(void);
++
+ extern int sysctl_icmp_msgs_per_sec;
+ extern int sysctl_icmp_msgs_burst;
+
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 0a04eaf5343c64..fae37598c11069 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -994,8 +994,9 @@ enum mac80211_tx_info_flags {
+ * of their QoS TID or other priority field values.
+ * @IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX: first MLO TX, used mostly internally
+ * for sequence number assignment
+- * @IEEE80211_TX_CTRL_SCAN_TX: Indicates that this frame is transmitted
+- * due to scanning, not in normal operation on the interface.
++ * @IEEE80211_TX_CTRL_DONT_USE_RATE_MASK: Don't use rate mask for this frame
++ * which is transmitted due to scanning or offchannel TX, not in normal
++ * operation on the interface.
+ * @IEEE80211_TX_CTRL_MLO_LINK: If not @IEEE80211_LINK_UNSPECIFIED, this
+ * frame should be transmitted on the specific link. This really is
+ * only relevant for frames that do not have data present, and is
+@@ -1016,7 +1017,7 @@ enum mac80211_tx_control_flags {
+ IEEE80211_TX_CTRL_NO_SEQNO = BIT(7),
+ IEEE80211_TX_CTRL_DONT_REORDER = BIT(8),
+ IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX = BIT(9),
+- IEEE80211_TX_CTRL_SCAN_TX = BIT(10),
++ IEEE80211_TX_CTRL_DONT_USE_RATE_MASK = BIT(10),
+ IEEE80211_TX_CTRL_MLO_LINK = 0xf0000000,
+ };
+
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 1bfdd16890fac7..2be4738eae1cc1 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1674,6 +1674,7 @@ struct nft_trans_rule {
+
+ struct nft_trans_set {
+ struct nft_trans_binding nft_trans_binding;
++ struct list_head list_trans_newset;
+ struct nft_set *set;
+ u32 set_id;
+ u32 gc_int;
+@@ -1875,6 +1876,7 @@ static inline int nft_request_module(struct net *net, const char *fmt, ...) { re
+ struct nftables_pernet {
+ struct list_head tables;
+ struct list_head commit_list;
++ struct list_head commit_set_list;
+ struct list_head binding_list;
+ struct list_head module_list;
+ struct list_head notify_list;
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 2aac11e7e1cc5d..196c148fce8a81 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2434,9 +2434,26 @@ static inline s64 tcp_rto_delta_us(const struct sock *sk)
+ {
+ const struct sk_buff *skb = tcp_rtx_queue_head(sk);
+ u32 rto = inet_csk(sk)->icsk_rto;
+- u64 rto_time_stamp_us = tcp_skb_timestamp_us(skb) + jiffies_to_usecs(rto);
+
+- return rto_time_stamp_us - tcp_sk(sk)->tcp_mstamp;
++ if (likely(skb)) {
++ u64 rto_time_stamp_us = tcp_skb_timestamp_us(skb) + jiffies_to_usecs(rto);
++
++ return rto_time_stamp_us - tcp_sk(sk)->tcp_mstamp;
++ } else {
++ WARN_ONCE(1,
++ "rtx queue emtpy: "
++ "out:%u sacked:%u lost:%u retrans:%u "
++ "tlp_high_seq:%u sk_state:%u ca_state:%u "
++ "advmss:%u mss_cache:%u pmtu:%u\n",
++ tcp_sk(sk)->packets_out, tcp_sk(sk)->sacked_out,
++ tcp_sk(sk)->lost_out, tcp_sk(sk)->retrans_out,
++ tcp_sk(sk)->tlp_high_seq, sk->sk_state,
++ inet_csk(sk)->icsk_ca_state,
++ tcp_sk(sk)->advmss, tcp_sk(sk)->mss_cache,
++ inet_csk(sk)->icsk_pmtu_cookie);
++ return jiffies_to_usecs(rto);
++ }
++
+ }
+
+ /*
+diff --git a/include/sound/tas2563-tlv.h b/include/sound/tas2563-tlv.h
+new file mode 100644
+index 00000000000000..faa3e194f73b16
+--- /dev/null
++++ b/include/sound/tas2563-tlv.h
+@@ -0,0 +1,279 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++//
++// ALSA SoC Texas Instruments TAS2563 Audio Smart Amplifier
++//
++// Copyright (C) 2022 - 2024 Texas Instruments Incorporated
++// https://www.ti.com
++//
++// The TAS2563 driver implements a flexible and configurable
++// algorithm coefficient setting for one, two, or even multiple
++// TAS2563 chips.
++//
++// Author: Shenghao Ding <shenghao-ding@ti.com>
++//
++
++#ifndef __TAS2563_TLV_H__
++#define __TAS2563_TLV_H__
++
++static const __maybe_unused DECLARE_TLV_DB_SCALE(tas2563_dvc_tlv, -12150, 50, 1);
++
++/* pow(10, db/20) * pow(2,30) */
++static const unsigned char tas2563_dvc_table[][4] = {
++ { 0X00, 0X00, 0X00, 0X00 }, /* -121.5db */
++ { 0X00, 0X00, 0X03, 0XBC }, /* -121.0db */
++ { 0X00, 0X00, 0X03, 0XF5 }, /* -120.5db */
++ { 0X00, 0X00, 0X04, 0X31 }, /* -120.0db */
++ { 0X00, 0X00, 0X04, 0X71 }, /* -119.5db */
++ { 0X00, 0X00, 0X04, 0XB4 }, /* -119.0db */
++ { 0X00, 0X00, 0X04, 0XFC }, /* -118.5db */
++ { 0X00, 0X00, 0X05, 0X47 }, /* -118.0db */
++ { 0X00, 0X00, 0X05, 0X97 }, /* -117.5db */
++ { 0X00, 0X00, 0X05, 0XEC }, /* -117.0db */
++ { 0X00, 0X00, 0X06, 0X46 }, /* -116.5db */
++ { 0X00, 0X00, 0X06, 0XA5 }, /* -116.0db */
++ { 0X00, 0X00, 0X07, 0X0A }, /* -115.5db */
++ { 0X00, 0X00, 0X07, 0X75 }, /* -115.0db */
++ { 0X00, 0X00, 0X07, 0XE6 }, /* -114.5db */
++ { 0X00, 0X00, 0X08, 0X5E }, /* -114.0db */
++ { 0X00, 0X00, 0X08, 0XDD }, /* -113.5db */
++ { 0X00, 0X00, 0X09, 0X63 }, /* -113.0db */
++ { 0X00, 0X00, 0X09, 0XF2 }, /* -112.5db */
++ { 0X00, 0X00, 0X0A, 0X89 }, /* -112.0db */
++ { 0X00, 0X00, 0X0B, 0X28 }, /* -111.5db */
++ { 0X00, 0X00, 0X0B, 0XD2 }, /* -111.0db */
++ { 0X00, 0X00, 0X0C, 0X85 }, /* -110.5db */
++ { 0X00, 0X00, 0X0D, 0X43 }, /* -110.0db */
++ { 0X00, 0X00, 0X0E, 0X0C }, /* -109.5db */
++ { 0X00, 0X00, 0X0E, 0XE1 }, /* -109.0db */
++ { 0X00, 0X00, 0X0F, 0XC3 }, /* -108.5db */
++ { 0X00, 0X00, 0X10, 0XB2 }, /* -108.0db */
++ { 0X00, 0X00, 0X11, 0XAF }, /* -107.5db */
++ { 0X00, 0X00, 0X12, 0XBC }, /* -107.0db */
++ { 0X00, 0X00, 0X13, 0XD8 }, /* -106.5db */
++ { 0X00, 0X00, 0X15, 0X05 }, /* -106.0db */
++ { 0X00, 0X00, 0X16, 0X44 }, /* -105.5db */
++ { 0X00, 0X00, 0X17, 0X96 }, /* -105.0db */
++ { 0X00, 0X00, 0X18, 0XFB }, /* -104.5db */
++ { 0X00, 0X00, 0X1A, 0X76 }, /* -104.0db */
++ { 0X00, 0X00, 0X1C, 0X08 }, /* -103.5db */
++ { 0X00, 0X00, 0X1D, 0XB1 }, /* -103.0db */
++ { 0X00, 0X00, 0X1F, 0X73 }, /* -102.5db */
++ { 0X00, 0X00, 0X21, 0X51 }, /* -102.0db */
++ { 0X00, 0X00, 0X23, 0X4A }, /* -101.5db */
++ { 0X00, 0X00, 0X25, 0X61 }, /* -101.0db */
++ { 0X00, 0X00, 0X27, 0X98 }, /* -100.5db */
++ { 0X00, 0X00, 0X29, 0XF1 }, /* -100.0db */
++ { 0X00, 0X00, 0X2C, 0X6D }, /* -99.5db */
++ { 0X00, 0X00, 0X2F, 0X0F }, /* -99.0db */
++ { 0X00, 0X00, 0X31, 0XD9 }, /* -98.5db */
++ { 0X00, 0X00, 0X34, 0XCD }, /* -98.0db */
++ { 0X00, 0X00, 0X37, 0XEE }, /* -97.5db */
++ { 0X00, 0X00, 0X3B, 0X3F }, /* -97.0db */
++ { 0X00, 0X00, 0X3E, 0XC1 }, /* -96.5db */
++ { 0X00, 0X00, 0X42, 0X79 }, /* -96.0db */
++ { 0X00, 0X00, 0X46, 0X6A }, /* -95.5db */
++ { 0X00, 0X00, 0X4A, 0X96 }, /* -95.0db */
++ { 0X00, 0X00, 0X4F, 0X01 }, /* -94.5db */
++ { 0X00, 0X00, 0X53, 0XAF }, /* -94.0db */
++ { 0X00, 0X00, 0X58, 0XA5 }, /* -93.5db */
++ { 0X00, 0X00, 0X5D, 0XE6 }, /* -93.0db */
++ { 0X00, 0X00, 0X63, 0X76 }, /* -92.5db */
++ { 0X00, 0X00, 0X69, 0X5B }, /* -92.0db */
++ { 0X00, 0X00, 0X6F, 0X99 }, /* -91.5db */
++ { 0X00, 0X00, 0X76, 0X36 }, /* -91.0db */
++ { 0X00, 0X00, 0X7D, 0X37 }, /* -90.5db */
++ { 0X00, 0X00, 0X84, 0XA2 }, /* -90.0db */
++ { 0X00, 0X00, 0X8C, 0X7E }, /* -89.5db */
++ { 0X00, 0X00, 0X94, 0XD1 }, /* -89.0db */
++ { 0X00, 0X00, 0X9D, 0XA3 }, /* -88.5db */
++ { 0X00, 0X00, 0XA6, 0XFA }, /* -88.0db */
++ { 0X00, 0X00, 0XB0, 0XDF }, /* -87.5db */
++ { 0X00, 0X00, 0XBB, 0X5A }, /* -87.0db */
++ { 0X00, 0X00, 0XC6, 0X74 }, /* -86.5db */
++ { 0X00, 0X00, 0XD2, 0X36 }, /* -86.0db */
++ { 0X00, 0X00, 0XDE, 0XAB }, /* -85.5db */
++ { 0X00, 0X00, 0XEB, 0XDC }, /* -85.0db */
++ { 0X00, 0X00, 0XF9, 0XD6 }, /* -84.5db */
++ { 0X00, 0X01, 0X08, 0XA4 }, /* -84.0db */
++ { 0X00, 0X01, 0X18, 0X52 }, /* -83.5db */
++ { 0X00, 0X01, 0X28, 0XEF }, /* -83.0db */
++ { 0X00, 0X01, 0X3A, 0X87 }, /* -82.5db */
++ { 0X00, 0X01, 0X4D, 0X2A }, /* -82.0db */
++ { 0X00, 0X01, 0X60, 0XE8 }, /* -81.5db */
++ { 0X00, 0X01, 0X75, 0XD1 }, /* -81.0db */
++ { 0X00, 0X01, 0X8B, 0XF7 }, /* -80.5db */
++ { 0X00, 0X01, 0XA3, 0X6E }, /* -80.0db */
++ { 0X00, 0X01, 0XBC, 0X48 }, /* -79.5db */
++ { 0X00, 0X01, 0XD6, 0X9B }, /* -79.0db */
++ { 0X00, 0X01, 0XF2, 0X7E }, /* -78.5db */
++ { 0X00, 0X02, 0X10, 0X08 }, /* -78.0db */
++ { 0X00, 0X02, 0X2F, 0X51 }, /* -77.5db */
++ { 0X00, 0X02, 0X50, 0X76 }, /* -77.0db */
++ { 0X00, 0X02, 0X73, 0X91 }, /* -76.5db */
++ { 0X00, 0X02, 0X98, 0XC0 }, /* -76.0db */
++ { 0X00, 0X02, 0XC0, 0X24 }, /* -75.5db */
++ { 0X00, 0X02, 0XE9, 0XDD }, /* -75.0db */
++ { 0X00, 0X03, 0X16, 0X0F }, /* -74.5db */
++ { 0X00, 0X03, 0X44, 0XDF }, /* -74.0db */
++ { 0X00, 0X03, 0X76, 0X76 }, /* -73.5db */
++ { 0X00, 0X03, 0XAA, 0XFC }, /* -73.0db */
++ { 0X00, 0X03, 0XE2, 0XA0 }, /* -72.5db */
++ { 0X00, 0X04, 0X1D, 0X8F }, /* -72.0db */
++ { 0X00, 0X04, 0X5B, 0XFD }, /* -71.5db */
++ { 0X00, 0X04, 0X9E, 0X1D }, /* -71.0db */
++ { 0X00, 0X04, 0XE4, 0X29 }, /* -70.5db */
++ { 0X00, 0X05, 0X2E, 0X5A }, /* -70.0db */
++ { 0X00, 0X05, 0X7C, 0XF2 }, /* -69.5db */
++ { 0X00, 0X05, 0XD0, 0X31 }, /* -69.0db */
++ { 0X00, 0X06, 0X28, 0X60 }, /* -68.5db */
++ { 0X00, 0X06, 0X85, 0XC8 }, /* -68.0db */
++ { 0X00, 0X06, 0XE8, 0XB9 }, /* -67.5db */
++ { 0X00, 0X07, 0X51, 0X86 }, /* -67.0db */
++ { 0X00, 0X07, 0XC0, 0X8A }, /* -66.5db */
++ { 0X00, 0X08, 0X36, 0X21 }, /* -66.0db */
++ { 0X00, 0X08, 0XB2, 0XB0 }, /* -65.5db */
++ { 0X00, 0X09, 0X36, 0XA1 }, /* -65.0db */
++ { 0X00, 0X09, 0XC2, 0X63 }, /* -64.5db */
++ { 0X00, 0X0A, 0X56, 0X6D }, /* -64.0db */
++ { 0X00, 0X0A, 0XF3, 0X3C }, /* -63.5db */
++ { 0X00, 0X0B, 0X99, 0X56 }, /* -63.0db */
++ { 0X00, 0X0C, 0X49, 0X48 }, /* -62.5db */
++ { 0X00, 0X0D, 0X03, 0XA7 }, /* -62.0db */
++ { 0X00, 0X0D, 0XC9, 0X11 }, /* -61.5db */
++ { 0X00, 0X0E, 0X9A, 0X2D }, /* -61.0db */
++ { 0X00, 0X0F, 0X77, 0XAD }, /* -60.5db */
++ { 0X00, 0X10, 0X62, 0X4D }, /* -60.0db */
++ { 0X00, 0X11, 0X5A, 0XD5 }, /* -59.5db */
++ { 0X00, 0X12, 0X62, 0X16 }, /* -59.0db */
++ { 0X00, 0X13, 0X78, 0XF0 }, /* -58.5db */
++ { 0X00, 0X14, 0XA0, 0X50 }, /* -58.0db */
++ { 0X00, 0X15, 0XD9, 0X31 }, /* -57.5db */
++ { 0X00, 0X17, 0X24, 0X9C }, /* -57.0db */
++ { 0X00, 0X18, 0X83, 0XAA }, /* -56.5db */
++ { 0X00, 0X19, 0XF7, 0X86 }, /* -56.0db */
++ { 0X00, 0X1B, 0X81, 0X6A }, /* -55.5db */
++ { 0X00, 0X1D, 0X22, 0XA4 }, /* -55.0db */
++ { 0X00, 0X1E, 0XDC, 0X98 }, /* -54.5db */
++ { 0X00, 0X20, 0XB0, 0XBC }, /* -54.0db */
++ { 0X00, 0X22, 0XA0, 0X9D }, /* -53.5db */
++ { 0X00, 0X24, 0XAD, 0XE0 }, /* -53.0db */
++ { 0X00, 0X26, 0XDA, 0X43 }, /* -52.5db */
++ { 0X00, 0X29, 0X27, 0X9D }, /* -52.0db */
++ { 0X00, 0X2B, 0X97, 0XE3 }, /* -51.5db */
++ { 0X00, 0X2E, 0X2D, 0X27 }, /* -51.0db */
++ { 0X00, 0X30, 0XE9, 0X9A }, /* -50.5db */
++ { 0X00, 0X33, 0XCF, 0X8D }, /* -50.0db */
++ { 0X00, 0X36, 0XE1, 0X78 }, /* -49.5db */
++ { 0X00, 0X3A, 0X21, 0XF3 }, /* -49.0db */
++ { 0X00, 0X3D, 0X93, 0XC3 }, /* -48.5db */
++ { 0X00, 0X41, 0X39, 0XD3 }, /* -48.0db */
++ { 0X00, 0X45, 0X17, 0X3B }, /* -47.5db */
++ { 0X00, 0X49, 0X2F, 0X44 }, /* -47.0db */
++ { 0X00, 0X4D, 0X85, 0X66 }, /* -46.5db */
++ { 0X00, 0X52, 0X1D, 0X50 }, /* -46.0db */
++ { 0X00, 0X56, 0XFA, 0XE8 }, /* -45.5db */
++ { 0X00, 0X5C, 0X22, 0X4E }, /* -45.0db */
++ { 0X00, 0X61, 0X97, 0XE1 }, /* -44.5db */
++ { 0X00, 0X67, 0X60, 0X44 }, /* -44.0db */
++ { 0X00, 0X6D, 0X80, 0X60 }, /* -43.5db */
++ { 0X00, 0X73, 0XFD, 0X65 }, /* -43.0db */
++ { 0X00, 0X7A, 0XDC, 0XD7 }, /* -42.5db */
++ { 0X00, 0X82, 0X24, 0X8A }, /* -42.0db */
++ { 0X00, 0X89, 0XDA, 0XAB }, /* -41.5db */
++ { 0X00, 0X92, 0X05, 0XC6 }, /* -41.0db */
++ { 0X00, 0X9A, 0XAC, 0XC8 }, /* -40.5db */
++ { 0X00, 0XA3, 0XD7, 0X0A }, /* -40.0db */
++ { 0X00, 0XAD, 0X8C, 0X52 }, /* -39.5db */
++ { 0X00, 0XB7, 0XD4, 0XDD }, /* -39.0db */
++ { 0X00, 0XC2, 0XB9, 0X65 }, /* -38.5db */
++ { 0X00, 0XCE, 0X43, 0X28 }, /* -38.0db */
++ { 0X00, 0XDA, 0X7B, 0XF1 }, /* -37.5db */
++ { 0X00, 0XE7, 0X6E, 0X1E }, /* -37.0db */
++ { 0X00, 0XF5, 0X24, 0XAC }, /* -36.5db */
++ { 0X01, 0X03, 0XAB, 0X3D }, /* -36.0db */
++ { 0X01, 0X13, 0X0E, 0X24 }, /* -35.5db */
++ { 0X01, 0X23, 0X5A, 0X71 }, /* -35.0db */
++ { 0X01, 0X34, 0X9D, 0XF8 }, /* -34.5db */
++ { 0X01, 0X46, 0XE7, 0X5D }, /* -34.0db */
++ { 0X01, 0X5A, 0X46, 0X27 }, /* -33.5db */
++ { 0X01, 0X6E, 0XCA, 0XC5 }, /* -33.0db */
++ { 0X01, 0X84, 0X86, 0X9F }, /* -32.5db */
++ { 0X01, 0X9B, 0X8C, 0X27 }, /* -32.0db */
++ { 0X01, 0XB3, 0XEE, 0XE5 }, /* -31.5db */
++ { 0X01, 0XCD, 0XC3, 0X8C }, /* -31.0db */
++ { 0X01, 0XE9, 0X20, 0X05 }, /* -30.5db */
++ { 0X02, 0X06, 0X1B, 0X89 }, /* -30.0db */
++ { 0X02, 0X24, 0XCE, 0XB0 }, /* -29.5db */
++ { 0X02, 0X45, 0X53, 0X85 }, /* -29.0db */
++ { 0X02, 0X67, 0XC5, 0XA2 }, /* -28.5db */
++ { 0X02, 0X8C, 0X42, 0X3F }, /* -28.0db */
++ { 0X02, 0XB2, 0XE8, 0X55 }, /* -27.5db */
++ { 0X02, 0XDB, 0XD8, 0XAD }, /* -27.0db */
++ { 0X03, 0X07, 0X36, 0X05 }, /* -26.5db */
++ { 0X03, 0X35, 0X25, 0X29 }, /* -26.0db */
++ { 0X03, 0X65, 0XCD, 0X13 }, /* -25.5db */
++ { 0X03, 0X99, 0X57, 0X0C }, /* -25.0db */
++ { 0X03, 0XCF, 0XEE, 0XCF }, /* -24.5db */
++ { 0X04, 0X09, 0XC2, 0XB0 }, /* -24.0db */
++ { 0X04, 0X47, 0X03, 0XC1 }, /* -23.5db */
++ { 0X04, 0X87, 0XE5, 0XFB }, /* -23.0db */
++ { 0X04, 0XCC, 0XA0, 0X6D }, /* -22.5db */
++ { 0X05, 0X15, 0X6D, 0X68 }, /* -22.0db */
++ { 0X05, 0X62, 0X8A, 0XB3 }, /* -21.5db */
++ { 0X05, 0XB4, 0X39, 0XBC }, /* -21.0db */
++ { 0X06, 0X0A, 0XBF, 0XD4 }, /* -20.5db */
++ { 0X06, 0X66, 0X66, 0X66 }, /* -20.0db */
++ { 0X06, 0XC7, 0X7B, 0X36 }, /* -19.5db */
++ { 0X07, 0X2E, 0X50, 0XA6 }, /* -19.0db */
++ { 0X07, 0X9B, 0X3D, 0XF6 }, /* -18.5db */
++ { 0X08, 0X0E, 0X9F, 0X96 }, /* -18.0db */
++ { 0X08, 0X88, 0XD7, 0X6D }, /* -17.5db */
++ { 0X09, 0X0A, 0X4D, 0X2F }, /* -17.0db */
++ { 0X09, 0X93, 0X6E, 0XB8 }, /* -16.5db */
++ { 0X0A, 0X24, 0XB0, 0X62 }, /* -16.0db */
++ { 0X0A, 0XBE, 0X8D, 0X70 }, /* -15.5db */
++ { 0X0B, 0X61, 0X88, 0X71 }, /* -15.0db */
++ { 0X0C, 0X0E, 0X2B, 0XB0 }, /* -14.5db */
++ { 0X0C, 0XC5, 0X09, 0XAB }, /* -14.0db */
++ { 0X0D, 0X86, 0XBD, 0X8D }, /* -13.5db */
++ { 0X0E, 0X53, 0XEB, 0XB3 }, /* -13.0db */
++ { 0X0F, 0X2D, 0X42, 0X38 }, /* -12.5db */
++ { 0X10, 0X13, 0X79, 0X87 }, /* -12.0db */
++ { 0X11, 0X07, 0X54, 0XF9 }, /* -11.5db */
++ { 0X12, 0X09, 0XA3, 0X7A }, /* -11.0db */
++ { 0X13, 0X1B, 0X40, 0X39 }, /* -10.5db */
++ { 0X14, 0X3D, 0X13, 0X62 }, /* -10.0db */
++ { 0X15, 0X70, 0X12, 0XE1 }, /* -9.5db */
++ { 0X16, 0XB5, 0X43, 0X37 }, /* -9.0db */
++ { 0X18, 0X0D, 0XB8, 0X54 }, /* -8.5db */
++ { 0X19, 0X7A, 0X96, 0X7F }, /* -8.0db */
++ { 0X1A, 0XFD, 0X13, 0X54 }, /* -7.5db */
++ { 0X1C, 0X96, 0X76, 0XC6 }, /* -7.0db */
++ { 0X1E, 0X48, 0X1C, 0X37 }, /* -6.5db */
++ { 0X20, 0X13, 0X73, 0X9E }, /* -6.0db */
++ { 0X21, 0XFA, 0X02, 0XBF }, /* -5.5db */
++ { 0X23, 0XFD, 0X66, 0X78 }, /* -5.0db */
++ { 0X26, 0X1F, 0X54, 0X1C }, /* -4.5db */
++ { 0X28, 0X61, 0X9A, 0XE9 }, /* -4.0db */
++ { 0X2A, 0XC6, 0X25, 0X91 }, /* -3.5db */
++ { 0X2D, 0X4E, 0XFB, 0XD5 }, /* -3.0db */
++ { 0X2F, 0XFE, 0X44, 0X48 }, /* -2.5db */
++ { 0X32, 0XD6, 0X46, 0X17 }, /* -2.0db */
++ { 0X35, 0XD9, 0X6B, 0X02 }, /* -1.5db */
++ { 0X39, 0X0A, 0X41, 0X5F }, /* -1.0db */
++ { 0X3C, 0X6B, 0X7E, 0X4F }, /* -0.5db */
++ { 0X40, 0X00, 0X00, 0X00 }, /* 0.0db */
++ { 0X43, 0XCA, 0XD0, 0X22 }, /* 0.5db */
++ { 0X47, 0XCF, 0X26, 0X7D }, /* 1.0db */
++ { 0X4C, 0X10, 0X6B, 0XA5 }, /* 1.5db */
++ { 0X50, 0X92, 0X3B, 0XE3 }, /* 2.0db */
++ { 0X55, 0X58, 0X6A, 0X46 }, /* 2.5db */
++ { 0X5A, 0X67, 0X03, 0XDF }, /* 3.0db */
++ { 0X5F, 0XC2, 0X53, 0X32 }, /* 3.5db */
++ { 0X65, 0X6E, 0XE3, 0XDB }, /* 4.0db */
++ { 0X6B, 0X71, 0X86, 0X68 }, /* 4.5db */
++ { 0X71, 0XCF, 0X54, 0X71 }, /* 5.0db */
++ { 0X78, 0X8D, 0XB4, 0XE9 }, /* 5.5db */
++ { 0X7F, 0XFF, 0XFF, 0XFF }, /* 6.0db */
++};
++#endif
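A quick aside, outside the patch itself: per the header comment, each four-byte row of tas2563_dvc_table is the volume word pow(10, db/20) * pow(2,30), stored most significant byte first. The endpoints are pinned rather than computed: the -121.5db mute row is all zeros, and the 6.0db row saturates at 0x7FFFFFFF here, where the old tas2781-tlv.h copy removed below used 0xFFFFFFFF (negative if read as s32). A small check of a few formula-derived rows, assuming truncation toward zero:

/* build: cc dvc_check.c -lm */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* rows that follow the formula directly (endpoints are pinned) */
	const double db[] = { -121.0, -20.0, 0.0 };

	for (unsigned i = 0; i < sizeof(db) / sizeof(db[0]); i++) {
		uint32_t w = (uint32_t)(pow(10.0, db[i] / 20.0) *
					pow(2.0, 30.0));

		/* prints 0X00 0X00 0X03 0XBC, 0X06 0X66 0X66 0X66 and
		 * 0X40 0X00 0X00 0X00, matching the table rows above */
		printf("%7.1fdb -> { 0X%02X, 0X%02X, 0X%02X, 0X%02X }\n",
		       db[i], w >> 24, (w >> 16) & 0xFF,
		       (w >> 8) & 0xFF, w & 0xFF);
	}
	return 0;
}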
+diff --git a/include/sound/tas2781-tlv.h b/include/sound/tas2781-tlv.h
+index 00fd4d449ff35c..d87263e43fdb61 100644
+--- a/include/sound/tas2781-tlv.h
++++ b/include/sound/tas2781-tlv.h
+@@ -17,265 +17,5 @@
+
+ static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 100, 0);
+ static const __maybe_unused DECLARE_TLV_DB_SCALE(amp_vol_tlv, 1100, 50, 0);
+-static const __maybe_unused DECLARE_TLV_DB_SCALE(tas2563_dvc_tlv, -12150, 50, 1);
+
+-/* pow(10, db/20) * pow(2,30) */
+-static const __maybe_unused unsigned char tas2563_dvc_table[][4] = {
+- { 0X00, 0X00, 0X00, 0X00 }, /* -121.5db */
+- { 0X00, 0X00, 0X03, 0XBC }, /* -121.0db */
+- { 0X00, 0X00, 0X03, 0XF5 }, /* -120.5db */
+- { 0X00, 0X00, 0X04, 0X31 }, /* -120.0db */
+- { 0X00, 0X00, 0X04, 0X71 }, /* -119.5db */
+- { 0X00, 0X00, 0X04, 0XB4 }, /* -119.0db */
+- { 0X00, 0X00, 0X04, 0XFC }, /* -118.5db */
+- { 0X00, 0X00, 0X05, 0X47 }, /* -118.0db */
+- { 0X00, 0X00, 0X05, 0X97 }, /* -117.5db */
+- { 0X00, 0X00, 0X05, 0XEC }, /* -117.0db */
+- { 0X00, 0X00, 0X06, 0X46 }, /* -116.5db */
+- { 0X00, 0X00, 0X06, 0XA5 }, /* -116.0db */
+- { 0X00, 0X00, 0X07, 0X0A }, /* -115.5db */
+- { 0X00, 0X00, 0X07, 0X75 }, /* -115.0db */
+- { 0X00, 0X00, 0X07, 0XE6 }, /* -114.5db */
+- { 0X00, 0X00, 0X08, 0X5E }, /* -114.0db */
+- { 0X00, 0X00, 0X08, 0XDD }, /* -113.5db */
+- { 0X00, 0X00, 0X09, 0X63 }, /* -113.0db */
+- { 0X00, 0X00, 0X09, 0XF2 }, /* -112.5db */
+- { 0X00, 0X00, 0X0A, 0X89 }, /* -112.0db */
+- { 0X00, 0X00, 0X0B, 0X28 }, /* -111.5db */
+- { 0X00, 0X00, 0X0B, 0XD2 }, /* -111.0db */
+- { 0X00, 0X00, 0X0C, 0X85 }, /* -110.5db */
+- { 0X00, 0X00, 0X0D, 0X43 }, /* -110.0db */
+- { 0X00, 0X00, 0X0E, 0X0C }, /* -109.5db */
+- { 0X00, 0X00, 0X0E, 0XE1 }, /* -109.0db */
+- { 0X00, 0X00, 0X0F, 0XC3 }, /* -108.5db */
+- { 0X00, 0X00, 0X10, 0XB2 }, /* -108.0db */
+- { 0X00, 0X00, 0X11, 0XAF }, /* -107.5db */
+- { 0X00, 0X00, 0X12, 0XBC }, /* -107.0db */
+- { 0X00, 0X00, 0X13, 0XD8 }, /* -106.5db */
+- { 0X00, 0X00, 0X15, 0X05 }, /* -106.0db */
+- { 0X00, 0X00, 0X16, 0X44 }, /* -105.5db */
+- { 0X00, 0X00, 0X17, 0X96 }, /* -105.0db */
+- { 0X00, 0X00, 0X18, 0XFB }, /* -104.5db */
+- { 0X00, 0X00, 0X1A, 0X76 }, /* -104.0db */
+- { 0X00, 0X00, 0X1C, 0X08 }, /* -103.5db */
+- { 0X00, 0X00, 0X1D, 0XB1 }, /* -103.0db */
+- { 0X00, 0X00, 0X1F, 0X73 }, /* -102.5db */
+- { 0X00, 0X00, 0X21, 0X51 }, /* -102.0db */
+- { 0X00, 0X00, 0X23, 0X4A }, /* -101.5db */
+- { 0X00, 0X00, 0X25, 0X61 }, /* -101.0db */
+- { 0X00, 0X00, 0X27, 0X98 }, /* -100.5db */
+- { 0X00, 0X00, 0X29, 0XF1 }, /* -100.0db */
+- { 0X00, 0X00, 0X2C, 0X6D }, /* -99.5db */
+- { 0X00, 0X00, 0X2F, 0X0F }, /* -99.0db */
+- { 0X00, 0X00, 0X31, 0XD9 }, /* -98.5db */
+- { 0X00, 0X00, 0X34, 0XCD }, /* -98.0db */
+- { 0X00, 0X00, 0X37, 0XEE }, /* -97.5db */
+- { 0X00, 0X00, 0X3B, 0X3F }, /* -97.0db */
+- { 0X00, 0X00, 0X3E, 0XC1 }, /* -96.5db */
+- { 0X00, 0X00, 0X42, 0X79 }, /* -96.0db */
+- { 0X00, 0X00, 0X46, 0X6A }, /* -95.5db */
+- { 0X00, 0X00, 0X4A, 0X96 }, /* -95.0db */
+- { 0X00, 0X00, 0X4F, 0X01 }, /* -94.5db */
+- { 0X00, 0X00, 0X53, 0XAF }, /* -94.0db */
+- { 0X00, 0X00, 0X58, 0XA5 }, /* -93.5db */
+- { 0X00, 0X00, 0X5D, 0XE6 }, /* -93.0db */
+- { 0X00, 0X00, 0X63, 0X76 }, /* -92.5db */
+- { 0X00, 0X00, 0X69, 0X5B }, /* -92.0db */
+- { 0X00, 0X00, 0X6F, 0X99 }, /* -91.5db */
+- { 0X00, 0X00, 0X76, 0X36 }, /* -91.0db */
+- { 0X00, 0X00, 0X7D, 0X37 }, /* -90.5db */
+- { 0X00, 0X00, 0X84, 0XA2 }, /* -90.0db */
+- { 0X00, 0X00, 0X8C, 0X7E }, /* -89.5db */
+- { 0X00, 0X00, 0X94, 0XD1 }, /* -89.0db */
+- { 0X00, 0X00, 0X9D, 0XA3 }, /* -88.5db */
+- { 0X00, 0X00, 0XA6, 0XFA }, /* -88.0db */
+- { 0X00, 0X00, 0XB0, 0XDF }, /* -87.5db */
+- { 0X00, 0X00, 0XBB, 0X5A }, /* -87.0db */
+- { 0X00, 0X00, 0XC6, 0X74 }, /* -86.5db */
+- { 0X00, 0X00, 0XD2, 0X36 }, /* -86.0db */
+- { 0X00, 0X00, 0XDE, 0XAB }, /* -85.5db */
+- { 0X00, 0X00, 0XEB, 0XDC }, /* -85.0db */
+- { 0X00, 0X00, 0XF9, 0XD6 }, /* -84.5db */
+- { 0X00, 0X01, 0X08, 0XA4 }, /* -84.0db */
+- { 0X00, 0X01, 0X18, 0X52 }, /* -83.5db */
+- { 0X00, 0X01, 0X28, 0XEF }, /* -83.0db */
+- { 0X00, 0X01, 0X3A, 0X87 }, /* -82.5db */
+- { 0X00, 0X01, 0X4D, 0X2A }, /* -82.0db */
+- { 0X00, 0X01, 0X60, 0XE8 }, /* -81.5db */
+- { 0X00, 0X01, 0X75, 0XD1 }, /* -81.0db */
+- { 0X00, 0X01, 0X8B, 0XF7 }, /* -80.5db */
+- { 0X00, 0X01, 0XA3, 0X6E }, /* -80.0db */
+- { 0X00, 0X01, 0XBC, 0X48 }, /* -79.5db */
+- { 0X00, 0X01, 0XD6, 0X9B }, /* -79.0db */
+- { 0X00, 0X01, 0XF2, 0X7E }, /* -78.5db */
+- { 0X00, 0X02, 0X10, 0X08 }, /* -78.0db */
+- { 0X00, 0X02, 0X2F, 0X51 }, /* -77.5db */
+- { 0X00, 0X02, 0X50, 0X76 }, /* -77.0db */
+- { 0X00, 0X02, 0X73, 0X91 }, /* -76.5db */
+- { 0X00, 0X02, 0X98, 0XC0 }, /* -76.0db */
+- { 0X00, 0X02, 0XC0, 0X24 }, /* -75.5db */
+- { 0X00, 0X02, 0XE9, 0XDD }, /* -75.0db */
+- { 0X00, 0X03, 0X16, 0X0F }, /* -74.5db */
+- { 0X00, 0X03, 0X44, 0XDF }, /* -74.0db */
+- { 0X00, 0X03, 0X76, 0X76 }, /* -73.5db */
+- { 0X00, 0X03, 0XAA, 0XFC }, /* -73.0db */
+- { 0X00, 0X03, 0XE2, 0XA0 }, /* -72.5db */
+- { 0X00, 0X04, 0X1D, 0X8F }, /* -72.0db */
+- { 0X00, 0X04, 0X5B, 0XFD }, /* -71.5db */
+- { 0X00, 0X04, 0X9E, 0X1D }, /* -71.0db */
+- { 0X00, 0X04, 0XE4, 0X29 }, /* -70.5db */
+- { 0X00, 0X05, 0X2E, 0X5A }, /* -70.0db */
+- { 0X00, 0X05, 0X7C, 0XF2 }, /* -69.5db */
+- { 0X00, 0X05, 0XD0, 0X31 }, /* -69.0db */
+- { 0X00, 0X06, 0X28, 0X60 }, /* -68.5db */
+- { 0X00, 0X06, 0X85, 0XC8 }, /* -68.0db */
+- { 0X00, 0X06, 0XE8, 0XB9 }, /* -67.5db */
+- { 0X00, 0X07, 0X51, 0X86 }, /* -67.0db */
+- { 0X00, 0X07, 0XC0, 0X8A }, /* -66.5db */
+- { 0X00, 0X08, 0X36, 0X21 }, /* -66.0db */
+- { 0X00, 0X08, 0XB2, 0XB0 }, /* -65.5db */
+- { 0X00, 0X09, 0X36, 0XA1 }, /* -65.0db */
+- { 0X00, 0X09, 0XC2, 0X63 }, /* -64.5db */
+- { 0X00, 0X0A, 0X56, 0X6D }, /* -64.0db */
+- { 0X00, 0X0A, 0XF3, 0X3C }, /* -63.5db */
+- { 0X00, 0X0B, 0X99, 0X56 }, /* -63.0db */
+- { 0X00, 0X0C, 0X49, 0X48 }, /* -62.5db */
+- { 0X00, 0X0D, 0X03, 0XA7 }, /* -62.0db */
+- { 0X00, 0X0D, 0XC9, 0X11 }, /* -61.5db */
+- { 0X00, 0X0E, 0X9A, 0X2D }, /* -61.0db */
+- { 0X00, 0X0F, 0X77, 0XAD }, /* -60.5db */
+- { 0X00, 0X10, 0X62, 0X4D }, /* -60.0db */
+- { 0X00, 0X11, 0X5A, 0XD5 }, /* -59.5db */
+- { 0X00, 0X12, 0X62, 0X16 }, /* -59.0db */
+- { 0X00, 0X13, 0X78, 0XF0 }, /* -58.5db */
+- { 0X00, 0X14, 0XA0, 0X50 }, /* -58.0db */
+- { 0X00, 0X15, 0XD9, 0X31 }, /* -57.5db */
+- { 0X00, 0X17, 0X24, 0X9C }, /* -57.0db */
+- { 0X00, 0X18, 0X83, 0XAA }, /* -56.5db */
+- { 0X00, 0X19, 0XF7, 0X86 }, /* -56.0db */
+- { 0X00, 0X1B, 0X81, 0X6A }, /* -55.5db */
+- { 0X00, 0X1D, 0X22, 0XA4 }, /* -55.0db */
+- { 0X00, 0X1E, 0XDC, 0X98 }, /* -54.5db */
+- { 0X00, 0X20, 0XB0, 0XBC }, /* -54.0db */
+- { 0X00, 0X22, 0XA0, 0X9D }, /* -53.5db */
+- { 0X00, 0X24, 0XAD, 0XE0 }, /* -53.0db */
+- { 0X00, 0X26, 0XDA, 0X43 }, /* -52.5db */
+- { 0X00, 0X29, 0X27, 0X9D }, /* -52.0db */
+- { 0X00, 0X2B, 0X97, 0XE3 }, /* -51.5db */
+- { 0X00, 0X2E, 0X2D, 0X27 }, /* -51.0db */
+- { 0X00, 0X30, 0XE9, 0X9A }, /* -50.5db */
+- { 0X00, 0X33, 0XCF, 0X8D }, /* -50.0db */
+- { 0X00, 0X36, 0XE1, 0X78 }, /* -49.5db */
+- { 0X00, 0X3A, 0X21, 0XF3 }, /* -49.0db */
+- { 0X00, 0X3D, 0X93, 0XC3 }, /* -48.5db */
+- { 0X00, 0X41, 0X39, 0XD3 }, /* -48.0db */
+- { 0X00, 0X45, 0X17, 0X3B }, /* -47.5db */
+- { 0X00, 0X49, 0X2F, 0X44 }, /* -47.0db */
+- { 0X00, 0X4D, 0X85, 0X66 }, /* -46.5db */
+- { 0X00, 0X52, 0X1D, 0X50 }, /* -46.0db */
+- { 0X00, 0X56, 0XFA, 0XE8 }, /* -45.5db */
+- { 0X00, 0X5C, 0X22, 0X4E }, /* -45.0db */
+- { 0X00, 0X61, 0X97, 0XE1 }, /* -44.5db */
+- { 0X00, 0X67, 0X60, 0X44 }, /* -44.0db */
+- { 0X00, 0X6D, 0X80, 0X60 }, /* -43.5db */
+- { 0X00, 0X73, 0XFD, 0X65 }, /* -43.0db */
+- { 0X00, 0X7A, 0XDC, 0XD7 }, /* -42.5db */
+- { 0X00, 0X82, 0X24, 0X8A }, /* -42.0db */
+- { 0X00, 0X89, 0XDA, 0XAB }, /* -41.5db */
+- { 0X00, 0X92, 0X05, 0XC6 }, /* -41.0db */
+- { 0X00, 0X9A, 0XAC, 0XC8 }, /* -40.5db */
+- { 0X00, 0XA3, 0XD7, 0X0A }, /* -40.0db */
+- { 0X00, 0XAD, 0X8C, 0X52 }, /* -39.5db */
+- { 0X00, 0XB7, 0XD4, 0XDD }, /* -39.0db */
+- { 0X00, 0XC2, 0XB9, 0X65 }, /* -38.5db */
+- { 0X00, 0XCE, 0X43, 0X28 }, /* -38.0db */
+- { 0X00, 0XDA, 0X7B, 0XF1 }, /* -37.5db */
+- { 0X00, 0XE7, 0X6E, 0X1E }, /* -37.0db */
+- { 0X00, 0XF5, 0X24, 0XAC }, /* -36.5db */
+- { 0X01, 0X03, 0XAB, 0X3D }, /* -36.0db */
+- { 0X01, 0X13, 0X0E, 0X24 }, /* -35.5db */
+- { 0X01, 0X23, 0X5A, 0X71 }, /* -35.0db */
+- { 0X01, 0X34, 0X9D, 0XF8 }, /* -34.5db */
+- { 0X01, 0X46, 0XE7, 0X5D }, /* -34.0db */
+- { 0X01, 0X5A, 0X46, 0X27 }, /* -33.5db */
+- { 0X01, 0X6E, 0XCA, 0XC5 }, /* -33.0db */
+- { 0X01, 0X84, 0X86, 0X9F }, /* -32.5db */
+- { 0X01, 0X9B, 0X8C, 0X27 }, /* -32.0db */
+- { 0X01, 0XB3, 0XEE, 0XE5 }, /* -31.5db */
+- { 0X01, 0XCD, 0XC3, 0X8C }, /* -31.0db */
+- { 0X01, 0XE9, 0X20, 0X05 }, /* -30.5db */
+- { 0X02, 0X06, 0X1B, 0X89 }, /* -30.0db */
+- { 0X02, 0X24, 0XCE, 0XB0 }, /* -29.5db */
+- { 0X02, 0X45, 0X53, 0X85 }, /* -29.0db */
+- { 0X02, 0X67, 0XC5, 0XA2 }, /* -28.5db */
+- { 0X02, 0X8C, 0X42, 0X3F }, /* -28.0db */
+- { 0X02, 0XB2, 0XE8, 0X55 }, /* -27.5db */
+- { 0X02, 0XDB, 0XD8, 0XAD }, /* -27.0db */
+- { 0X03, 0X07, 0X36, 0X05 }, /* -26.5db */
+- { 0X03, 0X35, 0X25, 0X29 }, /* -26.0db */
+- { 0X03, 0X65, 0XCD, 0X13 }, /* -25.5db */
+- { 0X03, 0X99, 0X57, 0X0C }, /* -25.0db */
+- { 0X03, 0XCF, 0XEE, 0XCF }, /* -24.5db */
+- { 0X04, 0X09, 0XC2, 0XB0 }, /* -24.0db */
+- { 0X04, 0X47, 0X03, 0XC1 }, /* -23.5db */
+- { 0X04, 0X87, 0XE5, 0XFB }, /* -23.0db */
+- { 0X04, 0XCC, 0XA0, 0X6D }, /* -22.5db */
+- { 0X05, 0X15, 0X6D, 0X68 }, /* -22.0db */
+- { 0X05, 0X62, 0X8A, 0XB3 }, /* -21.5db */
+- { 0X05, 0XB4, 0X39, 0XBC }, /* -21.0db */
+- { 0X06, 0X0A, 0XBF, 0XD4 }, /* -20.5db */
+- { 0X06, 0X66, 0X66, 0X66 }, /* -20.0db */
+- { 0X06, 0XC7, 0X7B, 0X36 }, /* -19.5db */
+- { 0X07, 0X2E, 0X50, 0XA6 }, /* -19.0db */
+- { 0X07, 0X9B, 0X3D, 0XF6 }, /* -18.5db */
+- { 0X08, 0X0E, 0X9F, 0X96 }, /* -18.0db */
+- { 0X08, 0X88, 0XD7, 0X6D }, /* -17.5db */
+- { 0X09, 0X0A, 0X4D, 0X2F }, /* -17.0db */
+- { 0X09, 0X93, 0X6E, 0XB8 }, /* -16.5db */
+- { 0X0A, 0X24, 0XB0, 0X62 }, /* -16.0db */
+- { 0X0A, 0XBE, 0X8D, 0X70 }, /* -15.5db */
+- { 0X0B, 0X61, 0X88, 0X71 }, /* -15.0db */
+- { 0X0C, 0X0E, 0X2B, 0XB0 }, /* -14.5db */
+- { 0X0C, 0XC5, 0X09, 0XAB }, /* -14.0db */
+- { 0X0D, 0X86, 0XBD, 0X8D }, /* -13.5db */
+- { 0X0E, 0X53, 0XEB, 0XB3 }, /* -13.0db */
+- { 0X0F, 0X2D, 0X42, 0X38 }, /* -12.5db */
+- { 0X10, 0X13, 0X79, 0X87 }, /* -12.0db */
+- { 0X11, 0X07, 0X54, 0XF9 }, /* -11.5db */
+- { 0X12, 0X09, 0XA3, 0X7A }, /* -11.0db */
+- { 0X13, 0X1B, 0X40, 0X39 }, /* -10.5db */
+- { 0X14, 0X3D, 0X13, 0X62 }, /* -10.0db */
+- { 0X15, 0X70, 0X12, 0XE1 }, /* -9.5db */
+- { 0X16, 0XB5, 0X43, 0X37 }, /* -9.0db */
+- { 0X18, 0X0D, 0XB8, 0X54 }, /* -8.5db */
+- { 0X19, 0X7A, 0X96, 0X7F }, /* -8.0db */
+- { 0X1A, 0XFD, 0X13, 0X54 }, /* -7.5db */
+- { 0X1C, 0X96, 0X76, 0XC6 }, /* -7.0db */
+- { 0X1E, 0X48, 0X1C, 0X37 }, /* -6.5db */
+- { 0X20, 0X13, 0X73, 0X9E }, /* -6.0db */
+- { 0X21, 0XFA, 0X02, 0XBF }, /* -5.5db */
+- { 0X23, 0XFD, 0X66, 0X78 }, /* -5.0db */
+- { 0X26, 0X1F, 0X54, 0X1C }, /* -4.5db */
+- { 0X28, 0X61, 0X9A, 0XE9 }, /* -4.0db */
+- { 0X2A, 0XC6, 0X25, 0X91 }, /* -3.5db */
+- { 0X2D, 0X4E, 0XFB, 0XD5 }, /* -3.0db */
+- { 0X2F, 0XFE, 0X44, 0X48 }, /* -2.5db */
+- { 0X32, 0XD6, 0X46, 0X17 }, /* -2.0db */
+- { 0X35, 0XD9, 0X6B, 0X02 }, /* -1.5db */
+- { 0X39, 0X0A, 0X41, 0X5F }, /* -1.0db */
+- { 0X3C, 0X6B, 0X7E, 0X4F }, /* -0.5db */
+- { 0X40, 0X00, 0X00, 0X00 }, /* 0.0db */
+- { 0X43, 0XCA, 0XD0, 0X22 }, /* 0.5db */
+- { 0X47, 0XCF, 0X26, 0X7D }, /* 1.0db */
+- { 0X4C, 0X10, 0X6B, 0XA5 }, /* 1.5db */
+- { 0X50, 0X92, 0X3B, 0XE3 }, /* 2.0db */
+- { 0X55, 0X58, 0X6A, 0X46 }, /* 2.5db */
+- { 0X5A, 0X67, 0X03, 0XDF }, /* 3.0db */
+- { 0X5F, 0XC2, 0X53, 0X32 }, /* 3.5db */
+- { 0X65, 0X6E, 0XE3, 0XDB }, /* 4.0db */
+- { 0X6B, 0X71, 0X86, 0X68 }, /* 4.5db */
+- { 0X71, 0XCF, 0X54, 0X71 }, /* 5.0db */
+- { 0X78, 0X8D, 0XB4, 0XE9 }, /* 5.5db */
+- { 0XFF, 0XFF, 0XFF, 0XFF }, /* 6.0db */
+-};
+ #endif
+diff --git a/include/sound/tas2781.h b/include/sound/tas2781.h
+index 18161d02a96f7c..dbda552398b5b9 100644
+--- a/include/sound/tas2781.h
++++ b/include/sound/tas2781.h
+@@ -81,11 +81,6 @@ struct tasdevice {
+ bool is_loaderr;
+ };
+
+-struct tasdevice_irqinfo {
+- int irq_gpio;
+- int irq;
+-};
+-
+ struct calidata {
+ unsigned char *data;
+ unsigned long total_sz;
+@@ -93,7 +88,6 @@ struct calidata {
+
+ struct tasdevice_priv {
+ struct tasdevice tasdevice[TASDEVICE_MAX_CHANNELS];
+- struct tasdevice_irqinfo irq_info;
+ struct tasdevice_rca rcabin;
+ struct calidata cali_data;
+ struct tasdevice_fw *fmw;
+@@ -115,6 +109,7 @@ struct tasdevice_priv {
+ unsigned int chip_id;
+ unsigned int sysclk;
+
++ int irq;
+ int cur_prog;
+ int cur_conf;
+ int fw_state;
+diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
+index ed794b5fefbe3a..2851c823095bc1 100644
+--- a/include/trace/events/f2fs.h
++++ b/include/trace/events/f2fs.h
+@@ -139,7 +139,8 @@ TRACE_DEFINE_ENUM(EX_BLOCK_AGE);
+ { CP_NODE_NEED_CP, "node needs cp" }, \
+ { CP_FASTBOOT_MODE, "fastboot mode" }, \
+ { CP_SPEC_LOG_NUM, "log type is 2" }, \
+- { CP_RECOVER_DIR, "dir needs recovery" })
++ { CP_RECOVER_DIR, "dir needs recovery" }, \
++ { CP_XATTR_DIR, "dir's xattr updated" })
+
+ #define show_shutdown_mode(type) \
+ __print_symbolic(type, \
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index f1e7c670add85f..a38f36b6806041 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -13,6 +13,7 @@
+ #include <linux/slab.h>
+ #include <linux/rculist_nulls.h>
+ #include <linux/cpu.h>
++#include <linux/cpuset.h>
+ #include <linux/task_work.h>
+ #include <linux/audit.h>
+ #include <linux/mmu_context.h>
+@@ -1167,7 +1168,7 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+
+ if (!alloc_cpumask_var(&wq->cpu_mask, GFP_KERNEL))
+ goto err;
+- cpumask_copy(wq->cpu_mask, cpu_possible_mask);
++ cpuset_cpus_allowed(data->task, wq->cpu_mask);
+ wq->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
+ wq->acct[IO_WQ_ACCT_UNBOUND].max_workers =
+ task_rlimit(current, RLIMIT_NPROC);
+@@ -1322,17 +1323,29 @@ static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
+
+ int io_wq_cpu_affinity(struct io_uring_task *tctx, cpumask_var_t mask)
+ {
++ cpumask_var_t allowed_mask;
++ int ret = 0;
++
+ if (!tctx || !tctx->io_wq)
+ return -EINVAL;
+
++ if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL))
++ return -ENOMEM;
++
+ rcu_read_lock();
+- if (mask)
+- cpumask_copy(tctx->io_wq->cpu_mask, mask);
+- else
+- cpumask_copy(tctx->io_wq->cpu_mask, cpu_possible_mask);
++ cpuset_cpus_allowed(tctx->io_wq->task, allowed_mask);
++ if (mask) {
++ if (cpumask_subset(mask, allowed_mask))
++ cpumask_copy(tctx->io_wq->cpu_mask, mask);
++ else
++ ret = -EINVAL;
++ } else {
++ cpumask_copy(tctx->io_wq->cpu_mask, allowed_mask);
++ }
+ rcu_read_unlock();
+
+- return 0;
++ free_cpumask_var(allowed_mask);
++ return ret;
+ }
+
+ /*
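As an illustration, not part of the patch: io-wq worker affinity is now derived from and validated against the task's cpuset, so a requested mask that falls outside cpuset.cpus is rejected with -EINVAL rather than silently applied. A userspace sketch, assuming liburing and that CPU 0 lies inside the caller's cpuset:

#define _GNU_SOURCE
#include <liburing.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	cpu_set_t mask;
	int ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);	/* must be a subset of our cpuset now */

	/* returns -EINVAL if the mask strays outside the cpuset */
	ret = io_uring_register_iowq_aff(&ring, sizeof(mask), &mask);
	printf("register_iowq_aff: %d\n", ret);

	io_uring_queue_exit(&ring);
	return 0;
}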
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 3942db160f18ef..25112cf78e2b36 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2360,7 +2360,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ return 1;
+ if (unlikely(!llist_empty(&ctx->work_llist)))
+ return 1;
+- if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL)))
++ if (unlikely(task_work_pending(current)))
+ return 1;
+ if (unlikely(task_sigpending(current)))
+ return -EINTR;
+@@ -2463,9 +2463,9 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ * If we got woken because of task_work being processed, run it
+ * now rather than let the caller do another wait loop.
+ */
+- io_run_task_work();
+ if (!llist_empty(&ctx->work_llist))
+ io_run_local_work(ctx, nr_wait);
++ io_run_task_work();
+
+ /*
+ * Non-local task_work will be run on exit to userspace, but
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index c004d21e2f12e3..d85e2d41a992b8 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -855,6 +855,14 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
+
+ ret = io_iter_do_read(rw, &io->iter);
+
++ /*
++ * Some file systems like to return -EOPNOTSUPP for an IOCB_NOWAIT
++ * issue, even though they should be returning -EAGAIN. To be safe,
++ * retry from blocking context for either.
++ */
++ if (ret == -EOPNOTSUPP && force_nonblock)
++ ret = -EAGAIN;
++
+ if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
+ req->flags &= ~REQ_F_REISSUE;
+ /* If we can poll, just do that. */
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 3b50dc9586d14f..fd6c29f90615c0 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -10,6 +10,7 @@
+ #include <linux/slab.h>
+ #include <linux/audit.h>
+ #include <linux/security.h>
++#include <linux/cpuset.h>
+ #include <linux/io_uring.h>
+
+ #include <uapi/linux/io_uring.h>
+@@ -460,11 +461,22 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ return 0;
+
+ if (p->flags & IORING_SETUP_SQ_AFF) {
++ cpumask_var_t allowed_mask;
+ int cpu = p->sq_thread_cpu;
+
+ ret = -EINVAL;
+ if (cpu >= nr_cpu_ids || !cpu_online(cpu))
+ goto err_sqpoll;
++ ret = -ENOMEM;
++ if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL))
++ goto err_sqpoll;
++ ret = -EINVAL;
++ cpuset_cpus_allowed(current, allowed_mask);
++ if (!cpumask_test_cpu(cpu, allowed_mask)) {
++ free_cpumask_var(allowed_mask);
++ goto err_sqpoll;
++ }
++ free_cpumask_var(allowed_mask);
+ sqd->sq_cpu = cpu;
+ } else {
+ sqd->sq_cpu = -1;
+diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
+index 08a338e1f23114..701092736826ad 100644
+--- a/kernel/bpf/bpf_lsm.c
++++ b/kernel/bpf/bpf_lsm.c
+@@ -11,7 +11,6 @@
+ #include <linux/lsm_hooks.h>
+ #include <linux/bpf_lsm.h>
+ #include <linux/kallsyms.h>
+-#include <linux/bpf_verifier.h>
+ #include <net/bpf_sk_storage.h>
+ #include <linux/bpf_local_storage.h>
+ #include <linux/btf_ids.h>
+@@ -390,3 +389,36 @@ const struct bpf_verifier_ops lsm_verifier_ops = {
+ .get_func_proto = bpf_lsm_func_proto,
+ .is_valid_access = btf_ctx_access,
+ };
++
++/* hooks return 0 or 1 */
++BTF_SET_START(bool_lsm_hooks)
++#ifdef CONFIG_SECURITY_NETWORK_XFRM
++BTF_ID(func, bpf_lsm_xfrm_state_pol_flow_match)
++#endif
++#ifdef CONFIG_AUDIT
++BTF_ID(func, bpf_lsm_audit_rule_known)
++#endif
++BTF_ID(func, bpf_lsm_inode_xattr_skipcap)
++BTF_SET_END(bool_lsm_hooks)
++
++int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
++ struct bpf_retval_range *retval_range)
++{
++ /* no return value range for void hooks */
++ if (!prog->aux->attach_func_proto->type)
++ return -EINVAL;
++
++ if (btf_id_set_contains(&bool_lsm_hooks, prog->aux->attach_btf_id)) {
++ retval_range->minval = 0;
++ retval_range->maxval = 1;
++ } else {
++ /* All other available LSM hooks, except task_prctl, return 0
++ * on success and a negative error code on failure.
++ * To keep things simple, we only allow bpf progs to return 0
++ * or negative errno for task_prctl too.
++ */
++ retval_range->minval = -MAX_ERRNO;
++ retval_range->maxval = 0;
++ }
++ return 0;
++}
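Illustration only, outside the patch: once this range is wired into the verifier, an ordinary BPF LSM hook must return a value in [-MAX_ERRNO, 0] (or {0, 1} for the boolean hooks listed above), and reads of the hook's return value slot are bounded the same way. A minimal libbpf-style program that stays in range; the hook choice here is arbitrary:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm/file_open")
int BPF_PROG(allow_all, struct file *file)
{
	/* 0 and negative errnos verify; a positive value such as 2
	 * would now be rejected at load time for this hook */
	return 0;
}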
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index a4e4f8d43ecf04..7783b16b87cfef 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6418,8 +6418,11 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+
+ if (arg == nr_args) {
+ switch (prog->expected_attach_type) {
+- case BPF_LSM_CGROUP:
+ case BPF_LSM_MAC:
++ /* mark we are accessing the return value */
++ info->is_retval = true;
++ fallthrough;
++ case BPF_LSM_CGROUP:
+ case BPF_TRACE_FEXIT:
+ /* When LSM programs are attached to void LSM hooks
+ * they use FEXIT trampolines and when attached to
+@@ -8888,6 +8891,7 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
+ struct bpf_core_cand_list cands = {};
+ struct bpf_core_relo_res targ_res;
+ struct bpf_core_spec *specs;
++ const struct btf_type *type;
+ int err;
+
+ /* ~4k of temp memory necessary to convert LLVM spec like "0:1:0:5"
+@@ -8897,6 +8901,13 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
+ if (!specs)
+ return -ENOMEM;
+
++ type = btf_type_by_id(ctx->btf, relo->type_id);
++ if (!type) {
++ bpf_log(ctx->log, "relo #%u: bad type id %u\n",
++ relo_idx, relo->type_id);
++ return -EINVAL;
++ }
++
+ if (need_cands) {
+ struct bpf_cand_cache *cc;
+ int i;
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index b5f0adae82933b..c9e235807caca0 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -517,11 +517,12 @@ static int __bpf_strtoll(const char *buf, size_t buf_len, u64 flags,
+ }
+
+ BPF_CALL_4(bpf_strtol, const char *, buf, size_t, buf_len, u64, flags,
+- long *, res)
++ s64 *, res)
+ {
+ long long _res;
+ int err;
+
++ *res = 0;
+ err = __bpf_strtoll(buf, buf_len, flags, &_res);
+ if (err < 0)
+ return err;
+@@ -538,16 +539,18 @@ const struct bpf_func_proto bpf_strtol_proto = {
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
+ .arg2_type = ARG_CONST_SIZE,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_LONG,
++ .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg4_size = sizeof(s64),
+ };
+
+ BPF_CALL_4(bpf_strtoul, const char *, buf, size_t, buf_len, u64, flags,
+- unsigned long *, res)
++ u64 *, res)
+ {
+ unsigned long long _res;
+ bool is_negative;
+ int err;
+
++ *res = 0;
+ err = __bpf_strtoull(buf, buf_len, flags, &_res, &is_negative);
+ if (err < 0)
+ return err;
+@@ -566,7 +569,8 @@ const struct bpf_func_proto bpf_strtoul_proto = {
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
+ .arg2_type = ARG_CONST_SIZE,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_LONG,
++ .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg4_size = sizeof(u64),
+ };
+
+ BPF_CALL_3(bpf_strncmp, const char *, s1, u32, s1_sz, const char *, s2)
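One more aside, not part of the patch: bpf_strtol()/bpf_strtoul() results now go through a fixed-size, aligned, writable slot instead of the removed ARG_PTR_TO_LONG, and the helpers zero the slot up front. A libbpf-style sketch under those assumptions; the tracepoint is chosen only so the program has an attach point:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

const char msg[] = "1234";

SEC("tp/syscalls/sys_enter_getpid")
int parse(void *ctx)
{
	long res = 0;	/* 8-byte, 8-byte-aligned stack slot */
	long n = bpf_strtol(msg, sizeof(msg) - 1, 10, &res);

	if (n > 0)	/* n is the number of characters consumed */
		bpf_printk("parsed %ld", res);
	return 0;
}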
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index bf6c5f685ea22c..d9cae8e259699c 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -5932,6 +5932,7 @@ static const struct bpf_func_proto bpf_sys_close_proto = {
+
+ BPF_CALL_4(bpf_kallsyms_lookup_name, const char *, name, int, name_sz, int, flags, u64 *, res)
+ {
++ *res = 0;
+ if (flags)
+ return -EINVAL;
+
+@@ -5952,7 +5953,8 @@ static const struct bpf_func_proto bpf_kallsyms_lookup_name_proto = {
+ .arg1_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_CONST_SIZE_OR_ZERO,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_LONG,
++ .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg4_size = sizeof(u64),
+ };
+
+ static const struct bpf_func_proto *
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index d8520095ca030e..8c07efa3905b9a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2334,6 +2334,25 @@ static void mark_reg_unknown(struct bpf_verifier_env *env,
+ __mark_reg_unknown(env, regs + regno);
+ }
+
++static int __mark_reg_s32_range(struct bpf_verifier_env *env,
++ struct bpf_reg_state *regs,
++ u32 regno,
++ s32 s32_min,
++ s32 s32_max)
++{
++ struct bpf_reg_state *reg = regs + regno;
++
++ reg->s32_min_value = max_t(s32, reg->s32_min_value, s32_min);
++ reg->s32_max_value = min_t(s32, reg->s32_max_value, s32_max);
++
++ reg->smin_value = max_t(s64, reg->smin_value, s32_min);
++ reg->smax_value = min_t(s64, reg->smax_value, s32_max);
++
++ reg_bounds_sync(reg);
++
++ return reg_bounds_sanity_check(env, reg, "s32_range");
++}
++
+ static void __mark_reg_not_init(const struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg)
+ {
+@@ -5587,11 +5606,13 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+ /* check access to 'struct bpf_context' fields. Supports fixed offsets only */
+ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, int size,
+ enum bpf_access_type t, enum bpf_reg_type *reg_type,
+- struct btf **btf, u32 *btf_id)
++ struct btf **btf, u32 *btf_id, bool *is_retval, bool is_ldsx)
+ {
+ struct bpf_insn_access_aux info = {
+ .reg_type = *reg_type,
+ .log = &env->log,
++ .is_retval = false,
++ .is_ldsx = is_ldsx,
+ };
+
+ if (env->ops->is_valid_access &&
+@@ -5604,6 +5625,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off,
+ * type of narrower access.
+ */
+ *reg_type = info.reg_type;
++ *is_retval = info.is_retval;
+
+ if (base_type(*reg_type) == PTR_TO_BTF_ID) {
+ *btf = info.btf;
+@@ -6772,6 +6794,17 @@ static int check_stack_access_within_bounds(
+ return grow_stack_state(env, state, -min_off /* size */);
+ }
+
++static bool get_func_retval_range(struct bpf_prog *prog,
++ struct bpf_retval_range *range)
++{
++ if (prog->type == BPF_PROG_TYPE_LSM &&
++ prog->expected_attach_type == BPF_LSM_MAC &&
++ !bpf_lsm_get_retval_range(prog, range)) {
++ return true;
++ }
++ return false;
++}
++
+ /* check whether memory at (regno + off) is accessible for t = (read | write)
+ * if t==write, value_regno is a register which value is stored into memory
+ * if t==read, value_regno is a register which will receive the value from memory
+@@ -6876,6 +6909,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
+ mark_reg_unknown(env, regs, value_regno);
+ } else if (reg->type == PTR_TO_CTX) {
++ bool is_retval = false;
++ struct bpf_retval_range range;
+ enum bpf_reg_type reg_type = SCALAR_VALUE;
+ struct btf *btf = NULL;
+ u32 btf_id = 0;
+@@ -6891,7 +6926,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ return err;
+
+ err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf,
+- &btf_id);
++ &btf_id, &is_retval, is_ldsx);
+ if (err)
+ verbose_linfo(env, insn_idx, "; ");
+ if (!err && t == BPF_READ && value_regno >= 0) {
+@@ -6900,7 +6935,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ * case, we know the offset is zero.
+ */
+ if (reg_type == SCALAR_VALUE) {
+- mark_reg_unknown(env, regs, value_regno);
++ if (is_retval && get_func_retval_range(env->prog, &range)) {
++ err = __mark_reg_s32_range(env, regs, value_regno,
++ range.minval, range.maxval);
++ if (err)
++ return err;
++ } else {
++ mark_reg_unknown(env, regs, value_regno);
++ }
+ } else {
+ mark_reg_known_zero(env, regs,
+ value_regno);
+@@ -8128,6 +8170,12 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
+ type == ARG_CONST_SIZE_OR_ZERO;
+ }
+
++static bool arg_type_is_raw_mem(enum bpf_arg_type type)
++{
++ return base_type(type) == ARG_PTR_TO_MEM &&
++ type & MEM_UNINIT;
++}
++
+ static bool arg_type_is_release(enum bpf_arg_type type)
+ {
+ return type & OBJ_RELEASE;
+@@ -8138,16 +8186,6 @@ static bool arg_type_is_dynptr(enum bpf_arg_type type)
+ return base_type(type) == ARG_PTR_TO_DYNPTR;
+ }
+
+-static int int_ptr_type_to_size(enum bpf_arg_type type)
+-{
+- if (type == ARG_PTR_TO_INT)
+- return sizeof(u32);
+- else if (type == ARG_PTR_TO_LONG)
+- return sizeof(u64);
+-
+- return -EINVAL;
+-}
+-
+ static int resolve_map_arg_type(struct bpf_verifier_env *env,
+ const struct bpf_call_arg_meta *meta,
+ enum bpf_arg_type *arg_type)
+@@ -8220,16 +8258,6 @@ static const struct bpf_reg_types mem_types = {
+ },
+ };
+
+-static const struct bpf_reg_types int_ptr_types = {
+- .types = {
+- PTR_TO_STACK,
+- PTR_TO_PACKET,
+- PTR_TO_PACKET_META,
+- PTR_TO_MAP_KEY,
+- PTR_TO_MAP_VALUE,
+- },
+-};
+-
+ static const struct bpf_reg_types spin_lock_types = {
+ .types = {
+ PTR_TO_MAP_VALUE,
+@@ -8285,8 +8313,6 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
+ [ARG_PTR_TO_SPIN_LOCK] = &spin_lock_types,
+ [ARG_PTR_TO_MEM] = &mem_types,
+ [ARG_PTR_TO_RINGBUF_MEM] = &ringbuf_mem_types,
+- [ARG_PTR_TO_INT] = &int_ptr_types,
+- [ARG_PTR_TO_LONG] = &int_ptr_types,
+ [ARG_PTR_TO_PERCPU_BTF_ID] = &percpu_btf_ptr_types,
+ [ARG_PTR_TO_FUNC] = &func_ptr_types,
+ [ARG_PTR_TO_STACK] = &stack_ptr_types,
+@@ -8847,9 +8873,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ */
+ meta->raw_mode = arg_type & MEM_UNINIT;
+ if (arg_type & MEM_FIXED_SIZE) {
+- err = check_helper_mem_access(env, regno,
+- fn->arg_size[arg], false,
+- meta);
++ err = check_helper_mem_access(env, regno, fn->arg_size[arg], false, meta);
++ if (err)
++ return err;
++ if (arg_type & MEM_ALIGNED)
++ err = check_ptr_alignment(env, reg, 0, fn->arg_size[arg], true);
+ }
+ break;
+ case ARG_CONST_SIZE:
+@@ -8874,17 +8902,6 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ if (err)
+ return err;
+ break;
+- case ARG_PTR_TO_INT:
+- case ARG_PTR_TO_LONG:
+- {
+- int size = int_ptr_type_to_size(arg_type);
+-
+- err = check_helper_mem_access(env, regno, size, false, meta);
+- if (err)
+- return err;
+- err = check_ptr_alignment(env, reg, 0, size, true);
+- break;
+- }
+ case ARG_PTR_TO_CONST_STR:
+ {
+ err = check_reg_const_str(env, reg, regno);
+@@ -9201,15 +9218,15 @@ static bool check_raw_mode_ok(const struct bpf_func_proto *fn)
+ {
+ int count = 0;
+
+- if (fn->arg1_type == ARG_PTR_TO_UNINIT_MEM)
++ if (arg_type_is_raw_mem(fn->arg1_type))
+ count++;
+- if (fn->arg2_type == ARG_PTR_TO_UNINIT_MEM)
++ if (arg_type_is_raw_mem(fn->arg2_type))
+ count++;
+- if (fn->arg3_type == ARG_PTR_TO_UNINIT_MEM)
++ if (arg_type_is_raw_mem(fn->arg3_type))
+ count++;
+- if (fn->arg4_type == ARG_PTR_TO_UNINIT_MEM)
++ if (arg_type_is_raw_mem(fn->arg4_type))
+ count++;
+- if (fn->arg5_type == ARG_PTR_TO_UNINIT_MEM)
++ if (arg_type_is_raw_mem(fn->arg5_type))
+ count++;
+
+ /* We only support one arg being in raw mode at the moment,
+@@ -9923,9 +9940,13 @@ static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env)
+ return is_rbtree_lock_required_kfunc(kfunc_btf_id);
+ }
+
+-static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg)
++static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg,
++ bool return_32bit)
+ {
+- return range.minval <= reg->smin_value && reg->smax_value <= range.maxval;
++ if (return_32bit)
++ return range.minval <= reg->s32_min_value && reg->s32_max_value <= range.maxval;
++ else
++ return range.minval <= reg->smin_value && reg->smax_value <= range.maxval;
+ }
+
+ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+@@ -9962,8 +9983,8 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+ if (err)
+ return err;
+
+- /* enforce R0 return value range */
+- if (!retval_range_within(callee->callback_ret_range, r0)) {
++ /* enforce R0 return value range, and bpf_callback_t returns 64bit */
++ if (!retval_range_within(callee->callback_ret_range, r0, false)) {
+ verbose_invalid_scalar(env, r0, callee->callback_ret_range,
+ "At callback return", "R0");
+ return -EINVAL;
+@@ -15569,6 +15590,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ int err;
+ struct bpf_func_state *frame = env->cur_state->frame[0];
+ const bool is_subprog = frame->subprogno;
++ bool return_32bit = false;
+
+ /* LSM and struct_ops func-ptr's return type could be "void" */
+ if (!is_subprog || frame->in_exception_callback_fn) {
+@@ -15674,12 +15696,14 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+
+ case BPF_PROG_TYPE_LSM:
+ if (env->prog->expected_attach_type != BPF_LSM_CGROUP) {
+- /* Regular BPF_PROG_TYPE_LSM programs can return
+- * any value.
+- */
+- return 0;
+- }
+- if (!env->prog->aux->attach_func_proto->type) {
++ /* no range found, any return value is allowed */
++ if (!get_func_retval_range(env->prog, &range))
++ return 0;
++ /* no restricted range, any return value is allowed */
++ if (range.minval == S32_MIN && range.maxval == S32_MAX)
++ return 0;
++ return_32bit = true;
++ } else if (!env->prog->aux->attach_func_proto->type) {
+ /* Make sure programs that attach to void
+ * hooks don't try to modify return value.
+ */
+@@ -15709,7 +15733,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ if (err)
+ return err;
+
+- if (!retval_range_within(range, reg)) {
++ if (!retval_range_within(range, reg, return_32bit)) {
+ verbose_invalid_scalar(env, reg, range, exit_ctx, reg_name);
+ if (!is_subprog &&
+ prog->expected_attach_type == BPF_LSM_CGROUP &&
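
The verifier now distinguishes 64-bit from 32-bit return-value enforcement: LSM hooks whose return value is truncated to 32 bits are judged on the s32 bounds. A compact model of that check under simplified types; "range" and "regbnds" are stand-ins for the kernel's bpf_retval_range and bpf_reg_state, not the real definitions:

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for bpf_retval_range / bpf_reg_state. */
struct range   { int32_t minval, maxval; };
struct regbnds { int64_t smin, smax; int32_t s32_min, s32_max; };

/* Mirror of the retval_range_within() change: a program whose return
 * value is truncated to 32 bits must satisfy the range on its s32
 * bounds; everything else is still judged on the full 64-bit bounds.
 */
static bool retval_within(struct range r, const struct regbnds *reg,
			  bool return_32bit)
{
	if (return_32bit)
		return r.minval <= reg->s32_min && reg->s32_max <= r.maxval;
	return r.minval <= reg->smin && reg->smax <= r.maxval;
}
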
+diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
+index f5cb0ec45b9dde..34aa63d7c9c659 100644
+--- a/kernel/cgroup/pids.c
++++ b/kernel/cgroup/pids.c
+@@ -244,7 +244,6 @@ static void pids_event(struct pids_cgroup *pids_forking,
+ struct pids_cgroup *pids_over_limit)
+ {
+ struct pids_cgroup *p = pids_forking;
+- bool limit = false;
+
+ /* Only log the first time limit is hit. */
+ if (atomic64_inc_return(&p->events_local[PIDCG_FORKFAIL]) == 1) {
+@@ -252,20 +251,17 @@ static void pids_event(struct pids_cgroup *pids_forking,
+ pr_cont_cgroup_path(p->css.cgroup);
+ pr_cont("\n");
+ }
+- cgroup_file_notify(&p->events_local_file);
+ if (!cgroup_subsys_on_dfl(pids_cgrp_subsys) ||
+- cgrp_dfl_root.flags & CGRP_ROOT_PIDS_LOCAL_EVENTS)
++ cgrp_dfl_root.flags & CGRP_ROOT_PIDS_LOCAL_EVENTS) {
++ cgroup_file_notify(&p->events_local_file);
+ return;
++ }
+
+- for (; parent_pids(p); p = parent_pids(p)) {
+- if (p == pids_over_limit) {
+- limit = true;
+- atomic64_inc(&p->events_local[PIDCG_MAX]);
+- cgroup_file_notify(&p->events_local_file);
+- }
+- if (limit)
+- atomic64_inc(&p->events[PIDCG_MAX]);
++ atomic64_inc(&pids_over_limit->events_local[PIDCG_MAX]);
++ cgroup_file_notify(&pids_over_limit->events_local_file);
+
++ for (p = pids_over_limit; parent_pids(p); p = parent_pids(p)) {
++ atomic64_inc(&p->events[PIDCG_MAX]);
+ cgroup_file_notify(&p->events_file);
+ }
+ }
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index f7be976ff88af7..db4ceb0f503cca 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -845,8 +845,16 @@ int kthread_worker_fn(void *worker_ptr)
+ * event only cares about the address.
+ */
+ trace_sched_kthread_work_execute_end(work, func);
+- } else if (!freezing(current))
++ } else if (!freezing(current)) {
+ schedule();
++ } else {
++ /*
++ * Handle the case where the current task remains
++ * TASK_INTERRUPTIBLE. try_to_freeze() expects
++ * the current task to be TASK_RUNNING.
++ */
++ __set_current_state(TASK_RUNNING);
++ }
+
+ try_to_freeze();
+ cond_resched();
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 0349f957e672da..77e008239c6ab7 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -6196,25 +6196,27 @@ static struct pending_free *get_pending_free(void)
+ static void free_zapped_rcu(struct rcu_head *cb);
+
+ /*
+- * Schedule an RCU callback if no RCU callback is pending. Must be called with
+- * the graph lock held.
+- */
+-static void call_rcu_zapped(struct pending_free *pf)
++ * See if we need to queue an RCU callback; must be called with
++ * the lockdep lock held. Returns false if either there is no
++ * pending free or the callback is already scheduled. Otherwise,
++ * a call_rcu() must follow this function call.
++ */
++static bool prepare_call_rcu_zapped(struct pending_free *pf)
+ {
+ WARN_ON_ONCE(inside_selftest());
+
+ if (list_empty(&pf->zapped))
+- return;
++ return false;
+
+ if (delayed_free.scheduled)
+- return;
++ return false;
+
+ delayed_free.scheduled = true;
+
+ WARN_ON_ONCE(delayed_free.pf + delayed_free.index != pf);
+ delayed_free.index ^= 1;
+
+- call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
++ return true;
+ }
+
+ /* The caller must hold the graph lock. May be called from RCU context. */
+@@ -6240,6 +6242,7 @@ static void free_zapped_rcu(struct rcu_head *ch)
+ {
+ struct pending_free *pf;
+ unsigned long flags;
++ bool need_callback;
+
+ if (WARN_ON_ONCE(ch != &delayed_free.rcu_head))
+ return;
+@@ -6251,14 +6254,18 @@ static void free_zapped_rcu(struct rcu_head *ch)
+ pf = delayed_free.pf + (delayed_free.index ^ 1);
+ __free_zapped_classes(pf);
+ delayed_free.scheduled = false;
++ need_callback =
++ prepare_call_rcu_zapped(delayed_free.pf + delayed_free.index);
++ lockdep_unlock();
++ raw_local_irq_restore(flags);
+
+ /*
+- * If there's anything on the open list, close and start a new callback.
+- */
+- call_rcu_zapped(delayed_free.pf + delayed_free.index);
++ * If there's a pending free and its callback has not been scheduled,
++ * queue an RCU callback.
++ */
++ if (need_callback)
++ call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
+
+- lockdep_unlock();
+- raw_local_irq_restore(flags);
+ }
+
+ /*
+@@ -6298,6 +6305,7 @@ static void lockdep_free_key_range_reg(void *start, unsigned long size)
+ {
+ struct pending_free *pf;
+ unsigned long flags;
++ bool need_callback;
+
+ init_data_structures_once();
+
+@@ -6305,10 +6313,11 @@ static void lockdep_free_key_range_reg(void *start, unsigned long size)
+ lockdep_lock();
+ pf = get_pending_free();
+ __lockdep_free_key_range(pf, start, size);
+- call_rcu_zapped(pf);
++ need_callback = prepare_call_rcu_zapped(pf);
+ lockdep_unlock();
+ raw_local_irq_restore(flags);
+-
++ if (need_callback)
++ call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
+ /*
+ * Wait for any possible iterators from look_up_lock_class() to pass
+ * before continuing to free the memory they refer to.
+@@ -6402,6 +6411,7 @@ static void lockdep_reset_lock_reg(struct lockdep_map *lock)
+ struct pending_free *pf;
+ unsigned long flags;
+ int locked;
++ bool need_callback = false;
+
+ raw_local_irq_save(flags);
+ locked = graph_lock();
+@@ -6410,11 +6420,13 @@ static void lockdep_reset_lock_reg(struct lockdep_map *lock)
+
+ pf = get_pending_free();
+ __lockdep_reset_lock(pf, lock);
+- call_rcu_zapped(pf);
++ need_callback = prepare_call_rcu_zapped(pf);
+
+ graph_unlock();
+ out_irq:
+ raw_local_irq_restore(flags);
++ if (need_callback)
++ call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
+ }
+
+ /*
+@@ -6458,6 +6470,7 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ struct pending_free *pf;
+ unsigned long flags;
+ bool found = false;
++ bool need_callback = false;
+
+ might_sleep();
+
+@@ -6478,11 +6491,14 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ if (found) {
+ pf = get_pending_free();
+ __lockdep_free_key_range(pf, key, 1);
+- call_rcu_zapped(pf);
++ need_callback = prepare_call_rcu_zapped(pf);
+ }
+ lockdep_unlock();
+ raw_local_irq_restore(flags);
+
++ if (need_callback)
++ call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
++
+ /* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
+ synchronize_rcu();
+ }
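
The shape of the lockdep fix generalizes: decide inside the critical section whether a callback is needed, but queue it only after dropping the lock, so the callback machinery can never re-enter the locked region. A userspace analogue under that assumption; queue_deferred_free() is a hypothetical stand-in for call_rcu():

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t graph_lock = PTHREAD_MUTEX_INITIALIZER;
static bool have_pending;
static bool scheduled;

static void queue_deferred_free(void)
{
	/* Stand-in for call_rcu(); must never run under graph_lock. */
}

static void zap_and_maybe_queue(void)
{
	bool need_callback = false;

	pthread_mutex_lock(&graph_lock);
	if (have_pending && !scheduled) {
		scheduled = true;	/* claim the single in-flight slot */
		need_callback = true;
	}
	pthread_mutex_unlock(&graph_lock);

	/* Queue outside the critical section, as the patch now does. */
	if (need_callback)
		queue_deferred_free();
}
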
+diff --git a/kernel/module/Makefile b/kernel/module/Makefile
+index a10b2b9a6fdfc6..50ffcc413b5450 100644
+--- a/kernel/module/Makefile
++++ b/kernel/module/Makefile
+@@ -5,7 +5,7 @@
+
+ # These are called from save_stack_trace() on slub debug path,
+ # and produce insane amounts of uninteresting coverage.
+-KCOV_INSTRUMENT_module.o := n
++KCOV_INSTRUMENT_main.o := n
+
+ obj-y += main.o
+ obj-y += strict_rwx.o
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 0fa6c289546032..d899f34558afcc 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -404,7 +404,8 @@ void padata_do_serial(struct padata_priv *padata)
+ /* Sort in ascending order of sequence number. */
+ list_for_each_prev(pos, &reorder->list) {
+ cur = list_entry(pos, struct padata_priv, list);
+- if (cur->seq_nr < padata->seq_nr)
++ /* Compare by signed difference to account for integer wraparound */
++ if ((signed int)(cur->seq_nr - padata->seq_nr) < 0)
+ break;
+ }
+ list_add(&padata->list, pos);
+@@ -512,9 +513,12 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
+ * thread function. Load balance large jobs between threads by
+ * increasing the number of chunks, guarantee at least the minimum
+ * chunk size from the caller, and honor the caller's alignment.
++ * Ensure chunk_size is at least 1 to prevent divide-by-0
++ * panic in padata_mt_helper().
+ */
+ ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
+ ps.chunk_size = max(ps.chunk_size, job->min_chunk);
++ ps.chunk_size = max(ps.chunk_size, 1ul);
+ ps.chunk_size = roundup(ps.chunk_size, job->align);
+
+ /*
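
The one-line padata comparison change is the classic serial-number trick: with unsigned counters, casting the difference to a signed type yields the correct ordering even across a 32-bit wrap. A self-contained demonstration with hypothetical values:

#include <stdio.h>

/* Wraparound-safe ordering: "cur" precedes "nxt" when the signed
 * difference is negative, even across the 32-bit wrap.
 */
static int seq_before(unsigned int cur, unsigned int nxt)
{
	return (int)(cur - nxt) < 0;
}

int main(void)
{
	/* Around the wrap, a naive "<" inverts the true order. */
	printf("naive : %d\n", 0xFFFFFFFFu < 0u);		/* 0, wrong */
	printf("signed: %d\n", seq_before(0xFFFFFFFFu, 0u));	/* 1, right */
	return 0;
}
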
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 3ce30841119ade..2686ba122fa08d 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -220,7 +220,10 @@ static bool __wake_nocb_gp(struct rcu_data *rdp_gp,
+ raw_spin_unlock_irqrestore(&rdp_gp->nocb_gp_lock, flags);
+ if (needwake) {
+ trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("DoWake"));
+- wake_up_process(rdp_gp->nocb_gp_kthread);
++ if (cpu_is_offline(raw_smp_processor_id()))
++ swake_up_one_online(&rdp_gp->nocb_gp_wq);
++ else
++ wake_up_process(rdp_gp->nocb_gp_kthread);
+ }
+
+ return needwake;
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index f59e5c19d94450..c5a3691ba6cc19 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1599,46 +1599,40 @@ static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
+ return dl_time_before(__node_2_dle(a)->deadline, __node_2_dle(b)->deadline);
+ }
+
+-static inline struct sched_statistics *
++static __always_inline struct sched_statistics *
+ __schedstats_from_dl_se(struct sched_dl_entity *dl_se)
+ {
++ if (!schedstat_enabled())
++ return NULL;
++
++ if (dl_server(dl_se))
++ return NULL;
++
+ return &dl_task_of(dl_se)->stats;
+ }
+
+ static inline void
+ update_stats_wait_start_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se)
+ {
+- struct sched_statistics *stats;
+-
+- if (!schedstat_enabled())
+- return;
+-
+- stats = __schedstats_from_dl_se(dl_se);
+- __update_stats_wait_start(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
++ struct sched_statistics *stats = __schedstats_from_dl_se(dl_se);
++ if (stats)
++ __update_stats_wait_start(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
+ }
+
+ static inline void
+ update_stats_wait_end_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se)
+ {
+- struct sched_statistics *stats;
+-
+- if (!schedstat_enabled())
+- return;
+-
+- stats = __schedstats_from_dl_se(dl_se);
+- __update_stats_wait_end(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
++ struct sched_statistics *stats = __schedstats_from_dl_se(dl_se);
++ if (stats)
++ __update_stats_wait_end(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
+ }
+
+ static inline void
+ update_stats_enqueue_sleeper_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se)
+ {
+- struct sched_statistics *stats;
+-
+- if (!schedstat_enabled())
+- return;
+-
+- stats = __schedstats_from_dl_se(dl_se);
+- __update_stats_enqueue_sleeper(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
++ struct sched_statistics *stats = __schedstats_from_dl_se(dl_se);
++ if (stats)
++ __update_stats_enqueue_sleeper(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
+ }
+
+ static inline void
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 9057584ec06de9..1d2cbdb162a67d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -511,7 +511,7 @@ static int cfs_rq_is_idle(struct cfs_rq *cfs_rq)
+
+ static int se_is_idle(struct sched_entity *se)
+ {
+- return 0;
++ return task_has_idle_policy(task_of(se));
+ }
+
+ #endif /* CONFIG_FAIR_GROUP_SCHED */
+@@ -3188,6 +3188,15 @@ static bool vma_is_accessed(struct mm_struct *mm, struct vm_area_struct *vma)
+ return true;
+ }
+
++ /*
++ * This vma has not been accessed for a while, and if the number
++ * of threads in the same process is low, meaning no other threads
++ * can help scan this vma, force a vma scan.
++ */
++ if (READ_ONCE(mm->numa_scan_seq) >
++ (vma->numab_state->prev_scan_seq + get_nr_threads(current)))
++ return true;
++
+ return false;
+ }
+
+@@ -8381,16 +8390,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ if (test_tsk_need_resched(curr))
+ return;
+
+- /* Idle tasks are by definition preempted by non-idle tasks. */
+- if (unlikely(task_has_idle_policy(curr)) &&
+- likely(!task_has_idle_policy(p)))
+- goto preempt;
+-
+- /*
+- * Batch and idle tasks do not preempt non-idle tasks (their preemption
+- * is driven by the tick):
+- */
+- if (unlikely(p->policy != SCHED_NORMAL) || !sched_feat(WAKEUP_PREEMPTION))
++ if (!sched_feat(WAKEUP_PREEMPTION))
+ return;
+
+ find_matching_se(&se, &pse);
+@@ -8400,7 +8400,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ pse_is_idle = se_is_idle(pse);
+
+ /*
+- * Preempt an idle group in favor of a non-idle group (and don't preempt
++ * Preempt an idle entity in favor of a non-idle entity (and don't preempt
+ * in the inverse case).
+ */
+ if (cse_is_idle && !pse_is_idle)
+@@ -8408,9 +8408,14 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ if (cse_is_idle != pse_is_idle)
+ return;
+
++ /*
++ * BATCH and IDLE tasks do not preempt others.
++ */
++ if (unlikely(p->policy != SCHED_NORMAL))
++ return;
++
+ cfs_rq = cfs_rq_of(se);
+ update_curr(cfs_rq);
+-
+ /*
+ * XXX pick_eevdf(cfs_rq) != se ?
+ */
+@@ -9360,9 +9365,10 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
+
+ hw_pressure = arch_scale_hw_pressure(cpu_of(rq));
+
++ /* hw_pressure doesn't care about invariance */
+ decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
+ update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
+- update_hw_load_avg(now, rq, hw_pressure) |
++ update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ update_irq_load_avg(rq, 0);
+
+ if (others_have_blocked(rq))
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index cd098846e251cd..add26dc27d7e37 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1226,7 +1226,8 @@ static const struct bpf_func_proto bpf_get_func_arg_proto = {
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+- .arg3_type = ARG_PTR_TO_LONG,
++ .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg3_size = sizeof(u64),
+ };
+
+ BPF_CALL_2(get_func_ret, void *, ctx, u64 *, value)
+@@ -1242,7 +1243,8 @@ static const struct bpf_func_proto bpf_get_func_ret_proto = {
+ .func = get_func_ret,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+- .arg2_type = ARG_PTR_TO_LONG,
++ .arg2_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg2_size = sizeof(u64),
+ };
+
+ BPF_CALL_1(get_func_arg_cnt, void *, ctx)
+@@ -3485,17 +3487,20 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ uprobes[i].ref_ctr_offset,
+ &uprobes[i].consumer);
+ if (err) {
+- bpf_uprobe_unregister(&path, uprobes, i);
+- goto error_free;
++ link->cnt = i;
++ goto error_unregister;
+ }
+ }
+
+ err = bpf_link_prime(&link->link, &link_primer);
+ if (err)
+- goto error_free;
++ goto error_unregister;
+
+ return bpf_link_settle(&link_primer);
+
++error_unregister:
++ bpf_uprobe_unregister(&path, uprobes, link->cnt);
++
+ error_free:
+ kvfree(uprobes);
+ kfree(link);
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 7cea91e193a8f0..1ea8af72849cdb 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -142,13 +142,14 @@ static void fill_pool(void)
+ * READ_ONCE()s pair with the WRITE_ONCE()s in pool_lock critical
+ * sections.
+ */
+- while (READ_ONCE(obj_nr_tofree) && (READ_ONCE(obj_pool_free) < obj_pool_min_free)) {
++ while (READ_ONCE(obj_nr_tofree) &&
++ READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
+ raw_spin_lock_irqsave(&pool_lock, flags);
+ /*
+ * Recheck with the lock held as the worker thread might have
+ * won the race and freed the global free list already.
+ */
+- while (obj_nr_tofree && (obj_pool_free < obj_pool_min_free)) {
++ while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
+ obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
+ hlist_del(&obj->node);
+ WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 5e2e93307f0d05..d3412984170c03 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -65,7 +65,7 @@ static inline bool sbitmap_deferred_clear(struct sbitmap_word *map,
+ {
+ unsigned long mask, word_mask;
+
+- guard(spinlock_irqsave)(&map->swap_lock);
++ guard(raw_spinlock_irqsave)(&map->swap_lock);
+
+ if (!map->cleared) {
+ if (depth == 0)
+@@ -136,7 +136,7 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
+ }
+
+ for (i = 0; i < sb->map_nr; i++)
+- spin_lock_init(&sb->map[i].swap_lock);
++ raw_spin_lock_init(&sb->map[i].swap_lock);
+
+ return 0;
+ }
+diff --git a/lib/xz/xz_crc32.c b/lib/xz/xz_crc32.c
+index 88a2c35e1b5971..5627b00fca296e 100644
+--- a/lib/xz/xz_crc32.c
++++ b/lib/xz/xz_crc32.c
+@@ -29,7 +29,7 @@ STATIC_RW_DATA uint32_t xz_crc32_table[256];
+
+ XZ_EXTERN void xz_crc32_init(void)
+ {
+- const uint32_t poly = CRC32_POLY_LE;
++ const uint32_t poly = 0xEDB88320;
+
+ uint32_t i;
+ uint32_t j;
+diff --git a/lib/xz/xz_private.h b/lib/xz/xz_private.h
+index bf1e94ec7873cf..d9fd49b45fd758 100644
+--- a/lib/xz/xz_private.h
++++ b/lib/xz/xz_private.h
+@@ -105,10 +105,6 @@
+ # endif
+ #endif
+
+-#ifndef CRC32_POLY_LE
+-#define CRC32_POLY_LE 0xedb88320
+-#endif
+-
+ /*
+ * Allocate memory for LZMA2 decoder. xz_dec_lzma2_reset() must be used
+ * before calling xz_dec_lzma2_run().
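
Hardcoding 0xEDB88320 in xz_crc32_init() works because that constant is the bit-reflected CRC-32 polynomial; the #ifndef fallback removed from xz_private.h merely duplicated it. A free-standing sketch of the same table-driven construction, for illustration only and not the kernel's implementation:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_table[256];

static void crc32_init(void)
{
	const uint32_t poly = 0xEDB88320;	/* reflected CRC-32 polynomial */
	uint32_t i, r;
	int j;

	for (i = 0; i < 256; i++) {
		r = i;
		for (j = 0; j < 8; j++)
			r = (r & 1) ? (r >> 1) ^ poly : r >> 1;
		crc32_table[i] = r;
	}
}

static uint32_t crc32(const unsigned char *buf, unsigned int len)
{
	uint32_t crc = 0xFFFFFFFF;

	while (len--)
		crc = crc32_table[(crc ^ *buf++) & 0xFF] ^ (crc >> 8);
	return ~crc;
}

int main(void)
{
	crc32_init();
	/* Standard check value: CRC-32("123456789") == CBF43926. */
	printf("%08" PRIX32 "\n", crc32((const unsigned char *)"123456789", 9));
	return 0;
}
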
+diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
+index 58829baf8b5d9e..a0036dc78a3bbc 100644
+--- a/mm/damon/vaddr.c
++++ b/mm/damon/vaddr.c
+@@ -126,6 +126,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
+ * If this is too slow, it can be optimised to examine the maple
+ * tree gaps.
+ */
++ rcu_read_lock();
+ for_each_vma(vmi, vma) {
+ unsigned long gap;
+
+@@ -146,6 +147,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
+ next:
+ prev = vma;
+ }
++ rcu_read_unlock();
+
+ if (!sz_range(&second_gap) || !sz_range(&first_gap))
+ return -EINVAL;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 67c86a5d64a6a9..99b146d16a1850 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -220,6 +220,8 @@ static bool get_huge_zero_page(void)
+ count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
+ return false;
+ }
++ /* Ensure zero folio won't have large_rmappable flag set. */
++ folio_clear_large_rmappable(zero_folio);
+ preempt_disable();
+ if (cmpxchg(&huge_zero_folio, NULL, zero_folio)) {
+ preempt_enable();
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index aaf508be0a2b04..c4176574ed9e8b 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3921,100 +3921,124 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
+ return 0;
+ }
+
+-static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
++static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
++ struct list_head *src_list)
+ {
+- int i, nid = folio_nid(folio);
+- struct hstate *target_hstate;
+- struct page *subpage;
+- struct folio *inner_folio;
+- int rc = 0;
+-
+- target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
++ long rc;
++ struct folio *folio, *next;
++ LIST_HEAD(dst_list);
++ LIST_HEAD(ret_list);
+
+- remove_hugetlb_folio(h, folio, false);
+- spin_unlock_irq(&hugetlb_lock);
+-
+- /*
+- * If vmemmap already existed for folio, the remove routine above would
+- * have cleared the hugetlb folio flag. Hence the folio is technically
+- * no longer a hugetlb folio. hugetlb_vmemmap_restore_folio can only be
+- * passed hugetlb folios and will BUG otherwise.
+- */
+- if (folio_test_hugetlb(folio)) {
+- rc = hugetlb_vmemmap_restore_folio(h, folio);
+- if (rc) {
+- /* Allocation of vmemmmap failed, we can not demote folio */
+- spin_lock_irq(&hugetlb_lock);
+- add_hugetlb_folio(h, folio, false);
+- return rc;
+- }
+- }
+-
+- /*
+- * Use destroy_compound_hugetlb_folio_for_demote for all huge page
+- * sizes as it will not ref count folios.
+- */
+- destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
++ rc = hugetlb_vmemmap_restore_folios(src, src_list, &ret_list);
++ list_splice_init(&ret_list, src_list);
+
+ /*
+ * Taking target hstate mutex synchronizes with set_max_huge_pages.
+ * Without the mutex, pages added to target hstate could be marked
+ * as surplus.
+ *
+- * Note that we already hold h->resize_lock. To prevent deadlock,
++ * Note that we already hold src->resize_lock. To prevent deadlock,
+ * use the convention of always taking larger size hstate mutex first.
+ */
+- mutex_lock(&target_hstate->resize_lock);
+- for (i = 0; i < pages_per_huge_page(h);
+- i += pages_per_huge_page(target_hstate)) {
+- subpage = folio_page(folio, i);
+- inner_folio = page_folio(subpage);
+- if (hstate_is_gigantic(target_hstate))
+- prep_compound_gigantic_folio_for_demote(inner_folio,
+- target_hstate->order);
+- else
+- prep_compound_page(subpage, target_hstate->order);
+- folio_change_private(inner_folio, NULL);
+- prep_new_hugetlb_folio(target_hstate, inner_folio, nid);
+- free_huge_folio(inner_folio);
++ mutex_lock(&dst->resize_lock);
++
++ list_for_each_entry_safe(folio, next, src_list, lru) {
++ int i;
++
++ if (folio_test_hugetlb_vmemmap_optimized(folio))
++ continue;
++
++ list_del(&folio->lru);
++ /*
++ * Use destroy_compound_hugetlb_folio_for_demote for all huge page
++ * sizes as it will not ref count folios.
++ */
++ destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(src));
++
++ for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
++ struct page *page = folio_page(folio, i);
++
++ if (hstate_is_gigantic(dst))
++ prep_compound_gigantic_folio_for_demote(page_folio(page),
++ dst->order);
++ else
++ prep_compound_page(page, dst->order);
++ set_page_private(page, 0);
++
++ init_new_hugetlb_folio(dst, page_folio(page));
++ list_add(&page->lru, &dst_list);
++ }
+ }
+- mutex_unlock(&target_hstate->resize_lock);
+
+- spin_lock_irq(&hugetlb_lock);
++ prep_and_add_allocated_folios(dst, &dst_list);
+
+- /*
+- * Not absolutely necessary, but for consistency update max_huge_pages
+- * based on pool changes for the demoted page.
+- */
+- h->max_huge_pages--;
+- target_hstate->max_huge_pages +=
+- pages_per_huge_page(h) / pages_per_huge_page(target_hstate);
++ mutex_unlock(&dst->resize_lock);
+
+ return rc;
+ }
+
+-static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
++static long demote_pool_huge_page(struct hstate *src, nodemask_t *nodes_allowed,
++ unsigned long nr_to_demote)
+ __must_hold(&hugetlb_lock)
+ {
+ int nr_nodes, node;
+- struct folio *folio;
++ struct hstate *dst;
++ long rc = 0;
++ long nr_demoted = 0;
+
+ lockdep_assert_held(&hugetlb_lock);
+
+ /* We should never get here if no demote order */
+- if (!h->demote_order) {
++ if (!src->demote_order) {
+ pr_warn("HugeTLB: NULL demote order passed to demote_pool_huge_page.\n");
+ return -EINVAL; /* internal error */
+ }
++ dst = size_to_hstate(PAGE_SIZE << src->demote_order);
+
+- for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
+- list_for_each_entry(folio, &h->hugepage_freelists[node], lru) {
++ for_each_node_mask_to_free(src, nr_nodes, node, nodes_allowed) {
++ LIST_HEAD(list);
++ struct folio *folio, *next;
++
++ list_for_each_entry_safe(folio, next, &src->hugepage_freelists[node], lru) {
+ if (folio_test_hwpoison(folio))
+ continue;
+- return demote_free_hugetlb_folio(h, folio);
++
++ remove_hugetlb_folio(src, folio, false);
++ list_add(&folio->lru, &list);
++
++ if (++nr_demoted == nr_to_demote)
++ break;
+ }
++
++ spin_unlock_irq(&hugetlb_lock);
++
++ rc = demote_free_hugetlb_folios(src, dst, &list);
++
++ spin_lock_irq(&hugetlb_lock);
++
++ list_for_each_entry_safe(folio, next, &list, lru) {
++ list_del(&folio->lru);
++ add_hugetlb_folio(src, folio, false);
++
++ nr_demoted--;
++ }
++
++ if (rc < 0 || nr_demoted == nr_to_demote)
++ break;
+ }
+
++ /*
++ * Not absolutely necessary, but for consistency update max_huge_pages
++ * based on pool changes for the demoted page.
++ */
++ src->max_huge_pages -= nr_demoted;
++ dst->max_huge_pages += nr_demoted << (huge_page_order(src) - huge_page_order(dst));
++
++ if (rc < 0)
++ return rc;
++
++ if (nr_demoted)
++ return nr_demoted;
+ /*
+ * Only way to get here is if all pages on free lists are poisoned.
+ * Return -EBUSY so that caller will not retry.
+@@ -4249,6 +4273,8 @@ static ssize_t demote_store(struct kobject *kobj,
+ spin_lock_irq(&hugetlb_lock);
+
+ while (nr_demote) {
++ long rc;
++
+ /*
+ * Check for available pages to demote each time through the
+ * loop as demote_pool_huge_page will drop hugetlb_lock.
+@@ -4261,11 +4287,13 @@ static ssize_t demote_store(struct kobject *kobj,
+ if (!nr_available)
+ break;
+
+- err = demote_pool_huge_page(h, n_mask);
+- if (err)
++ rc = demote_pool_huge_page(h, n_mask, nr_demote);
++ if (rc < 0) {
++ err = rc;
+ break;
++ }
+
+- nr_demote--;
++ nr_demote -= rc;
+ }
+
+ spin_unlock_irq(&hugetlb_lock);
+@@ -6048,7 +6076,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
+ * When the original hugepage is shared one, it does not have
+ * anon_vma prepared.
+ */
+- ret = vmf_anon_prepare(vmf);
++ ret = __vmf_anon_prepare(vmf);
+ if (unlikely(ret))
+ goto out_release_all;
+
+@@ -6247,7 +6275,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
+ }
+
+ if (!(vma->vm_flags & VM_MAYSHARE)) {
+- ret = vmf_anon_prepare(vmf);
++ ret = __vmf_anon_prepare(vmf);
+ if (unlikely(ret))
+ goto out;
+ }
+@@ -6378,6 +6406,14 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
+ folio_unlock(folio);
+ out:
+ hugetlb_vma_unlock_read(vma);
++
++ /*
++ * We must check to release the per-VMA lock. __vmf_anon_prepare() is
++ * the only way ret can be set to VM_FAULT_RETRY.
++ */
++ if (unlikely(ret & VM_FAULT_RETRY))
++ vma_end_read(vma);
++
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ return ret;
+
+@@ -6599,6 +6635,14 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ }
+ out_mutex:
+ hugetlb_vma_unlock_read(vma);
++
++ /*
++ * We must check to release the per-VMA lock. __vmf_anon_prepare() in
++ * hugetlb_wp() is the only way ret can be set to VM_FAULT_RETRY.
++ */
++ if (unlikely(ret & VM_FAULT_RETRY))
++ vma_end_read(vma);
++
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ /*
+ * Generally it's safe to hold refcount during waiting page lock. But
+diff --git a/mm/internal.h b/mm/internal.h
+index b4d86436565b93..a963f67d3452ad 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -310,7 +310,16 @@ static inline void wake_throttle_isolated(pg_data_t *pgdat)
+ wake_up(wqh);
+ }
+
+-vm_fault_t vmf_anon_prepare(struct vm_fault *vmf);
++vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf);
++static inline vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
++{
++ vm_fault_t ret = __vmf_anon_prepare(vmf);
++
++ if (unlikely(ret & VM_FAULT_RETRY))
++ vma_end_read(vmf->vma);
++ return ret;
++}
++
+ vm_fault_t do_swap_page(struct vm_fault *vmf);
+ void folio_rotate_reclaimable(struct folio *folio);
+ bool __folio_end_writeback(struct folio *folio);
+diff --git a/mm/memory.c b/mm/memory.c
+index ebfc9768f801af..cda2c12c500b8d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3276,7 +3276,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
+ }
+
+ /**
+- * vmf_anon_prepare - Prepare to handle an anonymous fault.
++ * __vmf_anon_prepare - Prepare to handle an anonymous fault.
+ * @vmf: The vm_fault descriptor passed from the fault handler.
+ *
+ * When preparing to insert an anonymous page into a VMA from a
+@@ -3290,7 +3290,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
+ * Return: 0 if fault handling can proceed. Any other value should be
+ * returned to the caller.
+ */
+-vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
++vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
+ {
+ struct vm_area_struct *vma = vmf->vma;
+ vm_fault_t ret = 0;
+@@ -3298,10 +3298,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
+ if (likely(vma->anon_vma))
+ return 0;
+ if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+- if (!mmap_read_trylock(vma->vm_mm)) {
+- vma_end_read(vma);
++ if (!mmap_read_trylock(vma->vm_mm))
+ return VM_FAULT_RETRY;
+- }
+ }
+ if (__anon_vma_prepare(vma))
+ ret = VM_FAULT_OOM;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 923ea80ba7442b..368ab3878fa6e0 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1118,7 +1118,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
+ int rc = -EAGAIN;
+ int old_page_state = 0;
+ struct anon_vma *anon_vma = NULL;
+- bool is_lru = !__folio_test_movable(src);
++ bool is_lru = data_race(!__folio_test_movable(src));
+ bool locked = false;
+ bool dst_locked = false;
+
+diff --git a/mm/mmap.c b/mm/mmap.c
+index d0dfc85b209bbc..18fddcce03b851 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -3198,8 +3198,12 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
+ flags |= MAP_LOCKED;
+
+ file = get_file(vma->vm_file);
++ ret = security_mmap_file(vma->vm_file, prot, flags);
++ if (ret)
++ goto out_fput;
+ ret = do_mmap(vma->vm_file, start, size,
+ prot, flags, 0, pgoff, &populate, NULL);
++out_fput:
+ fput(file);
+ out:
+ mmap_write_unlock(mm);
+diff --git a/mm/util.c b/mm/util.c
+index bd283e2132e0d4..baca6cafc9f1a3 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -463,7 +463,7 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
+ if (gap + pad > gap)
+ gap += pad;
+
+- if (gap < MIN_GAP)
++ if (gap < MIN_GAP && MIN_GAP < MAX_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index c82502e213a88a..58b528df1a86ec 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -106,8 +106,7 @@ void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
+ * where a timeout + cancel does indicate an actual failure.
+ */
+ if (status && status != HCI_ERROR_UNKNOWN_CONN_ID)
+- mgmt_connect_failed(hdev, &conn->dst, conn->type,
+- conn->dst_type, status);
++ mgmt_connect_failed(hdev, conn, status);
+
+ /* The connection attempt was doing scan for new RPA, and is
+ * in scan phase. If params are not associated with any other
+@@ -1250,8 +1249,7 @@ void hci_conn_failed(struct hci_conn *conn, u8 status)
+ hci_le_conn_failed(conn, status);
+ break;
+ case ACL_LINK:
+- mgmt_connect_failed(hdev, &conn->dst, conn->type,
+- conn->dst_type, status);
++ mgmt_connect_failed(hdev, conn, status);
+ break;
+ }
+
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 5533e6f561b3a3..40ccdef168d7da 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -5380,7 +5380,10 @@ int hci_stop_discovery_sync(struct hci_dev *hdev)
+ if (!e)
+ return 0;
+
+- return hci_remote_name_cancel_sync(hdev, &e->data.bdaddr);
++ /* Ignore cancel errors since they shouldn't interfere with
++ * stopping the discovery.
++ */
++ hci_remote_name_cancel_sync(hdev, &e->data.bdaddr);
+ }
+
+ return 0;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 279902e8bd8a77..e4f564d6f6fbfb 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -9779,13 +9779,18 @@ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ mgmt_pending_remove(cmd);
+ }
+
+-void mgmt_connect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+- u8 addr_type, u8 status)
++void mgmt_connect_failed(struct hci_dev *hdev, struct hci_conn *conn, u8 status)
+ {
+ struct mgmt_ev_connect_failed ev;
+
+- bacpy(&ev.addr.bdaddr, bdaddr);
+- ev.addr.type = link_to_bdaddr(link_type, addr_type);
++ if (test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) {
++ mgmt_device_disconnected(hdev, &conn->dst, conn->type,
++ conn->dst_type, status, true);
++ return;
++ }
++
++ bacpy(&ev.addr.bdaddr, &conn->dst);
++ ev.addr.type = link_to_bdaddr(conn->type, conn->dst_type);
+ ev.status = mgmt_status(status);
+
+ mgmt_event(MGMT_EV_CONNECT_FAILED, hdev, &ev, sizeof(ev), NULL);
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 46d3ec3aa44b4a..217049fa496e9d 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1471,8 +1471,10 @@ static void bcm_notify(struct bcm_sock *bo, unsigned long msg,
+ /* remove device reference, if this is our bound device */
+ if (bo->bound && bo->ifindex == dev->ifindex) {
+ #if IS_ENABLED(CONFIG_PROC_FS)
+- if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read)
++ if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read) {
+ remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir);
++ bo->bcm_proc_read = NULL;
++ }
+ #endif
+ bo->bound = 0;
+ bo->ifindex = 0;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 4be73de5033cb7..319f47df33300c 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1179,10 +1179,10 @@ static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
+ break;
+ case -ENETDOWN:
+ /* In this case we should get a netdev_event(), all active
+- * sessions will be cleared by
+- * j1939_cancel_all_active_sessions(). So handle this as an
+- * error, but let j1939_cancel_all_active_sessions() do the
+- * cleanup including propagation of the error to user space.
++ * sessions will be cleared by j1939_cancel_active_session().
++ * So handle this as an error, but let
++ * j1939_cancel_active_session() do the cleanup including
++ * propagation of the error to user space.
+ */
+ break;
+ case -EOVERFLOW:
+diff --git a/net/core/filter.c b/net/core/filter.c
+index f3c72cf8609974..0e719c7c43bb7f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6262,20 +6262,25 @@ BPF_CALL_5(bpf_skb_check_mtu, struct sk_buff *, skb,
+ int ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
+ struct net_device *dev = skb->dev;
+ int skb_len, dev_len;
+- int mtu;
++ int mtu = 0;
+
+- if (unlikely(flags & ~(BPF_MTU_CHK_SEGS)))
+- return -EINVAL;
++ if (unlikely(flags & ~(BPF_MTU_CHK_SEGS))) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+- if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len)))
+- return -EINVAL;
++ if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len))) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ dev = __dev_via_ifindex(dev, ifindex);
+- if (unlikely(!dev))
+- return -ENODEV;
++ if (unlikely(!dev)) {
++ ret = -ENODEV;
++ goto out;
++ }
+
+ mtu = READ_ONCE(dev->mtu);
+-
+ dev_len = mtu + dev->hard_header_len;
+
+ /* If set use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */
+@@ -6293,15 +6298,12 @@ BPF_CALL_5(bpf_skb_check_mtu, struct sk_buff *, skb,
+ */
+ if (skb_is_gso(skb)) {
+ ret = BPF_MTU_CHK_RET_SUCCESS;
+-
+ if (flags & BPF_MTU_CHK_SEGS &&
+ !skb_gso_validate_network_len(skb, mtu))
+ ret = BPF_MTU_CHK_RET_SEGS_TOOBIG;
+ }
+ out:
+- /* BPF verifier guarantees valid pointer */
+ *mtu_len = mtu;
+-
+ return ret;
+ }
+
+@@ -6311,19 +6313,21 @@ BPF_CALL_5(bpf_xdp_check_mtu, struct xdp_buff *, xdp,
+ struct net_device *dev = xdp->rxq->dev;
+ int xdp_len = xdp->data_end - xdp->data;
+ int ret = BPF_MTU_CHK_RET_SUCCESS;
+- int mtu, dev_len;
++ int mtu = 0, dev_len;
+
+ /* XDP variant doesn't support multi-buffer segment check (yet) */
+- if (unlikely(flags))
+- return -EINVAL;
++ if (unlikely(flags)) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ dev = __dev_via_ifindex(dev, ifindex);
+- if (unlikely(!dev))
+- return -ENODEV;
++ if (unlikely(!dev)) {
++ ret = -ENODEV;
++ goto out;
++ }
+
+ mtu = READ_ONCE(dev->mtu);
+-
+- /* Add L2-header as dev MTU is L3 size */
+ dev_len = mtu + dev->hard_header_len;
+
+ /* Use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */
+@@ -6333,10 +6337,8 @@ BPF_CALL_5(bpf_xdp_check_mtu, struct xdp_buff *, xdp,
+ xdp_len += len_diff; /* minus result pass check */
+ if (xdp_len > dev_len)
+ ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
+-
+- /* BPF verifier guarantees valid pointer */
++out:
+ *mtu_len = mtu;
+-
+ return ret;
+ }
+
+@@ -6346,7 +6348,8 @@ static const struct bpf_func_proto bpf_skb_check_mtu_proto = {
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+- .arg3_type = ARG_PTR_TO_INT,
++ .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg3_size = sizeof(u32),
+ .arg4_type = ARG_ANYTHING,
+ .arg5_type = ARG_ANYTHING,
+ };
+@@ -6357,7 +6360,8 @@ static const struct bpf_func_proto bpf_xdp_check_mtu_proto = {
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+- .arg3_type = ARG_PTR_TO_INT,
++ .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg3_size = sizeof(u32),
+ .arg4_type = ARG_ANYTHING,
+ .arg5_type = ARG_ANYTHING,
+ };
+@@ -8579,13 +8583,16 @@ static bool bpf_skb_is_valid_access(int off, int size, enum bpf_access_type type
+ if (off + size > offsetofend(struct __sk_buff, cb[4]))
+ return false;
+ break;
++ case bpf_ctx_range(struct __sk_buff, data):
++ case bpf_ctx_range(struct __sk_buff, data_meta):
++ case bpf_ctx_range(struct __sk_buff, data_end):
++ if (info->is_ldsx || size != size_default)
++ return false;
++ break;
+ case bpf_ctx_range_till(struct __sk_buff, remote_ip6[0], remote_ip6[3]):
+ case bpf_ctx_range_till(struct __sk_buff, local_ip6[0], local_ip6[3]):
+ case bpf_ctx_range_till(struct __sk_buff, remote_ip4, remote_ip4):
+ case bpf_ctx_range_till(struct __sk_buff, local_ip4, local_ip4):
+- case bpf_ctx_range(struct __sk_buff, data):
+- case bpf_ctx_range(struct __sk_buff, data_meta):
+- case bpf_ctx_range(struct __sk_buff, data_end):
+ if (size != size_default)
+ return false;
+ break;
+@@ -9029,6 +9036,14 @@ static bool xdp_is_valid_access(int off, int size,
+ }
+ }
+ return false;
++ } else {
++ switch (off) {
++ case offsetof(struct xdp_md, data_meta):
++ case offsetof(struct xdp_md, data):
++ case offsetof(struct xdp_md, data_end):
++ if (info->is_ldsx)
++ return false;
++ }
+ }
+
+ switch (off) {
+@@ -9354,12 +9369,12 @@ static bool flow_dissector_is_valid_access(int off, int size,
+
+ switch (off) {
+ case bpf_ctx_range(struct __sk_buff, data):
+- if (size != size_default)
++ if (info->is_ldsx || size != size_default)
+ return false;
+ info->reg_type = PTR_TO_PACKET;
+ return true;
+ case bpf_ctx_range(struct __sk_buff, data_end):
+- if (size != size_default)
++ if (info->is_ldsx || size != size_default)
+ return false;
+ info->reg_type = PTR_TO_PACKET_END;
+ return true;
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index d3dbb92153f2fe..724b6856fcc3e9 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1183,6 +1183,7 @@ static void sock_hash_free(struct bpf_map *map)
+ sock_put(elem->sk);
+ sock_hash_free_elem(htab, elem);
+ }
++ cond_resched();
+ }
+
+ /* wait for psock readers accessing its map link */
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index ab6d0d98dbc34c..336518e623b280 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -224,57 +224,59 @@ int sysctl_icmp_msgs_per_sec __read_mostly = 1000;
+ int sysctl_icmp_msgs_burst __read_mostly = 50;
+
+ static struct {
+- spinlock_t lock;
+- u32 credit;
++ atomic_t credit;
+ u32 stamp;
+-} icmp_global = {
+- .lock = __SPIN_LOCK_UNLOCKED(icmp_global.lock),
+-};
++} icmp_global;
+
+ /**
+ * icmp_global_allow - Are we allowed to send one more ICMP message ?
+ *
+ * Uses a token bucket to limit our ICMP messages to ~sysctl_icmp_msgs_per_sec.
+ * Returns false if we reached the limit and can not send another packet.
+- * Note: called with BH disabled
++ * Works in tandem with icmp_global_consume().
+ */
+ bool icmp_global_allow(void)
+ {
+- u32 credit, delta, incr = 0, now = (u32)jiffies;
+- bool rc = false;
++ u32 delta, now, oldstamp;
++ int incr, new, old;
+
+- /* Check if token bucket is empty and cannot be refilled
+- * without taking the spinlock. The READ_ONCE() are paired
+- * with the following WRITE_ONCE() in this same function.
++ /* Note: many cpus could find this condition true.
++ * Then later icmp_global_consume() could consume more credits;
++ * this is an acceptable race.
+ */
+- if (!READ_ONCE(icmp_global.credit)) {
+- delta = min_t(u32, now - READ_ONCE(icmp_global.stamp), HZ);
+- if (delta < HZ / 50)
+- return false;
+- }
++ if (atomic_read(&icmp_global.credit) > 0)
++ return true;
+
+- spin_lock(&icmp_global.lock);
+- delta = min_t(u32, now - icmp_global.stamp, HZ);
+- if (delta >= HZ / 50) {
+- incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ;
+- if (incr)
+- WRITE_ONCE(icmp_global.stamp, now);
+- }
+- credit = min_t(u32, icmp_global.credit + incr,
+- READ_ONCE(sysctl_icmp_msgs_burst));
+- if (credit) {
+- /* We want to use a credit of one in average, but need to randomize
+- * it for security reasons.
+- */
+- credit = max_t(int, credit - get_random_u32_below(3), 0);
+- rc = true;
++ now = jiffies;
++ oldstamp = READ_ONCE(icmp_global.stamp);
++ delta = min_t(u32, now - oldstamp, HZ);
++ if (delta < HZ / 50)
++ return false;
++
++ incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ;
++ if (!incr)
++ return false;
++
++ if (cmpxchg(&icmp_global.stamp, oldstamp, now) == oldstamp) {
++ old = atomic_read(&icmp_global.credit);
++ do {
++ new = min(old + incr, READ_ONCE(sysctl_icmp_msgs_burst));
++ } while (!atomic_try_cmpxchg(&icmp_global.credit, &old, new));
+ }
+- WRITE_ONCE(icmp_global.credit, credit);
+- spin_unlock(&icmp_global.lock);
+- return rc;
++ return true;
+ }
+ EXPORT_SYMBOL(icmp_global_allow);
+
++void icmp_global_consume(void)
++{
++ int credits = get_random_u32_below(3);
++
++ /* Note: this might make icmp_global.credit negative. */
++ if (credits)
++ atomic_sub(credits, &icmp_global.credit);
++}
++EXPORT_SYMBOL(icmp_global_consume);
++
+ static bool icmpv4_mask_allow(struct net *net, int type, int code)
+ {
+ if (type > NR_ICMP_TYPES)
+@@ -291,14 +293,16 @@ static bool icmpv4_mask_allow(struct net *net, int type, int code)
+ return false;
+ }
+
+-static bool icmpv4_global_allow(struct net *net, int type, int code)
++static bool icmpv4_global_allow(struct net *net, int type, int code,
++ bool *apply_ratelimit)
+ {
+ if (icmpv4_mask_allow(net, type, code))
+ return true;
+
+- if (icmp_global_allow())
++ if (icmp_global_allow()) {
++ *apply_ratelimit = true;
+ return true;
+-
++ }
+ __ICMP_INC_STATS(net, ICMP_MIB_RATELIMITGLOBAL);
+ return false;
+ }
+@@ -308,15 +312,16 @@ static bool icmpv4_global_allow(struct net *net, int type, int code)
+ */
+
+ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+- struct flowi4 *fl4, int type, int code)
++ struct flowi4 *fl4, int type, int code,
++ bool apply_ratelimit)
+ {
+ struct dst_entry *dst = &rt->dst;
+ struct inet_peer *peer;
+ bool rc = true;
+ int vif;
+
+- if (icmpv4_mask_allow(net, type, code))
+- goto out;
++ if (!apply_ratelimit)
++ return true;
+
+ /* No rate limit on loopback */
+ if (dst->dev && (dst->dev->flags&IFF_LOOPBACK))
+@@ -331,6 +336,8 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ out:
+ if (!rc)
+ __ICMP_INC_STATS(net, ICMP_MIB_RATELIMITHOST);
++ else
++ icmp_global_consume();
+ return rc;
+ }
+
+@@ -402,6 +409,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ struct ipcm_cookie ipc;
+ struct rtable *rt = skb_rtable(skb);
+ struct net *net = dev_net(rt->dst.dev);
++ bool apply_ratelimit = false;
+ struct flowi4 fl4;
+ struct sock *sk;
+ struct inet_sock *inet;
+@@ -413,11 +421,11 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ if (ip_options_echo(net, &icmp_param->replyopts.opt.opt, skb))
+ return;
+
+- /* Needed by both icmp_global_allow and icmp_xmit_lock */
++ /* Needed by both icmpv4_global_allow and icmp_xmit_lock */
+ local_bh_disable();
+
+- /* global icmp_msgs_per_sec */
+- if (!icmpv4_global_allow(net, type, code))
++ /* is global icmp_msgs_per_sec exhausted ? */
++ if (!icmpv4_global_allow(net, type, code, &apply_ratelimit))
+ goto out_bh_enable;
+
+ sk = icmp_xmit_lock(net);
+@@ -450,7 +458,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ rt = ip_route_output_key(net, &fl4);
+ if (IS_ERR(rt))
+ goto out_unlock;
+- if (icmpv4_xrlim_allow(net, rt, &fl4, type, code))
++ if (icmpv4_xrlim_allow(net, rt, &fl4, type, code, apply_ratelimit))
+ icmp_push_reply(sk, icmp_param, &fl4, &ipc, &rt);
+ ip_rt_put(rt);
+ out_unlock:
+@@ -596,6 +604,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ int room;
+ struct icmp_bxm icmp_param;
+ struct rtable *rt = skb_rtable(skb_in);
++ bool apply_ratelimit = false;
+ struct ipcm_cookie ipc;
+ struct flowi4 fl4;
+ __be32 saddr;
+@@ -677,7 +686,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ }
+ }
+
+- /* Needed by both icmp_global_allow and icmp_xmit_lock */
++ /* Needed by both icmpv4_global_allow and icmp_xmit_lock */
+ local_bh_disable();
+
+ /* Check global sysctl_icmp_msgs_per_sec ratelimit, unless
+@@ -685,7 +694,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ * loopback, then peer ratelimit still work (in icmpv4_xrlim_allow)
+ */
+ if (!(skb_in->dev && (skb_in->dev->flags&IFF_LOOPBACK)) &&
+- !icmpv4_global_allow(net, type, code))
++ !icmpv4_global_allow(net, type, code, &apply_ratelimit))
+ goto out_bh_enable;
+
+ sk = icmp_xmit_lock(net);
+@@ -744,7 +753,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ goto out_unlock;
+
+ /* peer icmp_ratelimit */
+- if (!icmpv4_xrlim_allow(net, rt, &fl4, type, code))
++ if (!icmpv4_xrlim_allow(net, rt, &fl4, type, code, apply_ratelimit))
+ goto ende;
+
+ /* RFC says return as much as we can without exceeding 576 bytes. */
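
The icmp_global rewrite trades the spinlock for atomics: consumers take credit optimistically, only the winner of the stamp cmpxchg refills the bucket, and a small negative overshoot from icmp_global_consume() is tolerated by design. A userspace sketch of that structure in C11 atomics; the names and the tick-based clock are assumptions, not the kernel code:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int credit;
static atomic_uint stamp;		/* jiffies-style timestamp stand-in */

static bool bucket_allow(unsigned int now, unsigned int rate_per_tick,
			 int burst)
{
	unsigned int old = atomic_load(&stamp);
	unsigned int delta = now - old;
	int incr, cur, nxt;

	/* Fast path: racing consumers may overdraw; that is accepted. */
	if (atomic_load(&credit) > 0)
		return true;

	incr = (int)(rate_per_tick * delta);	/* tokens earned meanwhile */
	if (!incr)
		return false;

	/* Only the thread that wins the stamp update refills the bucket. */
	if (atomic_compare_exchange_strong(&stamp, &old, now)) {
		cur = atomic_load(&credit);
		do {
			nxt = cur + incr;
			if (nxt > burst)
				nxt = burst;	/* cap at the burst size */
		} while (!atomic_compare_exchange_weak(&credit, &cur, nxt));
	}
	return true;
}

static void bucket_consume(int tokens)
{
	/* May drive credit negative, mirroring icmp_global_consume(). */
	atomic_fetch_sub(&credit, tokens);
}
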
+diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
+index 08d4b7132d4c45..1c9c686d9522f7 100644
+--- a/net/ipv6/Kconfig
++++ b/net/ipv6/Kconfig
+@@ -323,6 +323,7 @@ config IPV6_RPL_LWTUNNEL
+ bool "IPv6: RPL Source Routing Header support"
+ depends on IPV6
+ select LWTUNNEL
++ select DST_CACHE
+ help
+ Support for RFC6554 RPL Source Routing Header using the lightweight
+ tunnels mechanism.
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index 7b31674644efc3..46f70e4a835139 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -175,14 +175,16 @@ static bool icmpv6_mask_allow(struct net *net, int type)
+ return false;
+ }
+
+-static bool icmpv6_global_allow(struct net *net, int type)
++static bool icmpv6_global_allow(struct net *net, int type,
++ bool *apply_ratelimit)
+ {
+ if (icmpv6_mask_allow(net, type))
+ return true;
+
+- if (icmp_global_allow())
++ if (icmp_global_allow()) {
++ *apply_ratelimit = true;
+ return true;
+-
++ }
+ __ICMP_INC_STATS(net, ICMP_MIB_RATELIMITGLOBAL);
+ return false;
+ }
+@@ -191,13 +193,13 @@ static bool icmpv6_global_allow(struct net *net, int type)
+ * Check the ICMP output rate limit
+ */
+ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+- struct flowi6 *fl6)
++ struct flowi6 *fl6, bool apply_ratelimit)
+ {
+ struct net *net = sock_net(sk);
+ struct dst_entry *dst;
+ bool res = false;
+
+- if (icmpv6_mask_allow(net, type))
++ if (!apply_ratelimit)
+ return true;
+
+ /*
+@@ -228,6 +230,8 @@ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+ if (!res)
+ __ICMP6_INC_STATS(net, ip6_dst_idev(dst),
+ ICMP6_MIB_RATELIMITHOST);
++ else
++ icmp_global_consume();
+ dst_release(dst);
+ return res;
+ }
+@@ -452,6 +456,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ struct net *net;
+ struct ipv6_pinfo *np;
+ const struct in6_addr *saddr = NULL;
++ bool apply_ratelimit = false;
+ struct dst_entry *dst;
+ struct icmp6hdr tmp_hdr;
+ struct flowi6 fl6;
+@@ -533,11 +538,12 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ return;
+ }
+
+- /* Needed by both icmp_global_allow and icmpv6_xmit_lock */
++ /* Needed by both icmpv6_global_allow and icmpv6_xmit_lock */
+ local_bh_disable();
+
+ /* Check global sysctl_icmp_msgs_per_sec ratelimit */
+- if (!(skb->dev->flags & IFF_LOOPBACK) && !icmpv6_global_allow(net, type))
++ if (!(skb->dev->flags & IFF_LOOPBACK) &&
++ !icmpv6_global_allow(net, type, &apply_ratelimit))
+ goto out_bh_enable;
+
+ mip6_addr_swap(skb, parm);
+@@ -575,7 +581,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+
+ np = inet6_sk(sk);
+
+- if (!icmpv6_xrlim_allow(sk, type, &fl6))
++ if (!icmpv6_xrlim_allow(sk, type, &fl6, apply_ratelimit))
+ goto out;
+
+ tmp_hdr.icmp6_type = type;
+@@ -717,6 +723,7 @@ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
+ struct ipv6_pinfo *np;
+ const struct in6_addr *saddr = NULL;
+ struct icmp6hdr *icmph = icmp6_hdr(skb);
++ bool apply_ratelimit = false;
+ struct icmp6hdr tmp_hdr;
+ struct flowi6 fl6;
+ struct icmpv6_msg msg;
+@@ -781,8 +788,9 @@ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
+ goto out;
+
+ /* Check the ratelimit */
+- if ((!(skb->dev->flags & IFF_LOOPBACK) && !icmpv6_global_allow(net, ICMPV6_ECHO_REPLY)) ||
+- !icmpv6_xrlim_allow(sk, ICMPV6_ECHO_REPLY, &fl6))
++ if ((!(skb->dev->flags & IFF_LOOPBACK) &&
++ !icmpv6_global_allow(net, ICMPV6_ECHO_REPLY, &apply_ratelimit)) ||
++ !icmpv6_xrlim_allow(sk, ICMPV6_ECHO_REPLY, &fl6, apply_ratelimit))
+ goto out_dst_release;
+
+ idev = __in6_dev_get(skb->dev);
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index dedee264b8f6c8..b9457473c176df 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -223,33 +223,23 @@ void nf_reject_ip6_tcphdr_put(struct sk_buff *nskb,
+ const struct tcphdr *oth, unsigned int otcplen)
+ {
+ struct tcphdr *tcph;
+- int needs_ack;
+
+ skb_reset_transport_header(nskb);
+- tcph = skb_put(nskb, sizeof(struct tcphdr));
++ tcph = skb_put_zero(nskb, sizeof(struct tcphdr));
+ /* Truncate to length (no data) */
+ tcph->doff = sizeof(struct tcphdr)/4;
+ tcph->source = oth->dest;
+ tcph->dest = oth->source;
+
+ if (oth->ack) {
+- needs_ack = 0;
+ tcph->seq = oth->ack_seq;
+- tcph->ack_seq = 0;
+ } else {
+- needs_ack = 1;
+ tcph->ack_seq = htonl(ntohl(oth->seq) + oth->syn + oth->fin +
+ otcplen - (oth->doff<<2));
+- tcph->seq = 0;
++ tcph->ack = 1;
+ }
+
+- /* Reset flags */
+- ((u_int8_t *)tcph)[13] = 0;
+ tcph->rst = 1;
+- tcph->ack = needs_ack;
+- tcph->window = 0;
+- tcph->urg_ptr = 0;
+- tcph->check = 0;
+
+ /* Adjust TCP checksum */
+ tcph->check = csum_ipv6_magic(&ipv6_hdr(nskb)->saddr,
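The nf_reject_ipv6 hunk above replaces skb_put() plus field-by-field clearing with skb_put_zero(), so the reset header starts out all-zero and only the non-zero fields need assigning. A rough sketch of the same simplification on a plain struct (the layout and the RST flag value are simplified stand-ins, not struct tcphdr):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    struct tcp_hdr_fields {          /* simplified stand-in for struct tcphdr */
        uint16_t source, dest;
        uint32_t seq, ack_seq;
        uint8_t  doff, flags;        /* rst/ack would live in 'flags' */
        uint16_t window, urg_ptr, check;
    };

    /* Zero-initialise first, then set only the non-zero fields; every
     * field not mentioned (window, urg_ptr, seq, ...) is already 0. */
    static void build_reset(struct tcp_hdr_fields *t, uint16_t sport,
                            uint16_t dport)
    {
        memset(t, 0, sizeof(*t));    /* the skb_put_zero() analogue */
        t->doff = sizeof(*t) / 4;
        t->source = dport;           /* the reply swaps the ports */
        t->dest = sport;
        t->flags = 0x04;             /* RST (illustrative encoding) */
    }

    int main(void)
    {
        struct tcp_hdr_fields t;

        build_reset(&t, 12345, 80);
        printf("window=%u urg=%u\n", t.window, t.urg_ptr);  /* both 0 for free */
        return 0;
    }
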
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 219701caba1e94..b4dcd8f3e7baba 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -174,7 +174,7 @@ static void rt6_uncached_list_flush_dev(struct net_device *dev)
+ struct net_device *rt_dev = rt->dst.dev;
+ bool handled = false;
+
+- if (rt_idev->dev == dev) {
++ if (rt_idev && rt_idev->dev == dev) {
+ rt->rt6i_idev = in6_dev_get(blackhole_netdev);
+ in6_dev_put(rt_idev);
+ handled = true;
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 2c83b7586422dd..db3c19a42e1ca7 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -263,10 +263,8 @@ static int rpl_input(struct sk_buff *skb)
+ rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+
+ err = rpl_do_srh(skb, rlwt);
+- if (unlikely(err)) {
+- kfree_skb(skb);
+- return err;
+- }
++ if (unlikely(err))
++ goto drop;
+
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
+@@ -286,9 +284,13 @@ static int rpl_input(struct sk_buff *skb)
+
+ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+ if (unlikely(err))
+- return err;
++ goto drop;
+
+ return dst_input(skb);
++
++drop:
++ kfree_skb(skb);
++ return err;
+ }
+
+ static int nla_put_rpl_srh(struct sk_buff *skb, int attrtype,
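The rpl_input() change above funnels both failure paths through one drop: label so the skb is freed exactly once; before the fix, the skb_cow_head() failure path returned without freeing and leaked the buffer. A generic sketch of that single-exit cleanup style, with a hypothetical heap buffer in place of the skb:

    #include <stdlib.h>
    #include <stdio.h>

    /* One owner, one free: every failure path funnels through 'drop'
     * so the buffer can neither leak nor be freed twice. */
    static int process(int fail_early, int fail_late)
    {
        char *buf = malloc(64);
        int err = 0;

        if (!buf)
            return -1;

        if (fail_early) {
            err = -2;
            goto drop;
        }
        if (fail_late) {             /* the path that previously leaked */
            err = -3;
            goto drop;
        }

        printf("processed\n");
        free(buf);
        return 0;

    drop:
        free(buf);
        return err;
    }

    int main(void)
    {
        return process(0, 1) == -3 ? 0 : 1;
    }
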
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index b4ad66af3af31d..f735e41560a32b 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -462,6 +462,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ {
+ struct ieee80211_local *local = sdata->local;
+ unsigned long flags;
++ struct sk_buff_head freeq;
+ struct sk_buff *skb, *tmp;
+ u32 hw_reconf_flags = 0;
+ int i, flushed;
+@@ -637,18 +638,32 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ skb_queue_purge(&sdata->status_queue);
+ }
+
++ /*
++ * Since ieee80211_free_txskb() may issue __dev_queue_xmit()
++ * which should be called with interrupts enabled, reclamation
++ * is done in two phases:
++ */
++ __skb_queue_head_init(&freeq);
++
++ /* unlink from local queues... */
+ spin_lock_irqsave(&local->queue_stop_reason_lock, flags);
+ for (i = 0; i < IEEE80211_MAX_QUEUES; i++) {
+ skb_queue_walk_safe(&local->pending[i], skb, tmp) {
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ if (info->control.vif == &sdata->vif) {
+ __skb_unlink(skb, &local->pending[i]);
+- ieee80211_free_txskb(&local->hw, skb);
++ __skb_queue_tail(&freeq, skb);
+ }
+ }
+ }
+ spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags);
+
++ /* ... and perform actual reclamation with interrupts enabled. */
++ skb_queue_walk_safe(&freeq, skb, tmp) {
++ __skb_unlink(skb, &freeq);
++ ieee80211_free_txskb(&local->hw, skb);
++ }
++
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ ieee80211_txq_remove_vlan(local, sdata);
+
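The ieee80211_do_stop() hunk above reclaims pending frames in two phases: matching skbs are unlinked onto a private list while the IRQ-off spinlock is held, and ieee80211_free_txskb() runs afterwards with interrupts enabled, since it may re-enter the transmit path via __dev_queue_xmit(). A userspace sketch of the collect-then-free pattern with a mutex standing in for the spinlock (types and names invented):

    #include <pthread.h>
    #include <stdlib.h>
    #include <stdio.h>

    struct node { struct node *next; int owner; };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static struct node *pending;     /* shared queue protected by 'lock' */

    /* Phase 1: under the lock, move matching nodes to a private list.
     * Phase 2: after unlocking, do the expensive free outside the lock. */
    static void purge_owner(int owner)
    {
        struct node *freeq = NULL, **pp, *n;

        pthread_mutex_lock(&lock);
        pp = &pending;
        while ((n = *pp)) {
            if (n->owner == owner) {
                *pp = n->next;       /* unlink... */
                n->next = freeq;     /* ...and stash on the private list */
                freeq = n;
            } else {
                pp = &n->next;
            }
        }
        pthread_mutex_unlock(&lock);

        while ((n = freeq)) {        /* actual reclamation, lock not held */
            freeq = n->next;
            free(n);
        }
    }

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            struct node *n = malloc(sizeof(*n));
            n->owner = i % 2;
            n->next = pending;
            pending = n;
        }
        purge_owner(1);
        for (struct node *n = pending; n; n = n->next)
            printf("kept owner %d\n", n->owner);
        return 0;
    }
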
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index f9526bbc363371..9e3d2ed9cf6780 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -4715,7 +4715,7 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ ((assoc_data->wmm && !elems->wmm_param) ||
+ (link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_HT &&
+ (!elems->ht_cap_elem || !elems->ht_operation)) ||
+- (link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT &&
++ (is_5ghz && link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT &&
+ (!elems->vht_cap_elem || !elems->vht_operation)))) {
+ const struct cfg80211_bss_ies *ies;
+ struct ieee802_11_elems *bss_elems;
+@@ -4763,19 +4763,22 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ sdata_info(sdata,
+ "AP bug: HT operation missing from AssocResp\n");
+ }
+- if (!elems->vht_cap_elem && bss_elems->vht_cap_elem &&
+- link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
+- elems->vht_cap_elem = bss_elems->vht_cap_elem;
+- sdata_info(sdata,
+- "AP bug: VHT capa missing from AssocResp\n");
+- }
+- if (!elems->vht_operation && bss_elems->vht_operation &&
+- link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
+- elems->vht_operation = bss_elems->vht_operation;
+- sdata_info(sdata,
+- "AP bug: VHT operation missing from AssocResp\n");
+- }
+
++ if (is_5ghz) {
++ if (!elems->vht_cap_elem && bss_elems->vht_cap_elem &&
++ link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
++ elems->vht_cap_elem = bss_elems->vht_cap_elem;
++ sdata_info(sdata,
++ "AP bug: VHT capa missing from AssocResp\n");
++ }
++
++ if (!elems->vht_operation && bss_elems->vht_operation &&
++ link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
++ elems->vht_operation = bss_elems->vht_operation;
++ sdata_info(sdata,
++ "AP bug: VHT operation missing from AssocResp\n");
++ }
++ }
+ kfree(bss_elems);
+ }
+
+@@ -7660,6 +7663,7 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
+ assoc_data->tries++;
++ assoc_data->comeback = false;
+ if (assoc_data->tries > IEEE80211_ASSOC_MAX_TRIES) {
+ sdata_info(sdata, "association with %pM timed out\n",
+ assoc_data->ap_addr);
+diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c
+index 28d03196ef75a7..29fab7ae47b4c7 100644
+--- a/net/mac80211/offchannel.c
++++ b/net/mac80211/offchannel.c
+@@ -997,6 +997,7 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ }
+
+ IEEE80211_SKB_CB(skb)->flags = flags;
++ IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_DONT_USE_RATE_MASK;
+
+ skb->dev = sdata->dev;
+
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index 4dc1def6954865..3dc9752188d58f 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -890,7 +890,7 @@ void ieee80211_get_tx_rates(struct ieee80211_vif *vif,
+ if (ieee80211_is_tx_data(skb))
+ rate_control_apply_mask(sdata, sta, sband, dest, max_rates);
+
+- if (!(info->control.flags & IEEE80211_TX_CTRL_SCAN_TX))
++ if (!(info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK))
+ mask = sdata->rc_rateidx_mask[info->band];
+
+ if (dest[0].idx < 0)
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index b5f2df61c7f671..1c5d99975ad04d 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -649,7 +649,7 @@ static void ieee80211_send_scan_probe_req(struct ieee80211_sub_if_data *sdata,
+ cpu_to_le16(IEEE80211_SN_TO_SEQ(sn));
+ }
+ IEEE80211_SKB_CB(skb)->flags |= tx_flags;
+- IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_SCAN_TX;
++ IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_DONT_USE_RATE_MASK;
+ ieee80211_tx_skb_tid_band(sdata, skb, 7, channel->band);
+ }
+ }
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index bca7b341dd772d..a9ee8698225929 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -699,7 +699,7 @@ ieee80211_tx_h_rate_ctrl(struct ieee80211_tx_data *tx)
+ txrc.skb = tx->skb;
+ txrc.reported_rate.idx = -1;
+
+- if (unlikely(info->control.flags & IEEE80211_TX_CTRL_SCAN_TX)) {
++ if (unlikely(info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK)) {
+ txrc.rate_idx_mask = ~0;
+ } else {
+ txrc.rate_idx_mask = tx->sdata->rc_rateidx_mask[info->band];
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 4cbf71d0786b0d..c55cf5bc36b2f2 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -382,7 +382,7 @@ static int ctnetlink_dump_secctx(struct sk_buff *skb, const struct nf_conn *ct)
+ #define ctnetlink_dump_secctx(a, b) (0)
+ #endif
+
+-#ifdef CONFIG_NF_CONNTRACK_LABELS
++#ifdef CONFIG_NF_CONNTRACK_EVENTS
+ static inline int ctnetlink_label_size(const struct nf_conn *ct)
+ {
+ struct nf_conn_labels *labels = nf_ct_labels_find(ct);
+@@ -391,6 +391,7 @@ static inline int ctnetlink_label_size(const struct nf_conn *ct)
+ return 0;
+ return nla_total_size(sizeof(labels->bits));
+ }
++#endif
+
+ static int
+ ctnetlink_dump_labels(struct sk_buff *skb, const struct nf_conn *ct)
+@@ -411,10 +412,6 @@ ctnetlink_dump_labels(struct sk_buff *skb, const struct nf_conn *ct)
+
+ return 0;
+ }
+-#else
+-#define ctnetlink_dump_labels(a, b) (0)
+-#define ctnetlink_label_size(a) (0)
+-#endif
+
+ #define master_tuple(ct) &(ct->master->tuplehash[IP_CT_DIR_ORIGINAL].tuple)
+
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0a2f7934695896..472f211472db48 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -393,6 +393,7 @@ static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *tr
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+ struct nft_trans_binding *binding;
++ struct nft_trans_set *trans_set;
+
+ list_add_tail(&trans->list, &nft_net->commit_list);
+
+@@ -402,9 +403,13 @@ static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *tr
+
+ switch (trans->msg_type) {
+ case NFT_MSG_NEWSET:
++ trans_set = nft_trans_container_set(trans);
++
+ if (!nft_trans_set_update(trans) &&
+ nft_set_is_anonymous(nft_trans_set(trans)))
+ list_add_tail(&binding->binding_list, &nft_net->binding_list);
++
++ list_add_tail(&trans_set->list_trans_newset, &nft_net->commit_set_list);
+ break;
+ case NFT_MSG_NEWCHAIN:
+ if (!nft_trans_chain_update(trans) &&
+@@ -611,6 +616,7 @@ static int __nft_trans_set_add(const struct nft_ctx *ctx, int msg_type,
+
+ trans_set = nft_trans_container_set(trans);
+ INIT_LIST_HEAD(&trans_set->nft_trans_binding.binding_list);
++ INIT_LIST_HEAD(&trans_set->list_trans_newset);
+
+ if (msg_type == NFT_MSG_NEWSET && ctx->nla[NFTA_SET_ID] && !desc) {
+ nft_trans_set_id(trans) =
+@@ -1841,7 +1847,7 @@ static int nft_dump_basechain_hook(struct sk_buff *skb, int family,
+ if (!hook_list)
+ hook_list = &basechain->hook_list;
+
+- list_for_each_entry(hook, hook_list, list) {
++ list_for_each_entry_rcu(hook, hook_list, list) {
+ if (!first)
+ first = hook;
+
+@@ -4485,17 +4491,16 @@ static struct nft_set *nft_set_lookup_byid(const struct net *net,
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+ u32 id = ntohl(nla_get_be32(nla));
+- struct nft_trans *trans;
++ struct nft_trans_set *trans;
+
+- list_for_each_entry(trans, &nft_net->commit_list, list) {
+- if (trans->msg_type == NFT_MSG_NEWSET) {
+- struct nft_set *set = nft_trans_set(trans);
++ /* it's likely the id we need is at the tail, not at the start */
++ list_for_each_entry_reverse(trans, &nft_net->commit_set_list, list_trans_newset) {
++ struct nft_set *set = trans->set;
+
+- if (id == nft_trans_set_id(trans) &&
+- set->table == table &&
+- nft_active_genmask(set, genmask))
+- return set;
+- }
++ if (id == trans->set_id &&
++ set->table == table &&
++ nft_active_genmask(set, genmask))
++ return set;
+ }
+ return ERR_PTR(-ENOENT);
+ }
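nft_set_lookup_byid() above now walks a dedicated commit_set_list holding only NEWSET transactions, newest first, instead of filtering every entry of the full commit list. A small sketch of keeping such a side index where the most recently added entry is checked first (hypothetical types; a push-front singly linked list plays the role of the reverse walk over a tail-append list):

    #include <stdlib.h>
    #include <stdio.h>

    struct trans_set {
        unsigned set_id;
        struct trans_set *next_newset;   /* side list: NEWSET entries only */
    };

    static struct trans_set *newset_head;    /* newest entry at the head */

    static void add_newset(struct trans_set *t)
    {
        t->next_newset = newset_head;        /* push-front: newest first */
        newset_head = t;
    }

    /* Newest-first scan over only the relevant transactions, instead of
     * testing the message type of every entry on the full commit list. */
    static struct trans_set *lookup_byid(unsigned id)
    {
        for (struct trans_set *t = newset_head; t; t = t->next_newset)
            if (t->set_id == id)
                return t;
        return NULL;
    }

    int main(void)
    {
        struct trans_set a = { .set_id = 1 }, b = { .set_id = 2 };

        add_newset(&a);
        add_newset(&b);
        printf("found %u\n", lookup_byid(2)->set_id);  /* first hop: newest */
        return 0;
    }
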
+@@ -4587,7 +4592,7 @@ int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result)
+ return -ERANGE;
+
+ ms *= NSEC_PER_MSEC;
+- *result = nsecs_to_jiffies64(ms);
++ *result = nsecs_to_jiffies64(ms) ? : !!ms;
+ return 0;
+ }
+
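The *result change above uses the GNU `a ?: b` shorthand (equivalent to `a ? a : b` with `a` evaluated once, an extension the kernel relies on) so that a non-zero timeout shorter than one jiffy rounds up to 1 instead of truncating to 0, which would read as "no timeout". A sketch of the clamp with an assumed HZ=100 conversion constant:

    #include <stdint.h>
    #include <stdio.h>

    #define NSEC_PER_JIFFY 10000000ULL   /* assumes HZ=100 for the example */

    static uint64_t ns_to_jiffies(uint64_t ns)
    {
        uint64_t j = ns / NSEC_PER_JIFFY;

        /* GNU extension: 'a ?: b' is 'a ? a : b'. If the division
         * truncated a non-zero duration to 0 jiffies, fall back to
         * !!ns, i.e. 1 for any non-zero input. */
        return j ?: !!ns;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)ns_to_jiffies(0));     /* 0 */
        printf("%llu\n", (unsigned long long)ns_to_jiffies(5000));  /* 1, clamped */
        printf("%llu\n",
               (unsigned long long)ns_to_jiffies(3 * NSEC_PER_JIFFY)); /* 3 */
        return 0;
    }
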
+@@ -6674,7 +6679,7 @@ static int nft_setelem_catchall_insert(const struct net *net,
+ }
+ }
+
+- catchall = kmalloc(sizeof(*catchall), GFP_KERNEL);
++ catchall = kmalloc(sizeof(*catchall), GFP_KERNEL_ACCOUNT);
+ if (!catchall)
+ return -ENOMEM;
+
+@@ -6910,17 +6915,23 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ return err;
+ } else if (set->flags & NFT_SET_TIMEOUT &&
+ !(flags & NFT_SET_ELEM_INTERVAL_END)) {
+- timeout = READ_ONCE(set->timeout);
++ timeout = set->timeout;
+ }
+
+ expiration = 0;
+ if (nla[NFTA_SET_ELEM_EXPIRATION] != NULL) {
+ if (!(set->flags & NFT_SET_TIMEOUT))
+ return -EINVAL;
++ if (timeout == 0)
++ return -EOPNOTSUPP;
++
+ err = nf_msecs_to_jiffies64(nla[NFTA_SET_ELEM_EXPIRATION],
+ &expiration);
+ if (err)
+ return err;
++
++ if (expiration > timeout)
++ return -ERANGE;
+ }
+
+ if (nla[NFTA_SET_ELEM_EXPR]) {
+@@ -7011,7 +7022,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ if (err < 0)
+ goto err_parse_key_end;
+
+- if (timeout != READ_ONCE(set->timeout)) {
++ if (timeout != set->timeout) {
+ err = nft_set_ext_add(&tmpl, NFT_SET_EXT_TIMEOUT);
+ if (err < 0)
+ goto err_parse_key_end;
+@@ -9174,7 +9185,7 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+ flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
+ FLOW_BLOCK_UNBIND);
+ list_del_rcu(&hook->list);
+- kfree(hook);
++ kfree_rcu(hook, rcu);
+ }
+ kfree(flowtable->name);
+ module_put(flowtable->data.type->owner);
+@@ -10447,6 +10458,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+ break;
+ case NFT_MSG_NEWSET:
++ list_del(&nft_trans_container_set(trans)->list_trans_newset);
+ if (nft_trans_set_update(trans)) {
+ struct nft_set *set = nft_trans_set(trans);
+
+@@ -10755,6 +10767,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ nft_trans_destroy(trans);
+ break;
+ case NFT_MSG_NEWSET:
++ list_del(&nft_trans_container_set(trans)->list_trans_newset);
+ if (nft_trans_set_update(trans)) {
+ nft_trans_destroy(trans);
+ break;
+@@ -10850,6 +10863,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ }
+ }
+
++ WARN_ON_ONCE(!list_empty(&nft_net->commit_set_list));
++
+ nft_set_abort_update(&set_update_list);
+
+ synchronize_rcu();
+@@ -11519,6 +11534,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+
+ INIT_LIST_HEAD(&nft_net->tables);
+ INIT_LIST_HEAD(&nft_net->commit_list);
++ INIT_LIST_HEAD(&nft_net->commit_set_list);
+ INIT_LIST_HEAD(&nft_net->binding_list);
+ INIT_LIST_HEAD(&nft_net->module_list);
+ INIT_LIST_HEAD(&nft_net->notify_list);
+@@ -11549,6 +11565,7 @@ static void __net_exit nf_tables_exit_net(struct net *net)
+ gc_seq = nft_gc_seq_begin(nft_net);
+
+ WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
++ WARN_ON_ONCE(!list_empty(&nft_net->commit_set_list));
+
+ if (!list_empty(&nft_net->module_list))
+ nf_tables_module_autoload_cleanup(net);
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index d3d11dede54507..85450f60114263 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -536,7 +536,7 @@ nft_match_large_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ struct xt_match *m = expr->ops->data;
+ int ret;
+
+- priv->info = kmalloc(XT_ALIGN(m->matchsize), GFP_KERNEL);
++ priv->info = kmalloc(XT_ALIGN(m->matchsize), GFP_KERNEL_ACCOUNT);
+ if (!priv->info)
+ return -ENOMEM;
+
+@@ -810,7 +810,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ goto err;
+ }
+
+- ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++ ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL_ACCOUNT);
+ if (!ops) {
+ err = -ENOMEM;
+ goto err;
+@@ -900,7 +900,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ goto err;
+ }
+
+- ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++ ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL_ACCOUNT);
+ if (!ops) {
+ err = -ENOMEM;
+ goto err;
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index b4ada3ab21679b..489a9b34f1ecc1 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -56,7 +56,7 @@ static struct nft_elem_priv *nft_dynset_new(struct nft_set *set,
+ if (!atomic_add_unless(&set->nelems, 1, set->size))
+ return NULL;
+
+- timeout = priv->timeout ? : set->timeout;
++ timeout = priv->timeout ? : READ_ONCE(set->timeout);
+ elem_priv = nft_set_elem_init(set, &priv->tmpl,
+ &regs->data[priv->sreg_key], NULL,
+ &regs->data[priv->sreg_data],
+@@ -95,7 +95,7 @@ void nft_dynset_eval(const struct nft_expr *expr,
+ expr, regs, &ext)) {
+ if (priv->op == NFT_DYNSET_OP_UPDATE &&
+ nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) {
+- timeout = priv->timeout ? : set->timeout;
++ timeout = priv->timeout ? : READ_ONCE(set->timeout);
+ *nft_set_ext_expiration(ext) = get_jiffies_64() + timeout;
+ }
+
+@@ -313,7 +313,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ nft_dynset_ext_add_expr(priv);
+
+ if (set->flags & NFT_SET_TIMEOUT) {
+- if (timeout || set->timeout) {
++ if (timeout || READ_ONCE(set->timeout)) {
+ nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_TIMEOUT);
+ nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_EXPIRATION);
+ }
+diff --git a/net/netfilter/nft_log.c b/net/netfilter/nft_log.c
+index 5defe6e4fd9820..e3558813799579 100644
+--- a/net/netfilter/nft_log.c
++++ b/net/netfilter/nft_log.c
+@@ -163,7 +163,7 @@ static int nft_log_init(const struct nft_ctx *ctx,
+
+ nla = tb[NFTA_LOG_PREFIX];
+ if (nla != NULL) {
+- priv->prefix = kmalloc(nla_len(nla) + 1, GFP_KERNEL);
++ priv->prefix = kmalloc(nla_len(nla) + 1, GFP_KERNEL_ACCOUNT);
+ if (priv->prefix == NULL)
+ return -ENOMEM;
+ nla_strscpy(priv->prefix, nla, nla_len(nla) + 1);
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index 9139ce38ea7b9a..f23faf565b6873 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -954,7 +954,7 @@ static int nft_secmark_obj_init(const struct nft_ctx *ctx,
+ if (tb[NFTA_SECMARK_CTX] == NULL)
+ return -EINVAL;
+
+- priv->ctx = nla_strdup(tb[NFTA_SECMARK_CTX], GFP_KERNEL);
++ priv->ctx = nla_strdup(tb[NFTA_SECMARK_CTX], GFP_KERNEL_ACCOUNT);
+ if (!priv->ctx)
+ return -ENOMEM;
+
+diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c
+index 7d29db7c2ac0f0..bd058babfc820c 100644
+--- a/net/netfilter/nft_numgen.c
++++ b/net/netfilter/nft_numgen.c
+@@ -66,7 +66,7 @@ static int nft_ng_inc_init(const struct nft_ctx *ctx,
+ if (priv->offset + priv->modulus - 1 < priv->offset)
+ return -EOVERFLOW;
+
+- priv->counter = kmalloc(sizeof(*priv->counter), GFP_KERNEL);
++ priv->counter = kmalloc(sizeof(*priv->counter), GFP_KERNEL_ACCOUNT);
+ if (!priv->counter)
+ return -ENOMEM;
+
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index eb4c4a4ac7acea..7be342b495f5f7 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -663,7 +663,7 @@ static int pipapo_realloc_mt(struct nft_pipapo_field *f,
+ check_add_overflow(rules, extra, &rules_alloc))
+ return -EOVERFLOW;
+
+- new_mt = kvmalloc_array(rules_alloc, sizeof(*new_mt), GFP_KERNEL);
++ new_mt = kvmalloc_array(rules_alloc, sizeof(*new_mt), GFP_KERNEL_ACCOUNT);
+ if (!new_mt)
+ return -ENOMEM;
+
+@@ -936,7 +936,7 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ return;
+ }
+
+- new_lt = kvzalloc(lt_size + NFT_PIPAPO_ALIGN_HEADROOM, GFP_KERNEL);
++ new_lt = kvzalloc(lt_size + NFT_PIPAPO_ALIGN_HEADROOM, GFP_KERNEL_ACCOUNT);
+ if (!new_lt)
+ return;
+
+@@ -1212,7 +1212,7 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ scratch = kzalloc_node(struct_size(scratch, map,
+ bsize_max * 2) +
+ NFT_PIPAPO_ALIGN_HEADROOM,
+- GFP_KERNEL, cpu_to_node(i));
++ GFP_KERNEL_ACCOUNT, cpu_to_node(i));
+ if (!scratch) {
+ /* On failure, there's no need to undo previous
+ * allocations: this means that some scratch maps have
+@@ -1427,7 +1427,7 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ struct nft_pipapo_match *new;
+ int i;
+
+- new = kmalloc(struct_size(new, f, old->field_count), GFP_KERNEL);
++ new = kmalloc(struct_size(new, f, old->field_count), GFP_KERNEL_ACCOUNT);
+ if (!new)
+ return NULL;
+
+@@ -1457,7 +1457,7 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ new_lt = kvzalloc(src->groups * NFT_PIPAPO_BUCKETS(src->bb) *
+ src->bsize * sizeof(*dst->lt) +
+ NFT_PIPAPO_ALIGN_HEADROOM,
+- GFP_KERNEL);
++ GFP_KERNEL_ACCOUNT);
+ if (!new_lt)
+ goto out_lt;
+
+@@ -1470,7 +1470,8 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+
+ if (src->rules > 0) {
+ dst->mt = kvmalloc_array(src->rules_alloc,
+- sizeof(*src->mt), GFP_KERNEL);
++ sizeof(*src->mt),
++ GFP_KERNEL_ACCOUNT);
+ if (!dst->mt)
+ goto out_mt;
+
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 60a76e6e348e7b..5c6ed68cc6e058 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -509,13 +509,14 @@ static int nft_tunnel_obj_init(const struct nft_ctx *ctx,
+ return err;
+ }
+
+- md = metadata_dst_alloc(priv->opts.len, METADATA_IP_TUNNEL, GFP_KERNEL);
++ md = metadata_dst_alloc(priv->opts.len, METADATA_IP_TUNNEL,
++ GFP_KERNEL_ACCOUNT);
+ if (!md)
+ return -ENOMEM;
+
+ memcpy(&md->u.tun_info, &info, sizeof(info));
+ #ifdef CONFIG_DST_CACHE
+- err = dst_cache_init(&md->u.tun_info.dst_cache, GFP_KERNEL);
++ err = dst_cache_init(&md->u.tun_info.dst_cache, GFP_KERNEL_ACCOUNT);
+ if (err < 0) {
+ metadata_dst_free(md);
+ return err;
+diff --git a/net/qrtr/af_qrtr.c b/net/qrtr/af_qrtr.c
+index 41ece61eb57ab7..00c51cf693f3d0 100644
+--- a/net/qrtr/af_qrtr.c
++++ b/net/qrtr/af_qrtr.c
+@@ -884,7 +884,7 @@ static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+
+ mutex_lock(&qrtr_node_lock);
+ list_for_each_entry(node, &qrtr_all_nodes, item) {
+- skbn = skb_clone(skb, GFP_KERNEL);
++ skbn = pskb_copy(skb, GFP_KERNEL);
+ if (!skbn)
+ break;
+ skb_set_owner_w(skbn, skb->sk);
+diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
+index 593846d252143c..114fef65f92eab 100644
+--- a/net/tipc/bcast.c
++++ b/net/tipc/bcast.c
+@@ -320,8 +320,8 @@ static int tipc_mcast_send_sync(struct net *net, struct sk_buff *skb,
+ {
+ struct tipc_msg *hdr, *_hdr;
+ struct sk_buff_head tmpq;
++ u16 cong_link_cnt = 0;
+ struct sk_buff *_skb;
+- u16 cong_link_cnt;
+ int rc = 0;
+
+ /* Is a cluster supporting with new capabilities ? */
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 0be0dcb07f7b62..001ccc55ef0f93 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -693,10 +693,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ unix_state_unlock(sk);
+
+ #if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+- if (u->oob_skb) {
+- kfree_skb(u->oob_skb);
+- u->oob_skb = NULL;
+- }
++ u->oob_skb = NULL;
+ #endif
+
+ wake_up_interruptible_all(&u->peer_wait);
+@@ -2226,13 +2223,9 @@ static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other
+ }
+
+ maybe_add_creds(skb, sock, other);
+- skb_get(skb);
+-
+ scm_stat_add(other, skb);
+
+ spin_lock(&other->sk_receive_queue.lock);
+- if (ousk->oob_skb)
+- consume_skb(ousk->oob_skb);
+ WRITE_ONCE(ousk->oob_skb, skb);
+ __skb_queue_tail(&other->sk_receive_queue, skb);
+ spin_unlock(&other->sk_receive_queue.lock);
+@@ -2640,8 +2633,6 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+
+ if (!(state->flags & MSG_PEEK))
+ WRITE_ONCE(u->oob_skb, NULL);
+- else
+- skb_get(oob_skb);
+
+ spin_unlock(&sk->sk_receive_queue.lock);
+ unix_state_unlock(sk);
+@@ -2651,8 +2642,6 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+ if (!(state->flags & MSG_PEEK))
+ UNIXCB(oob_skb).consumed += 1;
+
+- consume_skb(oob_skb);
+-
+ mutex_unlock(&u->iolock);
+
+ if (chunk < 0)
+@@ -2665,56 +2654,52 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
+ int flags, int copied)
+ {
++ struct sk_buff *read_skb = NULL, *unread_skb = NULL;
+ struct unix_sock *u = unix_sk(sk);
+
+- if (!unix_skb_len(skb)) {
+- struct sk_buff *unlinked_skb = NULL;
++ if (likely(unix_skb_len(skb) && skb != READ_ONCE(u->oob_skb)))
++ return skb;
+
+- spin_lock(&sk->sk_receive_queue.lock);
++ spin_lock(&sk->sk_receive_queue.lock);
+
++ if (!unix_skb_len(skb)) {
+ if (copied && (!u->oob_skb || skb == u->oob_skb)) {
+ skb = NULL;
+ } else if (flags & MSG_PEEK) {
+ skb = skb_peek_next(skb, &sk->sk_receive_queue);
+ } else {
+- unlinked_skb = skb;
++ read_skb = skb;
+ skb = skb_peek_next(skb, &sk->sk_receive_queue);
+- __skb_unlink(unlinked_skb, &sk->sk_receive_queue);
++ __skb_unlink(read_skb, &sk->sk_receive_queue);
+ }
+
+- spin_unlock(&sk->sk_receive_queue.lock);
++ if (!skb)
++ goto unlock;
++ }
+
+- consume_skb(unlinked_skb);
+- } else {
+- struct sk_buff *unlinked_skb = NULL;
++ if (skb != u->oob_skb)
++ goto unlock;
+
+- spin_lock(&sk->sk_receive_queue.lock);
++ if (copied) {
++ skb = NULL;
++ } else if (!(flags & MSG_PEEK)) {
++ WRITE_ONCE(u->oob_skb, NULL);
+
+- if (skb == u->oob_skb) {
+- if (copied) {
+- skb = NULL;
+- } else if (!(flags & MSG_PEEK)) {
+- if (sock_flag(sk, SOCK_URGINLINE)) {
+- WRITE_ONCE(u->oob_skb, NULL);
+- consume_skb(skb);
+- } else {
+- __skb_unlink(skb, &sk->sk_receive_queue);
+- WRITE_ONCE(u->oob_skb, NULL);
+- unlinked_skb = skb;
+- skb = skb_peek(&sk->sk_receive_queue);
+- }
+- } else if (!sock_flag(sk, SOCK_URGINLINE)) {
+- skb = skb_peek_next(skb, &sk->sk_receive_queue);
+- }
++ if (!sock_flag(sk, SOCK_URGINLINE)) {
++ __skb_unlink(skb, &sk->sk_receive_queue);
++ unread_skb = skb;
++ skb = skb_peek(&sk->sk_receive_queue);
+ }
++ } else if (!sock_flag(sk, SOCK_URGINLINE)) {
++ skb = skb_peek_next(skb, &sk->sk_receive_queue);
++ }
+
+- spin_unlock(&sk->sk_receive_queue.lock);
++unlock:
++ spin_unlock(&sk->sk_receive_queue.lock);
++
++ consume_skb(read_skb);
++ kfree_skb(unread_skb);
+
+- if (unlinked_skb) {
+- WARN_ON_ONCE(skb_unref(unlinked_skb));
+- kfree_skb(unlinked_skb);
+- }
+- }
+ return skb;
+ }
+ #endif
+@@ -2756,7 +2741,6 @@ static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ unix_state_unlock(sk);
+
+ if (drop) {
+- WARN_ON_ONCE(skb_unref(skb));
+ kfree_skb(skb);
+ return -EAGAIN;
+ }
+@@ -3192,9 +3176,13 @@ static int unix_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ skb = skb_peek(&sk->sk_receive_queue);
+ if (skb) {
+ struct sk_buff *oob_skb = READ_ONCE(u->oob_skb);
++ struct sk_buff *next_skb;
++
++ next_skb = skb_peek_next(skb, &sk->sk_receive_queue);
+
+ if (skb == oob_skb ||
+- (!oob_skb && !unix_skb_len(skb)))
++ (!unix_skb_len(skb) &&
++ (!oob_skb || next_skb == oob_skb)))
+ answ = 1;
+ }
+
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 06d94ad999e992..0068e758be4ddb 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -337,18 +337,6 @@ static bool unix_vertex_dead(struct unix_vertex *vertex)
+ return true;
+ }
+
+-static void unix_collect_queue(struct unix_sock *u, struct sk_buff_head *hitlist)
+-{
+- skb_queue_splice_init(&u->sk.sk_receive_queue, hitlist);
+-
+-#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+- if (u->oob_skb) {
+- WARN_ON_ONCE(skb_unref(u->oob_skb));
+- u->oob_skb = NULL;
+- }
+-#endif
+-}
+-
+ static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist)
+ {
+ struct unix_vertex *vertex;
+@@ -371,11 +359,11 @@ static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist
+ struct sk_buff_head *embryo_queue = &skb->sk->sk_receive_queue;
+
+ spin_lock(&embryo_queue->lock);
+- unix_collect_queue(unix_sk(skb->sk), hitlist);
++ skb_queue_splice_init(embryo_queue, hitlist);
+ spin_unlock(&embryo_queue->lock);
+ }
+ } else {
+- unix_collect_queue(u, hitlist);
++ skb_queue_splice_init(queue, hitlist);
+ }
+
+ spin_unlock(&queue->lock);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 7397a372c78eb7..1d83bc3de5ca5d 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -9778,7 +9778,8 @@ nl80211_parse_sched_scan(struct wiphy *wiphy, struct wireless_dev *wdev,
+ return ERR_PTR(-ENOMEM);
+
+ if (n_ssids)
+- request->ssids = (void *)&request->channels[n_channels];
++ request->ssids = (void *)request +
++ struct_size(request, channels, n_channels);
+ request->n_ssids = n_ssids;
+ if (ie_len) {
+ if (n_ssids)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 64eeed82d43d5c..3ff818849d83a5 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -3467,8 +3467,8 @@ int cfg80211_wext_siwscan(struct net_device *dev,
+ n_channels = ieee80211_get_num_supported_channels(wiphy);
+ }
+
+- creq = kzalloc(sizeof(*creq) + sizeof(struct cfg80211_ssid) +
+- n_channels * sizeof(void *),
++ creq = kzalloc(struct_size(creq, channels, n_channels) +
++ sizeof(struct cfg80211_ssid),
+ GFP_ATOMIC);
+ if (!creq)
+ return -ENOMEM;
+@@ -3476,7 +3476,7 @@ int cfg80211_wext_siwscan(struct net_device *dev,
+ creq->wiphy = wiphy;
+ creq->wdev = dev->ieee80211_ptr;
+ /* SSIDs come after channels */
+- creq->ssids = (void *)&creq->channels[n_channels];
++ creq->ssids = (void *)creq + struct_size(creq, channels, n_channels);
+ creq->n_channels = n_channels;
+ creq->n_ssids = 1;
+ creq->scan_start = jiffies;
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index d9d7bf8bb5c1ac..431da30817a6f6 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -115,7 +115,8 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev)
+ n_channels = i;
+ }
+ request->n_channels = n_channels;
+- request->ssids = (void *)&request->channels[n_channels];
++ request->ssids = (void *)request +
++ struct_size(request, channels, n_channels);
+ request->n_ssids = 1;
+
+ memcpy(request->ssids[0].ssid, wdev->conn->params.ssid,
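All three cfg80211 hunks compute the SSID array's address as the request base plus struct_size(request, channels, n_channels) rather than taking &request->channels[n_channels], which indexes one element past the flexible array and can trip bounds-checking instrumentation. A sketch of carving two arrays out of one allocation that way (simplified types, and a local STRUCT_SIZE macro standing in for the kernel helper):

    #include <stdlib.h>
    #include <stddef.h>
    #include <stdio.h>

    struct ssid { char ssid[32]; };

    struct request {
        int n_channels;
        int n_ssids;
        struct ssid *ssids;      /* points into the same allocation */
        int channels[];          /* flexible array member */
    };

    #define STRUCT_SIZE(type, member, n) \
        (sizeof(type) + sizeof(((type *)0)->member[0]) * (n))

    int main(void)
    {
        int n_channels = 4, n_ssids = 2;
        size_t base = STRUCT_SIZE(struct request, channels, n_channels);

        struct request *req = calloc(1, base + n_ssids * sizeof(struct ssid));
        if (!req)
            return 1;

        req->n_channels = n_channels;
        req->n_ssids = n_ssids;
        /* SSIDs come after the channels: offset from the allocation
         * start, no indexing past the flexible array's counted bound. */
        req->ssids = (void *)((char *)req + base);

        req->channels[3] = 11;
        req->ssids[1].ssid[0] = 'x';
        printf("ok\n");
        free(req);
        return 0;
    }
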
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 9a7c3adc8a3bf4..edeeb056fe4da1 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -998,10 +998,10 @@ unsigned int cfg80211_classify8021d(struct sk_buff *skb,
+ * Diffserv Service Classes no update is needed:
+ * - Standard: DF
+ * - Low Priority Data: CS1
+- * - Multimedia Streaming: AF31, AF32, AF33
+ * - Multimedia Conferencing: AF41, AF42, AF43
+ * - Network Control Traffic: CS7
+ * - Real-Time Interactive: CS4
++ * - Signaling: CS5
+ */
+ switch (dscp >> 2) {
+ case 10:
+@@ -1026,9 +1026,11 @@ unsigned int cfg80211_classify8021d(struct sk_buff *skb,
+ /* Broadcasting video: CS3 */
+ ret = 4;
+ break;
+- case 40:
+- /* Signaling: CS5 */
+- ret = 5;
++ case 26:
++ case 28:
++ case 30:
++ /* Multimedia Streaming: AF31, AF32, AF33 */
++ ret = 4;
+ break;
+ case 44:
+ /* Voice Admit: VA */
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index c0e0204b963045..b0f24ebd05f0ba 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
+ return nb_entries;
+ }
+
+-u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
++static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
++ u32 max)
+ {
+- u32 nb_entries1 = 0, nb_entries2;
++ int i;
+
+- if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
++ for (i = 0; i < max; i++) {
+ struct xdp_buff *buff;
+
+- /* Slow path */
+ buff = xp_alloc(pool);
+- if (buff)
+- *xdp = buff;
+- return !!buff;
++ if (unlikely(!buff))
++ return i;
++ *xdp = buff;
++ xdp++;
+ }
+
++ return max;
++}
++
++u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
++{
++ u32 nb_entries1 = 0, nb_entries2;
++
++ if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
++ return xp_alloc_slow(pool, xdp, max);
++
+ if (unlikely(pool->free_list_cnt)) {
+ nb_entries1 = xp_alloc_reused(pool, xdp, max);
+ if (nb_entries1 == max)
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index 3e003dd6bea09c..dca56aa360ff32 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -169,6 +169,10 @@ BPF_EXTRA_CFLAGS += -I$(srctree)/arch/mips/include/asm/mach-generic
+ endif
+ endif
+
++ifeq ($(ARCH), x86)
++BPF_EXTRA_CFLAGS += -fcf-protection
++endif
++
+ TPROGS_CFLAGS += -Wall -O2
+ TPROGS_CFLAGS += -Wmissing-prototypes
+ TPROGS_CFLAGS += -Wstrict-prototypes
+@@ -405,7 +409,7 @@ $(obj)/%.o: $(src)/%.c
+ -Wno-gnu-variable-sized-type-not-at-end \
+ -Wno-address-of-packed-member -Wno-tautological-compare \
+ -Wno-unknown-warning-option $(CLANG_ARCH_ARGS) \
+- -fno-asynchronous-unwind-tables -fcf-protection \
++ -fno-asynchronous-unwind-tables \
+ -I$(srctree)/samples/bpf/ -include asm_goto_workaround.h \
+ -O2 -emit-llvm -Xclang -disable-llvm-passes -c $< -o - | \
+ $(OPT) -O2 -mtriple=bpf-pc-linux | $(LLVM_DIS) | \
+diff --git a/security/apparmor/include/net.h b/security/apparmor/include/net.h
+index 67bf888c3bd6b9..c42ed8a73f1cef 100644
+--- a/security/apparmor/include/net.h
++++ b/security/apparmor/include/net.h
+@@ -51,10 +51,9 @@ struct aa_sk_ctx {
+ struct aa_label *peer;
+ };
+
+-#define SK_CTX(X) ((X)->sk_security)
+ static inline struct aa_sk_ctx *aa_sock(const struct sock *sk)
+ {
+- return sk->sk_security;
++ return sk->sk_security + apparmor_blob_sizes.lbs_sock;
+ }
+
+ #define DEFINE_AUDIT_NET(NAME, OP, SK, F, T, P) \
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 808060f9effb73..f5d05297d59ee4 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -1058,27 +1058,12 @@ static int apparmor_userns_create(const struct cred *cred)
+ return error;
+ }
+
+-static int apparmor_sk_alloc_security(struct sock *sk, int family, gfp_t flags)
+-{
+- struct aa_sk_ctx *ctx;
+-
+- ctx = kzalloc(sizeof(*ctx), flags);
+- if (!ctx)
+- return -ENOMEM;
+-
+- sk->sk_security = ctx;
+-
+- return 0;
+-}
+-
+ static void apparmor_sk_free_security(struct sock *sk)
+ {
+ struct aa_sk_ctx *ctx = aa_sock(sk);
+
+- sk->sk_security = NULL;
+ aa_put_label(ctx->label);
+ aa_put_label(ctx->peer);
+- kfree(ctx);
+ }
+
+ /**
+@@ -1433,6 +1418,7 @@ struct lsm_blob_sizes apparmor_blob_sizes __ro_after_init = {
+ .lbs_cred = sizeof(struct aa_label *),
+ .lbs_file = sizeof(struct aa_file_ctx),
+ .lbs_task = sizeof(struct aa_task_ctx),
++ .lbs_sock = sizeof(struct aa_sk_ctx),
+ };
+
+ static const struct lsm_id apparmor_lsmid = {
+@@ -1478,7 +1464,6 @@ static struct security_hook_list apparmor_hooks[] __ro_after_init = {
+ LSM_HOOK_INIT(getprocattr, apparmor_getprocattr),
+ LSM_HOOK_INIT(setprocattr, apparmor_setprocattr),
+
+- LSM_HOOK_INIT(sk_alloc_security, apparmor_sk_alloc_security),
+ LSM_HOOK_INIT(sk_free_security, apparmor_sk_free_security),
+ LSM_HOOK_INIT(sk_clone_security, apparmor_sk_clone_security),
+
+diff --git a/security/apparmor/net.c b/security/apparmor/net.c
+index 87e934b2b54887..77413a5191179a 100644
+--- a/security/apparmor/net.c
++++ b/security/apparmor/net.c
+@@ -151,7 +151,7 @@ static int aa_label_sk_perm(const struct cred *subj_cred,
+ const char *op, u32 request,
+ struct sock *sk)
+ {
+- struct aa_sk_ctx *ctx = SK_CTX(sk);
++ struct aa_sk_ctx *ctx = aa_sock(sk);
+ int error = 0;
+
+ AA_BUG(!label);
+diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
+index 57b9ffd53c98ae..3663aec7bcbd21 100644
+--- a/security/bpf/hooks.c
++++ b/security/bpf/hooks.c
+@@ -31,7 +31,6 @@ static int __init bpf_lsm_init(void)
+
+ struct lsm_blob_sizes bpf_lsm_blob_sizes __ro_after_init = {
+ .lbs_inode = sizeof(struct bpf_storage_blob),
+- .lbs_task = sizeof(struct bpf_storage_blob),
+ };
+
+ DEFINE_LSM(bpf) = {
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index c51e24d24d1e9e..3c323ca213d42c 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -223,7 +223,7 @@ static inline void ima_inode_set_iint(const struct inode *inode,
+
+ struct ima_iint_cache *ima_iint_find(struct inode *inode);
+ struct ima_iint_cache *ima_inode_get(struct inode *inode);
+-void ima_inode_free(struct inode *inode);
++void ima_inode_free_rcu(void *inode_security);
+ void __init ima_iintcache_init(void);
+
+ extern const int read_idmap[];
+diff --git a/security/integrity/ima/ima_iint.c b/security/integrity/ima/ima_iint.c
+index e23412a2c56b09..00b249101f98d3 100644
+--- a/security/integrity/ima/ima_iint.c
++++ b/security/integrity/ima/ima_iint.c
+@@ -109,22 +109,18 @@ struct ima_iint_cache *ima_inode_get(struct inode *inode)
+ }
+
+ /**
+- * ima_inode_free - Called on inode free
+- * @inode: Pointer to the inode
++ * ima_inode_free_rcu - Called to free an inode via a RCU callback
++ * @inode_security: The inode->i_security pointer
+ *
+- * Free the iint associated with an inode.
++ * Free the IMA data associated with an inode.
+ */
+-void ima_inode_free(struct inode *inode)
++void ima_inode_free_rcu(void *inode_security)
+ {
+- struct ima_iint_cache *iint;
+-
+- if (!IS_IMA(inode))
+- return;
+-
+- iint = ima_iint_find(inode);
+- ima_inode_set_iint(inode, NULL);
++ struct ima_iint_cache **iint_p = inode_security + ima_blob_sizes.lbs_inode;
+
+- ima_iint_free(iint);
++ /* *iint_p should be NULL if !IS_IMA(inode) */
++ if (*iint_p)
++ ima_iint_free(*iint_p);
+ }
+
+ static void ima_iint_init_once(void *foo)
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index f04f43af651c8e..5b3394864b218e 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -1193,7 +1193,7 @@ static struct security_hook_list ima_hooks[] __ro_after_init = {
+ #ifdef CONFIG_INTEGRITY_ASYMMETRIC_KEYS
+ LSM_HOOK_INIT(kernel_module_request, ima_kernel_module_request),
+ #endif
+- LSM_HOOK_INIT(inode_free_security, ima_inode_free),
++ LSM_HOOK_INIT(inode_free_security_rcu, ima_inode_free_rcu),
+ };
+
+ static const struct lsm_id ima_lsmid = {
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index 7877a64cc6b87c..0804f76a67be2e 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -1207,13 +1207,16 @@ static int current_check_refer_path(struct dentry *const old_dentry,
+
+ /* Inode hooks */
+
+-static void hook_inode_free_security(struct inode *const inode)
++static void hook_inode_free_security_rcu(void *inode_security)
+ {
++ struct landlock_inode_security *inode_sec;
++
+ /*
+ * All inodes must already have been untied from their object by
+ * release_inode() or hook_sb_delete().
+ */
+- WARN_ON_ONCE(landlock_inode(inode)->object);
++ inode_sec = inode_security + landlock_blob_sizes.lbs_inode;
++ WARN_ON_ONCE(inode_sec->object);
+ }
+
+ /* Super-block hooks */
+@@ -1637,7 +1640,7 @@ static int hook_file_ioctl_compat(struct file *file, unsigned int cmd,
+ }
+
+ static struct security_hook_list landlock_hooks[] __ro_after_init = {
+- LSM_HOOK_INIT(inode_free_security, hook_inode_free_security),
++ LSM_HOOK_INIT(inode_free_security_rcu, hook_inode_free_security_rcu),
+
+ LSM_HOOK_INIT(sb_delete, hook_sb_delete),
+ LSM_HOOK_INIT(sb_mount, hook_sb_mount),
+diff --git a/security/security.c b/security/security.c
+index 8cee5b6c6e6d53..43166e341526c0 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -29,6 +29,7 @@
+ #include <linux/msg.h>
+ #include <linux/overflow.h>
+ #include <net/flow.h>
++#include <net/sock.h>
+
+ /* How many LSMs were built into the kernel? */
+ #define LSM_COUNT (__end_lsm_info - __start_lsm_info)
+@@ -227,6 +228,7 @@ static void __init lsm_set_blob_sizes(struct lsm_blob_sizes *needed)
+ lsm_set_blob_size(&needed->lbs_inode, &blob_sizes.lbs_inode);
+ lsm_set_blob_size(&needed->lbs_ipc, &blob_sizes.lbs_ipc);
+ lsm_set_blob_size(&needed->lbs_msg_msg, &blob_sizes.lbs_msg_msg);
++ lsm_set_blob_size(&needed->lbs_sock, &blob_sizes.lbs_sock);
+ lsm_set_blob_size(&needed->lbs_superblock, &blob_sizes.lbs_superblock);
+ lsm_set_blob_size(&needed->lbs_task, &blob_sizes.lbs_task);
+ lsm_set_blob_size(&needed->lbs_xattr_count,
+@@ -401,6 +403,7 @@ static void __init ordered_lsm_init(void)
+ init_debug("inode blob size = %d\n", blob_sizes.lbs_inode);
+ init_debug("ipc blob size = %d\n", blob_sizes.lbs_ipc);
+ init_debug("msg_msg blob size = %d\n", blob_sizes.lbs_msg_msg);
++ init_debug("sock blob size = %d\n", blob_sizes.lbs_sock);
+ init_debug("superblock blob size = %d\n", blob_sizes.lbs_superblock);
+ init_debug("task blob size = %d\n", blob_sizes.lbs_task);
+ init_debug("xattr slots = %d\n", blob_sizes.lbs_xattr_count);
+@@ -1596,9 +1599,8 @@ int security_inode_alloc(struct inode *inode)
+
+ static void inode_free_by_rcu(struct rcu_head *head)
+ {
+- /*
+- * The rcu head is at the start of the inode blob
+- */
++ /* The rcu head is at the start of the inode blob */
++ call_void_hook(inode_free_security_rcu, head);
+ kmem_cache_free(lsm_inode_cache, head);
+ }
+
+@@ -1606,23 +1608,24 @@ static void inode_free_by_rcu(struct rcu_head *head)
+ * security_inode_free() - Free an inode's LSM blob
+ * @inode: the inode
+ *
+- * Deallocate the inode security structure and set @inode->i_security to NULL.
++ * Release any LSM resources associated with @inode, although due to the
++ * inode's RCU protections it is possible that the resources will not be
++ * fully released until after the current RCU grace period has elapsed.
++ *
++ * It is important for LSMs to note that despite being present in a call to
++ * security_inode_free(), @inode may still be referenced in a VFS path walk
++ * and calls to security_inode_permission() may be made during, or after,
++ * a call to security_inode_free(). For this reason the inode->i_security
++ * field is released via a call_rcu() callback and any LSMs which need to
++ * retain inode state for use in security_inode_permission() should only
++ * release that state in the inode_free_security_rcu() LSM hook callback.
+ */
+ void security_inode_free(struct inode *inode)
+ {
+ call_void_hook(inode_free_security, inode);
+- /*
+- * The inode may still be referenced in a path walk and
+- * a call to security_inode_permission() can be made
+- * after inode_free_security() is called. Ideally, the VFS
+- * wouldn't do this, but fixing that is a much harder
+- * job. For now, simply free the i_security via RCU, and
+- * leave the current inode->i_security pointer intact.
+- * The inode will be freed after the RCU grace period too.
+- */
+- if (inode->i_security)
+- call_rcu((struct rcu_head *)inode->i_security,
+- inode_free_by_rcu);
++ if (!inode->i_security)
++ return;
++ call_rcu((struct rcu_head *)inode->i_security, inode_free_by_rcu);
+ }
+
+ /**
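As the rewritten kernel-doc above spells out, security_inode_free() now frees nothing directly: per-LSM state that security_inode_permission() might still dereference is released from the new inode_free_security_rcu() hook inside the call_rcu() callback. A toy sketch of that deferred-release contract, with an explicit queue standing in for the RCU grace period (all names invented):

    #include <stdlib.h>
    #include <stdio.h>

    /* Toy grace-period queue standing in for call_rcu(): callbacks are
     * queued now and run later, once no reader can hold the pointer. */
    struct deferred { void (*fn)(void *); void *arg; struct deferred *next; };
    static struct deferred *gp_queue;

    static void defer_free(void (*fn)(void *), void *arg)
    {
        struct deferred *d = malloc(sizeof(*d));

        d->fn = fn;
        d->arg = arg;
        d->next = gp_queue;
        gp_queue = d;
    }

    static void grace_period_elapsed(void)
    {
        while (gp_queue) {
            struct deferred *d = gp_queue;

            gp_queue = d->next;
            d->fn(d->arg);           /* the inode_free_by_rcu() analogue */
            free(d);
        }
    }

    static void release_blob(void *blob)
    {
        printf("late release of %p\n", blob);
        free(blob);                  /* only now is the blob truly gone */
    }

    int main(void)
    {
        void *i_security = malloc(128);

        /* "security_inode_free()": immediate hooks would run here... */
        defer_free(release_blob, i_security);  /* ...final free is deferred */
        /* readers may still dereference i_security until: */
        grace_period_elapsed();
        return 0;
    }
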
+@@ -4673,6 +4676,28 @@ int security_socket_getpeersec_dgram(struct socket *sock,
+ }
+ EXPORT_SYMBOL(security_socket_getpeersec_dgram);
+
++/**
++ * lsm_sock_alloc - allocate a composite sock blob
++ * @sock: the sock that needs a blob
++ * @priority: allocation mode
++ *
++ * Allocate the sock blob for all the modules
++ *
++ * Returns 0, or -ENOMEM if memory can't be allocated.
++ */
++static int lsm_sock_alloc(struct sock *sock, gfp_t priority)
++{
++ if (blob_sizes.lbs_sock == 0) {
++ sock->sk_security = NULL;
++ return 0;
++ }
++
++ sock->sk_security = kzalloc(blob_sizes.lbs_sock, priority);
++ if (sock->sk_security == NULL)
++ return -ENOMEM;
++ return 0;
++}
++
+ /**
+ * security_sk_alloc() - Allocate and initialize a sock's LSM blob
+ * @sk: sock
+@@ -4686,7 +4711,14 @@ EXPORT_SYMBOL(security_socket_getpeersec_dgram);
+ */
+ int security_sk_alloc(struct sock *sk, int family, gfp_t priority)
+ {
+- return call_int_hook(sk_alloc_security, sk, family, priority);
++ int rc = lsm_sock_alloc(sk, priority);
++
++ if (unlikely(rc))
++ return rc;
++ rc = call_int_hook(sk_alloc_security, sk, family, priority);
++ if (unlikely(rc))
++ security_sk_free(sk);
++ return rc;
+ }
+
+ /**
+@@ -4698,6 +4730,8 @@ int security_sk_alloc(struct sock *sk, int family, gfp_t priority)
+ void security_sk_free(struct sock *sk)
+ {
+ call_void_hook(sk_free_security, sk);
++ kfree(sk->sk_security);
++ sk->sk_security = NULL;
+ }
+
+ /**
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 400eca4ad0fb6c..c11303d662d809 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -4594,7 +4594,7 @@ static int socket_sockcreate_sid(const struct task_security_struct *tsec,
+
+ static int sock_has_perm(struct sock *sk, u32 perms)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct common_audit_data ad;
+ struct lsm_network_audit net;
+
+@@ -4662,7 +4662,7 @@ static int selinux_socket_post_create(struct socket *sock, int family,
+ isec->initialized = LABEL_INITIALIZED;
+
+ if (sock->sk) {
+- sksec = sock->sk->sk_security;
++ sksec = selinux_sock(sock->sk);
+ sksec->sclass = sclass;
+ sksec->sid = sid;
+ /* Allows detection of the first association on this socket */
+@@ -4678,8 +4678,8 @@ static int selinux_socket_post_create(struct socket *sock, int family,
+ static int selinux_socket_socketpair(struct socket *socka,
+ struct socket *sockb)
+ {
+- struct sk_security_struct *sksec_a = socka->sk->sk_security;
+- struct sk_security_struct *sksec_b = sockb->sk->sk_security;
++ struct sk_security_struct *sksec_a = selinux_sock(socka->sk);
++ struct sk_security_struct *sksec_b = selinux_sock(sockb->sk);
+
+ sksec_a->peer_sid = sksec_b->sid;
+ sksec_b->peer_sid = sksec_a->sid;
+@@ -4694,7 +4694,7 @@ static int selinux_socket_socketpair(struct socket *socka,
+ static int selinux_socket_bind(struct socket *sock, struct sockaddr *address, int addrlen)
+ {
+ struct sock *sk = sock->sk;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ u16 family;
+ int err;
+
+@@ -4834,7 +4834,7 @@ static int selinux_socket_connect_helper(struct socket *sock,
+ struct sockaddr *address, int addrlen)
+ {
+ struct sock *sk = sock->sk;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ int err;
+
+ err = sock_has_perm(sk, SOCKET__CONNECT);
+@@ -5012,9 +5012,9 @@ static int selinux_socket_unix_stream_connect(struct sock *sock,
+ struct sock *other,
+ struct sock *newsk)
+ {
+- struct sk_security_struct *sksec_sock = sock->sk_security;
+- struct sk_security_struct *sksec_other = other->sk_security;
+- struct sk_security_struct *sksec_new = newsk->sk_security;
++ struct sk_security_struct *sksec_sock = selinux_sock(sock);
++ struct sk_security_struct *sksec_other = selinux_sock(other);
++ struct sk_security_struct *sksec_new = selinux_sock(newsk);
+ struct common_audit_data ad;
+ struct lsm_network_audit net;
+ int err;
+@@ -5043,8 +5043,8 @@ static int selinux_socket_unix_stream_connect(struct sock *sock,
+ static int selinux_socket_unix_may_send(struct socket *sock,
+ struct socket *other)
+ {
+- struct sk_security_struct *ssec = sock->sk->sk_security;
+- struct sk_security_struct *osec = other->sk->sk_security;
++ struct sk_security_struct *ssec = selinux_sock(sock->sk);
++ struct sk_security_struct *osec = selinux_sock(other->sk);
+ struct common_audit_data ad;
+ struct lsm_network_audit net;
+
+@@ -5081,7 +5081,7 @@ static int selinux_sock_rcv_skb_compat(struct sock *sk, struct sk_buff *skb,
+ u16 family)
+ {
+ int err = 0;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ u32 sk_sid = sksec->sid;
+ struct common_audit_data ad;
+ struct lsm_network_audit net;
+@@ -5110,7 +5110,7 @@ static int selinux_sock_rcv_skb_compat(struct sock *sk, struct sk_buff *skb,
+ static int selinux_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ {
+ int err, peerlbl_active, secmark_active;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ u16 family = sk->sk_family;
+ u32 sk_sid = sksec->sid;
+ struct common_audit_data ad;
+@@ -5178,7 +5178,7 @@ static int selinux_socket_getpeersec_stream(struct socket *sock,
+ int err = 0;
+ char *scontext = NULL;
+ u32 scontext_len;
+- struct sk_security_struct *sksec = sock->sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sock->sk);
+ u32 peer_sid = SECSID_NULL;
+
+ if (sksec->sclass == SECCLASS_UNIX_STREAM_SOCKET ||
+@@ -5238,34 +5238,27 @@ static int selinux_socket_getpeersec_dgram(struct socket *sock,
+
+ static int selinux_sk_alloc_security(struct sock *sk, int family, gfp_t priority)
+ {
+- struct sk_security_struct *sksec;
+-
+- sksec = kzalloc(sizeof(*sksec), priority);
+- if (!sksec)
+- return -ENOMEM;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ sksec->peer_sid = SECINITSID_UNLABELED;
+ sksec->sid = SECINITSID_UNLABELED;
+ sksec->sclass = SECCLASS_SOCKET;
+ selinux_netlbl_sk_security_reset(sksec);
+- sk->sk_security = sksec;
+
+ return 0;
+ }
+
+ static void selinux_sk_free_security(struct sock *sk)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+- sk->sk_security = NULL;
+ selinux_netlbl_sk_security_free(sksec);
+- kfree(sksec);
+ }
+
+ static void selinux_sk_clone_security(const struct sock *sk, struct sock *newsk)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
+- struct sk_security_struct *newsksec = newsk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
++ struct sk_security_struct *newsksec = selinux_sock(newsk);
+
+ newsksec->sid = sksec->sid;
+ newsksec->peer_sid = sksec->peer_sid;
+@@ -5279,7 +5272,7 @@ static void selinux_sk_getsecid(const struct sock *sk, u32 *secid)
+ if (!sk)
+ *secid = SECINITSID_ANY_SOCKET;
+ else {
+- const struct sk_security_struct *sksec = sk->sk_security;
++ const struct sk_security_struct *sksec = selinux_sock(sk);
+
+ *secid = sksec->sid;
+ }
+@@ -5289,7 +5282,7 @@ static void selinux_sock_graft(struct sock *sk, struct socket *parent)
+ {
+ struct inode_security_struct *isec =
+ inode_security_novalidate(SOCK_INODE(parent));
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ if (sk->sk_family == PF_INET || sk->sk_family == PF_INET6 ||
+ sk->sk_family == PF_UNIX)
+@@ -5306,7 +5299,7 @@ static int selinux_sctp_process_new_assoc(struct sctp_association *asoc,
+ {
+ struct sock *sk = asoc->base.sk;
+ u16 family = sk->sk_family;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct common_audit_data ad;
+ struct lsm_network_audit net;
+ int err;
+@@ -5361,7 +5354,7 @@ static int selinux_sctp_process_new_assoc(struct sctp_association *asoc,
+ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ struct sk_buff *skb)
+ {
+- struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(asoc->base.sk);
+ u32 conn_sid;
+ int err;
+
+@@ -5394,7 +5387,7 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ static int selinux_sctp_assoc_established(struct sctp_association *asoc,
+ struct sk_buff *skb)
+ {
+- struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(asoc->base.sk);
+
+ if (!selinux_policycap_extsockclass())
+ return 0;
+@@ -5493,8 +5486,8 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ static void selinux_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
+ struct sock *newsk)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
+- struct sk_security_struct *newsksec = newsk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
++ struct sk_security_struct *newsksec = selinux_sock(newsk);
+
+ /* If policy does not support SECCLASS_SCTP_SOCKET then call
+ * the non-sctp clone version.
+@@ -5510,8 +5503,8 @@ static void selinux_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk
+
+ static int selinux_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
+ {
+- struct sk_security_struct *ssksec = ssk->sk_security;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *ssksec = selinux_sock(ssk);
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ ssksec->sclass = sksec->sclass;
+ ssksec->sid = sksec->sid;
+@@ -5526,7 +5519,7 @@ static int selinux_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
+ static int selinux_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ struct request_sock *req)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ int err;
+ u16 family = req->rsk_ops->family;
+ u32 connsid;
+@@ -5547,7 +5540,7 @@ static int selinux_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ static void selinux_inet_csk_clone(struct sock *newsk,
+ const struct request_sock *req)
+ {
+- struct sk_security_struct *newsksec = newsk->sk_security;
++ struct sk_security_struct *newsksec = selinux_sock(newsk);
+
+ newsksec->sid = req->secid;
+ newsksec->peer_sid = req->peer_secid;
+@@ -5564,7 +5557,7 @@ static void selinux_inet_csk_clone(struct sock *newsk,
+ static void selinux_inet_conn_established(struct sock *sk, struct sk_buff *skb)
+ {
+ u16 family = sk->sk_family;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ /* handle mapped IPv4 packets arriving via IPv6 sockets */
+ if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
+@@ -5639,7 +5632,7 @@ static int selinux_tun_dev_attach_queue(void *security)
+ static int selinux_tun_dev_attach(struct sock *sk, void *security)
+ {
+ struct tun_security_struct *tunsec = security;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ /* we don't currently perform any NetLabel based labeling here and it
+ * isn't clear that we would want to do so anyway; while we could apply
+@@ -5762,7 +5755,7 @@ static unsigned int selinux_ip_output(void *priv, struct sk_buff *skb,
+ return NF_ACCEPT;
+
+ /* standard practice, label using the parent socket */
+- sksec = sk->sk_security;
++ sksec = selinux_sock(sk);
+ sid = sksec->sid;
+ } else
+ sid = SECINITSID_KERNEL;
+@@ -5785,7 +5778,7 @@ static unsigned int selinux_ip_postroute_compat(struct sk_buff *skb,
+ sk = skb_to_full_sk(skb);
+ if (sk == NULL)
+ return NF_ACCEPT;
+- sksec = sk->sk_security;
++ sksec = selinux_sock(sk);
+
+ ad_net_init_from_iif(&ad, &net, state->out->ifindex, state->pf);
+ if (selinux_parse_skb(skb, &ad, NULL, 0, &proto))
+@@ -5874,7 +5867,7 @@ static unsigned int selinux_ip_postroute(void *priv,
+ u32 skb_sid;
+ struct sk_security_struct *sksec;
+
+- sksec = sk->sk_security;
++ sksec = selinux_sock(sk);
+ if (selinux_skb_peerlbl_sid(skb, family, &skb_sid))
+ return NF_DROP;
+ /* At this point, if the returned skb peerlbl is SECSID_NULL
+@@ -5903,7 +5896,7 @@ static unsigned int selinux_ip_postroute(void *priv,
+ } else {
+ /* Locally generated packet, fetch the security label from the
+ * associated socket. */
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ peer_sid = sksec->sid;
+ secmark_perm = PACKET__SEND;
+ }
+@@ -5946,7 +5939,7 @@ static int selinux_netlink_send(struct sock *sk, struct sk_buff *skb)
+ unsigned int data_len = skb->len;
+ unsigned char *data = skb->data;
+ struct nlmsghdr *nlh;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ u16 sclass = sksec->sclass;
+ u32 perm;
+
+@@ -7004,6 +6997,7 @@ struct lsm_blob_sizes selinux_blob_sizes __ro_after_init = {
+ .lbs_inode = sizeof(struct inode_security_struct),
+ .lbs_ipc = sizeof(struct ipc_security_struct),
+ .lbs_msg_msg = sizeof(struct msg_security_struct),
++ .lbs_sock = sizeof(struct sk_security_struct),
+ .lbs_superblock = sizeof(struct superblock_security_struct),
+ .lbs_xattr_count = SELINUX_INODE_INIT_XATTRS,
+ };
+diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
+index dea1d6f3ed2d3b..b074099acbaf79 100644
+--- a/security/selinux/include/objsec.h
++++ b/security/selinux/include/objsec.h
+@@ -195,4 +195,9 @@ selinux_superblock(const struct super_block *superblock)
+ return superblock->s_security + selinux_blob_sizes.lbs_superblock;
+ }
+
++static inline struct sk_security_struct *selinux_sock(const struct sock *sock)
++{
++ return sock->sk_security + selinux_blob_sizes.lbs_sock;
++}
++
+ #endif /* _SELINUX_OBJSEC_H_ */
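selinux_sock() above, like the smack_sock() and aa_sock() helpers elsewhere in this patch, follows the LSM blob convention: the framework allocates a single sk_security buffer sized for every active module (lsm_sock_alloc() in the security.c hunk), and each module locates its slice at a fixed byte offset. A compact sketch of that offset arithmetic with two invented modules:

    #include <stdlib.h>
    #include <stdio.h>

    struct mod_a_ctx { int label; };
    struct mod_b_ctx { int sid, peer_sid; };

    /* Offsets assigned at init time, analogous to lsm_set_blob_size(). */
    static size_t mod_a_off;                 /* = 0 */
    static size_t mod_b_off;                 /* = sizeof(struct mod_a_ctx) */
    static size_t blob_size;

    static void *blob_of(void *security, size_t off)
    {
        return (char *)security + off;       /* the selinux_sock() pattern */
    }

    int main(void)
    {
        mod_a_off = 0;
        mod_b_off = sizeof(struct mod_a_ctx);
        blob_size = mod_b_off + sizeof(struct mod_b_ctx);

        void *sk_security = calloc(1, blob_size);  /* lsm_sock_alloc() analogue */
        if (!sk_security)
            return 1;

        struct mod_a_ctx *a = blob_of(sk_security, mod_a_off);
        struct mod_b_ctx *b = blob_of(sk_security, mod_b_off);

        a->label = 7;
        b->sid = 42;
        printf("a.label=%d b.sid=%d\n", a->label, b->sid);

        free(sk_security);                   /* one free, as in security_sk_free() */
        return 0;
    }
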
+diff --git a/security/selinux/netlabel.c b/security/selinux/netlabel.c
+index 55885634e8804a..fbe5f8c29f8137 100644
+--- a/security/selinux/netlabel.c
++++ b/security/selinux/netlabel.c
+@@ -17,6 +17,7 @@
+ #include <linux/gfp.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
++#include <linux/lsm_hooks.h>
+ #include <net/sock.h>
+ #include <net/netlabel.h>
+ #include <net/ip.h>
+@@ -68,7 +69,7 @@ static int selinux_netlbl_sidlookup_cached(struct sk_buff *skb,
+ static struct netlbl_lsm_secattr *selinux_netlbl_sock_genattr(struct sock *sk)
+ {
+ int rc;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct netlbl_lsm_secattr *secattr;
+
+ if (sksec->nlbl_secattr != NULL)
+@@ -100,7 +101,7 @@ static struct netlbl_lsm_secattr *selinux_netlbl_sock_getattr(
+ const struct sock *sk,
+ u32 sid)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct netlbl_lsm_secattr *secattr = sksec->nlbl_secattr;
+
+ if (secattr == NULL)
+@@ -240,7 +241,7 @@ int selinux_netlbl_skbuff_setsid(struct sk_buff *skb,
+ * being labeled by it's parent socket, if it is just exit */
+ sk = skb_to_full_sk(skb);
+ if (sk != NULL) {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ if (sksec->nlbl_state != NLBL_REQSKB)
+ return 0;
+@@ -277,7 +278,7 @@ int selinux_netlbl_sctp_assoc_request(struct sctp_association *asoc,
+ {
+ int rc;
+ struct netlbl_lsm_secattr secattr;
+- struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(asoc->base.sk);
+ struct sockaddr_in addr4;
+ struct sockaddr_in6 addr6;
+
+@@ -356,7 +357,7 @@ int selinux_netlbl_inet_conn_request(struct request_sock *req, u16 family)
+ */
+ void selinux_netlbl_inet_csk_clone(struct sock *sk, u16 family)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ if (family == PF_INET)
+ sksec->nlbl_state = NLBL_LABELED;
+@@ -374,8 +375,8 @@ void selinux_netlbl_inet_csk_clone(struct sock *sk, u16 family)
+ */
+ void selinux_netlbl_sctp_sk_clone(struct sock *sk, struct sock *newsk)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
+- struct sk_security_struct *newsksec = newsk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
++ struct sk_security_struct *newsksec = selinux_sock(newsk);
+
+ newsksec->nlbl_state = sksec->nlbl_state;
+ }
+@@ -393,7 +394,7 @@ void selinux_netlbl_sctp_sk_clone(struct sock *sk, struct sock *newsk)
+ int selinux_netlbl_socket_post_create(struct sock *sk, u16 family)
+ {
+ int rc;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct netlbl_lsm_secattr *secattr;
+
+ if (family != PF_INET && family != PF_INET6)
+@@ -510,7 +511,7 @@ int selinux_netlbl_socket_setsockopt(struct socket *sock,
+ {
+ int rc = 0;
+ struct sock *sk = sock->sk;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct netlbl_lsm_secattr secattr;
+
+ if (selinux_netlbl_option(level, optname) &&
+@@ -548,7 +549,7 @@ static int selinux_netlbl_socket_connect_helper(struct sock *sk,
+ struct sockaddr *addr)
+ {
+ int rc;
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+ struct netlbl_lsm_secattr *secattr;
+
+ /* connected sockets are allowed to disconnect when the address family
+@@ -587,7 +588,7 @@ static int selinux_netlbl_socket_connect_helper(struct sock *sk,
+ int selinux_netlbl_socket_connect_locked(struct sock *sk,
+ struct sockaddr *addr)
+ {
+- struct sk_security_struct *sksec = sk->sk_security;
++ struct sk_security_struct *sksec = selinux_sock(sk);
+
+ if (sksec->nlbl_state != NLBL_REQSKB &&
+ sksec->nlbl_state != NLBL_CONNLABELED)
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index 041688e5a77a3e..297f21446f4568 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -355,6 +355,11 @@ static inline struct superblock_smack *smack_superblock(
+ return superblock->s_security + smack_blob_sizes.lbs_superblock;
+ }
+
++static inline struct socket_smack *smack_sock(const struct sock *sock)
++{
++ return sock->sk_security + smack_blob_sizes.lbs_sock;
++}
++
+ /*
+ * Is the directory transmuting?
+ */
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 002a1b9ed83a56..6ec9a40f3ec592 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -1606,7 +1606,7 @@ static int smack_inode_getsecurity(struct mnt_idmap *idmap,
+ if (sock == NULL || sock->sk == NULL)
+ return -EOPNOTSUPP;
+
+- ssp = sock->sk->sk_security;
++ ssp = smack_sock(sock->sk);
+
+ if (strcmp(name, XATTR_SMACK_IPIN) == 0)
+ isp = ssp->smk_in;
+@@ -1994,7 +1994,7 @@ static int smack_file_receive(struct file *file)
+
+ if (inode->i_sb->s_magic == SOCKFS_MAGIC) {
+ sock = SOCKET_I(inode);
+- ssp = sock->sk->sk_security;
++ ssp = smack_sock(sock->sk);
+ tsp = smack_cred(current_cred());
+ /*
+ * If the receiving process can't write to the
+@@ -2409,11 +2409,7 @@ static void smack_task_to_inode(struct task_struct *p, struct inode *inode)
+ static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
+ {
+ struct smack_known *skp = smk_of_current();
+- struct socket_smack *ssp;
+-
+- ssp = kzalloc(sizeof(struct socket_smack), gfp_flags);
+- if (ssp == NULL)
+- return -ENOMEM;
++ struct socket_smack *ssp = smack_sock(sk);
+
+ /*
+ * Sockets created by kernel threads receive web label.
+@@ -2427,11 +2423,10 @@ static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
+ }
+ ssp->smk_packet = NULL;
+
+- sk->sk_security = ssp;
+-
+ return 0;
+ }
+
++#ifdef SMACK_IPV6_PORT_LABELING
+ /**
+ * smack_sk_free_security - Free a socket blob
+ * @sk: the socket
+@@ -2440,7 +2435,6 @@ static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
+ */
+ static void smack_sk_free_security(struct sock *sk)
+ {
+-#ifdef SMACK_IPV6_PORT_LABELING
+ struct smk_port_label *spp;
+
+ if (sk->sk_family == PF_INET6) {
+@@ -2453,9 +2447,8 @@ static void smack_sk_free_security(struct sock *sk)
+ }
+ rcu_read_unlock();
+ }
+-#endif
+- kfree(sk->sk_security);
+ }
++#endif
+
+ /**
+ * smack_sk_clone_security - Copy security context
+@@ -2466,8 +2459,8 @@ static void smack_sk_free_security(struct sock *sk)
+ */
+ static void smack_sk_clone_security(const struct sock *sk, struct sock *newsk)
+ {
+- struct socket_smack *ssp_old = sk->sk_security;
+- struct socket_smack *ssp_new = newsk->sk_security;
++ struct socket_smack *ssp_old = smack_sock(sk);
++ struct socket_smack *ssp_new = smack_sock(newsk);
+
+ *ssp_new = *ssp_old;
+ }
+@@ -2583,7 +2576,7 @@ static struct smack_known *smack_ipv6host_label(struct sockaddr_in6 *sip)
+ */
+ static int smack_netlbl_add(struct sock *sk)
+ {
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+ struct smack_known *skp = ssp->smk_out;
+ int rc;
+
+@@ -2616,7 +2609,7 @@ static int smack_netlbl_add(struct sock *sk)
+ */
+ static void smack_netlbl_delete(struct sock *sk)
+ {
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+
+ /*
+ * Take the label off the socket if one is set.
+@@ -2648,7 +2641,7 @@ static int smk_ipv4_check(struct sock *sk, struct sockaddr_in *sap)
+ struct smack_known *skp;
+ int rc = 0;
+ struct smack_known *hkp;
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+ struct smk_audit_info ad;
+
+ rcu_read_lock();
+@@ -2721,7 +2714,7 @@ static void smk_ipv6_port_label(struct socket *sock, struct sockaddr *address)
+ {
+ struct sock *sk = sock->sk;
+ struct sockaddr_in6 *addr6;
+- struct socket_smack *ssp = sock->sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sock->sk);
+ struct smk_port_label *spp;
+ unsigned short port = 0;
+
+@@ -2809,7 +2802,7 @@ static int smk_ipv6_port_check(struct sock *sk, struct sockaddr_in6 *address,
+ int act)
+ {
+ struct smk_port_label *spp;
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+ struct smack_known *skp = NULL;
+ unsigned short port;
+ struct smack_known *object;
+@@ -2912,7 +2905,7 @@ static int smack_inode_setsecurity(struct inode *inode, const char *name,
+ if (sock == NULL || sock->sk == NULL)
+ return -EOPNOTSUPP;
+
+- ssp = sock->sk->sk_security;
++ ssp = smack_sock(sock->sk);
+
+ if (strcmp(name, XATTR_SMACK_IPIN) == 0)
+ ssp->smk_in = skp;
+@@ -2960,7 +2953,7 @@ static int smack_socket_post_create(struct socket *sock, int family,
+ * Sockets created by kernel threads receive web label.
+ */
+ if (unlikely(current->flags & PF_KTHREAD)) {
+- ssp = sock->sk->sk_security;
++ ssp = smack_sock(sock->sk);
+ ssp->smk_in = &smack_known_web;
+ ssp->smk_out = &smack_known_web;
+ }
+@@ -2985,8 +2978,8 @@ static int smack_socket_post_create(struct socket *sock, int family,
+ static int smack_socket_socketpair(struct socket *socka,
+ struct socket *sockb)
+ {
+- struct socket_smack *asp = socka->sk->sk_security;
+- struct socket_smack *bsp = sockb->sk->sk_security;
++ struct socket_smack *asp = smack_sock(socka->sk);
++ struct socket_smack *bsp = smack_sock(sockb->sk);
+
+ asp->smk_packet = bsp->smk_out;
+ bsp->smk_packet = asp->smk_out;
+@@ -3049,7 +3042,7 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+ if (__is_defined(SMACK_IPV6_SECMARK_LABELING))
+ rsp = smack_ipv6host_label(sip);
+ if (rsp != NULL) {
+- struct socket_smack *ssp = sock->sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sock->sk);
+
+ rc = smk_ipv6_check(ssp->smk_out, rsp, sip,
+ SMK_CONNECTING);
+@@ -3844,9 +3837,9 @@ static int smack_unix_stream_connect(struct sock *sock,
+ {
+ struct smack_known *skp;
+ struct smack_known *okp;
+- struct socket_smack *ssp = sock->sk_security;
+- struct socket_smack *osp = other->sk_security;
+- struct socket_smack *nsp = newsk->sk_security;
++ struct socket_smack *ssp = smack_sock(sock);
++ struct socket_smack *osp = smack_sock(other);
++ struct socket_smack *nsp = smack_sock(newsk);
+ struct smk_audit_info ad;
+ int rc = 0;
+ #ifdef CONFIG_AUDIT
+@@ -3898,8 +3891,8 @@ static int smack_unix_stream_connect(struct sock *sock,
+ */
+ static int smack_unix_may_send(struct socket *sock, struct socket *other)
+ {
+- struct socket_smack *ssp = sock->sk->sk_security;
+- struct socket_smack *osp = other->sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sock->sk);
++ struct socket_smack *osp = smack_sock(other->sk);
+ struct smk_audit_info ad;
+ int rc;
+
+@@ -3936,7 +3929,7 @@ static int smack_socket_sendmsg(struct socket *sock, struct msghdr *msg,
+ struct sockaddr_in6 *sap = (struct sockaddr_in6 *) msg->msg_name;
+ #endif
+ #ifdef SMACK_IPV6_SECMARK_LABELING
+- struct socket_smack *ssp = sock->sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sock->sk);
+ struct smack_known *rsp;
+ #endif
+ int rc = 0;
+@@ -4148,7 +4141,7 @@ static struct smack_known *smack_from_netlbl(const struct sock *sk, u16 family,
+ netlbl_secattr_init(&secattr);
+
+ if (sk)
+- ssp = sk->sk_security;
++ ssp = smack_sock(sk);
+
+ if (netlbl_skbuff_getattr(skb, family, &secattr) == 0) {
+ skp = smack_from_secattr(&secattr, ssp);
+@@ -4170,7 +4163,7 @@ static struct smack_known *smack_from_netlbl(const struct sock *sk, u16 family,
+ */
+ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ {
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+ struct smack_known *skp = NULL;
+ int rc = 0;
+ struct smk_audit_info ad;
+@@ -4274,7 +4267,7 @@ static int smack_socket_getpeersec_stream(struct socket *sock,
+ u32 slen = 1;
+ int rc = 0;
+
+- ssp = sock->sk->sk_security;
++ ssp = smack_sock(sock->sk);
+ if (ssp->smk_packet != NULL) {
+ rcp = ssp->smk_packet->smk_known;
+ slen = strlen(rcp) + 1;
+@@ -4324,7 +4317,7 @@ static int smack_socket_getpeersec_dgram(struct socket *sock,
+
+ switch (family) {
+ case PF_UNIX:
+- ssp = sock->sk->sk_security;
++ ssp = smack_sock(sock->sk);
+ s = ssp->smk_out->smk_secid;
+ break;
+ case PF_INET:
+@@ -4373,7 +4366,7 @@ static void smack_sock_graft(struct sock *sk, struct socket *parent)
+ (sk->sk_family != PF_INET && sk->sk_family != PF_INET6))
+ return;
+
+- ssp = sk->sk_security;
++ ssp = smack_sock(sk);
+ ssp->smk_in = skp;
+ ssp->smk_out = skp;
+ /* cssp->smk_packet is already set in smack_inet_csk_clone() */
+@@ -4393,7 +4386,7 @@ static int smack_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ {
+ u16 family = sk->sk_family;
+ struct smack_known *skp;
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+ struct sockaddr_in addr;
+ struct iphdr *hdr;
+ struct smack_known *hskp;
+@@ -4479,7 +4472,7 @@ static int smack_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ static void smack_inet_csk_clone(struct sock *sk,
+ const struct request_sock *req)
+ {
+- struct socket_smack *ssp = sk->sk_security;
++ struct socket_smack *ssp = smack_sock(sk);
+ struct smack_known *skp;
+
+ if (req->peer_secid != 0) {
+@@ -5049,6 +5042,7 @@ struct lsm_blob_sizes smack_blob_sizes __ro_after_init = {
+ .lbs_inode = sizeof(struct inode_smack),
+ .lbs_ipc = sizeof(struct smack_known *),
+ .lbs_msg_msg = sizeof(struct smack_known *),
++ .lbs_sock = sizeof(struct socket_smack),
+ .lbs_superblock = sizeof(struct superblock_smack),
+ .lbs_xattr_count = SMACK_INODE_INIT_XATTRS,
+ };
+@@ -5173,7 +5167,9 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ LSM_HOOK_INIT(socket_getpeersec_stream, smack_socket_getpeersec_stream),
+ LSM_HOOK_INIT(socket_getpeersec_dgram, smack_socket_getpeersec_dgram),
+ LSM_HOOK_INIT(sk_alloc_security, smack_sk_alloc_security),
++#ifdef SMACK_IPV6_PORT_LABELING
+ LSM_HOOK_INIT(sk_free_security, smack_sk_free_security),
++#endif
+ LSM_HOOK_INIT(sk_clone_security, smack_sk_clone_security),
+ LSM_HOOK_INIT(sock_graft, smack_sock_graft),
+ LSM_HOOK_INIT(inet_conn_request, smack_inet_conn_request),
+diff --git a/security/smack/smack_netfilter.c b/security/smack/smack_netfilter.c
+index b945c1d3a7431a..bad71b7e648da7 100644
+--- a/security/smack/smack_netfilter.c
++++ b/security/smack/smack_netfilter.c
+@@ -26,8 +26,8 @@ static unsigned int smack_ip_output(void *priv,
+ struct socket_smack *ssp;
+ struct smack_known *skp;
+
+- if (sk && sk->sk_security) {
+- ssp = sk->sk_security;
++ if (sk) {
++ ssp = smack_sock(sk);
+ skp = ssp->smk_out;
+ skb->secmark = skp->smk_secid;
+ }
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index e22aad7604e8ac..5dd1e164f9b13d 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -932,7 +932,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ }
+ if (rc >= 0) {
+ old_cat = skp->smk_netlabel.attr.mls.cat;
+- skp->smk_netlabel.attr.mls.cat = ncats.attr.mls.cat;
++ rcu_assign_pointer(skp->smk_netlabel.attr.mls.cat, ncats.attr.mls.cat);
+ skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl;
+ synchronize_rcu();
+ netlbl_catmap_free(old_cat);
+diff --git a/sound/pci/hda/cs35l41_hda_spi.c b/sound/pci/hda/cs35l41_hda_spi.c
+index b76c0dfd5fefc4..f8c356ad0d340f 100644
+--- a/sound/pci/hda/cs35l41_hda_spi.c
++++ b/sound/pci/hda/cs35l41_hda_spi.c
+@@ -38,6 +38,7 @@ static const struct spi_device_id cs35l41_hda_spi_id[] = {
+ { "cs35l41-hda", 0 },
+ {}
+ };
++MODULE_DEVICE_TABLE(spi, cs35l41_hda_spi_id);
+
+ static const struct acpi_device_id cs35l41_acpi_hda_match[] = {
+ { "CSC3551", 0 },
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index 89d8235537cd3b..f58f434e7110ee 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -818,7 +818,7 @@ static int tas2781_hda_i2c_probe(struct i2c_client *clt)
+ } else
+ return -ENODEV;
+
+- tas_hda->priv->irq_info.irq = clt->irq;
++ tas_hda->priv->irq = clt->irq;
+ ret = tas2781_read_acpi(tas_hda->priv, device_name);
+ if (ret)
+ return dev_err_probe(tas_hda->dev, ret,
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index e3aca9c785a079..aa163ec4086223 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2903,8 +2903,10 @@ int rt5682_register_dai_clks(struct rt5682_priv *rt5682)
+ }
+
+ if (dev->of_node) {
+- devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
++ ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
+ dai_clk_hw);
++ if (ret)
++ return ret;
+ } else {
+ ret = devm_clk_hw_register_clkdev(dev, dai_clk_hw,
+ init.name,
+diff --git a/sound/soc/codecs/rt5682s.c b/sound/soc/codecs/rt5682s.c
+index f50f196d700d72..ce2e88e066f3e5 100644
+--- a/sound/soc/codecs/rt5682s.c
++++ b/sound/soc/codecs/rt5682s.c
+@@ -2828,7 +2828,9 @@ static int rt5682s_register_dai_clks(struct snd_soc_component *component)
+ }
+
+ if (dev->of_node) {
+- devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, dai_clk_hw);
++ ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, dai_clk_hw);
++ if (ret)
++ return ret;
+ } else {
+ ret = devm_clk_hw_register_clkdev(dev, dai_clk_hw,
+ init.name, dev_name(dev));
+diff --git a/sound/soc/codecs/tas2781-comlib.c b/sound/soc/codecs/tas2781-comlib.c
+index 1fbf4560f5cc26..28d8b4d7b98550 100644
+--- a/sound/soc/codecs/tas2781-comlib.c
++++ b/sound/soc/codecs/tas2781-comlib.c
+@@ -14,7 +14,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+@@ -411,8 +410,6 @@ EXPORT_SYMBOL_GPL(tasdevice_dsp_remove);
+
+ void tasdevice_remove(struct tasdevice_priv *tas_priv)
+ {
+- if (gpio_is_valid(tas_priv->irq_info.irq_gpio))
+- gpio_free(tas_priv->irq_info.irq_gpio);
+ mutex_destroy(&tas_priv->codec_lock);
+ }
+ EXPORT_SYMBOL_GPL(tasdevice_remove);
+diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
+index 8f9a3ae7153e94..f3a7605f071043 100644
+--- a/sound/soc/codecs/tas2781-fmwlib.c
++++ b/sound/soc/codecs/tas2781-fmwlib.c
+@@ -13,7 +13,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+diff --git a/sound/soc/codecs/tas2781-i2c.c b/sound/soc/codecs/tas2781-i2c.c
+index cf8bc7ede6c7ce..ea9c6bafa1c3a0 100644
+--- a/sound/soc/codecs/tas2781-i2c.c
++++ b/sound/soc/codecs/tas2781-i2c.c
+@@ -22,7 +22,6 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+@@ -30,6 +29,7 @@
+ #include <sound/soc.h>
+ #include <sound/tas2781.h>
+ #include <sound/tlv.h>
++#include <sound/tas2563-tlv.h>
+ #include <sound/tas2781-tlv.h>
+ #include <asm/unaligned.h>
+
+@@ -757,7 +757,7 @@ static void tasdevice_parse_dt(struct tasdevice_priv *tas_priv)
+ {
+ struct i2c_client *client = (struct i2c_client *)tas_priv->client;
+ unsigned int dev_addrs[TASDEVICE_MAX_CHANNELS];
+- int rc, i, ndev = 0;
++ int i, ndev = 0;
+
+ if (tas_priv->isacpi) {
+ ndev = device_property_read_u32_array(&client->dev,
+@@ -772,7 +772,7 @@ static void tasdevice_parse_dt(struct tasdevice_priv *tas_priv)
+ "ti,audio-slots", dev_addrs, ndev);
+ }
+
+- tas_priv->irq_info.irq_gpio =
++ tas_priv->irq =
+ acpi_dev_gpio_irq_get(ACPI_COMPANION(&client->dev), 0);
+ } else if (IS_ENABLED(CONFIG_OF)) {
+ struct device_node *np = tas_priv->dev->of_node;
+@@ -784,7 +784,7 @@ static void tasdevice_parse_dt(struct tasdevice_priv *tas_priv)
+ dev_addrs[ndev++] = addr;
+ }
+
+- tas_priv->irq_info.irq_gpio = of_irq_get(np, 0);
++ tas_priv->irq = of_irq_get(np, 0);
+ } else {
+ ndev = 1;
+ dev_addrs[0] = client->addr;
+@@ -794,29 +794,12 @@ static void tasdevice_parse_dt(struct tasdevice_priv *tas_priv)
+ tas_priv->tasdevice[i].dev_addr = dev_addrs[i];
+
+ tas_priv->reset = devm_gpiod_get_optional(&client->dev,
+- "reset-gpios", GPIOD_OUT_HIGH);
++ "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(tas_priv->reset))
+ dev_err(tas_priv->dev, "%s Can't get reset GPIO\n",
+ __func__);
+
+ strcpy(tas_priv->dev_name, tasdevice_id[tas_priv->chip_id].name);
+-
+- if (gpio_is_valid(tas_priv->irq_info.irq_gpio)) {
+- rc = gpio_request(tas_priv->irq_info.irq_gpio,
+- "AUDEV-IRQ");
+- if (!rc) {
+- gpio_direction_input(
+- tas_priv->irq_info.irq_gpio);
+-
+- tas_priv->irq_info.irq =
+- gpio_to_irq(tas_priv->irq_info.irq_gpio);
+- } else
+- dev_err(tas_priv->dev, "%s: GPIO %d request error\n",
+- __func__, tas_priv->irq_info.irq_gpio);
+- } else
+- dev_err(tas_priv->dev,
+- "Looking up irq-gpio property failed %d\n",
+- tas_priv->irq_info.irq_gpio);
+ }
+
+ static int tasdevice_i2c_probe(struct i2c_client *i2c)
+diff --git a/sound/soc/loongson/loongson_card.c b/sound/soc/loongson/loongson_card.c
+index fae5e9312bf08c..2c8dbdba27c5f8 100644
+--- a/sound/soc/loongson/loongson_card.c
++++ b/sound/soc/loongson/loongson_card.c
+@@ -127,8 +127,8 @@ static int loongson_card_parse_of(struct loongson_card_data *data)
+ codec = of_get_child_by_name(dev->of_node, "codec");
+ if (!codec) {
+ dev_err(dev, "audio-codec property missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ of_node_put(cpu);
++ return -EINVAL;
+ }
+
+ for (i = 0; i < card->num_links; i++) {
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index 6789c7a4d5ca1d..3b57ba095ab61d 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -561,9 +561,10 @@ static const char *btf_type_sort_name(const struct btf *btf, __u32 index, bool f
+ case BTF_KIND_ENUM64: {
+ int name_off = t->name_off;
+
+- /* Use name of the first element for anonymous enums if allowed */
+- if (!from_ref && !t->name_off && btf_vlen(t))
+- name_off = btf_enum(t)->name_off;
++ if (!from_ref && !name_off && btf_vlen(t))
++ name_off = btf_kind(t) == BTF_KIND_ENUM64 ?
++ btf_enum64(t)->name_off :
++ btf_enum(t)->name_off;
+
+ return btf__name_by_offset(btf, name_off);
+ }
+diff --git a/tools/bpf/runqslower/Makefile b/tools/bpf/runqslower/Makefile
+index d8288936c9120f..c4f1f1735af659 100644
+--- a/tools/bpf/runqslower/Makefile
++++ b/tools/bpf/runqslower/Makefile
+@@ -15,6 +15,7 @@ INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../include/uapi)
+ CFLAGS := -g -Wall $(CLANG_CROSS_FLAGS)
+ CFLAGS += $(EXTRA_CFLAGS)
+ LDFLAGS += $(EXTRA_LDFLAGS)
++LDLIBS += -lelf -lz
+
+ # Try to detect best kernel BTF source
+ KERNEL_REL := $(shell uname -r)
+@@ -51,7 +52,7 @@ clean:
+ libbpf_hdrs: $(BPFOBJ)
+
+ $(OUTPUT)/runqslower: $(OUTPUT)/runqslower.o $(BPFOBJ)
+- $(QUIET_LINK)$(CC) $(CFLAGS) $^ -lelf -lz -o $@
++ $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
+
+ $(OUTPUT)/runqslower.o: runqslower.h $(OUTPUT)/runqslower.skel.h \
+ $(OUTPUT)/runqslower.bpf.o | libbpf_hdrs
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index dd0a18c2ef8fc0..6f4bf386a3b5c4 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -134,10 +134,6 @@
+ #undef main
+ #endif
+
+-#define main main_test_libcapstone
+-# include "test-libcapstone.c"
+-#undef main
+-
+ #define main main_test_lzma
+ # include "test-lzma.c"
+ #undef main
+diff --git a/tools/include/nolibc/string.h b/tools/include/nolibc/string.h
+index f9ab28421e6dcd..9ec9c24f38c092 100644
+--- a/tools/include/nolibc/string.h
++++ b/tools/include/nolibc/string.h
+@@ -7,6 +7,7 @@
+ #ifndef _NOLIBC_STRING_H
+ #define _NOLIBC_STRING_H
+
++#include "arch.h"
+ #include "std.h"
+
+ static void *malloc(size_t len);
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 32c00db3b91b9d..40aae244e35f7a 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -996,6 +996,7 @@ static struct btf *btf_new_empty(struct btf *base_btf)
+ btf->base_btf = base_btf;
+ btf->start_id = btf__type_cnt(base_btf);
+ btf->start_str_off = base_btf->hdr->str_len;
++ btf->swapped_endian = base_btf->swapped_endian;
+ }
+
+ /* +1 for empty string at offset 0 */
+@@ -5394,6 +5395,9 @@ int btf__distill_base(const struct btf *src_btf, struct btf **new_base_btf,
+ new_base = btf__new_empty();
+ if (!new_base)
+ return libbpf_err(-ENOMEM);
++
++ btf__set_endianness(new_base, btf__endianness(src_btf));
++
+ dist.id_map = calloc(n, sizeof(*dist.id_map));
+ if (!dist.id_map) {
+ err = -ENOMEM;
+diff --git a/tools/lib/bpf/btf_relocate.c b/tools/lib/bpf/btf_relocate.c
+index 17f8b32f94a087..4f7399d85eab3d 100644
+--- a/tools/lib/bpf/btf_relocate.c
++++ b/tools/lib/bpf/btf_relocate.c
+@@ -1,4 +1,4 @@
+-// SPDX-License-Identifier: GPL-2.0
++// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+ /* Copyright (c) 2024, Oracle and/or its affiliates. */
+
+ #ifndef _GNU_SOURCE
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index a3be6f8fac09ea..d3a542649e6ba2 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -496,8 +496,6 @@ struct bpf_program {
+ };
+
+ struct bpf_struct_ops {
+- const char *tname;
+- const struct btf_type *type;
+ struct bpf_program **progs;
+ __u32 *kern_func_off;
+ /* e.g. struct tcp_congestion_ops in bpf_prog's btf format */
+@@ -1083,11 +1081,14 @@ static int bpf_object_adjust_struct_ops_autoload(struct bpf_object *obj)
+ continue;
+
+ for (j = 0; j < obj->nr_maps; ++j) {
++ const struct btf_type *type;
++
+ map = &obj->maps[j];
+ if (!bpf_map__is_struct_ops(map))
+ continue;
+
+- vlen = btf_vlen(map->st_ops->type);
++ type = btf__type_by_id(obj->btf, map->st_ops->type_id);
++ vlen = btf_vlen(type);
+ for (k = 0; k < vlen; ++k) {
+ slot_prog = map->st_ops->progs[k];
+ if (prog != slot_prog)
+@@ -1121,8 +1122,8 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
+ int err;
+
+ st_ops = map->st_ops;
+- type = st_ops->type;
+- tname = st_ops->tname;
++ type = btf__type_by_id(btf, st_ops->type_id);
++ tname = btf__name_by_offset(btf, type->name_off);
+ err = find_struct_ops_kern_types(obj, tname, &mod_btf,
+ &kern_type, &kern_type_id,
+ &kern_vtype, &kern_vtype_id,
+@@ -1423,8 +1424,6 @@ static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
+ memcpy(st_ops->data,
+ data->d_buf + vsi->offset,
+ type->size);
+- st_ops->tname = tname;
+- st_ops->type = type;
+ st_ops->type_id = type_id;
+
+ pr_debug("struct_ops init: struct %s(type_id=%u) %s found at offset %u\n",
+@@ -7906,16 +7905,19 @@ static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object
+ }
+
+ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf, size_t obj_buf_sz,
++ const char *obj_name,
+ const struct bpf_object_open_opts *opts)
+ {
+- const char *obj_name, *kconfig, *btf_tmp_path, *token_path;
++ const char *kconfig, *btf_tmp_path, *token_path;
+ struct bpf_object *obj;
+- char tmp_name[64];
+ int err;
+ char *log_buf;
+ size_t log_size;
+ __u32 log_level;
+
++ if (obj_buf && !obj_name)
++ return ERR_PTR(-EINVAL);
++
+ if (elf_version(EV_CURRENT) == EV_NONE) {
+ pr_warn("failed to init libelf for %s\n",
+ path ? : "(mem buf)");
+@@ -7925,16 +7927,12 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
+ if (!OPTS_VALID(opts, bpf_object_open_opts))
+ return ERR_PTR(-EINVAL);
+
+- obj_name = OPTS_GET(opts, object_name, NULL);
++ obj_name = OPTS_GET(opts, object_name, NULL) ?: obj_name;
+ if (obj_buf) {
+- if (!obj_name) {
+- snprintf(tmp_name, sizeof(tmp_name), "%lx-%lx",
+- (unsigned long)obj_buf,
+- (unsigned long)obj_buf_sz);
+- obj_name = tmp_name;
+- }
+ path = obj_name;
+ pr_debug("loading object '%s' from buffer\n", obj_name);
++ } else {
++ pr_debug("loading object from %s\n", path);
+ }
+
+ log_buf = OPTS_GET(opts, kernel_log_buf, NULL);
+@@ -8018,9 +8016,7 @@ bpf_object__open_file(const char *path, const struct bpf_object_open_opts *opts)
+ if (!path)
+ return libbpf_err_ptr(-EINVAL);
+
+- pr_debug("loading %s\n", path);
+-
+- return libbpf_ptr(bpf_object_open(path, NULL, 0, opts));
++ return libbpf_ptr(bpf_object_open(path, NULL, 0, NULL, opts));
+ }
+
+ struct bpf_object *bpf_object__open(const char *path)
+@@ -8032,10 +8028,15 @@ struct bpf_object *
+ bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
+ const struct bpf_object_open_opts *opts)
+ {
++ char tmp_name[64];
++
+ if (!obj_buf || obj_buf_sz == 0)
+ return libbpf_err_ptr(-EINVAL);
+
+- return libbpf_ptr(bpf_object_open(NULL, obj_buf, obj_buf_sz, opts));
++ /* create a (quite useless) default "name" for this memory buffer object */
++ snprintf(tmp_name, sizeof(tmp_name), "%lx-%zx", (unsigned long)obj_buf, obj_buf_sz);
++
++ return libbpf_ptr(bpf_object_open(NULL, obj_buf, obj_buf_sz, tmp_name, opts));
+ }
+
+ static int bpf_object_unload(struct bpf_object *obj)
+@@ -8445,11 +8446,13 @@ static int bpf_object__resolve_externs(struct bpf_object *obj,
+
+ static void bpf_map_prepare_vdata(const struct bpf_map *map)
+ {
++ const struct btf_type *type;
+ struct bpf_struct_ops *st_ops;
+ __u32 i;
+
+ st_ops = map->st_ops;
+- for (i = 0; i < btf_vlen(st_ops->type); i++) {
++ type = btf__type_by_id(map->obj->btf, st_ops->type_id);
++ for (i = 0; i < btf_vlen(type); i++) {
+ struct bpf_program *prog = st_ops->progs[i];
+ void *kern_data;
+ int prog_fd;
+@@ -9712,6 +9715,7 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
+ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
+ Elf64_Shdr *shdr, Elf_Data *data)
+ {
++ const struct btf_type *type;
+ const struct btf_member *member;
+ struct bpf_struct_ops *st_ops;
+ struct bpf_program *prog;
+@@ -9771,13 +9775,14 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
+ }
+ insn_idx = sym->st_value / BPF_INSN_SZ;
+
+- member = find_member_by_offset(st_ops->type, moff * 8);
++ type = btf__type_by_id(btf, st_ops->type_id);
++ member = find_member_by_offset(type, moff * 8);
+ if (!member) {
+ pr_warn("struct_ops reloc %s: cannot find member at moff %u\n",
+ map->name, moff);
+ return -EINVAL;
+ }
+- member_idx = member - btf_members(st_ops->type);
++ member_idx = member - btf_members(type);
+ name = btf__name_by_offset(btf, member->name_off);
+
+ if (!resolve_func_ptr(btf, member->type, NULL)) {
+@@ -13758,29 +13763,13 @@ static int populate_skeleton_progs(const struct bpf_object *obj,
+ int bpf_object__open_skeleton(struct bpf_object_skeleton *s,
+ const struct bpf_object_open_opts *opts)
+ {
+- DECLARE_LIBBPF_OPTS(bpf_object_open_opts, skel_opts,
+- .object_name = s->name,
+- );
+ struct bpf_object *obj;
+ int err;
+
+- /* Attempt to preserve opts->object_name, unless overriden by user
+- * explicitly. Overwriting object name for skeletons is discouraged,
+- * as it breaks global data maps, because they contain object name
+- * prefix as their own map name prefix. When skeleton is generated,
+- * bpftool is making an assumption that this name will stay the same.
+- */
+- if (opts) {
+- memcpy(&skel_opts, opts, sizeof(*opts));
+- if (!opts->object_name)
+- skel_opts.object_name = s->name;
+- }
+-
+- obj = bpf_object__open_mem(s->data, s->data_sz, &skel_opts);
+- err = libbpf_get_error(obj);
+- if (err) {
+- pr_warn("failed to initialize skeleton BPF object '%s': %d\n",
+- s->name, err);
++ obj = bpf_object_open(NULL, s->data, s->data_sz, s->name, opts);
++ if (IS_ERR(obj)) {
++ err = PTR_ERR(obj);
++ pr_warn("failed to initialize skeleton BPF object '%s': %d\n", s->name, err);
+ return libbpf_err(err);
+ }
+
+diff --git a/tools/objtool/arch/loongarch/decode.c b/tools/objtool/arch/loongarch/decode.c
+index aee479d2191c21..69b66994f2a155 100644
+--- a/tools/objtool/arch/loongarch/decode.c
++++ b/tools/objtool/arch/loongarch/decode.c
+@@ -122,7 +122,7 @@ static bool decode_insn_reg2i12_fomat(union loongarch_instruction inst,
+ switch (inst.reg2i12_format.opcode) {
+ case addid_op:
+ if ((inst.reg2i12_format.rd == CFI_SP) || (inst.reg2i12_format.rj == CFI_SP)) {
+- /* addi.d sp,sp,si12 or addi.d fp,sp,si12 */
++ /* addi.d sp,sp,si12 or addi.d fp,sp,si12 or addi.d sp,fp,si12 */
+ insn->immediate = sign_extend64(inst.reg2i12_format.immediate, 11);
+ ADD_OP(op) {
+ op->src.type = OP_SRC_ADD;
+@@ -132,6 +132,15 @@ static bool decode_insn_reg2i12_fomat(union loongarch_instruction inst,
+ op->dest.reg = inst.reg2i12_format.rd;
+ }
+ }
++ if ((inst.reg2i12_format.rd == CFI_SP) && (inst.reg2i12_format.rj == CFI_FP)) {
++ /* addi.d sp,fp,si12 */
++ struct symbol *func = find_func_containing(insn->sec, insn->offset);
++
++ if (!func)
++ return false;
++
++ func->frame_pointer = true;
++ }
+ break;
+ case ldd_op:
+ if (inst.reg2i12_format.rj == CFI_SP) {
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 01237d16722387..af9cfed7f4ecb5 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2993,10 +2993,27 @@ static int update_cfi_state(struct instruction *insn,
+ break;
+ }
+
+- if (op->dest.reg == CFI_SP && op->src.reg == CFI_BP) {
++ if (op->dest.reg == CFI_BP && op->src.reg == CFI_SP &&
++ insn->sym->frame_pointer) {
++ /* addi.d fp,sp,imm on LoongArch */
++ if (cfa->base == CFI_SP && cfa->offset == op->src.offset) {
++ cfa->base = CFI_BP;
++ cfa->offset = 0;
++ }
++ break;
++ }
+
+- /* lea disp(%rbp), %rsp */
+- cfi->stack_size = -(op->src.offset + regs[CFI_BP].offset);
++ if (op->dest.reg == CFI_SP && op->src.reg == CFI_BP) {
++ /* addi.d sp,fp,imm on LoongArch */
++ if (cfa->base == CFI_BP && cfa->offset == 0) {
++ if (insn->sym->frame_pointer) {
++ cfa->base = CFI_SP;
++ cfa->offset = -op->src.offset;
++ }
++ } else {
++ /* lea disp(%rbp), %rsp */
++ cfi->stack_size = -(op->src.offset + regs[CFI_BP].offset);
++ }
+ break;
+ }
+
+diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
+index 2b8a69de4db871..d7e815c2fd1567 100644
+--- a/tools/objtool/include/objtool/elf.h
++++ b/tools/objtool/include/objtool/elf.h
+@@ -68,6 +68,7 @@ struct symbol {
+ u8 warned : 1;
+ u8 embedded_insn : 1;
+ u8 local_label : 1;
++ u8 frame_pointer : 1;
+ struct list_head pv_target;
+ struct reloc *relocs;
+ };
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index c157bd31f2e5a4..7298f360706220 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -3266,7 +3266,7 @@ static int perf_c2c__record(int argc, const char **argv)
+ return -1;
+ }
+
+- if (perf_pmu__mem_events_init(pmu)) {
++ if (perf_pmu__mem_events_init()) {
+ pr_err("failed: memory events not supported\n");
+ return -1;
+ }
+@@ -3290,19 +3290,15 @@ static int perf_c2c__record(int argc, const char **argv)
+ * PERF_MEM_EVENTS__LOAD_STORE if it is supported.
+ */
+ if (e->tag) {
+- e->record = true;
++ perf_mem_record[PERF_MEM_EVENTS__LOAD_STORE] = true;
+ rec_argv[i++] = "-W";
+ } else {
+- e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+- e->record = true;
+-
+- e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__STORE);
+- e->record = true;
++ perf_mem_record[PERF_MEM_EVENTS__LOAD] = true;
++ perf_mem_record[PERF_MEM_EVENTS__STORE] = true;
+ }
+ }
+
+- e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+- if (e->record)
++ if (perf_mem_record[PERF_MEM_EVENTS__LOAD])
+ rec_argv[i++] = "-W";
+
+ rec_argv[i++] = "-d";
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index a212678d47beb1..c80fb0f60e611f 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -2204,6 +2204,7 @@ int cmd_inject(int argc, const char **argv)
+ .finished_init = perf_event__repipe_op2_synth,
+ .compressed = perf_event__repipe_op4_synth,
+ .auxtrace = perf_event__repipe_auxtrace,
++ .dont_split_sample_group = true,
+ },
+ .input_name = "-",
+ .samples = LIST_HEAD_INIT(inject.samples),
+diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
+index 863fcd735daee9..08724fa508e14c 100644
+--- a/tools/perf/builtin-mem.c
++++ b/tools/perf/builtin-mem.c
+@@ -97,7 +97,7 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
+ return -1;
+ }
+
+- if (perf_pmu__mem_events_init(pmu)) {
++ if (perf_pmu__mem_events_init()) {
+ pr_err("failed: memory events not supported\n");
+ return -1;
+ }
+@@ -126,22 +126,17 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
+ if (e->tag &&
+ (mem->operation & MEM_OPERATION_LOAD) &&
+ (mem->operation & MEM_OPERATION_STORE)) {
+- e->record = true;
++ perf_mem_record[PERF_MEM_EVENTS__LOAD_STORE] = true;
+ rec_argv[i++] = "-W";
+ } else {
+- if (mem->operation & MEM_OPERATION_LOAD) {
+- e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+- e->record = true;
+- }
++ if (mem->operation & MEM_OPERATION_LOAD)
++ perf_mem_record[PERF_MEM_EVENTS__LOAD] = true;
+
+- if (mem->operation & MEM_OPERATION_STORE) {
+- e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__STORE);
+- e->record = true;
+- }
++ if (mem->operation & MEM_OPERATION_STORE)
++ perf_mem_record[PERF_MEM_EVENTS__STORE] = true;
+ }
+
+- e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+- if (e->record)
++ if (perf_mem_record[PERF_MEM_EVENTS__LOAD])
+ rec_argv[i++] = "-W";
+
+ rec_argv[i++] = "-d";
+@@ -372,6 +367,7 @@ static int report_events(int argc, const char **argv, struct perf_mem *mem)
+ rep_argv[i] = argv[j];
+
+ ret = cmd_report(i, rep_argv);
++ free(new_sort_order);
+ free(rep_argv);
+ return ret;
+ }
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 6edc0d4ce6fbed..40826f825c9c20 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -565,6 +565,7 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
+ struct hists *hists = evsel__hists(pos);
+ const char *evname = evsel__name(pos);
+
++ i++;
+ if (symbol_conf.event_group && !evsel__is_group_leader(pos))
+ continue;
+
+@@ -574,7 +575,7 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
+ hists__fprintf_nr_sample_events(hists, rep, evname, stdout);
+
+ if (rep->total_cycles_mode) {
+- report__browse_block_hists(&rep->block_reports[i++].hist,
++ report__browse_block_hists(&rep->block_reports[i - 1].hist,
+ rep->min_percent, pos, NULL);
+ continue;
+ }
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 8750b5f2d49b49..d0c44f6116ed44 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -2683,9 +2683,12 @@ static int timehist_sched_change_event(struct perf_tool *tool,
+ * - previous sched event is out of window - we are done
+ * - sample time is beyond window user cares about - reset it
+ * to close out stats for time window interest
++ * - If tprev is 0, that is, sched_in event for current task is
++ * not recorded, cannot determine whether sched_in event is
++ * within time window interest - ignore it
+ */
+ if (ptime->end) {
+- if (tprev > ptime->end)
++ if (!tprev || tprev > ptime->end)
+ goto out;
+
+ if (t > ptime->end)
+@@ -3121,7 +3124,8 @@ static int perf_sched__timehist(struct perf_sched *sched)
+
+ if (perf_time__parse_str(&sched->ptime, sched->time_str) != 0) {
+ pr_err("Invalid time string\n");
+- return -EINVAL;
++ err = -EINVAL;
++ goto out;
+ }
+
+ if (timehist_check_attr(sched, evlist) != 0)
+diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json
+index c9596e18ec0901..6347eba4881051 100644
+--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json
++++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json
+@@ -4577,7 +4577,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4588,7 +4588,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4599,7 +4599,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -4609,7 +4609,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -4619,7 +4619,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4630,7 +4630,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4651,7 +4651,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4662,7 +4662,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4673,7 +4673,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -4683,7 +4683,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -4693,7 +4693,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4704,7 +4704,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4747,7 +4747,7 @@
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
+ "Experimental": "1",
+- "Filter": "config1=0x49033",
++ "Filter": "config1=0x4903300000000",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM request is used by IIO to request a data write without first reading the data for ownership.",
+ "UMask": "0x24",
+@@ -4759,7 +4759,7 @@
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RDCUR",
+ "Experimental": "1",
+- "Filter": "config1=0x43C33",
++ "Filter": "config1=0x43c3300000000",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO RdCur requests and miss the LLC. A RdCur request is used by IIO to read data without changing state.",
+ "UMask": "0x24",
+@@ -4771,7 +4771,7 @@
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
+ "Experimental": "1",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests a cache line to be cached in E state with the intent to modify.",
+ "UMask": "0x24",
+@@ -4999,7 +4999,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -5010,7 +5010,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -5021,7 +5021,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -5031,7 +5031,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -5041,7 +5041,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -5052,7 +5052,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -5073,7 +5073,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -5084,7 +5084,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -5095,7 +5095,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -5105,7 +5105,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -5115,7 +5115,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -5126,7 +5126,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -5171,7 +5171,7 @@
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+ "Experimental": "1",
+- "Filter": "config1=0x49033",
++ "Filter": "config1=0x4903300000000",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM is used by IIO to request a data write without first reading the data for ownership.",
+ "UMask": "0x24",
+@@ -5183,7 +5183,7 @@
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RDCUR",
+ "Experimental": "1",
+- "Filter": "config1=0x43C33",
++ "Filter": "config1=0x43c3300000000",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RdCur requests that miss the LLC. A RdCur request is used by IIO to read data without changing state.",
+ "UMask": "0x24",
+@@ -5195,7 +5195,7 @@
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "Experimental": "1",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests data to be cached in E state with the intent to modify.",
+ "UMask": "0x24",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json b/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json
+index da46a3aeb58c7a..4fc81862649124 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json
+@@ -4454,7 +4454,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4465,7 +4465,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4476,7 +4476,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -4486,7 +4486,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -4496,7 +4496,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4507,7 +4507,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4528,7 +4528,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4539,7 +4539,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4550,7 +4550,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -4560,7 +4560,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -4570,7 +4570,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4581,7 +4581,7 @@
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4624,7 +4624,7 @@
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
+ "Experimental": "1",
+- "Filter": "config1=0x49033",
++ "Filter": "config1=0x4903300000000",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM request is used by IIO to request a data write without first reading the data for ownership.",
+ "UMask": "0x24",
+@@ -4636,7 +4636,7 @@
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RDCUR",
+ "Experimental": "1",
+- "Filter": "config1=0x43C33",
++ "Filter": "config1=0x43c3300000000",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO RdCur requests and miss the LLC. A RdCur request is used by IIO to read data without changing state.",
+ "UMask": "0x24",
+@@ -4648,7 +4648,7 @@
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
+ "Experimental": "1",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests a cache line to be cached in E state with the intent to modify.",
+ "UMask": "0x24",
+@@ -4865,7 +4865,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4876,7 +4876,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4887,7 +4887,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -4897,7 +4897,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x11",
+ "Unit": "CHA"
+@@ -4907,7 +4907,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4918,7 +4918,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x11",
+@@ -4939,7 +4939,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
+- "Filter": "config1=0x40233",
++ "Filter": "config1=0x4023300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4950,7 +4950,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
+- "Filter": "config1=0x40433",
++ "Filter": "config1=0x4043300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4961,7 +4961,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD",
+- "Filter": "config1=0x4b233",
++ "Filter": "config1=0x4b23300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -4971,7 +4971,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD",
+- "Filter": "config1=0x4b433",
++ "Filter": "config1=0x4b43300000000",
+ "PerPkg": "1",
+ "UMask": "0x21",
+ "Unit": "CHA"
+@@ -4981,7 +4981,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefRFO",
+- "Filter": "config1=0x4b033",
++ "Filter": "config1=0x4b03300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -4992,7 +4992,7 @@
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0x21",
+@@ -5037,7 +5037,7 @@
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+ "Experimental": "1",
+- "Filter": "config1=0x49033",
++ "Filter": "config1=0x4903300000000",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM is used by IIO to request a data write without first reading the data for ownership.",
+ "UMask": "0x24",
+@@ -5049,7 +5049,7 @@
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RDCUR",
+ "Experimental": "1",
+- "Filter": "config1=0x43C33",
++ "Filter": "config1=0x43c3300000000",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RdCur requests that miss the LLC. A RdCur request is used by IIO to read data without changing state.",
+ "UMask": "0x24",
+@@ -5061,7 +5061,7 @@
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "Experimental": "1",
+- "Filter": "config1=0x40033",
++ "Filter": "config1=0x4003300000000",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests data to be cached in E state with the intent to modify.",
+ "UMask": "0x24",
+diff --git a/tools/perf/pmu-events/arch/x86/snowridgex/uncore-cache.json b/tools/perf/pmu-events/arch/x86/snowridgex/uncore-cache.json
+index 7551fb91a9d7d5..a81776deb2e618 100644
+--- a/tools/perf/pmu-events/arch/x86/snowridgex/uncore-cache.json
++++ b/tools/perf/pmu-events/arch/x86/snowridgex/uncore-cache.json
+@@ -1,61 +1,4 @@
+ [
+- {
+- "BriefDescription": "MMIO reads. Derived from unc_cha_tor_inserts.ia_miss",
+- "Counter": "0,1,2,3",
+- "EventCode": "0x35",
+- "EventName": "LLC_MISSES.MMIO_READ",
+- "Filter": "config1=0x40040e33",
+- "PerPkg": "1",
+- "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+- "UMask": "0xc001fe01",
+- "Unit": "CHA"
+- },
+- {
+- "BriefDescription": "MMIO writes. Derived from unc_cha_tor_inserts.ia_miss",
+- "Counter": "0,1,2,3",
+- "EventCode": "0x35",
+- "EventName": "LLC_MISSES.MMIO_WRITE",
+- "Filter": "config1=0x40041e33",
+- "PerPkg": "1",
+- "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+- "UMask": "0xc001fe01",
+- "Unit": "CHA"
+- },
+- {
+- "BriefDescription": "LLC misses - Uncacheable reads (from cpu) . Derived from unc_cha_tor_inserts.ia_miss",
+- "Counter": "0,1,2,3",
+- "EventCode": "0x35",
+- "EventName": "LLC_MISSES.UNCACHEABLE",
+- "Filter": "config1=0x40e33",
+- "PerPkg": "1",
+- "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+- "UMask": "0xc001fe01",
+- "Unit": "CHA"
+- },
+- {
+- "BriefDescription": "Streaming stores (full cache line). Derived from unc_cha_tor_inserts.ia_miss",
+- "Counter": "0,1,2,3",
+- "EventCode": "0x35",
+- "EventName": "LLC_REFERENCES.STREAMING_FULL",
+- "Filter": "config1=0x41833",
+- "PerPkg": "1",
+- "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+- "ScaleUnit": "64Bytes",
+- "UMask": "0xc001fe01",
+- "Unit": "CHA"
+- },
+- {
+- "BriefDescription": "Streaming stores (partial cache line). Derived from unc_cha_tor_inserts.ia_miss",
+- "Counter": "0,1,2,3",
+- "EventCode": "0x35",
+- "EventName": "LLC_REFERENCES.STREAMING_PARTIAL",
+- "Filter": "config1=0x41a33",
+- "PerPkg": "1",
+- "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+- "ScaleUnit": "64Bytes",
+- "UMask": "0xc001fe01",
+- "Unit": "CHA"
+- },
+ {
+ "BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
+diff --git a/tools/perf/scripts/python/arm-cs-trace-disasm.py b/tools/perf/scripts/python/arm-cs-trace-disasm.py
+index d973c2baed1c85..7aff02d84ffb3b 100755
+--- a/tools/perf/scripts/python/arm-cs-trace-disasm.py
++++ b/tools/perf/scripts/python/arm-cs-trace-disasm.py
+@@ -192,17 +192,16 @@ def process_event(param_dict):
+ ip = sample["ip"]
+ addr = sample["addr"]
+
++ if (options.verbose == True):
++ print("Event type: %s" % name)
++ print_sample(sample)
++
+ # Initialize CPU data if it's empty, and directly return back
+ # if this is the first tracing event for this CPU.
+ if (cpu_data.get(str(cpu) + 'addr') == None):
+ cpu_data[str(cpu) + 'addr'] = addr
+ return
+
+-
+- if (options.verbose == True):
+- print("Event type: %s" % name)
+- print_sample(sample)
+-
+ # If cannot find dso so cannot dump assembler, bail out
+ if (dso == '[unknown]'):
+ return
+diff --git a/tools/perf/ui/hist.c b/tools/perf/ui/hist.c
+index 5d1f04f66a5a15..e5491995adf08b 100644
+--- a/tools/perf/ui/hist.c
++++ b/tools/perf/ui/hist.c
+@@ -62,7 +62,7 @@ static int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he,
+ struct evsel *pos;
+ char *buf = hpp->buf;
+ size_t size = hpp->size;
+- int i, nr_members = 1;
++ int i = 0, nr_members = 1;
+ struct hpp_fmt_value *values;
+
+ if (evsel__is_group_event(evsel))
+@@ -72,16 +72,16 @@ static int __hpp__fmt(struct perf_hpp *hpp, struct hist_entry *he,
+ if (values == NULL)
+ return 0;
+
+- i = 0;
+- for_each_group_evsel(pos, evsel)
+- values[i++].hists = evsel__hists(pos);
+-
++ values[0].hists = evsel__hists(evsel);
+ values[0].val = get_field(he);
+ values[0].samples = he->stat.nr_events;
+
+ if (evsel__is_group_event(evsel)) {
+ struct hist_entry *pair;
+
++ for_each_group_member(pos, evsel)
++ values[++i].hists = evsel__hists(pos);
++
+ list_for_each_entry(pair, &he->pairs.head, pairs.node) {
+ for (i = 0; i < nr_members; i++) {
+ if (values[i].hists != pair->hists)
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index 0f18fe81ef0b2a..b24360c04aaea4 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -13,6 +13,7 @@ perf-util-y += copyfile.o
+ perf-util-y += ctype.o
+ perf-util-y += db-export.o
+ perf-util-y += disasm.o
++perf-util-y += disasm_bpf.o
+ perf-util-y += env.o
+ perf-util-y += event.o
+ perf-util-y += evlist.o
+diff --git a/tools/perf/util/annotate-data.c b/tools/perf/util/annotate-data.c
+index 965da6c0b5427e..79c1f2ae7affdc 100644
+--- a/tools/perf/util/annotate-data.c
++++ b/tools/perf/util/annotate-data.c
+@@ -104,7 +104,7 @@ static void pr_debug_location(Dwarf_Die *die, u64 pc, int reg)
+ return;
+
+ while ((off = dwarf_getlocations(&attr, off, &base, &start, &end, &ops, &nops)) > 0) {
+- if (reg != DWARF_REG_PC && end < pc)
++ if (reg != DWARF_REG_PC && end <= pc)
+ continue;
+ if (reg != DWARF_REG_PC && start > pc)
+ break;
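+
The one-character change above (end < pc becoming end <= pc) reflects the fact that DWARF location-list entries cover half-open address ranges: the start address is inclusive but the end address is exclusive, so an entry whose end equals the pc does not cover it and must be skipped. A minimal sketch of the containment test, with a hypothetical helper name not taken from the patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* A location-list entry covers [start, end): start inclusive, end
     * exclusive.  Hence "end <= pc" means the entry ends at or before
     * pc, i.e. keep scanning. */
    static bool loc_entry_covers(uint64_t start, uint64_t end, uint64_t pc)
    {
    	return start <= pc && pc < end;
    }

The same off-by-one is fixed again in dwarf-aux.c further down.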
+diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
+index 36af11faad03c1..de12892f992f8d 100644
+--- a/tools/perf/util/bpf_skel/lock_data.h
++++ b/tools/perf/util/bpf_skel/lock_data.h
+@@ -7,11 +7,11 @@ struct tstamp_data {
+ u64 timestamp;
+ u64 lock;
+ u32 flags;
+- u32 stack_id;
++ s32 stack_id;
+ };
+
+ struct contention_key {
+- u32 stack_id;
++ s32 stack_id;
+ u32 pid;
+ u64 lock_addr_or_cgroup;
+ };
+diff --git a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
+index e9028235d7717b..d818e30c545713 100644
+--- a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
++++ b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
+@@ -15,6 +15,7 @@
+
+ typedef __u8 u8;
+ typedef __u32 u32;
++typedef __s32 s32;
+ typedef __u64 u64;
+ typedef __s64 s64;
+
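+
The switch from u32 to s32 (with a matching s32 typedef added to the minimal vmlinux.h) matters for error handling: stack ids come from the bpf_get_stackid() helper, which returns a negative value on failure, and a negative result stored in an unsigned field can never be caught by a sign test. A plain-C illustration of the pitfall; the -EFAULT value is only an example:

    #include <stdio.h>

    int main(void)
    {
    	long ret = -14;			/* e.g. -EFAULT from bpf_get_stackid() */
    	unsigned int as_u32 = ret;	/* old field type */
    	int as_s32 = ret;		/* new field type */

    	/* with u32, a "stack_id < 0" check is always false */
    	printf("%d %d\n", (int)(as_u32 < 0), (int)(as_s32 < 0)); /* "0 1" */
    	return 0;
    }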
+diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c
+index e10558b79504b0..766cbd005f32a8 100644
+--- a/tools/perf/util/disasm.c
++++ b/tools/perf/util/disasm.c
+@@ -15,6 +15,7 @@
+ #include "build-id.h"
+ #include "debug.h"
+ #include "disasm.h"
++#include "disasm_bpf.h"
+ #include "dso.h"
+ #include "env.h"
+ #include "evsel.h"
+@@ -1164,192 +1165,6 @@ static int dso__disassemble_filename(struct dso *dso, char *filename, size_t fil
+ return 0;
+ }
+
+-#if defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT)
+-#define PACKAGE "perf"
+-#include <bfd.h>
+-#include <dis-asm.h>
+-#include <bpf/bpf.h>
+-#include <bpf/btf.h>
+-#include <bpf/libbpf.h>
+-#include <linux/btf.h>
+-#include <tools/dis-asm-compat.h>
+-
+-#include "bpf-event.h"
+-#include "bpf-utils.h"
+-
+-static int symbol__disassemble_bpf(struct symbol *sym,
+- struct annotate_args *args)
+-{
+- struct annotation *notes = symbol__annotation(sym);
+- struct bpf_prog_linfo *prog_linfo = NULL;
+- struct bpf_prog_info_node *info_node;
+- int len = sym->end - sym->start;
+- disassembler_ftype disassemble;
+- struct map *map = args->ms.map;
+- struct perf_bpil *info_linear;
+- struct disassemble_info info;
+- struct dso *dso = map__dso(map);
+- int pc = 0, count, sub_id;
+- struct btf *btf = NULL;
+- char tpath[PATH_MAX];
+- size_t buf_size;
+- int nr_skip = 0;
+- char *buf;
+- bfd *bfdf;
+- int ret;
+- FILE *s;
+-
+- if (dso__binary_type(dso) != DSO_BINARY_TYPE__BPF_PROG_INFO)
+- return SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE;
+-
+- pr_debug("%s: handling sym %s addr %" PRIx64 " len %" PRIx64 "\n", __func__,
+- sym->name, sym->start, sym->end - sym->start);
+-
+- memset(tpath, 0, sizeof(tpath));
+- perf_exe(tpath, sizeof(tpath));
+-
+- bfdf = bfd_openr(tpath, NULL);
+- if (bfdf == NULL)
+- abort();
+-
+- if (!bfd_check_format(bfdf, bfd_object))
+- abort();
+-
+- s = open_memstream(&buf, &buf_size);
+- if (!s) {
+- ret = errno;
+- goto out;
+- }
+- init_disassemble_info_compat(&info, s,
+- (fprintf_ftype) fprintf,
+- fprintf_styled);
+- info.arch = bfd_get_arch(bfdf);
+- info.mach = bfd_get_mach(bfdf);
+-
+- info_node = perf_env__find_bpf_prog_info(dso__bpf_prog(dso)->env,
+- dso__bpf_prog(dso)->id);
+- if (!info_node) {
+- ret = SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF;
+- goto out;
+- }
+- info_linear = info_node->info_linear;
+- sub_id = dso__bpf_prog(dso)->sub_id;
+-
+- info.buffer = (void *)(uintptr_t)(info_linear->info.jited_prog_insns);
+- info.buffer_length = info_linear->info.jited_prog_len;
+-
+- if (info_linear->info.nr_line_info)
+- prog_linfo = bpf_prog_linfo__new(&info_linear->info);
+-
+- if (info_linear->info.btf_id) {
+- struct btf_node *node;
+-
+- node = perf_env__find_btf(dso__bpf_prog(dso)->env,
+- info_linear->info.btf_id);
+- if (node)
+- btf = btf__new((__u8 *)(node->data),
+- node->data_size);
+- }
+-
+- disassemble_init_for_target(&info);
+-
+-#ifdef DISASM_FOUR_ARGS_SIGNATURE
+- disassemble = disassembler(info.arch,
+- bfd_big_endian(bfdf),
+- info.mach,
+- bfdf);
+-#else
+- disassemble = disassembler(bfdf);
+-#endif
+- if (disassemble == NULL)
+- abort();
+-
+- fflush(s);
+- do {
+- const struct bpf_line_info *linfo = NULL;
+- struct disasm_line *dl;
+- size_t prev_buf_size;
+- const char *srcline;
+- u64 addr;
+-
+- addr = pc + ((u64 *)(uintptr_t)(info_linear->info.jited_ksyms))[sub_id];
+- count = disassemble(pc, &info);
+-
+- if (prog_linfo)
+- linfo = bpf_prog_linfo__lfind_addr_func(prog_linfo,
+- addr, sub_id,
+- nr_skip);
+-
+- if (linfo && btf) {
+- srcline = btf__name_by_offset(btf, linfo->line_off);
+- nr_skip++;
+- } else
+- srcline = NULL;
+-
+- fprintf(s, "\n");
+- prev_buf_size = buf_size;
+- fflush(s);
+-
+- if (!annotate_opts.hide_src_code && srcline) {
+- args->offset = -1;
+- args->line = strdup(srcline);
+- args->line_nr = 0;
+- args->fileloc = NULL;
+- args->ms.sym = sym;
+- dl = disasm_line__new(args);
+- if (dl) {
+- annotation_line__add(&dl->al,
+- ¬es->src->source);
+- }
+- }
+-
+- args->offset = pc;
+- args->line = buf + prev_buf_size;
+- args->line_nr = 0;
+- args->fileloc = NULL;
+- args->ms.sym = sym;
+- dl = disasm_line__new(args);
+- if (dl)
+- annotation_line__add(&dl->al, ¬es->src->source);
+-
+- pc += count;
+- } while (count > 0 && pc < len);
+-
+- ret = 0;
+-out:
+- free(prog_linfo);
+- btf__free(btf);
+- fclose(s);
+- bfd_close(bfdf);
+- return ret;
+-}
+-#else // defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT)
+-static int symbol__disassemble_bpf(struct symbol *sym __maybe_unused,
+- struct annotate_args *args __maybe_unused)
+-{
+- return SYMBOL_ANNOTATE_ERRNO__NO_LIBOPCODES_FOR_BPF;
+-}
+-#endif // defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT)
+-
+-static int
+-symbol__disassemble_bpf_image(struct symbol *sym,
+- struct annotate_args *args)
+-{
+- struct annotation *notes = symbol__annotation(sym);
+- struct disasm_line *dl;
+-
+- args->offset = -1;
+- args->line = strdup("to be implemented");
+- args->line_nr = 0;
+- args->fileloc = NULL;
+- dl = disasm_line__new(args);
+- if (dl)
+- annotation_line__add(&dl->al, ¬es->src->source);
+-
+- zfree(&args->line);
+- return 0;
+-}
+-
+ #ifdef HAVE_LIBCAPSTONE_SUPPORT
+ #include <capstone/capstone.h>
+
+diff --git a/tools/perf/util/disasm_bpf.c b/tools/perf/util/disasm_bpf.c
+new file mode 100644
+index 00000000000000..1fee71c79b624c
+--- /dev/null
++++ b/tools/perf/util/disasm_bpf.c
+@@ -0,0 +1,195 @@
++// SPDX-License-Identifier: GPL-2.0-only
++
++#include "util/annotate.h"
++#include "util/disasm_bpf.h"
++#include "util/symbol.h"
++#include <linux/zalloc.h>
++#include <string.h>
++
++#if defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT)
++#define PACKAGE "perf"
++#include <bfd.h>
++#include <bpf/bpf.h>
++#include <bpf/btf.h>
++#include <bpf/libbpf.h>
++#include <dis-asm.h>
++#include <errno.h>
++#include <linux/btf.h>
++#include <tools/dis-asm-compat.h>
++
++#include "util/bpf-event.h"
++#include "util/bpf-utils.h"
++#include "util/debug.h"
++#include "util/dso.h"
++#include "util/map.h"
++#include "util/env.h"
++#include "util/util.h"
++
++int symbol__disassemble_bpf(struct symbol *sym, struct annotate_args *args)
++{
++ struct annotation *notes = symbol__annotation(sym);
++ struct bpf_prog_linfo *prog_linfo = NULL;
++ struct bpf_prog_info_node *info_node;
++ int len = sym->end - sym->start;
++ disassembler_ftype disassemble;
++ struct map *map = args->ms.map;
++ struct perf_bpil *info_linear;
++ struct disassemble_info info;
++ struct dso *dso = map__dso(map);
++ int pc = 0, count, sub_id;
++ struct btf *btf = NULL;
++ char tpath[PATH_MAX];
++ size_t buf_size;
++ int nr_skip = 0;
++ char *buf;
++ bfd *bfdf;
++ int ret;
++ FILE *s;
++
++ if (dso__binary_type(dso) != DSO_BINARY_TYPE__BPF_PROG_INFO)
++ return SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE;
++
++ pr_debug("%s: handling sym %s addr %" PRIx64 " len %" PRIx64 "\n", __func__,
++ sym->name, sym->start, sym->end - sym->start);
++
++ memset(tpath, 0, sizeof(tpath));
++ perf_exe(tpath, sizeof(tpath));
++
++ bfdf = bfd_openr(tpath, NULL);
++ if (bfdf == NULL)
++ abort();
++
++ if (!bfd_check_format(bfdf, bfd_object))
++ abort();
++
++ s = open_memstream(&buf, &buf_size);
++ if (!s) {
++ ret = errno;
++ goto out;
++ }
++ init_disassemble_info_compat(&info, s,
++ (fprintf_ftype) fprintf,
++ fprintf_styled);
++ info.arch = bfd_get_arch(bfdf);
++ info.mach = bfd_get_mach(bfdf);
++
++ info_node = perf_env__find_bpf_prog_info(dso__bpf_prog(dso)->env,
++ dso__bpf_prog(dso)->id);
++ if (!info_node) {
++ ret = SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF;
++ goto out;
++ }
++ info_linear = info_node->info_linear;
++ sub_id = dso__bpf_prog(dso)->sub_id;
++
++ info.buffer = (void *)(uintptr_t)(info_linear->info.jited_prog_insns);
++ info.buffer_length = info_linear->info.jited_prog_len;
++
++ if (info_linear->info.nr_line_info)
++ prog_linfo = bpf_prog_linfo__new(&info_linear->info);
++
++ if (info_linear->info.btf_id) {
++ struct btf_node *node;
++
++ node = perf_env__find_btf(dso__bpf_prog(dso)->env,
++ info_linear->info.btf_id);
++ if (node)
++ btf = btf__new((__u8 *)(node->data),
++ node->data_size);
++ }
++
++ disassemble_init_for_target(&info);
++
++#ifdef DISASM_FOUR_ARGS_SIGNATURE
++ disassemble = disassembler(info.arch,
++ bfd_big_endian(bfdf),
++ info.mach,
++ bfdf);
++#else
++ disassemble = disassembler(bfdf);
++#endif
++ if (disassemble == NULL)
++ abort();
++
++ fflush(s);
++ do {
++ const struct bpf_line_info *linfo = NULL;
++ struct disasm_line *dl;
++ size_t prev_buf_size;
++ const char *srcline;
++ u64 addr;
++
++ addr = pc + ((u64 *)(uintptr_t)(info_linear->info.jited_ksyms))[sub_id];
++ count = disassemble(pc, &info);
++
++ if (prog_linfo)
++ linfo = bpf_prog_linfo__lfind_addr_func(prog_linfo,
++ addr, sub_id,
++ nr_skip);
++
++ if (linfo && btf) {
++ srcline = btf__name_by_offset(btf, linfo->line_off);
++ nr_skip++;
++ } else
++ srcline = NULL;
++
++ fprintf(s, "\n");
++ prev_buf_size = buf_size;
++ fflush(s);
++
++ if (!annotate_opts.hide_src_code && srcline) {
++ args->offset = -1;
++ args->line = strdup(srcline);
++ args->line_nr = 0;
++ args->fileloc = NULL;
++ args->ms.sym = sym;
++ dl = disasm_line__new(args);
++ if (dl) {
++ annotation_line__add(&dl->al,
++ ¬es->src->source);
++ }
++ }
++
++ args->offset = pc;
++ args->line = buf + prev_buf_size;
++ args->line_nr = 0;
++ args->fileloc = NULL;
++ args->ms.sym = sym;
++ dl = disasm_line__new(args);
++ if (dl)
++ annotation_line__add(&dl->al, ¬es->src->source);
++
++ pc += count;
++ } while (count > 0 && pc < len);
++
++ ret = 0;
++out:
++ free(prog_linfo);
++ btf__free(btf);
++ fclose(s);
++ bfd_close(bfdf);
++ return ret;
++}
++#else // defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT)
++int symbol__disassemble_bpf(struct symbol *sym __maybe_unused, struct annotate_args *args __maybe_unused)
++{
++ return SYMBOL_ANNOTATE_ERRNO__NO_LIBOPCODES_FOR_BPF;
++}
++#endif // defined(HAVE_LIBBFD_SUPPORT) && defined(HAVE_LIBBPF_SUPPORT)
++
++int symbol__disassemble_bpf_image(struct symbol *sym, struct annotate_args *args)
++{
++ struct annotation *notes = symbol__annotation(sym);
++ struct disasm_line *dl;
++
++ args->offset = -1;
++ args->line = strdup("to be implemented");
++ args->line_nr = 0;
++ args->fileloc = NULL;
++ dl = disasm_line__new(args);
++ if (dl)
++ annotation_line__add(&dl->al, ¬es->src->source);
++
++ zfree(&args->line);
++ return 0;
++}
+diff --git a/tools/perf/util/disasm_bpf.h b/tools/perf/util/disasm_bpf.h
+new file mode 100644
+index 00000000000000..2ecb19545388b1
+--- /dev/null
++++ b/tools/perf/util/disasm_bpf.h
+@@ -0,0 +1,12 @@
++// SPDX-License-Identifier: GPL-2.0-only
++
++#ifndef __PERF_DISASM_BPF_H
++#define __PERF_DISASM_BPF_H
++
++struct symbol;
++struct annotate_args;
++
++int symbol__disassemble_bpf(struct symbol *sym, struct annotate_args *args);
++int symbol__disassemble_bpf_image(struct symbol *sym, struct annotate_args *args);
++
++#endif /* __PERF_DISASM_BPF_H */
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index 44ef968a7ad331..1b0e59f4d8e936 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -1444,7 +1444,7 @@ static int __die_find_var_reg_cb(Dwarf_Die *die_mem, void *arg)
+
+ while ((off = dwarf_getlocations(&attr, off, &base, &start, &end, &ops, &nops)) > 0) {
+ /* Assuming the location list is sorted by address */
+- if (end < data->pc)
++ if (end <= data->pc)
+ continue;
+ if (start > data->pc)
+ break;
+@@ -1598,6 +1598,9 @@ static int __die_collect_vars_cb(Dwarf_Die *die_mem, void *arg)
+ if (dwarf_getlocations(&attr, 0, &base, &start, &end, &ops, &nops) <= 0)
+ return DIE_FIND_CB_SIBLING;
+
++ if (!check_allowed_ops(ops, nops))
++ return DIE_FIND_CB_SIBLING;
++
+ if (die_get_real_type(die_mem, &type_die) == NULL)
+ return DIE_FIND_CB_SIBLING;
+
+@@ -1974,8 +1977,15 @@ static int __die_find_member_offset_cb(Dwarf_Die *die_mem, void *arg)
+ return DIE_FIND_CB_SIBLING;
+
+ /* Unions might not have location */
+- if (die_get_data_member_location(die_mem, &loc) < 0)
+- loc = 0;
++ if (die_get_data_member_location(die_mem, &loc) < 0) {
++ Dwarf_Attribute attr;
++
++ if (dwarf_attr_integrate(die_mem, DW_AT_data_bit_offset, &attr) &&
++ dwarf_formudata(&attr, &loc) == 0)
++ loc /= 8;
++ else
++ loc = 0;
++ }
+
+ if (offset == loc)
+ return DIE_FIND_CB_END;
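+
The last dwarf-aux.c hunk covers bitfield members: they may carry DW_AT_data_bit_offset (a bit count from the start of the containing struct) instead of DW_AT_data_member_location, and the new fallback recovers the byte offset as bit offset / 8. An illustrative layout, assuming typical DWARF 5 output:

    struct demo {
    	unsigned int a : 8;	/* DW_AT_data_bit_offset = 0  -> byte 0 */
    	unsigned int b : 8;	/* DW_AT_data_bit_offset = 8  -> byte 1 */
    	unsigned int c : 16;	/* DW_AT_data_bit_offset = 16 -> byte 2 */
    };

The first hunk is the same half-open range fix as in annotate-data.c, and the added check_allowed_ops() call appears to reject location expressions the collector cannot evaluate before any type lookup is attempted.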
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index be048bd02f36cc..051feb93ed8d40 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -29,6 +29,8 @@ struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX] = {
+ };
+ #undef E
+
++bool perf_mem_record[PERF_MEM_EVENTS__MAX] = { 0 };
++
+ static char mem_loads_name[100];
+ static char mem_stores_name[100];
+
+@@ -163,7 +165,7 @@ int perf_pmu__mem_events_parse(struct perf_pmu *pmu, const char *str)
+ continue;
+
+ if (strstr(e->tag, tok))
+- e->record = found = true;
++ perf_mem_record[j] = found = true;
+ }
+
+ tok = strtok_r(NULL, ",", &saveptr);
+@@ -192,7 +194,7 @@ static bool perf_pmu__mem_events_supported(const char *mnt, struct perf_pmu *pmu
+ return !stat(path, &st);
+ }
+
+-int perf_pmu__mem_events_init(struct perf_pmu *pmu)
++static int __perf_pmu__mem_events_init(struct perf_pmu *pmu)
+ {
+ const char *mnt = sysfs__mount();
+ bool found = false;
+@@ -219,6 +221,18 @@ int perf_pmu__mem_events_init(struct perf_pmu *pmu)
+ return found ? 0 : -ENOENT;
+ }
+
++int perf_pmu__mem_events_init(void)
++{
++ struct perf_pmu *pmu = NULL;
++
++ while ((pmu = perf_pmus__scan_mem(pmu)) != NULL) {
++ if (__perf_pmu__mem_events_init(pmu))
++ return -ENOENT;
++ }
++
++ return 0;
++}
++
+ void perf_pmu__mem_events_list(struct perf_pmu *pmu)
+ {
+ int j;
+@@ -249,7 +263,7 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr)
+ for (int j = 0; j < PERF_MEM_EVENTS__MAX; j++) {
+ e = perf_pmu__mem_events_ptr(pmu, j);
+
+- if (!e->record)
++ if (!perf_mem_record[j])
+ continue;
+
+ if (!e->supported) {
+diff --git a/tools/perf/util/mem-events.h b/tools/perf/util/mem-events.h
+index ca31014d7934f4..8dc27db9fd52f4 100644
+--- a/tools/perf/util/mem-events.h
++++ b/tools/perf/util/mem-events.h
+@@ -6,7 +6,6 @@
+ #include <linux/types.h>
+
+ struct perf_mem_event {
+- bool record;
+ bool supported;
+ bool ldlat;
+ u32 aux_event;
+@@ -28,9 +27,10 @@ struct perf_pmu;
+
+ extern unsigned int perf_mem_events__loads_ldlat;
+ extern struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX];
++extern bool perf_mem_record[PERF_MEM_EVENTS__MAX];
+
+ int perf_pmu__mem_events_parse(struct perf_pmu *pmu, const char *str);
+-int perf_pmu__mem_events_init(struct perf_pmu *pmu);
++int perf_pmu__mem_events_init(void);
+
+ struct perf_mem_event *perf_pmu__mem_events_ptr(struct perf_pmu *pmu, int i);
+ struct perf_pmu *perf_mem_events_find_pmu(void);
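+
Net effect of the mem-events change: the per-event record flag leaves struct perf_mem_event, whose entries can be shared between PMUs, and becomes the global perf_mem_record[] array indexed by event type, while initialization now probes every memory PMU instead of a single one. A rough caller-side sketch using only names from the diff:

    if (perf_pmu__mem_events_init())	/* scans all mem PMUs via perf_pmus__scan_mem() */
    	return -ENOENT;

    for (int j = 0; j < PERF_MEM_EVENTS__MAX; j++) {
    	if (!perf_mem_record[j])	/* replaces the old e->record */
    		continue;
    	/* ... add record arguments for event type j ... */
    }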
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 5596bed1b8c83f..080242c691969c 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -1511,6 +1511,9 @@ static int deliver_sample_group(struct evlist *evlist,
+ int ret = -EINVAL;
+ struct sample_read_value *v = sample->read.group.values;
+
++ if (tool->dont_split_sample_group)
++ return deliver_sample_value(evlist, tool, event, sample, v, machine);
++
+ sample_read_group__for_each(v, sample->read.group.nr, read_format) {
+ ret = deliver_sample_value(evlist, tool, event, sample, v,
+ machine);
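+
A tool that wants a sample group delivered once, as the leader's value, rather than once per group member, opts in with the new flag. Hypothetical setup, since no caller appears in this hunk:

    struct perf_tool tool = {
    	.sample                  = process_sample_event, /* hypothetical callback */
    	.dont_split_sample_group = true,
    };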
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index c38bcb6f4c78e3..ea96e4ebad8c83 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -1237,7 +1237,8 @@ static void print_metric_headers(struct perf_stat_config *config,
+
+ /* Print metrics headers only */
+ evlist__for_each_entry(evlist, counter) {
+- if (config->aggr_mode != AGGR_NONE && counter->metric_leader != counter)
++ if (!config->iostat_run &&
++ config->aggr_mode != AGGR_NONE && counter->metric_leader != counter)
+ continue;
+
+ os.evsel = counter;
+diff --git a/tools/perf/util/time-utils.c b/tools/perf/util/time-utils.c
+index 30244392168163..1b91ccd4d52348 100644
+--- a/tools/perf/util/time-utils.c
++++ b/tools/perf/util/time-utils.c
+@@ -20,7 +20,7 @@ int parse_nsec_time(const char *str, u64 *ptime)
+ u64 time_sec, time_nsec;
+ char *end;
+
+- time_sec = strtoul(str, &end, 10);
++ time_sec = strtoull(str, &end, 10);
+ if (*end != '.' && *end != '\0')
+ return -1;
+
+@@ -38,7 +38,7 @@ int parse_nsec_time(const char *str, u64 *ptime)
+ for (i = strlen(nsec_buf); i < 9; i++)
+ nsec_buf[i] = '0';
+
+- time_nsec = strtoul(nsec_buf, &end, 10);
++ time_nsec = strtoull(nsec_buf, &end, 10);
+ if (*end != '\0')
+ return -1;
+ } else
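+
strtoul() returns unsigned long, which is 32 bits on 32-bit builds; inputs past ULONG_MAX saturate to 0xffffffff (with errno set to ERANGE, which this code never checks), so second counts of 2^32 or more parsed wrong before being stored into the u64. strtoull() always covers the full 64-bit range:

    #include <stdlib.h>

    /* on an ILP32 build: */
    char *end;
    unsigned long      a = strtoul("4294967296", &end, 10);  /* clamps to 0xffffffff */
    unsigned long long b = strtoull("4294967296", &end, 10); /* 4294967296, as intended */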
+diff --git a/tools/perf/util/tool.h b/tools/perf/util/tool.h
+index c957fb849ac633..62bbc9cec151bb 100644
+--- a/tools/perf/util/tool.h
++++ b/tools/perf/util/tool.h
+@@ -85,6 +85,7 @@ struct perf_tool {
+ bool namespace_events;
+ bool cgroup_events;
+ bool no_warn;
++ bool dont_split_sample_group;
+ enum show_feature_header show_feat_hdr;
+ };
+
+diff --git a/tools/power/cpupower/lib/powercap.c b/tools/power/cpupower/lib/powercap.c
+index a7a59c6bacda81..94a0c69e55ef5e 100644
+--- a/tools/power/cpupower/lib/powercap.c
++++ b/tools/power/cpupower/lib/powercap.c
+@@ -77,6 +77,14 @@ int powercap_get_enabled(int *mode)
+ return sysfs_get_enabled(path, mode);
+ }
+
++/*
++ * TODO: implement function. Returns dummy 0 for now.
++ */
++int powercap_set_enabled(int mode)
++{
++ return 0;
++}
++
+ /*
+ * Hardcoded, because rapl is the only powercap implementation
+ * this needs to get more generic if more powercap implementations
+diff --git a/tools/testing/selftests/arm64/signal/Makefile b/tools/testing/selftests/arm64/signal/Makefile
+index 8f5febaf1a9a25..edb3613513b8a8 100644
+--- a/tools/testing/selftests/arm64/signal/Makefile
++++ b/tools/testing/selftests/arm64/signal/Makefile
+@@ -23,7 +23,7 @@ $(TEST_GEN_PROGS): $(PROGS)
+ # Common test-unit targets to build common-layout test-cases executables
+ # Needs secondary expansion to properly include the testcase c-file in pre-reqs
+ COMMON_SOURCES := test_signals.c test_signals_utils.c testcases/testcases.c \
+- signals.S
++ signals.S sve_helpers.c
+ COMMON_HEADERS := test_signals.h test_signals_utils.h testcases/testcases.h
+
+ .SECONDEXPANSION:
+diff --git a/tools/testing/selftests/arm64/signal/sve_helpers.c b/tools/testing/selftests/arm64/signal/sve_helpers.c
+new file mode 100644
+index 00000000000000..0acc121af3062a
+--- /dev/null
++++ b/tools/testing/selftests/arm64/signal/sve_helpers.c
+@@ -0,0 +1,56 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (C) 2024 ARM Limited
++ *
++ * Common helper functions for SVE and SME functionality.
++ */
++
++#include <stdbool.h>
++#include <kselftest.h>
++#include <asm/sigcontext.h>
++#include <sys/prctl.h>
++
++unsigned int vls[SVE_VQ_MAX];
++unsigned int nvls;
++
++int sve_fill_vls(bool use_sme, int min_vls)
++{
++ int vq, vl;
++ int pr_set_vl = use_sme ? PR_SME_SET_VL : PR_SVE_SET_VL;
++ int len_mask = use_sme ? PR_SME_VL_LEN_MASK : PR_SVE_VL_LEN_MASK;
++
++ /*
++ * Enumerate up to SVE_VQ_MAX vector lengths
++ */
++ for (vq = SVE_VQ_MAX; vq > 0; --vq) {
++ vl = prctl(pr_set_vl, vq * 16);
++ if (vl == -1)
++ return KSFT_FAIL;
++
++ vl &= len_mask;
++
++ /*
++ * Unlike SVE, SME does not require the minimum vector length
++ * to be implemented, or the VLs to be consecutive, so any call
++ * to the prctl might return the single implemented VL, which
++ * might be larger than 16. So to avoid this loop never
++ * terminating, bail out here when we find a higher VL than
++ * we asked for.
++ * See the ARM ARM, DDI 0487K.a, B1.4.2: I_QQRNR and I_NWYBP.
++ */
++ if (vq < sve_vq_from_vl(vl))
++ break;
++
++ /* Skip missing VLs */
++ vq = sve_vq_from_vl(vl);
++
++ vls[nvls++] = vl;
++ }
++
++ if (nvls < min_vls) {
++ fprintf(stderr, "Only %d VL supported\n", nvls);
++ return KSFT_SKIP;
++ }
++
++ return KSFT_PASS;
++}
+diff --git a/tools/testing/selftests/arm64/signal/sve_helpers.h b/tools/testing/selftests/arm64/signal/sve_helpers.h
+new file mode 100644
+index 00000000000000..50948ce471cc62
+--- /dev/null
++++ b/tools/testing/selftests/arm64/signal/sve_helpers.h
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (C) 2024 ARM Limited
++ *
++ * Common helper functions for SVE and SME functionality.
++ */
++
++#ifndef __SVE_HELPERS_H__
++#define __SVE_HELPERS_H__
++
++#include <stdbool.h>
++
++#define VLS_USE_SVE false
++#define VLS_USE_SME true
++
++extern unsigned int vls[];
++extern unsigned int nvls;
++
++int sve_fill_vls(bool use_sme, int min_vls);
++
++#endif
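+
The testcase conversions below all rely on the same return convention: sve_fill_vls() reports KSFT_PASS, KSFT_SKIP, or KSFT_FAIL, and since kselftest.h defines KSFT_PASS as 0 the callers can use this terse pattern (shown here generically; the minimum VL count varies per test):

    int res = sve_fill_vls(VLS_USE_SME, 1);

    if (!res)			/* KSFT_PASS */
    	return true;
    if (res == KSFT_SKIP)
    	td->result = KSFT_SKIP;
    return false;		/* enumeration failed or was skipped */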
+diff --git a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c
+index ebd5815b54bbaa..dfd6a2badf9fb3 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c
++++ b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c
+@@ -6,44 +6,28 @@
+ * handler, this is not supported and is expected to segfault.
+ */
+
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ struct fake_sigframe sf;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sme_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SME, 2);
+
+- /*
+- * Enumerate up to SVE_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SVE_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
++ if (!res)
++ return true;
+
+- vl &= PR_SME_VL_LEN_MASK;
++ if (res == KSFT_SKIP)
++ td->result = KSFT_SKIP;
+
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
+-
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least two VLs */
+- if (nvls < 2) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
+- return false;
+- }
+-
+- return true;
++ return false;
+ }
+
+ static int fake_sigreturn_ssve_change_vl(struct tdescr *td,
+@@ -51,30 +35,30 @@ static int fake_sigreturn_ssve_change_vl(struct tdescr *td,
+ {
+ size_t resv_sz, offset;
+ struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf);
+- struct sve_context *sve;
++ struct za_context *za;
+
+ /* Get a signal context with a SME ZA frame in it */
+ if (!get_current_context(td, &sf.uc, sizeof(sf.uc)))
+ return 1;
+
+ resv_sz = GET_SF_RESV_SIZE(sf);
+- head = get_header(head, SVE_MAGIC, resv_sz, &offset);
++ head = get_header(head, ZA_MAGIC, resv_sz, &offset);
+ if (!head) {
+- fprintf(stderr, "No SVE context\n");
++ fprintf(stderr, "No ZA context\n");
+ return 1;
+ }
+
+- if (head->size != sizeof(struct sve_context)) {
++ if (head->size != sizeof(struct za_context)) {
+ fprintf(stderr, "Register data present, aborting\n");
+ return 1;
+ }
+
+- sve = (struct sve_context *)head;
++ za = (struct za_context *)head;
+
+ /* No changes are supported; init left us at minimum VL so go to max */
+ fprintf(stderr, "Attempting to change VL from %d to %d\n",
+- sve->vl, vls[0]);
+- sve->vl = vls[0];
++ za->vl, vls[0]);
++ za->vl = vls[0];
+
+ fake_sigreturn(&sf, sizeof(sf), 0);
+
+diff --git a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c
+index e2a452190511ff..e1ccf8f85a70c8 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c
++++ b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c
+@@ -12,40 +12,22 @@
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ struct fake_sigframe sf;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sve_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SVE, 2);
+
+- /*
+- * Enumerate up to SVE_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SVE_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
++ if (!res)
++ return true;
+
+- vl &= PR_SVE_VL_LEN_MASK;
+-
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
+-
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least two VLs */
+- if (nvls < 2) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
++ if (res == KSFT_SKIP)
+ td->result = KSFT_SKIP;
+- return false;
+- }
+
+- return true;
++ return false;
+ }
+
+ static int fake_sigreturn_sve_change_vl(struct tdescr *td,
+diff --git a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
+index 3d37daafcff513..6dbe48cf8b09ed 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
+@@ -6,51 +6,31 @@
+ * set up as expected.
+ */
+
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ static union {
+ ucontext_t uc;
+ char buf[1024 * 64];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sme_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SME, 1);
+
+- /*
+- * Enumerate up to SVE_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SME_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
+-
+- vl &= PR_SME_VL_LEN_MASK;
+-
+- /* Did we find the lowest supported VL? */
+- if (vq < sve_vq_from_vl(vl))
+- break;
++ if (!res)
++ return true;
+
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
+-
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least one VL */
+- if (nvls < 1) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
+- return false;
+- }
++ if (res == KSFT_SKIP)
++ td->result = KSFT_SKIP;
+
+- return true;
++ return false;
+ }
+
+ static void setup_ssve_regs(void)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c b/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c
+index 9dc5f128bbc0d5..5557e116e97363 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c
+@@ -6,51 +6,31 @@
+ * signal frames is set up as expected when enabled simultaneously.
+ */
+
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ static union {
+ ucontext_t uc;
+ char buf[1024 * 128];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sme_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SME, 1);
+
+- /*
+- * Enumerate up to SVE_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SME_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
+-
+- vl &= PR_SME_VL_LEN_MASK;
+-
+- /* Did we find the lowest supported VL? */
+- if (vq < sve_vq_from_vl(vl))
+- break;
++ if (!res)
++ return true;
+
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
+-
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least one VL */
+- if (nvls < 1) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
+- return false;
+- }
++ if (res == KSFT_SKIP)
++ td->result = KSFT_SKIP;
+
+- return true;
++ return false;
+ }
+
+ static void setup_regs(void)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/sve_regs.c b/tools/testing/selftests/arm64/signal/testcases/sve_regs.c
+index 8b16eabbb7697e..8143eb1c58c187 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/sve_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/sve_regs.c
+@@ -6,47 +6,31 @@
+ * expected.
+ */
+
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ static union {
+ ucontext_t uc;
+ char buf[1024 * 64];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sve_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SVE, 1);
+
+- /*
+- * Enumerate up to SVE_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SVE_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
+-
+- vl &= PR_SVE_VL_LEN_MASK;
+-
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
++ if (!res)
++ return true;
+
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least one VL */
+- if (nvls < 1) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
+- return false;
+- }
++ if (res == KSFT_SKIP)
++ td->result = KSFT_SKIP;
+
+- return true;
++ return false;
+ }
+
+ static void setup_sve_regs(void)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c b/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c
+index 4d6f94b6178f36..ce26e9c2fa5e34 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c
+@@ -6,47 +6,31 @@
+ * expected.
+ */
+
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ static union {
+ ucontext_t uc;
+ char buf[1024 * 128];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sme_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SME, 1);
+
+- /*
+- * Enumerate up to SME_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SME_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
+-
+- vl &= PR_SME_VL_LEN_MASK;
+-
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
++ if (!res)
++ return true;
+
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least one VL */
+- if (nvls < 1) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
+- return false;
+- }
++ if (res == KSFT_SKIP)
++ td->result = KSFT_SKIP;
+
+- return true;
++ return false;
+ }
+
+ static int do_one_sme_vl(struct tdescr *td, siginfo_t *si, ucontext_t *uc,
+diff --git a/tools/testing/selftests/arm64/signal/testcases/za_regs.c b/tools/testing/selftests/arm64/signal/testcases/za_regs.c
+index 174ad665669647..b9e13f27f1f9aa 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/za_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/za_regs.c
+@@ -6,51 +6,31 @@
+ * expected.
+ */
+
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+
+ static union {
+ ucontext_t uc;
+ char buf[1024 * 128];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+
+ static bool sme_get_vls(struct tdescr *td)
+ {
+- int vq, vl;
++ int res = sve_fill_vls(VLS_USE_SME, 1);
+
+- /*
+- * Enumerate up to SME_VQ_MAX vector lengths
+- */
+- for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+- vl = prctl(PR_SME_SET_VL, vq * 16);
+- if (vl == -1)
+- return false;
+-
+- vl &= PR_SME_VL_LEN_MASK;
+-
+- /* Did we find the lowest supported VL? */
+- if (vq < sve_vq_from_vl(vl))
+- break;
++ if (!res)
++ return true;
+
+- /* Skip missing VLs */
+- vq = sve_vq_from_vl(vl);
+-
+- vls[nvls++] = vl;
+- }
+-
+- /* We need at least one VL */
+- if (nvls < 1) {
+- fprintf(stderr, "Only %d VL supported\n", nvls);
+- return false;
+- }
++ if (res == KSFT_SKIP)
++ td->result = KSFT_SKIP;
+
+- return true;
++ return false;
+ }
+
+ static void setup_za_regs(void)
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 81d4757ecd4c65..848fffa250227f 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -427,23 +427,24 @@ $(OUTPUT)/cgroup_getset_retval_hooks.o: cgroup_getset_retval_hooks.h
+ # $1 - input .c file
+ # $2 - output .o file
+ # $3 - CFLAGS
++# $4 - binary name
+ define CLANG_BPF_BUILD_RULE
+- $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
++ $(call msg,CLNG-BPF,$4,$2)
+ $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v3 -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
+ define CLANG_NOALU32_BPF_BUILD_RULE
+- $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
++ $(call msg,CLNG-BPF,$4,$2)
+ $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v2 -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but with cpu-v4
+ define CLANG_CPUV4_BPF_BUILD_RULE
+- $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
++ $(call msg,CLNG-BPF,$4,$2)
+ $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v4 -o $2
+ endef
+ # Build BPF object using GCC
+ define GCC_BPF_BUILD_RULE
+- $(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
++ $(call msg,GCC-BPF,$4,$2)
+ $(Q)$(BPF_GCC) $3 -DBPF_NO_PRESERVE_ACCESS_INDEX -Wno-attributes -O2 -c $1 -o $2
+ endef
+
+@@ -535,7 +536,7 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.bpf.o: \
+ $$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@, \
+ $(TRUNNER_BPF_CFLAGS) \
+ $$($$<-CFLAGS) \
+- $$($$<-$2-CFLAGS))
++ $$($$<-$2-CFLAGS),$(TRUNNER_BINARY))
+
+ $(TRUNNER_BPF_SKELS): %.skel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ $$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+@@ -762,6 +763,8 @@ $(OUTPUT)/veristat: $(OUTPUT)/veristat.o
+ $(call msg,BINARY,,$@)
+ $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
+
++# Linking uprobe_multi can fail due to relocation overflows on mips.
++$(OUTPUT)/uprobe_multi: CFLAGS += $(if $(filter mips, $(ARCH)),-mxgot)
+ $(OUTPUT)/uprobe_multi: uprobe_multi.c
+ $(call msg,BINARY,,$@)
+ $(Q)$(CC) $(CFLAGS) -O0 $(LDFLAGS) $^ $(LDLIBS) -o $@
+diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
+index 627b74ae041b52..90dc3aca32bd8f 100644
+--- a/tools/testing/selftests/bpf/bench.c
++++ b/tools/testing/selftests/bpf/bench.c
+@@ -10,6 +10,7 @@
+ #include <sys/sysinfo.h>
+ #include <signal.h>
+ #include "bench.h"
++#include "bpf_util.h"
+ #include "testing_helpers.h"
+
+ struct env env = {
+diff --git a/tools/testing/selftests/bpf/bench.h b/tools/testing/selftests/bpf/bench.h
+index 68180d8f8558ec..005c401b3e2275 100644
+--- a/tools/testing/selftests/bpf/bench.h
++++ b/tools/testing/selftests/bpf/bench.h
+@@ -10,6 +10,7 @@
+ #include <math.h>
+ #include <time.h>
+ #include <sys/syscall.h>
++#include <limits.h>
+
+ struct cpu_set {
+ bool *cpus;
+diff --git a/tools/testing/selftests/bpf/map_tests/sk_storage_map.c b/tools/testing/selftests/bpf/map_tests/sk_storage_map.c
+index 18405c3b7cee9a..af10c309359a77 100644
+--- a/tools/testing/selftests/bpf/map_tests/sk_storage_map.c
++++ b/tools/testing/selftests/bpf/map_tests/sk_storage_map.c
+@@ -412,7 +412,7 @@ static void test_sk_storage_map_stress_free(void)
+ rlim_new.rlim_max = rlim_new.rlim_cur + 128;
+ err = setrlimit(RLIMIT_NOFILE, &rlim_new);
+ CHECK(err, "setrlimit(RLIMIT_NOFILE)", "rlim_new:%lu errno:%d",
+- rlim_new.rlim_cur, errno);
++ (unsigned long) rlim_new.rlim_cur, errno);
+ }
+
+ err = do_sk_storage_map_stress_free();
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c
+index b52ff8ce34db82..16bed9dd8e6a30 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c
+@@ -95,7 +95,7 @@ static unsigned short get_local_port(int fd)
+ struct sockaddr_in6 addr;
+ socklen_t addrlen = sizeof(addr);
+
+- if (!getsockname(fd, &addr, &addrlen))
++ if (!getsockname(fd, (struct sockaddr *)&addr, &addrlen))
+ return ntohs(addr.sin6_port);
+
+ return 0;
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 47f42e6801056b..26019313e1fc20 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include "progs/core_reloc_types.h"
+ #include "bpf_testmod/bpf_testmod.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
+index b1a3a49a822a7b..42bd07f7218dc3 100644
+--- a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
++++ b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
+@@ -4,7 +4,6 @@
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <net/if.h>
+-#include <linux/in6.h>
+ #include <linux/if_alg.h>
+
+ #include "test_progs.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
+index dcb9e5070cc3d9..d79f398ec6b7c1 100644
+--- a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
++++ b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
+@@ -4,7 +4,6 @@
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <net/if.h>
+-#include <linux/in6.h>
+
+ #include "test_progs.h"
+ #include "network_helpers.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+index 9e5f38739104bf..3171047414a7dc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
++++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include <network_helpers.h>
+-#include <error.h>
+ #include <linux/if_tun.h>
+ #include <sys/uio.h>
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
+index c07991544a789e..34f8822fd2219c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
++++ b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include <network_helpers.h>
+ #include "kfree_skb.skel.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c b/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
+index 835a1d756c1662..b6e8d822e8e95f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
+@@ -47,7 +47,6 @@
+ #include <linux/if_ether.h>
+ #include <linux/if_packet.h>
+ #include <linux/if_tun.h>
+-#include <linux/icmp.h>
+ #include <arpa/inet.h>
+ #include <unistd.h>
+ #include <errno.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c b/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c
+index 03825d2b45a8b7..6c50c0f63f4365 100644
+--- a/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c
++++ b/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c
+@@ -49,6 +49,7 @@
+ * is not crashed, it is considered successful.
+ */
+ #define NETNS "ns_lwt_reroute"
++#include <netinet/in.h>
+ #include "lwt_helpers.h"
+ #include "network_helpers.h"
+ #include <linux/net_tstamp.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+index e72d75d6baa71e..c29787e092d66a 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
++++ b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+@@ -11,7 +11,7 @@
+ #include <sched.h>
+ #include <sys/wait.h>
+ #include <sys/mount.h>
+-#include <sys/fcntl.h>
++#include <fcntl.h>
+ #include "network_helpers.h"
+
+ #define STACK_SIZE (1024 * 1024)
+diff --git a/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c b/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c
+index daa952711d8fdf..e9c07d561ded6d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c
++++ b/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include <network_helpers.h>
+ #include "test_parse_tcp_hdr_opt.skel.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+index ae87c00867ba42..dcb2f62cdec6c3 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+@@ -18,7 +18,6 @@
+ #include <arpa/inet.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+index 327d51f5914279..53b8ffc943dce9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+@@ -471,7 +471,7 @@ static int set_forwarding(bool enable)
+
+ static int __rcv_tstamp(int fd, const char *expected, size_t s, __u64 *tstamp)
+ {
+- struct __kernel_timespec pkt_ts = {};
++ struct timespec pkt_ts = {};
+ char ctl[CMSG_SPACE(sizeof(pkt_ts))];
+ struct timespec now_ts;
+ struct msghdr msg = {};
+@@ -495,7 +495,7 @@ static int __rcv_tstamp(int fd, const char *expected, size_t s, __u64 *tstamp)
+
+ cmsg = CMSG_FIRSTHDR(&msg);
+ if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
+- cmsg->cmsg_type == SO_TIMESTAMPNS_NEW)
++ cmsg->cmsg_type == SO_TIMESTAMPNS)
+ memcpy(&pkt_ts, CMSG_DATA(cmsg), sizeof(pkt_ts));
+
+ pkt_ns = pkt_ts.tv_sec * NSEC_PER_SEC + pkt_ts.tv_nsec;
+@@ -537,9 +537,9 @@ static int wait_netstamp_needed_key(void)
+ if (!ASSERT_GE(srv_fd, 0, "start_server"))
+ goto done;
+
+- err = setsockopt(srv_fd, SOL_SOCKET, SO_TIMESTAMPNS_NEW,
++ err = setsockopt(srv_fd, SOL_SOCKET, SO_TIMESTAMPNS,
+ &opt, sizeof(opt));
+- if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS_NEW)"))
++ if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS)"))
+ goto done;
+
+ cli_fd = connect_to_fd(srv_fd, TIMEOUT_MILLIS);
+@@ -621,9 +621,9 @@ static void test_inet_dtime(int family, int type, const char *addr, __u16 port)
+ return;
+
+ /* Ensure the kernel puts the (rcv) timestamp for all skb */
+- err = setsockopt(listen_fd, SOL_SOCKET, SO_TIMESTAMPNS_NEW,
++ err = setsockopt(listen_fd, SOL_SOCKET, SO_TIMESTAMPNS,
+ &opt, sizeof(opt));
+- if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS_NEW)"))
++ if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS)"))
+ goto done;
+
+ if (type == SOCK_STREAM) {
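+
The tc_redirect hunks step back from SO_TIMESTAMPNS_NEW with struct __kernel_timespec to the classic SO_TIMESTAMPNS with struct timespec, which every libc exposes through <sys/socket.h>; the _NEW variants come from kernel uapi headers and may be missing or mismatched in userspace builds. A minimal receive path using the classic option could look like this (a sketch, assuming Linux and that SO_TIMESTAMPNS was already enabled on the socket):

```
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>
#include <time.h>

/* Receive one message and, if present, copy out its SO_TIMESTAMPNS
 * control message (a struct timespec). Returns bytes received or -1. */
static ssize_t recv_with_tstamp(int fd, char *buf, size_t len,
				struct timespec *ts)
{
	char ctl[CMSG_SPACE(sizeof(*ts))];
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctl, .msg_controllen = sizeof(ctl),
	};
	struct cmsghdr *cmsg;
	ssize_t n = recvmsg(fd, &msg, 0);

	if (n < 0)
		return -1;
	for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg))
		if (cmsg->cmsg_level == SOL_SOCKET &&
		    cmsg->cmsg_type == SO_TIMESTAMPNS)
			memcpy(ts, CMSG_DATA(cmsg), sizeof(*ts));
	return n;
}
```

Enabling it is the one-liner visible in the hunks: int on = 1; setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPNS, &on, sizeof(on)).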
+diff --git a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
+index f2b99d95d91607..c38784c1c066e6 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
++++ b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include "cgroup_helpers.h"
+ #include "network_helpers.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+index e51721df14fc19..dfff6feac12c3c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+@@ -4,6 +4,7 @@
+ #define _GNU_SOURCE
+ #include <linux/compiler.h>
+ #include <linux/ring_buffer.h>
++#include <linux/build_bug.h>
+ #include <pthread.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
+index 81097a3f15eb56..4f10297437341b 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
++++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
+@@ -2,6 +2,9 @@
+ #ifndef __BPF_MISC_H__
+ #define __BPF_MISC_H__
+
++#define XSTR(s) STR(s)
++#define STR(s) #s
++
+ /* This set of attributes controls behavior of the
+ * test_loader.c:test_loader__run_subtests().
+ *
+@@ -26,6 +29,9 @@
+ *
+ * __regex Same as __msg, but using a regular expression.
+ * __regex_unpriv Same as __msg_unpriv but using a regular expression.
++ * __xlated Expect a line in the disassembly log after the verifier applies rewrites.
++ * Multiple __xlated attributes may be specified.
++ * __xlated_unpriv Same as __xlated but for unprivileged mode.
+ *
+ * __success Expect program load success in privileged mode.
+ * __success_unpriv Expect program load success in unprivileged mode.
+@@ -60,14 +66,20 @@
+ * __auxiliary Annotated program is not a separate test, but used as auxiliary
+ * for some other test cases and should always be loaded.
+ * __auxiliary_unpriv Same, but load program in unprivileged mode.
++ *
++ * __arch_* Specify on which architecture the test case should be tested.
++ * Several __arch_* annotations could be specified at once.
++ * When a test case is not run on the current arch, it is marked as skipped.
+ */
+-#define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" msg)))
+-#define __regex(regex) __attribute__((btf_decl_tag("comment:test_expect_regex=" regex)))
++#define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" XSTR(__COUNTER__) "=" msg)))
++#define __regex(regex) __attribute__((btf_decl_tag("comment:test_expect_regex=" XSTR(__COUNTER__) "=" regex)))
++#define __xlated(msg) __attribute__((btf_decl_tag("comment:test_expect_xlated=" XSTR(__COUNTER__) "=" msg)))
+ #define __failure __attribute__((btf_decl_tag("comment:test_expect_failure")))
+ #define __success __attribute__((btf_decl_tag("comment:test_expect_success")))
+ #define __description(desc) __attribute__((btf_decl_tag("comment:test_description=" desc)))
+-#define __msg_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" msg)))
+-#define __regex_unpriv(regex) __attribute__((btf_decl_tag("comment:test_expect_regex_unpriv=" regex)))
++#define __msg_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" XSTR(__COUNTER__) "=" msg)))
++#define __regex_unpriv(regex) __attribute__((btf_decl_tag("comment:test_expect_regex_unpriv=" XSTR(__COUNTER__) "=" regex)))
++#define __xlated_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_xlated_unpriv=" XSTR(__COUNTER__) "=" msg)))
+ #define __failure_unpriv __attribute__((btf_decl_tag("comment:test_expect_failure_unpriv")))
+ #define __success_unpriv __attribute__((btf_decl_tag("comment:test_expect_success_unpriv")))
+ #define __log_level(lvl) __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
+@@ -77,6 +89,10 @@
+ #define __auxiliary __attribute__((btf_decl_tag("comment:test_auxiliary")))
+ #define __auxiliary_unpriv __attribute__((btf_decl_tag("comment:test_auxiliary_unpriv")))
+ #define __btf_path(path) __attribute__((btf_decl_tag("comment:test_btf_path=" path)))
++#define __arch(arch) __attribute__((btf_decl_tag("comment:test_arch=" arch)))
++#define __arch_x86_64 __arch("X86_64")
++#define __arch_arm64 __arch("ARM64")
++#define __arch_riscv64 __arch("RISCV64")
+
+ /* Convenience macro for use with 'asm volatile' blocks */
+ #define __naked __attribute__((naked))
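+
The XSTR/STR pair added at the top of bpf_misc.h is the standard two-step stringification idiom, and __COUNTER__ (a GCC/Clang extension) expands to a fresh integer on every use. Together they make otherwise-identical tags distinct, which matters because the compiler deduplicates identical btf_decl_tag attributes. A standalone sketch of the mechanics (the TAG macro is hypothetical, not part of the header):

```
#define STR(s) #s
#define XSTR(s) STR(s)	/* expand s first, then stringify it */

/* Hypothetical tag macro mirroring the __msg() change above. */
#define TAG(msg) "tag=" XSTR(__COUNTER__) "=" msg

const char *a = TAG("foo");	/* e.g. "tag=0=foo" */
const char *b = TAG("foo");	/* e.g. "tag=1=foo" -- no longer a duplicate */
```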
+diff --git a/tools/testing/selftests/bpf/progs/cg_storage_multi.h b/tools/testing/selftests/bpf/progs/cg_storage_multi.h
+index a0778fe7857a14..41d59f0ee606c7 100644
+--- a/tools/testing/selftests/bpf/progs/cg_storage_multi.h
++++ b/tools/testing/selftests/bpf/progs/cg_storage_multi.h
+@@ -3,8 +3,6 @@
+ #ifndef __PROGS_CG_STORAGE_MULTI_H
+ #define __PROGS_CG_STORAGE_MULTI_H
+
+-#include <asm/types.h>
+-
+ struct cgroup_value {
+ __u32 egress_pkts;
+ __u32 ingress_pkts;
+diff --git a/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c b/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c
+index f5ac5f3e89196f..568816307f7125 100644
+--- a/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c
++++ b/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c
+@@ -31,6 +31,7 @@ int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode)
+
+ if (fmode & FMODE_WRITE)
+ return -EACCES;
++ barrier();
+
+ return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+index 85e48069c9e616..d4b99c3b4719b2 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
++++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+@@ -1213,10 +1213,10 @@ __success __log_level(2)
+ * - once for path entry - label 2;
+ * - once for path entry - label 1 - label 2.
+ */
+-__msg("r1 = *(u64 *)(r10 -8)")
+-__msg("exit")
+-__msg("r1 = *(u64 *)(r10 -8)")
+-__msg("exit")
++__msg("8: (79) r1 = *(u64 *)(r10 -8)")
++__msg("9: (95) exit")
++__msg("from 2 to 7")
++__msg("8: safe")
+ __msg("processed 11 insns")
+ __flag(BPF_F_TEST_STATE_FREQ)
+ __naked void old_stack_misc_vs_cur_ctx_ptr(void)
+diff --git a/tools/testing/selftests/bpf/test_cpp.cpp b/tools/testing/selftests/bpf/test_cpp.cpp
+index dde0bb16e782e9..abc2a56ab26164 100644
+--- a/tools/testing/selftests/bpf/test_cpp.cpp
++++ b/tools/testing/selftests/bpf/test_cpp.cpp
+@@ -6,6 +6,10 @@
+ #include <bpf/libbpf.h>
+ #include <bpf/bpf.h>
+ #include <bpf/btf.h>
++
++#ifndef _Bool
++#define _Bool bool
++#endif
+ #include "test_core_extern.skel.h"
+ #include "struct_ops_module.skel.h"
+
+diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
+index f14e10b0de96e5..2150e9c9b53fb1 100644
+--- a/tools/testing/selftests/bpf/test_loader.c
++++ b/tools/testing/selftests/bpf/test_loader.c
+@@ -7,6 +7,7 @@
+ #include <bpf/btf.h>
+
+ #include "autoconf_helper.h"
++#include "disasm_helpers.h"
+ #include "unpriv_helpers.h"
+ #include "cap_helpers.h"
+
+@@ -19,10 +20,12 @@
+ #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
+ #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
+ #define TEST_TAG_EXPECT_REGEX_PFX "comment:test_expect_regex="
++#define TEST_TAG_EXPECT_XLATED_PFX "comment:test_expect_xlated="
+ #define TEST_TAG_EXPECT_FAILURE_UNPRIV "comment:test_expect_failure_unpriv"
+ #define TEST_TAG_EXPECT_SUCCESS_UNPRIV "comment:test_expect_success_unpriv"
+ #define TEST_TAG_EXPECT_MSG_PFX_UNPRIV "comment:test_expect_msg_unpriv="
+ #define TEST_TAG_EXPECT_REGEX_PFX_UNPRIV "comment:test_expect_regex_unpriv="
++#define TEST_TAG_EXPECT_XLATED_PFX_UNPRIV "comment:test_expect_xlated_unpriv="
+ #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
+ #define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags="
+ #define TEST_TAG_DESCRIPTION_PFX "comment:test_description="
+@@ -31,6 +34,7 @@
+ #define TEST_TAG_AUXILIARY "comment:test_auxiliary"
+ #define TEST_TAG_AUXILIARY_UNPRIV "comment:test_auxiliary_unpriv"
+ #define TEST_BTF_PATH "comment:test_btf_path="
++#define TEST_TAG_ARCH "comment:test_arch="
+
+ /* Warning: duplicated in bpf_misc.h */
+ #define POINTER_VALUE 0xcafe4all
+@@ -55,11 +59,16 @@ struct expect_msg {
+ regex_t regex;
+ };
+
++struct expected_msgs {
++ struct expect_msg *patterns;
++ size_t cnt;
++};
++
+ struct test_subspec {
+ char *name;
+ bool expect_failure;
+- struct expect_msg *expect_msgs;
+- size_t expect_msg_cnt;
++ struct expected_msgs expect_msgs;
++ struct expected_msgs expect_xlated;
+ int retval;
+ bool execute;
+ };
+@@ -72,6 +81,7 @@ struct test_spec {
+ int log_level;
+ int prog_flags;
+ int mode_mask;
++ int arch_mask;
+ bool auxiliary;
+ bool valid;
+ };
+@@ -96,44 +106,47 @@ void test_loader_fini(struct test_loader *tester)
+ free(tester->log_buf);
+ }
+
+-static void free_test_spec(struct test_spec *spec)
++static void free_msgs(struct expected_msgs *msgs)
+ {
+ int i;
+
++ for (i = 0; i < msgs->cnt; i++)
++ if (msgs->patterns[i].regex_str)
++ regfree(&msgs->patterns[i].regex);
++ free(msgs->patterns);
++ msgs->patterns = NULL;
++ msgs->cnt = 0;
++}
++
++static void free_test_spec(struct test_spec *spec)
++{
+ /* Deallocate expect_msgs arrays. */
+- for (i = 0; i < spec->priv.expect_msg_cnt; i++)
+- if (spec->priv.expect_msgs[i].regex_str)
+- regfree(&spec->priv.expect_msgs[i].regex);
+- for (i = 0; i < spec->unpriv.expect_msg_cnt; i++)
+- if (spec->unpriv.expect_msgs[i].regex_str)
+- regfree(&spec->unpriv.expect_msgs[i].regex);
++ free_msgs(&spec->priv.expect_msgs);
++ free_msgs(&spec->unpriv.expect_msgs);
++ free_msgs(&spec->priv.expect_xlated);
++ free_msgs(&spec->unpriv.expect_xlated);
+
+ free(spec->priv.name);
+ free(spec->unpriv.name);
+- free(spec->priv.expect_msgs);
+- free(spec->unpriv.expect_msgs);
+-
+ spec->priv.name = NULL;
+ spec->unpriv.name = NULL;
+- spec->priv.expect_msgs = NULL;
+- spec->unpriv.expect_msgs = NULL;
+ }
+
+-static int push_msg(const char *substr, const char *regex_str, struct test_subspec *subspec)
++static int push_msg(const char *substr, const char *regex_str, struct expected_msgs *msgs)
+ {
+ void *tmp;
+ int regcomp_res;
+ char error_msg[100];
+ struct expect_msg *msg;
+
+- tmp = realloc(subspec->expect_msgs,
+- (1 + subspec->expect_msg_cnt) * sizeof(struct expect_msg));
++ tmp = realloc(msgs->patterns,
++ (1 + msgs->cnt) * sizeof(struct expect_msg));
+ if (!tmp) {
+ ASSERT_FAIL("failed to realloc memory for messages\n");
+ return -ENOMEM;
+ }
+- subspec->expect_msgs = tmp;
+- msg = &subspec->expect_msgs[subspec->expect_msg_cnt];
++ msgs->patterns = tmp;
++ msg = &msgs->patterns[msgs->cnt];
+
+ if (substr) {
+ msg->substr = substr;
+@@ -150,7 +163,7 @@ static int push_msg(const char *substr, const char *regex_str, struct test_subsp
+ }
+ }
+
+- subspec->expect_msg_cnt += 1;
++ msgs->cnt += 1;
+ return 0;
+ }
+
+@@ -202,6 +215,41 @@ static void update_flags(int *flags, int flag, bool clear)
+ *flags |= flag;
+ }
+
++/* Matches a string of the form '<pfx>[^=]=.*' and returns its suffix.
++ * Used to parse btf_decl_tag values.
++ * Such values require a unique prefix because the compiler does not add
++ * the same __attribute__((btf_decl_tag(...))) twice.
++ * The test suite uses two-component tags for such cases:
++ *
++ * <pfx> __COUNTER__ '='
++ *
++ * For example, two consecutive __msg tags '__msg("foo") __msg("foo")'
++ * would be encoded as:
++ *
++ * [18] DECL_TAG 'comment:test_expect_msg=0=foo' type_id=15 component_idx=-1
++ * [19] DECL_TAG 'comment:test_expect_msg=1=foo' type_id=15 component_idx=-1
++ *
++ * And the purpose of this function is to extract 'foo' from the above.
++ */
++static const char *skip_dynamic_pfx(const char *s, const char *pfx)
++{
++ const char *msg;
++
++ if (strncmp(s, pfx, strlen(pfx)) != 0)
++ return NULL;
++ msg = s + strlen(pfx);
++ msg = strchr(msg, '=');
++ if (!msg)
++ return NULL;
++ return msg + 1;
++}
++
++enum arch {
++ ARCH_X86_64 = 0x1,
++ ARCH_ARM64 = 0x2,
++ ARCH_RISCV64 = 0x4,
++};
++
+ /* Uses btf_decl_tag attributes to describe the expected test
+ * behavior, see bpf_misc.h for detailed description of each attribute
+ * and attribute combinations.
+@@ -215,6 +263,7 @@ static int parse_test_spec(struct test_loader *tester,
+ bool has_unpriv_result = false;
+ bool has_unpriv_retval = false;
+ int func_id, i, err = 0;
++ u32 arch_mask = 0;
+ struct btf *btf;
+
+ memset(spec, 0, sizeof(*spec));
+@@ -270,27 +319,33 @@ static int parse_test_spec(struct test_loader *tester,
+ } else if (strcmp(s, TEST_TAG_AUXILIARY_UNPRIV) == 0) {
+ spec->auxiliary = true;
+ spec->mode_mask |= UNPRIV;
+- } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
+- msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
+- err = push_msg(msg, NULL, &spec->priv);
++ } else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_MSG_PFX))) {
++ err = push_msg(msg, NULL, &spec->priv.expect_msgs);
+ if (err)
+ goto cleanup;
+ spec->mode_mask |= PRIV;
+- } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX_UNPRIV)) {
+- msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX_UNPRIV) - 1;
+- err = push_msg(msg, NULL, &spec->unpriv);
++ } else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_MSG_PFX_UNPRIV))) {
++ err = push_msg(msg, NULL, &spec->unpriv.expect_msgs);
+ if (err)
+ goto cleanup;
+ spec->mode_mask |= UNPRIV;
+- } else if (str_has_pfx(s, TEST_TAG_EXPECT_REGEX_PFX)) {
+- msg = s + sizeof(TEST_TAG_EXPECT_REGEX_PFX) - 1;
+- err = push_msg(NULL, msg, &spec->priv);
++ } else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_REGEX_PFX))) {
++ err = push_msg(NULL, msg, &spec->priv.expect_msgs);
+ if (err)
+ goto cleanup;
+ spec->mode_mask |= PRIV;
+- } else if (str_has_pfx(s, TEST_TAG_EXPECT_REGEX_PFX_UNPRIV)) {
+- msg = s + sizeof(TEST_TAG_EXPECT_REGEX_PFX_UNPRIV) - 1;
+- err = push_msg(NULL, msg, &spec->unpriv);
++ } else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_REGEX_PFX_UNPRIV))) {
++ err = push_msg(NULL, msg, &spec->unpriv.expect_msgs);
++ if (err)
++ goto cleanup;
++ spec->mode_mask |= UNPRIV;
++ } else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_XLATED_PFX))) {
++ err = push_msg(msg, NULL, &spec->priv.expect_xlated);
++ if (err)
++ goto cleanup;
++ spec->mode_mask |= PRIV;
++ } else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_XLATED_PFX_UNPRIV))) {
++ err = push_msg(msg, NULL, &spec->unpriv.expect_xlated);
+ if (err)
+ goto cleanup;
+ spec->mode_mask |= UNPRIV;
+@@ -341,11 +396,26 @@ static int parse_test_spec(struct test_loader *tester,
+ goto cleanup;
+ update_flags(&spec->prog_flags, flags, clear);
+ }
++ } else if (str_has_pfx(s, TEST_TAG_ARCH)) {
++ val = s + sizeof(TEST_TAG_ARCH) - 1;
++ if (strcmp(val, "X86_64") == 0) {
++ arch_mask |= ARCH_X86_64;
++ } else if (strcmp(val, "ARM64") == 0) {
++ arch_mask |= ARCH_ARM64;
++ } else if (strcmp(val, "RISCV64") == 0) {
++ arch_mask |= ARCH_RISCV64;
++ } else {
++ PRINT_FAIL("bad arch spec: '%s'", val);
++ err = -EINVAL;
++ goto cleanup;
++ }
+ } else if (str_has_pfx(s, TEST_BTF_PATH)) {
+ spec->btf_custom_path = s + sizeof(TEST_BTF_PATH) - 1;
+ }
+ }
+
++ spec->arch_mask = arch_mask;
++
+ if (spec->mode_mask == 0)
+ spec->mode_mask = PRIV;
+
+@@ -387,11 +457,22 @@ static int parse_test_spec(struct test_loader *tester,
+ spec->unpriv.execute = spec->priv.execute;
+ }
+
+- if (!spec->unpriv.expect_msgs) {
+- for (i = 0; i < spec->priv.expect_msg_cnt; i++) {
+- struct expect_msg *msg = &spec->priv.expect_msgs[i];
++ if (spec->unpriv.expect_msgs.cnt == 0) {
++ for (i = 0; i < spec->priv.expect_msgs.cnt; i++) {
++ struct expect_msg *msg = &spec->priv.expect_msgs.patterns[i];
++
++ err = push_msg(msg->substr, msg->regex_str,
++ &spec->unpriv.expect_msgs);
++ if (err)
++ goto cleanup;
++ }
++ }
++ if (spec->unpriv.expect_xlated.cnt == 0) {
++ for (i = 0; i < spec->priv.expect_xlated.cnt; i++) {
++ struct expect_msg *msg = &spec->priv.expect_xlated.patterns[i];
+
+- err = push_msg(msg->substr, msg->regex_str, &spec->unpriv);
++ err = push_msg(msg->substr, msg->regex_str,
++ &spec->unpriv.expect_xlated);
+ if (err)
+ goto cleanup;
+ }
+@@ -434,7 +515,6 @@ static void prepare_case(struct test_loader *tester,
+ bpf_program__set_flags(prog, prog_flags | spec->prog_flags);
+
+ tester->log_buf[0] = '\0';
+- tester->next_match_pos = 0;
+ }
+
+ static void emit_verifier_log(const char *log_buf, bool force)
+@@ -444,39 +524,41 @@ static void emit_verifier_log(const char *log_buf, bool force)
+ fprintf(stdout, "VERIFIER LOG:\n=============\n%s=============\n", log_buf);
+ }
+
+-static void validate_case(struct test_loader *tester,
+- struct test_subspec *subspec,
+- struct bpf_object *obj,
+- struct bpf_program *prog,
+- int load_err)
++static void emit_xlated(const char *xlated, bool force)
++{
++ if (!force && env.verbosity == VERBOSE_NONE)
++ return;
++ fprintf(stdout, "XLATED:\n=============\n%s=============\n", xlated);
++}
++
++static void validate_msgs(char *log_buf, struct expected_msgs *msgs,
++ void (*emit_fn)(const char *buf, bool force))
+ {
+- int i, j, err;
+- char *match;
+ regmatch_t reg_match[1];
++ const char *log = log_buf;
++ int i, j, err;
+
+- for (i = 0; i < subspec->expect_msg_cnt; i++) {
+- struct expect_msg *msg = &subspec->expect_msgs[i];
++ for (i = 0; i < msgs->cnt; i++) {
++ struct expect_msg *msg = &msgs->patterns[i];
++ const char *match = NULL;
+
+ if (msg->substr) {
+- match = strstr(tester->log_buf + tester->next_match_pos, msg->substr);
++ match = strstr(log, msg->substr);
+ if (match)
+- tester->next_match_pos = match - tester->log_buf + strlen(msg->substr);
++ log = match + strlen(msg->substr);
+ } else {
+- err = regexec(&msg->regex,
+- tester->log_buf + tester->next_match_pos, 1, reg_match, 0);
++ err = regexec(&msg->regex, log, 1, reg_match, 0);
+ if (err == 0) {
+- match = tester->log_buf + tester->next_match_pos + reg_match[0].rm_so;
+- tester->next_match_pos += reg_match[0].rm_eo;
+- } else {
+- match = NULL;
++ match = log + reg_match[0].rm_so;
++ log += reg_match[0].rm_eo;
+ }
+ }
+
+ if (!ASSERT_OK_PTR(match, "expect_msg")) {
+ if (env.verbosity == VERBOSE_NONE)
+- emit_verifier_log(tester->log_buf, true /*force*/);
++ emit_fn(log_buf, true /*force*/);
+ for (j = 0; j <= i; j++) {
+- msg = &subspec->expect_msgs[j];
++ msg = &msgs->patterns[j];
+ fprintf(stderr, "%s %s: '%s'\n",
+ j < i ? "MATCHED " : "EXPECTED",
+ msg->substr ? "SUBSTR" : " REGEX",
+@@ -611,6 +693,51 @@ static bool should_do_test_run(struct test_spec *spec, struct test_subspec *subs
+ return true;
+ }
+
++/* Get a disassembly of a BPF program after the verifier applies all rewrites */
++static int get_xlated_program_text(int prog_fd, char *text, size_t text_sz)
++{
++ struct bpf_insn *insn_start = NULL, *insn, *insn_end;
++ __u32 insns_cnt = 0, i;
++ char buf[64];
++ FILE *out = NULL;
++ int err;
++
++ err = get_xlated_program(prog_fd, &insn_start, &insns_cnt);
++ if (!ASSERT_OK(err, "get_xlated_program"))
++ goto out;
++ out = fmemopen(text, text_sz, "w");
++ if (!ASSERT_OK_PTR(out, "fmemopen"))
++ goto out;
++ insn_end = insn_start + insns_cnt;
++ insn = insn_start;
++ while (insn < insn_end) {
++ i = insn - insn_start;
++ insn = disasm_insn(insn, buf, sizeof(buf));
++ fprintf(out, "%d: %s\n", i, buf);
++ }
++ fflush(out);
++
++out:
++ free(insn_start);
++ if (out)
++ fclose(out);
++ return err;
++}
++
++static bool run_on_current_arch(int arch_mask)
++{
++ if (arch_mask == 0)
++ return true;
++#if defined(__x86_64__)
++ return arch_mask & ARCH_X86_64;
++#elif defined(__aarch64__)
++ return arch_mask & ARCH_ARM64;
++#elif defined(__riscv) && __riscv_xlen == 64
++ return arch_mask & ARCH_RISCV64;
++#endif
++ return false;
++}
++
+ /* this function is forced noinline and has short generic name to look better
+ * in test_progs output (in case of a failure)
+ */
+@@ -635,6 +762,11 @@ void run_subtest(struct test_loader *tester,
+ if (!test__start_subtest(subspec->name))
+ return;
+
++ if (!run_on_current_arch(spec->arch_mask)) {
++ test__skip();
++ return;
++ }
++
+ if (unpriv) {
+ if (!can_execute_unpriv(tester, spec)) {
+ test__skip();
+@@ -695,9 +827,17 @@ void run_subtest(struct test_loader *tester,
+ goto tobj_cleanup;
+ }
+ }
+-
+ emit_verifier_log(tester->log_buf, false /*force*/);
+- validate_case(tester, subspec, tobj, tprog, err);
++ validate_msgs(tester->log_buf, &subspec->expect_msgs, emit_verifier_log);
++
++ if (subspec->expect_xlated.cnt) {
++ err = get_xlated_program_text(bpf_program__fd(tprog),
++ tester->log_buf, tester->log_buf_sz);
++ if (err)
++ goto tobj_cleanup;
++ emit_xlated(tester->log_buf, false /*force*/);
++ validate_msgs(tester->log_buf, &subspec->expect_xlated, emit_xlated);
++ }
+
+ if (should_do_test_run(spec, subspec)) {
+ /* For some reason test_verifier executes programs
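+
Tying the test_loader.c pieces together: the __COUNTER__ component injected by bpf_misc.h sits between the tag prefix and the payload, and skip_dynamic_pfx() peels both off when the tags are read back. A usage sketch with a made-up tag value:

```
/* Given a two-component tag, skip the prefix and the counter. */
const char *msg = skip_dynamic_pfx("comment:test_expect_msg=0=foo",
				   "comment:test_expect_msg=");
/* msg now points at "foo"; NULL if the prefix or the '=' is missing. */
```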
+diff --git a/tools/testing/selftests/bpf/test_lru_map.c b/tools/testing/selftests/bpf/test_lru_map.c
+index 4d0650cfb5cd8b..fda7589c50236c 100644
+--- a/tools/testing/selftests/bpf/test_lru_map.c
++++ b/tools/testing/selftests/bpf/test_lru_map.c
+@@ -126,7 +126,8 @@ static int sched_next_online(int pid, int *next_to_try)
+
+ while (next < nr_cpus) {
+ CPU_ZERO(&cpuset);
+- CPU_SET(next++, &cpuset);
++ CPU_SET(next, &cpuset);
++ next++;
+ if (!sched_setaffinity(pid, sizeof(cpuset), &cpuset)) {
+ ret = 0;
+ break;
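+
The test_lru_map change looks cosmetic but is not: CPU_SET() is a macro, and an implementation is free to evaluate its cpu argument more than once, so passing next++ risks multiple increments (and skipped CPUs). Hoisting the side effect out of the macro is always safe. The demo below uses an illustrative macro, not any particular libc's definition:

```
#include <stdio.h>

struct demo_set { unsigned char bits[16]; };

/* Illustrative macro: evaluates 'cpu' twice, as a libc's CPU_SET() may. */
#define DEMO_SET(cpu, set) \
	((set)->bits[(cpu) / 8] |= 1u << ((cpu) % 8))

int main(void)
{
	struct demo_set s = { 0 };
	int next = 3;

	/* DEMO_SET(next++, &s) would modify 'next' twice without a
	 * sequence point -- undefined behavior. The patched order is: */
	DEMO_SET(next, &s);
	next++;
	printf("next=%d, byte0=0x%x\n", next, s.bits[0]);
	return 0;
}
```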
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 89ff704e9dad5e..d5d0cb4eb1975b 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -10,7 +10,6 @@
+ #include <sched.h>
+ #include <signal.h>
+ #include <string.h>
+-#include <execinfo.h> /* backtrace */
+ #include <sys/sysinfo.h> /* get_nprocs */
+ #include <netinet/in.h>
+ #include <sys/select.h>
+@@ -19,6 +18,21 @@
+ #include <bpf/btf.h>
+ #include "json_writer.h"
+
++#ifdef __GLIBC__
++#include <execinfo.h> /* backtrace */
++#endif
++
++/* Default backtrace funcs if missing at link */
++__weak int backtrace(void **buffer, int size)
++{
++ return 0;
++}
++
++__weak void backtrace_symbols_fd(void *const *buffer, int size, int fd)
++{
++ dprintf(fd, "<backtrace not supported>\n");
++}
++
+ static bool verbose(void)
+ {
+ return env.verbosity > VERBOSE_NONE;
+@@ -1731,7 +1745,7 @@ int main(int argc, char **argv)
+ /* launch workers if requested */
+ env.worker_id = -1; /* main process */
+ if (env.workers) {
+- env.worker_pids = calloc(sizeof(__pid_t), env.workers);
++ env.worker_pids = calloc(sizeof(pid_t), env.workers);
+ env.worker_socks = calloc(sizeof(int), env.workers);
+ if (env.debug)
+ fprintf(stdout, "Launching %d workers.\n", env.workers);
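+
<execinfo.h> is a glibc extension, so the hunk above guards the include and supplies __weak fallbacks: if the libc ships real backtrace()/backtrace_symbols_fd() symbols, they win at link time; otherwise the stubs keep the binary linkable and print a placeholder. A hedged usage sketch building on those declarations:

```
/* Dump a stack trace to stderr where the platform supports it; with only
 * the weak stubs linked in, this prints "<backtrace not supported>". */
static void dump_stack_if_supported(void)
{
	void *frames[64];
	int n = backtrace(frames, 64);	/* 0 from the fallback stub */

	backtrace_symbols_fd(frames, n, 2 /* stderr */);
}
```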
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index 51341d50213b9d..b1e949fb16cf33 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -447,7 +447,6 @@ typedef int (*pre_execution_cb)(struct bpf_object *obj);
+ struct test_loader {
+ char *log_buf;
+ size_t log_buf_sz;
+- size_t next_match_pos;
+ pre_execution_cb pre_execution_cb;
+
+ struct bpf_object *obj;
+diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c
+index d5379a0e6da804..680e452583a78b 100644
+--- a/tools/testing/selftests/bpf/testing_helpers.c
++++ b/tools/testing/selftests/bpf/testing_helpers.c
+@@ -220,13 +220,13 @@ int parse_test_list(const char *s,
+ bool is_glob_pattern)
+ {
+ char *input, *state = NULL, *test_spec;
+- int err = 0;
++ int err = 0, cnt = 0;
+
+ input = strdup(s);
+ if (!input)
+ return -ENOMEM;
+
+- while ((test_spec = strtok_r(state ? NULL : input, ",", &state))) {
++ while ((test_spec = strtok_r(cnt++ ? NULL : input, ",", &state))) {
+ err = insert_test(set, test_spec, is_glob_pattern);
+ if (err)
+ break;
+@@ -451,7 +451,7 @@ int get_xlated_program(int fd_prog, struct bpf_insn **buf, __u32 *cnt)
+
+ *cnt = xlated_prog_len / buf_element_size;
+ *buf = calloc(*cnt, buf_element_size);
+- if (!buf) {
++ if (!*buf) {
+ perror("can't allocate xlated program buffer");
+ return -ENOMEM;
+ }
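+
Two quiet fixes land in testing_helpers.c. First, `state ? NULL : input` is not a reliable first-call test because strtok_r()'s saveptr is opaque: an implementation may leave it NULL after the first call consumes the whole string, which would feed the input in again, so an explicit call counter is the portable idiom (veristat.c below gets the same treatment). Second, the calloc() check previously tested the out-parameter `buf`, which is never NULL, instead of the allocation `*buf`. The portable loop shape, with handle() as a placeholder:

```
#include <string.h>

extern void handle(const char *tok);	/* placeholder for per-token work */

/* Pass the input string only on the first call, tracked by a counter. */
static void for_each_token(char *input)
{
	char *state = NULL, *tok;
	int cnt = 0;

	while ((tok = strtok_r(cnt++ ? NULL : input, ",", &state)))
		handle(tok);
}
```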
+diff --git a/tools/testing/selftests/bpf/unpriv_helpers.c b/tools/testing/selftests/bpf/unpriv_helpers.c
+index b6d016461fb023..220f6a96381345 100644
+--- a/tools/testing/selftests/bpf/unpriv_helpers.c
++++ b/tools/testing/selftests/bpf/unpriv_helpers.c
+@@ -2,7 +2,6 @@
+
+ #include <stdbool.h>
+ #include <stdlib.h>
+-#include <error.h>
+ #include <stdio.h>
+ #include <string.h>
+ #include <unistd.h>
+diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
+index b2854238d4a0eb..fd9780082ff48a 100644
+--- a/tools/testing/selftests/bpf/veristat.c
++++ b/tools/testing/selftests/bpf/veristat.c
+@@ -784,13 +784,13 @@ static int parse_stat(const char *stat_name, struct stat_specs *specs)
+ static int parse_stats(const char *stats_str, struct stat_specs *specs)
+ {
+ char *input, *state = NULL, *next;
+- int err;
++ int err, cnt = 0;
+
+ input = strdup(stats_str);
+ if (!input)
+ return -ENOMEM;
+
+- while ((next = strtok_r(state ? NULL : input, ",", &state))) {
++ while ((next = strtok_r(cnt++ ? NULL : input, ",", &state))) {
+ err = parse_stat(next, specs);
+ if (err) {
+ free(input);
+@@ -1493,7 +1493,7 @@ static int parse_stats_csv(const char *filename, struct stat_specs *specs,
+ while (fgets(line, sizeof(line), f)) {
+ char *input = line, *state = NULL, *next;
+ struct verif_stats *st = NULL;
+- int col = 0;
++ int col = 0, cnt = 0;
+
+ if (!header) {
+ void *tmp;
+@@ -1511,7 +1511,7 @@ static int parse_stats_csv(const char *filename, struct stat_specs *specs,
+ *stat_cntp += 1;
+ }
+
+- while ((next = strtok_r(state ? NULL : input, ",\n", &state))) {
++ while ((next = strtok_r(cnt++ ? NULL : input, ",\n", &state))) {
+ if (header) {
+ /* for the first line, set up spec stats */
+ err = parse_stat(next, specs);
+diff --git a/tools/testing/selftests/dt/test_unprobed_devices.sh b/tools/testing/selftests/dt/test_unprobed_devices.sh
+index 2d7e70c5ad2d36..5e3f42ef249eec 100755
+--- a/tools/testing/selftests/dt/test_unprobed_devices.sh
++++ b/tools/testing/selftests/dt/test_unprobed_devices.sh
+@@ -34,8 +34,21 @@ nodes_compatible=$(
+ # Check if node is available
+ if [[ -e "${node}"/status ]]; then
+ status=$(tr -d '\000' < "${node}"/status)
+- [[ "${status}" != "okay" && "${status}" != "ok" ]] && continue
++ if [[ "${status}" != "okay" && "${status}" != "ok" ]]; then
++ if [ -n "${disabled_nodes_regex}" ]; then
++ disabled_nodes_regex="${disabled_nodes_regex}|${node}"
++ else
++ disabled_nodes_regex="${node}"
++ fi
++ continue
++ fi
+ fi
++
++ # Ignore this node if one of its ancestors was disabled
++ if [ -n "${disabled_nodes_regex}" ]; then
++ echo "${node}" | grep -q -E "${disabled_nodes_regex}" && continue
++ fi
++
+ echo "${node}" | sed -e 's|\/proc\/device-tree||'
+ done | sort
+ )
+diff --git a/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc b/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
+index c45094d1e1d2db..803efd7b56c77d 100644
+--- a/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
++++ b/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
+@@ -6,6 +6,18 @@ original_group=`stat -c "%g" .`
+ original_owner=`stat -c "%u" .`
+
+ mount_point=`stat -c '%m' .`
++
++# If stat -c '%m' does not work (e.g. busybox) or fails, try to use the
++# current working directory (which should be a tracefs) as the mount point.
++if [ ! -d "$mount_point" ]; then
++ if mount | grep -qw $PWD ; then
++ mount_point=$PWD
++ else
++ # If PWD doesn't work, that is an environmental problem.
++ exit_unresolved
++ fi
++fi
++
+ mount_options=`mount | grep "$mount_point" | sed -e 's/.*(\(.*\)).*/\1/'`
+
+ # find another owner and group that is not the original
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+index 073a748b9380a1..263f6b798c853e 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+@@ -19,7 +19,14 @@ fail() { # mesg
+
+ FILTER=set_ftrace_filter
+ FUNC1="schedule"
+-FUNC2="sched_tick"
++if grep '^sched_tick\b' available_filter_functions; then
++ FUNC2="sched_tick"
++elif grep '^scheduler_tick\b' available_filter_functions; then
++ FUNC2="scheduler_tick"
++else
++ exit_unresolved
++fi
++
+
+ ALL_FUNCS="#### all functions enabled ####"
+
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc
+index e21c9c27ece476..77f4c07cdcb899 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc
+@@ -1,7 +1,7 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ # description: Kprobe event char type argument
+-# requires: kprobe_events
++# requires: kprobe_events available_filter_functions
+
+ case `uname -m` in
+ x86_64)
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
+index 93217d4595563f..39001073f7ed5d 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
+@@ -1,7 +1,7 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ # description: Kprobe event string type argument
+-# requires: kprobe_events
++# requires: kprobe_events available_filter_functions
+
+ case `uname -m` in
+ x86_64)
+diff --git a/tools/testing/selftests/kselftest.h b/tools/testing/selftests/kselftest.h
+index b8967b6e29d503..e195ec1568599a 100644
+--- a/tools/testing/selftests/kselftest.h
++++ b/tools/testing/selftests/kselftest.h
+@@ -61,6 +61,7 @@
+ #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+ #endif
+
++#if defined(__i386__) || defined(__x86_64__) /* arch */
+ /*
+ * gcc cpuid.h provides __cpuid_count() since v4.4.
+ * Clang/LLVM cpuid.h provides __cpuid_count() since v3.4.0.
+@@ -75,6 +76,7 @@
+ : "=a" (a), "=b" (b), "=c" (c), "=d" (d) \
+ : "0" (level), "2" (count))
+ #endif
++#endif /* end arch */
+
+ /* define kselftest exit codes */
+ #define KSFT_PASS 0
+diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
+index bfcea5cf9a4842..473853718542ef 100644
+--- a/tools/testing/selftests/mm/mseal_test.c
++++ b/tools/testing/selftests/mm/mseal_test.c
+@@ -99,6 +99,16 @@ static int sys_madvise(void *start, size_t len, int types)
+ return sret;
+ }
+
++static void *sys_mremap(void *addr, size_t old_len, size_t new_len,
++ unsigned long flags, void *new_addr)
++{
++ void *sret;
++
++ errno = 0;
++ sret = (void *) syscall(__NR_mremap, addr, old_len, new_len, flags, new_addr);
++ return sret;
++}
++
+ static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+ {
+ int ret = syscall(__NR_pkey_alloc, flags, init_val);
+@@ -1104,12 +1114,12 @@ static void test_seal_mremap_shrink(bool seal)
+ }
+
+ /* shrink from 4 pages to 2 pages. */
+- ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
++ ret2 = sys_mremap(ptr, size, 2 * page_size, 0, 0);
+ if (seal) {
+- FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
++ FAIL_TEST_IF_FALSE(ret2 == (void *) MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+ } else {
+- FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
++ FAIL_TEST_IF_FALSE(ret2 != (void *) MAP_FAILED);
+
+ }
+
+@@ -1136,7 +1146,7 @@ static void test_seal_mremap_expand(bool seal)
+ }
+
+ /* expand from 2 page to 4 pages. */
+- ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
++ ret2 = sys_mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1169,7 +1179,7 @@ static void test_seal_mremap_move(bool seal)
+ }
+
+ /* move from ptr to fixed address. */
+- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
++ ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1288,7 +1298,7 @@ static void test_seal_mremap_shrink_fixed(bool seal)
+ }
+
+ /* mremap to move and shrink to fixed address */
+- ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
++ ret2 = sys_mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ newAddr);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+@@ -1319,7 +1329,7 @@ static void test_seal_mremap_expand_fixed(bool seal)
+ }
+
+ /* mremap to move and expand to fixed address */
+- ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
++ ret2 = sys_mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ newAddr);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+@@ -1350,7 +1360,7 @@ static void test_seal_mremap_move_fixed(bool seal)
+ }
+
+ /* mremap to move to fixed address */
+- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
++ ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1379,14 +1389,13 @@ static void test_seal_mremap_move_fixed_zero(bool seal)
+ /*
+ * MREMAP_FIXED can move the mapping to zero address
+ */
+- ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
++ ret2 = sys_mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ 0);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+ } else {
+ FAIL_TEST_IF_FALSE(ret2 == 0);
+-
+ }
+
+ REPORT_TEST_PASS();
+@@ -1409,13 +1418,13 @@ static void test_seal_mremap_move_dontunmap(bool seal)
+ }
+
+ /* mremap to move, and don't unmap src addr. */
+- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
++ ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+ } else {
++ /* kernel will allocate a new address */
+ FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+-
+ }
+
+ REPORT_TEST_PASS();
+@@ -1423,7 +1432,7 @@ static void test_seal_mremap_move_dontunmap(bool seal)
+
+ static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+ {
+- void *ptr;
++ void *ptr, *ptr2;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+@@ -1438,24 +1447,30 @@ static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+ }
+
+ /*
+- * The 0xdeaddead should not have effect on dest addr
+- * when MREMAP_DONTUNMAP is set.
++ * The new address is any address that is not allocated.
++ * Use allocate/free to simulate that.
+ */
+- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+- 0xdeaddead);
++ setup_single_address(size, &ptr2);
++ FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
++ ret = sys_munmap(ptr2, size);
++ FAIL_TEST_IF_FALSE(!ret);
++
++ /*
++ * remap to any address.
++ */
++ ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
++ (void *) ptr2);
+ if (seal) {
+ FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ FAIL_TEST_IF_FALSE(errno == EPERM);
+ } else {
+- FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+- FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead);
+-
++ /* remap success and return ptr2 */
++ FAIL_TEST_IF_FALSE(ret2 == ptr2);
+ }
+
+ REPORT_TEST_PASS();
+ }
+
+-
+ static void test_seal_merge_and_split(void)
+ {
+ void *ptr;
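+
The mseal tests switch from the libc mremap() wrapper to a raw syscall(__NR_mremap, ...). The motivation is plausibly control: glibc's mremap() is variadic and only forwards the fifth argument for certain flags (MREMAP_FIXED, and MREMAP_DONTUNMAP only in newer releases), and wrappers differ across libcs, so going straight to the kernel keeps argument passing and errno behavior uniform for the sealing checks. Usage follows the pattern visible in the hunks; the fragment below is a sketch where 'ptr', 'size' and 'new_addr' come from earlier setup in each test:

```
void *ret2 = sys_mremap(ptr, size, size,
			MREMAP_MAYMOVE | MREMAP_FIXED, new_addr);
if (ret2 == (void *)MAP_FAILED && errno == EPERM)
	printf("move blocked: the mapping is sealed\n");
```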
+diff --git a/tools/testing/selftests/net/af_unix/msg_oob.c b/tools/testing/selftests/net/af_unix/msg_oob.c
+index 535eb2c3d7d1c8..3ed3882a93b8b1 100644
+--- a/tools/testing/selftests/net/af_unix/msg_oob.c
++++ b/tools/testing/selftests/net/af_unix/msg_oob.c
+@@ -525,6 +525,29 @@ TEST_F(msg_oob, ex_oob_drop_2)
+ }
+ }
+
++TEST_F(msg_oob, ex_oob_oob)
++{
++ sendpair("x", 1, MSG_OOB);
++ epollpair(true);
++ siocatmarkpair(true);
++
++ recvpair("x", 1, 1, MSG_OOB);
++ epollpair(false);
++ siocatmarkpair(true);
++
++ sendpair("y", 1, MSG_OOB);
++ epollpair(true);
++ siocatmarkpair(true);
++
++ recvpair("", -EAGAIN, 1, 0);
++ epollpair(false);
++ siocatmarkpair(false);
++
++ recvpair("", -EINVAL, 1, MSG_OOB);
++ epollpair(false);
++ siocatmarkpair(false);
++}
++
+ TEST_F(msg_oob, ex_oob_ahead_break)
+ {
+ sendpair("hello", 5, MSG_OOB);
+diff --git a/tools/testing/selftests/net/netfilter/ipvs.sh b/tools/testing/selftests/net/netfilter/ipvs.sh
+index 4ceee9fb39495b..d3edb16cd4b3f6 100755
+--- a/tools/testing/selftests/net/netfilter/ipvs.sh
++++ b/tools/testing/selftests/net/netfilter/ipvs.sh
+@@ -97,7 +97,7 @@ cleanup() {
+ }
+
+ server_listen() {
+- ip netns exec "$ns2" socat -u -4 TCP-LISTEN:8080,reuseaddr STDOUT > "${outfile}" &
++ ip netns exec "$ns2" timeout 5 socat -u -4 TCP-LISTEN:8080,reuseaddr STDOUT > "${outfile}" &
+ server_pid=$!
+ sleep 0.2
+ }
+diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
+index 742782438ca3b0..94cfdba5308d8c 100644
+--- a/tools/testing/selftests/resctrl/cat_test.c
++++ b/tools/testing/selftests/resctrl/cat_test.c
+@@ -290,12 +290,12 @@ static int cat_run_test(const struct resctrl_test *test, const struct user_param
+
+ static bool arch_supports_noncont_cat(const struct resctrl_test *test)
+ {
+- unsigned int eax, ebx, ecx, edx;
+-
+ /* AMD always supports non-contiguous CBM. */
+ if (get_vendor() == ARCH_AMD)
+ return true;
+
++#if defined(__i386__) || defined(__x86_64__) /* arch */
++ unsigned int eax, ebx, ecx, edx;
+ /* Intel support for non-contiguous CBM needs to be discovered. */
+ if (!strcmp(test->resource, "L3"))
+ __cpuid_count(0x10, 1, eax, ebx, ecx, edx);
+@@ -305,6 +305,9 @@ static bool arch_supports_noncont_cat(const struct resctrl_test *test)
+ return false;
+
+ return ((ecx >> 3) & 1);
++#endif /* end arch */
++
++ return false;
+ }
+
+ static int noncont_cat_run_test(const struct resctrl_test *test,
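+
Both the kselftest.h and cat_test.c hunks apply the same pattern: __cpuid_count() is x86-only inline assembly, so it is fenced behind an arch #if and the non-x86 path simply reports the feature as absent. The shape of the guard, with a hypothetical function name:

```
/* Compiles everywhere; only probes CPUID on x86. */
static bool l3_noncont_cbm_supported(void)
{
#if defined(__i386__) || defined(__x86_64__)
	unsigned int eax, ebx, ecx, edx;

	__cpuid_count(0x10, 1, eax, ebx, ecx, edx);
	return (ecx >> 3) & 1;
#else
	return false;
#endif
}
```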
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index cb2b78e92910fb..7164a9ece20874 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -5575,6 +5575,7 @@ __visible bool kvm_rebooting;
+ EXPORT_SYMBOL_GPL(kvm_rebooting);
+
+ static DEFINE_PER_CPU(bool, hardware_enabled);
++static DEFINE_MUTEX(kvm_usage_lock);
+ static int kvm_usage_count;
+
+ static int __hardware_enable_nolock(void)
+@@ -5607,10 +5608,10 @@ static int kvm_online_cpu(unsigned int cpu)
+ * be enabled. Otherwise running VMs would encounter unrecoverable
+ * errors when scheduled to this CPU.
+ */
+- mutex_lock(&kvm_lock);
++ mutex_lock(&kvm_usage_lock);
+ if (kvm_usage_count)
+ ret = __hardware_enable_nolock();
+- mutex_unlock(&kvm_lock);
++ mutex_unlock(&kvm_usage_lock);
+ return ret;
+ }
+
+@@ -5630,10 +5631,10 @@ static void hardware_disable_nolock(void *junk)
+
+ static int kvm_offline_cpu(unsigned int cpu)
+ {
+- mutex_lock(&kvm_lock);
++ mutex_lock(&kvm_usage_lock);
+ if (kvm_usage_count)
+ hardware_disable_nolock(NULL);
+- mutex_unlock(&kvm_lock);
++ mutex_unlock(&kvm_usage_lock);
+ return 0;
+ }
+
+@@ -5649,9 +5650,9 @@ static void hardware_disable_all_nolock(void)
+ static void hardware_disable_all(void)
+ {
+ cpus_read_lock();
+- mutex_lock(&kvm_lock);
++ mutex_lock(&kvm_usage_lock);
+ hardware_disable_all_nolock();
+- mutex_unlock(&kvm_lock);
++ mutex_unlock(&kvm_usage_lock);
+ cpus_read_unlock();
+ }
+
+@@ -5682,7 +5683,7 @@ static int hardware_enable_all(void)
+ * enable hardware multiple times.
+ */
+ cpus_read_lock();
+- mutex_lock(&kvm_lock);
++ mutex_lock(&kvm_usage_lock);
+
+ r = 0;
+
+@@ -5696,7 +5697,7 @@ static int hardware_enable_all(void)
+ }
+ }
+
+- mutex_unlock(&kvm_lock);
++ mutex_unlock(&kvm_usage_lock);
+ cpus_read_unlock();
+
+ return r;
+@@ -5724,13 +5725,13 @@ static int kvm_suspend(void)
+ {
+ /*
+ * Secondary CPUs and CPU hotplug are disabled across the suspend/resume
+- * callbacks, i.e. no need to acquire kvm_lock to ensure the usage count
+- * is stable. Assert that kvm_lock is not held to ensure the system
+- * isn't suspended while KVM is enabling hardware. Hardware enabling
+- * can be preempted, but the task cannot be frozen until it has dropped
+- * all locks (userspace tasks are frozen via a fake signal).
++ * callbacks, i.e. no need to acquire kvm_usage_lock to ensure the usage
++ * count is stable. Assert that kvm_usage_lock is not held to ensure
++ * the system isn't suspended while KVM is enabling hardware. Hardware
++ * enabling can be preempted, but the task cannot be frozen until it has
++ * dropped all locks (userspace tasks are frozen via a fake signal).
+ */
+- lockdep_assert_not_held(&kvm_lock);
++ lockdep_assert_not_held(&kvm_usage_lock);
+ lockdep_assert_irqs_disabled();
+
+ if (kvm_usage_count)
+@@ -5740,7 +5741,7 @@ static int kvm_suspend(void)
+
+ static void kvm_resume(void)
+ {
+- lockdep_assert_not_held(&kvm_lock);
++ lockdep_assert_not_held(&kvm_usage_lock);
+ lockdep_assert_irqs_disabled();
+
+ if (kvm_usage_count)
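+
The kvm_main.c hunks carve the usage counting out from under the global kvm_lock into a dedicated kvm_usage_lock, so the CPU-hotplug and suspend/resume callbacks never nest the much broader kvm_lock inside cpus_read_lock(); the updated kvm_suspend() comment spells out the resulting lockdep expectations. The core shape of the refactor, sketched with enable_hw() as a placeholder:

```
static DEFINE_MUTEX(kvm_usage_lock);	/* guards only the usage count */
static int kvm_usage_count;

static int usage_get(void)
{
	int r = 0;

	mutex_lock(&kvm_usage_lock);
	if (!kvm_usage_count++)
		r = enable_hw();	/* placeholder for hardware enable */
	mutex_unlock(&kvm_usage_lock);
	return r;
}
```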
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-04 22:53 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-04 22:53 UTC (permalink / raw
To: gentoo-commits
commit: de599559deffc5a4a19abb41233177322aff11bd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 4 22:52:15 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 4 22:52:15 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=de599559
libbpf: workaround (another) -Wmaybe-uninitialized false positive
Bug: https://bugs.gentoo.org/939106
Signed-off-by: Sam James <sam <AT> gentoo.org>
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
...workaround-Wmaybe-uninitialized-false-pos.patch | 64 ++++++++++++++++++++++
2 files changed, 68 insertions(+)
diff --git a/0000_README b/0000_README
index 5383d35f..17f65fb8 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
+Patch: 2991_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
+From: https://lore.kernel.org/bpf/f6962729197ae7cdf4f6d1512625bd92f2322d31.1725630494.git.sam@gentoo.org/
+Desc: libbpf: workaround (another) -Wmaybe-uninitialized false positive
+
Patch: 2995_dtrace-6.11_p1.patch
From: https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.11-flat
Desc: dtrace patch for 6.11.X (CTF, modules.builtin.objs)
diff --git a/2991_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch b/2991_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
new file mode 100644
index 00000000..f01221c7
--- /dev/null
+++ b/2991_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
@@ -0,0 +1,64 @@
+From git@z Thu Jan 1 00:00:00 1970
+Subject: [PATCH] libbpf: workaround (another) -Wmaybe-uninitialized false
+ positive
+From: Sam James <sam@gentoo.org>
+Date: Fri, 06 Sep 2024 14:48:14 +0100
+Message-Id: <f6962729197ae7cdf4f6d1512625bd92f2322d31.1725630494.git.sam@gentoo.org>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 8bit
+
+We get this with GCC 15 -O3 (at least):
+```
+libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
+libbpf.c:1109:18: error: ‘mod_btf’ may be used uninitialized [-Werror=maybe-uninitialized]
+ 1109 | kern_btf = mod_btf ? mod_btf->btf : obj->btf_vmlinux;
+ | ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+libbpf.c:1094:28: note: ‘mod_btf’ was declared here
+ 1094 | struct module_btf *mod_btf;
+ | ^~~~~~~
+In function ‘find_struct_ops_kern_types’,
+ inlined from ‘bpf_map__init_kern_struct_ops’ at libbpf.c:1102:8:
+libbpf.c:982:21: error: ‘btf’ may be used uninitialized [-Werror=maybe-uninitialized]
+ 982 | kern_type = btf__type_by_id(btf, kern_type_id);
+ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
+libbpf.c:967:21: note: ‘btf’ was declared here
+ 967 | struct btf *btf;
+ | ^~~
+```
+
+This is similar to the other libbpf fix from a few weeks ago for
+the same modelling-errno issue (fab45b962749184e1a1a57c7c583782b78fad539).
+
+Link: https://bugs.gentoo.org/939106
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ tools/lib/bpf/libbpf.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index a3be6f8fac09e..7315120574c29 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -988,7 +988,7 @@ find_struct_ops_kern_types(struct bpf_object *obj, const char *tname_raw,
+ {
+ const struct btf_type *kern_type, *kern_vtype;
+ const struct btf_member *kern_data_member;
+- struct btf *btf;
++ struct btf *btf = NULL;
+ __s32 kern_vtype_id, kern_type_id;
+ char tname[256];
+ __u32 i;
+@@ -1115,7 +1115,7 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
+ const struct btf *btf = obj->btf;
+ struct bpf_struct_ops *st_ops;
+ const struct btf *kern_btf;
+- struct module_btf *mod_btf;
++ struct module_btf *mod_btf = NULL;
+ void *data, *kern_data;
+ const char *tname;
+ int err;
+--
+2.46.0
+
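As with the earlier libbpf workaround it references, the pattern here is an out-parameter written only behind a success check: GCC cannot prove that a zero return from the helper implies the write, so it warns on the later read. Initializing to NULL is the conventional, zero-cost silencer. The shape of the false positive, with hypothetical helpers:

```
struct thing *p = NULL;		/* silences -Wmaybe-uninitialized */
int err = find_thing(&p);	/* writes p only when it returns 0 */

if (!err)
	use_thing(p);		/* GCC can't see err == 0 implies the write */
```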
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-10 11:34 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-10 11:34 UTC (permalink / raw
To: gentoo-commits
commit: e5deddfeed7e2afc00ce8e6a64e3e8ef84e86971
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 10 11:34:06 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 10 11:34:06 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e5deddfe
Linux patch 6.11.3
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1002_linux-6.11.3.patch | 27624 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 27628 insertions(+)
diff --git a/0000_README b/0000_README
index 17f65fb8..7b982874 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-6.11.2.patch
From: https://www.kernel.org
Desc: Linux 6.11.2
+Patch: 1002_linux-6.11.3.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.3
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1002_linux-6.11.3.patch b/1002_linux-6.11.3.patch
new file mode 100644
index 00000000..584f5748
--- /dev/null
+++ b/1002_linux-6.11.3.patch
@@ -0,0 +1,27624 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index cad6c3dc1f9c1f..d0c1acfcad405e 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -763,3 +763,25 @@ Date: November 2023
+ Contact: "Chao Yu" <chao@kernel.org>
+ Description: It controls to enable/disable IO aware feature for background discard.
+ By default, the value is 1 which indicates IO aware is on.
++
++What: /sys/fs/f2fs/<disk>/blkzone_alloc_policy
++Date: July 2024
++Contact: "Yuanhong Liao" <liaoyuanhong@vivo.com>
++Description: The zoned UFS device we currently use consists of two parts:
++ conventional zones and sequential zones. This knob controls which part
++ to prioritize for writes, with a default value of 0.
++
++ ======================== =========================================
++ value description
++ blkzone_alloc_policy = 0 Prioritize writing to sequential zones
++ blkzone_alloc_policy = 1 Only allow writing to sequential zones
++ blkzone_alloc_policy = 2 Prioritize writing to conventional zones
++ ======================== =========================================
++
++What: /sys/fs/f2fs/<disk>/migration_window_granularity
++Date: September 2024
++Contact: "Daeho Jeong" <daehojeong@google.com>
++Description: Controls the migration window granularity of garbage collection on a
++ large section. It controls the scanning window granularity for GC
++ migration in units of segments, while migration_granularity controls
++ the number of segments that can be migrated in the same turn.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 09126bb8cc9ffb..be010fec765410 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4788,6 +4788,16 @@
+ printk.time= Show timing data prefixed to each printk message line
+ Format: <bool> (1/Y/y=enable, 0/N/n=disable)
+
++ proc_mem.force_override= [KNL]
++ Format: {always | ptrace | never}
++ Traditionally /proc/pid/mem allows memory permissions to be
++ overridden without restrictions. This option may be set to
++ restrict that. Can be one of:
++ - 'always': traditional behavior always allows mem overrides.
++ - 'ptrace': only allow mem overrides for active ptracers.
++ - 'never': never allow mem overrides.
++ If not specified, default is the CONFIG_PROC_MEM_* choice.
++
+ processor.max_cstate= [HW,ACPI]
+ Limit processor to maximum C-state
+ max_cstate=9 overrides any DMI blacklist limit.
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 39c52385f11fb3..8cd4f365044b67 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -146,6 +146,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A715 | #2645198 | ARM64_ERRATUM_2645198 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A715 | #3456084 | ARM64_ERRATUM_3194386 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A720 | #3456091 | ARM64_ERRATUM_3194386 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A725 | #3456106 | ARM64_ERRATUM_3194386 |
+@@ -186,6 +188,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Neoverse-N2 | #3324339 | ARM64_ERRATUM_3194386 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Neoverse-N3 | #3456111 | ARM64_ERRATUM_3194386 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Neoverse-V1 | #1619801 | N/A |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Neoverse-V1 | #3324341 | ARM64_ERRATUM_3194386 |
+@@ -289,3 +293,5 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Microsoft | Azure Cobalt 100| #2253138 | ARM64_ERRATUM_2253138 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Microsoft | Azure Cobalt 100| #3324339 | ARM64_ERRATUM_3194386 |
+++----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+index bbe89ea9590ceb..e95c216282818e 100644
+--- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
++++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+@@ -34,6 +34,7 @@ properties:
+ and length of the AXI DMA controller IO space, unless
+ axistream-connected is specified, in which case the reg
+ attribute of the node referenced by it is used.
++ minItems: 1
+ maxItems: 2
+
+ interrupts:
+@@ -181,7 +182,7 @@ examples:
+ clock-names = "s_axi_lite_clk", "axis_clk", "ref_clk", "mgt_clk";
+ clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>;
+ phy-mode = "mii";
+- reg = <0x00 0x40000000 0x00 0x40000>;
++ reg = <0x40000000 0x40000>;
+ xlnx,rxcsum = <0x2>;
+ xlnx,rxmem = <0x800>;
+ xlnx,txcsum = <0x2>;
+diff --git a/Documentation/networking/net_cachelines/net_device.rst b/Documentation/networking/net_cachelines/net_device.rst
+index 70c4fb9d4e5ce0..d68f37f5b1f821 100644
+--- a/Documentation/networking/net_cachelines/net_device.rst
++++ b/Documentation/networking/net_cachelines/net_device.rst
+@@ -98,7 +98,7 @@ unsigned_int num_rx_queues
+ unsigned_int real_num_rx_queues - read_mostly get_rps_cpu
+ struct_bpf_prog* xdp_prog - read_mostly netif_elide_gro()
+ unsigned_long gro_flush_timeout - read_mostly napi_complete_done
+-int napi_defer_hard_irqs - read_mostly napi_complete_done
++u32 napi_defer_hard_irqs - read_mostly napi_complete_done
+ unsigned_int gro_max_size - read_mostly skb_gro_receive
+ unsigned_int gro_ipv4_max_size - read_mostly skb_gro_receive
+ rx_handler_func_t* rx_handler read_mostly - __netif_receive_skb_core
+diff --git a/Documentation/rust/general-information.rst b/Documentation/rust/general-information.rst
+index e3f388ef4ee423..a82926d7b379bd 100644
+--- a/Documentation/rust/general-information.rst
++++ b/Documentation/rust/general-information.rst
+@@ -75,7 +75,7 @@ should provide as-safe-as-possible abstractions as needed.
+ .. code-block::
+
+ rust/bindings/
+- (rust/helpers.c)
++ (rust/helpers/)
+
+ include/ -----+ <-+
+ | |
+@@ -112,7 +112,7 @@ output files in the ``rust/bindings/`` directory.
+
+ For parts of the C header that ``bindgen`` does not auto generate, e.g. C
+ ``inline`` functions or non-trivial macros, it is acceptable to add a small
+-wrapper function to ``rust/helpers.c`` to make it available for the Rust side as
++wrapper function to ``rust/helpers/`` to make it available for the Rust side as
+ well.
+
+ Abstractions
+diff --git a/Makefile b/Makefile
+index 7bcf0c32ea5eab..108d314ea95b52 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
+index b668c97663ec0c..f5b66f4cf45d96 100644
+--- a/arch/arm/crypto/aes-ce-glue.c
++++ b/arch/arm/crypto/aes-ce-glue.c
+@@ -711,7 +711,7 @@ static int __init aes_init(void)
+ algname = aes_algs[i].base.cra_name + 2;
+ drvname = aes_algs[i].base.cra_driver_name + 2;
+ basename = aes_algs[i].base.cra_driver_name;
+- simd = simd_skcipher_create_compat(algname, drvname, basename);
++ simd = simd_skcipher_create_compat(aes_algs + i, algname, drvname, basename);
+ err = PTR_ERR(simd);
+ if (IS_ERR(simd))
+ goto unregister_simds;
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index 201eb35dde37e0..735a2441ad4849 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -540,7 +540,7 @@ static int __init aes_init(void)
+ algname = aes_algs[i].base.cra_name + 2;
+ drvname = aes_algs[i].base.cra_driver_name + 2;
+ basename = aes_algs[i].base.cra_driver_name;
+- simd = simd_skcipher_create_compat(algname, drvname, basename);
++ simd = simd_skcipher_create_compat(aes_algs + i, algname, drvname, basename);
+ err = PTR_ERR(simd);
+ if (IS_ERR(simd))
+ goto unregister_simds;
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index c8cba20a4d11b2..89b331575ed493 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -196,7 +196,8 @@ config ARM64
+ select HAVE_DMA_CONTIGUOUS
+ select HAVE_DYNAMIC_FTRACE
+ select HAVE_DYNAMIC_FTRACE_WITH_ARGS \
+- if $(cc-option,-fpatchable-function-entry=2)
++ if (GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS || \
++ CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS)
+ select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS \
+ if DYNAMIC_FTRACE_WITH_ARGS && DYNAMIC_FTRACE_WITH_CALL_OPS
+ select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \
+@@ -269,12 +270,10 @@ config CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS
+ def_bool CC_IS_CLANG
+ # https://github.com/ClangBuiltLinux/linux/issues/1507
+ depends on AS_IS_GNU || (AS_IS_LLVM && (LD_IS_LLD || LD_VERSION >= 23600))
+- select HAVE_DYNAMIC_FTRACE_WITH_ARGS
+
+ config GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS
+ def_bool CC_IS_GCC
+ depends on $(cc-option,-fpatchable-function-entry=2)
+- select HAVE_DYNAMIC_FTRACE_WITH_ARGS
+
+ config 64BIT
+ def_bool y
+@@ -1080,6 +1079,7 @@ config ARM64_ERRATUM_3194386
+ * ARM Cortex-A78C erratum 3324346
+ * ARM Cortex-A78C erratum 3324347
+ * ARM Cortex-A710 erratum 3324338
++ * ARM Cortex-A715 erratum 3456084
+ * ARM Cortex-A720 erratum 3456091
+ * ARM Cortex-A725 erratum 3456106
+ * ARM Cortex-X1 erratum 3324344
+@@ -1090,6 +1090,7 @@ config ARM64_ERRATUM_3194386
+ * ARM Cortex-X925 erratum 3324334
+ * ARM Neoverse-N1 erratum 3324349
+ * ARM Neoverse N2 erratum 3324339
++ * ARM Neoverse-N3 erratum 3456111
+ * ARM Neoverse-V1 erratum 3324341
+ * ARM Neoverse V2 erratum 3324336
+ * ARM Neoverse-V3 erratum 3312417
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 5a7dfeb8e8eb55..488f8e75134959 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -94,6 +94,7 @@
+ #define ARM_CPU_PART_NEOVERSE_V3 0xD84
+ #define ARM_CPU_PART_CORTEX_X925 0xD85
+ #define ARM_CPU_PART_CORTEX_A725 0xD87
++#define ARM_CPU_PART_NEOVERSE_N3 0xD8E
+
+ #define APM_CPU_PART_XGENE 0x000
+ #define APM_CPU_VAR_POTENZA 0x00
+@@ -176,6 +177,7 @@
+ #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
+ #define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925)
+ #define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725)
++#define MIDR_NEOVERSE_N3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N3)
+ #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index a33f5996ca9f1d..428934f64e9fc1 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -1423,11 +1423,6 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
+ sign_extend64(__val, id##_##fld##_WIDTH - 1); \
+ })
+
+-#define expand_field_sign(id, fld, val) \
+- (id##_##fld##_SIGNED ? \
+- __expand_field_sign_signed(id, fld, val) : \
+- __expand_field_sign_unsigned(id, fld, val))
+-
+ #define get_idreg_field_unsigned(kvm, id, fld) \
+ ({ \
+ u64 __val = kvm_read_vm_id_reg((kvm), SYS_##id); \
+@@ -1443,20 +1438,26 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
+ #define get_idreg_field_enum(kvm, id, fld) \
+ get_idreg_field_unsigned(kvm, id, fld)
+
+-#define get_idreg_field(kvm, id, fld) \
++#define kvm_cmp_feat_signed(kvm, id, fld, op, limit) \
++ (get_idreg_field_signed((kvm), id, fld) op __expand_field_sign_signed(id, fld, limit))
++
++#define kvm_cmp_feat_unsigned(kvm, id, fld, op, limit) \
++ (get_idreg_field_unsigned((kvm), id, fld) op __expand_field_sign_unsigned(id, fld, limit))
++
++#define kvm_cmp_feat(kvm, id, fld, op, limit) \
+ (id##_##fld##_SIGNED ? \
+- get_idreg_field_signed(kvm, id, fld) : \
+- get_idreg_field_unsigned(kvm, id, fld))
++ kvm_cmp_feat_signed(kvm, id, fld, op, limit) : \
++ kvm_cmp_feat_unsigned(kvm, id, fld, op, limit))
+
+ #define kvm_has_feat(kvm, id, fld, limit) \
+- (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, limit))
++ kvm_cmp_feat(kvm, id, fld, >=, limit)
+
+ #define kvm_has_feat_enum(kvm, id, fld, val) \
+- (get_idreg_field_unsigned((kvm), id, fld) == __expand_field_sign_unsigned(id, fld, val))
++ kvm_cmp_feat_unsigned(kvm, id, fld, ==, val)
+
+ #define kvm_has_feat_range(kvm, id, fld, min, max) \
+- (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
+- get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
++ (kvm_cmp_feat(kvm, id, fld, >=, min) && \
++ kvm_cmp_feat(kvm, id, fld, <=, max))
+
+ /* Check for a given level of PAuth support */
+ #define kvm_has_pauth(k, l) \
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index dfefbdf4073a6a..a78f247029aec3 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -439,6 +439,7 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A725),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+@@ -447,8 +448,10 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X4),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X925),
++ MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N3),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3),
+diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
+index 5139a28130c088..0f7b484cb2ff20 100644
+--- a/arch/arm64/mm/trans_pgd.c
++++ b/arch/arm64/mm/trans_pgd.c
+@@ -42,14 +42,16 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
+ * the temporary mappings we use during restore.
+ */
+ __set_pte(dst_ptep, pte_mkwrite_novma(pte));
+- } else if ((debug_pagealloc_enabled() ||
+- is_kfence_address((void *)addr)) && !pte_none(pte)) {
++ } else if (!pte_none(pte)) {
+ /*
+ * debug_pagealloc will remove the PTE_VALID bit if
+ * the page isn't in use by the resume kernel. It may have
+ * been in use by the original kernel, in which case we need
+ * to put it back in our copy to do the restore.
+ *
++ * Other cases include kfence / vmalloc / memfd_secret which
++ * may call `set_direct_map_invalid_noflush()`.
++ *
+ * Before marking this entry valid, check that the pfn should
+ * be mapped.
+ */
+diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
+index b4252c357c8e23..75b366407a60a3 100644
+--- a/arch/loongarch/configs/loongson3_defconfig
++++ b/arch/loongarch/configs/loongson3_defconfig
+@@ -96,7 +96,6 @@ CONFIG_ZPOOL=y
+ CONFIG_ZSWAP=y
+ CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD=y
+ CONFIG_ZBUD=y
+-CONFIG_Z3FOLD=y
+ CONFIG_ZSMALLOC=m
+ # CONFIG_COMPAT_BRK is not set
+ CONFIG_MEMORY_HOTPLUG=y
+diff --git a/arch/parisc/include/asm/mman.h b/arch/parisc/include/asm/mman.h
+index 47c5a1991d1034..89b6beeda0b869 100644
+--- a/arch/parisc/include/asm/mman.h
++++ b/arch/parisc/include/asm/mman.h
+@@ -11,4 +11,18 @@ static inline bool arch_memory_deny_write_exec_supported(void)
+ }
+ #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+
++static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
++{
++ /*
++ * The stack on parisc grows upwards, so if userspace requests memory
++ * for a stack, mark it with VM_GROWSUP so that the stack expansion in
++ * the fault handler will work.
++ */
++ if (flags & MAP_STACK)
++ return VM_GROWSUP;
++
++ return 0;
++}
++#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
++
+ #endif /* __ASM_MMAN_H__ */
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index ab23e61a6f016a..ea57bcc21dc5fe 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -1051,8 +1051,7 @@ ENTRY_CFI(intr_save) /* for os_hpmc */
+ STREG %r16, PT_ISR(%r29)
+ STREG %r17, PT_IOR(%r29)
+
+-#if 0 && defined(CONFIG_64BIT)
+- /* Revisit when we have 64-bit code above 4Gb */
++#if defined(CONFIG_64BIT)
+ b,n intr_save2
+
+ skip_save_ior:
+@@ -1060,8 +1059,7 @@ skip_save_ior:
+ * need to adjust iasq/iaoq here in the same way we adjusted isr/ior
+ * above.
+ */
+- extrd,u,* %r8,PSW_W_BIT,1,%r1
+- cmpib,COND(=),n 1,%r1,intr_save2
++ bb,COND(>=),n %r8,PSW_W_BIT,intr_save2
+ LDREG PT_IASQ0(%r29), %r16
+ LDREG PT_IAOQ0(%r29), %r17
+ /* adjust iasq/iaoq */
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 1f51aa9c8230cc..0fa81bf1466b15 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -243,10 +243,10 @@ linux_gateway_entry:
+
+ #ifdef CONFIG_64BIT
+ ldil L%sys_call_table, %r1
+- or,= %r2,%r2,%r2
+- addil L%(sys_call_table64-sys_call_table), %r1
++ or,ev %r2,%r2,%r2
++ ldil L%sys_call_table64, %r1
+ ldo R%sys_call_table(%r1), %r19
+- or,= %r2,%r2,%r2
++ or,ev %r2,%r2,%r2
+ ldo R%sys_call_table64(%r1), %r19
+ #else
+ load32 sys_call_table, %r19
+@@ -379,10 +379,10 @@ tracesys_next:
+ extrd,u %r19,63,1,%r2 /* W hidden in bottom bit */
+
+ ldil L%sys_call_table, %r1
+- or,= %r2,%r2,%r2
+- addil L%(sys_call_table64-sys_call_table), %r1
++ or,ev %r2,%r2,%r2
++ ldil L%sys_call_table64, %r1
+ ldo R%sys_call_table(%r1), %r19
+- or,= %r2,%r2,%r2
++ or,ev %r2,%r2,%r2
+ ldo R%sys_call_table64(%r1), %r19
+ #else
+ load32 sys_call_table, %r19
+@@ -1327,6 +1327,8 @@ ENTRY(sys_call_table)
+ END(sys_call_table)
+
+ #ifdef CONFIG_64BIT
++#undef __SYSCALL_WITH_COMPAT
++#define __SYSCALL_WITH_COMPAT(nr, native, compat) __SYSCALL(nr, native)
+ .align 8
+ ENTRY(sys_call_table64)
+ #include <asm/syscall_table_64.h> /* 64-bit syscalls */
+diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
+index 544a65fda77bcb..d39284489aa263 100644
+--- a/arch/powerpc/configs/ppc64_defconfig
++++ b/arch/powerpc/configs/ppc64_defconfig
+@@ -81,7 +81,6 @@ CONFIG_MODULE_SIG_SHA512=y
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_BINFMT_MISC=m
+ CONFIG_ZSWAP=y
+-CONFIG_Z3FOLD=y
+ CONFIG_ZSMALLOC=y
+ # CONFIG_SLAB_MERGE_DEFAULT is not set
+ CONFIG_SLAB_FREELIST_RANDOM=y
+diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
+index a585c8e538ff0f..939daf6b695ef1 100644
+--- a/arch/powerpc/include/asm/vdso_datapage.h
++++ b/arch/powerpc/include/asm/vdso_datapage.h
+@@ -111,6 +111,21 @@ extern struct vdso_arch_data *vdso_data;
+ addi \ptr, \ptr, (_vdso_datapage - 999b)@l
+ .endm
+
++#include <asm/asm-offsets.h>
++#include <asm/page.h>
++
++.macro get_realdatapage ptr scratch
++ get_datapage \ptr
++#ifdef CONFIG_TIME_NS
++ lwz \scratch, VDSO_CLOCKMODE_OFFSET(\ptr)
++ xoris \scratch, \scratch, VDSO_CLOCKMODE_TIMENS@h
++ xori \scratch, \scratch, VDSO_CLOCKMODE_TIMENS@l
++ cntlzw \scratch, \scratch
++ rlwinm \scratch, \scratch, PAGE_SHIFT - 5, 1 << PAGE_SHIFT
++ add \ptr, \ptr, \scratch
++#endif
++.endm
++
+ #endif /* __ASSEMBLY__ */
+
+ #endif /* __KERNEL__ */
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index 23733282de4d9f..7b3feb6bc2103b 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -346,6 +346,8 @@ int main(void)
+ #else
+ OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map);
+ #endif
++ OFFSET(VDSO_CLOCKMODE_OFFSET, vdso_arch_data, data[0].clock_mode);
++ DEFINE(VDSO_CLOCKMODE_TIMENS, VDSO_CLOCKMODE_TIMENS);
+
+ #ifdef CONFIG_BUG
+ DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
+diff --git a/arch/powerpc/kernel/vdso/cacheflush.S b/arch/powerpc/kernel/vdso/cacheflush.S
+index 0085ae464dac9c..3b2479bd2f9a1d 100644
+--- a/arch/powerpc/kernel/vdso/cacheflush.S
++++ b/arch/powerpc/kernel/vdso/cacheflush.S
+@@ -30,7 +30,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
+ #ifdef CONFIG_PPC64
+ mflr r12
+ .cfi_register lr,r12
+- get_datapage r10
++ get_realdatapage r10, r11
+ mtlr r12
+ .cfi_restore lr
+ #endif
+diff --git a/arch/powerpc/kernel/vdso/datapage.S b/arch/powerpc/kernel/vdso/datapage.S
+index db8e167f01667e..2b19b6201a33a8 100644
+--- a/arch/powerpc/kernel/vdso/datapage.S
++++ b/arch/powerpc/kernel/vdso/datapage.S
+@@ -28,7 +28,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
+ mflr r12
+ .cfi_register lr,r12
+ mr. r4,r3
+- get_datapage r3
++ get_realdatapage r3, r11
+ mtlr r12
+ #ifdef __powerpc64__
+ addi r3,r3,CFG_SYSCALL_MAP64
+@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_get_tbfreq)
+ .cfi_startproc
+ mflr r12
+ .cfi_register lr,r12
+- get_datapage r3
++ get_realdatapage r3, r11
+ #ifndef __powerpc64__
+ lwz r4,(CFG_TB_TICKS_PER_SEC + 4)(r3)
+ #endif
+diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
+index 47f8eabd1bee31..9873b916b23704 100644
+--- a/arch/powerpc/platforms/pseries/dlpar.c
++++ b/arch/powerpc/platforms/pseries/dlpar.c
+@@ -334,23 +334,6 @@ int handle_dlpar_errorlog(struct pseries_hp_errorlog *hp_elog)
+ {
+ int rc;
+
+- /* pseries error logs are in BE format, convert to cpu type */
+- switch (hp_elog->id_type) {
+- case PSERIES_HP_ELOG_ID_DRC_COUNT:
+- hp_elog->_drc_u.drc_count =
+- be32_to_cpu(hp_elog->_drc_u.drc_count);
+- break;
+- case PSERIES_HP_ELOG_ID_DRC_INDEX:
+- hp_elog->_drc_u.drc_index =
+- be32_to_cpu(hp_elog->_drc_u.drc_index);
+- break;
+- case PSERIES_HP_ELOG_ID_DRC_IC:
+- hp_elog->_drc_u.ic.count =
+- be32_to_cpu(hp_elog->_drc_u.ic.count);
+- hp_elog->_drc_u.ic.index =
+- be32_to_cpu(hp_elog->_drc_u.ic.index);
+- }
+-
+ switch (hp_elog->resource) {
+ case PSERIES_HP_ELOG_RESOURCE_MEM:
+ rc = dlpar_memory(hp_elog);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index e62835a12d73fc..6838a0fcda296b 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -757,7 +757,7 @@ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
+ u32 drc_index;
+ int rc;
+
+- drc_index = hp_elog->_drc_u.drc_index;
++ drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+
+ lock_device_hotplug();
+
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 3fe3ddb30c04b4..38dc4f7c9296b2 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -817,16 +817,16 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
+ case PSERIES_HP_ELOG_ACTION_ADD:
+ switch (hp_elog->id_type) {
+ case PSERIES_HP_ELOG_ID_DRC_COUNT:
+- count = hp_elog->_drc_u.drc_count;
++ count = be32_to_cpu(hp_elog->_drc_u.drc_count);
+ rc = dlpar_memory_add_by_count(count);
+ break;
+ case PSERIES_HP_ELOG_ID_DRC_INDEX:
+- drc_index = hp_elog->_drc_u.drc_index;
++ drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+ rc = dlpar_memory_add_by_index(drc_index);
+ break;
+ case PSERIES_HP_ELOG_ID_DRC_IC:
+- count = hp_elog->_drc_u.ic.count;
+- drc_index = hp_elog->_drc_u.ic.index;
++ count = be32_to_cpu(hp_elog->_drc_u.ic.count);
++ drc_index = be32_to_cpu(hp_elog->_drc_u.ic.index);
+ rc = dlpar_memory_add_by_ic(count, drc_index);
+ break;
+ default:
+@@ -838,16 +838,16 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
+ case PSERIES_HP_ELOG_ACTION_REMOVE:
+ switch (hp_elog->id_type) {
+ case PSERIES_HP_ELOG_ID_DRC_COUNT:
+- count = hp_elog->_drc_u.drc_count;
++ count = be32_to_cpu(hp_elog->_drc_u.drc_count);
+ rc = dlpar_memory_remove_by_count(count);
+ break;
+ case PSERIES_HP_ELOG_ID_DRC_INDEX:
+- drc_index = hp_elog->_drc_u.drc_index;
++ drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+ rc = dlpar_memory_remove_by_index(drc_index);
+ break;
+ case PSERIES_HP_ELOG_ID_DRC_IC:
+- count = hp_elog->_drc_u.ic.count;
+- drc_index = hp_elog->_drc_u.ic.index;
++ count = be32_to_cpu(hp_elog->_drc_u.ic.count);
++ drc_index = be32_to_cpu(hp_elog->_drc_u.ic.index);
+ rc = dlpar_memory_remove_by_ic(count, drc_index);
+ break;
+ default:
+diff --git a/arch/powerpc/platforms/pseries/pmem.c b/arch/powerpc/platforms/pseries/pmem.c
+index 3c290b9ed01b39..0f1d45f32e4a44 100644
+--- a/arch/powerpc/platforms/pseries/pmem.c
++++ b/arch/powerpc/platforms/pseries/pmem.c
+@@ -121,7 +121,7 @@ int dlpar_hp_pmem(struct pseries_hp_errorlog *hp_elog)
+ return -EINVAL;
+ }
+
+- drc_index = hp_elog->_drc_u.drc_index;
++ drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+
+ lock_device_hotplug();
+
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 939ea7f6a2289c..d11c2479d8e170 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -319,6 +319,11 @@ config GENERIC_HWEIGHT
+ config FIX_EARLYCON_MEM
+ def_bool MMU
+
++config ILLEGAL_POINTER_VALUE
++ hex
++ default 0 if 32BIT
++ default 0xdead000000000000 if 64BIT
++
+ config PGTABLE_LEVELS
+ int
+ default 5 if 64BIT
+@@ -758,8 +763,7 @@ config IRQ_STACKS
+ config THREAD_SIZE_ORDER
+ int "Kernel stack size (in power-of-two numbers of page size)" if VMAP_STACK && EXPERT
+ range 0 4
+- default 1 if 32BIT && !KASAN
+- default 3 if 64BIT && KASAN
++ default 1 if 32BIT
+ default 2
+ help
+ Specify the Pages of thread stack size (from 4KB to 64KB), which also
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index fca5c6be2b8112..385b43211a71dc 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -13,7 +13,12 @@
+ #include <linux/sizes.h>
+
+ /* thread information allocation */
+-#define THREAD_SIZE_ORDER CONFIG_THREAD_SIZE_ORDER
++#ifdef CONFIG_KASAN
++#define KASAN_STACK_ORDER 1
++#else
++#define KASAN_STACK_ORDER 0
++#endif
++#define THREAD_SIZE_ORDER (CONFIG_THREAD_SIZE_ORDER + KASAN_STACK_ORDER)
+ #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
+
+ /*
+diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
+index 0ffb072be95615..0bbec1c75cd0be 100644
+--- a/arch/x86/crypto/sha256-avx2-asm.S
++++ b/arch/x86/crypto/sha256-avx2-asm.S
+@@ -592,22 +592,22 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
+ leaq K256+0*32(%rip), INP ## reuse INP as scratch reg
+ vpaddd (INP, SRND), X0, XFER
+ vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
+- FOUR_ROUNDS_AND_SCHED _XFER + 0*32
++ FOUR_ROUNDS_AND_SCHED (_XFER + 0*32)
+
+ leaq K256+1*32(%rip), INP
+ vpaddd (INP, SRND), X0, XFER
+ vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
+- FOUR_ROUNDS_AND_SCHED _XFER + 1*32
++ FOUR_ROUNDS_AND_SCHED (_XFER + 1*32)
+
+ leaq K256+2*32(%rip), INP
+ vpaddd (INP, SRND), X0, XFER
+ vmovdqa XFER, 2*32+_XFER(%rsp, SRND)
+- FOUR_ROUNDS_AND_SCHED _XFER + 2*32
++ FOUR_ROUNDS_AND_SCHED (_XFER + 2*32)
+
+ leaq K256+3*32(%rip), INP
+ vpaddd (INP, SRND), X0, XFER
+ vmovdqa XFER, 3*32+_XFER(%rsp, SRND)
+- FOUR_ROUNDS_AND_SCHED _XFER + 3*32
++ FOUR_ROUNDS_AND_SCHED (_XFER + 3*32)
+
+ add $4*32, SRND
+ cmp $3*4*32, SRND
+@@ -618,12 +618,12 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
+ leaq K256+0*32(%rip), INP
+ vpaddd (INP, SRND), X0, XFER
+ vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
+- DO_4ROUNDS _XFER + 0*32
++ DO_4ROUNDS (_XFER + 0*32)
+
+ leaq K256+1*32(%rip), INP
+ vpaddd (INP, SRND), X1, XFER
+ vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
+- DO_4ROUNDS _XFER + 1*32
++ DO_4ROUNDS (_XFER + 1*32)
+ add $2*32, SRND
+
+ vmovdqa X2, X0
+@@ -651,8 +651,8 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
+ xor SRND, SRND
+ .align 16
+ .Lloop3:
+- DO_4ROUNDS _XFER + 0*32 + 16
+- DO_4ROUNDS _XFER + 1*32 + 16
++ DO_4ROUNDS (_XFER + 0*32 + 16)
++ DO_4ROUNDS (_XFER + 1*32 + 16)
+ add $2*32, SRND
+ cmp $4*4*32, SRND
+ jb .Lloop3
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index be01823b1bb453..65ab6460aed4d7 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -41,6 +41,8 @@
+ #include <asm/desc.h>
+ #include <asm/ldt.h>
+ #include <asm/unwind.h>
++#include <asm/uprobes.h>
++#include <asm/ibt.h>
+
+ #include "perf_event.h"
+
+@@ -2816,6 +2818,46 @@ static unsigned long get_segment_base(unsigned int segment)
+ return get_desc_base(desc);
+ }
+
++#ifdef CONFIG_UPROBES
++/*
++ * Heuristic-based check if uprobe is installed at the function entry.
++ *
++ * Under assumption of user code being compiled with frame pointers,
++ * `push %rbp/%ebp` is a good indicator that we indeed are.
++ *
++ * Similarly, `endbr64` (assuming 64-bit mode) is also a common pattern.
++ * If we get this wrong, captured stack trace might have one extra bogus
++ * entry, but the rest of stack trace will still be meaningful.
++ */
++static bool is_uprobe_at_func_entry(struct pt_regs *regs)
++{
++ struct arch_uprobe *auprobe;
++
++ if (!current->utask)
++ return false;
++
++ auprobe = current->utask->auprobe;
++ if (!auprobe)
++ return false;
++
++ /* push %rbp/%ebp */
++ if (auprobe->insn[0] == 0x55)
++ return true;
++
++ /* endbr64 (64-bit only) */
++ if (user_64bit_mode(regs) && is_endbr(*(u32 *)auprobe->insn))
++ return true;
++
++ return false;
++}
++
++#else
++static bool is_uprobe_at_func_entry(struct pt_regs *regs)
++{
++ return false;
++}
++#endif /* CONFIG_UPROBES */
++
+ #ifdef CONFIG_IA32_EMULATION
+
+ #include <linux/compat.h>
+@@ -2827,6 +2869,7 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ unsigned long ss_base, cs_base;
+ struct stack_frame_ia32 frame;
+ const struct stack_frame_ia32 __user *fp;
++ u32 ret_addr;
+
+ if (user_64bit_mode(regs))
+ return 0;
+@@ -2836,6 +2879,12 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+
+ fp = compat_ptr(ss_base + regs->bp);
+ pagefault_disable();
++
++ /* see perf_callchain_user() below for why we do this */
++ if (is_uprobe_at_func_entry(regs) &&
++ !get_user(ret_addr, (const u32 __user *)regs->sp))
++ perf_callchain_store(entry, ret_addr);
++
+ while (entry->nr < entry->max_stack) {
+ if (!valid_user_frame(fp, sizeof(frame)))
+ break;
+@@ -2864,6 +2913,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ {
+ struct stack_frame frame;
+ const struct stack_frame __user *fp;
++ unsigned long ret_addr;
+
+ if (perf_guest_state()) {
+ /* TODO: We don't support guest os callchain now */
+@@ -2887,6 +2937,19 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ return;
+
+ pagefault_disable();
++
++ /*
++ * If we are called from uprobe handler, and we are indeed at the very
++ * entry to user function (which is normally a `push %rbp` instruction,
++ * under assumption of application being compiled with frame pointers),
++ * we should read return address from *regs->sp before proceeding
++ * to follow frame pointers, otherwise we'll skip immediate caller
++ * as %rbp is not yet setup.
++ */
++ if (is_uprobe_at_func_entry(regs) &&
++ !get_user(ret_addr, (const unsigned long __user *)regs->sp))
++ perf_callchain_store(entry, ret_addr);
++
+ while (entry->nr < entry->max_stack) {
+ if (!valid_user_frame(fp, sizeof(frame)))
+ break;
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 9327eb00e96d09..be2045a18e69b9 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -345,20 +345,12 @@ extern struct apic *apic;
+ * APIC drivers are probed based on how they are listed in the .apicdrivers
+ * section. So the order is important and enforced by the ordering
+ * of different apic driver files in the Makefile.
+- *
+- * For the files having two apic drivers, we use apic_drivers()
+- * to enforce the order with in them.
+ */
+ #define apic_driver(sym) \
+ static const struct apic *__apicdrivers_##sym __used \
+ __aligned(sizeof(struct apic *)) \
+ __section(".apicdrivers") = { &sym }
+
+-#define apic_drivers(sym1, sym2) \
+- static struct apic *__apicdrivers_##sym1##sym2[2] __used \
+- __aligned(sizeof(struct apic *)) \
+- __section(".apicdrivers") = { &sym1, &sym2 }
+-
+ extern struct apic *__apicdrivers[], *__apicdrivers_end[];
+
+ /*
+diff --git a/arch/x86/include/asm/fpu/signal.h b/arch/x86/include/asm/fpu/signal.h
+index 611fa41711affd..eccc75bc9c4f3d 100644
+--- a/arch/x86/include/asm/fpu/signal.h
++++ b/arch/x86/include/asm/fpu/signal.h
+@@ -29,7 +29,7 @@ fpu__alloc_mathframe(unsigned long sp, int ia32_frame,
+
+ unsigned long fpu__get_fpstate_size(void);
+
+-extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size);
++extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size, u32 pkru);
+ extern void fpu__clear_user_states(struct fpu *fpu);
+ extern bool fpu__restore_sig(void __user *buf, int ia32_frame);
+
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index 79bbe2be900eb7..ee34ab00a8d6d9 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -164,7 +164,7 @@ struct snp_guest_msg_hdr {
+
+ struct snp_guest_msg {
+ struct snp_guest_msg_hdr hdr;
+- u8 payload[4000];
++ u8 payload[PAGE_SIZE - sizeof(struct snp_guest_msg_hdr)];
+ } __packed;
+
+ struct sev_guest_platform_data {
+diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
+index 2fc7bc3863ff6f..7c488ff0c7641b 100644
+--- a/arch/x86/include/asm/syscall.h
++++ b/arch/x86/include/asm/syscall.h
+@@ -82,7 +82,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ struct pt_regs *regs,
+ unsigned long *args)
+ {
+- memcpy(args, &regs->bx, 6 * sizeof(args[0]));
++ args[0] = regs->bx;
++ args[1] = regs->cx;
++ args[2] = regs->dx;
++ args[3] = regs->si;
++ args[4] = regs->di;
++ args[5] = regs->bp;
+ }
+
+ static inline int syscall_get_arch(struct task_struct *task)
+diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c
+index f37ad3392fec91..e0308d8c4e6c27 100644
+--- a/arch/x86/kernel/apic/apic_flat_64.c
++++ b/arch/x86/kernel/apic/apic_flat_64.c
+@@ -8,129 +8,25 @@
+ * Martin Bligh, Andi Kleen, James Bottomley, John Stultz, and
+ * James Cleverdon.
+ */
+-#include <linux/cpumask.h>
+ #include <linux/export.h>
+-#include <linux/acpi.h>
+
+-#include <asm/jailhouse_para.h>
+ #include <asm/apic.h>
+
+ #include "local.h"
+
+-static struct apic apic_physflat;
+-static struct apic apic_flat;
+-
+-struct apic *apic __ro_after_init = &apic_flat;
+-EXPORT_SYMBOL_GPL(apic);
+-
+-static int flat_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+-{
+- return 1;
+-}
+-
+-static void _flat_send_IPI_mask(unsigned long mask, int vector)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- __default_send_IPI_dest_field(mask, vector, APIC_DEST_LOGICAL);
+- local_irq_restore(flags);
+-}
+-
+-static void flat_send_IPI_mask(const struct cpumask *cpumask, int vector)
+-{
+- unsigned long mask = cpumask_bits(cpumask)[0];
+-
+- _flat_send_IPI_mask(mask, vector);
+-}
+-
+-static void
+-flat_send_IPI_mask_allbutself(const struct cpumask *cpumask, int vector)
+-{
+- unsigned long mask = cpumask_bits(cpumask)[0];
+- int cpu = smp_processor_id();
+-
+- if (cpu < BITS_PER_LONG)
+- __clear_bit(cpu, &mask);
+-
+- _flat_send_IPI_mask(mask, vector);
+-}
+-
+-static u32 flat_get_apic_id(u32 x)
++static u32 physflat_get_apic_id(u32 x)
+ {
+ return (x >> 24) & 0xFF;
+ }
+
+-static int flat_probe(void)
++static int physflat_probe(void)
+ {
+ return 1;
+ }
+
+-static struct apic apic_flat __ro_after_init = {
+- .name = "flat",
+- .probe = flat_probe,
+- .acpi_madt_oem_check = flat_acpi_madt_oem_check,
+-
+- .dest_mode_logical = true,
+-
+- .disable_esr = 0,
+-
+- .init_apic_ldr = default_init_apic_ldr,
+- .cpu_present_to_apicid = default_cpu_present_to_apicid,
+-
+- .max_apic_id = 0xFE,
+- .get_apic_id = flat_get_apic_id,
+-
+- .calc_dest_apicid = apic_flat_calc_apicid,
+-
+- .send_IPI = default_send_IPI_single,
+- .send_IPI_mask = flat_send_IPI_mask,
+- .send_IPI_mask_allbutself = flat_send_IPI_mask_allbutself,
+- .send_IPI_allbutself = default_send_IPI_allbutself,
+- .send_IPI_all = default_send_IPI_all,
+- .send_IPI_self = default_send_IPI_self,
+- .nmi_to_offline_cpu = true,
+-
+- .read = native_apic_mem_read,
+- .write = native_apic_mem_write,
+- .eoi = native_apic_mem_eoi,
+- .icr_read = native_apic_icr_read,
+- .icr_write = native_apic_icr_write,
+- .wait_icr_idle = apic_mem_wait_icr_idle,
+- .safe_wait_icr_idle = apic_mem_wait_icr_idle_timeout,
+-};
+-
+-/*
+- * Physflat mode is used when there are more than 8 CPUs on a system.
+- * We cannot use logical delivery in this case because the mask
+- * overflows, so use physical mode.
+- */
+ static int physflat_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+ {
+-#ifdef CONFIG_ACPI
+- /*
+- * Quirk: some x86_64 machines can only use physical APIC mode
+- * regardless of how many processors are present (x86_64 ES7000
+- * is an example).
+- */
+- if (acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID &&
+- (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL)) {
+- printk(KERN_DEBUG "system APIC only can use physical flat");
+- return 1;
+- }
+-
+- if (!strncmp(oem_id, "IBM", 3) && !strncmp(oem_table_id, "EXA", 3)) {
+- printk(KERN_DEBUG "IBM Summit detected, will use apic physical");
+- return 1;
+- }
+-#endif
+-
+- return 0;
+-}
+-
+-static int physflat_probe(void)
+-{
+- return apic == &apic_physflat || num_possible_cpus() > 8 || jailhouse_paravirt();
++ return 1;
+ }
+
+ static struct apic apic_physflat __ro_after_init = {
+@@ -146,7 +42,7 @@ static struct apic apic_physflat __ro_after_init = {
+ .cpu_present_to_apicid = default_cpu_present_to_apicid,
+
+ .max_apic_id = 0xFE,
+- .get_apic_id = flat_get_apic_id,
++ .get_apic_id = physflat_get_apic_id,
+
+ .calc_dest_apicid = apic_default_calc_apicid,
+
+@@ -166,8 +62,7 @@ static struct apic apic_physflat __ro_after_init = {
+ .wait_icr_idle = apic_mem_wait_icr_idle,
+ .safe_wait_icr_idle = apic_mem_wait_icr_idle_timeout,
+ };
++apic_driver(apic_physflat);
+
+-/*
+- * We need to check for physflat first, so this order is important.
+- */
+-apic_drivers(apic_physflat, apic_flat);
++struct apic *apic __ro_after_init = &apic_physflat;
++EXPORT_SYMBOL_GPL(apic);
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 477b740b2f267b..d1ec1dcb637af0 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -352,27 +352,26 @@ static void ioapic_mask_entry(int apic, int pin)
+ * shared ISA-space IRQs, so we have to support them. We are super
+ * fast in the common case, and fast for shared ISA-space IRQs.
+ */
+-static int __add_pin_to_irq_node(struct mp_chip_data *data,
+- int node, int apic, int pin)
++static bool add_pin_to_irq_node(struct mp_chip_data *data, int node, int apic, int pin)
+ {
+ struct irq_pin_list *entry;
+
+- /* don't allow duplicates */
+- for_each_irq_pin(entry, data->irq_2_pin)
++ /* Don't allow duplicates */
++ for_each_irq_pin(entry, data->irq_2_pin) {
+ if (entry->apic == apic && entry->pin == pin)
+- return 0;
++ return true;
++ }
+
+ entry = kzalloc_node(sizeof(struct irq_pin_list), GFP_ATOMIC, node);
+ if (!entry) {
+- pr_err("can not alloc irq_pin_list (%d,%d,%d)\n",
+- node, apic, pin);
+- return -ENOMEM;
++ pr_err("Cannot allocate irq_pin_list (%d,%d,%d)\n", node, apic, pin);
++ return false;
+ }
++
+ entry->apic = apic;
+ entry->pin = pin;
+ list_add_tail(&entry->list, &data->irq_2_pin);
+-
+- return 0;
++ return true;
+ }
+
+ static void __remove_pin_from_irq(struct mp_chip_data *data, int apic, int pin)
+@@ -387,13 +386,6 @@ static void __remove_pin_from_irq(struct mp_chip_data *data, int apic, int pin)
+ }
+ }
+
+-static void add_pin_to_irq_node(struct mp_chip_data *data,
+- int node, int apic, int pin)
+-{
+- if (__add_pin_to_irq_node(data, node, apic, pin))
+- panic("IO-APIC: failed to add irq-pin. Can not proceed\n");
+-}
+-
+ /*
+ * Reroute an IRQ to a different pin.
+ */
+@@ -1002,8 +994,7 @@ static int alloc_isa_irq_from_domain(struct irq_domain *domain,
+ if (irq_data && irq_data->parent_data) {
+ if (!mp_check_pin_attr(irq, info))
+ return -EBUSY;
+- if (__add_pin_to_irq_node(irq_data->chip_data, node, ioapic,
+- info->ioapic.pin))
++ if (!add_pin_to_irq_node(irq_data->chip_data, node, ioapic, info->ioapic.pin))
+ return -ENOMEM;
+ } else {
+ info->flags |= X86_IRQ_ALLOC_LEGACY;
+@@ -3017,10 +3008,8 @@ int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+ return -ENOMEM;
+
+ ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, info);
+- if (ret < 0) {
+- kfree(data);
+- return ret;
+- }
++ if (ret < 0)
++ goto free_data;
+
+ INIT_LIST_HEAD(&data->irq_2_pin);
+ irq_data->hwirq = info->ioapic.pin;
+@@ -3029,7 +3018,10 @@ int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+ irq_data->chip_data = data;
+ mp_irqdomain_get_attr(mp_pin_to_gsi(ioapic, pin), data, info);
+
+- add_pin_to_irq_node(data, ioapic_alloc_attr_node(info), ioapic, pin);
++ if (!add_pin_to_irq_node(data, ioapic_alloc_attr_node(info), ioapic, pin)) {
++ ret = -ENOMEM;
++ goto free_irqs;
++ }
+
+ mp_preconfigure_entry(data);
+ mp_register_handler(virq, data->is_level);
+@@ -3044,6 +3036,12 @@ int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+ ioapic, mpc_ioapic_id(ioapic), pin, virq,
+ data->is_level, data->active_low);
+ return 0;
++
++free_irqs:
++ irq_domain_free_irqs_parent(domain, virq, nr_irqs);
++free_data:
++ kfree(data);
++ return ret;
+ }
+
+ void mp_irqdomain_free(struct irq_domain *domain, unsigned int virq,
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 45675da354f33d..468449f73a9575 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -2551,10 +2551,9 @@ static void __init srso_select_mitigation(void)
+ {
+ bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
+
+- if (cpu_mitigations_off())
+- return;
+-
+- if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
++ if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
++ cpu_mitigations_off() ||
++ srso_cmd == SRSO_CMD_OFF) {
+ if (boot_cpu_has(X86_FEATURE_SBPB))
+ x86_pred_cmd = PRED_CMD_SBPB;
+ return;
+@@ -2585,11 +2584,6 @@ static void __init srso_select_mitigation(void)
+ }
+
+ switch (srso_cmd) {
+- case SRSO_CMD_OFF:
+- if (boot_cpu_has(X86_FEATURE_SBPB))
+- x86_pred_cmd = PRED_CMD_SBPB;
+- return;
+-
+ case SRSO_CMD_MICROCODE:
+ if (has_microcode) {
+ srso_mitigation = SRSO_MITIGATION_MICROCODE;
+@@ -2643,6 +2637,8 @@ static void __init srso_select_mitigation(void)
+ pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+ }
+ break;
++ default:
++ break;
+ }
+
+ out:
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index d4e539d4e158cc..be307c9ef263d8 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1165,8 +1165,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+
+ VULNWL_INTEL(INTEL_CORE_YONAH, NO_SSB),
+
+- VULNWL_INTEL(INTEL_ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+- VULNWL_INTEL(INTEL_ATOM_AIRMONT_NP, NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(INTEL_ATOM_AIRMONT_MID, NO_SSB | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | MSBDS_ONLY),
++ VULNWL_INTEL(INTEL_ATOM_AIRMONT_NP, NO_SSB | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+ VULNWL_INTEL(INTEL_ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ VULNWL_INTEL(INTEL_ATOM_GOLDMONT_D, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 247f2225aa9f36..2b3b9e140dd41b 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -156,7 +156,7 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+ return !err;
+ }
+
+-static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf)
++static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+ if (use_xsave())
+ return xsave_to_user_sigframe(buf);
+@@ -185,7 +185,7 @@ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf)
+ * For [f]xsave state, update the SW reserved fields in the [f]xsave frame
+ * indicating the absence/presence of the extended state to the user.
+ */
+-bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
++bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size, u32 pkru)
+ {
+ struct task_struct *tsk = current;
+ struct fpstate *fpstate = tsk->thread.fpu.fpstate;
+@@ -228,7 +228,7 @@ bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
+ fpregs_restore_userregs();
+
+ pagefault_disable();
+- ret = copy_fpregs_to_sigframe(buf_fx);
++ ret = copy_fpregs_to_sigframe(buf_fx, pkru);
+ pagefault_enable();
+ fpregs_unlock();
+
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index cc0f7f70b17ba3..9c9ac606893e99 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -28,6 +28,7 @@
+ #include <asm/setup.h>
+ #include <asm/set_memory.h>
+ #include <asm/cpu.h>
++#include <asm/efi.h>
+
+ #ifdef CONFIG_ACPI
+ /*
+@@ -87,6 +88,8 @@ map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
+ {
+ #ifdef CONFIG_EFI
+ unsigned long mstart, mend;
++ void *kaddr;
++ int ret;
+
+ if (!efi_enabled(EFI_BOOT))
+ return 0;
+@@ -102,6 +105,30 @@ map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
+ if (!mstart)
+ return 0;
+
++ ret = kernel_ident_mapping_init(info, level4p, mstart, mend);
++ if (ret)
++ return ret;
++
++ kaddr = memremap(mstart, mend - mstart, MEMREMAP_WB);
++ if (!kaddr) {
++ pr_err("Could not map UEFI system table\n");
++ return -ENOMEM;
++ }
++
++ mstart = efi_config_table;
++
++ if (efi_enabled(EFI_64BIT)) {
++ efi_system_table_64_t *stbl = (efi_system_table_64_t *)kaddr;
++
++ mend = mstart + sizeof(efi_config_table_64_t) * stbl->nr_tables;
++ } else {
++ efi_system_table_32_t *stbl = (efi_system_table_32_t *)kaddr;
++
++ mend = mstart + sizeof(efi_config_table_32_t) * stbl->nr_tables;
++ }
++
++ memunmap(kaddr);
++
+ return kernel_ident_mapping_init(info, level4p, mstart, mend);
+ #endif
+ return 0;
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 31b6f5dddfc274..1f1e8e0ac5a341 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -84,6 +84,7 @@ get_sigframe(struct ksignal *ksig, struct pt_regs *regs, size_t frame_size,
+ unsigned long math_size = 0;
+ unsigned long sp = regs->sp;
+ unsigned long buf_fx = 0;
++ u32 pkru = read_pkru();
+
+ /* redzone */
+ if (!ia32_frame)
+@@ -139,7 +140,7 @@ get_sigframe(struct ksignal *ksig, struct pt_regs *regs, size_t frame_size,
+ }
+
+ /* save i387 and extended state */
+- if (!copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size))
++ if (!copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size, pkru))
+ return (void __user *)-1L;
+
+ return (void __user *)sp;
+diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c
+index 8a94053c544465..ee9453891901b7 100644
+--- a/arch/x86/kernel/signal_64.c
++++ b/arch/x86/kernel/signal_64.c
+@@ -260,13 +260,13 @@ SYSCALL_DEFINE0(rt_sigreturn)
+
+ set_current_blocked(&set);
+
+- if (!restore_sigcontext(regs, &frame->uc.uc_mcontext, uc_flags))
++ if (restore_altstack(&frame->uc.uc_stack))
+ goto badframe;
+
+- if (restore_signal_shadow_stack())
++ if (!restore_sigcontext(regs, &frame->uc.uc_mcontext, uc_flags))
+ goto badframe;
+
+- if (restore_altstack(&frame->uc.uc_stack))
++ if (restore_signal_shadow_stack())
+ goto badframe;
+
+ return regs->ax;
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index c45127265f2fa3..437e96fb497734 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -99,18 +99,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
+ for (; addr < end; addr = next) {
+ pud_t *pud = pud_page + pud_index(addr);
+ pmd_t *pmd;
++ bool use_gbpage;
+
+ next = (addr & PUD_MASK) + PUD_SIZE;
+ if (next > end)
+ next = end;
+
+- if (info->direct_gbpages) {
+- pud_t pudval;
++ /* if this is already a gbpage, this portion is already mapped */
++ if (pud_leaf(*pud))
++ continue;
++
++ /* Is using a gbpage allowed? */
++ use_gbpage = info->direct_gbpages;
+
+- if (pud_present(*pud))
+- continue;
++ /* Don't use gbpage if it maps more than the requested region. */
++ /* at the beginning: */
++ use_gbpage &= ((addr & ~PUD_MASK) == 0);
++ /* ... or at the end: */
++ use_gbpage &= ((next & ~PUD_MASK) == 0);
++
++ /* Never overwrite existing mappings */
++ use_gbpage &= !pud_present(*pud);
++
++ if (use_gbpage) {
++ pud_t pudval;
+
+- addr &= PUD_MASK;
+ pudval = __pud((addr - info->offset) | info->page_flag);
+ set_pud(pud, pudval);
+ continue;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 690ca99dfaca67..5a6098a3db57e0 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2076,7 +2076,7 @@ static void ioc_forgive_debts(struct ioc *ioc, u64 usage_us_sum, int nr_debtors,
+ struct ioc_now *now)
+ {
+ struct ioc_gq *iocg;
+- u64 dur, usage_pct, nr_cycles;
++ u64 dur, usage_pct, nr_cycles, nr_cycles_shift;
+
+ /* if no debtor, reset the cycle */
+ if (!nr_debtors) {
+@@ -2138,10 +2138,12 @@ static void ioc_forgive_debts(struct ioc *ioc, u64 usage_us_sum, int nr_debtors,
+ old_debt = iocg->abs_vdebt;
+ old_delay = iocg->delay;
+
++ nr_cycles_shift = min_t(u64, nr_cycles, BITS_PER_LONG - 1);
+ if (iocg->abs_vdebt)
+- iocg->abs_vdebt = iocg->abs_vdebt >> nr_cycles ?: 1;
++ iocg->abs_vdebt = iocg->abs_vdebt >> nr_cycles_shift ?: 1;
++
+ if (iocg->delay)
+- iocg->delay = iocg->delay >> nr_cycles ?: 1;
++ iocg->delay = iocg->delay >> nr_cycles_shift ?: 1;
+
+ iocg_kick_waitq(iocg, true, now);
+
+diff --git a/block/ioctl.c b/block/ioctl.c
+index e8e4a4190f183a..44257bdfeacbf4 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -126,7 +126,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ return -EINVAL;
+
+ filemap_invalidate_lock(bdev->bd_mapping);
+- err = truncate_bdev_range(bdev, mode, start, start + len - 1);
++ err = truncate_bdev_range(bdev, mode, start, end - 1);
+ if (err)
+ goto fail;
+
+@@ -163,7 +163,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
+ void __user *argp)
+ {
+- uint64_t start, len;
++ uint64_t start, len, end;
+ uint64_t range[2];
+ int err;
+
+@@ -178,11 +178,12 @@ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
+ len = range[1];
+ if ((start & 511) || (len & 511))
+ return -EINVAL;
+- if (start + len > bdev_nr_bytes(bdev))
++ if (check_add_overflow(start, len, &end) ||
++ end > bdev_nr_bytes(bdev))
+ return -EINVAL;
+
+ filemap_invalidate_lock(bdev->bd_mapping);
+- err = truncate_bdev_range(bdev, mode, start, start + len - 1);
++ err = truncate_bdev_range(bdev, mode, start, end - 1);
+ if (!err)
+ err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
+ GFP_KERNEL);
+diff --git a/crypto/simd.c b/crypto/simd.c
+index 2aa4f72e224fd9..b07721d1f3f6ea 100644
+--- a/crypto/simd.c
++++ b/crypto/simd.c
+@@ -136,27 +136,19 @@ static int simd_skcipher_init(struct crypto_skcipher *tfm)
+ return 0;
+ }
+
+-struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
++struct simd_skcipher_alg *simd_skcipher_create_compat(struct skcipher_alg *ialg,
++ const char *algname,
+ const char *drvname,
+ const char *basename)
+ {
+ struct simd_skcipher_alg *salg;
+- struct crypto_skcipher *tfm;
+- struct skcipher_alg *ialg;
+ struct skcipher_alg *alg;
+ int err;
+
+- tfm = crypto_alloc_skcipher(basename, CRYPTO_ALG_INTERNAL,
+- CRYPTO_ALG_INTERNAL | CRYPTO_ALG_ASYNC);
+- if (IS_ERR(tfm))
+- return ERR_CAST(tfm);
+-
+- ialg = crypto_skcipher_alg(tfm);
+-
+ salg = kzalloc(sizeof(*salg), GFP_KERNEL);
+ if (!salg) {
+ salg = ERR_PTR(-ENOMEM);
+- goto out_put_tfm;
++ goto out;
+ }
+
+ salg->ialg_name = basename;
+@@ -195,30 +187,16 @@ struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
+ if (err)
+ goto out_free_salg;
+
+-out_put_tfm:
+- crypto_free_skcipher(tfm);
++out:
+ return salg;
+
+ out_free_salg:
+ kfree(salg);
+ salg = ERR_PTR(err);
+- goto out_put_tfm;
++ goto out;
+ }
+ EXPORT_SYMBOL_GPL(simd_skcipher_create_compat);
+
+-struct simd_skcipher_alg *simd_skcipher_create(const char *algname,
+- const char *basename)
+-{
+- char drvname[CRYPTO_MAX_ALG_NAME];
+-
+- if (snprintf(drvname, CRYPTO_MAX_ALG_NAME, "simd-%s", basename) >=
+- CRYPTO_MAX_ALG_NAME)
+- return ERR_PTR(-ENAMETOOLONG);
+-
+- return simd_skcipher_create_compat(algname, drvname, basename);
+-}
+-EXPORT_SYMBOL_GPL(simd_skcipher_create);
+-
+ void simd_skcipher_free(struct simd_skcipher_alg *salg)
+ {
+ crypto_unregister_skcipher(&salg->alg);
+@@ -246,7 +224,7 @@ int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
+ algname = algs[i].base.cra_name + 2;
+ drvname = algs[i].base.cra_driver_name + 2;
+ basename = algs[i].base.cra_driver_name;
+- simd = simd_skcipher_create_compat(algname, drvname, basename);
++ simd = simd_skcipher_create_compat(algs + i, algname, drvname, basename);
+ err = PTR_ERR(simd);
+ if (IS_ERR(simd))
+ goto err_unregister;
+@@ -383,27 +361,19 @@ static int simd_aead_init(struct crypto_aead *tfm)
+ return 0;
+ }
+
+-struct simd_aead_alg *simd_aead_create_compat(const char *algname,
+- const char *drvname,
+- const char *basename)
++static struct simd_aead_alg *simd_aead_create_compat(struct aead_alg *ialg,
++ const char *algname,
++ const char *drvname,
++ const char *basename)
+ {
+ struct simd_aead_alg *salg;
+- struct crypto_aead *tfm;
+- struct aead_alg *ialg;
+ struct aead_alg *alg;
+ int err;
+
+- tfm = crypto_alloc_aead(basename, CRYPTO_ALG_INTERNAL,
+- CRYPTO_ALG_INTERNAL | CRYPTO_ALG_ASYNC);
+- if (IS_ERR(tfm))
+- return ERR_CAST(tfm);
+-
+- ialg = crypto_aead_alg(tfm);
+-
+ salg = kzalloc(sizeof(*salg), GFP_KERNEL);
+ if (!salg) {
+ salg = ERR_PTR(-ENOMEM);
+- goto out_put_tfm;
++ goto out;
+ }
+
+ salg->ialg_name = basename;
+@@ -442,36 +412,20 @@ struct simd_aead_alg *simd_aead_create_compat(const char *algname,
+ if (err)
+ goto out_free_salg;
+
+-out_put_tfm:
+- crypto_free_aead(tfm);
++out:
+ return salg;
+
+ out_free_salg:
+ kfree(salg);
+ salg = ERR_PTR(err);
+- goto out_put_tfm;
+-}
+-EXPORT_SYMBOL_GPL(simd_aead_create_compat);
+-
+-struct simd_aead_alg *simd_aead_create(const char *algname,
+- const char *basename)
+-{
+- char drvname[CRYPTO_MAX_ALG_NAME];
+-
+- if (snprintf(drvname, CRYPTO_MAX_ALG_NAME, "simd-%s", basename) >=
+- CRYPTO_MAX_ALG_NAME)
+- return ERR_PTR(-ENAMETOOLONG);
+-
+- return simd_aead_create_compat(algname, drvname, basename);
++ goto out;
+ }
+-EXPORT_SYMBOL_GPL(simd_aead_create);
+
+-void simd_aead_free(struct simd_aead_alg *salg)
++static void simd_aead_free(struct simd_aead_alg *salg)
+ {
+ crypto_unregister_aead(&salg->alg);
+ kfree(salg);
+ }
+-EXPORT_SYMBOL_GPL(simd_aead_free);
+
+ int simd_register_aeads_compat(struct aead_alg *algs, int count,
+ struct simd_aead_alg **simd_algs)
+@@ -493,7 +447,7 @@ int simd_register_aeads_compat(struct aead_alg *algs, int count,
+ algname = algs[i].base.cra_name + 2;
+ drvname = algs[i].base.cra_driver_name + 2;
+ basename = algs[i].base.cra_driver_name;
+- simd = simd_aead_create_compat(algname, drvname, basename);
++ simd = simd_aead_create_compat(algs + i, algname, drvname, basename);
+ err = PTR_ERR(simd);
+ if (IS_ERR(simd))
+ goto err_unregister;
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index de3d661163756c..ede6165e09d90d 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -60,6 +60,10 @@ static struct {
+ { IVPU_HW_IP_40XX, "intel/vpu/vpu_40xx_v0.0.bin" },
+ };
+
++/* Production fw_names from the table above */
++MODULE_FIRMWARE("intel/vpu/vpu_37xx_v0.0.bin");
++MODULE_FIRMWARE("intel/vpu/vpu_40xx_v0.0.bin");
++
+ static int ivpu_fw_request(struct ivpu_device *vdev)
+ {
+ int ret = -ENOENT;
+diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
+index 350d3a8928896d..e84720f0246e8e 100644
+--- a/drivers/acpi/acpi_pad.c
++++ b/drivers/acpi/acpi_pad.c
+@@ -136,8 +136,10 @@ static void exit_round_robin(unsigned int tsk_index)
+ {
+ struct cpumask *pad_busy_cpus = to_cpumask(pad_busy_cpus_bits);
+
+- cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus);
+- tsk_in_cpu[tsk_index] = -1;
++ if (tsk_in_cpu[tsk_index] != -1) {
++ cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus);
++ tsk_in_cpu[tsk_index] = -1;
++ }
+ }
+
+ static unsigned int idle_pct = 5; /* percentage */
+diff --git a/drivers/acpi/acpica/dbconvert.c b/drivers/acpi/acpica/dbconvert.c
+index 2b84ac093698a3..8dbab693204998 100644
+--- a/drivers/acpi/acpica/dbconvert.c
++++ b/drivers/acpi/acpica/dbconvert.c
+@@ -174,6 +174,8 @@ acpi_status acpi_db_convert_to_package(char *string, union acpi_object *object)
+ elements =
+ ACPI_ALLOCATE_ZEROED(DB_DEFAULT_PKG_ELEMENTS *
+ sizeof(union acpi_object));
++ if (!elements)
++ return (AE_NO_MEMORY);
+
+ this = string;
+ for (i = 0; i < (DB_DEFAULT_PKG_ELEMENTS - 1); i++) {
+diff --git a/drivers/acpi/acpica/exprep.c b/drivers/acpi/acpica/exprep.c
+index 08196fa17080e2..82b1fa2d201fed 100644
+--- a/drivers/acpi/acpica/exprep.c
++++ b/drivers/acpi/acpica/exprep.c
+@@ -437,6 +437,9 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
+
+ if (info->connection_node) {
+ second_desc = info->connection_node->object;
++ if (second_desc == NULL) {
++ break;
++ }
+ if (!(second_desc->common.flags & AOPOBJ_DATA_VALID)) {
+ status =
+ acpi_ds_get_buffer_arguments(second_desc);
+diff --git a/drivers/acpi/acpica/psargs.c b/drivers/acpi/acpica/psargs.c
+index 422c074ed2897b..28582adfc0acaf 100644
+--- a/drivers/acpi/acpica/psargs.c
++++ b/drivers/acpi/acpica/psargs.c
+@@ -25,6 +25,8 @@ acpi_ps_get_next_package_length(struct acpi_parse_state *parser_state);
+ static union acpi_parse_object *acpi_ps_get_next_field(struct acpi_parse_state
+ *parser_state);
+
++static void acpi_ps_free_field_list(union acpi_parse_object *start);
++
+ /*******************************************************************************
+ *
+ * FUNCTION: acpi_ps_get_next_package_length
+@@ -683,6 +685,39 @@ static union acpi_parse_object *acpi_ps_get_next_field(struct acpi_parse_state
+ return_PTR(field);
+ }
+
++/*******************************************************************************
++ *
++ * FUNCTION: acpi_ps_free_field_list
++ *
++ * PARAMETERS: start - First Op in field list
++ *
++ * RETURN: None.
++ *
++ * DESCRIPTION: Free all Op objects inside a field list.
++ *
++ ******************************************************************************/
++
++static void acpi_ps_free_field_list(union acpi_parse_object *start)
++{
++ union acpi_parse_object *cur = start;
++ union acpi_parse_object *next;
++ union acpi_parse_object *arg;
++
++ while (cur) {
++ next = cur->common.next;
++
++ /* AML_INT_CONNECTION_OP can have a single argument */
++
++ arg = acpi_ps_get_arg(cur, 0);
++ if (arg) {
++ acpi_ps_free_op(arg);
++ }
++
++ acpi_ps_free_op(cur);
++ cur = next;
++ }
++}
++
+ /*******************************************************************************
+ *
+ * FUNCTION: acpi_ps_get_next_arg
+@@ -751,6 +786,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ while (parser_state->aml < parser_state->pkg_end) {
+ field = acpi_ps_get_next_field(parser_state);
+ if (!field) {
++ if (arg) {
++ acpi_ps_free_field_list(arg);
++ }
++
+ return_ACPI_STATUS(AE_NO_MEMORY);
+ }
+
+@@ -820,6 +859,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ acpi_ps_get_next_namepath(walk_state, parser_state,
+ arg,
+ ACPI_NOT_METHOD_CALL);
++ if (ACPI_FAILURE(status)) {
++ acpi_ps_free_op(arg);
++ return_ACPI_STATUS(status);
++ }
+ } else {
+ /* Single complex argument, nothing returned */
+
+@@ -854,6 +897,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ acpi_ps_get_next_namepath(walk_state, parser_state,
+ arg,
+ ACPI_POSSIBLE_METHOD_CALL);
++ if (ACPI_FAILURE(status)) {
++ acpi_ps_free_op(arg);
++ return_ACPI_STATUS(status);
++ }
+
+ if (arg->common.aml_opcode == AML_INT_METHODCALL_OP) {
+
+diff --git a/drivers/acpi/apei/einj-cxl.c b/drivers/acpi/apei/einj-cxl.c
+index 8b8be0c90709f9..d64e2713aae4bf 100644
+--- a/drivers/acpi/apei/einj-cxl.c
++++ b/drivers/acpi/apei/einj-cxl.c
+@@ -63,7 +63,7 @@ static int cxl_dport_get_sbdf(struct pci_dev *dport_dev, u64 *sbdf)
+ seg = bridge->domain_nr;
+
+ bus = pbus->number;
+- *sbdf = (seg << 24) | (bus << 16) | dport_dev->devfn;
++ *sbdf = (seg << 24) | (bus << 16) | (dport_dev->devfn << 8);
+
+ return 0;
+ }
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index da3a879d638a8d..4f1637ed76e5c6 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -706,28 +706,35 @@ static LIST_HEAD(acpi_battery_list);
+ static LIST_HEAD(battery_hook_list);
+ static DEFINE_MUTEX(hook_mutex);
+
+-static void __battery_hook_unregister(struct acpi_battery_hook *hook, int lock)
++static void battery_hook_unregister_unlocked(struct acpi_battery_hook *hook)
+ {
+ struct acpi_battery *battery;
++
+ /*
+ * In order to remove a hook, we first need to
+ * de-register all the batteries that are registered.
+ */
+- if (lock)
+- mutex_lock(&hook_mutex);
+ list_for_each_entry(battery, &acpi_battery_list, list) {
+ if (!hook->remove_battery(battery->bat, hook))
+ power_supply_changed(battery->bat);
+ }
+- list_del(&hook->list);
+- if (lock)
+- mutex_unlock(&hook_mutex);
++ list_del_init(&hook->list);
++
+ pr_info("extension unregistered: %s\n", hook->name);
+ }
+
+ void battery_hook_unregister(struct acpi_battery_hook *hook)
+ {
+- __battery_hook_unregister(hook, 1);
++ mutex_lock(&hook_mutex);
++ /*
++ * Ignore already unregistered battery hooks. This might happen
++ * if a battery hook was previously unloaded due to an error when
++ * adding a new battery.
++ */
++ if (!list_empty(&hook->list))
++ battery_hook_unregister_unlocked(hook);
++
++ mutex_unlock(&hook_mutex);
+ }
+ EXPORT_SYMBOL_GPL(battery_hook_unregister);
+
+@@ -736,7 +743,6 @@ void battery_hook_register(struct acpi_battery_hook *hook)
+ struct acpi_battery *battery;
+
+ mutex_lock(&hook_mutex);
+- INIT_LIST_HEAD(&hook->list);
+ list_add(&hook->list, &battery_hook_list);
+ /*
+ * Now that the driver is registered, we need
+@@ -753,7 +759,7 @@ void battery_hook_register(struct acpi_battery_hook *hook)
+ * hooks.
+ */
+ pr_err("extension failed to load: %s", hook->name);
+- __battery_hook_unregister(hook, 0);
++ battery_hook_unregister_unlocked(hook);
+ goto end;
+ }
+
+@@ -807,7 +813,7 @@ static void battery_hook_add_battery(struct acpi_battery *battery)
+ */
+ pr_err("error in extension, unloading: %s",
+ hook_node->name);
+- __battery_hook_unregister(hook_node, 0);
++ battery_hook_unregister_unlocked(hook_node);
+ }
+ }
+ mutex_unlock(&hook_mutex);
+@@ -840,7 +846,7 @@ static void __exit battery_hook_exit(void)
+ * need to remove the hooks.
+ */
+ list_for_each_entry_safe(hook, ptr, &battery_hook_list, list) {
+- __battery_hook_unregister(hook, 1);
++ battery_hook_unregister(hook);
+ }
+ mutex_destroy(&hook_mutex);
+ }
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 28adea68e1cd61..5b06e236aabef2 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -103,6 +103,11 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+ (cpc)->cpc_entry.reg.space_id == \
+ ACPI_ADR_SPACE_PLATFORM_COMM)
+
++/* Check if a CPC register is in FFH */
++#define CPC_IN_FFH(cpc) ((cpc)->type == ACPI_TYPE_BUFFER && \
++ (cpc)->cpc_entry.reg.space_id == \
++ ACPI_ADR_SPACE_FIXED_HARDWARE)
++
+ /* Check if a CPC register is in SystemMemory */
+ #define CPC_IN_SYSTEM_MEMORY(cpc) ((cpc)->type == ACPI_TYPE_BUFFER && \
+ (cpc)->cpc_entry.reg.space_id == \
+@@ -1521,9 +1526,12 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
+ /* after writing CPC, transfer the ownership of PCC to platform */
+ ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
+ up_write(&pcc_ss_data->pcc_lock);
++ } else if (osc_cpc_flexible_adr_space_confirmed &&
++ CPC_SUPPORTED(epp_set_reg) && CPC_IN_FFH(epp_set_reg)) {
++ ret = cpc_write(cpu, epp_set_reg, perf_ctrls->energy_perf);
+ } else {
+ ret = -ENOTSUPP;
+- pr_debug("_CPC in PCC is not supported\n");
++ pr_debug("_CPC in PCC and _CPC in FFH are not supported\n");
+ }
+
+ return ret;
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 38d2f6e6b12b4f..25399f6dde7e27 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -783,6 +783,9 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec,
+ unsigned long tmp;
+ int ret = 0;
+
++ if (t->rdata)
++ memset(t->rdata, 0, t->rlen);
++
+ /* start transaction */
+ spin_lock_irqsave(&ec->lock, tmp);
+ /* Enable GPE for command processing (IBF=0/OBF=1) */
+@@ -819,8 +822,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
+
+ if (!ec || (!t) || (t->wlen && !t->wdata) || (t->rlen && !t->rdata))
+ return -EINVAL;
+- if (t->rdata)
+- memset(t->rdata, 0, t->rlen);
+
+ mutex_lock(&ec->mutex);
+ if (ec->global_lock) {
+@@ -847,7 +848,7 @@ static int acpi_ec_burst_enable(struct acpi_ec *ec)
+ .wdata = NULL, .rdata = &d,
+ .wlen = 0, .rlen = 1};
+
+- return acpi_ec_transaction(ec, &t);
++ return acpi_ec_transaction_unlocked(ec, &t);
+ }
+
+ static int acpi_ec_burst_disable(struct acpi_ec *ec)
+@@ -857,7 +858,7 @@ static int acpi_ec_burst_disable(struct acpi_ec *ec)
+ .wlen = 0, .rlen = 0};
+
+ return (acpi_ec_read_status(ec) & ACPI_EC_FLAG_BURST) ?
+- acpi_ec_transaction(ec, &t) : 0;
++ acpi_ec_transaction_unlocked(ec, &t) : 0;
+ }
+
+ static int acpi_ec_read(struct acpi_ec *ec, u8 address, u8 *data)
+@@ -873,6 +874,19 @@ static int acpi_ec_read(struct acpi_ec *ec, u8 address, u8 *data)
+ return result;
+ }
+
++static int acpi_ec_read_unlocked(struct acpi_ec *ec, u8 address, u8 *data)
++{
++ int result;
++ u8 d;
++ struct transaction t = {.command = ACPI_EC_COMMAND_READ,
++ .wdata = &address, .rdata = &d,
++ .wlen = 1, .rlen = 1};
++
++ result = acpi_ec_transaction_unlocked(ec, &t);
++ *data = d;
++ return result;
++}
++
+ static int acpi_ec_write(struct acpi_ec *ec, u8 address, u8 data)
+ {
+ u8 wdata[2] = { address, data };
+@@ -883,6 +897,16 @@ static int acpi_ec_write(struct acpi_ec *ec, u8 address, u8 data)
+ return acpi_ec_transaction(ec, &t);
+ }
+
++static int acpi_ec_write_unlocked(struct acpi_ec *ec, u8 address, u8 data)
++{
++ u8 wdata[2] = { address, data };
++ struct transaction t = {.command = ACPI_EC_COMMAND_WRITE,
++ .wdata = wdata, .rdata = NULL,
++ .wlen = 2, .rlen = 0};
++
++ return acpi_ec_transaction_unlocked(ec, &t);
++}
++
+ int ec_read(u8 addr, u8 *val)
+ {
+ int err;
+@@ -1323,6 +1347,7 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ struct acpi_ec *ec = handler_context;
+ int result = 0, i, bytes = bits / 8;
+ u8 *value = (u8 *)value64;
++ u32 glk;
+
+ if ((address > 0xFF) || !value || !handler_context)
+ return AE_BAD_PARAMETER;
+@@ -1330,13 +1355,25 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ if (function != ACPI_READ && function != ACPI_WRITE)
+ return AE_BAD_PARAMETER;
+
++ mutex_lock(&ec->mutex);
++
++ if (ec->global_lock) {
++ acpi_status status;
++
++ status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
++ if (ACPI_FAILURE(status)) {
++ result = -ENODEV;
++ goto unlock;
++ }
++ }
++
+ if (ec->busy_polling || bits > 8)
+ acpi_ec_burst_enable(ec);
+
+ for (i = 0; i < bytes; ++i, ++address, ++value) {
+ result = (function == ACPI_READ) ?
+- acpi_ec_read(ec, address, value) :
+- acpi_ec_write(ec, address, *value);
++ acpi_ec_read_unlocked(ec, address, value) :
++ acpi_ec_write_unlocked(ec, address, *value);
+ if (result < 0)
+ break;
+ }
+@@ -1344,6 +1381,12 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ if (ec->busy_polling || bits > 8)
+ acpi_ec_burst_disable(ec);
+
++ if (ec->global_lock)
++ acpi_release_global_lock(glk);
++
++unlock:
++ mutex_unlock(&ec->mutex);
++
+ switch (result) {
+ case -EINVAL:
+ return AE_BAD_PARAMETER;
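+
+The ec.c changes make a multi-byte operation in acpi_ec_space_handler()
+atomic with respect to other EC transactions: ec->mutex (and, if the
+firmware demands it, the ACPI global lock) is now taken once around the
+whole byte loop, and the per-byte accesses go through new _unlocked
+helpers instead of acpi_ec_transaction(), which re-acquired those locks
+for every byte. Clearing t->rdata also moves into the unlocked core so
+that the burst enable/disable commands, which now call it directly,
+still see zeroed buffers. The locking shape, schematically (result
+bookkeeping omitted):
+
+	mutex_lock(&ec->mutex);
+	if (ec->global_lock) {
+		status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
+		if (ACPI_FAILURE(status))
+			goto unlock;
+	}
+	/* ... all bytes moved via acpi_ec_read/write_unlocked() ... */
+	if (ec->global_lock)
+		acpi_release_global_lock(glk);
+unlock:
+	mutex_unlock(&ec->mutex);
+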
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index cb2aacbb93357e..3d74ebe9dbd804 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ },
+ },
++ {
++ /* Asus Vivobook X1704VAP */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_BOARD_NAME, "X1704VAP"),
++ },
++ },
+ {
+ /* Asus ExpertBook B1402CBA */
+ .matches = {
+@@ -504,17 +511,24 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ },
+ },
+ {
+- /* Asus Vivobook E1504GA */
++ /* Asus ExpertBook B2502CVA */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "E1504GA"),
++ DMI_MATCH(DMI_BOARD_NAME, "B2502CVA"),
+ },
+ },
+ {
+- /* Asus Vivobook E1504GAB */
++ /* Asus Vivobook Go E1404GA* */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "E1504GAB"),
++ DMI_MATCH(DMI_BOARD_NAME, "E1404GA"),
++ },
++ },
++ {
++ /* Asus Vivobook E1504GA* */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_BOARD_NAME, "E1504GA"),
+ },
+ },
+ {
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 75a5f559402f87..b05064578293f4 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -254,6 +254,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "PCG-FRV35"),
+ },
+ },
++ {
++ .callback = video_detect_force_vendor,
++ /* Panasonic Toughbook CF-18 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Matsushita Electric Industrial"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "CF-18"),
++ },
++ },
+
+ /*
+ * Toshiba models with Transflective display, these need to use
+@@ -836,6 +844,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ * controller board in their ACPI tables (and may even have one), but
+ * which need native backlight control nevertheless.
+ */
++ {
++ /* https://github.com/zabbly/linux/issues/26 */
++ .callback = video_detect_force_native,
++ /* Dell OptiPlex 5480 AIO */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 5480 AIO"),
++ },
++ },
+ {
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=2303936 */
+ .callback = video_detect_force_native,
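+
+The resource.c and video_detect.c hunks above are plain dmi_system_id
+table edits. An entry fires when every DMI_MATCH() in it matches; note
+that DMI_MATCH() is a substring match, which is why the single "E1504GA"
+board entry also covers "E1504GAB" and the two previous entries could be
+folded into one. The general pattern (vendor/board strings and the
+callback name here are illustrative only):
+
+	static const struct dmi_system_id example_quirks[] = {
+		{
+			.callback = example_callback,	/* hypothetical */
+			.matches = {
+				DMI_MATCH(DMI_SYS_VENDOR, "Example Vendor"),
+				DMI_MATCH(DMI_BOARD_NAME, "Example Board"),
+			},
+		},
+		{ }	/* zeroed sentinel terminates the table */
+	};
+
+	if (dmi_check_system(example_quirks))
+		;	/* at least one entry matched; callbacks have run */
+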
+diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c
+index 549ff24a982311..4edddf6bcc1507 100644
+--- a/drivers/ata/pata_serverworks.c
++++ b/drivers/ata/pata_serverworks.c
+@@ -46,10 +46,11 @@
+ #define SVWKS_CSB5_REVISION_NEW 0x92 /* min PCI_REVISION_ID for UDMA5 (A2.0) */
+ #define SVWKS_CSB6_REVISION 0xa0 /* min PCI_REVISION_ID for UDMA4 (A1.0) */
+
+-/* Seagate Barracuda ATA IV Family drives in UDMA mode 5
+- * can overrun their FIFOs when used with the CSB5 */
+-
+-static const char *csb_bad_ata100[] = {
++/*
++ * Seagate Barracuda ATA IV Family drives in UDMA mode 5
++ * can overrun their FIFOs when used with the CSB5.
++ */
++static const char * const csb_bad_ata100[] = {
+ "ST320011A",
+ "ST340016A",
+ "ST360021A",
+@@ -163,10 +164,11 @@ static unsigned int serverworks_osb4_filter(struct ata_device *adev, unsigned in
+ * @adev: ATA device
+ * @mask: Mask of proposed modes
+ *
+- * Check the blacklist and disable UDMA5 if matched
++ * Check the list of devices with broken UDMA5 and
++ * disable UDMA5 if matched.
+ */
+-
+-static unsigned int serverworks_csb_filter(struct ata_device *adev, unsigned int mask)
++static unsigned int serverworks_csb_filter(struct ata_device *adev,
++ unsigned int mask)
+ {
+ const char *p;
+ char model_num[ATA_ID_PROD_LEN + 1];
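+
+Besides the comment reflow, the pata_serverworks hunk tightens
+csb_bad_ata100 from an array of pointers to const char into a const
+array of such pointers, so the table itself can be placed in read-only
+data. The difference in one line each:
+
+	static const char *a[]        = { "x" };	/* a[0] = "y"; still compiles */
+	static const char * const b[] = { "x" };	/* b[0] = "y"; is a compile error */
+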
+diff --git a/drivers/ata/sata_sil.c b/drivers/ata/sata_sil.c
+index cc77c024828431..df095659bae0f5 100644
+--- a/drivers/ata/sata_sil.c
++++ b/drivers/ata/sata_sil.c
+@@ -128,7 +128,7 @@ static const struct pci_device_id sil_pci_tbl[] = {
+ static const struct sil_drivelist {
+ const char *product;
+ unsigned int quirk;
+-} sil_blacklist [] = {
++} sil_quirks[] = {
+ { "ST320012AS", SIL_QUIRK_MOD15WRITE },
+ { "ST330013AS", SIL_QUIRK_MOD15WRITE },
+ { "ST340017AS", SIL_QUIRK_MOD15WRITE },
+@@ -600,8 +600,8 @@ static void sil_thaw(struct ata_port *ap)
+ * list, and apply the fixups to only the specific
+ * devices/hosts/firmwares that need it.
+ *
+- * 20040111 - Seagate drives affected by the Mod15Write bug are blacklisted
+- * The Maxtor quirk is in the blacklist, but I'm keeping the original
++ * 20040111 - Seagate drives affected by the Mod15Write bug are quirked
++ * The Maxtor quirk is in sil_quirks, but I'm keeping the original
+ * pessimistic fix for the following reasons...
+ * - There seems to be less info on it, only one device gleaned off the
+ * Windows driver, maybe only one is affected. More info would be greatly
+@@ -620,9 +620,9 @@ static void sil_dev_config(struct ata_device *dev)
+
+ ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num));
+
+- for (n = 0; sil_blacklist[n].product; n++)
+- if (!strcmp(sil_blacklist[n].product, model_num)) {
+- quirks = sil_blacklist[n].quirk;
++ for (n = 0; sil_quirks[n].product; n++)
++ if (!strcmp(sil_quirks[n].product, model_num)) {
++ quirks = sil_quirks[n].quirk;
+ break;
+ }
+
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index cc9077b588d7e7..d1f4ddc576451a 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -361,6 +361,7 @@ ata_rw_frameinit(struct frame *f)
+ }
+
+ ah->cmdstat = ATA_CMD_PIO_READ | writebit | extbit;
++ dev_hold(t->ifp->nd);
+ skb->dev = t->ifp->nd;
+ }
+
+@@ -401,6 +402,8 @@ aoecmd_ata_rw(struct aoedev *d)
+ __skb_queue_head_init(&queue);
+ __skb_queue_tail(&queue, skb);
+ aoenet_xmit(&queue);
++ } else {
++ dev_put(f->t->ifp->nd);
+ }
+ return 1;
+ }
+@@ -483,10 +486,13 @@ resend(struct aoedev *d, struct frame *f)
+ memcpy(h->dst, t->addr, sizeof h->dst);
+ memcpy(h->src, t->ifp->nd->dev_addr, sizeof h->src);
+
++ dev_hold(t->ifp->nd);
+ skb->dev = t->ifp->nd;
+ skb = skb_clone(skb, GFP_ATOMIC);
+- if (skb == NULL)
++ if (skb == NULL) {
++ dev_put(t->ifp->nd);
+ return;
++ }
+ f->sent = ktime_get();
+ __skb_queue_head_init(&queue);
+ __skb_queue_tail(&queue, skb);
+@@ -617,6 +623,8 @@ probe(struct aoetgt *t)
+ __skb_queue_head_init(&queue);
+ __skb_queue_tail(&queue, skb);
+ aoenet_xmit(&queue);
++ } else {
++ dev_put(f->t->ifp->nd);
+ }
+ }
+
+@@ -1395,6 +1403,7 @@ aoecmd_ata_id(struct aoedev *d)
+ ah->cmdstat = ATA_CMD_ID_ATA;
+ ah->lba3 = 0xa0;
+
++ dev_hold(t->ifp->nd);
+ skb->dev = t->ifp->nd;
+
+ d->rttavg = RTTAVG_INIT;
+@@ -1404,6 +1413,8 @@ aoecmd_ata_id(struct aoedev *d)
+ skb = skb_clone(skb, GFP_ATOMIC);
+ if (skb)
+ f->sent = ktime_get();
++ else
++ dev_put(t->ifp->nd);
+
+ return skb;
+ }
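+
+All of the aoecmd.c hunks implement one rule: take a reference with
+dev_hold() whenever a frame's skb->dev is pointed at the target's
+network interface, and drop it with dev_put() on the paths where the
+frame is never handed to aoenet_xmit() (whose consumers release the
+reference after transmission). Without the hold, the net_device could
+be freed while an in-flight frame still referenced it. The pattern in
+isolation:
+
+	dev_hold(nd);			/* pin the device before publishing it */
+	skb->dev = nd;
+	skb = skb_clone(skb, GFP_ATOMIC);
+	if (!skb) {
+		dev_put(nd);		/* nobody will ever consume the reference */
+		return;
+	}
+	/* ... queue skb; the consuming side drops the reference ... */
+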
+diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c
+index 85b7f2bb425982..07cd308f7abf6d 100644
+--- a/drivers/bluetooth/btmrvl_sdio.c
++++ b/drivers/bluetooth/btmrvl_sdio.c
+@@ -92,7 +92,7 @@ static int btmrvl_sdio_probe_of(struct device *dev,
+ } else {
+ ret = devm_request_irq(dev, cfg->irq_bt,
+ btmrvl_wake_irq_bt,
+- 0, "bt_wake", card);
++ IRQF_NO_AUTOEN, "bt_wake", card);
+ if (ret) {
+ dev_err(dev,
+ "Failed to request irq_bt %d (%d)\n",
+@@ -101,7 +101,6 @@ static int btmrvl_sdio_probe_of(struct device *dev,
+
+ /* Configure wakeup (enabled by default) */
+ device_init_wakeup(dev, true);
+- disable_irq(cfg->irq_bt);
+ }
+ }
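+
+IRQF_NO_AUTOEN registers the interrupt in a disabled state, closing the
+window that existed with the old request-then-disable sequence, where
+the line was briefly live before disable_irq() ran (and the disable
+depth could end up unbalanced if an error path intervened). Compare:
+
+	/* before: IRQ is armed between these two calls */
+	ret = devm_request_irq(dev, irq, handler, 0, "bt_wake", card);
+	disable_irq(irq);
+
+	/* after: stays masked until an explicit enable_irq() */
+	ret = devm_request_irq(dev, irq, handler, IRQF_NO_AUTOEN,
+			       "bt_wake", card);
+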
+
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index fd7991ea76726e..7cce4abc8a0230 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1296,6 +1296,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ btrealtek_set_flag(hdev, REALTEK_ALT6_CONTINUOUS_TX_CHIP);
+
+ if (btrtl_dev->project_id == CHIP_ID_8852A ||
++ btrtl_dev->project_id == CHIP_ID_8852B ||
+ btrtl_dev->project_id == CHIP_ID_8852C)
+ set_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks);
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 1ec71a2fb63eac..93dbeb8b348d5a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -540,6 +540,8 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3592), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe122), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* Realtek 8852BE Bluetooth devices */
+ { USB_DEVICE(0x0cb8, 0xc559), .driver_info = BTUSB_REALTEK |
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index b59317e234c615..931550b9ea699d 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1713,7 +1713,7 @@ static int __alpha_pll_trion_set_rate(struct clk_hw *hw, unsigned long rate,
+ if (ret < 0)
+ return ret;
+
+- regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l);
++ regmap_update_bits(pll->clkr.regmap, PLL_L_VAL(pll), LUCID_EVO_PLL_L_VAL_MASK, l);
+ regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a);
+
+ /* Latch the PLL input */
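+
+In __alpha_pll_trion_set_rate() the L value now goes in through
+regmap_update_bits() with LUCID_EVO_PLL_L_VAL_MASK rather than a full
+regmap_write(), because on these PLLs the PLL_L_VAL register carries
+other fields besides L and a whole-register write would clobber them.
+regmap_update_bits() is, minus regmap's internal locking and caching,
+equivalent to:
+
+	unsigned int val;
+
+	regmap_read(map, reg, &val);
+	val = (val & ~mask) | (bits & mask);	/* touch only the masked field */
+	regmap_write(map, reg, val);
+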
+diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
+index bb82abeed88f3b..4acde937114af3 100644
+--- a/drivers/clk/qcom/clk-rpmh.c
++++ b/drivers/clk/qcom/clk-rpmh.c
+@@ -263,6 +263,8 @@ static int clk_rpmh_bcm_send_cmd(struct clk_rpmh *c, bool enable)
+ cmd_state = 0;
+ }
+
++ cmd_state = min(cmd_state, BCM_TCS_CMD_VOTE_MASK);
++
+ if (c->last_sent_aggr_state != cmd_state) {
+ cmd.addr = c->res_addr;
+ cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state);
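+
+clk_rpmh_bcm_send_cmd() now saturates the aggregated vote at
+BCM_TCS_CMD_VOTE_MASK before packing it, because BCM_TCS_CMD() masks
+the value into a fixed-width vote field and an oversized request would
+otherwise wrap around to a tiny vote instead of the maximum. In
+miniature (the mask width here is illustrative):
+
+	#define VOTE_MASK 0x3fff
+	u32 vote = 0x4001;
+	u32 wrapped   = vote & VOTE_MASK;		/* 0x0001: silently tiny */
+	u32 saturated = min(vote, (u32)VOTE_MASK);	/* 0x3fff: clamped high */
+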
+diff --git a/drivers/clk/qcom/dispcc-sm8250.c b/drivers/clk/qcom/dispcc-sm8250.c
+index 6665e064a55519..884bbd3fb30571 100644
+--- a/drivers/clk/qcom/dispcc-sm8250.c
++++ b/drivers/clk/qcom/dispcc-sm8250.c
+@@ -849,6 +849,7 @@ static struct clk_branch disp_cc_mdss_dp_link1_intf_clk = {
+ &disp_cc_mdss_dp_link1_div_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+@@ -884,6 +885,7 @@ static struct clk_branch disp_cc_mdss_dp_link_intf_clk = {
+ &disp_cc_mdss_dp_link_div_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+@@ -1009,6 +1011,7 @@ static struct clk_branch disp_cc_mdss_mdp_lut_clk = {
+ &disp_cc_mdss_mdp_clk_src.clkr.hw,
+ },
+ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+diff --git a/drivers/clk/qcom/gcc-sc8180x.c b/drivers/clk/qcom/gcc-sc8180x.c
+index ad135bfa4c7686..199e66954bc21b 100644
+--- a/drivers/clk/qcom/gcc-sc8180x.c
++++ b/drivers/clk/qcom/gcc-sc8180x.c
+@@ -142,6 +142,23 @@ static struct clk_alpha_pll gpll7 = {
+ },
+ };
+
++static struct clk_alpha_pll gpll9 = {
++ .offset = 0x1c000,
++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_TRION],
++ .clkr = {
++ .enable_reg = 0x52000,
++ .enable_mask = BIT(9),
++ .hw.init = &(const struct clk_init_data) {
++ .name = "gpll9",
++ .parent_data = &(const struct clk_parent_data) {
++ .fw_name = "bi_tcxo",
++ },
++ .num_parents = 1,
++ .ops = &clk_alpha_pll_fixed_trion_ops,
++ },
++ },
++};
++
+ static const struct parent_map gcc_parent_map_0[] = {
+ { P_BI_TCXO, 0 },
+ { P_GPLL0_OUT_MAIN, 1 },
+@@ -241,7 +258,7 @@ static const struct parent_map gcc_parent_map_7[] = {
+ static const struct clk_parent_data gcc_parents_7[] = {
+ { .fw_name = "bi_tcxo", },
+ { .hw = &gpll0.clkr.hw },
+- { .name = "gppl9" },
++ { .hw = &gpll9.clkr.hw },
+ { .hw = &gpll4.clkr.hw },
+ { .hw = &gpll0_out_even.clkr.hw },
+ };
+@@ -260,28 +277,6 @@ static const struct clk_parent_data gcc_parents_8[] = {
+ { .hw = &gpll0_out_even.clkr.hw },
+ };
+
+-static const struct freq_tbl ftbl_gcc_cpuss_ahb_clk_src[] = {
+- F(19200000, P_BI_TCXO, 1, 0, 0),
+- F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+- F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+- { }
+-};
+-
+-static struct clk_rcg2 gcc_cpuss_ahb_clk_src = {
+- .cmd_rcgr = 0x48014,
+- .mnd_width = 0,
+- .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
+- .freq_tbl = ftbl_gcc_cpuss_ahb_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_cpuss_ahb_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
+-};
+-
+ static const struct freq_tbl ftbl_gcc_emac_ptp_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+@@ -609,19 +604,29 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = {
+ { }
+ };
+
++static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s0_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
++};
++
+ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+ .cmd_rcgr = 0x17148,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s0_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s0_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s1_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+@@ -630,13 +635,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s1_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s1_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s2_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+@@ -645,13 +652,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s2_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s2_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s3_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+@@ -660,13 +669,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s3_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s3_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s4_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+@@ -675,13 +686,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s4_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s4_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s5_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+@@ -690,13 +703,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s5_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s5_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s6_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+@@ -705,13 +720,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s6_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s6_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = {
++ .name = "gcc_qupv3_wrap0_s7_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+@@ -720,13 +737,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap0_s7_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap0_s7_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
++ .name = "gcc_qupv3_wrap1_s0_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -735,13 +754,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap1_s0_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap1_s0_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
++ .name = "gcc_qupv3_wrap1_s1_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -750,13 +771,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap1_s1_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap1_s1_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
++ .name = "gcc_qupv3_wrap1_s2_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+@@ -765,13 +788,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap1_s2_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap1_s2_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
++ .name = "gcc_qupv3_wrap1_s3_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -780,13 +805,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap1_s3_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap1_s3_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
++ .name = "gcc_qupv3_wrap1_s4_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -795,13 +822,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap1_s4_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap1_s4_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
++ .name = "gcc_qupv3_wrap1_s5_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -810,13 +839,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap1_s5_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap1_s5_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
++ .name = "gcc_qupv3_wrap2_s0_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+@@ -825,13 +856,15 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap2_s0_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap2_s0_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
++ .name = "gcc_qupv3_wrap2_s1_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+@@ -840,28 +873,33 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap2_s1_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap2_s1_clk_src_init,
+ };
+
++static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
++ .name = "gcc_qupv3_wrap2_s2_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
++};
++
++
+ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+ .cmd_rcgr = 0x1e3a8,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap2_s2_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap2_s2_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
++ .name = "gcc_qupv3_wrap2_s3_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+@@ -870,13 +908,15 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap2_s3_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap2_s3_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
++ .name = "gcc_qupv3_wrap2_s4_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+@@ -885,13 +925,15 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap2_s4_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap2_s4_clk_src_init,
++};
++
++static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
++ .name = "gcc_qupv3_wrap2_s5_clk_src",
++ .parent_data = gcc_parents_0,
++ .num_parents = ARRAY_SIZE(gcc_parents_0),
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+@@ -900,13 +942,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+- .clkr.hw.init = &(struct clk_init_data){
+- .name = "gcc_qupv3_wrap2_s5_clk_src",
+- .parent_data = gcc_parents_0,
+- .num_parents = ARRAY_SIZE(gcc_parents_0),
+- .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
+- },
++ .clkr.hw.init = &gcc_qupv3_wrap2_s5_clk_src_init,
+ };
+
+ static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk_src[] = {
+@@ -916,7 +952,7 @@ static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk_src[] = {
+ F(25000000, P_GPLL0_OUT_MAIN, 12, 1, 2),
+ F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+ F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+- F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
++ F(202000000, P_GPLL9_OUT_MAIN, 4, 0, 0),
+ { }
+ };
+
+@@ -939,9 +975,8 @@ static const struct freq_tbl ftbl_gcc_sdcc4_apps_clk_src[] = {
+ F(400000, P_BI_TCXO, 12, 1, 4),
+ F(9600000, P_BI_TCXO, 2, 0, 0),
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+- F(37500000, P_GPLL0_OUT_MAIN, 16, 0, 0),
+ F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+- F(75000000, P_GPLL0_OUT_MAIN, 8, 0, 0),
++ F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+ { }
+ };
+
+@@ -1599,25 +1634,6 @@ static struct clk_branch gcc_cfg_noc_usb3_sec_axi_clk = {
+ },
+ };
+
+-/* For CPUSS functionality the AHB clock needs to be left enabled */
+-static struct clk_branch gcc_cpuss_ahb_clk = {
+- .halt_reg = 0x48000,
+- .halt_check = BRANCH_HALT_VOTED,
+- .clkr = {
+- .enable_reg = 0x52004,
+- .enable_mask = BIT(21),
+- .hw.init = &(struct clk_init_data){
+- .name = "gcc_cpuss_ahb_clk",
+- .parent_hws = (const struct clk_hw *[]){
+- &gcc_cpuss_ahb_clk_src.clkr.hw
+- },
+- .num_parents = 1,
+- .flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+- .ops = &clk_branch2_ops,
+- },
+- },
+-};
+-
+ static struct clk_branch gcc_cpuss_rbcpr_clk = {
+ .halt_reg = 0x48008,
+ .halt_check = BRANCH_HALT,
+@@ -3150,25 +3166,6 @@ static struct clk_branch gcc_sdcc4_apps_clk = {
+ },
+ };
+
+-/* For CPUSS functionality the SYS NOC clock needs to be left enabled */
+-static struct clk_branch gcc_sys_noc_cpuss_ahb_clk = {
+- .halt_reg = 0x4819c,
+- .halt_check = BRANCH_HALT_VOTED,
+- .clkr = {
+- .enable_reg = 0x52004,
+- .enable_mask = BIT(0),
+- .hw.init = &(struct clk_init_data){
+- .name = "gcc_sys_noc_cpuss_ahb_clk",
+- .parent_hws = (const struct clk_hw *[]){
+- &gcc_cpuss_ahb_clk_src.clkr.hw
+- },
+- .num_parents = 1,
+- .flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+- .ops = &clk_branch2_ops,
+- },
+- },
+-};
+-
+ static struct clk_branch gcc_tsif_ahb_clk = {
+ .halt_reg = 0x36004,
+ .halt_check = BRANCH_HALT,
+@@ -4284,8 +4281,6 @@ static struct clk_regmap *gcc_sc8180x_clocks[] = {
+ [GCC_CFG_NOC_USB3_MP_AXI_CLK] = &gcc_cfg_noc_usb3_mp_axi_clk.clkr,
+ [GCC_CFG_NOC_USB3_PRIM_AXI_CLK] = &gcc_cfg_noc_usb3_prim_axi_clk.clkr,
+ [GCC_CFG_NOC_USB3_SEC_AXI_CLK] = &gcc_cfg_noc_usb3_sec_axi_clk.clkr,
+- [GCC_CPUSS_AHB_CLK] = &gcc_cpuss_ahb_clk.clkr,
+- [GCC_CPUSS_AHB_CLK_SRC] = &gcc_cpuss_ahb_clk_src.clkr,
+ [GCC_CPUSS_RBCPR_CLK] = &gcc_cpuss_rbcpr_clk.clkr,
+ [GCC_DDRSS_GPU_AXI_CLK] = &gcc_ddrss_gpu_axi_clk.clkr,
+ [GCC_DISP_HF_AXI_CLK] = &gcc_disp_hf_axi_clk.clkr,
+@@ -4422,7 +4417,6 @@ static struct clk_regmap *gcc_sc8180x_clocks[] = {
+ [GCC_SDCC4_AHB_CLK] = &gcc_sdcc4_ahb_clk.clkr,
+ [GCC_SDCC4_APPS_CLK] = &gcc_sdcc4_apps_clk.clkr,
+ [GCC_SDCC4_APPS_CLK_SRC] = &gcc_sdcc4_apps_clk_src.clkr,
+- [GCC_SYS_NOC_CPUSS_AHB_CLK] = &gcc_sys_noc_cpuss_ahb_clk.clkr,
+ [GCC_TSIF_AHB_CLK] = &gcc_tsif_ahb_clk.clkr,
+ [GCC_TSIF_INACTIVITY_TIMERS_CLK] = &gcc_tsif_inactivity_timers_clk.clkr,
+ [GCC_TSIF_REF_CLK] = &gcc_tsif_ref_clk.clkr,
+@@ -4511,6 +4505,7 @@ static struct clk_regmap *gcc_sc8180x_clocks[] = {
+ [GPLL1] = &gpll1.clkr,
+ [GPLL4] = &gpll4.clkr,
+ [GPLL7] = &gpll7.clkr,
++ [GPLL9] = &gpll9.clkr,
+ };
+
+ static const struct qcom_reset_map gcc_sc8180x_resets[] = {
+@@ -4561,6 +4556,29 @@ static const struct qcom_reset_map gcc_sc8180x_resets[] = {
+ [GCC_VIDEO_AXI1_CLK_BCR] = { .reg = 0xb028, .bit = 2, .udelay = 150 },
+ };
+
++static const struct clk_rcg_dfs_data gcc_dfs_clocks[] = {
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s0_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s1_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s2_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s3_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s4_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s5_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s6_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap0_s7_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap1_s0_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap1_s1_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap1_s2_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap1_s3_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap1_s4_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap1_s5_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap2_s0_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap2_s1_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap2_s2_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap2_s3_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap2_s4_clk_src),
++ DEFINE_RCG_DFS(gcc_qupv3_wrap2_s5_clk_src),
++};
++
+ static struct gdsc *gcc_sc8180x_gdscs[] = {
+ [EMAC_GDSC] = &emac_gdsc,
+ [PCIE_0_GDSC] = &pcie_0_gdsc,
+@@ -4602,6 +4620,7 @@ MODULE_DEVICE_TABLE(of, gcc_sc8180x_match_table);
+ static int gcc_sc8180x_probe(struct platform_device *pdev)
+ {
+ struct regmap *regmap;
++ int ret;
+
+ regmap = qcom_cc_map(pdev, &gcc_sc8180x_desc);
+ if (IS_ERR(regmap))
+@@ -4623,6 +4642,11 @@ static int gcc_sc8180x_probe(struct platform_device *pdev)
+ regmap_update_bits(regmap, 0x4d110, 0x3, 0x3);
+ regmap_update_bits(regmap, 0x71028, 0x3, 0x3);
+
++ ret = qcom_cc_register_rcg_dfs(regmap, gcc_dfs_clocks,
++ ARRAY_SIZE(gcc_dfs_clocks));
++ if (ret)
++ return ret;
++
+ return qcom_cc_really_probe(&pdev->dev, &gcc_sc8180x_desc, regmap);
+ }
+
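+
+Most of the gcc-sc8180x churn above is mechanical: each QUP RCG's
+anonymous compound-literal clk_init_data is hoisted into a named
+*_clk_src_init variable. That is a prerequisite for the new
+DEFINE_RCG_DFS() table, since the DFS (dynamic frequency switching)
+registration helper must reference both the RCG and its init data by
+name; the macro expands to roughly (paraphrasing clk-rcg.h):
+
+	#define DEFINE_RCG_DFS(r) \
+		{ .rcg = &r, .init = &r##_init }
+
+so DEFINE_RCG_DFS(gcc_qupv3_wrap0_s0_clk_src) pairs
+gcc_qupv3_wrap0_s0_clk_src with gcc_qupv3_wrap0_s0_clk_src_init, and
+qcom_cc_register_rcg_dfs() in probe switches those clocks over to the
+DFS-aware ops when the hardware supports it.
+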
+diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
+index 991cd8b8d59726..1c59d70e0f96c5 100644
+--- a/drivers/clk/qcom/gcc-sm8250.c
++++ b/drivers/clk/qcom/gcc-sm8250.c
+@@ -3226,7 +3226,7 @@ static struct gdsc pcie_0_gdsc = {
+ .pd = {
+ .name = "pcie_0_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ };
+
+ static struct gdsc pcie_1_gdsc = {
+@@ -3234,7 +3234,7 @@ static struct gdsc pcie_1_gdsc = {
+ .pd = {
+ .name = "pcie_1_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ };
+
+ static struct gdsc pcie_2_gdsc = {
+@@ -3242,7 +3242,7 @@ static struct gdsc pcie_2_gdsc = {
+ .pd = {
+ .name = "pcie_2_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ };
+
+ static struct gdsc ufs_card_gdsc = {
+diff --git a/drivers/clk/qcom/gcc-sm8450.c b/drivers/clk/qcom/gcc-sm8450.c
+index 639a9a955914f4..c445c271678a5f 100644
+--- a/drivers/clk/qcom/gcc-sm8450.c
++++ b/drivers/clk/qcom/gcc-sm8450.c
+@@ -2974,7 +2974,7 @@ static struct gdsc pcie_0_gdsc = {
+ .pd = {
+ .name = "pcie_0_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ };
+
+ static struct gdsc pcie_1_gdsc = {
+@@ -2982,7 +2982,7 @@ static struct gdsc pcie_1_gdsc = {
+ .pd = {
+ .name = "pcie_1_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ };
+
+ static struct gdsc ufs_phy_gdsc = {
+diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
+index 73d2cbdc716b45..2fa7253c73b2cd 100644
+--- a/drivers/clk/rockchip/clk.c
++++ b/drivers/clk/rockchip/clk.c
+@@ -450,12 +450,13 @@ void rockchip_clk_register_branches(struct rockchip_clk_provider *ctx,
+ struct rockchip_clk_branch *list,
+ unsigned int nr_clk)
+ {
+- struct clk *clk = NULL;
++ struct clk *clk;
+ unsigned int idx;
+ unsigned long flags;
+
+ for (idx = 0; idx < nr_clk; idx++, list++) {
+ flags = list->flags;
++ clk = NULL;
+
+ /* catch simple muxes */
+ switch (list->branch_type) {
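+
+The rockchip fix moves the NULL reset of `clk` inside the loop: with a
+single initialisation outside it, an entry whose branch_type matched no
+case of the switch left `clk` pointing at the *previous* iteration's
+clock, so the error handling further down judged the wrong entry.
+Equivalent, and arguably clearer, would be scoping the variable per
+iteration:
+
+	for (idx = 0; idx < nr_clk; idx++, list++) {
+		struct clk *clk = NULL;	/* fresh for every entry */
+		/* ... switch (list->branch_type) ... */
+	}
+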
+diff --git a/drivers/clk/samsung/clk-exynos7885.c b/drivers/clk/samsung/clk-exynos7885.c
+index f7d7427a558ba0..87387d4cbf48a2 100644
+--- a/drivers/clk/samsung/clk-exynos7885.c
++++ b/drivers/clk/samsung/clk-exynos7885.c
+@@ -20,7 +20,7 @@
+ #define CLKS_NR_TOP (CLK_GOUT_FSYS_USB30DRD + 1)
+ #define CLKS_NR_CORE (CLK_GOUT_TREX_P_CORE_PCLK_P_CORE + 1)
+ #define CLKS_NR_PERI (CLK_GOUT_WDT1_PCLK + 1)
+-#define CLKS_NR_FSYS (CLK_GOUT_MMC_SDIO_SDCLKIN + 1)
++#define CLKS_NR_FSYS (CLK_MOUT_FSYS_USB30DRD_USER + 1)
+
+ /* ---- CMU_TOP ------------------------------------------------------------- */
+
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 073ca9caf52ac5..589fde37ccd7ab 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -659,7 +659,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ unsigned long max_perf, min_perf, des_perf,
+ cap_perf, lowest_nonlinear_perf;
+ struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+- struct amd_cpudata *cpudata = policy->driver_data;
++ struct amd_cpudata *cpudata;
++
++ if (!policy)
++ return;
++
++ cpudata = policy->driver_data;
+
+ if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
+ amd_pstate_update_min_max_limit(policy);
+@@ -873,11 +878,16 @@ static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
+ static void amd_pstate_update_limits(unsigned int cpu)
+ {
+ struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+- struct amd_cpudata *cpudata = policy->driver_data;
++ struct amd_cpudata *cpudata;
+ u32 prev_high = 0, cur_high = 0;
+ int ret;
+ bool highest_perf_changed = false;
+
++ if (!policy)
++ return;
++
++ cpudata = policy->driver_data;
++
+ mutex_lock(&amd_pstate_driver_lock);
+ if ((!amd_pstate_prefcore) || (!cpudata->hw_prefcore))
+ goto free_cpufreq_put;
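+
+Both amd-pstate hunks guard against cpufreq_cpu_get() returning NULL,
+which it does when no policy is bound to the CPU (the driver still
+initialising or tearing down, for instance); dereferencing
+policy->driver_data unconditionally could oops. The defensive shape:
+
+	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	struct amd_cpudata *cpudata;
+
+	if (!policy)
+		return;			/* nothing to adjust for this CPU */
+
+	cpudata = policy->driver_data;
+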
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index c0278d023cfce5..949ead440da9ec 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1623,7 +1623,7 @@ static void intel_pstate_notify_work(struct work_struct *work)
+ wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
+ }
+
+-static DEFINE_SPINLOCK(hwp_notify_lock);
++static DEFINE_RAW_SPINLOCK(hwp_notify_lock);
+ static cpumask_t hwp_intr_enable_mask;
+
+ #define HWP_GUARANTEED_PERF_CHANGE_STATUS BIT(0)
+@@ -1646,7 +1646,7 @@ void notify_hwp_interrupt(void)
+ if (!(value & status_mask))
+ return;
+
+- spin_lock_irqsave(&hwp_notify_lock, flags);
++ raw_spin_lock_irqsave(&hwp_notify_lock, flags);
+
+ if (!cpumask_test_cpu(this_cpu, &hwp_intr_enable_mask))
+ goto ack_intr;
+@@ -1654,13 +1654,13 @@ void notify_hwp_interrupt(void)
+ schedule_delayed_work(&all_cpu_data[this_cpu]->hwp_notify_work,
+ msecs_to_jiffies(10));
+
+- spin_unlock_irqrestore(&hwp_notify_lock, flags);
++ raw_spin_unlock_irqrestore(&hwp_notify_lock, flags);
+
+ return;
+
+ ack_intr:
+ wrmsrl_safe(MSR_HWP_STATUS, 0);
+- spin_unlock_irqrestore(&hwp_notify_lock, flags);
++ raw_spin_unlock_irqrestore(&hwp_notify_lock, flags);
+ }
+
+ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
+@@ -1673,9 +1673,9 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
+ /* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
+ wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
+
+- spin_lock_irq(&hwp_notify_lock);
++ raw_spin_lock_irq(&hwp_notify_lock);
+ cancel_work = cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask);
+- spin_unlock_irq(&hwp_notify_lock);
++ raw_spin_unlock_irq(&hwp_notify_lock);
+
+ if (cancel_work)
+ cancel_delayed_work_sync(&cpudata->hwp_notify_work);
+@@ -1690,10 +1690,10 @@ static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
+ if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY)) {
+ u64 interrupt_mask = HWP_GUARANTEED_PERF_CHANGE_REQ;
+
+- spin_lock_irq(&hwp_notify_lock);
++ raw_spin_lock_irq(&hwp_notify_lock);
+ INIT_DELAYED_WORK(&cpudata->hwp_notify_work, intel_pstate_notify_work);
+ cpumask_set_cpu(cpudata->cpu, &hwp_intr_enable_mask);
+- spin_unlock_irq(&hwp_notify_lock);
++ raw_spin_unlock_irq(&hwp_notify_lock);
+
+ if (cpu_feature_enabled(X86_FEATURE_HWP_HIGHEST_PERF_CHANGE))
+ interrupt_mask |= HWP_HIGHEST_PERF_CHANGE_REQ;
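+
+hwp_notify_lock becomes a raw spinlock because notify_hwp_interrupt()
+runs in hard interrupt context (the thermal interrupt path): on
+PREEMPT_RT a plain spinlock_t turns into a sleeping lock, which must
+never be taken from hardirq context, while raw_spinlock_t keeps real
+spinning semantics everywhere. The conversion is mechanical and the
+critical sections stay short and non-sleeping:
+
+	static DEFINE_RAW_SPINLOCK(hwp_notify_lock);
+
+	raw_spin_lock_irqsave(&hwp_notify_lock, flags);
+	/* only cpumask tests and work scheduling in here */
+	raw_spin_unlock_irqrestore(&hwp_notify_lock, flags);
+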
+diff --git a/drivers/cpufreq/loongson3_cpufreq.c b/drivers/cpufreq/loongson3_cpufreq.c
+index 5f79b6de127c9f..6b5e6798d9a283 100644
+--- a/drivers/cpufreq/loongson3_cpufreq.c
++++ b/drivers/cpufreq/loongson3_cpufreq.c
+@@ -176,7 +176,7 @@ static DEFINE_PER_CPU(struct loongson3_freq_data *, freq_data);
+ static inline int do_service_request(u32 id, u32 info, u32 cmd, u32 val, u32 extra)
+ {
+ int retries;
+- unsigned int cpu = smp_processor_id();
++ unsigned int cpu = raw_smp_processor_id();
+ unsigned int package = cpu_data[cpu].package;
+ union smc_message msg, last;
+
+diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
+index 568acd0aee3fa5..c974f95cd126fd 100644
+--- a/drivers/crypto/hisilicon/sgl.c
++++ b/drivers/crypto/hisilicon/sgl.c
+@@ -225,7 +225,7 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
+ dma_addr_t curr_sgl_dma = 0;
+ struct acc_hw_sge *curr_hw_sge;
+ struct scatterlist *sg;
+- int sg_n;
++ int sg_n, ret;
+
+ if (!dev || !sgl || !pool || !hw_sgl_dma || index >= pool->count)
+ return ERR_PTR(-EINVAL);
+@@ -240,14 +240,15 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
+
+ if (sg_n_mapped > pool->sge_nr) {
+ dev_err(dev, "the number of entries in input scatterlist is bigger than SGL pool setting.\n");
+- return ERR_PTR(-EINVAL);
++ ret = -EINVAL;
++ goto err_unmap;
+ }
+
+ curr_hw_sgl = acc_get_sgl(pool, index, &curr_sgl_dma);
+ if (IS_ERR(curr_hw_sgl)) {
+ dev_err(dev, "Get SGL error!\n");
+- dma_unmap_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
+- return ERR_PTR(-ENOMEM);
++ ret = -ENOMEM;
++ goto err_unmap;
+ }
+ curr_hw_sgl->entry_length_in_sgl = cpu_to_le16(pool->sge_nr);
+ curr_hw_sge = curr_hw_sgl->sge_entries;
+@@ -262,6 +263,11 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
+ *hw_sgl_dma = curr_sgl_dma;
+
+ return curr_hw_sgl;
++
++err_unmap:
++ dma_unmap_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
++
++ return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(hisi_acc_sg_buf_map_to_hw_sgl);
+
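+
+hisi_acc_sg_buf_map_to_hw_sgl() previously leaked the dma_map_sg()
+mapping on the "too many entries" path, which returned without
+unmapping, while the SGL-pool failure path open-coded its own unmap.
+Routing both failures through a single err_unmap label makes every exit
+undo exactly what had succeeded, the usual kernel unwind idiom (a
+sketch; the initial mapping-failure check is simplified):
+
+	sg_n_mapped = dma_map_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
+	if (!sg_n_mapped)
+		return ERR_PTR(-EINVAL);	/* nothing mapped, nothing to undo */
+
+	if (sg_n_mapped > pool->sge_nr) {
+		ret = -EINVAL;
+		goto err_unmap;
+	}
+	/* ... */
+	return curr_hw_sgl;
+
+err_unmap:
+	dma_unmap_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
+	return ERR_PTR(ret);
+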
+diff --git a/drivers/crypto/marvell/Kconfig b/drivers/crypto/marvell/Kconfig
+index a48591af12d025..78217577aa5403 100644
+--- a/drivers/crypto/marvell/Kconfig
++++ b/drivers/crypto/marvell/Kconfig
+@@ -28,6 +28,7 @@ config CRYPTO_DEV_OCTEONTX_CPT
+ select CRYPTO_SKCIPHER
+ select CRYPTO_HASH
+ select CRYPTO_AEAD
++ select CRYPTO_AUTHENC
+ select CRYPTO_DEV_MARVELL
+ help
+ This driver allows you to utilize the Marvell Cryptographic
+@@ -47,6 +48,7 @@ config CRYPTO_DEV_OCTEONTX2_CPT
+ select CRYPTO_SKCIPHER
+ select CRYPTO_HASH
+ select CRYPTO_AEAD
++ select CRYPTO_AUTHENC
+ select NET_DEVLINK
+ help
+ This driver allows you to utilize the Marvell Cryptographic
+diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+index 3c5d577d8f0d5e..0a1b85ad0057f1 100644
+--- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+@@ -17,7 +17,6 @@
+ #include <crypto/sha2.h>
+ #include <crypto/xts.h>
+ #include <crypto/scatterwalk.h>
+-#include <linux/rtnetlink.h>
+ #include <linux/sort.h>
+ #include <linux/module.h>
+ #include "otx_cptvf.h"
+@@ -66,6 +65,8 @@ static struct cpt_device_table ae_devices = {
+ .count = ATOMIC_INIT(0)
+ };
+
++static struct otx_cpt_sdesc *alloc_sdesc(struct crypto_shash *alg);
++
+ static inline int get_se_device(struct pci_dev **pdev, int *cpu_num)
+ {
+ int count, ret = 0;
+@@ -509,44 +510,61 @@ static int cpt_aead_init(struct crypto_aead *tfm, u8 cipher_type, u8 mac_type)
+ ctx->cipher_type = cipher_type;
+ ctx->mac_type = mac_type;
+
++ switch (ctx->mac_type) {
++ case OTX_CPT_SHA1:
++ ctx->hashalg = crypto_alloc_shash("sha1", 0, 0);
++ break;
++
++ case OTX_CPT_SHA256:
++ ctx->hashalg = crypto_alloc_shash("sha256", 0, 0);
++ break;
++
++ case OTX_CPT_SHA384:
++ ctx->hashalg = crypto_alloc_shash("sha384", 0, 0);
++ break;
++
++ case OTX_CPT_SHA512:
++ ctx->hashalg = crypto_alloc_shash("sha512", 0, 0);
++ break;
++ }
++
++ if (IS_ERR(ctx->hashalg))
++ return PTR_ERR(ctx->hashalg);
++
++ crypto_aead_set_reqsize_dma(tfm, sizeof(struct otx_cpt_req_ctx));
++
++ if (!ctx->hashalg)
++ return 0;
++
+ /*
+ * When selected cipher is NULL we use HMAC opcode instead of
+ * FLEXICRYPTO opcode therefore we don't need to use HASH algorithms
+ * for calculating ipad and opad
+ */
+ if (ctx->cipher_type != OTX_CPT_CIPHER_NULL) {
+- switch (ctx->mac_type) {
+- case OTX_CPT_SHA1:
+- ctx->hashalg = crypto_alloc_shash("sha1", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
+-
+- case OTX_CPT_SHA256:
+- ctx->hashalg = crypto_alloc_shash("sha256", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
++ int ss = crypto_shash_statesize(ctx->hashalg);
+
+- case OTX_CPT_SHA384:
+- ctx->hashalg = crypto_alloc_shash("sha384", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
++ ctx->ipad = kzalloc(ss, GFP_KERNEL);
++ if (!ctx->ipad) {
++ crypto_free_shash(ctx->hashalg);
++ return -ENOMEM;
++ }
+
+- case OTX_CPT_SHA512:
+- ctx->hashalg = crypto_alloc_shash("sha512", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
++ ctx->opad = kzalloc(ss, GFP_KERNEL);
++ if (!ctx->opad) {
++ kfree(ctx->ipad);
++ crypto_free_shash(ctx->hashalg);
++ return -ENOMEM;
+ }
+ }
+
+- crypto_aead_set_reqsize_dma(tfm, sizeof(struct otx_cpt_req_ctx));
++ ctx->sdesc = alloc_sdesc(ctx->hashalg);
++ if (!ctx->sdesc) {
++ kfree(ctx->opad);
++ kfree(ctx->ipad);
++ crypto_free_shash(ctx->hashalg);
++ return -ENOMEM;
++ }
+
+ return 0;
+ }
+@@ -602,8 +620,7 @@ static void otx_cpt_aead_exit(struct crypto_aead *tfm)
+
+ kfree(ctx->ipad);
+ kfree(ctx->opad);
+- if (ctx->hashalg)
+- crypto_free_shash(ctx->hashalg);
++ crypto_free_shash(ctx->hashalg);
+ kfree(ctx->sdesc);
+ }
+
+@@ -699,7 +716,7 @@ static inline void swap_data64(void *buf, u32 len)
+ *dst = cpu_to_be64p(src);
+ }
+
+-static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
++static int swap_pad(u8 mac_type, u8 *pad)
+ {
+ struct sha512_state *sha512;
+ struct sha256_state *sha256;
+@@ -707,22 +724,19 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+
+ switch (mac_type) {
+ case OTX_CPT_SHA1:
+- sha1 = (struct sha1_state *) in_pad;
++ sha1 = (struct sha1_state *)pad;
+ swap_data32(sha1->state, SHA1_DIGEST_SIZE);
+- memcpy(out_pad, &sha1->state, SHA1_DIGEST_SIZE);
+ break;
+
+ case OTX_CPT_SHA256:
+- sha256 = (struct sha256_state *) in_pad;
++ sha256 = (struct sha256_state *)pad;
+ swap_data32(sha256->state, SHA256_DIGEST_SIZE);
+- memcpy(out_pad, &sha256->state, SHA256_DIGEST_SIZE);
+ break;
+
+ case OTX_CPT_SHA384:
+ case OTX_CPT_SHA512:
+- sha512 = (struct sha512_state *) in_pad;
++ sha512 = (struct sha512_state *)pad;
+ swap_data64(sha512->state, SHA512_DIGEST_SIZE);
+- memcpy(out_pad, &sha512->state, SHA512_DIGEST_SIZE);
+ break;
+
+ default:
+@@ -732,55 +746,53 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+ return 0;
+ }
+
+-static int aead_hmac_init(struct crypto_aead *cipher)
++static int aead_hmac_init(struct crypto_aead *cipher,
++ struct crypto_authenc_keys *keys)
+ {
+ struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+- int state_size = crypto_shash_statesize(ctx->hashalg);
+ int ds = crypto_shash_digestsize(ctx->hashalg);
+ int bs = crypto_shash_blocksize(ctx->hashalg);
+- int authkeylen = ctx->auth_key_len;
++ int authkeylen = keys->authkeylen;
+ u8 *ipad = NULL, *opad = NULL;
+- int ret = 0, icount = 0;
++ int icount = 0;
++ int ret;
+
+- ctx->sdesc = alloc_sdesc(ctx->hashalg);
+- if (!ctx->sdesc)
+- return -ENOMEM;
++ if (authkeylen > bs) {
++ ret = crypto_shash_digest(&ctx->sdesc->shash, keys->authkey,
++ authkeylen, ctx->key);
++ if (ret)
++ return ret;
++ authkeylen = ds;
++ } else
++ memcpy(ctx->key, keys->authkey, authkeylen);
+
+- ctx->ipad = kzalloc(bs, GFP_KERNEL);
+- if (!ctx->ipad) {
+- ret = -ENOMEM;
+- goto calc_fail;
+- }
++ ctx->enc_key_len = keys->enckeylen;
++ ctx->auth_key_len = authkeylen;
+
+- ctx->opad = kzalloc(bs, GFP_KERNEL);
+- if (!ctx->opad) {
+- ret = -ENOMEM;
+- goto calc_fail;
+- }
++ if (ctx->cipher_type == OTX_CPT_CIPHER_NULL)
++ return keys->enckeylen ? -EINVAL : 0;
+
+- ipad = kzalloc(state_size, GFP_KERNEL);
+- if (!ipad) {
+- ret = -ENOMEM;
+- goto calc_fail;
++ switch (keys->enckeylen) {
++ case AES_KEYSIZE_128:
++ ctx->key_type = OTX_CPT_AES_128_BIT;
++ break;
++ case AES_KEYSIZE_192:
++ ctx->key_type = OTX_CPT_AES_192_BIT;
++ break;
++ case AES_KEYSIZE_256:
++ ctx->key_type = OTX_CPT_AES_256_BIT;
++ break;
++ default:
++ /* Invalid key length */
++ return -EINVAL;
+ }
+
+- opad = kzalloc(state_size, GFP_KERNEL);
+- if (!opad) {
+- ret = -ENOMEM;
+- goto calc_fail;
+- }
++ memcpy(ctx->key + authkeylen, keys->enckey, keys->enckeylen);
+
+- if (authkeylen > bs) {
+- ret = crypto_shash_digest(&ctx->sdesc->shash, ctx->key,
+- authkeylen, ipad);
+- if (ret)
+- goto calc_fail;
+-
+- authkeylen = ds;
+- } else {
+- memcpy(ipad, ctx->key, authkeylen);
+- }
++ ipad = ctx->ipad;
++ opad = ctx->opad;
+
++ memcpy(ipad, ctx->key, authkeylen);
+ memset(ipad + authkeylen, 0, bs - authkeylen);
+ memcpy(opad, ipad, bs);
+
+@@ -798,7 +810,7 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ crypto_shash_init(&ctx->sdesc->shash);
+ crypto_shash_update(&ctx->sdesc->shash, ipad, bs);
+ crypto_shash_export(&ctx->sdesc->shash, ipad);
+- ret = copy_pad(ctx->mac_type, ctx->ipad, ipad);
++ ret = swap_pad(ctx->mac_type, ipad);
+ if (ret)
+ goto calc_fail;
+
+@@ -806,25 +818,9 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ crypto_shash_init(&ctx->sdesc->shash);
+ crypto_shash_update(&ctx->sdesc->shash, opad, bs);
+ crypto_shash_export(&ctx->sdesc->shash, opad);
+- ret = copy_pad(ctx->mac_type, ctx->opad, opad);
+- if (ret)
+- goto calc_fail;
+-
+- kfree(ipad);
+- kfree(opad);
+-
+- return 0;
++ ret = swap_pad(ctx->mac_type, opad);
+
+ calc_fail:
+- kfree(ctx->ipad);
+- ctx->ipad = NULL;
+- kfree(ctx->opad);
+- ctx->opad = NULL;
+- kfree(ipad);
+- kfree(opad);
+- kfree(ctx->sdesc);
+- ctx->sdesc = NULL;
+-
+ return ret;
+ }
+
+@@ -832,57 +828,15 @@ static int otx_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher,
+ const unsigned char *key,
+ unsigned int keylen)
+ {
+- struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+- struct crypto_authenc_key_param *param;
+- int enckeylen = 0, authkeylen = 0;
+- struct rtattr *rta = (void *)key;
+- int status = -EINVAL;
+-
+- if (!RTA_OK(rta, keylen))
+- goto badkey;
+-
+- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+- goto badkey;
+-
+- if (RTA_PAYLOAD(rta) < sizeof(*param))
+- goto badkey;
+-
+- param = RTA_DATA(rta);
+- enckeylen = be32_to_cpu(param->enckeylen);
+- key += RTA_ALIGN(rta->rta_len);
+- keylen -= RTA_ALIGN(rta->rta_len);
+- if (keylen < enckeylen)
+- goto badkey;
++ struct crypto_authenc_keys authenc_keys;
++ int status;
+
+- if (keylen > OTX_CPT_MAX_KEY_SIZE)
+- goto badkey;
+-
+- authkeylen = keylen - enckeylen;
+- memcpy(ctx->key, key, keylen);
+-
+- switch (enckeylen) {
+- case AES_KEYSIZE_128:
+- ctx->key_type = OTX_CPT_AES_128_BIT;
+- break;
+- case AES_KEYSIZE_192:
+- ctx->key_type = OTX_CPT_AES_192_BIT;
+- break;
+- case AES_KEYSIZE_256:
+- ctx->key_type = OTX_CPT_AES_256_BIT;
+- break;
+- default:
+- /* Invalid key length */
+- goto badkey;
+- }
+-
+- ctx->enc_key_len = enckeylen;
+- ctx->auth_key_len = authkeylen;
+-
+- status = aead_hmac_init(cipher);
++ status = crypto_authenc_extractkeys(&authenc_keys, key, keylen);
+ if (status)
+ goto badkey;
+
+- return 0;
++ status = aead_hmac_init(cipher, &authenc_keys);
++
+ badkey:
+ return status;
+ }
+@@ -891,36 +845,7 @@ static int otx_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher,
+ const unsigned char *key,
+ unsigned int keylen)
+ {
+- struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+- struct crypto_authenc_key_param *param;
+- struct rtattr *rta = (void *)key;
+- int enckeylen = 0;
+-
+- if (!RTA_OK(rta, keylen))
+- goto badkey;
+-
+- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+- goto badkey;
+-
+- if (RTA_PAYLOAD(rta) < sizeof(*param))
+- goto badkey;
+-
+- param = RTA_DATA(rta);
+- enckeylen = be32_to_cpu(param->enckeylen);
+- key += RTA_ALIGN(rta->rta_len);
+- keylen -= RTA_ALIGN(rta->rta_len);
+- if (enckeylen != 0)
+- goto badkey;
+-
+- if (keylen > OTX_CPT_MAX_KEY_SIZE)
+- goto badkey;
+-
+- memcpy(ctx->key, key, keylen);
+- ctx->enc_key_len = enckeylen;
+- ctx->auth_key_len = keylen;
+- return 0;
+-badkey:
+- return -EINVAL;
++ return otx_cpt_aead_cbc_aes_sha_setkey(cipher, key, keylen);
+ }
+
+ static int otx_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
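+
+The OcteonTX setkey rework above (and the matching otx2 changes below)
+drops the hand-rolled rtattr parsing of the authenc() key blob in
+favour of crypto_authenc_extractkeys(), the crypto API helper that
+validates the blob and splits it into its two halves - hence the new
+"select CRYPTO_AUTHENC" entries in the Kconfig hunk earlier. Usage
+shape:
+
+	#include <crypto/authenc.h>
+
+	struct crypto_authenc_keys keys;
+	int err;
+
+	err = crypto_authenc_extractkeys(&keys, key, keylen);
+	if (err)
+		return err;
+	/* keys.authkey/.authkeylen and keys.enckey/.enckeylen are now valid */
+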
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+index 1604fc58dc13ec..5aa56f20f888ce 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+@@ -11,7 +11,6 @@
+ #include <crypto/xts.h>
+ #include <crypto/gcm.h>
+ #include <crypto/scatterwalk.h>
+-#include <linux/rtnetlink.h>
+ #include <linux/sort.h>
+ #include <linux/module.h>
+ #include "otx2_cptvf.h"
+@@ -55,6 +54,8 @@ static struct cpt_device_table se_devices = {
+ .count = ATOMIC_INIT(0)
+ };
+
++static struct otx2_cpt_sdesc *alloc_sdesc(struct crypto_shash *alg);
++
+ static inline int get_se_device(struct pci_dev **pdev, int *cpu_num)
+ {
+ int count;
+@@ -598,40 +599,56 @@ static int cpt_aead_init(struct crypto_aead *atfm, u8 cipher_type, u8 mac_type)
+ ctx->cipher_type = cipher_type;
+ ctx->mac_type = mac_type;
+
++ switch (ctx->mac_type) {
++ case OTX2_CPT_SHA1:
++ ctx->hashalg = crypto_alloc_shash("sha1", 0, 0);
++ break;
++
++ case OTX2_CPT_SHA256:
++ ctx->hashalg = crypto_alloc_shash("sha256", 0, 0);
++ break;
++
++ case OTX2_CPT_SHA384:
++ ctx->hashalg = crypto_alloc_shash("sha384", 0, 0);
++ break;
++
++ case OTX2_CPT_SHA512:
++ ctx->hashalg = crypto_alloc_shash("sha512", 0, 0);
++ break;
++ }
++
++ if (IS_ERR(ctx->hashalg))
++ return PTR_ERR(ctx->hashalg);
++
++ if (ctx->hashalg) {
++ ctx->sdesc = alloc_sdesc(ctx->hashalg);
++ if (!ctx->sdesc) {
++ crypto_free_shash(ctx->hashalg);
++ return -ENOMEM;
++ }
++ }
++
+ /*
+ * When selected cipher is NULL we use HMAC opcode instead of
+ * FLEXICRYPTO opcode therefore we don't need to use HASH algorithms
+ * for calculating ipad and opad
+ */
+- if (ctx->cipher_type != OTX2_CPT_CIPHER_NULL) {
+- switch (ctx->mac_type) {
+- case OTX2_CPT_SHA1:
+- ctx->hashalg = crypto_alloc_shash("sha1", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
+-
+- case OTX2_CPT_SHA256:
+- ctx->hashalg = crypto_alloc_shash("sha256", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
++ if (ctx->cipher_type != OTX2_CPT_CIPHER_NULL && ctx->hashalg) {
++ int ss = crypto_shash_statesize(ctx->hashalg);
+
+- case OTX2_CPT_SHA384:
+- ctx->hashalg = crypto_alloc_shash("sha384", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
++ ctx->ipad = kzalloc(ss, GFP_KERNEL);
++ if (!ctx->ipad) {
++ kfree(ctx->sdesc);
++ crypto_free_shash(ctx->hashalg);
++ return -ENOMEM;
++ }
+
+- case OTX2_CPT_SHA512:
+- ctx->hashalg = crypto_alloc_shash("sha512", 0,
+- CRYPTO_ALG_ASYNC);
+- if (IS_ERR(ctx->hashalg))
+- return PTR_ERR(ctx->hashalg);
+- break;
++ ctx->opad = kzalloc(ss, GFP_KERNEL);
++ if (!ctx->opad) {
++ kfree(ctx->ipad);
++ kfree(ctx->sdesc);
++ crypto_free_shash(ctx->hashalg);
++ return -ENOMEM;
+ }
+ }
+ switch (ctx->cipher_type) {
+@@ -713,8 +730,7 @@ static void otx2_cpt_aead_exit(struct crypto_aead *tfm)
+
+ kfree(ctx->ipad);
+ kfree(ctx->opad);
+- if (ctx->hashalg)
+- crypto_free_shash(ctx->hashalg);
++ crypto_free_shash(ctx->hashalg);
+ kfree(ctx->sdesc);
+
+ if (ctx->fbk_cipher) {
+@@ -788,7 +804,7 @@ static inline void swap_data64(void *buf, u32 len)
+ cpu_to_be64s(src);
+ }
+
+-static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
++static int swap_pad(u8 mac_type, u8 *pad)
+ {
+ struct sha512_state *sha512;
+ struct sha256_state *sha256;
+@@ -796,22 +812,19 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+
+ switch (mac_type) {
+ case OTX2_CPT_SHA1:
+- sha1 = (struct sha1_state *) in_pad;
++ sha1 = (struct sha1_state *)pad;
+ swap_data32(sha1->state, SHA1_DIGEST_SIZE);
+- memcpy(out_pad, &sha1->state, SHA1_DIGEST_SIZE);
+ break;
+
+ case OTX2_CPT_SHA256:
+- sha256 = (struct sha256_state *) in_pad;
++ sha256 = (struct sha256_state *)pad;
+ swap_data32(sha256->state, SHA256_DIGEST_SIZE);
+- memcpy(out_pad, &sha256->state, SHA256_DIGEST_SIZE);
+ break;
+
+ case OTX2_CPT_SHA384:
+ case OTX2_CPT_SHA512:
+- sha512 = (struct sha512_state *) in_pad;
++ sha512 = (struct sha512_state *)pad;
+ swap_data64(sha512->state, SHA512_DIGEST_SIZE);
+- memcpy(out_pad, &sha512->state, SHA512_DIGEST_SIZE);
+ break;
+
+ default:
+@@ -821,55 +834,54 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+ return 0;
+ }
+
+-static int aead_hmac_init(struct crypto_aead *cipher)
++static int aead_hmac_init(struct crypto_aead *cipher,
++ struct crypto_authenc_keys *keys)
+ {
+ struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+- int state_size = crypto_shash_statesize(ctx->hashalg);
+ int ds = crypto_shash_digestsize(ctx->hashalg);
+ int bs = crypto_shash_blocksize(ctx->hashalg);
+- int authkeylen = ctx->auth_key_len;
++ int authkeylen = keys->authkeylen;
+ u8 *ipad = NULL, *opad = NULL;
+- int ret = 0, icount = 0;
++ int icount = 0;
++ int ret;
+
+- ctx->sdesc = alloc_sdesc(ctx->hashalg);
+- if (!ctx->sdesc)
+- return -ENOMEM;
++ if (authkeylen > bs) {
++ ret = crypto_shash_digest(&ctx->sdesc->shash, keys->authkey,
++ authkeylen, ctx->key);
++ if (ret)
++ goto calc_fail;
+
+- ctx->ipad = kzalloc(bs, GFP_KERNEL);
+- if (!ctx->ipad) {
+- ret = -ENOMEM;
+- goto calc_fail;
+- }
++ authkeylen = ds;
++ } else
++ memcpy(ctx->key, keys->authkey, authkeylen);
+
+- ctx->opad = kzalloc(bs, GFP_KERNEL);
+- if (!ctx->opad) {
+- ret = -ENOMEM;
+- goto calc_fail;
+- }
++ ctx->enc_key_len = keys->enckeylen;
++ ctx->auth_key_len = authkeylen;
+
+- ipad = kzalloc(state_size, GFP_KERNEL);
+- if (!ipad) {
+- ret = -ENOMEM;
+- goto calc_fail;
+- }
++ if (ctx->cipher_type == OTX2_CPT_CIPHER_NULL)
++ return keys->enckeylen ? -EINVAL : 0;
+
+- opad = kzalloc(state_size, GFP_KERNEL);
+- if (!opad) {
+- ret = -ENOMEM;
+- goto calc_fail;
++ switch (keys->enckeylen) {
++ case AES_KEYSIZE_128:
++ ctx->key_type = OTX2_CPT_AES_128_BIT;
++ break;
++ case AES_KEYSIZE_192:
++ ctx->key_type = OTX2_CPT_AES_192_BIT;
++ break;
++ case AES_KEYSIZE_256:
++ ctx->key_type = OTX2_CPT_AES_256_BIT;
++ break;
++ default:
++ /* Invalid key length */
++ return -EINVAL;
+ }
+
+- if (authkeylen > bs) {
+- ret = crypto_shash_digest(&ctx->sdesc->shash, ctx->key,
+- authkeylen, ipad);
+- if (ret)
+- goto calc_fail;
++ memcpy(ctx->key + authkeylen, keys->enckey, keys->enckeylen);
+
+- authkeylen = ds;
+- } else {
+- memcpy(ipad, ctx->key, authkeylen);
+- }
++ ipad = ctx->ipad;
++ opad = ctx->opad;
+
++ memcpy(ipad, ctx->key, authkeylen);
+ memset(ipad + authkeylen, 0, bs - authkeylen);
+ memcpy(opad, ipad, bs);
+
+@@ -887,7 +899,7 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ crypto_shash_init(&ctx->sdesc->shash);
+ crypto_shash_update(&ctx->sdesc->shash, ipad, bs);
+ crypto_shash_export(&ctx->sdesc->shash, ipad);
+- ret = copy_pad(ctx->mac_type, ctx->ipad, ipad);
++ ret = swap_pad(ctx->mac_type, ipad);
+ if (ret)
+ goto calc_fail;
+
+@@ -895,25 +907,9 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ crypto_shash_init(&ctx->sdesc->shash);
+ crypto_shash_update(&ctx->sdesc->shash, opad, bs);
+ crypto_shash_export(&ctx->sdesc->shash, opad);
+- ret = copy_pad(ctx->mac_type, ctx->opad, opad);
+- if (ret)
+- goto calc_fail;
+-
+- kfree(ipad);
+- kfree(opad);
+-
+- return 0;
++ ret = swap_pad(ctx->mac_type, opad);
+
+ calc_fail:
+- kfree(ctx->ipad);
+- ctx->ipad = NULL;
+- kfree(ctx->opad);
+- ctx->opad = NULL;
+- kfree(ipad);
+- kfree(opad);
+- kfree(ctx->sdesc);
+- ctx->sdesc = NULL;
+-
+ return ret;
+ }
+
+@@ -921,87 +917,17 @@ static int otx2_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher,
+ const unsigned char *key,
+ unsigned int keylen)
+ {
+- struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+- struct crypto_authenc_key_param *param;
+- int enckeylen = 0, authkeylen = 0;
+- struct rtattr *rta = (void *)key;
+-
+- if (!RTA_OK(rta, keylen))
+- return -EINVAL;
++ struct crypto_authenc_keys authenc_keys;
+
+- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+- return -EINVAL;
+-
+- if (RTA_PAYLOAD(rta) < sizeof(*param))
+- return -EINVAL;
+-
+- param = RTA_DATA(rta);
+- enckeylen = be32_to_cpu(param->enckeylen);
+- key += RTA_ALIGN(rta->rta_len);
+- keylen -= RTA_ALIGN(rta->rta_len);
+- if (keylen < enckeylen)
+- return -EINVAL;
+-
+- if (keylen > OTX2_CPT_MAX_KEY_SIZE)
+- return -EINVAL;
+-
+- authkeylen = keylen - enckeylen;
+- memcpy(ctx->key, key, keylen);
+-
+- switch (enckeylen) {
+- case AES_KEYSIZE_128:
+- ctx->key_type = OTX2_CPT_AES_128_BIT;
+- break;
+- case AES_KEYSIZE_192:
+- ctx->key_type = OTX2_CPT_AES_192_BIT;
+- break;
+- case AES_KEYSIZE_256:
+- ctx->key_type = OTX2_CPT_AES_256_BIT;
+- break;
+- default:
+- /* Invalid key length */
+- return -EINVAL;
+- }
+-
+- ctx->enc_key_len = enckeylen;
+- ctx->auth_key_len = authkeylen;
+-
+- return aead_hmac_init(cipher);
++ return crypto_authenc_extractkeys(&authenc_keys, key, keylen) ?:
++ aead_hmac_init(cipher, &authenc_keys);
+ }
+
+ static int otx2_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher,
+ const unsigned char *key,
+ unsigned int keylen)
+ {
+- struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+- struct crypto_authenc_key_param *param;
+- struct rtattr *rta = (void *)key;
+- int enckeylen = 0;
+-
+- if (!RTA_OK(rta, keylen))
+- return -EINVAL;
+-
+- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+- return -EINVAL;
+-
+- if (RTA_PAYLOAD(rta) < sizeof(*param))
+- return -EINVAL;
+-
+- param = RTA_DATA(rta);
+- enckeylen = be32_to_cpu(param->enckeylen);
+- key += RTA_ALIGN(rta->rta_len);
+- keylen -= RTA_ALIGN(rta->rta_len);
+- if (enckeylen != 0)
+- return -EINVAL;
+-
+- if (keylen > OTX2_CPT_MAX_KEY_SIZE)
+- return -EINVAL;
+-
+- memcpy(ctx->key, key, keylen);
+- ctx->enc_key_len = enckeylen;
+- ctx->auth_key_len = keylen;
+-
+- return 0;
++ return otx2_cpt_aead_cbc_aes_sha_setkey(cipher, key, keylen);
+ }
+
+ static int otx2_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
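The reworked aead_hmac_init() is standard RFC 2104 key preparation: a key longer than the hash block size is digested down first, the result is zero-padded to the block size, and the inner/outer pads are derived by XOR with 0x36 and 0x5c; the driver then hashes one block of each pad, exports the partial state, and byte-swaps it (swap_pad) so the CPT engine can resume from it. A self-contained sketch of the pad derivation, where digest() is a caller-supplied one-shot hash and all names are illustrative:

    #include <stdint.h>
    #include <string.h>

    typedef void (*digest_fn)(const uint8_t *msg, size_t len, uint8_t *out);

    /* Derive the RFC 2104 inner/outer pads for a hash with block size
     * bs; digest() writes its (<= bs byte) digest into the output. */
    static void hmac_pads(digest_fn digest, size_t bs,
                          const uint8_t *key, size_t keylen,
                          uint8_t *ipad, uint8_t *opad)  /* bs bytes each */
    {
        uint8_t k[128] = { 0 };  /* assumes bs <= 128, true up to SHA-512 */
        size_t i;

        if (keylen > bs)
            digest(key, keylen, k);   /* long keys are hashed down first */
        else
            memcpy(k, key, keylen);
        /* the rest of k is already the zero padding up to bs */

        for (i = 0; i < bs; i++) {
            ipad[i] = k[i] ^ 0x36;    /* inner pad constant */
            opad[i] = k[i] ^ 0x5c;    /* outer pad constant */
        }
    }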
+diff --git a/drivers/firmware/sysfb.c b/drivers/firmware/sysfb.c
+index 02a07d3d0d40a9..a3df782fa687b0 100644
+--- a/drivers/firmware/sysfb.c
++++ b/drivers/firmware/sysfb.c
+@@ -67,9 +67,11 @@ static bool sysfb_unregister(void)
+ void sysfb_disable(struct device *dev)
+ {
+ struct screen_info *si = &screen_info;
++ struct device *parent;
+
+ mutex_lock(&disable_lock);
+- if (!dev || dev == sysfb_parent_dev(si)) {
++ parent = sysfb_parent_dev(si);
++ if (!dev || !parent || dev == parent) {
+ sysfb_unregister();
+ disabled = true;
+ }
+diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
+index c1590d3aa9cb78..c3a1dc3449617f 100644
+--- a/drivers/firmware/tegra/bpmp.c
++++ b/drivers/firmware/tegra/bpmp.c
+@@ -24,12 +24,6 @@
+ #define MSG_RING BIT(1)
+ #define TAG_SZ 32
+
+-static inline struct tegra_bpmp *
+-mbox_client_to_bpmp(struct mbox_client *client)
+-{
+- return container_of(client, struct tegra_bpmp, mbox.client);
+-}
+-
+ static inline const struct tegra_bpmp_ops *
+ channel_to_ops(struct tegra_bpmp_channel *channel)
+ {
+diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
+index 1d0175d6350b78..0ecfa7de5ce26e 100644
+--- a/drivers/gpio/gpio-davinci.c
++++ b/drivers/gpio/gpio-davinci.c
+@@ -289,7 +289,7 @@ static int davinci_gpio_probe(struct platform_device *pdev)
+ * serve as EDMA event triggers.
+ */
+
+-static void gpio_irq_disable(struct irq_data *d)
++static void gpio_irq_mask(struct irq_data *d)
+ {
+ struct davinci_gpio_regs __iomem *g = irq2regs(d);
+ uintptr_t mask = (uintptr_t)irq_data_get_irq_handler_data(d);
+@@ -298,7 +298,7 @@ static void gpio_irq_disable(struct irq_data *d)
+ writel_relaxed(mask, &g->clr_rising);
+ }
+
+-static void gpio_irq_enable(struct irq_data *d)
++static void gpio_irq_unmask(struct irq_data *d)
+ {
+ struct davinci_gpio_regs __iomem *g = irq2regs(d);
+ uintptr_t mask = (uintptr_t)irq_data_get_irq_handler_data(d);
+@@ -324,8 +324,8 @@ static int gpio_irq_type(struct irq_data *d, unsigned trigger)
+
+ static struct irq_chip gpio_irqchip = {
+ .name = "GPIO",
+- .irq_enable = gpio_irq_enable,
+- .irq_disable = gpio_irq_disable,
++ .irq_unmask = gpio_irq_unmask,
++ .irq_mask = gpio_irq_mask,
+ .irq_set_type = gpio_irq_type,
+ .flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE,
+ };
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 3a9668cc100df5..148bcfbf98e024 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -115,12 +115,12 @@ const char *gpiod_get_label(struct gpio_desc *desc)
+ srcu_read_lock_held(&desc->gdev->desc_srcu));
+
+ if (test_bit(FLAG_USED_AS_IRQ, &flags))
+- return label->str ?: "interrupt";
++ return label ? label->str : "interrupt";
+
+ if (!test_bit(FLAG_REQUESTED, &flags))
+ return NULL;
+
+- return label->str;
++ return label ? label->str : NULL;
+ }
+
+ static void desc_free_label(struct rcu_head *rh)
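One detail in the gpiolib hunk deserves a note: GCC's a ?: b shorthand only falls back to b when the evaluated a is zero, so label->str ?: "interrupt" dereferences label unconditionally, and a NULL label oopses before the operator can help. A trivial illustration of the difference:

    #include <stdio.h>

    struct label { const char *str; };

    int main(void)
    {
        struct label *l = NULL;

        /* const char *s = l->str ?: "fallback";
         *   would crash: l is dereferenced before ?: sees a value */
        const char *s = l ? l->str : "fallback";  /* the safe form */

        printf("%s\n", s);
        return 0;
    }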
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+index 19158cc30f31f2..43f3e72fb247a7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+@@ -80,6 +80,9 @@ static void aca_banks_release(struct aca_banks *banks)
+ {
+ struct aca_bank_node *node, *tmp;
+
++ if (list_empty(&banks->list))
++ return;
++
+ list_for_each_entry_safe(node, tmp, &banks->list, node) {
+ list_del(&node->node);
+ kvfree(node);
+@@ -562,9 +565,13 @@ static void aca_error_fini(struct aca_error *aerr)
+ struct aca_bank_error *bank_error, *tmp;
+
+ mutex_lock(&aerr->lock);
++ if (list_empty(&aerr->list))
++ goto out_unlock;
++
+ list_for_each_entry_safe(bank_error, tmp, &aerr->list, node)
+ aca_bank_error_remove(aerr, bank_error);
+
++out_unlock:
+ mutex_destroy(&aerr->lock);
+ }
+
+@@ -680,6 +687,9 @@ static void aca_manager_fini(struct aca_handle_manager *mgr)
+ {
+ struct aca_handle *handle, *tmp;
+
++ if (list_empty(&mgr->list))
++ return;
++
+ list_for_each_entry_safe(handle, tmp, &mgr->list, node)
+ amdgpu_aca_remove_handle(handle);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 03205e3c374638..c272461d70a9a0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -364,15 +364,15 @@ int amdgpu_amdkfd_alloc_gtt_mem(struct amdgpu_device *adev, size_t size,
+ return r;
+ }
+
+-void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void *mem_obj)
++void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void **mem_obj)
+ {
+- struct amdgpu_bo *bo = (struct amdgpu_bo *) mem_obj;
++ struct amdgpu_bo **bo = (struct amdgpu_bo **) mem_obj;
+
+- amdgpu_bo_reserve(bo, true);
+- amdgpu_bo_kunmap(bo);
+- amdgpu_bo_unpin(bo);
+- amdgpu_bo_unreserve(bo);
+- amdgpu_bo_unref(&(bo));
++ amdgpu_bo_reserve(*bo, true);
++ amdgpu_bo_kunmap(*bo);
++ amdgpu_bo_unpin(*bo);
++ amdgpu_bo_unreserve(*bo);
++ amdgpu_bo_unref(bo);
+ }
+
+ int amdgpu_amdkfd_alloc_gws(struct amdgpu_device *adev, size_t size,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index e7bb1ca3580142..8b4108a23636a3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -235,7 +235,7 @@ int amdgpu_amdkfd_bo_validate_and_fence(struct amdgpu_bo *bo,
+ int amdgpu_amdkfd_alloc_gtt_mem(struct amdgpu_device *adev, size_t size,
+ void **mem_obj, uint64_t *gpu_addr,
+ void **cpu_ptr, bool mqd_gfx9);
+-void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void *mem_obj);
++void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void **mem_obj);
+ int amdgpu_amdkfd_alloc_gws(struct amdgpu_device *adev, size_t size,
+ void **mem_obj);
+ void amdgpu_amdkfd_free_gws(struct amdgpu_device *adev, void *mem_obj);
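Changing the mem_obj argument from void * to void ** lets the free routine clear the caller's handle itself, following the same convention as amdgpu_bo_unref(&bo), so a stale handle cannot be freed twice or dereferenced later. The idiom in minimal userspace form:

    #include <stdlib.h>

    /* Free *handle and clear it: a repeated call becomes a harmless
     * no-op and the caller cannot reuse a dangling pointer. */
    static void obj_free(void **handle)
    {
        free(*handle);    /* free(NULL) is defined to do nothing */
        *handle = NULL;
    }

Callers pass &obj rather than obj, which is exactly how the kfd call sites further down are converted.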
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+index 7dc102f0bc1d3c..0c8975ac5af9ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+@@ -1018,8 +1018,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ if (clock_type == COMPUTE_ENGINE_PLL_PARAM) {
+ args.v3.ulClockParams = cpu_to_le32((clock_type << 24) | clock);
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ dividers->post_div = args.v3.ucPostDiv;
+ dividers->enable_post_div = (args.v3.ucCntlFlag &
+@@ -1039,8 +1040,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ if (strobe_mode)
+ args.v5.ucInputFlag = ATOM_PLL_INPUT_FLAG_PLL_STROBE_MODE_EN;
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ dividers->post_div = args.v5.ucPostDiv;
+ dividers->enable_post_div = (args.v5.ucCntlFlag &
+@@ -1058,8 +1060,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ /* fusion */
+ args.v4.ulClock = cpu_to_le32(clock); /* 10 khz */
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ dividers->post_divider = dividers->post_div = args.v4.ucPostDiv;
+ dividers->real_clock = le32_to_cpu(args.v4.ulClock);
+@@ -1070,8 +1073,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ args.v6_in.ulClock.ulComputeClockFlag = clock_type;
+ args.v6_in.ulClock.ulClockFreq = cpu_to_le32(clock); /* 10 khz */
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ dividers->whole_fb_div = le16_to_cpu(args.v6_out.ulFbDiv.usFbDiv);
+ dividers->frac_fb_div = le16_to_cpu(args.v6_out.ulFbDiv.usFbDivFrac);
+@@ -1113,8 +1117,9 @@ int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
+ if (strobe_mode)
+ args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
+ mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
+@@ -1211,8 +1216,9 @@ int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
+ args.v2.ucVoltageMode = 0;
+ args.v2.usVoltageLevel = 0;
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ *voltage = le16_to_cpu(args.v2.usVoltageLevel);
+ break;
+@@ -1221,8 +1227,9 @@ int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
+ args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
+ args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
+
+- amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+- sizeof(args));
++ if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++ index, (uint32_t *)&args, sizeof(args)))
++ return -EINVAL;
+
+ *voltage = le16_to_cpu(args.v3.usVoltageLevel);
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 6dfdff58bffd1f..78b3c067fea7e2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -263,6 +263,10 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
+ if (size < sizeof(struct drm_amdgpu_bo_list_in))
+ goto free_partial_kdata;
+
++ /* Only a single BO list is allowed to simplify handling. */
++ if (p->bo_list)
++ ret = -EINVAL;
++
+ ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
+ if (ret)
+ goto free_partial_kdata;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 1849510a308add..3ff39d3ec317c0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -882,8 +882,11 @@ int amdgpu_gfx_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *r
+ int r;
+
+ if (amdgpu_ras_is_supported(adev, ras_block->block)) {
+- if (!amdgpu_persistent_edc_harvesting_supported(adev))
+- amdgpu_ras_reset_error_status(adev, AMDGPU_RAS_BLOCK__GFX);
++ if (!amdgpu_persistent_edc_harvesting_supported(adev)) {
++ r = amdgpu_ras_reset_error_status(adev, AMDGPU_RAS_BLOCK__GFX);
++ if (r)
++ return r;
++ }
+
+ r = amdgpu_ras_block_late_init(adev, ras_block);
+ if (r)
+@@ -1027,7 +1030,10 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg, uint32_t xcc_
+ pr_err("critical bug! too many kiq readers\n");
+ goto failed_unlock;
+ }
+- amdgpu_ring_alloc(ring, 32);
++ r = amdgpu_ring_alloc(ring, 32);
++ if (r)
++ goto failed_unlock;
++
+ amdgpu_ring_emit_rreg(ring, reg, reg_val_offs);
+ r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
+ if (r)
+@@ -1093,7 +1099,10 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v, uint3
+ }
+
+ spin_lock_irqsave(&kiq->ring_lock, flags);
+- amdgpu_ring_alloc(ring, 32);
++ r = amdgpu_ring_alloc(ring, 32);
++ if (r)
++ goto failed_unlock;
++
+ amdgpu_ring_emit_wreg(ring, reg, v);
+ r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
+ if (r)
+@@ -1129,6 +1138,7 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v, uint3
+
+ failed_undo:
+ amdgpu_ring_undo(ring);
++failed_unlock:
+ spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ failed_kiq_write:
+ dev_err(adev->dev, "failed to write reg:%x\n", reg);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 66782be5917b9e..96af9ff1acb673 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -43,6 +43,7 @@
+ #include "amdgpu_gem.h"
+ #include "amdgpu_display.h"
+ #include "amdgpu_ras.h"
++#include "amdgpu_reset.h"
+ #include "amd_pcie.h"
+
+ void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev)
+@@ -778,6 +779,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ ? -EFAULT : 0;
+ }
+ case AMDGPU_INFO_READ_MMR_REG: {
++ int ret = 0;
+ unsigned int n, alloc_size;
+ uint32_t *regs;
+ unsigned int se_num = (info->read_mmr_reg.instance >>
+@@ -787,24 +789,37 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ AMDGPU_INFO_MMR_SH_INDEX_SHIFT) &
+ AMDGPU_INFO_MMR_SH_INDEX_MASK;
+
++ if (!down_read_trylock(&adev->reset_domain->sem))
++ return -ENOENT;
++
+ /* set full masks if the userspace set all bits
+ * in the bitfields
+ */
+- if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
++ if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK) {
+ se_num = 0xffffffff;
+- else if (se_num >= AMDGPU_GFX_MAX_SE)
+- return -EINVAL;
+- if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
++ } else if (se_num >= AMDGPU_GFX_MAX_SE) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK) {
+ sh_num = 0xffffffff;
+- else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE)
+- return -EINVAL;
++ } else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+- if (info->read_mmr_reg.count > 128)
+- return -EINVAL;
++ if (info->read_mmr_reg.count > 128) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs), GFP_KERNEL);
+- if (!regs)
+- return -ENOMEM;
++ if (!regs) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
+ alloc_size = info->read_mmr_reg.count * sizeof(*regs);
+
+ amdgpu_gfx_off_ctrl(adev, false);
+@@ -816,13 +831,17 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ info->read_mmr_reg.dword_offset + i);
+ kfree(regs);
+ amdgpu_gfx_off_ctrl(adev, true);
+- return -EFAULT;
++ ret = -EFAULT;
++ goto out;
+ }
+ }
+ amdgpu_gfx_off_ctrl(adev, true);
+ n = copy_to_user(out, regs, min(size, alloc_size));
+ kfree(regs);
+- return n ? -EFAULT : 0;
++ ret = (n ? -EFAULT : 0);
++out:
++ up_read(&adev->reset_domain->sem);
++ return ret;
+ }
+ case AMDGPU_INFO_DEV_INFO: {
+ struct drm_amdgpu_info_device *dev_info;
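The AMDGPU_INFO_READ_MMR_REG rework combines two patterns: a try-acquire on the reset-domain semaphore, so the ioctl backs off with -ENOENT rather than reading registers mid-GPU-reset, and a single out: label so every subsequent error path releases the lock it now holds. A minimal userspace analogue, with a pthread rwlock standing in for the kernel rwsem and the register reads stubbed out:

    #include <pthread.h>
    #include <errno.h>
    #include <stdlib.h>

    static pthread_rwlock_t reset_sem = PTHREAD_RWLOCK_INITIALIZER;

    static int read_regs(unsigned int count, unsigned int *dst)
    {
        unsigned int *regs, i;
        int ret = 0;

        if (pthread_rwlock_tryrdlock(&reset_sem))
            return -ENOENT;        /* a reset is in flight: back off */

        if (count > 128) {
            ret = -EINVAL;
            goto out;              /* still drops the lock below */
        }
        regs = calloc(count, sizeof(*regs));
        if (!regs) {
            ret = -ENOMEM;
            goto out;
        }
        for (i = 0; i < count; i++)    /* stand-in for the MMIO reads */
            dst[i] = regs[i];
        free(regs);
    out:
        pthread_rwlock_unlock(&reset_sem);
        return ret;
    }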
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
+index 90138bc5f03d1c..32775260556f44 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
+@@ -180,6 +180,6 @@ amdgpu_get_next_xcp(struct amdgpu_xcp_mgr *xcp_mgr, int *from)
+
+ #define for_each_xcp(xcp_mgr, xcp, i) \
+ for (i = 0, xcp = amdgpu_get_next_xcp(xcp_mgr, &i); xcp; \
+- xcp = amdgpu_get_next_xcp(xcp_mgr, &i))
++ ++i, xcp = amdgpu_get_next_xcp(xcp_mgr, &i))
+
+ #endif
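The for_each_xcp change is easy to misread in diff form: the for loop's advance expression gains ++i, without which amdgpu_get_next_xcp() would be re-queried with an unchanged index and, judging from the macro alone, iteration could never move past the first partition it returns. Reduced to a standalone shape (items and get_next() are illustrative stand-ins):

    #include <stdio.h>

    static const char *items[] = { "a", "b", "c" };

    /* Returns the element at *from, or NULL past the end; like the
     * xcp helper, it does not advance the cursor it is handed. */
    static const char *get_next(int *from)
    {
        int n = (int)(sizeof(items) / sizeof(items[0]));

        return (*from >= 0 && *from < n) ? items[*from] : NULL;
    }

    #define for_each_item(it, i) \
        for ((i) = 0, (it) = get_next(&(i)); (it); \
             ++(i), (it) = get_next(&(i)))

    int main(void)
    {
        const char *it;
        int i;

        for_each_item(it, i)       /* prints a, b, c and terminates */
            printf("%s\n", it);
        return 0;
    }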
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index e444e621ddaa01..1bb602c4f9b3f8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4649,7 +4649,7 @@ static void gfx_v10_0_alloc_ip_dump(struct amdgpu_device *adev)
+ uint32_t inst;
+
+ ptr = kcalloc(reg_count, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for GFX IP Dump\n");
+ adev->gfx.ip_dump_core = NULL;
+ } else {
+@@ -4662,7 +4662,7 @@ static void gfx_v10_0_alloc_ip_dump(struct amdgpu_device *adev)
+ adev->gfx.mec.num_queue_per_pipe;
+
+ ptr = kcalloc(reg_count * inst, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for Compute Queues IP Dump\n");
+ adev->gfx.ip_dump_compute_queues = NULL;
+ } else {
+@@ -4675,7 +4675,7 @@ static void gfx_v10_0_alloc_ip_dump(struct amdgpu_device *adev)
+ adev->gfx.me.num_queue_per_pipe;
+
+ ptr = kcalloc(reg_count * inst, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for GFX Queues IP Dump\n");
+ adev->gfx.ip_dump_gfx_queues = NULL;
+ } else {
+@@ -8889,7 +8889,9 @@ static void gfx_v10_0_ring_soft_recovery(struct amdgpu_ring *ring,
+ value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++ amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ WREG32_SOC15(GC, 0, mmSQ_CMD, value);
++ amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+
+ static void
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index dcef399074492d..b80b1b6f2eea70 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -1484,7 +1484,7 @@ static void gfx_v11_0_alloc_ip_dump(struct amdgpu_device *adev)
+ uint32_t inst;
+
+ ptr = kcalloc(reg_count, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for GFX IP Dump\n");
+ adev->gfx.ip_dump_core = NULL;
+ } else {
+@@ -1497,7 +1497,7 @@ static void gfx_v11_0_alloc_ip_dump(struct amdgpu_device *adev)
+ adev->gfx.mec.num_queue_per_pipe;
+
+ ptr = kcalloc(reg_count * inst, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for Compute Queues IP Dump\n");
+ adev->gfx.ip_dump_compute_queues = NULL;
+ } else {
+@@ -1510,7 +1510,7 @@ static void gfx_v11_0_alloc_ip_dump(struct amdgpu_device *adev)
+ adev->gfx.me.num_queue_per_pipe;
+
+ ptr = kcalloc(reg_count * inst, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for GFX Queues IP Dump\n");
+ adev->gfx.ip_dump_gfx_queues = NULL;
+ } else {
+@@ -4707,6 +4707,8 @@ static int gfx_v11_0_soft_reset(void *handle)
+ int r, i, j, k;
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
++ gfx_v11_0_set_safe_mode(adev, 0);
++
+ tmp = RREG32_SOC15(GC, 0, regCP_INT_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL, CMP_BUSY_INT_ENABLE, 0);
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL, CNTX_BUSY_INT_ENABLE, 0);
+@@ -4714,8 +4716,6 @@ static int gfx_v11_0_soft_reset(void *handle)
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL, GFX_IDLE_INT_ENABLE, 0);
+ WREG32_SOC15(GC, 0, regCP_INT_CNTL, tmp);
+
+- gfx_v11_0_set_safe_mode(adev, 0);
+-
+ mutex_lock(&adev->srbm_mutex);
+ for (i = 0; i < adev->gfx.mec.num_mec; ++i) {
+ for (j = 0; j < adev->gfx.mec.num_queue_per_pipe; j++) {
+@@ -6008,7 +6008,9 @@ static void gfx_v11_0_ring_soft_recovery(struct amdgpu_ring *ring,
+ value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++ amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ WREG32_SOC15(GC, 0, regSQ_CMD, value);
++ amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+
+ static void
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 34b95ca700b233..515fc7d6f8389e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -1690,26 +1690,68 @@ static void gfx_v12_0_constants_init(struct amdgpu_device *adev)
+ gfx_v12_0_init_compute_vmid(adev);
+ }
+
++static u32 gfx_v12_0_get_cpg_int_cntl(struct amdgpu_device *adev,
++ int me, int pipe)
++{
++ if (me != 0)
++ return 0;
++
++ switch (pipe) {
++ case 0:
++ return SOC15_REG_OFFSET(GC, 0, regCP_INT_CNTL_RING0);
++ default:
++ return 0;
++ }
++}
++
++static u32 gfx_v12_0_get_cpc_int_cntl(struct amdgpu_device *adev,
++ int me, int pipe)
++{
++ /*
++ * amdgpu controls only the first MEC. That's why this function only
++ * handles the setting of interrupts for this specific MEC. All other
++ * pipes' interrupts are set by amdkfd.
++ */
++ if (me != 1)
++ return 0;
++
++ switch (pipe) {
++ case 0:
++ return SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE0_INT_CNTL);
++ case 1:
++ return SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE1_INT_CNTL);
++ default:
++ return 0;
++ }
++}
++
+ static void gfx_v12_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
+- bool enable)
++ bool enable)
+ {
+- u32 tmp;
++ u32 tmp, cp_int_cntl_reg;
++ int i, j;
+
+ if (amdgpu_sriov_vf(adev))
+ return;
+
+- tmp = RREG32_SOC15(GC, 0, regCP_INT_CNTL_RING0);
+-
+- tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE,
+- enable ? 1 : 0);
+- tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE,
+- enable ? 1 : 0);
+- tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE,
+- enable ? 1 : 0);
+- tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE,
+- enable ? 1 : 0);
+-
+- WREG32_SOC15(GC, 0, regCP_INT_CNTL_RING0, tmp);
++ for (i = 0; i < adev->gfx.me.num_me; i++) {
++ for (j = 0; j < adev->gfx.me.num_pipe_per_me; j++) {
++ cp_int_cntl_reg = gfx_v12_0_get_cpg_int_cntl(adev, i, j);
++
++ if (cp_int_cntl_reg) {
++ tmp = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
++ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE,
++ enable ? 1 : 0);
++ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE,
++ enable ? 1 : 0);
++ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE,
++ enable ? 1 : 0);
++ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE,
++ enable ? 1 : 0);
++ WREG32_SOC15_IP(GC, cp_int_cntl_reg, tmp);
++ }
++ }
++ }
+ }
+
+ static int gfx_v12_0_init_csb(struct amdgpu_device *adev)
+@@ -4571,7 +4613,9 @@ static void gfx_v12_0_ring_soft_recovery(struct amdgpu_ring *ring,
+ value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++ amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ WREG32_SOC15(GC, 0, regSQ_CMD, value);
++ amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+
+ static void
+@@ -4755,15 +4799,42 @@ static int gfx_v12_0_eop_irq(struct amdgpu_device *adev,
+
+ static int gfx_v12_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+ struct amdgpu_irq_src *source,
+- unsigned type,
++ unsigned int type,
+ enum amdgpu_interrupt_state state)
+ {
++ u32 cp_int_cntl_reg, cp_int_cntl;
++ int i, j;
++
+ switch (state) {
+ case AMDGPU_IRQ_STATE_DISABLE:
+ case AMDGPU_IRQ_STATE_ENABLE:
+- WREG32_FIELD15_PREREG(GC, 0, CP_INT_CNTL_RING0,
+- PRIV_REG_INT_ENABLE,
+- state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ for (i = 0; i < adev->gfx.me.num_me; i++) {
++ for (j = 0; j < adev->gfx.me.num_pipe_per_me; j++) {
++ cp_int_cntl_reg = gfx_v12_0_get_cpg_int_cntl(adev, i, j);
++
++ if (cp_int_cntl_reg) {
++ cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
++ cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
++ PRIV_REG_INT_ENABLE,
++ state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
++ }
++ }
++ }
++ for (i = 0; i < adev->gfx.mec.num_mec; i++) {
++ for (j = 0; j < adev->gfx.mec.num_pipe_per_mec; j++) {
++ /* MECs start at 1 */
++ cp_int_cntl_reg = gfx_v12_0_get_cpc_int_cntl(adev, i + 1, j);
++
++ if (cp_int_cntl_reg) {
++ cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
++ cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_ME1_PIPE0_INT_CNTL,
++ PRIV_REG_INT_ENABLE,
++ state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
++ }
++ }
++ }
+ break;
+ default:
+ break;
+@@ -4774,15 +4845,28 @@ static int gfx_v12_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+
+ static int gfx_v12_0_set_priv_inst_fault_state(struct amdgpu_device *adev,
+ struct amdgpu_irq_src *source,
+- unsigned type,
++ unsigned int type,
+ enum amdgpu_interrupt_state state)
+ {
++ u32 cp_int_cntl_reg, cp_int_cntl;
++ int i, j;
++
+ switch (state) {
+ case AMDGPU_IRQ_STATE_DISABLE:
+ case AMDGPU_IRQ_STATE_ENABLE:
+- WREG32_FIELD15_PREREG(GC, 0, CP_INT_CNTL_RING0,
+- PRIV_INSTR_INT_ENABLE,
+- state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ for (i = 0; i < adev->gfx.me.num_me; i++) {
++ for (j = 0; j < adev->gfx.me.num_pipe_per_me; j++) {
++ cp_int_cntl_reg = gfx_v12_0_get_cpg_int_cntl(adev, i, j);
++
++ if (cp_int_cntl_reg) {
++ cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
++ cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
++ PRIV_INSTR_INT_ENABLE,
++ state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
++ }
++ }
++ }
+ break;
+ default:
+ break;
+@@ -4806,8 +4890,8 @@ static void gfx_v12_0_handle_priv_fault(struct amdgpu_device *adev,
+ case 0:
+ for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+ ring = &adev->gfx.gfx_ring[i];
+- /* we only enabled 1 gfx queue per pipe for now */
+- if (ring->me == me_id && ring->pipe == pipe_id)
++ if (ring->me == me_id && ring->pipe == pipe_id &&
++ ring->queue == queue_id)
+ drm_sched_fault(&ring->sched);
+ }
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 2929c8972ea731..02eb5bd9d7d825 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1301,6 +1301,10 @@ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
+ { 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc6 },
+ /* Apple MacBook Pro (15-inch, 2019) Radeon Pro Vega 20 4 GB */
+ { 0x1002, 0x69af, 0x106b, 0x019a, 0xc0 },
++ /* https://bbs.openkylin.top/t/topic/171497 */
++ { 0x1002, 0x15d8, 0x19e5, 0x3e14, 0xc2 },
++ /* HP 705G4 DM with R5 2400G */
++ { 0x1002, 0x15dd, 0x103c, 0x8464, 0xd6 },
+ { 0, 0, 0, 0, 0 },
+ };
+
+@@ -2129,7 +2133,7 @@ static void gfx_v9_0_alloc_ip_dump(struct amdgpu_device *adev)
+ uint32_t inst;
+
+ ptr = kcalloc(reg_count, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for GFX IP Dump\n");
+ adev->gfx.ip_dump_core = NULL;
+ } else {
+@@ -2142,7 +2146,7 @@ static void gfx_v9_0_alloc_ip_dump(struct amdgpu_device *adev)
+ adev->gfx.mec.num_queue_per_pipe;
+
+ ptr = kcalloc(reg_count * inst, sizeof(uint32_t), GFP_KERNEL);
+- if (ptr == NULL) {
++ if (!ptr) {
+ DRM_ERROR("Failed to allocate memory for Compute Queues IP Dump\n");
+ adev->gfx.ip_dump_compute_queues = NULL;
+ } else {
+@@ -2634,7 +2638,7 @@ static void gfx_v9_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE, enable ? 1 : 0);
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE, enable ? 1 : 0);
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE, enable ? 1 : 0);
+- if(adev->gfx.num_gfx_rings)
++ if (adev->gfx.num_gfx_rings)
+ tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE, enable ? 1 : 0);
+
+ WREG32_SOC15(GC, 0, mmCP_INT_CNTL_RING0, tmp);
+@@ -5858,7 +5862,9 @@ static void gfx_v9_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned vmid)
+ value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++ amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ WREG32_SOC15(GC, 0, mmSQ_CMD, value);
++ amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+
+ static void gfx_v9_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
+@@ -5929,17 +5935,59 @@ static void gfx_v9_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev,
+ }
+ }
+
++static u32 gfx_v9_0_get_cpc_int_cntl(struct amdgpu_device *adev,
++ int me, int pipe)
++{
++ /*
++ * amdgpu controls only the first MEC. That's why this function only
++ * handles the setting of interrupts for this specific MEC. All other
++ * pipes' interrupts are set by amdkfd.
++ */
++ if (me != 1)
++ return 0;
++
++ switch (pipe) {
++ case 0:
++ return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE0_INT_CNTL);
++ case 1:
++ return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE1_INT_CNTL);
++ case 2:
++ return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE2_INT_CNTL);
++ case 3:
++ return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE3_INT_CNTL);
++ default:
++ return 0;
++ }
++}
++
+ static int gfx_v9_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+ struct amdgpu_irq_src *source,
+ unsigned type,
+ enum amdgpu_interrupt_state state)
+ {
++ u32 cp_int_cntl_reg, cp_int_cntl;
++ int i, j;
++
+ switch (state) {
+ case AMDGPU_IRQ_STATE_DISABLE:
+ case AMDGPU_IRQ_STATE_ENABLE:
+ WREG32_FIELD15(GC, 0, CP_INT_CNTL_RING0,
+ PRIV_REG_INT_ENABLE,
+ state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ for (i = 0; i < adev->gfx.mec.num_mec; i++) {
++ for (j = 0; j < adev->gfx.mec.num_pipe_per_mec; j++) {
++ /* MECs start at 1 */
++ cp_int_cntl_reg = gfx_v9_0_get_cpc_int_cntl(adev, i + 1, j);
++
++ if (cp_int_cntl_reg) {
++ cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
++ cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_ME1_PIPE0_INT_CNTL,
++ PRIV_REG_INT_ENABLE,
++ state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
++ }
++ }
++ }
+ break;
+ default:
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index 20ea6cb01edfda..d95f9a84f97b46 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -2886,21 +2886,63 @@ static void gfx_v9_4_3_xcc_set_compute_eop_interrupt_state(
+ }
+ }
+
++static u32 gfx_v9_4_3_get_cpc_int_cntl(struct amdgpu_device *adev,
++ int xcc_id, int me, int pipe)
++{
++ /*
++ * amdgpu controls only the first MEC. That's why this function only
++ * handles the setting of interrupts for this specific MEC. All other
++ * pipes' interrupts are set by amdkfd.
++ */
++ if (me != 1)
++ return 0;
++
++ switch (pipe) {
++ case 0:
++ return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE0_INT_CNTL);
++ case 1:
++ return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE1_INT_CNTL);
++ case 2:
++ return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE2_INT_CNTL);
++ case 3:
++ return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE3_INT_CNTL);
++ default:
++ return 0;
++ }
++}
++
+ static int gfx_v9_4_3_set_priv_reg_fault_state(struct amdgpu_device *adev,
+ struct amdgpu_irq_src *source,
+ unsigned type,
+ enum amdgpu_interrupt_state state)
+ {
+- int i, num_xcc;
++ u32 mec_int_cntl_reg, mec_int_cntl;
++ int i, j, k, num_xcc;
+
+ num_xcc = NUM_XCC(adev->gfx.xcc_mask);
+ switch (state) {
+ case AMDGPU_IRQ_STATE_DISABLE:
+ case AMDGPU_IRQ_STATE_ENABLE:
+- for (i = 0; i < num_xcc; i++)
++ for (i = 0; i < num_xcc; i++) {
+ WREG32_FIELD15_PREREG(GC, GET_INST(GC, i), CP_INT_CNTL_RING0,
+- PRIV_REG_INT_ENABLE,
+- state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ PRIV_REG_INT_ENABLE,
++ state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++ for (j = 0; j < adev->gfx.mec.num_mec; j++) {
++ for (k = 0; k < adev->gfx.mec.num_pipe_per_mec; k++) {
++ /* MECs start at 1 */
++ mec_int_cntl_reg = gfx_v9_4_3_get_cpc_int_cntl(adev, i, j + 1, k);
++
++ if (mec_int_cntl_reg) {
++ mec_int_cntl = RREG32_XCC(mec_int_cntl_reg, i);
++ mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
++ PRIV_REG_INT_ENABLE,
++ state == AMDGPU_IRQ_STATE_ENABLE ?
++ 1 : 0);
++ WREG32_XCC(mec_int_cntl_reg, mec_int_cntl, i);
++ }
++ }
++ }
++ }
+ break;
+ default:
+ break;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 32e5db509560ee..546b02f2241a67 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -423,7 +423,7 @@ static int kfd_ioctl_create_queue(struct file *filep, struct kfd_process *p,
+
+ err_create_queue:
+ if (wptr_bo)
+- amdgpu_amdkfd_free_gtt_mem(dev->adev, wptr_bo);
++ amdgpu_amdkfd_free_gtt_mem(dev->adev, (void **)&wptr_bo);
+ err_wptr_map_gart:
+ err_bind_process:
+ err_pdd:
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index f4d20adaa06899..6619028dd58ba5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -907,7 +907,7 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ kfd_doorbell_error:
+ kfd_gtt_sa_fini(kfd);
+ kfd_gtt_sa_init_error:
+- amdgpu_amdkfd_free_gtt_mem(kfd->adev, kfd->gtt_mem);
++ amdgpu_amdkfd_free_gtt_mem(kfd->adev, &kfd->gtt_mem);
+ alloc_gtt_mem_failure:
+ dev_err(kfd_device,
+ "device %x:%x NOT added due to errors\n",
+@@ -925,7 +925,7 @@ void kgd2kfd_device_exit(struct kfd_dev *kfd)
+ kfd_doorbell_fini(kfd);
+ ida_destroy(&kfd->doorbell_ida);
+ kfd_gtt_sa_fini(kfd);
+- amdgpu_amdkfd_free_gtt_mem(kfd->adev, kfd->gtt_mem);
++ amdgpu_amdkfd_free_gtt_mem(kfd->adev, &kfd->gtt_mem);
+ }
+
+ kfree(kfd);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 4f48507418d2f1..420444eb8e9823 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -2621,7 +2621,7 @@ static void deallocate_hiq_sdma_mqd(struct kfd_node *dev,
+ {
+ WARN(!mqd, "No hiq sdma mqd trunk to free");
+
+- amdgpu_amdkfd_free_gtt_mem(dev->adev, mqd->gtt_mem);
++ amdgpu_amdkfd_free_gtt_mem(dev->adev, &mqd->gtt_mem);
+ }
+
+ void device_queue_manager_uninit(struct device_queue_manager *dqm)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+index a9c3580be8c9b9..fecdbbab98949e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+@@ -431,25 +431,9 @@ static void event_interrupt_wq_v9(struct kfd_node *dev,
+ client_id == SOC15_IH_CLIENTID_UTCL2) {
+ struct kfd_vm_fault_info info = {0};
+ uint16_t ring_id = SOC15_RING_ID_FROM_IH_ENTRY(ih_ring_entry);
+- uint32_t node_id = SOC15_NODEID_FROM_IH_ENTRY(ih_ring_entry);
+- uint32_t vmid_type = SOC15_VMID_TYPE_FROM_IH_ENTRY(ih_ring_entry);
+- int hub_inst = 0;
+ struct kfd_hsa_memory_exception_data exception_data;
+
+- /* gfxhub */
+- if (!vmid_type && dev->adev->gfx.funcs->ih_node_to_logical_xcc) {
+- hub_inst = dev->adev->gfx.funcs->ih_node_to_logical_xcc(dev->adev,
+- node_id);
+- if (hub_inst < 0)
+- hub_inst = 0;
+- }
+-
+- /* mmhub */
+- if (vmid_type && client_id == SOC15_IH_CLIENTID_VMC)
+- hub_inst = node_id / 4;
+-
+- if (amdgpu_amdkfd_ras_query_utcl2_poison_status(dev->adev,
+- hub_inst, vmid_type)) {
++ if (source_id == SOC15_INTSRC_VMC_UTCL2_POISON) {
+ event_interrupt_poison_consumption_v9(dev, pasid, client_id);
+ return;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+index 50a81da43ce195..d9ae854b690849 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+@@ -225,7 +225,7 @@ void kfd_free_mqd_cp(struct mqd_manager *mm, void *mqd,
+ struct kfd_mem_obj *mqd_mem_obj)
+ {
+ if (mqd_mem_obj->gtt_mem) {
+- amdgpu_amdkfd_free_gtt_mem(mm->dev->adev, mqd_mem_obj->gtt_mem);
++ amdgpu_amdkfd_free_gtt_mem(mm->dev->adev, &mqd_mem_obj->gtt_mem);
+ kfree(mqd_mem_obj);
+ } else {
+ kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 17e42161b01519..9e29b92eb523d0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1048,7 +1048,7 @@ static void kfd_process_destroy_pdds(struct kfd_process *p)
+
+ if (pdd->dev->kfd->shared_resources.enable_mes)
+ amdgpu_amdkfd_free_gtt_mem(pdd->dev->adev,
+- pdd->proc_ctx_bo);
++ &pdd->proc_ctx_bo);
+ /*
+ * before destroying pdd, make sure to report availability
+ * for auto suspend
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 21f5a1fb3bf88d..e0f19f3ae2207d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -204,9 +204,9 @@ static void pqm_clean_queue_resource(struct process_queue_manager *pqm,
+ }
+
+ if (dev->kfd->shared_resources.enable_mes) {
+- amdgpu_amdkfd_free_gtt_mem(dev->adev, pqn->q->gang_ctx_bo);
++ amdgpu_amdkfd_free_gtt_mem(dev->adev, &pqn->q->gang_ctx_bo);
+ if (pqn->q->wptr_bo)
+- amdgpu_amdkfd_free_gtt_mem(dev->adev, pqn->q->wptr_bo);
++ amdgpu_amdkfd_free_gtt_mem(dev->adev, (void **)&pqn->q->wptr_bo);
+ }
+ }
+
+@@ -988,6 +988,7 @@ int kfd_criu_restore_queue(struct kfd_process *p,
+ pr_debug("Queue id %d was restored successfully\n", queue_id);
+
+ kfree(q_data);
++ kfree(q_extra_data);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/soc15_int.h b/drivers/gpu/drm/amd/amdkfd/soc15_int.h
+index 10138676f27fd7..e5c0205f26181e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/soc15_int.h
++++ b/drivers/gpu/drm/amd/amdkfd/soc15_int.h
+@@ -29,6 +29,7 @@
+ #define SOC15_INTSRC_CP_BAD_OPCODE 183
+ #define SOC15_INTSRC_SQ_INTERRUPT_MSG 239
+ #define SOC15_INTSRC_VMC_FAULT 0
++#define SOC15_INTSRC_VMC_UTCL2_POISON 1
+ #define SOC15_INTSRC_SDMA_TRAP 224
+ #define SOC15_INTSRC_SDMA_ECC 220
+ #define SOC21_INTSRC_SDMA_TRAP 49
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 74bb1e0e913487..1ab7cd8a6b6aed 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -770,6 +770,12 @@ static void dmub_hpd_callback(struct amdgpu_device *adev,
+ return;
+ }
+
++ /* Skip DMUB HPD IRQ in suspend/resume. We will probe them later. */
++ if (notify->type == DMUB_NOTIFICATION_HPD && adev->in_suspend) {
++ DRM_INFO("Skip DMUB HPD IRQ callback in suspend/resume\n");
++ return;
++ }
++
+ link_index = notify->link_index;
+ link = adev->dm.dc->links[link_index];
+ dev = adev->dm.ddev;
+@@ -2005,7 +2011,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ DRM_ERROR("amdgpu: failed to initialize vblank_workqueue.\n");
+ }
+
+- if (adev->dm.dc->caps.ips_support && adev->dm.dc->config.disable_ips == DMUB_IPS_ENABLE)
++ if (adev->dm.dc->caps.ips_support &&
++ adev->dm.dc->config.disable_ips != DMUB_IPS_DISABLE_ALL)
+ adev->dm.idle_workqueue = idle_create_workqueue(adev);
+
+ if (adev->dm.dc->caps.max_links > 0 && adev->family >= AMDGPU_FAMILY_RV) {
+@@ -4478,7 +4485,7 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+ int spread = caps.max_input_signal - caps.min_input_signal;
+
+ if (caps.max_input_signal > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
+- caps.min_input_signal < AMDGPU_DM_DEFAULT_MIN_BACKLIGHT ||
++ caps.min_input_signal < 0 ||
+ spread > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
+ spread < AMDGPU_DM_MIN_SPREAD) {
+ DRM_DEBUG_KMS("DM: Invalid backlight caps: min=%d, max=%d\n",
+@@ -4919,18 +4926,14 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ /* Determine whether to enable Replay support by default. */
+ if (!(amdgpu_dc_debug_mask & DC_DISABLE_REPLAY)) {
+ switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
+-/*
+- * Disabled by default due to https://gitlab.freedesktop.org/drm/amd/-/issues/3344
+- * case IP_VERSION(3, 1, 4):
+- * case IP_VERSION(3, 1, 5):
+- * case IP_VERSION(3, 1, 6):
+- * case IP_VERSION(3, 2, 0):
+- * case IP_VERSION(3, 2, 1):
+- * case IP_VERSION(3, 5, 0):
+- * case IP_VERSION(3, 5, 1):
+- * replay_feature_enabled = true;
+- * break;
+- */
++ case IP_VERSION(3, 1, 4):
++ case IP_VERSION(3, 2, 0):
++ case IP_VERSION(3, 2, 1):
++ case IP_VERSION(3, 5, 0):
++ case IP_VERSION(3, 5, 1):
++ replay_feature_enabled = true;
++ break;
++
+ default:
+ replay_feature_enabled = amdgpu_dc_feature_mask & DC_REPLAY_MASK;
+ break;
+@@ -6717,12 +6720,21 @@ create_stream_for_sink(struct drm_connector *connector,
+ if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT ||
+ stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST ||
+ stream->signal == SIGNAL_TYPE_EDP) {
++ const struct dc_edid_caps *edid_caps;
++ unsigned int disable_colorimetry = 0;
++
++ if (aconnector->dc_sink) {
++ edid_caps = &aconnector->dc_sink->edid_caps;
++ disable_colorimetry = edid_caps->panel_patch.disable_colorimetry;
++ }
++
+ //
+ // should decide stream support vsc sdp colorimetry capability
+ // before building vsc info packet
+ //
+ stream->use_vsc_sdp_for_colorimetry = stream->link->dpcd_caps.dpcd_rev.raw >= 0x14 &&
+- stream->link->dpcd_caps.dprx_feature.bits.VSC_SDP_COLORIMETRY_SUPPORTED;
++ stream->link->dpcd_caps.dprx_feature.bits.VSC_SDP_COLORIMETRY_SUPPORTED &&
++ !disable_colorimetry;
+
+ if (stream->out_transfer_func.tf == TRANSFER_FUNCTION_GAMMA22)
+ tf = TRANSFER_FUNC_GAMMA_22;
+@@ -7279,6 +7291,9 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ int requested_bpc = drm_state ? drm_state->max_requested_bpc : 8;
+ enum dc_status dc_result = DC_OK;
+
++ if (!dm_state)
++ return NULL;
++
+ do {
+ stream = create_stream_for_sink(connector, drm_mode,
+ dm_state, old_stream,
+@@ -8796,7 +8811,8 @@ static void amdgpu_dm_update_cursor(struct drm_plane *plane,
+ adev->dm.dc->caps.color.dpp.gamma_corr)
+ attributes.attribute_flags.bits.ENABLE_CURSOR_DEGAMMA = 1;
+
+- attributes.pitch = afb->base.pitches[0] / afb->base.format->cpp[0];
++ if (afb)
++ attributes.pitch = afb->base.pitches[0] / afb->base.format->cpp[0];
+
+ if (crtc_state->stream) {
+ if (!dc_stream_set_cursor_attributes(crtc_state->stream,
+@@ -9386,7 +9402,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ if (acrtc)
+ old_crtc_state = drm_atomic_get_old_crtc_state(state, &acrtc->base);
+
+- if (!acrtc->wb_enabled)
++ if (!acrtc || !acrtc->wb_enabled)
+ continue;
+
+ dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+@@ -9790,9 +9806,10 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+
+ DRM_INFO("[HDCP_DM] hdcp_update_display enable_encryption = %x\n", enable_encryption);
+
+- hdcp_update_display(
+- adev->dm.hdcp_workqueue, aconnector->dc_link->link_index, aconnector,
+- new_con_state->hdcp_content_type, enable_encryption);
++ if (aconnector->dc_link)
++ hdcp_update_display(
++ adev->dm.hdcp_workqueue, aconnector->dc_link->link_index, aconnector,
++ new_con_state->hdcp_content_type, enable_encryption);
+ }
+ }
+
+@@ -11812,25 +11829,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ return ret;
+ }
+
+-static bool is_dp_capable_without_timing_msa(struct dc *dc,
+- struct amdgpu_dm_connector *amdgpu_dm_connector)
+-{
+- u8 dpcd_data;
+- bool capable = false;
+-
+- if (amdgpu_dm_connector->dc_link &&
+- dm_helpers_dp_read_dpcd(
+- NULL,
+- amdgpu_dm_connector->dc_link,
+- DP_DOWN_STREAM_PORT_COUNT,
+- &dpcd_data,
+- sizeof(dpcd_data))) {
+- capable = (dpcd_data & DP_MSA_TIMING_PAR_IGNORED) ? true:false;
+- }
+-
+- return capable;
+-}
+-
+ static bool dm_edid_parser_send_cea(struct amdgpu_display_manager *dm,
+ unsigned int offset,
+ unsigned int total_length,
+@@ -12133,8 +12131,8 @@ void amdgpu_dm_update_freesync_caps(struct drm_connector *connector,
+ sink->sink_signal == SIGNAL_TYPE_EDP)) {
+ bool edid_check_required = false;
+
+- if (is_dp_capable_without_timing_msa(adev->dm.dc,
+- amdgpu_dm_connector)) {
++ if (amdgpu_dm_connector->dc_link &&
++ amdgpu_dm_connector->dc_link->dpcd_caps.allow_invalid_MSA_timing_param) {
+ if (edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ) {
+ amdgpu_dm_connector->min_vfreq = connector->display_info.monitor_range.min_vfreq;
+ amdgpu_dm_connector->max_vfreq = connector->display_info.monitor_range.max_vfreq;
+@@ -12240,6 +12238,12 @@ void amdgpu_dm_update_freesync_caps(struct drm_connector *connector,
+ if (dm_con_state)
+ dm_con_state->freesync_capable = freesync_capable;
+
++ if (connector->state && amdgpu_dm_connector->dc_link && !freesync_capable &&
++ amdgpu_dm_connector->dc_link->replay_settings.config.replay_supported) {
++ amdgpu_dm_connector->dc_link->replay_settings.config.replay_supported = false;
++ amdgpu_dm_connector->dc_link->replay_settings.replay_feature_enabled = false;
++ }
++
+ if (connector->vrr_capable_property)
+ drm_connector_set_vrr_capable_property(connector,
+ freesync_capable);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 8befc79478f870..65abbfb6972ed7 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -73,6 +73,10 @@ static void apply_edid_quirks(struct edid *edid, struct dc_edid_caps *edid_caps)
+ DRM_DEBUG_DRIVER("Clearing DPCD 0x317 on monitor with panel id %X\n", panel_id);
+ edid_caps->panel_patch.remove_sink_ext_caps = true;
+ break;
++ case drm_edid_encode_panel_id('S', 'D', 'C', 0x4154):
++ DRM_DEBUG_DRIVER("Disabling VSC on monitor with panel id %X\n", panel_id);
++ edid_caps->panel_patch.disable_colorimetry = true;
++ break;
+ default:
+ return;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 6c72709aa25816..905c11af017164 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1027,6 +1027,7 @@ static int try_disable_dsc(struct drm_atomic_state *state,
+ int remaining_to_try = 0;
+ int ret;
+ uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link);
++ int var_pbn;
+
+ for (i = 0; i < count; i++) {
+ if (vars[i + k].dsc_enabled
+@@ -1057,13 +1058,18 @@ static int try_disable_dsc(struct drm_atomic_state *state,
+ break;
+
+ DRM_DEBUG_DRIVER("MST_DSC index #%d, try no compression\n", next_index);
++ var_pbn = vars[next_index].pbn;
+ vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps, fec_overhead_multiplier_x1000);
+ ret = drm_dp_atomic_find_time_slots(state,
+ params[next_index].port->mgr,
+ params[next_index].port,
+ vars[next_index].pbn);
+- if (ret < 0)
++ if (ret < 0) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC index #%d, failed to set pbn to the state, %d\n",
++ __func__, __LINE__, next_index, ret);
++ vars[next_index].pbn = var_pbn;
+ return ret;
++ }
+
+ ret = drm_dp_mst_atomic_check(state);
+ if (ret == 0) {
+@@ -1071,14 +1077,17 @@ static int try_disable_dsc(struct drm_atomic_state *state,
+ vars[next_index].dsc_enabled = false;
+ vars[next_index].bpp_x16 = 0;
+ } else {
+- DRM_DEBUG_DRIVER("MST_DSC index #%d, restore minimum compression\n", next_index);
+- vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps, fec_overhead_multiplier_x1000);
++ DRM_DEBUG_DRIVER("MST_DSC index #%d, restore optimized pbn value\n", next_index);
++ vars[next_index].pbn = var_pbn;
+ ret = drm_dp_atomic_find_time_slots(state,
+ params[next_index].port->mgr,
+ params[next_index].port,
+ vars[next_index].pbn);
+- if (ret < 0)
++ if (ret < 0) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC index #%d, failed to set pbn to the state, %d\n",
++ __func__, __LINE__, next_index, ret);
+ return ret;
++ }
+ }
+
+ tried[next_index] = true;
+@@ -1344,9 +1353,6 @@ static bool is_dsc_need_re_compute(
+ DRM_DEBUG_DRIVER("%s: MST_DSC check on %d streams in current dc_state\n",
+ __func__, dc->current_state->stream_count);
+
+- if (new_stream_on_link_num == 0)
+- return false;
+-
+ /* check current_state if there stream on link but it is not in
+ * new request state
+ */
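The interesting part of the try_disable_dsc() change is transactional: the current PBN is stashed in var_pbn before the no-compression attempt, and on failure of either drm_dp_atomic_find_time_slots() or the atomic check the stashed value is restored instead of being recomputed from max_kbps, so the state rolls back to exactly what validation had already accepted. The idiom in isolation (names illustrative):

    /* Tentatively write new_value into *slot; commit() returns 0 on
     * success. On failure the caller's state is restored untouched. */
    static int try_update(int *slot, int new_value, int (*commit)(int))
    {
        int saved = *slot;

        *slot = new_value;
        if (commit(*slot) < 0) {
            *slot = saved;
            return -1;
        }
        return 0;
    }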
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 5cb11cc2d06368..a573a66398984c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -1377,7 +1377,8 @@ void amdgpu_dm_plane_handle_cursor_update(struct drm_plane *plane,
+ adev->dm.dc->caps.color.dpp.gamma_corr)
+ attributes.attribute_flags.bits.ENABLE_CURSOR_DEGAMMA = 1;
+
+- attributes.pitch = afb->base.pitches[0] / afb->base.format->cpp[0];
++ if (afb)
++ attributes.pitch = afb->base.pitches[0] / afb->base.format->cpp[0];
+
+ if (crtc_state->stream) {
+ mutex_lock(&adev->dm.dc_lock);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+index 78df96882d6ec5..f8409453434c1c 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+@@ -195,7 +195,7 @@ void dce11_pplib_apply_display_requirements(
+ * , then change minimum memory clock based on real-time bandwidth
+ * limitation.
+ */
+- if ((dc->ctx->asic_id.chip_family == FAMILY_AI) &&
++ if (dc->bw_vbios && (dc->ctx->asic_id.chip_family == FAMILY_AI) &&
+ ASICREV_IS_VEGA20_P(dc->ctx->asic_id.hw_internal_rev) && (context->stream_count >= 2)) {
+ pp_display_cfg->min_memory_clock_khz = max(pp_display_cfg->min_memory_clock_khz,
+ (uint32_t) div64_s64(
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 85a2ef82afa535..9e05d77453ac38 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -3739,7 +3739,7 @@ static void commit_planes_for_stream_fast(struct dc *dc,
+ surface_count,
+ stream,
+ context);
+- } else {
++ } else if (stream_status) {
+ build_dmub_cmd_list(dc,
+ srf_updates,
+ surface_count,
+@@ -4127,7 +4127,8 @@ static void commit_planes_for_stream(struct dc *dc,
+ }
+
+ if ((update_type != UPDATE_TYPE_FAST) && stream->update_flags.bits.dsc_changed)
+- if (top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
++ if (top_pipe_to_program &&
++ top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
+ top_pipe_to_program->stream_res.tg->funcs->wait_for_state(
+ top_pipe_to_program->stream_res.tg,
+ CRTC_STATE_VACTIVE);
+@@ -5382,7 +5383,8 @@ void dc_allow_idle_optimizations_internal(struct dc *dc, bool allow, char const
+ if (allow == dc->idle_optimizations_allowed)
+ return;
+
+- if (dc->hwss.apply_idle_power_optimizations && dc->hwss.apply_idle_power_optimizations(dc, allow))
++ if (dc->hwss.apply_idle_power_optimizations && dc->clk_mgr != NULL &&
++ dc->hwss.apply_idle_power_optimizations(dc, allow))
+ dc->idle_optimizations_allowed = allow;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 87e36d51c56d80..4a88412fdfab13 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -636,57 +636,59 @@ void hwss_build_fast_sequence(struct dc *dc,
+ while (current_pipe) {
+ current_mpc_pipe = current_pipe;
+ while (current_mpc_pipe) {
+- if (dc->hwss.set_flip_control_gsl && current_mpc_pipe->plane_state && current_mpc_pipe->plane_state->update_flags.raw) {
+- block_sequence[*num_steps].params.set_flip_control_gsl_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].params.set_flip_control_gsl_params.flip_immediate = current_mpc_pipe->plane_state->flip_immediate;
+- block_sequence[*num_steps].func = HUBP_SET_FLIP_CONTROL_GSL;
+- (*num_steps)++;
+- }
+- if (dc->hwss.program_triplebuffer && dc->debug.enable_tri_buf && current_mpc_pipe->plane_state->update_flags.raw) {
+- block_sequence[*num_steps].params.program_triplebuffer_params.dc = dc;
+- block_sequence[*num_steps].params.program_triplebuffer_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].params.program_triplebuffer_params.enableTripleBuffer = current_mpc_pipe->plane_state->triplebuffer_flips;
+- block_sequence[*num_steps].func = HUBP_PROGRAM_TRIPLEBUFFER;
+- (*num_steps)++;
+- }
+- if (dc->hwss.update_plane_addr && current_mpc_pipe->plane_state->update_flags.bits.addr_update) {
+- if (resource_is_pipe_type(current_mpc_pipe, OTG_MASTER) &&
+- stream_status->mall_stream_config.type == SUBVP_MAIN) {
+- block_sequence[*num_steps].params.subvp_save_surf_addr.dc_dmub_srv = dc->ctx->dmub_srv;
+- block_sequence[*num_steps].params.subvp_save_surf_addr.addr = &current_mpc_pipe->plane_state->address;
+- block_sequence[*num_steps].params.subvp_save_surf_addr.subvp_index = current_mpc_pipe->subvp_index;
+- block_sequence[*num_steps].func = DMUB_SUBVP_SAVE_SURF_ADDR;
++ if (current_mpc_pipe->plane_state) {
++ if (dc->hwss.set_flip_control_gsl && current_mpc_pipe->plane_state->update_flags.raw) {
++ block_sequence[*num_steps].params.set_flip_control_gsl_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].params.set_flip_control_gsl_params.flip_immediate = current_mpc_pipe->plane_state->flip_immediate;
++ block_sequence[*num_steps].func = HUBP_SET_FLIP_CONTROL_GSL;
++ (*num_steps)++;
++ }
++ if (dc->hwss.program_triplebuffer && dc->debug.enable_tri_buf && current_mpc_pipe->plane_state->update_flags.raw) {
++ block_sequence[*num_steps].params.program_triplebuffer_params.dc = dc;
++ block_sequence[*num_steps].params.program_triplebuffer_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].params.program_triplebuffer_params.enableTripleBuffer = current_mpc_pipe->plane_state->triplebuffer_flips;
++ block_sequence[*num_steps].func = HUBP_PROGRAM_TRIPLEBUFFER;
++ (*num_steps)++;
++ }
++ if (dc->hwss.update_plane_addr && current_mpc_pipe->plane_state->update_flags.bits.addr_update) {
++ if (resource_is_pipe_type(current_mpc_pipe, OTG_MASTER) &&
++ stream_status->mall_stream_config.type == SUBVP_MAIN) {
++ block_sequence[*num_steps].params.subvp_save_surf_addr.dc_dmub_srv = dc->ctx->dmub_srv;
++ block_sequence[*num_steps].params.subvp_save_surf_addr.addr = &current_mpc_pipe->plane_state->address;
++ block_sequence[*num_steps].params.subvp_save_surf_addr.subvp_index = current_mpc_pipe->subvp_index;
++ block_sequence[*num_steps].func = DMUB_SUBVP_SAVE_SURF_ADDR;
++ (*num_steps)++;
++ }
++
++ block_sequence[*num_steps].params.update_plane_addr_params.dc = dc;
++ block_sequence[*num_steps].params.update_plane_addr_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].func = HUBP_UPDATE_PLANE_ADDR;
+ (*num_steps)++;
+ }
+
+- block_sequence[*num_steps].params.update_plane_addr_params.dc = dc;
+- block_sequence[*num_steps].params.update_plane_addr_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].func = HUBP_UPDATE_PLANE_ADDR;
+- (*num_steps)++;
+- }
+-
+- if (hws->funcs.set_input_transfer_func && current_mpc_pipe->plane_state->update_flags.bits.gamma_change) {
+- block_sequence[*num_steps].params.set_input_transfer_func_params.dc = dc;
+- block_sequence[*num_steps].params.set_input_transfer_func_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].params.set_input_transfer_func_params.plane_state = current_mpc_pipe->plane_state;
+- block_sequence[*num_steps].func = DPP_SET_INPUT_TRANSFER_FUNC;
+- (*num_steps)++;
+- }
++ if (hws->funcs.set_input_transfer_func && current_mpc_pipe->plane_state->update_flags.bits.gamma_change) {
++ block_sequence[*num_steps].params.set_input_transfer_func_params.dc = dc;
++ block_sequence[*num_steps].params.set_input_transfer_func_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].params.set_input_transfer_func_params.plane_state = current_mpc_pipe->plane_state;
++ block_sequence[*num_steps].func = DPP_SET_INPUT_TRANSFER_FUNC;
++ (*num_steps)++;
++ }
+
+- if (dc->hwss.program_gamut_remap && current_mpc_pipe->plane_state->update_flags.bits.gamut_remap_change) {
+- block_sequence[*num_steps].params.program_gamut_remap_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].func = DPP_PROGRAM_GAMUT_REMAP;
+- (*num_steps)++;
+- }
+- if (current_mpc_pipe->plane_state->update_flags.bits.input_csc_change) {
+- block_sequence[*num_steps].params.setup_dpp_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].func = DPP_SETUP_DPP;
+- (*num_steps)++;
+- }
+- if (current_mpc_pipe->plane_state->update_flags.bits.coeff_reduction_change) {
+- block_sequence[*num_steps].params.program_bias_and_scale_params.pipe_ctx = current_mpc_pipe;
+- block_sequence[*num_steps].func = DPP_PROGRAM_BIAS_AND_SCALE;
+- (*num_steps)++;
++ if (dc->hwss.program_gamut_remap && current_mpc_pipe->plane_state->update_flags.bits.gamut_remap_change) {
++ block_sequence[*num_steps].params.program_gamut_remap_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].func = DPP_PROGRAM_GAMUT_REMAP;
++ (*num_steps)++;
++ }
++ if (current_mpc_pipe->plane_state->update_flags.bits.input_csc_change) {
++ block_sequence[*num_steps].params.setup_dpp_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].func = DPP_SETUP_DPP;
++ (*num_steps)++;
++ }
++ if (current_mpc_pipe->plane_state->update_flags.bits.coeff_reduction_change) {
++ block_sequence[*num_steps].params.program_bias_and_scale_params.pipe_ctx = current_mpc_pipe;
++ block_sequence[*num_steps].func = DPP_PROGRAM_BIAS_AND_SCALE;
++ (*num_steps)++;
++ }
+ }
+ if (hws->funcs.set_output_transfer_func && current_mpc_pipe->stream->update_flags.bits.out_tf) {
+ block_sequence[*num_steps].params.set_output_transfer_func_params.dc = dc;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index bcb5267b5a6bc2..5ab5866dc73afc 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -3241,6 +3241,8 @@ static bool are_stream_backends_same(
+ bool dc_is_stream_unchanged(
+ struct dc_stream_state *old_stream, struct dc_stream_state *stream)
+ {
++ if (!old_stream || !stream)
++ return false;
+
+ if (!are_stream_backends_same(old_stream, stream))
+ return false;
+@@ -3771,8 +3773,10 @@ static bool planes_changed_for_existing_stream(struct dc_state *context,
+ }
+ }
+
+- if (!stream_status)
++ if (!stream_status) {
+ ASSERT(0);
++ return false;
++ }
+
+ for (i = 0; i < set_count; i++)
+ if (set[i].stream == stream)
+@@ -5271,3 +5275,31 @@ void resource_init_common_dml2_callbacks(struct dc *dc, struct dml2_configuratio
+ dml2_options->svp_pstate.callbacks.remove_phantom_streams_and_planes = &dc_state_remove_phantom_streams_and_planes;
+ dml2_options->svp_pstate.callbacks.release_phantom_streams_and_planes = &dc_state_release_phantom_streams_and_planes;
+ }
++
++/* Returns number of DET segments allocated for a given OTG_MASTER pipe */
++int resource_calculate_det_for_stream(struct dc_state *state, struct pipe_ctx *otg_master)
++{
++ struct pipe_ctx *opp_heads[MAX_PIPES];
++ struct pipe_ctx *dpp_pipes[MAX_PIPES];
++
++ int dpp_count = 0;
++ int det_segments = 0;
++
++ if (!otg_master->stream)
++ return 0;
++
++ int slice_count = resource_get_opp_heads_for_otg_master(otg_master,
++ &state->res_ctx, opp_heads);
++
++ for (int slice_idx = 0; slice_idx < slice_count; slice_idx++) {
++ if (opp_heads[slice_idx]->plane_state) {
++ dpp_count = resource_get_dpp_pipes_for_opp_head(
++ opp_heads[slice_idx],
++ &state->res_ctx,
++ dpp_pipes);
++ for (int dpp_idx = 0; dpp_idx < dpp_count; dpp_idx++)
++ det_segments += dpp_pipes[dpp_idx]->hubp_regs.det_size;
++ }
++ }
++ return det_segments;
++}
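+
resource_calculate_det_for_stream() walks from the OTG master down to its OPP
heads and then to each DPP pipe, summing the programmed hubp_regs.det_size
values. A hedged usage sketch (the dcn401 unlock path later in this patch
performs essentially this comparison; the local variable names are
illustrative):

	/* Sketch: does this stream's DET footprint shrink in the new state? */
	int old_det = resource_calculate_det_for_stream(dc->current_state, old_otg);
	int new_det = resource_calculate_det_for_stream(new_state, new_otg);
	bool shrinking = new_det < old_det;	/* shrinking streams unlock first */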
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+index e990346e51f67e..665157f8d4cbe2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+@@ -211,10 +211,16 @@ struct dc_state *dc_state_create(struct dc *dc, struct dc_state_create_params *p
+ #ifdef CONFIG_DRM_AMD_DC_FP
+ if (dc->debug.using_dml2) {
+ dml2_opt->use_clock_dc_limits = false;
+- dml2_create(dc, dml2_opt, &state->bw_ctx.dml2);
++ if (!dml2_create(dc, dml2_opt, &state->bw_ctx.dml2)) {
++ dc_state_release(state);
++ return NULL;
++ }
+
+ dml2_opt->use_clock_dc_limits = true;
+- dml2_create(dc, dml2_opt, &state->bw_ctx.dml2_dc_power_source);
++ if (!dml2_create(dc, dml2_opt, &state->bw_ctx.dml2_dc_power_source)) {
++ dc_state_release(state);
++ return NULL;
++ }
+ }
+ #endif
+
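The dc_state_create() hunk turns two previously ignored dml2_create() return
values into hard failures: if either DML2 context cannot be built, the
partially constructed state is released and NULL returned, instead of handing
the caller a state with a NULL bw_ctx.dml2 that would fault later. The shape
of the pattern, sketched with the calls from the hunk (the allocator name is
hypothetical):

	/* Sketch: unwind a partially built object when a sub-allocation fails. */
	state = allocate_state();		/* hypothetical allocator */
	if (!state)
		return NULL;
	if (!dml2_create(dc, dml2_opt, &state->bw_ctx.dml2)) {
		dc_state_release(state);	/* frees whatever was acquired */
		return NULL;
	}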
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index c550e89970336a..33793b11279b35 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -178,6 +178,7 @@ struct dc_panel_patch {
+ unsigned int skip_avmute;
+ unsigned int mst_start_top_delay;
+ unsigned int remove_sink_ext_caps;
++ unsigned int disable_colorimetry;
+ };
+
+ struct dc_edid_caps {
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index 68cd3258f4a97a..a64d8f3ec93a39 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -47,7 +47,8 @@ static void dccg35_trigger_dio_fifo_resync(struct dccg *dccg)
+ uint32_t dispclk_rdivider_value = 0;
+
+ REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_RDIVIDER, &dispclk_rdivider_value);
+- REG_UPDATE(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, dispclk_rdivider_value);
++ if (dispclk_rdivider_value != 0)
++ REG_UPDATE(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, dispclk_rdivider_value);
+ }
+
+ static void dcn35_set_dppclk_enable(struct dccg *dccg,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+index 0b49362f71b06c..eaed5d1c398aa0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+@@ -591,6 +591,8 @@ bool cm_helper_translate_curve_to_degamma_hw_format(
+ i += increment) {
+ if (j == hw_points - 1)
+ break;
++ if (i >= TRANSFER_FUNC_POINTS)
++ return false;
+ rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
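The new bounds check matters because the loop advances i by a computed
increment while j tracks hw_points; for some point distributions i can
overshoot the end of the tf_pts arrays before j reaches its own exit
condition. A self-contained sketch of the guarded copy (the table size and
names are illustrative; in this file the real bound is TRANSFER_FUNC_POINTS):

	/* Sketch: strided read out of a fixed-size table, bailing on overshoot. */
	#define POINTS 1025	/* size assumed for illustration */

	static bool copy_strided(double *dst, const double *src,
				 int start, int increment, int hw_points)
	{
		int i = start, j = 0;

		for (; j < hw_points; i += increment, j++) {
			if (i >= POINTS)
				return false;	/* stride overshot the table */
			dst[j] = src[i];
		}
		return true;
	}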
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
+index b8327237ed4418..0433f6b5dac78a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
+@@ -177,6 +177,8 @@ bool cm3_helper_translate_curve_to_hw_format(
+ i += increment) {
+ if (j == hw_points)
+ break;
++ if (i >= TRANSFER_FUNC_POINTS)
++ return false;
+ rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
+@@ -335,6 +337,8 @@ bool cm3_helper_translate_curve_to_degamma_hw_format(
+ i += increment) {
+ if (j == hw_points - 1)
+ break;
++ if (i >= TRANSFER_FUNC_POINTS)
++ return false;
+ rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+index 7c56ad0f881224..e7019c95ba79ec 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+@@ -78,7 +78,7 @@ static void calculate_ttu_cursor(struct display_mode_lib *mode_lib,
+
+ static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
+ {
+- unsigned int ret_val = 0;
++ unsigned int ret_val = 1;
+
+ if (source_format == dm_444_16) {
+ if (!is_chroma)
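Defaulting get_bytes_per_element() to 1 instead of 0 is a divide-by-zero
guard: the value feeds pitch and swath arithmetic downstream, and an
unrecognized source_format_class would otherwise propagate a zero divisor.
Roughly (the wrapper below is hypothetical; the division shape is what the
DML code does with the result):

	/* Sketch: why a 0 default is dangerous for this helper. */
	static unsigned int elems_per_line(unsigned int line_bytes,
					   enum source_format_class fmt,
					   bool chroma)
	{
		/* returns 1 for unknown formats after this patch, so the
		 * division stays defined even on unexpected input */
		return line_bytes / get_bytes_per_element(fmt, chroma);
	}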
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+index 3d95bfa5aca23b..ae525104172801 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+@@ -78,7 +78,7 @@ static void calculate_ttu_cursor(struct display_mode_lib *mode_lib,
+
+ static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
+ {
+- unsigned int ret_val = 0;
++ unsigned int ret_val = 1;
+
+ if (source_format == dm_444_16) {
+ if (!is_chroma)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+index 98502a4f056726..9e1c18b90805d2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+@@ -53,7 +53,7 @@ static void calculate_ttu_cursor(
+
+ static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
+ {
+- unsigned int ret_val = 0;
++ unsigned int ret_val = 1;
+
+ if (source_format == dm_444_16) {
+ if (!is_chroma)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index 9d399c4ce957d5..4cb0227bdd2707 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -871,8 +871,9 @@ static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context)
+ * for VBLANK: (VACTIVE region of the SubVP pipe can fit the MALL prefetch, VBLANK frame time,
+ * and the max of (VBLANK blanking time, MALL region)).
+ */
+- if (stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
+- subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
++ if (drr_timing &&
++ stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
++ subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
+ schedulable = true;
+
+ return schedulable;
+@@ -937,7 +938,7 @@ static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
+ if (!subvp_pipe && pipe_mall_type == SUBVP_MAIN)
+ subvp_pipe = pipe;
+ }
+- if (found) {
++ if (found && subvp_pipe) {
+ phantom_stream = dc_state_get_paired_subvp_stream(context, subvp_pipe->stream);
+ main_timing = &subvp_pipe->stream->timing;
+ phantom_timing = &phantom_stream->timing;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index dae13f202220e3..d8bfc85e5dcd0f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -39,7 +39,7 @@
+
+ static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
+ {
+- unsigned int ret_val = 0;
++ unsigned int ret_val = 1;
+
+ if (source_format == dm_444_16) {
+ if (!is_chroma)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 2baaf602815ec9..ef0150a258b121 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -827,6 +827,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+
+ if (plane_state->mcm_luts.lut3d_data.lut3d_src == DC_CM2_TRANSFER_FUNC_SOURCE_VIDMEM) {
+ plane->tdlut.setup_for_tdlut = true;
++
+ switch (plane_state->mcm_luts.lut3d_data.gpu_mem_params.layout) {
+ case DC_CM2_GPU_MEM_LAYOUT_3D_SWIZZLE_LINEAR_RGB:
+ case DC_CM2_GPU_MEM_LAYOUT_3D_SWIZZLE_LINEAR_BGR:
+@@ -836,6 +837,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+ plane->tdlut.tdlut_addressing_mode = dml2_tdlut_simple_linear;
+ break;
+ }
++
+ switch (plane_state->mcm_luts.lut3d_data.gpu_mem_params.size) {
+ case DC_CM2_GPU_MEM_SIZE_171717:
+ plane->tdlut.tdlut_width_mode = dml2_tdlut_width_17_cube;
+@@ -844,8 +846,8 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+ //plane->tdlut.tdlut_width_mode = dml2_tdlut_width_flatten; // dml2_tdlut_width_flatten undefined
+ break;
+ }
+- } else
+- plane->tdlut.setup_for_tdlut = false;
++ }
++ plane->tdlut.setup_for_tdlut |= dml_ctx->config.force_tdlut_enable;
+
+ plane->dynamic_meta_data.enable = false;
+ plane->dynamic_meta_data.lines_before_active_required = 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+index 6f4026e396e098..c40cd5d634568b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+@@ -7231,10 +7231,9 @@ static bool dml_core_mode_support(struct dml2_core_calcs_mode_support_ex *in_out
+ /* Cursor Support Check */
+ mode_lib->ms.support.CursorSupport = true;
+ for (k = 0; k < mode_lib->ms.num_active_planes; k++) {
+- if (display_cfg->plane_descriptors[k].cursor.cursor_width > 0.0) {
+- if (display_cfg->plane_descriptors[k].cursor.cursor_bpp == 64 && mode_lib->ip.cursor_64bpp_support == false) {
++ if (display_cfg->plane_descriptors[k].cursor.num_cursors > 0) {
++ if (display_cfg->plane_descriptors[k].cursor.cursor_bpp == 64 && mode_lib->ip.cursor_64bpp_support == false)
+ mode_lib->ms.support.CursorSupport = false;
+- }
+ }
+ }
+
+@@ -8091,27 +8090,31 @@ static bool dml_core_mode_support(struct dml2_core_calcs_mode_support_ex *in_out
+ for (k = 0; k < mode_lib->ms.num_active_planes; ++k) {
+ double line_time_us = (double)display_cfg->stream_descriptors[display_cfg->plane_descriptors[k].stream_index].timing.h_total / ((double)display_cfg->stream_descriptors[display_cfg->plane_descriptors[k].stream_index].timing.pixel_clock_khz / 1000);
+ bool cursor_not_enough_urgent_latency_hiding = 0;
+- calculate_cursor_req_attributes(
+- display_cfg->plane_descriptors[k].cursor.cursor_width,
+- display_cfg->plane_descriptors[k].cursor.cursor_bpp,
+
+- // output
+- &s->cursor_lines_per_chunk[k],
+- &s->cursor_bytes_per_line[k],
+- &s->cursor_bytes_per_chunk[k],
+- &s->cursor_bytes[k]);
+-
+- calculate_cursor_urgent_burst_factor(
+- mode_lib->ip.cursor_buffer_size,
+- display_cfg->plane_descriptors[k].cursor.cursor_width,
+- s->cursor_bytes_per_chunk[k],
+- s->cursor_lines_per_chunk[k],
+- line_time_us,
+- mode_lib->ms.UrgLatency,
++ if (display_cfg->plane_descriptors[k].cursor.num_cursors > 0) {
++ calculate_cursor_req_attributes(
++ display_cfg->plane_descriptors[k].cursor.cursor_width,
++ display_cfg->plane_descriptors[k].cursor.cursor_bpp,
++
++ // output
++ &s->cursor_lines_per_chunk[k],
++ &s->cursor_bytes_per_line[k],
++ &s->cursor_bytes_per_chunk[k],
++ &s->cursor_bytes[k]);
++
++ calculate_cursor_urgent_burst_factor(
++ mode_lib->ip.cursor_buffer_size,
++ display_cfg->plane_descriptors[k].cursor.cursor_width,
++ s->cursor_bytes_per_chunk[k],
++ s->cursor_lines_per_chunk[k],
++ line_time_us,
++ mode_lib->ms.UrgLatency,
++
++ // output
++ &mode_lib->ms.UrgentBurstFactorCursor[k],
++ &cursor_not_enough_urgent_latency_hiding);
++ }
+
+- // output
+- &mode_lib->ms.UrgentBurstFactorCursor[k],
+- &cursor_not_enough_urgent_latency_hiding);
+ mode_lib->ms.UrgentBurstFactorCursorPre[k] = mode_lib->ms.UrgentBurstFactorCursor[k];
+
+ #ifdef __DML_VBA_DEBUG__
+@@ -10592,31 +10595,33 @@ static bool dml_core_mode_programming(struct dml2_core_calcs_mode_programming_ex
+
+ for (k = 0; k < s->num_active_planes; ++k) {
+ bool cursor_not_enough_urgent_latency_hiding = 0;
+- double line_time_us;
++ double line_time_us = 0.0;
+
+- calculate_cursor_req_attributes(
+- display_cfg->plane_descriptors[k].cursor.cursor_width,
+- display_cfg->plane_descriptors[k].cursor.cursor_bpp,
++ line_time_us = display_cfg->stream_descriptors[display_cfg->plane_descriptors[k].stream_index].timing.h_total /
++ ((double)display_cfg->stream_descriptors[display_cfg->plane_descriptors[k].stream_index].timing.pixel_clock_khz / 1000);
++ if (display_cfg->plane_descriptors[k].cursor.num_cursors > 0) {
++ calculate_cursor_req_attributes(
++ display_cfg->plane_descriptors[k].cursor.cursor_width,
++ display_cfg->plane_descriptors[k].cursor.cursor_bpp,
+
+- // output
+- &s->cursor_lines_per_chunk[k],
+- &s->cursor_bytes_per_line[k],
+- &s->cursor_bytes_per_chunk[k],
+- &s->cursor_bytes[k]);
+-
+- line_time_us = display_cfg->stream_descriptors[display_cfg->plane_descriptors[k].stream_index].timing.h_total / ((double)display_cfg->stream_descriptors[display_cfg->plane_descriptors[k].stream_index].timing.pixel_clock_khz / 1000);
+-
+- calculate_cursor_urgent_burst_factor(
+- mode_lib->ip.cursor_buffer_size,
+- display_cfg->plane_descriptors[k].cursor.cursor_width,
+- s->cursor_bytes_per_chunk[k],
+- s->cursor_lines_per_chunk[k],
+- line_time_us,
+- mode_lib->mp.UrgentLatency,
++ // output
++ &s->cursor_lines_per_chunk[k],
++ &s->cursor_bytes_per_line[k],
++ &s->cursor_bytes_per_chunk[k],
++ &s->cursor_bytes[k]);
++
++ calculate_cursor_urgent_burst_factor(
++ mode_lib->ip.cursor_buffer_size,
++ display_cfg->plane_descriptors[k].cursor.cursor_width,
++ s->cursor_bytes_per_chunk[k],
++ s->cursor_lines_per_chunk[k],
++ line_time_us,
++ mode_lib->mp.UrgentLatency,
+
+- // output
+- &mode_lib->mp.UrgentBurstFactorCursor[k],
+- &cursor_not_enough_urgent_latency_hiding);
++ // output
++ &mode_lib->mp.UrgentBurstFactorCursor[k],
++ &cursor_not_enough_urgent_latency_hiding);
++ }
+ mode_lib->mp.UrgentBurstFactorCursorPre[k] = mode_lib->mp.UrgentBurstFactorCursor[k];
+
+ CalculateUrgentBurstFactor(
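Both dml_core_mode_support() and dml_core_mode_programming() now gate the
cursor request/burst math on num_cursors > 0 rather than on a nonzero cursor
width, so a plane that describes a cursor but never enables one no longer
contributes cursor bandwidth. The control-flow change, reduced to a
self-contained sketch (all names and the burst formula are toy stand-ins):

	/* Sketch: gate optional per-plane work on an explicit enable count. */
	struct cursor_desc { unsigned int num_cursors, width, bpp; };

	static double compute_burst(const struct cursor_desc *c)
	{
		return c->width * c->bpp / 8.0;	/* toy stand-in */
	}

	static void account_cursor(const struct cursor_desc *c,
				   double *burst, double *burst_pre)
	{
		if (c->num_cursors > 0)
			*burst = compute_burst(c);
		*burst_pre = *burst;	/* mirror runs either way, as in the hunk */
	}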
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared.c
+index 81f0a6f19f87b7..679b2003190341 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared.c
+@@ -9386,8 +9386,8 @@ static void CalculateVMGroupAndRequestTimes(
+ double TimePerVMRequestVBlank[],
+ double TimePerVMRequestFlip[])
+ {
+- unsigned int num_group_per_lower_vm_stage = 0;
+- unsigned int num_req_per_lower_vm_stage = 0;
++ unsigned int num_group_per_lower_vm_stage = 1;
++ unsigned int num_req_per_lower_vm_stage = 1;
+
+ #ifdef __DML_VBA_DEBUG__
+ dml2_printf("DML::%s: NumberOfActiveSurfaces = %u\n", __func__, NumberOfActiveSurfaces);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h
+index 1343b744eeb319..67e32a4ab01140 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h
+@@ -865,7 +865,7 @@ struct dml2_core_calcs_mode_support_locals {
+ unsigned int dpte_row_bytes_per_row_l[DML2_MAX_PLANES];
+ unsigned int dpte_row_bytes_per_row_c[DML2_MAX_PLANES];
+
+- bool dummy_boolean[2];
++ bool dummy_boolean[3];
+ unsigned int dummy_integer[3];
+ unsigned int dummy_integer_array[36][DML2_MAX_PLANES];
+ enum dml2_odm_mode dummy_odm_mode[DML2_MAX_PLANES];
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
+index c4c52173ef2240..11c904ae29586d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
+@@ -303,7 +303,6 @@ void build_unoptimized_policy_settings(enum dml_project_id project, struct dml_m
+ if (project == dml_project_dcn35 ||
+ project == dml_project_dcn351) {
+ policy->DCCProgrammingAssumesScanDirectionUnknownFinal = false;
+- policy->EnhancedPrefetchScheduleAccelerationFinal = 0;
+ policy->AllowForPStateChangeOrStutterInVBlankFinal = dml_prefetch_support_uclk_fclk_and_stutter_if_possible; /*new*/
+ policy->UseOnlyMaxPrefetchModes = 1;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index 8b9dcee772660c..73c285b751d6fb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -953,7 +953,9 @@ static void get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc
+ memcpy(out, &temp_pipe->plane_res.scl_data, sizeof(*out));
+ }
+
+-static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_stream_state *in)
++static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location,
++ const struct dc_stream_state *in,
++ const struct soc_bounding_box_st *soc)
+ {
+ dml_uint_t width, height;
+
+@@ -970,7 +972,7 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ out->CursorBPP[location] = dml_cur_32bit;
+ out->CursorWidth[location] = 256;
+
+- out->GPUVMMinPageSizeKBytes[location] = 256;
++ out->GPUVMMinPageSizeKBytes[location] = soc->gpuvm_min_page_size_kbytes;
+
+ out->ViewportWidth[location] = width;
+ out->ViewportHeight[location] = height;
+@@ -1007,7 +1009,9 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ out->ScalerEnabled[location] = false;
+ }
+
+-static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_plane_state *in, struct dc_state *context)
++static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location,
++ const struct dc_plane_state *in, struct dc_state *context,
++ const struct soc_bounding_box_st *soc)
+ {
+ struct scaler_data *scaler_data = kzalloc(sizeof(*scaler_data), GFP_KERNEL);
+ if (!scaler_data)
+@@ -1018,7 +1022,7 @@ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out
+ out->CursorBPP[location] = dml_cur_32bit;
+ out->CursorWidth[location] = 256;
+
+- out->GPUVMMinPageSizeKBytes[location] = 256;
++ out->GPUVMMinPageSizeKBytes[location] = soc->gpuvm_min_page_size_kbytes;
+
+ out->ViewportWidth[location] = scaler_data->viewport.width;
+ out->ViewportHeight[location] = scaler_data->viewport.height;
+@@ -1299,7 +1303,8 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
+ disp_cfg_plane_location = dml_dispcfg->num_surfaces++;
+
+ populate_dummy_dml_surface_cfg(&dml_dispcfg->surface, disp_cfg_plane_location, context->streams[i]);
+- populate_dummy_dml_plane_cfg(&dml_dispcfg->plane, disp_cfg_plane_location, context->streams[i]);
++ populate_dummy_dml_plane_cfg(&dml_dispcfg->plane, disp_cfg_plane_location,
++ context->streams[i], &dml2->v20.dml_core_ctx.soc);
+
+ dml_dispcfg->plane.BlendingAndTiming[disp_cfg_plane_location] = disp_cfg_stream_location;
+
+@@ -1315,7 +1320,10 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
+ ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+
+ populate_dml_surface_cfg_from_plane_state(dml2->v20.dml_core_ctx.project, &dml_dispcfg->surface, disp_cfg_plane_location, context->stream_status[i].plane_states[j]);
+- populate_dml_plane_cfg_from_plane_state(&dml_dispcfg->plane, disp_cfg_plane_location, context->stream_status[i].plane_states[j], context);
++ populate_dml_plane_cfg_from_plane_state(
++ &dml_dispcfg->plane, disp_cfg_plane_location,
++ context->stream_status[i].plane_states[j], context,
++ &dml2->v20.dml_core_ctx.soc);
+
+ if (stream_mall_type == SUBVP_MAIN) {
+ dml_dispcfg->plane.UseMALLForPStateChange[disp_cfg_plane_location] = dml_use_mall_pstate_change_sub_viewport;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
+index 023325e8f6e22f..0f944fcfd5a5bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
+@@ -236,6 +236,7 @@ struct dml2_configuration_options {
+
+ bool use_clock_dc_limits;
+ bool gpuvm_enable;
++ bool force_tdlut_enable;
+ struct dml2_soc_bb *bb_from_dmub;
+ };
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c
+index 181041d6d177c0..70ddc0392a5b67 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c
+@@ -1170,6 +1170,28 @@ static void dcn401_program_compbuf_segments(struct hubbub *hubbub, unsigned comp
+ }
+ }
+
++static void dcn401_wait_for_det_update(struct hubbub *hubbub, int hubp_inst)
++{
++ struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
++
++ switch (hubp_inst) {
++ case 0:
++ REG_WAIT(DCHUBBUB_DET0_CTRL, DET0_SIZE_CURRENT, hubbub2->det0_size, 1, 100000); /* 1 vupdate at 10hz */
++ break;
++ case 1:
++ REG_WAIT(DCHUBBUB_DET1_CTRL, DET1_SIZE_CURRENT, hubbub2->det1_size, 1, 100000);
++ break;
++ case 2:
++ REG_WAIT(DCHUBBUB_DET2_CTRL, DET2_SIZE_CURRENT, hubbub2->det2_size, 1, 100000);
++ break;
++ case 3:
++ REG_WAIT(DCHUBBUB_DET3_CTRL, DET3_SIZE_CURRENT, hubbub2->det3_size, 1, 100000);
++ break;
++ default:
++ break;
++ }
++}
++
+ static const struct hubbub_funcs hubbub4_01_funcs = {
+ .update_dchub = hubbub2_update_dchub,
+ .init_dchub_sys_ctx = hubbub3_init_dchub_sys_ctx,
+@@ -1192,6 +1214,7 @@ static const struct hubbub_funcs hubbub4_01_funcs = {
+ .set_request_limit = hubbub32_set_request_limit,
+ .program_det_segments = dcn401_program_det_segments,
+ .program_compbuf_segments = dcn401_program_compbuf_segments,
++ .wait_for_det_update = dcn401_wait_for_det_update,
+ };
+
+ void hubbub401_construct(struct dcn20_hubbub *hubbub2,
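dcn401_wait_for_det_update() is a bounded poll: REG_WAIT retries the
DETn_SIZE_CURRENT read every 1 us up to 100000 times (about 100 ms, one
vupdate at 10 Hz per the comment) until the hardware reports the programmed
segment count. The underlying pattern, written out as a self-contained
sketch (REG_WAIT itself is a DC register macro; this expansion is an
approximation, not its literal definition):

	/* Sketch: poll a status value until it matches, with a try budget. */
	static bool poll_until_match(unsigned int (*read_val)(void),
				     unsigned int target,
				     unsigned int delay_us,
				     unsigned int max_tries)
	{
		unsigned int i;

		for (i = 0; i < max_tries; i++) {
			if (read_val() == target)
				return true;
			udelay(delay_us);	/* <linux/delay.h> busy-wait */
		}
		return false;	/* timed out; caller decides how to react */
	}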
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
+index bf399819ca800e..22ac2b7e49aeae 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
+@@ -749,7 +749,8 @@ bool hubp1_is_flip_pending(struct hubp *hubp)
+ if (flip_pending)
+ return true;
+
+- if (earliest_inuse_address.grph.addr.quad_part != hubp->request_address.grph.addr.quad_part)
++ if (hubp &&
++ earliest_inuse_address.grph.addr.quad_part != hubp->request_address.grph.addr.quad_part)
+ return true;
+
+ return false;
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+index 6bba020ad6fbfe..0637e4c552d8a2 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+@@ -927,7 +927,8 @@ bool hubp2_is_flip_pending(struct hubp *hubp)
+ if (flip_pending)
+ return true;
+
+- if (earliest_inuse_address.grph.addr.quad_part != hubp->request_address.grph.addr.quad_part)
++ if (hubp &&
++ earliest_inuse_address.grph.addr.quad_part != hubp->request_address.grph.addr.quad_part)
+ return true;
+
+ return false;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 4d6e90c49ad531..96371dac72fc10 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -950,7 +950,7 @@ void dce110_edp_backlight_control(
+ {
+ struct dc_context *ctx = link->ctx;
+ struct bp_transmitter_control cntl = { 0 };
+- uint8_t pwrseq_instance;
++ uint8_t pwrseq_instance = 0;
+ unsigned int pre_T11_delay = OLED_PRE_T11_DELAY;
+ unsigned int post_T7_delay = OLED_POST_T7_DELAY;
+
+@@ -1003,7 +1003,8 @@ void dce110_edp_backlight_control(
+ */
+ /* dc_service_sleep_in_milliseconds(50); */
+ /*edp 1.2*/
+- pwrseq_instance = link->panel_cntl->pwrseq_inst;
++ if (link->panel_cntl)
++ pwrseq_instance = link->panel_cntl->pwrseq_inst;
+
+ if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_ON) {
+ if (!link->dc->config.edp_no_power_sequencing)
+@@ -2085,13 +2086,20 @@ static void set_drr(struct pipe_ctx **pipe_ctx,
+ * as well.
+ */
+ for (i = 0; i < num_pipes; i++) {
+- pipe_ctx[i]->stream_res.tg->funcs->set_drr(
+- pipe_ctx[i]->stream_res.tg, &params);
+-
+- if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+- pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(
+- pipe_ctx[i]->stream_res.tg,
+- event_triggers, num_frames);
++ /* dc_state_destruct() might null the stream resources, so fetch tg
++ * here first to avoid a race condition. The lifetime of the pointee
++ * itself (the timing_generator object) is not a problem here.
++ */
++ struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
++
++ if ((tg != NULL) && tg->funcs) {
++ if (tg->funcs->set_drr)
++ tg->funcs->set_drr(tg, &params);
++ if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
++ if (tg->funcs->set_static_screen_control)
++ tg->funcs->set_static_screen_control(
++ tg, event_triggers, num_frames);
++ }
+ }
+ }
+
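The set_drr() rework is about read ordering, not only NULL checks: the old
code read pipe_ctx[i]->stream_res.tg several times, and dc_state_destruct()
can null that field between reads. Fetching tg once into a local closes the
race, because the timing_generator object itself outlives the state. Reduced
to its essence:

	/* Racy shape (two independent reads; the second may observe NULL):
	 *	if (pipe_ctx[i]->stream_res.tg->funcs->set_drr)
	 *		pipe_ctx[i]->stream_res.tg->funcs->set_drr(...);
	 *
	 * Fixed shape (single read; only the local is dereferenced after):
	 */
	struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;

	if (tg && tg->funcs && tg->funcs->set_drr)
		tg->funcs->set_drr(tg, &params);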
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 1d2be574f66809..83942abd08e848 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1554,7 +1554,7 @@ void dcn10_init_hw(struct dc *dc)
+ dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+
+ /* Align bw context with hw config when system resume. */
+- if (dc->clk_mgr->clks.dispclk_khz != 0 && dc->clk_mgr->clks.dppclk_khz != 0) {
++ if (dc->clk_mgr && dc->clk_mgr->clks.dispclk_khz != 0 && dc->clk_mgr->clks.dppclk_khz != 0) {
+ dc->current_state->bw_ctx.bw.dcn.clk.dispclk_khz = dc->clk_mgr->clks.dispclk_khz;
+ dc->current_state->bw_ctx.bw.dcn.clk.dppclk_khz = dc->clk_mgr->clks.dppclk_khz;
+ }
+@@ -1674,7 +1674,7 @@ void dcn10_init_hw(struct dc *dc)
+ REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
+ }
+
+- if (dc->clk_mgr->funcs->notify_wm_ranges)
++ if (dc->clk_mgr && dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 2532ad410cb566..936c0ec076bc4c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1044,7 +1044,8 @@ bool dcn20_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ /*
+ * if above if is not executed then 'params' equal to 0 and set in bypass
+ */
+- mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++ if (mpc->funcs->set_output_gamma)
++ mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
+
+ return true;
+ }
+@@ -1921,22 +1922,34 @@ static void dcn20_program_pipe(
+ dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size);
+ }
+
+- if (pipe_ctx->update_flags.raw || pipe_ctx->plane_state->update_flags.raw || pipe_ctx->stream->update_flags.raw)
++ if (pipe_ctx->update_flags.raw ||
++ (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
++ pipe_ctx->stream->update_flags.raw)
+ dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
+
+- if (pipe_ctx->update_flags.bits.enable
+- || pipe_ctx->plane_state->update_flags.bits.hdr_mult)
++ if (pipe_ctx->update_flags.bits.enable ||
++ (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.bits.hdr_mult))
++ hws->funcs.set_hdr_multiplier(pipe_ctx);
++
++ if ((pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.bits.hdr_mult) ||
++ pipe_ctx->update_flags.bits.enable)
+ hws->funcs.set_hdr_multiplier(pipe_ctx);
+
++
+ if (hws->funcs.populate_mcm_luts) {
+- hws->funcs.populate_mcm_luts(dc, pipe_ctx, pipe_ctx->plane_state->mcm_luts,
+- pipe_ctx->plane_state->lut_bank_a);
+- pipe_ctx->plane_state->lut_bank_a = !pipe_ctx->plane_state->lut_bank_a;
++ if (pipe_ctx->plane_state) {
++ hws->funcs.populate_mcm_luts(dc, pipe_ctx, pipe_ctx->plane_state->mcm_luts,
++ pipe_ctx->plane_state->lut_bank_a);
++ pipe_ctx->plane_state->lut_bank_a = !pipe_ctx->plane_state->lut_bank_a;
++ }
+ }
+- if (pipe_ctx->update_flags.bits.enable ||
+- pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change ||
+- pipe_ctx->plane_state->update_flags.bits.gamma_change ||
+- pipe_ctx->plane_state->update_flags.bits.lut_3d)
++ if ((pipe_ctx->plane_state &&
++ pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change) ||
++ (pipe_ctx->plane_state &&
++ pipe_ctx->plane_state->update_flags.bits.gamma_change) ||
++ (pipe_ctx->plane_state &&
++ pipe_ctx->plane_state->update_flags.bits.lut_3d) ||
++ pipe_ctx->update_flags.bits.enable)
+ hws->funcs.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
+
+ /* dcn10_translate_regamma_to_hw_format takes 750us to finish
+@@ -1946,7 +1959,8 @@ static void dcn20_program_pipe(
+ if (pipe_ctx->update_flags.bits.enable ||
+ pipe_ctx->update_flags.bits.plane_changed ||
+ pipe_ctx->stream->update_flags.bits.out_tf ||
+- pipe_ctx->plane_state->update_flags.bits.output_tf_change)
++ (pipe_ctx->plane_state &&
++ pipe_ctx->plane_state->update_flags.bits.output_tf_change))
+ hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+
+ /* If the pipe has been enabled or has a different opp, we
+@@ -1970,7 +1984,7 @@ static void dcn20_program_pipe(
+ }
+
+ /* Set ABM pipe after other pipe configurations done */
+- if (pipe_ctx->plane_state->visible) {
++ if ((pipe_ctx->plane_state && pipe_ctx->plane_state->visible)) {
+ if (pipe_ctx->stream_res.abm) {
+ dc->hwss.set_pipe(pipe_ctx);
+ pipe_ctx->stream_res.abm->funcs->set_abm_level(pipe_ctx->stream_res.abm,
+@@ -2283,6 +2297,9 @@ void dcn20_post_unlock_program_front_end(
+ }
+ }
+
++ if (!hwseq)
++ return;
++
+ /* P-State support transitions:
+ * Natural -> FPO: P-State disabled in prepare, force disallow anytime is safe
+ * FPO -> Natural: Unforce anytime after FW disable is safe (P-State will assert naturally)
+@@ -2290,7 +2307,7 @@ void dcn20_post_unlock_program_front_end(
+ * FPO -> Unsupported: P-State disabled in prepare, unforce disallow anytime is safe
+ * FPO <-> SubVP: Force disallow is maintained on the FPO / SubVP pipes
+ */
+- if (hwseq && hwseq->funcs.update_force_pstate)
++ if (hwseq->funcs.update_force_pstate)
+ dc->hwseq->funcs.update_force_pstate(dc, context);
+
+ /* Only program the MALL registers after all the main and phantom pipes
+@@ -2529,6 +2546,9 @@ bool dcn20_wait_for_blank_complete(
+ {
+ int counter;
+
++ if (!opp)
++ return false;
++
+ for (counter = 0; counter < 1000; counter++) {
+ if (!opp->funcs->dpg_is_pending(opp))
+ break;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index 2b4dae2d4b0c18..6f5c6f7b19db63 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -629,7 +629,7 @@ void dcn30_init_hw(struct dc *dc)
+ uint32_t backlight = MAX_BACKLIGHT_LEVEL;
+ uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+
+- if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks)
+ dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+
+ // Initialize the dccg
+@@ -790,11 +790,12 @@ void dcn30_init_hw(struct dc *dc)
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+
+- if (dc->clk_mgr->funcs->notify_wm_ranges)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+
+ //if softmax is enabled then hardmax will be set by a different call
+- if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->set_hard_max_memclk &&
++ !dc->clk_mgr->dc_mode_softmax_enabled)
+ dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+
+ if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+index 746c522adf84ca..3d4b31bd994691 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+@@ -256,10 +256,10 @@ void dcn31_init_hw(struct dc *dc)
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+
+- if (dc->clk_mgr->funcs->notify_wm_ranges)
++ if (dc->clk_mgr && dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+
+- if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
++ if (dc->clk_mgr && dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+
+ if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index df80072174b79d..33e3d94e429d4e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -582,7 +582,9 @@ bool dcn32_set_output_transfer_func(struct dc *dc,
+ }
+ }
+
+- mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++ if (mpc->funcs->set_output_gamma)
++ mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++
+ return ret;
+ }
+
+@@ -779,7 +781,7 @@ void dcn32_init_hw(struct dc *dc)
+ uint32_t backlight = MAX_BACKLIGHT_LEVEL;
+ uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+
+- if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks)
+ dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+
+ // Initialize the dccg
+@@ -958,10 +960,11 @@ void dcn32_init_hw(struct dc *dc)
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+
+- if (dc->clk_mgr->funcs->notify_wm_ranges)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+
+- if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->set_hard_max_memclk &&
++ !dc->clk_mgr->dc_mode_softmax_enabled)
+ dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+
+ if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index 80db40787e0197..12a39dbc35419e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -235,7 +235,7 @@ void dcn35_init_hw(struct dc *dc)
+ if (hws->funcs.enable_power_gating_plane)
+ hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+ */
+- if (res_pool->hubbub->funcs->dchubbub_init)
++ if (res_pool->hubbub && res_pool->hubbub->funcs->dchubbub_init)
+ res_pool->hubbub->funcs->dchubbub_init(dc->res_pool->hubbub);
+ /* If taking control over from VBIOS, we may want to optimize our first
+ * mode set, so we need to skip powering down pipes until we know which
+@@ -328,10 +328,10 @@ void dcn35_init_hw(struct dc *dc)
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+
+- if (dc->clk_mgr->funcs->notify_wm_ranges)
++ if (dc->clk_mgr && dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+
+- if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
++ if (dc->clk_mgr && dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+
+
+@@ -1054,7 +1054,7 @@ void dcn35_calc_blocks_to_gate(struct dc *dc, struct dc_state *context,
+ if (pipe_ctx->plane_res.hubp)
+ update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->plane_res.hubp->inst] = false;
+
+- if (pipe_ctx->plane_res.dpp)
++ if (pipe_ctx->plane_res.dpp && pipe_ctx->plane_res.hubp)
+ update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->plane_res.hubp->inst] = false;
+
+ if (pipe_ctx->plane_res.dpp || pipe_ctx->stream_res.opp)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index 2c50c0f745a0be..edd302ebbdfcff 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -221,8 +221,9 @@ void dcn401_init_hw(struct dc *dc)
+ int edp_num;
+ uint32_t backlight = MAX_BACKLIGHT_LEVEL;
+ uint32_t user_level = MAX_BACKLIGHT_LEVEL;
++ int current_dchub_ref_freq = 0;
+
+- if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks) {
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks) {
+ dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+
+ // mark dcmode limits present if any clock has distinct AC and DC values from SMU
+@@ -264,6 +265,8 @@ void dcn401_init_hw(struct dc *dc)
+ dc->ctx->dc_bios->fw_info.pll_info.crystal_frequency,
+ &res_pool->ref_clocks.dccg_ref_clock_inKhz);
+
++ current_dchub_ref_freq = res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000;
++
+ (res_pool->hubbub->funcs->get_dchub_ref_freq)(res_pool->hubbub,
+ res_pool->ref_clocks.dccg_ref_clock_inKhz,
+ &res_pool->ref_clocks.dchub_ref_clock_inKhz);
+@@ -413,7 +416,7 @@ void dcn401_init_hw(struct dc *dc)
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+
+- if (dc->clk_mgr->funcs->notify_wm_ranges)
++ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+
+ if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+@@ -436,9 +439,12 @@ void dcn401_init_hw(struct dc *dc)
+ dc->caps.dmub_caps.mclk_sw = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch_ver > 0;
+ dc->caps.dmub_caps.fams_ver = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch_ver;
+ dc->debug.fams2_config.bits.enable &= dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch_ver == 2;
+- if (!dc->debug.fams2_config.bits.enable && dc->res_pool->funcs->update_bw_bounding_box) {
+- /* update bounding box if FAMS2 disabled */
+- dc->res_pool->funcs->update_bw_bounding_box(dc, dc->clk_mgr->bw_params);
++ if ((!dc->debug.fams2_config.bits.enable && dc->res_pool->funcs->update_bw_bounding_box)
++ || res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000 != current_dchub_ref_freq) {
++ /* update bounding box if FAMS2 disabled, or if dchub clk has changed */
++ if (dc->clk_mgr)
++ dc->res_pool->funcs->update_bw_bounding_box(dc,
++ dc->clk_mgr->bw_params);
+ }
+ }
+ }
+@@ -742,7 +748,9 @@ bool dcn401_set_output_transfer_func(struct dc *dc,
+ }
+ }
+
+- mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++ if (mpc->funcs->set_output_gamma)
++ mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++
+ return ret;
+ }
+
+@@ -1669,3 +1677,94 @@ void dcn401_hardware_release(struct dc *dc)
+ }
+ }
+
++void dcn401_wait_for_det_buffer_update(struct dc *dc, struct dc_state *context, struct pipe_ctx *otg_master)
++{
++ struct pipe_ctx *opp_heads[MAX_PIPES];
++ struct pipe_ctx *dpp_pipes[MAX_PIPES];
++ struct hubbub *hubbub = dc->res_pool->hubbub;
++ int dpp_count = 0;
++
++ if (!otg_master->stream)
++ return;
++
++ int slice_count = resource_get_opp_heads_for_otg_master(otg_master,
++ &context->res_ctx, opp_heads);
++
++ for (int slice_idx = 0; slice_idx < slice_count; slice_idx++) {
++ if (opp_heads[slice_idx]->plane_state) {
++ dpp_count = resource_get_dpp_pipes_for_opp_head(
++ opp_heads[slice_idx],
++ &context->res_ctx,
++ dpp_pipes);
++ for (int dpp_idx = 0; dpp_idx < dpp_count; dpp_idx++) {
++ struct pipe_ctx *dpp_pipe = dpp_pipes[dpp_idx];
++ if (dpp_pipe && hubbub &&
++ dpp_pipe->plane_res.hubp &&
++ hubbub->funcs->wait_for_det_update)
++ hubbub->funcs->wait_for_det_update(hubbub, dpp_pipe->plane_res.hubp->inst);
++ }
++ }
++ }
++}
++
++void dcn401_interdependent_update_lock(struct dc *dc,
++ struct dc_state *context, bool lock)
++{
++ unsigned int i = 0;
++ struct pipe_ctx *pipe = NULL;
++ struct timing_generator *tg = NULL;
++ bool pipe_unlocked[MAX_PIPES] = {0};
++
++ if (lock) {
++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
++ pipe = &context->res_ctx.pipe_ctx[i];
++ tg = pipe->stream_res.tg;
++
++ if (!resource_is_pipe_type(pipe, OTG_MASTER) ||
++ !tg->funcs->is_tg_enabled(tg) ||
++ dc_state_get_pipe_subvp_type(context, pipe) == SUBVP_PHANTOM)
++ continue;
++ dc->hwss.pipe_control_lock(dc, pipe, true);
++ }
++ } else {
++ /* Unlock pipes based on the change in DET allocation instead of pipe index
++ * Prevents over allocation of DET during unlock process
++ * e.g. 2 pipe config with different streams with a max of 20 DET segments
++ * Before: After:
++ * - Pipe0: 10 DET segments - Pipe0: 12 DET segments
++ * - Pipe1: 10 DET segments - Pipe1: 8 DET segments
++ * If Pipe0 gets updated first, 22 DET segments will be allocated
++ */
++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
++ pipe = &context->res_ctx.pipe_ctx[i];
++ tg = pipe->stream_res.tg;
++ int current_pipe_idx = i;
++
++ if (!resource_is_pipe_type(pipe, OTG_MASTER) ||
++ !tg->funcs->is_tg_enabled(tg) ||
++ dc_state_get_pipe_subvp_type(context, pipe) == SUBVP_PHANTOM) {
++ pipe_unlocked[i] = true;
++ continue;
++ }
++
++ // If the same stream exists in old context, ensure the OTG_MASTER pipes for the same stream get compared
++ struct pipe_ctx *old_otg_master = resource_get_otg_master_for_stream(&dc->current_state->res_ctx, pipe->stream);
++
++ if (old_otg_master)
++ current_pipe_idx = old_otg_master->pipe_idx;
++ if (resource_calculate_det_for_stream(context, pipe) <
++ resource_calculate_det_for_stream(dc->current_state, &dc->current_state->res_ctx.pipe_ctx[current_pipe_idx])) {
++ dc->hwss.pipe_control_lock(dc, pipe, false);
++ pipe_unlocked[i] = true;
++ dcn401_wait_for_det_buffer_update(dc, context, pipe);
++ }
++ }
++
++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
++ if (pipe_unlocked[i])
++ continue;
++ pipe = &context->res_ctx.pipe_ctx[i];
++ dc->hwss.pipe_control_lock(dc, pipe, false);
++ }
++ }
++}
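+
Working through the comment's numbers: with a 20-segment budget, unlocking
Pipe0 (growing to 12) while Pipe1 still holds 10 would transiently demand 22
segments; unlocking Pipe1 (shrinking to 8) first, and waiting for its DET to
drain, peaks at 20. That is the two-pass structure above, sketched in
isolation with hypothetical helpers:

	/* Sketch: release shrinking pipes first, growing pipes second. */
	for (i = 0; i < pipe_count; i++) {
		if (det_new[i] < det_old[i]) {	/* shrinking: safe now */
			unlock_pipe(i);
			wait_for_det_drain(i);	/* hardware must shrink first */
			unlocked[i] = true;
		}
	}
	for (i = 0; i < pipe_count; i++)
		if (!unlocked[i])		/* growing or unchanged: last */
			unlock_pipe(i);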
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h
+index 8e9c1c17aa6627..3ecb1ebffcee87 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h
+@@ -81,4 +81,6 @@ void dcn401_hardware_release(struct dc *dc);
+ void dcn401_update_odm(struct dc *dc, struct dc_state *context,
+ struct pipe_ctx *otg_master);
+ void adjust_hotspot_between_slices_for_2x_magnify(uint32_t cursor_width, struct dc_cursor_position *pos_cpy);
++void dcn401_wait_for_det_buffer_update(struct dc *dc, struct dc_state *context, struct pipe_ctx *otg_master);
++void dcn401_interdependent_update_lock(struct dc *dc, struct dc_state *context, bool lock);
+ #endif /* __DC_HWSS_DCN401_H__ */
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
+index 6a768702c7bdec..28f3eb8f4b2d81 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
+@@ -38,7 +38,7 @@ static const struct hw_sequencer_funcs dcn401_funcs = {
+ .disable_audio_stream = dce110_disable_audio_stream,
+ .disable_plane = dcn20_disable_plane,
+ .pipe_control_lock = dcn20_pipe_control_lock,
+- .interdependent_update_lock = dcn32_interdependent_update_lock,
++ .interdependent_update_lock = dcn401_interdependent_update_lock,
+ .cursor_lock = dcn10_cursor_lock,
+ .prepare_bandwidth = dcn401_prepare_bandwidth,
+ .optimize_bandwidth = dcn401_optimize_bandwidth,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+index dd2b2864876c79..67c32401893e86 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+@@ -227,6 +227,7 @@ struct hubbub_funcs {
+ void (*get_mall_en)(struct hubbub *hubbub, unsigned int *mall_in_use);
+ void (*program_det_segments)(struct hubbub *hubbub, int hubp_inst, unsigned det_buffer_size_seg);
+ void (*program_compbuf_segments)(struct hubbub *hubbub, unsigned compbuf_size_seg, bool safe_to_increase);
++ void (*wait_for_det_update)(struct hubbub *hubbub, int hubp_inst);
+ };
+
+ struct hubbub {
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index 96d40d33a1f99e..9cd80d3864c7b4 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -639,4 +639,9 @@ struct dscl_prog_data *resource_get_dscl_prog_data(struct pipe_ctx *pipe_ctx);
+ * @dml2_options: struct to hold callbacks
+ */
+ void resource_init_common_dml2_callbacks(struct dc *dc, struct dml2_configuration_options *dml2_options);
++
++/*
++ *Calculate total DET allocated for all pipes for a given OTG_MASTER pipe
++ */
++int resource_calculate_det_for_stream(struct dc_state *state, struct pipe_ctx *otg_master);
+ #endif /* DRIVERS_GPU_DRM_AMD_DC_DEV_DC_INC_RESOURCE_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+index 555c1c484cfddc..df3781081da7ae 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
++++ b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+@@ -804,8 +804,11 @@ bool dp_set_test_pattern(
+ break;
+ }
+
++ if (!pipe_ctx->stream)
++ return false;
++
+ if (pipe_ctx->stream_res.tg->funcs->lock_doublebuffer_enable) {
+- if (pipe_ctx->stream && should_use_dmub_lock(pipe_ctx->stream->link)) {
++ if (should_use_dmub_lock(pipe_ctx->stream->link)) {
+ union dmub_hw_lock_flags hw_locks = { 0 };
+ struct dmub_hw_lock_inst_flags inst_flags = { 0 };
+
+diff --git a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.c b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.c
+index b76737b7b9e41b..3e47a6735912a5 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.c
++++ b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.c
+@@ -74,7 +74,10 @@ void reset_dio_stream_encoder(struct pipe_ctx *pipe_ctx)
+ struct link_encoder *link_enc = link_enc_cfg_get_link_enc(pipe_ctx->stream->link);
+ struct stream_encoder *stream_enc = pipe_ctx->stream_res.stream_enc;
+
+- if (stream_enc && stream_enc->funcs->disable_fifo)
++ if (!stream_enc)
++ return;
++
++ if (stream_enc->funcs->disable_fifo)
+ stream_enc->funcs->disable_fifo(stream_enc);
+ if (stream_enc->funcs->set_input_mode)
+ stream_enc->funcs->set_input_mode(stream_enc, 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+index 8246006857b30b..49d069dae29bf5 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+@@ -385,7 +385,7 @@ static void link_destruct(struct dc_link *link)
+ if (link->panel_cntl)
+ link->panel_cntl->funcs->destroy(&link->panel_cntl);
+
+- if (link->link_enc) {
++ if (link->link_enc && !link->is_dig_mapping_flexible) {
+ /* Update link encoder resource tracking variables. These are used for
+ * the dynamic assignment of link encoders to streams. Virtual links
+ * are not assigned encoder resources on creation.
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index 46bb7a855bc218..60015e94c4aa8c 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -2254,7 +2254,7 @@ bool dp_verify_link_cap_with_retries(
+
+ memset(&link->verified_link_cap, 0,
+ sizeof(struct dc_link_settings));
+- if (!link_detect_connection_type(link, &type) || type == dc_connection_none) {
++ if (link->link_enc && (!link_detect_connection_type(link, &type) || type == dc_connection_none)) {
+ link->verified_link_cap = fail_safe_link_settings;
+ break;
+ } else if (dp_verify_link_cap(link, known_limit_link_setting, &fail_count)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dce112/dce112_resource.c
+index 88afb2a30eef50..162856c523e40c 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dce112/dce112_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dce112/dce112_resource.c
+@@ -1067,7 +1067,10 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
+ struct dm_pp_clock_levels clks = {0};
+ int memory_type_multiplier = MEMORY_TYPE_MULTIPLIER_CZ;
+
+- if (dc->bw_vbios && dc->bw_vbios->memory_type == bw_def_hbm)
++ if (!dc->bw_vbios)
++ return;
++
++ if (dc->bw_vbios->memory_type == bw_def_hbm)
+ memory_type_multiplier = MEMORY_TYPE_HBM;
+
+ /*do system clock TODO PPLIB: after PPLIB implement,
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+index 5e7cfa8e8ec93d..eea2b3b307cd5f 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+@@ -2040,6 +2040,7 @@ bool dcn20_fast_validate_bw(
+ {
+ bool out = false;
+ int split[MAX_PIPES] = { 0 };
++ bool merge[MAX_PIPES] = { false };
+ int pipe_cnt, i, pipe_idx, vlevel;
+
+ ASSERT(pipes);
+@@ -2064,7 +2065,7 @@ bool dcn20_fast_validate_bw(
+ if (vlevel > context->bw_ctx.dml.soc.num_states)
+ goto validate_fail;
+
+- vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, NULL);
++ vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, merge);
+
+ /*initialize pipe_just_split_from to invalid idx*/
+ for (i = 0; i < MAX_PIPES; i++)
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
+index 131d98025bd475..fc54483b91047a 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
+@@ -1007,8 +1007,10 @@ static struct pipe_ctx *dcn201_acquire_free_pipe_for_layer(
+ struct pipe_ctx *head_pipe = resource_get_otg_master_for_stream(res_ctx, opp_head_pipe->stream);
+ struct pipe_ctx *idle_pipe = resource_find_free_secondary_pipe_legacy(res_ctx, pool, head_pipe);
+
+- if (!head_pipe)
++ if (!head_pipe) {
+ ASSERT(0);
++ return NULL;
++ }
+
+ if (!idle_pipe)
+ return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+index 8663cbc3d1cf5e..347e6aaea582fb 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+@@ -774,6 +774,7 @@ bool dcn21_fast_validate_bw(struct dc *dc,
+ {
+ bool out = false;
+ int split[MAX_PIPES] = { 0 };
++ bool merge[MAX_PIPES] = { false };
+ int pipe_cnt, i, pipe_idx, vlevel;
+
+ ASSERT(pipes);
+@@ -816,7 +817,7 @@ bool dcn21_fast_validate_bw(struct dc *dc,
+ goto validate_fail;
+ }
+
+- vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, NULL);
++ vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, merge);
+
+ for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+ struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+index 969658313fd65a..8bacff23c3563e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+@@ -1651,6 +1651,9 @@ static void dcn32_enable_phantom_plane(struct dc *dc,
+ else
+ phantom_plane = dc_state_create_phantom_plane(dc, context, curr_pipe->plane_state);
+
++ if (!phantom_plane)
++ continue;
++
+ memcpy(&phantom_plane->address, &curr_pipe->plane_state->address, sizeof(phantom_plane->address));
+ memcpy(&phantom_plane->scaling_quality, &curr_pipe->plane_state->scaling_quality,
+ sizeof(phantom_plane->scaling_quality));
+@@ -1717,6 +1720,9 @@ void dcn32_add_phantom_pipes(struct dc *dc, struct dc_state *context,
+ // be a valid candidate for SubVP (i.e. has a plane, stream, doesn't
+ // already have phantom pipe assigned, etc.) by previous checks.
+ phantom_stream = dcn32_enable_phantom_stream(dc, context, pipes, pipe_cnt, index);
++ if (!phantom_stream)
++ return;
++
+ dcn32_enable_phantom_plane(dc, context, phantom_stream, index);
+
+ for (i = 0; i < dc->res_pool->pipe_count; i++) {
+@@ -2671,8 +2677,10 @@ static struct pipe_ctx *dcn32_acquire_idle_pipe_for_head_pipe_in_layer(
+ struct resource_context *old_ctx = &stream->ctx->dc->current_state->res_ctx;
+ int head_index;
+
+- if (!head_pipe)
++ if (!head_pipe) {
+ ASSERT(0);
++ return NULL;
++ }
+
+ /*
+ * Modified from dcn20_acquire_idle_pipe_for_layer
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c
+index d184105ce2b3e6..f5a4e97c40ced2 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c
+@@ -218,12 +218,12 @@ bool dcn32_is_center_timing(struct pipe_ctx *pipe)
+ pipe->stream->timing.v_addressable != pipe->stream->src.height) {
+ is_center_timing = true;
+ }
+- }
+
+- if (pipe->plane_state) {
+- if (pipe->stream->timing.v_addressable != pipe->plane_state->dst_rect.height &&
+- pipe->stream->timing.v_addressable != pipe->plane_state->src_rect.height) {
+- is_center_timing = true;
++ if (pipe->plane_state) {
++ if (pipe->stream->timing.v_addressable != pipe->plane_state->dst_rect.height &&
++ pipe->stream->timing.v_addressable != pipe->plane_state->src_rect.height) {
++ is_center_timing = true;
++ }
+ }
+ }
+
+@@ -663,7 +663,7 @@ bool dcn32_subvp_drr_admissable(struct dc *dc, struct dc_state *context)
+
+ subvp_disallow |= disallow_subvp_in_active_plus_blank(pipe);
+ refresh_rate = (pipe->stream->timing.pix_clk_100hz * (uint64_t)100 +
+- pipe->stream->timing.v_total * pipe->stream->timing.h_total - (uint64_t)1);
++ pipe->stream->timing.v_total * (unsigned long long)pipe->stream->timing.h_total - (uint64_t)1);
+ refresh_rate = div_u64(refresh_rate, pipe->stream->timing.v_total);
+ refresh_rate = div_u64(refresh_rate, pipe->stream->timing.h_total);
+ }
+@@ -724,7 +724,7 @@ bool dcn32_subvp_vblank_admissable(struct dc *dc, struct dc_state *context, int
+
+ subvp_disallow |= disallow_subvp_in_active_plus_blank(pipe);
+ refresh_rate = (pipe->stream->timing.pix_clk_100hz * (uint64_t)100 +
+- pipe->stream->timing.v_total * pipe->stream->timing.h_total - (uint64_t)1);
++ pipe->stream->timing.v_total * (unsigned long long)pipe->stream->timing.h_total - (uint64_t)1);
+ refresh_rate = div_u64(refresh_rate, pipe->stream->timing.v_total);
+ refresh_rate = div_u64(refresh_rate, pipe->stream->timing.h_total);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index da9101b83e8c1e..70abd32ce2ad18 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -766,6 +766,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .disable_dmub_reallow_idle = false,
+ .static_screen_wait_frames = 2,
+ .notify_dpia_hr_bw = true,
++ .min_disp_clk_khz = 50000,
+ };
+
+ static const struct dc_panel_config panel_config_defaults = {
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+index 34b02147881ddb..ec676d269d33f5 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+@@ -1188,7 +1188,7 @@ static struct stream_encoder *dcn401_stream_encoder_create(
+ vpg = dcn401_vpg_create(ctx, vpg_inst);
+ afmt = dcn401_afmt_create(ctx, afmt_inst);
+
+- if (!enc1 || !vpg || !afmt) {
++ if (!enc1 || !vpg || !afmt || eng_id >= ARRAY_SIZE(stream_enc_regs)) {
+ kfree(enc1);
+ kfree(vpg);
+ kfree(afmt);
+@@ -2099,6 +2099,7 @@ static bool dcn401_resource_construct(
+ dc->dml2_options.use_native_soc_bb_construction = true;
+ dc->dml2_options.minimize_dispclk_using_odm = true;
+ dc->dml2_options.map_dc_pipes_with_callbacks = true;
++ dc->dml2_options.force_tdlut_enable = true;
+
+ resource_init_common_dml2_callbacks(dc, &dc->dml2_options);
+ dc->dml2_options.callbacks.can_support_mclk_switch_using_fw_based_vblank_stretch = &dcn30_can_support_mclk_switch_using_fw_based_vblank_stretch;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c
+index ca1c7ae8d146d5..f06b29e33ba452 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c
+@@ -1183,6 +1183,8 @@ static int init_overdrive_limits(struct pp_hwmgr *hwmgr,
+ fw_info = smu_atom_get_data_table(hwmgr->adev,
+ GetIndexIntoMasterTable(DATA, FirmwareInfo),
+ &size, &frev, &crev);
++ PP_ASSERT_WITH_CODE(fw_info != NULL,
++ "Missing firmware info!", return -EINVAL);
+
+ if ((fw_info->ucTableFormatRevision == 1)
+ && (le16_to_cpu(fw_info->usStructureSize) >= sizeof(ATOM_FIRMWARE_INFO_V1_4)))
+diff --git a/drivers/gpu/drm/display/drm_hdmi_state_helper.c b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+index 7854820089ec6e..feb7a3a759811a 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_state_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+@@ -521,8 +521,6 @@ int drm_atomic_helper_connector_hdmi_check(struct drm_connector *connector,
+ }
+ EXPORT_SYMBOL(drm_atomic_helper_connector_hdmi_check);
+
+-#define HDMI_MAX_INFOFRAME_SIZE 29
+-
+ static int clear_device_infoframe(struct drm_connector *connector,
+ enum hdmi_infoframe_type type)
+ {
+@@ -563,7 +561,7 @@ static int write_device_infoframe(struct drm_connector *connector,
+ {
+ const struct drm_connector_hdmi_funcs *funcs = connector->hdmi.funcs;
+ struct drm_device *dev = connector->dev;
+- u8 buffer[HDMI_MAX_INFOFRAME_SIZE];
++ u8 buffer[HDMI_INFOFRAME_SIZE(MAX)];
+ int ret;
+ int len;
+
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 7936c202395518..370dc676e3aa54 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -543,7 +543,7 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
+ &state->fb_damage_clips,
+ val,
+ -1,
+- sizeof(struct drm_rect),
++ sizeof(struct drm_mode_rect),
+ &replaced);
+ return ret;
+ } else if (property == plane->scaling_filter_property) {
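+
+ The element size now checked is that of the uapi rectangle, which is the
+ type userspace actually serializes into the FB_DAMAGE_CLIPS blob. A hedged
+ userspace-side sketch, assuming the usual libdrm include paths:
+
+ #include <stdint.h>
+ #include <xf86drmMode.h>
+
+ /* The blob must be an array of struct drm_mode_rect (uapi), not the
+  * kernel's internal struct drm_rect.
+  */
+ static int set_damage_blob_sketch(int fd, uint32_t *blob_id)
+ {
+         struct drm_mode_rect clips[1] = {
+                 { .x1 = 0, .y1 = 0, .x2 = 64, .y2 = 64 },
+         };
+
+         return drmModeCreatePropertyBlob(fd, clips, sizeof(clips), blob_id);
+ }
+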
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 6b239a24f1dff3..9d3e6dd68810e3 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -520,8 +520,6 @@ static const struct file_operations drm_connector_fops = {
+ .write = connector_write
+ };
+
+-#define HDMI_MAX_INFOFRAME_SIZE 29
+-
+ static ssize_t
+ audio_infoframe_read(struct file *filp, char __user *ubuf, size_t count, loff_t *ppos)
+ {
+@@ -579,7 +577,7 @@ static ssize_t _f##_read_infoframe(struct file *filp, \
+ struct drm_connector *connector; \
+ union hdmi_infoframe *frame; \
+ struct drm_device *dev; \
+- u8 buf[HDMI_MAX_INFOFRAME_SIZE]; \
++ u8 buf[HDMI_INFOFRAME_SIZE(MAX)]; \
+ ssize_t len = 0; \
+ \
+ connector = filp->private_data; \
+diff --git a/drivers/gpu/drm/drm_print.c b/drivers/gpu/drm/drm_print.c
+index cf24dfdeb6b274..0081190201a7f9 100644
+--- a/drivers/gpu/drm/drm_print.c
++++ b/drivers/gpu/drm/drm_print.c
+@@ -100,8 +100,9 @@ void __drm_puts_coredump(struct drm_printer *p, const char *str)
+ copy = iterator->remain;
+
+ /* Copy out the bit of the string that we need */
+- memcpy(iterator->data,
+- str + (iterator->start - iterator->offset), copy);
++ if (iterator->data)
++ memcpy(iterator->data,
++ str + (iterator->start - iterator->offset), copy);
+
+ iterator->offset = iterator->start + copy;
+ iterator->remain -= copy;
+@@ -110,7 +111,8 @@ void __drm_puts_coredump(struct drm_printer *p, const char *str)
+
+ len = min_t(ssize_t, strlen(str), iterator->remain);
+
+- memcpy(iterator->data + pos, str, len);
++ if (iterator->data)
++ memcpy(iterator->data + pos, str, len);
+
+ iterator->offset += len;
+ iterator->remain -= len;
+@@ -140,8 +142,9 @@ void __drm_printfn_coredump(struct drm_printer *p, struct va_format *vaf)
+ if ((iterator->offset >= iterator->start) && (len < iterator->remain)) {
+ ssize_t pos = iterator->offset - iterator->start;
+
+- snprintf(((char *) iterator->data) + pos,
+- iterator->remain, "%pV", vaf);
++ if (iterator->data)
++ snprintf(((char *) iterator->data) + pos,
++ iterator->remain, "%pV", vaf);
+
+ iterator->offset += len;
+ iterator->remain -= len;
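+
+ Tolerating a NULL iterator->data is what makes the common two-pass use of
+ the coredump printer work: run once with no buffer to measure, then again
+ to fill. A minimal sketch of that pattern:
+
+ #include <drm/drm_print.h>
+ #include <linux/limits.h>
+ #include <linux/vmalloc.h>
+
+ static void *build_coredump_sketch(ssize_t *len)
+ {
+         struct drm_print_iterator iter = {
+                 .data   = NULL,         /* sizing pass: nothing is copied */
+                 .start  = 0,
+                 .remain = SSIZE_MAX,
+         };
+         struct drm_printer p = drm_coredump_printer(&iter);
+         void *buf;
+
+         drm_printf(&p, "example-state: %d\n", 42);
+         *len = iter.offset;             /* bytes the real dump will need */
+
+         buf = vmalloc(*len);
+         if (!buf)
+                 return NULL;
+
+         iter = (struct drm_print_iterator){
+                 .data = buf, .start = 0, .remain = *len,
+         };
+         p = drm_coredump_printer(&iter);
+         drm_printf(&p, "example-state: %d\n", 42);
+
+         return buf;
+ }
+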
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index a07aca96e55177..5b6aabce4c32f5 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -916,7 +916,7 @@ intel_ddi_main_link_aux_domain(struct intel_digital_port *dig_port,
+ * instead of a specific AUX_IO_<port> reference without powering up any
+ * extra wells.
+ */
+- if (intel_encoder_can_psr(&dig_port->base))
++ if (intel_psr_needs_aux_io_power(&dig_port->base, crtc_state))
+ return intel_display_power_aux_io_domain(i915, dig_port->aux_ch);
+ else if (DISPLAY_VER(i915) < 14 &&
+ (intel_crtc_has_dp_encoder(crtc_state) ||
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index ebe7fe5417ae40..ffc0d1b1404554 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -535,6 +535,10 @@ static void
+ intel_dp_set_source_rates(struct intel_dp *intel_dp)
+ {
+ /* The values must be in increasing order */
++ static const int bmg_rates[] = {
++ 162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000,
++ 810000, 1000000, 1350000,
++ };
+ static const int mtl_rates[] = {
+ 162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000,
+ 810000, 1000000, 2000000,
+@@ -565,8 +569,13 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
+ intel_dp->source_rates || intel_dp->num_source_rates);
+
+ if (DISPLAY_VER(dev_priv) >= 14) {
+- source_rates = mtl_rates;
+- size = ARRAY_SIZE(mtl_rates);
++ if (IS_BATTLEMAGE(dev_priv)) {
++ source_rates = bmg_rates;
++ size = ARRAY_SIZE(bmg_rates);
++ } else {
++ source_rates = mtl_rates;
++ size = ARRAY_SIZE(mtl_rates);
++ }
+ max_rate = mtl_max_source_rate(intel_dp);
+ } else if (DISPLAY_VER(dev_priv) >= 11) {
+ source_rates = icl_rates;
+@@ -3955,6 +3964,9 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector
+ drm_dp_is_branch(intel_dp->dpcd));
+ intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident);
+
++ intel_dp->colorimetry_support =
++ intel_dp_get_colorimetry_status(intel_dp);
++
+ /*
+ * Read the eDP display control registers.
+ *
+@@ -4068,6 +4080,9 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
+
+ intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident);
+
++ intel_dp->colorimetry_support =
++ intel_dp_get_colorimetry_status(intel_dp);
++
+ intel_dp_update_sink_caps(intel_dp);
+ }
+
+@@ -6852,9 +6867,6 @@ intel_dp_init_connector(struct intel_digital_port *dig_port,
+ "HDCP init failed, skipping.\n");
+ }
+
+- intel_dp->colorimetry_support =
+- intel_dp_get_colorimetry_status(intel_dp);
+-
+ intel_dp->frl.is_trained = false;
+ intel_dp->frl.trained_rate_gbps = 0;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 9cb1cdaaeefa7d..da242ba19ed95b 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -203,6 +203,25 @@ bool intel_encoder_can_psr(struct intel_encoder *encoder)
+ return false;
+ }
+
++bool intel_psr_needs_aux_io_power(struct intel_encoder *encoder,
++ const struct intel_crtc_state *crtc_state)
++{
++ /*
++ * For PSR/PR modes only eDP requires the AUX IO power to be enabled whenever
++ * the output is enabled. For non-eDP outputs the main link is always
++ * on, hence it doesn't require the HW initiated AUX wake-up signaling used
++ * for eDP.
++ *
++ * TODO:
++ * - Consider leaving AUX IO disabled for eDP / PR as well, in case
++ * the ALPM with main-link off mode is not enabled.
++ * - Leave AUX IO enabled for DP / PR, once support for ALPM with
++ * main-link off mode is added for it and this mode gets enabled.
++ */
++ return intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP) &&
++ intel_encoder_can_psr(encoder);
++}
++
+ static bool psr_global_enabled(struct intel_dp *intel_dp)
+ {
+ struct intel_connector *connector = intel_dp->attached_connector;
+@@ -2746,13 +2765,6 @@ static int _psr1_ready_for_pipe_update_locked(struct intel_dp *intel_dp)
+ EDP_PSR_STATUS_STATE_MASK, 50);
+ }
+
+-static int _panel_replay_ready_for_pipe_update_locked(struct intel_dp *intel_dp)
+-{
+- return intel_dp_is_edp(intel_dp) ?
+- _psr2_ready_for_pipe_update_locked(intel_dp) :
+- _psr1_ready_for_pipe_update_locked(intel_dp);
+-}
+-
+ /**
+ * intel_psr_wait_for_idle_locked - wait for PSR be ready for a pipe update
+ * @new_crtc_state: new CRTC state
+@@ -2775,12 +2787,10 @@ void intel_psr_wait_for_idle_locked(const struct intel_crtc_state *new_crtc_stat
+
+ lockdep_assert_held(&intel_dp->psr.lock);
+
+- if (!intel_dp->psr.enabled)
++ if (!intel_dp->psr.enabled || intel_dp->psr.panel_replay_enabled)
+ continue;
+
+- if (intel_dp->psr.panel_replay_enabled)
+- ret = _panel_replay_ready_for_pipe_update_locked(intel_dp);
+- else if (intel_dp->psr.sel_update_enabled)
++ if (intel_dp->psr.sel_update_enabled)
+ ret = _psr2_ready_for_pipe_update_locked(intel_dp);
+ else
+ ret = _psr1_ready_for_pipe_update_locked(intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.h b/drivers/gpu/drm/i915/display/intel_psr.h
+index d483c85870e1db..e719f548e1606b 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.h
++++ b/drivers/gpu/drm/i915/display/intel_psr.h
+@@ -25,6 +25,8 @@ struct intel_plane_state;
+ (intel_dp)->psr.source_panel_replay_support)
+
+ bool intel_encoder_can_psr(struct intel_encoder *encoder);
++bool intel_psr_needs_aux_io_power(struct intel_encoder *encoder,
++ const struct intel_crtc_state *crtc_state);
+ void intel_psr_init_dpcd(struct intel_dp *intel_dp);
+ void intel_psr_enable_sink(struct intel_dp *intel_dp,
+ const struct intel_crtc_state *crtc_state);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+index 5c72462d1f57e3..b22e2019768f04 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+@@ -1131,7 +1131,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
+ GEM_WARN_ON(!i915_ttm_cpu_maps_iomem(bo->resource));
+ }
+
+- if (wakeref & CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND)
++ if (wakeref && CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND != 0)
+ intel_wakeref_auto(&to_i915(obj->base.dev)->runtime_pm.userfault_wakeref,
+ msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+index 1a2a73757370bc..8c2ba908687347 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+@@ -523,8 +523,10 @@ static int ovl_adaptor_comp_init(struct device *dev, struct component_match **ma
+ }
+
+ comp_pdev = of_find_device_by_node(node);
+- if (!comp_pdev)
++ if (!comp_pdev) {
++ of_node_put(node);
+ return -EPROBE_DEFER;
++ }
+
+ priv->ovl_adaptor_comp[id] = &comp_pdev->dev;
+
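+
+ The deferral return above is taken while the loop still holds a reference
+ on the child node, which is why the of_node_put() is needed. A generic
+ sketch of the refcounting rule (names are illustrative, not the driver's):
+
+ #include <linux/device.h>
+ #include <linux/of.h>
+ #include <linux/of_platform.h>
+
+ static int resolve_children_sketch(struct device_node *parent)
+ {
+         struct device_node *node;
+         struct platform_device *pdev;
+
+         for_each_child_of_node(parent, node) {
+                 pdev = of_find_device_by_node(node);
+                 if (!pdev) {
+                         of_node_put(node);      /* drop the iterator's ref */
+                         return -EPROBE_DEFER;
+                 }
+                 /* ... record pdev ... */
+                 put_device(&pdev->dev);         /* of_find_device_by_node got a ref */
+         }
+
+         return 0;
+ }
+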
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 3896123ec51c96..83caca2c4026a1 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -1083,6 +1083,7 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ adreno_gpu->chip_id = config->chip_id;
+
+ gpu->allow_relocs = config->info->family < ADRENO_6XX_GEN1;
++ gpu->pdev = pdev;
+
+ /* Only handle the core clock when GMU is not in use (or is absent). */
+ if (adreno_has_gmu_wrapper(adreno_gpu) ||
+diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
+index 3666b42b4ecd7f..a274b846642374 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.c
++++ b/drivers/gpu/drm/msm/msm_gpu.c
+@@ -931,7 +931,6 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ if (IS_ERR(gpu->gpu_cx))
+ gpu->gpu_cx = NULL;
+
+- gpu->pdev = pdev;
+ platform_set_drvdata(pdev, &gpu->adreno_smmu);
+
+ msm_devfreq_init(gpu);
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index 6598c9c08ba11e..d3eac4817d7687 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -695,6 +695,10 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ soc = soc_device_match(omapdrm_soc_devices);
+ priv->omaprev = soc ? (uintptr_t)soc->data : 0;
+ priv->wq = alloc_ordered_workqueue("omapdrm", 0);
++ if (!priv->wq) {
++ ret = -ENOMEM;
++ goto err_alloc_workqueue;
++ }
+
+ mutex_init(&priv->list_lock);
+ INIT_LIST_HEAD(&priv->obj_list);
+@@ -753,6 +757,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ drm_mode_config_cleanup(ddev);
+ omap_gem_deinit(ddev);
+ destroy_workqueue(priv->wq);
++err_alloc_workqueue:
+ omap_disconnect_pipelines(ddev);
+ drm_dev_put(ddev);
+ return ret;
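+
+ The new label keeps the unwind in strict reverse order of setup, so the
+ allocation-failure path never calls destroy_workqueue() on a workqueue it
+ never created. A condensed sketch of the pattern (struct prv and the two
+ helpers are placeholders):
+
+ #include <linux/workqueue.h>
+
+ struct prv { struct workqueue_struct *wq; };
+
+ static int setup_pipelines(struct prv *priv) { return 0; }   /* placeholder */
+ static void teardown_common(struct prv *priv) { }            /* placeholder */
+
+ static int init_sketch(struct prv *priv)
+ {
+         int ret;
+
+         priv->wq = alloc_ordered_workqueue("example", 0);
+         if (!priv->wq) {
+                 ret = -ENOMEM;
+                 goto err_alloc_workqueue;       /* skips destroy_workqueue() */
+         }
+
+         ret = setup_pipelines(priv);
+         if (ret)
+                 goto err_pipelines;
+
+         return 0;
+
+ err_pipelines:
+         destroy_workqueue(priv->wq);
+ err_alloc_workqueue:
+         teardown_common(priv);
+         return ret;
+ }
+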
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index cc6e13a9778358..ce8e8a93d70767 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -1251,9 +1251,17 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
+ goto err_cleanup;
+ }
+
++ /* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our
++ * pre-allocated BO if the <BO,VM> association exists. Given we
++ * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will
++ * be called immediately, and we have to hold the VM resv lock when
++ * calling this function.
++ */
++ dma_resv_lock(panthor_vm_resv(vm), NULL);
+ mutex_lock(&bo->gpuva_list_lock);
+ op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
+ mutex_unlock(&bo->gpuva_list_lock);
++ dma_resv_unlock(panthor_vm_resv(vm));
+
+ /* If a vm_bo for this <VM,BO> combination exists, it already
+ * retains a pin ref, and we can release the one we took earlier.
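+
+ The comment in the hunk above states the rule being enforced: a call that
+ may drop the last reference must run under the lock its destructor expects
+ to hold. A generic sketch of that shape, assuming a callback that may
+ internally release the final reference:
+
+ #include <linux/dma-resv.h>
+ #include <linux/kref.h>
+
+ static void obtain_under_resv_sketch(struct dma_resv *resv,
+                                      struct kref *prealloc_ref,
+                                      void (*obtain)(struct kref *))
+ {
+         dma_resv_lock(resv, NULL);
+         obtain(prealloc_ref);   /* may drop the last ref -> destructor runs */
+         dma_resv_unlock(resv);
+ }
+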
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 12b272a912f861..4d1d5a342a4a6e 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -1103,7 +1103,13 @@ cs_slot_sync_queue_state_locked(struct panthor_device *ptdev, u32 csg_id, u32 cs
+ list_move_tail(&group->wait_node,
+ &group->ptdev->scheduler->groups.waiting);
+ }
+- group->blocked_queues |= BIT(cs_id);
++
++ /* The queue is only blocked if there's no deferred operation
++ * pending, which can be checked through the scoreboard status.
++ */
++ if (!cs_iface->output->status_scoreboards)
++ group->blocked_queues |= BIT(cs_id);
++
+ queue->syncwait.gpu_va = cs_iface->output->status_wait_sync_ptr;
+ queue->syncwait.ref = cs_iface->output->status_wait_sync_value;
+ status_wait_cond = cs_iface->output->status_wait & CS_STATUS_WAIT_SYNC_COND_MASK;
+@@ -2046,6 +2052,7 @@ static void
+ tick_ctx_cleanup(struct panthor_scheduler *sched,
+ struct panthor_sched_tick_ctx *ctx)
+ {
++ struct panthor_device *ptdev = sched->ptdev;
+ struct panthor_group *group, *tmp;
+ u32 i;
+
+@@ -2054,7 +2061,7 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
+ /* If everything went fine, we should only have groups
+ * to be terminated in the old_groups lists.
+ */
+- drm_WARN_ON(&group->ptdev->base, !ctx->csg_upd_failed_mask &&
++ drm_WARN_ON(&ptdev->base, !ctx->csg_upd_failed_mask &&
+ group_can_run(group));
+
+ if (!group_can_run(group)) {
+@@ -2077,7 +2084,7 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
+ /* If everything went fine, the groups to schedule lists should
+ * be empty.
+ */
+- drm_WARN_ON(&group->ptdev->base,
++ drm_WARN_ON(&ptdev->base,
+ !ctx->csg_upd_failed_mask && !list_empty(&ctx->groups[i]));
+
+ list_for_each_entry_safe(group, tmp, &ctx->groups[i], run_node) {
+@@ -3242,6 +3249,18 @@ int panthor_group_destroy(struct panthor_file *pfile, u32 group_handle)
+ return 0;
+ }
+
++static struct panthor_group *group_from_handle(struct panthor_group_pool *pool,
++ u32 group_handle)
++{
++ struct panthor_group *group;
++
++ xa_lock(&pool->xa);
++ group = group_get(xa_load(&pool->xa, group_handle));
++ xa_unlock(&pool->xa);
++
++ return group;
++}
++
+ int panthor_group_get_state(struct panthor_file *pfile,
+ struct drm_panthor_group_get_state *get_state)
+ {
+@@ -3253,7 +3272,7 @@ int panthor_group_get_state(struct panthor_file *pfile,
+ if (get_state->pad)
+ return -EINVAL;
+
+- group = group_get(xa_load(&gpool->xa, get_state->group_handle));
++ group = group_from_handle(gpool, get_state->group_handle);
+ if (!group)
+ return -EINVAL;
+
+@@ -3384,7 +3403,7 @@ panthor_job_create(struct panthor_file *pfile,
+ job->call_info.latest_flush = qsubmit->latest_flush;
+ INIT_LIST_HEAD(&job->node);
+
+- job->group = group_get(xa_load(&gpool->xa, group_handle));
++ job->group = group_from_handle(gpool, group_handle);
+ if (!job->group) {
+ ret = -EINVAL;
+ goto err_put_job;
+@@ -3424,13 +3443,8 @@ void panthor_job_update_resvs(struct drm_exec *exec, struct drm_sched_job *sched
+ {
+ struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
+
+- /* Still not sure why we want USAGE_WRITE for external objects, since I
+- * was assuming this would be handled through explicit syncs being imported
+- * to external BOs with DMA_BUF_IOCTL_IMPORT_SYNC_FILE, but other drivers
+- * seem to pass DMA_RESV_USAGE_WRITE, so there must be a good reason.
+- */
+ panthor_vm_update_resvs(job->group->vm, exec, &sched_job->s_fence->finished,
+- DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE);
++ DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_BOOKKEEP);
+ }
+
+ void panthor_sched_unplug(struct panthor_device *ptdev)
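+
+ group_from_handle() above exists to make "load from the xarray and take a
+ reference" one atomic step with respect to removal. The idiom in isolation
+ (struct obj is a stand-in):
+
+ #include <linux/kref.h>
+ #include <linux/xarray.h>
+
+ struct obj { struct kref ref; };
+
+ /* Without the lock, a concurrent xa_erase() plus final kref_put() could
+  * free the entry between the lookup and the get.
+  */
+ static struct obj *lookup_get_sketch(struct xarray *xa, unsigned long handle)
+ {
+         struct obj *o;
+
+         xa_lock(xa);
+         o = xa_load(xa, handle);
+         if (o && !kref_get_unless_zero(&o->ref))
+                 o = NULL;
+         xa_unlock(xa);
+
+         return o;
+ }
+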
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index 0b1e19345f43a7..bfd42e3e161e98 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -1016,45 +1016,65 @@ static int r100_cp_init_microcode(struct radeon_device *rdev)
+
+ DRM_DEBUG_KMS("\n");
+
+- if ((rdev->family == CHIP_R100) || (rdev->family == CHIP_RV100) ||
+- (rdev->family == CHIP_RV200) || (rdev->family == CHIP_RS100) ||
+- (rdev->family == CHIP_RS200)) {
++ switch (rdev->family) {
++ case CHIP_R100:
++ case CHIP_RV100:
++ case CHIP_RV200:
++ case CHIP_RS100:
++ case CHIP_RS200:
+ DRM_INFO("Loading R100 Microcode\n");
+ fw_name = FIRMWARE_R100;
+- } else if ((rdev->family == CHIP_R200) ||
+- (rdev->family == CHIP_RV250) ||
+- (rdev->family == CHIP_RV280) ||
+- (rdev->family == CHIP_RS300)) {
++ break;
++
++ case CHIP_R200:
++ case CHIP_RV250:
++ case CHIP_RV280:
++ case CHIP_RS300:
+ DRM_INFO("Loading R200 Microcode\n");
+ fw_name = FIRMWARE_R200;
+- } else if ((rdev->family == CHIP_R300) ||
+- (rdev->family == CHIP_R350) ||
+- (rdev->family == CHIP_RV350) ||
+- (rdev->family == CHIP_RV380) ||
+- (rdev->family == CHIP_RS400) ||
+- (rdev->family == CHIP_RS480)) {
++ break;
++
++ case CHIP_R300:
++ case CHIP_R350:
++ case CHIP_RV350:
++ case CHIP_RV380:
++ case CHIP_RS400:
++ case CHIP_RS480:
+ DRM_INFO("Loading R300 Microcode\n");
+ fw_name = FIRMWARE_R300;
+- } else if ((rdev->family == CHIP_R420) ||
+- (rdev->family == CHIP_R423) ||
+- (rdev->family == CHIP_RV410)) {
++ break;
++
++ case CHIP_R420:
++ case CHIP_R423:
++ case CHIP_RV410:
+ DRM_INFO("Loading R400 Microcode\n");
+ fw_name = FIRMWARE_R420;
+- } else if ((rdev->family == CHIP_RS690) ||
+- (rdev->family == CHIP_RS740)) {
++ break;
++
++ case CHIP_RS690:
++ case CHIP_RS740:
+ DRM_INFO("Loading RS690/RS740 Microcode\n");
+ fw_name = FIRMWARE_RS690;
+- } else if (rdev->family == CHIP_RS600) {
++ break;
++
++ case CHIP_RS600:
+ DRM_INFO("Loading RS600 Microcode\n");
+ fw_name = FIRMWARE_RS600;
+- } else if ((rdev->family == CHIP_RV515) ||
+- (rdev->family == CHIP_R520) ||
+- (rdev->family == CHIP_RV530) ||
+- (rdev->family == CHIP_R580) ||
+- (rdev->family == CHIP_RV560) ||
+- (rdev->family == CHIP_RV570)) {
++ break;
++
++ case CHIP_RV515:
++ case CHIP_R520:
++ case CHIP_RV530:
++ case CHIP_R580:
++ case CHIP_RV560:
++ case CHIP_RV570:
+ DRM_INFO("Loading R500 Microcode\n");
+ fw_name = FIRMWARE_R520;
++ break;
++
++ default:
++ DRM_ERROR("Unsupported Radeon family %u\n", rdev->family);
++ return -EINVAL;
+ }
+
+ err = request_firmware(&rdev->me_fw, fw_name, rdev->dev);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 4a9c6ea7f15dc3..f161f40d8ce4c8 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1583,6 +1583,10 @@ static void vop_crtc_atomic_flush(struct drm_crtc *crtc,
+ VOP_AFBC_SET(vop, enable, s->enable_afbc);
+ vop_cfg_done(vop);
+
++ /* Ack the DMA transfer of the previous frame (RK3066). */
++ if (VOP_HAS_REG(vop, common, dma_stop))
++ VOP_REG_SET(vop, common, dma_stop, 0);
++
+ spin_unlock(&vop->reg_lock);
+
+ /*
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
+index b33e5bdc26be16..0cf512cc16144b 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
+@@ -122,6 +122,7 @@ struct vop_common {
+ struct vop_reg lut_buffer_index;
+ struct vop_reg gate_en;
+ struct vop_reg mmu_en;
++ struct vop_reg dma_stop;
+ struct vop_reg out_mode;
+ struct vop_reg standby;
+ };
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+index b9ee02061d5bf3..e2c6ba26f4377d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+@@ -466,6 +466,7 @@ static const struct vop_output rk3066_output = {
+ };
+
+ static const struct vop_common rk3066_common = {
++ .dma_stop = VOP_REG(RK3066_SYS_CTRL0, 0x1, 0),
+ .standby = VOP_REG(RK3066_SYS_CTRL0, 0x1, 1),
+ .out_mode = VOP_REG(RK3066_DSP_CTRL0, 0xf, 0),
+ .cfg_done = VOP_REG(RK3066_REG_CFG_DONE, 0x1, 0),
+@@ -514,6 +515,7 @@ static const struct vop_data rk3066_vop = {
+ .output = &rk3066_output,
+ .win = rk3066_vop_win_data,
+ .win_size = ARRAY_SIZE(rk3066_vop_win_data),
++ .feature = VOP_FEATURE_INTERNAL_RGB,
+ .max_output = { 1920, 1080 },
+ };
+
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 58c8161289fea9..a75eede8bf8dab 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -133,8 +133,10 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
+ {
+ WARN_ON(!num_sched_list || !sched_list);
+
++ spin_lock(&entity->rq_lock);
+ entity->sched_list = sched_list;
+ entity->num_sched_list = num_sched_list;
++ spin_unlock(&entity->rq_lock);
+ }
+ EXPORT_SYMBOL(drm_sched_entity_modify_sched);
+
+@@ -380,7 +382,7 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
+ container_of(cb, struct drm_sched_entity, cb);
+
+ drm_sched_entity_clear_dep(f, cb);
+- drm_sched_wakeup(entity->rq->sched, entity);
++ drm_sched_wakeup(entity->rq->sched);
+ }
+
+ /**
+@@ -597,6 +599,9 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
+
+ /* first job wakes up scheduler */
+ if (first) {
++ struct drm_gpu_scheduler *sched;
++ struct drm_sched_rq *rq;
++
+ /* Add the entity to the run queue */
+ spin_lock(&entity->rq_lock);
+ if (entity->stopped) {
+@@ -606,13 +611,16 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
+ return;
+ }
+
+- drm_sched_rq_add_entity(entity->rq, entity);
++ rq = entity->rq;
++ sched = rq->sched;
++
++ drm_sched_rq_add_entity(rq, entity);
+ spin_unlock(&entity->rq_lock);
+
+ if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+ drm_sched_rq_update_fifo(entity, submit_ts);
+
+- drm_sched_wakeup(entity->rq->sched, entity);
++ drm_sched_wakeup(sched);
+ }
+ }
+ EXPORT_SYMBOL(drm_sched_entity_push_job);
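+
+ Both hunks above revolve around entity->rq being re-pointed under rq_lock:
+ drm_sched_entity_modify_sched() now publishes the new list under the lock,
+ and push_job snapshots rq and sched before unlocking so the wakeup cannot
+ chase a stale pointer. The snapshot pattern in isolation (types reduced to
+ stand-ins):
+
+ #include <linux/spinlock.h>
+
+ struct gpu_sched;
+ struct runq { struct gpu_sched *sched; };
+ struct ent {
+         spinlock_t rq_lock;
+         struct runq *rq;        /* may be re-pointed under rq_lock */
+ };
+
+ static struct gpu_sched *pick_sched_sketch(struct ent *e)
+ {
+         struct gpu_sched *s;
+
+         spin_lock(&e->rq_lock);
+         s = e->rq->sched;       /* snapshot while still locked */
+         spin_unlock(&e->rq_lock);
+
+         return s;               /* no second e->rq dereference after unlock */
+ }
+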
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 7e90c9f95611a0..a124d5e77b5e86 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -1022,15 +1022,12 @@ EXPORT_SYMBOL(drm_sched_job_cleanup);
+ /**
+ * drm_sched_wakeup - Wake up the scheduler if it is ready to queue
+ * @sched: scheduler instance
+- * @entity: the scheduler entity
+ *
+ * Wake up the scheduler if we can queue jobs.
+ */
+-void drm_sched_wakeup(struct drm_gpu_scheduler *sched,
+- struct drm_sched_entity *entity)
++void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
+ {
+- if (drm_sched_can_queue(sched, entity))
+- drm_sched_run_job_queue(sched);
++ drm_sched_run_job_queue(sched);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/stm/drv.c b/drivers/gpu/drm/stm/drv.c
+index 4d2db079ad4ff3..e1232f74dfa537 100644
+--- a/drivers/gpu/drm/stm/drv.c
++++ b/drivers/gpu/drm/stm/drv.c
+@@ -25,6 +25,7 @@
+ #include <drm/drm_module.h>
+ #include <drm/drm_probe_helper.h>
+ #include <drm/drm_vblank.h>
++#include <drm/drm_managed.h>
+
+ #include "ltdc.h"
+
+@@ -75,7 +76,7 @@ static int drv_load(struct drm_device *ddev)
+
+ DRM_DEBUG("%s\n", __func__);
+
+- ldev = devm_kzalloc(ddev->dev, sizeof(*ldev), GFP_KERNEL);
++ ldev = drmm_kzalloc(ddev, sizeof(*ldev), GFP_KERNEL);
+ if (!ldev)
+ return -ENOMEM;
+
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 5aec1e58c968c2..0832b749b66e7f 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -36,6 +36,7 @@
+ #include <drm/drm_probe_helper.h>
+ #include <drm/drm_simple_kms_helper.h>
+ #include <drm/drm_vblank.h>
++#include <drm/drm_managed.h>
+
+ #include <video/videomode.h>
+
+@@ -1199,7 +1200,6 @@ static void ltdc_crtc_atomic_print_state(struct drm_printer *p,
+ }
+
+ static const struct drm_crtc_funcs ltdc_crtc_funcs = {
+- .destroy = drm_crtc_cleanup,
+ .set_config = drm_atomic_helper_set_config,
+ .page_flip = drm_atomic_helper_page_flip,
+ .reset = drm_atomic_helper_crtc_reset,
+@@ -1212,7 +1212,6 @@ static const struct drm_crtc_funcs ltdc_crtc_funcs = {
+ };
+
+ static const struct drm_crtc_funcs ltdc_crtc_with_crc_support_funcs = {
+- .destroy = drm_crtc_cleanup,
+ .set_config = drm_atomic_helper_set_config,
+ .page_flip = drm_atomic_helper_page_flip,
+ .reset = drm_atomic_helper_crtc_reset,
+@@ -1514,6 +1513,9 @@ static void ltdc_plane_atomic_disable(struct drm_plane *plane,
+ /* Disable layer */
+ regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_LEN | LXCR_CLUTEN | LXCR_HMEN, 0);
+
++ /* Reset the layer transparency to hide any related background color */
++ regmap_write_bits(ldev->regmap, LTDC_L1CACR + lofs, LXCACR_CONSTA, 0x00);
++
+ /* Commit shadow registers = update plane at next vblank */
+ if (ldev->caps.plane_reg_shadow)
+ regmap_write_bits(ldev->regmap, LTDC_L1RCR + lofs,
+@@ -1545,7 +1547,6 @@ static void ltdc_plane_atomic_print_state(struct drm_printer *p,
+ static const struct drm_plane_funcs ltdc_plane_funcs = {
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
+- .destroy = drm_plane_cleanup,
+ .reset = drm_atomic_helper_plane_reset,
+ .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+@@ -1572,7 +1573,6 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ const u64 *modifiers = ltdc_format_modifiers;
+ u32 lofs = index * LAY_OFS;
+ u32 val;
+- int ret;
+
+ /* Allocate the biggest size according to supported color formats */
+ formats = devm_kzalloc(dev, (ldev->caps.pix_fmt_nb +
+@@ -1615,14 +1615,10 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ }
+ }
+
+- plane = devm_kzalloc(dev, sizeof(*plane), GFP_KERNEL);
+- if (!plane)
+- return NULL;
+-
+- ret = drm_universal_plane_init(ddev, plane, possible_crtcs,
+- &ltdc_plane_funcs, formats, nb_fmt,
+- modifiers, type, NULL);
+- if (ret < 0)
++ plane = drmm_universal_plane_alloc(ddev, struct drm_plane, dev,
++ possible_crtcs, &ltdc_plane_funcs, formats,
++ nb_fmt, modifiers, type, NULL);
++ if (IS_ERR(plane))
+ return NULL;
+
+ if (ldev->caps.ycbcr_input) {
+@@ -1645,15 +1641,6 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ return plane;
+ }
+
+-static void ltdc_plane_destroy_all(struct drm_device *ddev)
+-{
+- struct drm_plane *plane, *plane_temp;
+-
+- list_for_each_entry_safe(plane, plane_temp,
+- &ddev->mode_config.plane_list, head)
+- drm_plane_cleanup(plane);
+-}
+-
+ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ {
+ struct ltdc_device *ldev = ddev->dev_private;
+@@ -1679,14 +1666,14 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+
+ /* Init CRTC according to its hardware features */
+ if (ldev->caps.crc)
+- ret = drm_crtc_init_with_planes(ddev, crtc, primary, NULL,
+- &ltdc_crtc_with_crc_support_funcs, NULL);
++ ret = drmm_crtc_init_with_planes(ddev, crtc, primary, NULL,
++ &ltdc_crtc_with_crc_support_funcs, NULL);
+ else
+- ret = drm_crtc_init_with_planes(ddev, crtc, primary, NULL,
+- &ltdc_crtc_funcs, NULL);
++ ret = drmm_crtc_init_with_planes(ddev, crtc, primary, NULL,
++ &ltdc_crtc_funcs, NULL);
+ if (ret) {
+ DRM_ERROR("Can not initialize CRTC\n");
+- goto cleanup;
++ return ret;
+ }
+
+ drm_crtc_helper_add(crtc, &ltdc_crtc_helper_funcs);
+@@ -1700,9 +1687,8 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ for (i = 1; i < ldev->caps.nb_layers; i++) {
+ overlay = ltdc_plane_create(ddev, DRM_PLANE_TYPE_OVERLAY, i);
+ if (!overlay) {
+- ret = -ENOMEM;
+ DRM_ERROR("Can not create overlay plane %d\n", i);
+- goto cleanup;
++ return -ENOMEM;
+ }
+ if (ldev->caps.dynamic_zorder)
+ drm_plane_create_zpos_property(overlay, i, 0, ldev->caps.nb_layers - 1);
+@@ -1715,10 +1701,6 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ }
+
+ return 0;
+-
+-cleanup:
+- ltdc_plane_destroy_all(ddev);
+- return ret;
+ }
+
+ static void ltdc_encoder_disable(struct drm_encoder *encoder)
+@@ -1778,23 +1760,19 @@ static int ltdc_encoder_init(struct drm_device *ddev, struct drm_bridge *bridge)
+ struct drm_encoder *encoder;
+ int ret;
+
+- encoder = devm_kzalloc(ddev->dev, sizeof(*encoder), GFP_KERNEL);
+- if (!encoder)
+- return -ENOMEM;
++ encoder = drmm_simple_encoder_alloc(ddev, struct drm_encoder, dev,
++ DRM_MODE_ENCODER_DPI);
++ if (IS_ERR(encoder))
++ return PTR_ERR(encoder);
+
+ encoder->possible_crtcs = CRTC_MASK;
+ encoder->possible_clones = 0; /* No cloning support */
+
+- drm_simple_encoder_init(ddev, encoder, DRM_MODE_ENCODER_DPI);
+-
+ drm_encoder_helper_add(encoder, &ltdc_encoder_helper_funcs);
+
+ ret = drm_bridge_attach(encoder, bridge, NULL, 0);
+- if (ret) {
+- if (ret != -EPROBE_DEFER)
+- drm_encoder_cleanup(encoder);
++ if (ret)
+ return ret;
+- }
+
+ DRM_DEBUG_DRIVER("Bridge encoder:%d created\n", encoder->base.id);
+
+@@ -1964,8 +1942,7 @@ int ltdc_load(struct drm_device *ddev)
+ goto err;
+
+ if (panel) {
+- bridge = drm_panel_bridge_add_typed(panel,
+- DRM_MODE_CONNECTOR_DPI);
++ bridge = drmm_panel_bridge_add(ddev, panel);
+ if (IS_ERR(bridge)) {
+ DRM_ERROR("panel-bridge endpoint %d\n", i);
+ ret = PTR_ERR(bridge);
+@@ -2047,7 +2024,7 @@ int ltdc_load(struct drm_device *ddev)
+ }
+ }
+
+- crtc = devm_kzalloc(dev, sizeof(*crtc), GFP_KERNEL);
++ crtc = drmm_kzalloc(ddev, sizeof(*crtc), GFP_KERNEL);
+ if (!crtc) {
+ DRM_ERROR("Failed to allocate crtc\n");
+ ret = -ENOMEM;
+@@ -2074,9 +2051,6 @@ int ltdc_load(struct drm_device *ddev)
+
+ return 0;
+ err:
+- for (i = 0; i < nb_endpoints; i++)
+- drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i);
+-
+ clk_disable_unprepare(ldev->pixel_clk);
+
+ return ret;
+@@ -2084,16 +2058,8 @@ int ltdc_load(struct drm_device *ddev)
+
+ void ltdc_unload(struct drm_device *ddev)
+ {
+- struct device *dev = ddev->dev;
+- int nb_endpoints, i;
+-
+ DRM_DEBUG_DRIVER("\n");
+
+- nb_endpoints = of_graph_get_endpoint_count(dev->of_node);
+-
+- for (i = 0; i < nb_endpoints; i++)
+- drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i);
+-
+ pm_runtime_disable(ddev->dev);
+ }
+
+diff --git a/drivers/gpu/drm/v3d/v3d_submit.c b/drivers/gpu/drm/v3d/v3d_submit.c
+index 4cdfabbf4964f9..d310e95aa66293 100644
+--- a/drivers/gpu/drm/v3d/v3d_submit.c
++++ b/drivers/gpu/drm/v3d/v3d_submit.c
+@@ -671,3 +671,6 @@ v3d_get_cpu_reset_performance_params(struct drm_file *file_priv,
++ if (reset.nperfmons > V3D_MAX_PERFMONS)
++ return -EINVAL;
++
+ job->job_type = V3D_CPU_JOB_TYPE_RESET_PERFORMANCE_QUERY;
+
+ job->performance_query.queries = kvmalloc_array(reset.count,
+@@ -755,3 +758,6 @@ v3d_get_cpu_copy_performance_query_params(struct drm_file *file_priv,
++ if (copy.nperfmons > V3D_MAX_PERFMONS)
++ return -EINVAL;
++
+ job->job_type = V3D_CPU_JOB_TYPE_COPY_PERFORMANCE_QUERY;
+
+ job->performance_query.queries = kvmalloc_array(copy.count,
+diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
+index e97c9da451b36c..1979614a90bddd 100644
+--- a/drivers/gpu/drm/xe/Makefile
++++ b/drivers/gpu/drm/xe/Makefile
+@@ -12,34 +12,15 @@ subdir-ccflags-$(CONFIG_DRM_XE_WERROR) += -Werror
+ subdir-ccflags-y += -I$(obj) -I$(src)
+
+ # generated sources
+-hostprogs := xe_gen_wa_oob
+
++hostprogs := xe_gen_wa_oob
+ generated_oob := $(obj)/generated/xe_wa_oob.c $(obj)/generated/xe_wa_oob.h
+-
+ quiet_cmd_wa_oob = GEN $(notdir $(generated_oob))
+ cmd_wa_oob = mkdir -p $(@D); $^ $(generated_oob)
+-
+ $(obj)/generated/%_wa_oob.c $(obj)/generated/%_wa_oob.h: $(obj)/xe_gen_wa_oob \
+ $(src)/xe_wa_oob.rules
+ $(call cmd,wa_oob)
+
+-uses_generated_oob := \
+- $(obj)/xe_ggtt.o \
+- $(obj)/xe_device.o \
+- $(obj)/xe_gsc.o \
+- $(obj)/xe_gt.o \
+- $(obj)/xe_guc.o \
+- $(obj)/xe_guc_ads.o \
+- $(obj)/xe_guc_pc.o \
+- $(obj)/xe_migrate.o \
+- $(obj)/xe_pat.o \
+- $(obj)/xe_ring_ops.o \
+- $(obj)/xe_vm.o \
+- $(obj)/xe_wa.o \
+- $(obj)/xe_ttm_stolen_mgr.o
+-
+-$(uses_generated_oob): $(generated_oob)
+-
+ # Please keep these build lists sorted!
+
+ # core driver code
+@@ -322,3 +303,6 @@ quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
+
+ $(obj)/%.hdrtest: $(src)/%.h FORCE
+ $(call if_changed_dep,hdrtest)
++
++uses_generated_oob := $(addprefix $(obj)/, $(xe-y))
++$(uses_generated_oob): $(obj)/generated/xe_wa_oob.h
+diff --git a/drivers/gpu/drm/xe/display/intel_fbdev_fb.c b/drivers/gpu/drm/xe/display/intel_fbdev_fb.c
+index 816ad13821a837..cd8948c08661b5 100644
+--- a/drivers/gpu/drm/xe/display/intel_fbdev_fb.c
++++ b/drivers/gpu/drm/xe/display/intel_fbdev_fb.c
+@@ -10,6 +10,9 @@
+ #include "xe_bo.h"
+ #include "xe_gt.h"
+ #include "xe_ttm_stolen_mgr.h"
++#include "xe_wa.h"
++
++#include <generated/xe_wa_oob.h>
+
+ struct intel_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
+ struct drm_fb_helper_surface_size *sizes)
+@@ -37,7 +40,7 @@ struct intel_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
+ size = PAGE_ALIGN(size);
+ obj = ERR_PTR(-ENODEV);
+
+- if (!IS_DGFX(xe)) {
++ if (!IS_DGFX(xe) && !XE_WA(xe_root_mmio_gt(xe), 22019338487_display)) {
+ obj = xe_bo_create_pin_map(xe, xe_device_get_root_tile(xe),
+ NULL, size,
+ ttm_bo_type_kernel, XE_BO_FLAG_SCANOUT |
+@@ -48,6 +51,7 @@ struct intel_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
+ else
+ drm_info(&xe->drm, "Allocated fbdev into stolen failed: %li\n", PTR_ERR(obj));
+ }
++
+ if (IS_ERR(obj)) {
+ obj = xe_bo_create_pin_map(xe, xe_device_get_root_tile(xe), NULL, size,
+ ttm_bo_type_kernel, XE_BO_FLAG_SCANOUT |
+diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+index 990285aa9b2612..0af667ebebf982 100644
+--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
++++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+@@ -40,10 +40,14 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ {
+ struct xe_tile *tile = xe_device_get_root_tile(xe);
+ struct xe_gt *gt = tile->media_gt;
++ struct xe_gsc *gsc = &gt->uc.gsc;
+ bool ret = true;
+
+- if (!xe_uc_fw_is_enabled(&gt->uc.gsc.fw))
++ if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
++ drm_dbg_kms(&xe->drm,
++ "GSC Components not ready for HDCP2.x\n");
+ return false;
++ }
+
+ xe_pm_runtime_get(xe);
+ if (xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC)) {
+@@ -53,7 +57,7 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ goto out;
+ }
+
+- if (!xe_gsc_proxy_init_done(&gt->uc.gsc))
++ if (!xe_gsc_proxy_init_done(gsc))
+ ret = false;
+
+ xe_force_wake_put(gt_to_fw(gt), XE_FW_GSC);
+diff --git a/drivers/gpu/drm/xe/display/xe_plane_initial.c b/drivers/gpu/drm/xe/display/xe_plane_initial.c
+index 5eccd6abb3ef51..a50ab9eae40ae4 100644
+--- a/drivers/gpu/drm/xe/display/xe_plane_initial.c
++++ b/drivers/gpu/drm/xe/display/xe_plane_initial.c
+@@ -18,6 +18,9 @@
+ #include "intel_frontbuffer.h"
+ #include "intel_plane_initial.h"
+ #include "xe_bo.h"
++#include "xe_wa.h"
++
++#include <generated/xe_wa_oob.h>
+
+ static bool
+ intel_reuse_initial_plane_obj(struct intel_crtc *this,
+@@ -104,6 +107,9 @@ initial_plane_bo(struct xe_device *xe,
+ phys_base = base;
+ flags |= XE_BO_FLAG_STOLEN;
+
++ if (XE_WA(xe_root_mmio_gt(xe), 22019338487_display))
++ return NULL;
++
+ /*
+ * If the FB is too big, just don't use it since fbdev is not very
+ * important and we should probably use that space with FBC or other
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 261d3d6c8a9315..e147ef1d0578f3 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -680,8 +680,8 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ tt_has_data = ttm && (ttm_tt_is_populated(ttm) ||
+ (ttm->page_flags & TTM_TT_FLAG_SWAPPED));
+
+- move_lacks_source = handle_system_ccs ? (!bo->ccs_cleared) :
+- (!mem_type_is_vram(old_mem_type) && !tt_has_data);
++ move_lacks_source = !old_mem || (handle_system_ccs ? (!bo->ccs_cleared) :
++ (!mem_type_is_vram(old_mem_type) && !tt_has_data));
+
+ needs_clear = (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC) ||
+ (!ttm && ttm_bo->type == ttm_bo_type_device);
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index c89deffffb6d06..8a44a2b6dcbb66 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -159,10 +159,8 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
+ xe_exec_queue_kill(q);
+ xe_exec_queue_put(q);
+ }
+- mutex_lock(&xef->vm.lock);
+ xa_for_each(&xef->vm.xa, idx, vm)
+ xe_vm_close_and_put(vm);
+- mutex_unlock(&xef->vm.lock);
+
+ xe_file_put(xef);
+
+@@ -285,6 +283,9 @@ static void xe_device_destroy(struct drm_device *dev, void *dummy)
+ if (xe->unordered_wq)
+ destroy_workqueue(xe->unordered_wq);
+
++ if (xe->destroy_wq)
++ destroy_workqueue(xe->destroy_wq);
++
+ ttm_device_fini(&xe->ttm);
+ }
+
+@@ -350,8 +351,9 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
+ xe->preempt_fence_wq = alloc_ordered_workqueue("xe-preempt-fence-wq", 0);
+ xe->ordered_wq = alloc_ordered_workqueue("xe-ordered-wq", 0);
+ xe->unordered_wq = alloc_workqueue("xe-unordered-wq", 0, 0);
++ xe->destroy_wq = alloc_workqueue("xe-destroy-wq", 0, 0);
+ if (!xe->ordered_wq || !xe->unordered_wq ||
+- !xe->preempt_fence_wq) {
++ !xe->preempt_fence_wq || !xe->destroy_wq) {
+ /*
+ * Cleanup done in xe_device_destroy via
+ * drmm_add_action_or_reset register above
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index 9e5fdf96750b6a..a7c7812d579157 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -392,6 +392,9 @@ struct xe_device {
+ /** @unordered_wq: used to serialize unordered work, mostly display */
+ struct workqueue_struct *unordered_wq;
+
++ /** @destroy_wq: used to serialize user destroy work, like queue */
++ struct workqueue_struct *destroy_wq;
++
+ /** @tiles: device tiles */
+ struct xe_tile tiles[XE_MAX_TILES_PER_DEVICE];
+
+@@ -555,15 +558,23 @@ struct xe_file {
+ struct {
+ /** @vm.xe: xarray to store VMs */
+ struct xarray xa;
+- /** @vm.lock: protects file VM state */
++ /**
++ * @vm.lock: Protects VM lookup + reference and removal a from
++ * file xarray. Not an intended to be an outer lock which does
++ * thing while being held.
++ */
+ struct mutex lock;
+ } vm;
+
+ /** @exec_queue: Submission exec queue state for file */
+ struct {
+- /** @exec_queue.xe: xarray to store engines */
++ /** @exec_queue.xa: xarray to store exec queues */
+ struct xarray xa;
+- /** @exec_queue.lock: protects file engine state */
++ /**
++ * @exec_queue.lock: Protects exec queue lookup + reference and
++ * removal from the file xarray. Not intended to be an outer
++ * lock which does things while being held.
++ */
+ struct mutex lock;
+ } exec_queue;
+
+diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
+index 1af95b9b917156..c237ced421833d 100644
+--- a/drivers/gpu/drm/xe/xe_drm_client.c
++++ b/drivers/gpu/drm/xe/xe_drm_client.c
+@@ -288,8 +288,15 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
+
+ /* Accumulate all the exec queues from this client */
+ mutex_lock(&xef->exec_queue.lock);
+- xa_for_each(&xef->exec_queue.xa, i, q)
++ xa_for_each(&xef->exec_queue.xa, i, q) {
++ xe_exec_queue_get(q);
++ mutex_unlock(&xef->exec_queue.lock);
++
+ xe_exec_queue_update_run_ticks(q);
++
++ mutex_lock(&xef->exec_queue.lock);
++ xe_exec_queue_put(q);
++ }
+ mutex_unlock(&xef->exec_queue.lock);
+
+ /* Get the total GPU cycles */
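+
+ The loop above now follows the standard "get a reference, drop the lock, do
+ the slow work, retake the lock, put" shape, so the update never runs under
+ exec_queue.lock. The same shape reduced to stand-in types:
+
+ #include <linux/kref.h>
+ #include <linux/mutex.h>
+ #include <linux/xarray.h>
+
+ struct q { struct kref ref; };
+
+ static void q_release(struct kref *r) { }       /* placeholder */
+ static void do_slow_update(struct q *q) { }     /* placeholder */
+
+ static void update_all_sketch(struct mutex *lock, struct xarray *xa)
+ {
+         struct q *q;
+         unsigned long i;
+
+         mutex_lock(lock);
+         xa_for_each(xa, i, q) {
+                 kref_get(&q->ref);              /* keep q alive unlocked */
+                 mutex_unlock(lock);
+
+                 do_slow_update(q);
+
+                 mutex_lock(lock);
+                 kref_put(&q->ref, q_release);
+         }
+         mutex_unlock(lock);
+ }
+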
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index d0bbb1d9b1ac15..2179c65dc60ab1 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -627,9 +627,7 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
+ }
+ }
+
+- mutex_lock(&xef->exec_queue.lock);
+ err = xa_alloc(&xef->exec_queue.xa, &id, q, xa_limit_32b, GFP_KERNEL);
+- mutex_unlock(&xef->exec_queue.lock);
+ if (err)
+ goto kill_exec_queue;
+
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+index f6ee0ae80fd63d..fc2a1a20b7e4b0 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+@@ -169,9 +169,11 @@ struct xe_exec_queue_ops {
+ int (*suspend)(struct xe_exec_queue *q);
+ /**
+ * @suspend_wait: Wait for an exec queue to suspend executing, should be
+- * call after suspend.
++ * called after suspend. In the dma-fencing path this must return within a
++ * reasonable amount of time. An -ETIME return indicates an error waiting
++ * for suspend, resulting in the associated VM being killed.
+ */
+- void (*suspend_wait)(struct xe_exec_queue *q);
++ int (*suspend_wait)(struct xe_exec_queue *q);
+ /**
+ * @resume: Resume exec queue execution, exec queue must be in a suspended
+ * state and dma fence returned from most recent suspend call must be
+diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
+index db906117db6d69..7502e3486eafa7 100644
+--- a/drivers/gpu/drm/xe/xe_execlist.c
++++ b/drivers/gpu/drm/xe/xe_execlist.c
+@@ -422,10 +422,11 @@ static int execlist_exec_queue_suspend(struct xe_exec_queue *q)
+ return 0;
+ }
+
+-static void execlist_exec_queue_suspend_wait(struct xe_exec_queue *q)
++static int execlist_exec_queue_suspend_wait(struct xe_exec_queue *q)
+
+ {
+ /* NIY */
++ return 0;
+ }
+
+ static void execlist_exec_queue_resume(struct xe_exec_queue *q)
+diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+index e4ad1d6ce1d5ff..7f24e58cc992f2 100644
+--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
++++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+@@ -90,6 +90,11 @@ void xe_sched_submission_stop(struct xe_gpu_scheduler *sched)
+ cancel_work_sync(&sched->work_process_msg);
+ }
+
++void xe_sched_submission_resume_tdr(struct xe_gpu_scheduler *sched)
++{
++ drm_sched_resume_timeout(&sched->base, sched->base.timeout);
++}
++
+ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
+ struct xe_sched_msg *msg)
+ {
+diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+index 10c6bb9c938681..6aac7fe686735a 100644
+--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
++++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+@@ -22,6 +22,8 @@ void xe_sched_fini(struct xe_gpu_scheduler *sched);
+ void xe_sched_submission_start(struct xe_gpu_scheduler *sched);
+ void xe_sched_submission_stop(struct xe_gpu_scheduler *sched);
+
++void xe_sched_submission_resume_tdr(struct xe_gpu_scheduler *sched);
++
+ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
+ struct xe_sched_msg *msg);
+
+diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+index b2a7fa55bd1815..730eec07795e20 100644
+--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
++++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+@@ -287,7 +287,7 @@ static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
+ PFD_VIRTUAL_ADDR_LO_SHIFT;
+
+ pf_queue->tail = (pf_queue->tail + PF_MSG_LEN_DW) %
+- PF_QUEUE_NUM_DW;
++ pf_queue->num_dw;
+ ret = true;
+ }
+ spin_unlock_irq(&pf_queue->lock);
+@@ -299,7 +299,8 @@ static bool pf_queue_full(struct pf_queue *pf_queue)
+ {
+ lockdep_assert_held(&pf_queue->lock);
+
+- return CIRC_SPACE(pf_queue->head, pf_queue->tail, PF_QUEUE_NUM_DW) <=
++ return CIRC_SPACE(pf_queue->head, pf_queue->tail,
++ pf_queue->num_dw) <=
+ PF_MSG_LEN_DW;
+ }
+
+@@ -312,22 +313,23 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
+ u32 asid;
+ bool full;
+
+- /*
+- * The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
+- */
+- BUILD_BUG_ON(PF_QUEUE_NUM_DW % PF_MSG_LEN_DW);
+-
+ if (unlikely(len != PF_MSG_LEN_DW))
+ return -EPROTO;
+
+ asid = FIELD_GET(PFD_ASID, msg[1]);
+ pf_queue = gt->usm.pf_queue + (asid % NUM_PF_QUEUE);
+
++ /*
++ * The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
++ */
++ xe_gt_assert(gt, !(pf_queue->num_dw % PF_MSG_LEN_DW));
++
+ spin_lock_irqsave(&pf_queue->lock, flags);
+ full = pf_queue_full(pf_queue);
+ if (!full) {
+ memcpy(pf_queue->data + pf_queue->head, msg, len * sizeof(u32));
+- pf_queue->head = (pf_queue->head + len) % PF_QUEUE_NUM_DW;
++ pf_queue->head = (pf_queue->head + len) %
++ pf_queue->num_dw;
+ queue_work(gt->usm.pf_wq, &pf_queue->worker);
+ } else {
+ drm_warn(&xe->drm, "PF Queue full, shouldn't be possible");
+@@ -394,18 +396,47 @@ static void pagefault_fini(void *arg)
+ destroy_workqueue(gt->usm.pf_wq);
+ }
+
++static int xe_alloc_pf_queue(struct xe_gt *gt, struct pf_queue *pf_queue)
++{
++ struct xe_device *xe = gt_to_xe(gt);
++ xe_dss_mask_t all_dss;
++ int num_dss, num_eus;
++
++ bitmap_or(all_dss, gt->fuse_topo.g_dss_mask, gt->fuse_topo.c_dss_mask,
++ XE_MAX_DSS_FUSE_BITS);
++
++ num_dss = bitmap_weight(all_dss, XE_MAX_DSS_FUSE_BITS);
++ num_eus = bitmap_weight(gt->fuse_topo.eu_mask_per_dss,
++ XE_MAX_EU_FUSE_BITS) * num_dss;
++
++ /* user can issue separate page faults per EU and per CS */
++ pf_queue->num_dw =
++ (num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW;
++
++ pf_queue->gt = gt;
++ pf_queue->data = devm_kcalloc(xe->drm.dev, pf_queue->num_dw,
++ sizeof(u32), GFP_KERNEL);
++ if (!pf_queue->data)
++ return -ENOMEM;
++
++ spin_lock_init(&pf_queue->lock);
++ INIT_WORK(&pf_queue->worker, pf_queue_work_func);
++
++ return 0;
++}
++
+ int xe_gt_pagefault_init(struct xe_gt *gt)
+ {
+ struct xe_device *xe = gt_to_xe(gt);
+- int i;
++ int i, ret = 0;
+
+ if (!xe->info.has_usm)
+ return 0;
+
+ for (i = 0; i < NUM_PF_QUEUE; ++i) {
+- gt->usm.pf_queue[i].gt = gt;
+- spin_lock_init(>->usm.pf_queue[i].lock);
+- INIT_WORK(>->usm.pf_queue[i].worker, pf_queue_work_func);
++ ret = xe_alloc_pf_queue(gt, >->usm.pf_queue[i]);
++ if (ret)
++ return ret;
+ }
+ for (i = 0; i < NUM_ACC_QUEUE; ++i) {
+ gt->usm.acc_queue[i].gt = gt;
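
The hunks above size each page-fault queue so that every EU and every
hardware engine can have one fault outstanding, and both producer and
consumer step through the ring in whole PF_MSG_LEN_DW records, which is
why the modulo assert moves next to the now-dynamic size. Below is a
minimal userspace sketch of that ring discipline; every name in it is
illustrative rather than the driver's own, and the EU/engine counts are
made up:

    #include <assert.h>
    #include <stdio.h>

    #define PF_MSG_LEN_DW 4 /* assumed per-fault record size, in DWs */

    struct pf_ring {
            unsigned int head, tail, num_dw;
    };

    /* DWs still free; classic ring with one slot kept open, any size. */
    static unsigned int ring_space(const struct pf_ring *r)
    {
            return (r->tail + r->num_dw - r->head - 1) % r->num_dw;
    }

    int main(void)
    {
            /* e.g. 64 EUs + 8 engines, one record per possible source */
            struct pf_ring r = { .num_dw = (64 + 8) * PF_MSG_LEN_DW };

            /* consumer math only works if records never straddle the wrap */
            assert(r.num_dw % PF_MSG_LEN_DW == 0);

            while (ring_space(&r) >= PF_MSG_LEN_DW) /* produce whole records */
                    r.head = (r.head + PF_MSG_LEN_DW) % r.num_dw;

            printf("queue full after %u records\n", r.head / PF_MSG_LEN_DW);
            return 0;
    }

Sizing the ring from the fused-off topology rather than a fixed 128 DWs
is what lets a fully loaded part fault from every EU at once without
hitting the "PF Queue full" warning.
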
+diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
+index c582541970dff4..ba6662c9863b5f 100644
+--- a/drivers/gpu/drm/xe/xe_gt_types.h
++++ b/drivers/gpu/drm/xe/xe_gt_types.h
+@@ -233,9 +233,14 @@ struct xe_gt {
+ struct pf_queue {
+ /** @usm.pf_queue.gt: back pointer to GT */
+ struct xe_gt *gt;
+-#define PF_QUEUE_NUM_DW 128
+ /** @usm.pf_queue.data: data in the page fault queue */
+- u32 data[PF_QUEUE_NUM_DW];
++ u32 *data;
++ /**
++ * @usm.pf_queue.num_dw: number of DWORDS in the page
++ * fault queue. Dynamically calculated based on the number
++ * of compute resources available.
++ */
++ u32 num_dw;
+ /**
+ * @usm.pf_queue.tail: tail pointer in DWs for page fault queue,
+ * moved by worker which processes faults (consumer).
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
+index ccd574e948aa30..034b29984d5ed4 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.c
++++ b/drivers/gpu/drm/xe/xe_guc_pc.c
+@@ -1042,7 +1042,7 @@ static void xe_guc_pc_fini_hw(void *arg)
+ return;
+
+ XE_WARN_ON(xe_force_wake_get(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL));
+- XE_WARN_ON(xe_guc_pc_gucrc_disable(pc));
++ xe_guc_pc_gucrc_disable(pc);
+ XE_WARN_ON(xe_guc_pc_stop(pc));
+
+ /* Bind requested freq to mert_freq_cap before unload */
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 59b36c7998c243..62c3982ad7fdc9 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -276,10 +276,26 @@ static struct workqueue_struct *get_submit_wq(struct xe_guc *guc)
+ }
+ #endif
+
++static void xe_guc_submit_fini(struct xe_guc *guc)
++{
++ struct xe_device *xe = guc_to_xe(guc);
++ struct xe_gt *gt = guc_to_gt(guc);
++ int ret;
++
++ ret = wait_event_timeout(guc->submission_state.fini_wq,
++ xa_empty(&guc->submission_state.exec_queue_lookup),
++ HZ * 5);
++
++ drain_workqueue(xe->destroy_wq);
++
++ xe_gt_assert(gt, ret);
++}
++
+ static void guc_submit_fini(struct drm_device *drm, void *arg)
+ {
+ struct xe_guc *guc = arg;
+
++ xe_guc_submit_fini(guc);
+ xa_destroy(&guc->submission_state.exec_queue_lookup);
+ free_submit_wq(guc);
+ }
+@@ -290,9 +306,15 @@ static void guc_submit_wedged_fini(void *arg)
+ struct xe_exec_queue *q;
+ unsigned long index;
+
+- xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
+- if (exec_queue_wedged(q))
++ mutex_lock(&guc->submission_state.lock);
++ xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) {
++ if (exec_queue_wedged(q)) {
++ mutex_unlock(&guc->submission_state.lock);
+ xe_exec_queue_put(q);
++ mutex_lock(&guc->submission_state.lock);
++ }
++ }
++ mutex_unlock(&guc->submission_state.lock);
+ }
+
+ static const struct xe_exec_queue_ops guc_exec_queue_ops;
+@@ -345,6 +367,8 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
+
+ xa_init(&guc->submission_state.exec_queue_lookup);
+
++ init_waitqueue_head(&guc->submission_state.fini_wq);
++
+ primelockdep(guc);
+
+ return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+@@ -361,6 +385,9 @@ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa
+
+ xe_guc_id_mgr_release_locked(&guc->submission_state.idm,
+ q->guc->id, q->width);
++
++ if (xa_empty(&guc->submission_state.exec_queue_lookup))
++ wake_up(&guc->submission_state.fini_wq);
+ }
+
+ static int alloc_guc_id(struct xe_guc *guc, struct xe_exec_queue *q)
+@@ -1261,13 +1288,16 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+
+ static void guc_exec_queue_fini_async(struct xe_exec_queue *q)
+ {
++ struct xe_guc *guc = exec_queue_to_guc(q);
++ struct xe_device *xe = guc_to_xe(guc);
++
+ INIT_WORK(&q->guc->fini_async, __guc_exec_queue_fini_async);
+
+ /* We must block on kernel engines so slabs are empty on driver unload */
+ if (q->flags & EXEC_QUEUE_FLAG_PERMANENT || exec_queue_wedged(q))
+ __guc_exec_queue_fini_async(&q->guc->fini_async);
+ else
+- queue_work(system_wq, &q->guc->fini_async);
++ queue_work(xe->destroy_wq, &q->guc->fini_async);
+ }
+
+ static void __guc_exec_queue_fini(struct xe_guc *guc, struct xe_exec_queue *q)
+@@ -1312,6 +1342,15 @@ static void __guc_exec_queue_process_msg_set_sched_props(struct xe_sched_msg *ms
+ kfree(msg);
+ }
+
++static void __suspend_fence_signal(struct xe_exec_queue *q)
++{
++ if (!q->guc->suspend_pending)
++ return;
++
++ WRITE_ONCE(q->guc->suspend_pending, false);
++ wake_up(&q->guc->suspend_wait);
++}
++
+ static void suspend_fence_signal(struct xe_exec_queue *q)
+ {
+ struct xe_guc *guc = exec_queue_to_guc(q);
+@@ -1321,9 +1360,7 @@ static void suspend_fence_signal(struct xe_exec_queue *q)
+ guc_read_stopped(guc));
+ xe_assert(xe, q->guc->suspend_pending);
+
+- q->guc->suspend_pending = false;
+- smp_wmb();
+- wake_up(&q->guc->suspend_wait);
++ __suspend_fence_signal(q);
+ }
+
+ static void __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
+@@ -1480,6 +1517,7 @@ static void guc_exec_queue_kill(struct xe_exec_queue *q)
+ {
+ trace_xe_exec_queue_kill(q);
+ set_exec_queue_killed(q);
++ __suspend_fence_signal(q);
+ xe_guc_exec_queue_trigger_cleanup(q);
+ }
+
+@@ -1578,12 +1616,31 @@ static int guc_exec_queue_suspend(struct xe_exec_queue *q)
+ return 0;
+ }
+
+-static void guc_exec_queue_suspend_wait(struct xe_exec_queue *q)
++static int guc_exec_queue_suspend_wait(struct xe_exec_queue *q)
+ {
+ struct xe_guc *guc = exec_queue_to_guc(q);
++ int ret;
++
++ /*
++ * We likely don't need to check exec_queue_killed() as we clear
++ * suspend_pending upon kill, but to be paranoid about races in which
++ * suspend_pending is set after kill, also check for kill here.
++ */
++ ret = wait_event_timeout(q->guc->suspend_wait,
++ !READ_ONCE(q->guc->suspend_pending) ||
++ exec_queue_killed(q) ||
++ guc_read_stopped(guc),
++ HZ * 5);
+
+- wait_event(q->guc->suspend_wait, !q->guc->suspend_pending ||
+- guc_read_stopped(guc));
++ if (!ret) {
++ xe_gt_warn(guc_to_gt(guc),
++ "Suspend fence, guc_id=%d, failed to respond",
++ q->guc->id);
++ /* XXX: Trigger GT reset? */
++ return -ETIME;
++ }
++
++ return 0;
+ }
+
+ static void guc_exec_queue_resume(struct xe_exec_queue *q)
+@@ -1734,6 +1791,7 @@ static void guc_exec_queue_start(struct xe_exec_queue *q)
+ }
+
+ xe_sched_submission_start(sched);
++ xe_sched_submission_resume_tdr(sched);
+ }
+
+ int xe_guc_submit_start(struct xe_guc *guc)
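
The suspend_wait conversion above swaps an unbounded wait_event() for
wait_event_timeout(), so the dma-fencing path can fail with -ETIME
rather than block forever, with kill and GuC-stop as extra wake
conditions. A small pthread sketch of the same bounded-wait shape; the
names and the 5 second budget here are ours, chosen only to mirror the
hunk:

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool suspend_pending = true, killed;

    /* Bounded wait: 0 on ack or kill, -ETIME if nothing ever answers. */
    static int suspend_wait(int timeout_sec)
    {
            struct timespec ts;
            int rc = 0;

            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec += timeout_sec;

            pthread_mutex_lock(&lock);
            while (suspend_pending && !killed && rc == 0)
                    rc = pthread_cond_timedwait(&cond, &lock, &ts);
            pthread_mutex_unlock(&lock);

            return rc == ETIMEDOUT ? -ETIME : 0;
    }

    static void *firmware(void *arg)
    {
            (void)arg;
            sleep(1); /* pretend the firmware acks the suspend eventually */
            pthread_mutex_lock(&lock);
            suspend_pending = false; /* like __suspend_fence_signal() */
            pthread_cond_broadcast(&cond);
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, firmware, NULL);
            printf("suspend_wait() = %d\n", suspend_wait(5)); /* 0 */
            pthread_join(&t, NULL);
            return 0;
    }
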
+diff --git a/drivers/gpu/drm/xe/xe_guc_types.h b/drivers/gpu/drm/xe/xe_guc_types.h
+index 546ac6350a31ff..69046f69827174 100644
+--- a/drivers/gpu/drm/xe/xe_guc_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_types.h
+@@ -81,6 +81,8 @@ struct xe_guc {
+ #endif
+ /** @submission_state.enabled: submission is enabled */
+ bool enabled;
++ /** @submission_state.fini_wq: submit fini wait queue */
++ wait_queue_head_t fini_wq;
+ } submission_state;
+ /** @hwconfig: Hardware config state */
+ struct {
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index 58121821f0814a..974a9cd8c37950 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -5,6 +5,8 @@
+
+ #include "xe_lrc.h"
+
++#include <generated/xe_wa_oob.h>
++
+ #include <linux/ascii85.h>
+
+ #include "instructions/xe_mi_commands.h"
+@@ -24,6 +26,7 @@
+ #include "xe_memirq.h"
+ #include "xe_sriov.h"
+ #include "xe_vm.h"
++#include "xe_wa.h"
+
+ #define LRC_VALID BIT_ULL(0)
+ #define LRC_PRIVILEGE BIT_ULL(8)
+@@ -1581,19 +1584,31 @@ void xe_lrc_emit_hwe_state_instructions(struct xe_exec_queue *q, struct xe_bb *b
+ int state_table_size = 0;
+
+ /*
+- * At the moment we only need to emit non-register state for the RCS
+- * engine.
++ * Wa_14019789679
++ *
++ * If the driver doesn't explicitly emit the SVG instructions while
++ * setting up the default LRC, the context switch will write 0's
++ * (noops) into the LRC memory rather than the expected instruction
++ * headers. Application contexts start out as a copy of the default
++ * LRC, and if they also do not emit specific settings for some SVG
++ * state, then on context restore they'll unintentionally inherit
++ * whatever state setting the previous context had programmed into the
++ * hardware (i.e., the lack of a 3DSTATE_* instruction in the LRC will
++ * prevent the hardware from resetting that state back to any specific
++ * value).
++ *
++ * The official workaround only requires emitting 3DSTATE_MESH_CONTROL
++ * since that's a specific state setting that can easily cause GPU
++ * hangs if unintentionally inherited. However, to be safe we'll
++ * continue to emit all of the SVG state since it's best not to leak
++ * any of the state between contexts, even if that leakage is harmless.
+ */
+- if (q->hwe->class != XE_ENGINE_CLASS_RENDER)
+- return;
+-
+- switch (GRAPHICS_VERx100(xe)) {
+- case 1255:
+- case 1270 ... 2004:
++ if (XE_WA(gt, 14019789679) && q->hwe->class == XE_ENGINE_CLASS_RENDER) {
+ state_table = xe_hpg_svg_state;
+ state_table_size = ARRAY_SIZE(xe_hpg_svg_state);
+- break;
+- default:
++ }
++
++ if (!state_table) {
+ xe_gt_dbg(gt, "No non-register state to emit on graphics ver %d.%02d\n",
+ GRAPHICS_VER(xe), GRAPHICS_VERx100(xe) % 100);
+ return;
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 22f14eba2c6346..5906df55dfdf2d 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -709,8 +709,7 @@ static int xe_oa_configure_oar_context(struct xe_oa_stream *stream, bool enable)
+ {
+ RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
+ regs_offset + CTX_CONTEXT_CONTROL,
+- _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
+- enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0)
++ _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE),
+ },
+ };
+ struct xe_oa_reg reg_lri = { OAR_OACONTROL, oacontrol };
+@@ -742,10 +741,8 @@ static int xe_oa_configure_oac_context(struct xe_oa_stream *stream, bool enable)
+ {
+ RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
+ regs_offset + CTX_CONTEXT_CONTROL,
+- _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
+- enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) |
+- _MASKED_FIELD(CTX_CTRL_RUN_ALONE,
+- enable ? CTX_CTRL_RUN_ALONE : 0),
++ _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE) |
++ _MASKED_FIELD(CTX_CTRL_RUN_ALONE, enable ? CTX_CTRL_RUN_ALONE : 0),
+ },
+ };
+ struct xe_oa_reg reg_lri = { OAC_OACONTROL, oacontrol };
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index 732ee0d02124f7..5929ac61dbe0a3 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -924,6 +924,8 @@ static int xe_pci_resume(struct device *dev)
+ if (err)
+ return err;
+
++ pci_restore_state(pdev);
++
+ err = pci_enable_device(pdev);
+ if (err)
+ return err;
+diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
+index c453f45328b1c5..83fbeea5aa201a 100644
+--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
++++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
+@@ -17,10 +17,16 @@ static void preempt_fence_work_func(struct work_struct *w)
+ container_of(w, typeof(*pfence), preempt_work);
+ struct xe_exec_queue *q = pfence->q;
+
+- if (pfence->error)
++ if (pfence->error) {
+ dma_fence_set_error(&pfence->base, pfence->error);
+- else
+- q->ops->suspend_wait(q);
++ } else if (!q->ops->reset_status(q)) {
++ int err = q->ops->suspend_wait(q);
++
++ if (err)
++ dma_fence_set_error(&pfence->base, err);
++ } else {
++ dma_fence_set_error(&pfence->base, -ENOENT);
++ }
+
+ dma_fence_signal(&pfence->base);
+ /*
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index b49bee0dfac5da..49ba9a1e375f42 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -133,8 +133,10 @@ static int wait_for_existing_preempt_fences(struct xe_vm *vm)
+ if (q->lr.pfence) {
+ long timeout = dma_fence_wait(q->lr.pfence, false);
+
+- if (timeout < 0)
++ /* Only -ETIME on fence indicates VM needs to be killed */
++ if (timeout < 0 || q->lr.pfence->error == -ETIME)
+ return -ETIME;
++
+ dma_fence_put(q->lr.pfence);
+ q->lr.pfence = NULL;
+ }
+@@ -311,6 +313,14 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm)
+
+ #define XE_VM_REBIND_RETRY_TIMEOUT_MS 1000
+
++/*
++ * xe_vm_kill() - VM Kill
++ * @vm: The VM.
++ * @unlocked: Flag indicating the VM's dma-resv is not held
++ *
++ * Kill the VM by setting the banned flag, indicating the VM is no longer
++ * available for use. If in preempt fence mode, also kill all exec queues
++ * attached to the VM.
++ */
+ static void xe_vm_kill(struct xe_vm *vm, bool unlocked)
+ {
+ struct xe_exec_queue *q;
+@@ -1895,12 +1905,6 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
+ if (IS_ERR(vm))
+ return PTR_ERR(vm);
+
+- mutex_lock(&xef->vm.lock);
+- err = xa_alloc(&xef->vm.xa, &id, vm, xa_limit_32b, GFP_KERNEL);
+- mutex_unlock(&xef->vm.lock);
+- if (err)
+- goto err_close_and_put;
+-
+ if (xe->info.has_asid) {
+ mutex_lock(&xe->usm.lock);
+ err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,
+@@ -1908,12 +1912,11 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
+ &xe->usm.next_asid, GFP_KERNEL);
+ mutex_unlock(&xe->usm.lock);
+ if (err < 0)
+- goto err_free_id;
++ goto err_close_and_put;
+
+ vm->usm.asid = asid;
+ }
+
+- args->vm_id = id;
+ vm->xef = xe_file_get(xef);
+
+ /* Record BO memory for VM pagetable created against client */
+@@ -1926,12 +1929,15 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
+ args->reserved[0] = xe_bo_main_addr(vm->pt_root[0]->bo, XE_PAGE_SIZE);
+ #endif
+
++ /* user id alloc must always be last in ioctl to prevent UAF */
++ err = xa_alloc(&xef->vm.xa, &id, vm, xa_limit_32b, GFP_KERNEL);
++ if (err)
++ goto err_close_and_put;
++
++ args->vm_id = id;
++
+ return 0;
+
+-err_free_id:
+- mutex_lock(&xef->vm.lock);
+- xa_erase(&xef->vm.xa, id);
+- mutex_unlock(&xef->vm.lock);
+ err_close_and_put:
+ xe_vm_close_and_put(vm);
+
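
The reordering above makes the user-visible id the very last thing the
ioctl publishes, so a concurrent lookup can never reach a VM that is
still being initialized or is about to be unwound on an error path. A
toy sketch of the publish-last rule, with a plain table standing in for
the xarray and all names ours:

    #include <stdio.h>

    struct vm {
            int initialized;
    };

    static struct vm *table[16]; /* stands in for the xef->vm.xa xarray */

    /* Once this returns, other threads can find the VM by id. */
    static int publish(struct vm *vm)
    {
            for (int id = 1; id < 16; id++)
                    if (!table[id]) {
                            table[id] = vm;
                            return id;
                    }
            return -1;
    }

    int main(void)
    {
            struct vm vm = { 0 };
            int id;

            /*
             * Everything that can fail, and everything a concurrent
             * lookup might touch, happens before the id exists.
             * Publishing first and initializing after is exactly the
             * use-after-free window the hunk above closes.
             */
            vm.initialized = 1;
            id = publish(&vm);
            printf("vm_id=%d initialized=%d\n", id, vm.initialized);
            return 0;
    }
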
+diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
+index 5bcd59190353ea..80ba2fc78837ac 100644
+--- a/drivers/gpu/drm/xe/xe_vram.c
++++ b/drivers/gpu/drm/xe/xe_vram.c
+@@ -182,6 +182,7 @@ static inline u64 get_flat_ccs_offset(struct xe_gt *gt, u64 tile_size)
+ offset = offset_hi << 32; /* HW view bits 39:32 */
+ offset |= offset_lo << 6; /* HW view bits 31:6 */
+ offset *= num_enabled; /* convert to SW view */
++ offset = round_up(offset, SZ_128K); /* SW must round up to nearest 128K */
+
+ /* We don't expect any holes */
+ xe_assert_msg(xe, offset == (xe_mmio_read64_2x32(gt, GSMBASE) - ccs_size),
+diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
+index 08f7336881e32d..24a5b7d7cdcc1f 100644
+--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
++++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
+@@ -29,4 +29,7 @@
+ 13011645652 GRAPHICS_VERSION(2004)
+ 22019338487 MEDIA_VERSION(2000)
+ GRAPHICS_VERSION(2001)
++22019338487_display PLATFORM(LUNARLAKE)
+ 16023588340 GRAPHICS_VERSION(2001)
++14019789679 GRAPHICS_VERSION(1255)
++ GRAPHICS_VERSION_RANGE(1270, 2004)
+diff --git a/drivers/hid/bpf/hid_bpf_struct_ops.c b/drivers/hid/bpf/hid_bpf_struct_ops.c
+index cd696c59ba0f4f..702c22fae136aa 100644
+--- a/drivers/hid/bpf/hid_bpf_struct_ops.c
++++ b/drivers/hid/bpf/hid_bpf_struct_ops.c
+@@ -276,9 +276,23 @@ static int __hid_bpf_rdesc_fixup(struct hid_bpf_ctx *ctx)
+ return 0;
+ }
+
++static int __hid_bpf_hw_request(struct hid_bpf_ctx *ctx, unsigned char reportnum,
++ enum hid_report_type rtype, enum hid_class_request reqtype,
++ u64 source)
++{
++ return 0;
++}
++
++static int __hid_bpf_hw_output_report(struct hid_bpf_ctx *ctx, u64 source)
++{
++ return 0;
++}
++
+ static struct hid_bpf_ops __bpf_hid_bpf_ops = {
+ .hid_device_event = __hid_bpf_device_event,
+ .hid_rdesc_fixup = __hid_bpf_rdesc_fixup,
++ .hid_hw_request = __hid_bpf_hw_request,
++ .hid_hw_output_report = __hid_bpf_hw_output_report,
+ };
+
+ static struct bpf_struct_ops bpf_hid_bpf_ops = {
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 781c5aa298598a..06104a4e0fdc15 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -417,24 +417,8 @@
+ #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401
+ #define USB_DEVICE_ID_HP_X2 0x074d
+ #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755
+-#define I2C_DEVICE_ID_HP_ENVY_X360_15 0x2d05
+-#define I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100 0x29CF
+-#define I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV 0x2CF9
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_15 0x2817
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG 0x29DF
+-#define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8
+-#define I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN 0x2C82
+-#define I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN 0x2F2C
+-#define I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN 0x4116
+ #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706
+-#define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A
+-#define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN 0x2A1C
+-#define I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN 0x279F
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100 0x29F5
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1 0x2BED
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2 0x2BEE
+-#define I2C_DEVICE_ID_HP_ENVY_X360_15_EU0556NG 0x2D02
+ #define I2C_DEVICE_ID_CHROMEBOOK_TROGDOR_POMPOM 0x2F81
+
+ #define USB_VENDOR_ID_ELECOM 0x056e
+@@ -810,6 +794,7 @@
+ #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3
+ #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5
+ #define USB_DEVICE_ID_LENOVO_X12_TAB 0x60fe
++#define USB_DEVICE_ID_LENOVO_X12_TAB2 0x61ae
+ #define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E 0x600e
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index c9094a4f281e90..fda9dce3da9980 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -373,14 +373,6 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
+ HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
+ HID_BATTERY_QUIRK_IGNORE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN),
+@@ -391,32 +383,13 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ HID_BATTERY_QUIRK_AVOID_QUERY },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_SW),
+ HID_BATTERY_QUIRK_AVOID_QUERY },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2),
+- HID_BATTERY_QUIRK_IGNORE },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15_EU0556NG),
+- HID_BATTERY_QUIRK_IGNORE },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_CHROMEBOOK_TROGDOR_POMPOM),
+ HID_BATTERY_QUIRK_AVOID_QUERY },
++ /*
++ * Elan I2C-HID touchscreens all seem to report a non-present battery,
++ * so set HID_BATTERY_QUIRK_IGNORE for all Elan I2C-HID devices.
++ */
++ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE },
+ {}
+ };
+
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 99812c0f830b5e..c4a6908bbe5404 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2113,6 +2113,12 @@ static const struct hid_device_id mt_devices[] = {
+ USB_VENDOR_ID_LENOVO,
+ USB_DEVICE_ID_LENOVO_X12_TAB) },
+
++ /* Lenovo X12 TAB Gen 2 */
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_LENOVO,
++ USB_DEVICE_ID_LENOVO_X12_TAB2) },
++
+ /* Logitech devices */
+ { .driver_data = MT_CLS_NSMU,
+ HID_DEVICE(BUS_BLUETOOTH, HID_GROUP_MULTITOUCH_WIN_8,
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 632eaf9e11a6b6..2f8a9d3f1e861e 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -105,6 +105,7 @@ struct i2c_hid {
+
+ wait_queue_head_t wait; /* For waiting the interrupt */
+
++ struct mutex cmd_lock; /* protects cmdbuf and rawbuf */
+ struct mutex reset_lock;
+
+ struct i2chid_ops *ops;
+@@ -220,6 +221,8 @@ static int i2c_hid_xfer(struct i2c_hid *ihid,
+ static int i2c_hid_read_register(struct i2c_hid *ihid, __le16 reg,
+ void *buf, size_t len)
+ {
++ guard(mutex)(&ihid->cmd_lock);
++
+ *(__le16 *)ihid->cmdbuf = reg;
+
+ return i2c_hid_xfer(ihid, ihid->cmdbuf, sizeof(__le16), buf, len);
+@@ -252,6 +255,8 @@ static int i2c_hid_get_report(struct i2c_hid *ihid,
+
+ i2c_hid_dbg(ihid, "%s\n", __func__);
+
++ guard(mutex)(&ihid->cmd_lock);
++
+ /* Command register goes first */
+ *(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+ length += sizeof(__le16);
+@@ -342,6 +347,8 @@ static int i2c_hid_set_or_send_report(struct i2c_hid *ihid,
+ if (!do_set && le16_to_cpu(ihid->hdesc.wMaxOutputLength) == 0)
+ return -ENOSYS;
+
++ guard(mutex)(&ihid->cmd_lock);
++
+ if (do_set) {
+ /* Command register goes first */
+ *(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+@@ -384,6 +391,8 @@ static int i2c_hid_set_power_command(struct i2c_hid *ihid, int power_state)
+ {
+ size_t length;
+
++ guard(mutex)(&ihid->cmd_lock);
++
+ /* SET_POWER uses command register */
+ *(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+ length = sizeof(__le16);
+@@ -440,25 +449,27 @@ static int i2c_hid_start_hwreset(struct i2c_hid *ihid)
+ if (ret)
+ return ret;
+
+- /* Prepare reset command. Command register goes first. */
+- *(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+- length += sizeof(__le16);
+- /* Next is RESET command itself */
+- length += i2c_hid_encode_command(ihid->cmdbuf + length,
+- I2C_HID_OPCODE_RESET, 0, 0);
++ scoped_guard(mutex, &ihid->cmd_lock) {
++ /* Prepare reset command. Command register goes first. */
++ *(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
++ length += sizeof(__le16);
++ /* Next is RESET command itself */
++ length += i2c_hid_encode_command(ihid->cmdbuf + length,
++ I2C_HID_OPCODE_RESET, 0, 0);
+
+- set_bit(I2C_HID_RESET_PENDING, &ihid->flags);
++ set_bit(I2C_HID_RESET_PENDING, &ihid->flags);
+
+- ret = i2c_hid_xfer(ihid, ihid->cmdbuf, length, NULL, 0);
+- if (ret) {
+- dev_err(&ihid->client->dev,
+- "failed to reset device: %d\n", ret);
+- goto err_clear_reset;
+- }
++ ret = i2c_hid_xfer(ihid, ihid->cmdbuf, length, NULL, 0);
++ if (ret) {
++ dev_err(&ihid->client->dev,
++ "failed to reset device: %d\n", ret);
++ break;
++ }
+
+- return 0;
++ return 0;
++ }
+
+-err_clear_reset:
++ /* Clean up if sending reset command failed */
+ clear_bit(I2C_HID_RESET_PENDING, &ihid->flags);
+ i2c_hid_set_power(ihid, I2C_HID_PWR_SLEEP);
+ return ret;
+@@ -1200,6 +1211,7 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+ ihid->is_panel_follower = drm_is_panel_follower(&client->dev);
+
+ init_waitqueue_head(&ihid->wait);
++ mutex_init(&ihid->cmd_lock);
+ mutex_init(&ihid->reset_lock);
+ INIT_WORK(&ihid->panel_follower_prepare_work, ihid_core_panel_prepare_work);
+
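
The new cmd_lock is taken with guard(mutex) and scoped_guard() from
linux/cleanup.h, which release the mutex automatically when the
enclosing scope ends, so every early return in the transfer helpers
stays balanced. Below is a userspace approximation built on the same
__attribute__((cleanup)) mechanism the kernel helper is built on; the
guard_mutex macro is our own stand-in, not the kernel API:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t cmd_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned char cmdbuf[16]; /* shared scratch, like ihid->cmdbuf */

    static void unlock(pthread_mutex_t **m)
    {
            pthread_mutex_unlock(*m);
    }

    /* Our stand-in for guard(mutex)(...): unlocks when the scope ends. */
    #define guard_mutex(m) \
            pthread_mutex_t *__attribute__((cleanup(unlock))) _g = (m); \
            pthread_mutex_lock(_g)

    static int read_register(unsigned short reg)
    {
            guard_mutex(&cmd_lock);

            cmdbuf[0] = reg & 0xff; /* safe: no concurrent cmdbuf writers */
            cmdbuf[1] = reg >> 8;
            if (cmdbuf[0] == 0xff)
                    return -1; /* early return still unlocks automatically */
            return 0;
    }

    int main(void)
    {
            printf("read_register() = %d\n", read_register(0x0001));
            return 0;
    }
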
+diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
+index 9aa4dcf4a6f336..096f1daa8f2bcf 100644
+--- a/drivers/hwmon/nct6775-platform.c
++++ b/drivers/hwmon/nct6775-platform.c
+@@ -1269,6 +1269,7 @@ static const char * const asus_msi_boards[] = {
+ "EX-B760M-V5 D4",
+ "EX-H510M-V3",
+ "EX-H610M-V3 D4",
++ "G15CF",
+ "PRIME A620M-A",
+ "PRIME B560-PLUS",
+ "PRIME B560-PLUS AC-HES",
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index e8a688d04aee0f..edda6a70907b43 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -441,6 +441,7 @@ int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
+
+ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ {
++ struct i2c_timings *t = &dev->timings;
+ unsigned int raw_intr_stats;
+ unsigned int enable;
+ int timeout = 100;
+@@ -453,6 +454,19 @@ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+
+ abort_needed = raw_intr_stats & DW_IC_INTR_MST_ON_HOLD;
+ if (abort_needed) {
++ if (!(enable & DW_IC_ENABLE_ENABLE)) {
++ regmap_write(dev->map, DW_IC_ENABLE, DW_IC_ENABLE_ENABLE);
++ /*
++ * Wait 10 times the signaling period of the highest I2C
++ * transfer supported by the driver (for 400KHz this is
++ * 25us) to ensure the I2C ENABLE bit is already set
++ * as described in the DesignWare I2C databook.
++ */
++ fsleep(DIV_ROUND_CLOSEST_ULL(10 * MICRO, t->bus_freq_hz));
++ /* Set ENABLE bit before setting ABORT */
++ enable |= DW_IC_ENABLE_ENABLE;
++ }
++
+ regmap_write(dev->map, DW_IC_ENABLE, enable | DW_IC_ENABLE_ABORT);
+ ret = regmap_read_poll_timeout(dev->map, DW_IC_ENABLE, enable,
+ !(enable & DW_IC_ENABLE_ABORT), 10,
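
The fsleep() above is sized as ten bit-times at the configured bus
speed, i.e. DIV_ROUND_CLOSEST_ULL(10 * MICRO, bus_freq_hz) microseconds.
A quick standalone check of that arithmetic, with MICRO taken as 1000000
as in linux/units.h:

    #include <stdio.h>

    #define MICRO 1000000ULL

    static unsigned long long ten_periods_us(unsigned long long bus_freq_hz)
    {
            /* DIV_ROUND_CLOSEST_ULL(10 * MICRO, bus_freq_hz) */
            return (10 * MICRO + bus_freq_hz / 2) / bus_freq_hz;
    }

    int main(void)
    {
            printf("100 kHz -> %llu us\n", ten_periods_us(100000));  /* 100 */
            printf("400 kHz -> %llu us\n", ten_periods_us(400000));  /* 25 */
            printf("1 MHz   -> %llu us\n", ten_periods_us(1000000)); /* 10 */
            return 0;
    }
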
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index e9606c00b8d103..e45daedad96724 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -109,6 +109,7 @@
+ DW_IC_INTR_RX_UNDER | \
+ DW_IC_INTR_RD_REQ)
+
++#define DW_IC_ENABLE_ENABLE BIT(0)
+ #define DW_IC_ENABLE_ABORT BIT(1)
+
+ #define DW_IC_STATUS_ACTIVITY BIT(0)
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index c7e56002809ace..7b260a3617f69d 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -253,6 +253,34 @@ static void i2c_dw_xfer_init(struct dw_i2c_dev *dev)
+ __i2c_dw_write_intr_mask(dev, DW_IC_INTR_MASTER_MASK);
+ }
+
++/*
++ * This function waits for the controller to be idle before disabling I2C.
++ * When the controller is not in the IDLE state, the MST_ACTIVITY bit
++ * (IC_STATUS[5]) is set.
++ *
++ * Values:
++ * 0x1 (ACTIVE): Controller not idle
++ * 0x0 (IDLE): Controller is idle
++ *
++ * The function is called after completing the current transfer.
++ *
++ * Returns:
++ * False when the controller is in the IDLE state.
++ * True when the controller is in the ACTIVE state.
++ */
++static bool i2c_dw_is_controller_active(struct dw_i2c_dev *dev)
++{
++ u32 status;
++
++ regmap_read(dev->map, DW_IC_STATUS, &status);
++ if (!(status & DW_IC_STATUS_MASTER_ACTIVITY))
++ return false;
++
++ return regmap_read_poll_timeout(dev->map, DW_IC_STATUS, status,
++ !(status & DW_IC_STATUS_MASTER_ACTIVITY),
++ 1100, 20000) != 0;
++}
++
+ static int i2c_dw_check_stopbit(struct dw_i2c_dev *dev)
+ {
+ u32 val;
+@@ -788,6 +816,16 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+ goto done;
+ }
+
++ /*
++ * This happens rarely (~1:500) and is hard to reproduce. Debug trace
++ * showed that IC_STATUS had a value of 0x23 when STOP_DET occurred;
++ * disabling IC_ENABLE.ENABLE immediately at that point can result in
++ * IC_RAW_INTR_STAT.MASTER_ON_HOLD holding SCL low. Check that the
++ * controller is no longer ACTIVE before disabling I2C.
++ */
++ if (i2c_dw_is_controller_active(dev))
++ dev_err(dev->dev, "controller active\n");
++
+ /*
+ * We must disable the adapter before returning and signaling the end
+ * of the current transfer. Otherwise the hardware might continue
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 06e836e3e87733..4c9050a4d58e7d 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -818,15 +818,13 @@ static int geni_i2c_probe(struct platform_device *pdev)
+ init_completion(&gi2c->done);
+ spin_lock_init(&gi2c->lock);
+ platform_set_drvdata(pdev, gi2c);
+- ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, 0,
++ ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN,
+ dev_name(dev), gi2c);
+ if (ret) {
+ dev_err(dev, "Request_irq failed:%d: err:%d\n",
+ gi2c->irq, ret);
+ return ret;
+ }
+- /* Disable the interrupt so that the system can enter low-power mode */
+- disable_irq(gi2c->irq);
+ i2c_set_adapdata(&gi2c->adap, gi2c);
+ gi2c->adap.dev.parent = dev;
+ gi2c->adap.dev.of_node = dev->of_node;
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index cfee2d9c09de36..0174ead99de6c1 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -2395,7 +2395,7 @@ static int __maybe_unused stm32f7_i2c_runtime_suspend(struct device *dev)
+ struct stm32f7_i2c_dev *i2c_dev = dev_get_drvdata(dev);
+
+ if (!stm32f7_i2c_is_slave_registered(i2c_dev))
+- clk_disable_unprepare(i2c_dev->clk);
++ clk_disable(i2c_dev->clk);
+
+ return 0;
+ }
+@@ -2406,9 +2406,9 @@ static int __maybe_unused stm32f7_i2c_runtime_resume(struct device *dev)
+ int ret;
+
+ if (!stm32f7_i2c_is_slave_registered(i2c_dev)) {
+- ret = clk_prepare_enable(i2c_dev->clk);
++ ret = clk_enable(i2c_dev->clk);
+ if (ret) {
+- dev_err(dev, "failed to prepare_enable clock\n");
++ dev_err(dev, "failed to enable clock\n");
+ return ret;
+ }
+ }
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index 4eccbcd0fbfc00..bbb9062669e4b2 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -550,12 +550,13 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
+ device_property_read_u32(&pdev->dev, "socionext,pclk-rate",
+ &i2c->pclkrate);
+
+- pclk = devm_clk_get_enabled(&pdev->dev, "pclk");
++ pclk = devm_clk_get_optional_enabled(&pdev->dev, "pclk");
+ if (IS_ERR(pclk))
+ return dev_err_probe(&pdev->dev, PTR_ERR(pclk),
+ "failed to get and enable clock\n");
+
+- i2c->pclkrate = clk_get_rate(pclk);
++ if (pclk)
++ i2c->pclkrate = clk_get_rate(pclk);
+
+ if (i2c->pclkrate < SYNQUACER_I2C_MIN_CLK_RATE ||
+ i2c->pclkrate > SYNQUACER_I2C_MAX_CLK_RATE)
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index d3ca7d2f81a614..1d68177241a6b3 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -772,14 +772,17 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ goto out;
+ }
+
+- xiic_fill_tx_fifo(i2c);
+-
+- /* current message sent and there is space in the fifo */
+- if (!xiic_tx_space(i2c) && xiic_tx_fifo_space(i2c) >= 2) {
++ if (xiic_tx_space(i2c)) {
++ xiic_fill_tx_fifo(i2c);
++ } else {
++ /* current message fully written */
+ dev_dbg(i2c->adap.dev.parent,
+ "%s end of message sent, nmsgs: %d\n",
+ __func__, i2c->nmsgs);
+- if (i2c->nmsgs > 1) {
++ /* Don't move on to the next message until the TX FIFO empties,
++ * to ensure that a NAK is not missed.
++ */
++ if (i2c->nmsgs > 1 && (pend & XIIC_INTR_TX_EMPTY_MASK)) {
+ i2c->nmsgs--;
+ i2c->tx_msg++;
+ xfer_more = 1;
+@@ -790,11 +793,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ "%s Got TX IRQ but no more to do...\n",
+ __func__);
+ }
+- } else if (!xiic_tx_space(i2c) && (i2c->nmsgs == 1))
+- /* current frame is sent and is last,
+- * make sure to disable tx half
+- */
+- xiic_irq_dis(i2c, XIIC_INTR_TX_HALF_MASK);
++ }
+ }
+
+ if (pend & XIIC_INTR_BNB_MASK) {
+@@ -1338,8 +1337,8 @@ static int xiic_i2c_probe(struct platform_device *pdev)
+ return 0;
+
+ err_pm_disable:
+- pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_set_suspended(&pdev->dev);
+
+ return ret;
+ }
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index b63f75e442964b..e39e8d792d0341 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -915,6 +915,27 @@ int i2c_dev_irq_from_resources(const struct resource *resources,
+ return 0;
+ }
+
++/*
++ * Serialize device instantiation in case it can be instantiated explicitly
++ * and by auto-detection
++ */
++static int i2c_lock_addr(struct i2c_adapter *adap, unsigned short addr,
++ unsigned short flags)
++{
++ if (!(flags & I2C_CLIENT_TEN) &&
++ test_and_set_bit(addr, adap->addrs_in_instantiation))
++ return -EBUSY;
++
++ return 0;
++}
++
++static void i2c_unlock_addr(struct i2c_adapter *adap, unsigned short addr,
++ unsigned short flags)
++{
++ if (!(flags & I2C_CLIENT_TEN))
++ clear_bit(addr, adap->addrs_in_instantiation);
++}
++
+ /**
+ * i2c_new_client_device - instantiate an i2c device
+ * @adap: the adapter managing the device
+@@ -962,6 +983,10 @@ i2c_new_client_device(struct i2c_adapter *adap, struct i2c_board_info const *inf
+ goto out_err_silent;
+ }
+
++ status = i2c_lock_addr(adap, client->addr, client->flags);
++ if (status)
++ goto out_err_silent;
++
+ /* Check for address business */
+ status = i2c_check_addr_busy(adap, i2c_encode_flags_to_addr(client));
+ if (status)
+@@ -993,6 +1018,8 @@ i2c_new_client_device(struct i2c_adapter *adap, struct i2c_board_info const *inf
+ dev_dbg(&adap->dev, "client [%s] registered with bus id %s\n",
+ client->name, dev_name(&client->dev));
+
++ i2c_unlock_addr(adap, client->addr, client->flags);
++
+ return client;
+
+ out_remove_swnode:
+@@ -1004,6 +1031,7 @@ i2c_new_client_device(struct i2c_adapter *adap, struct i2c_board_info const *inf
+ dev_err(&adap->dev,
+ "Failed to register i2c client %s at 0x%02x (%d)\n",
+ client->name, client->addr, status);
++ i2c_unlock_addr(adap, client->addr, client->flags);
+ out_err_silent:
+ if (need_put)
+ put_device(&client->dev);
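
i2c_lock_addr() above serializes explicit instantiation against
auto-detection with nothing heavier than a per-adapter bitmap and an
atomic test-and-set keyed by the 7-bit address. A userspace model of the
same idea using C11 atomics in place of the kernel's bitops; the
constant -16 stands in for -EBUSY:

    #include <stdatomic.h>
    #include <stdio.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* One bit per 7-bit address, like adap->addrs_in_instantiation. */
    static atomic_ulong in_instantiation[128 / BITS_PER_LONG];

    static int lock_addr(unsigned short addr)
    {
            unsigned long bit = 1UL << (addr % BITS_PER_LONG);
            unsigned long old = atomic_fetch_or(
                    &in_instantiation[addr / BITS_PER_LONG], bit);

            return (old & bit) ? -16 /* -EBUSY */ : 0;
    }

    static void unlock_addr(unsigned short addr)
    {
            unsigned long bit = 1UL << (addr % BITS_PER_LONG);

            atomic_fetch_and(&in_instantiation[addr / BITS_PER_LONG], ~bit);
    }

    int main(void)
    {
            printf("first:  %d\n", lock_addr(0x50)); /* 0, we own 0x50 now */
            printf("second: %d\n", lock_addr(0x50)); /* -16, being claimed */
            unlock_addr(0x50);
            printf("third:  %d\n", lock_addr(0x50)); /* 0 again */
            return 0;
    }
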
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 0a68fd1b81d4e9..e084ba648b4aef 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -1775,6 +1775,7 @@ static void svc_i3c_master_remove(struct platform_device *pdev)
+ {
+ struct svc_i3c_master *master = platform_get_drvdata(pdev);
+
++ cancel_work_sync(&master->hj_work);
+ i3c_master_unregister(&master->base);
+
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 88470602b789e1..67aebfe0fed665 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -1530,6 +1530,10 @@ static const struct idle_cpu idle_cpu_dnv __initconst = {
+ .use_acpi = true,
+ };
+
++static const struct idle_cpu idle_cpu_tmt __initconst = {
++ .disable_promotion_to_c1e = true,
++};
++
+ static const struct idle_cpu idle_cpu_snr __initconst = {
+ .state_table = snr_cstates,
+ .disable_promotion_to_c1e = true,
+@@ -1594,6 +1598,8 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
+ X86_MATCH_VFM(INTEL_ATOM_GOLDMONT, &idle_cpu_bxt),
+ X86_MATCH_VFM(INTEL_ATOM_GOLDMONT_PLUS, &idle_cpu_bxt),
+ X86_MATCH_VFM(INTEL_ATOM_GOLDMONT_D, &idle_cpu_dnv),
++ X86_MATCH_VFM(INTEL_ATOM_TREMONT, &idle_cpu_tmt),
++ X86_MATCH_VFM(INTEL_ATOM_TREMONT_L, &idle_cpu_tmt),
+ X86_MATCH_VFM(INTEL_ATOM_TREMONT_D, &idle_cpu_snr),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &idle_cpu_grr),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &idle_cpu_srf),
+@@ -2142,7 +2148,7 @@ static void __init intel_idle_cpuidle_driver_init(struct cpuidle_driver *drv)
+
+ drv->state_count = 1;
+
+- if (icpu)
++ if (icpu && icpu->state_table)
+ intel_idle_init_cstates_icpu(drv);
+ else
+ intel_idle_init_cstates_acpi(drv);
+@@ -2276,7 +2282,11 @@ static int __init intel_idle_init(void)
+
+ icpu = (const struct idle_cpu *)id->driver_data;
+ if (icpu) {
+- cpuidle_state_table = icpu->state_table;
++ if (icpu->state_table)
++ cpuidle_state_table = icpu->state_table;
++ else if (!intel_idle_acpi_cst_extract())
++ return -ENODEV;
++
+ auto_demotion_disable_flags = icpu->auto_demotion_disable_flags;
+ if (icpu->disable_promotion_to_c1e)
+ c1e_promotion = C1E_PROMOTION_DISABLE;
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index ccbebe5b66cde2..e78de8a971c7c4 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -692,22 +692,8 @@ static int ak8975_start_read_axis(struct ak8975_data *data,
+ if (ret < 0)
+ return ret;
+
+- /* This will be executed only for non-interrupt based waiting case */
+- if (ret & data->def->ctrl_masks[ST1_DRDY]) {
+- ret = i2c_smbus_read_byte_data(client,
+- data->def->ctrl_regs[ST2]);
+- if (ret < 0) {
+- dev_err(&client->dev, "Error in reading ST2\n");
+- return ret;
+- }
+- if (ret & (data->def->ctrl_masks[ST2_DERR] |
+- data->def->ctrl_masks[ST2_HOFL])) {
+- dev_err(&client->dev, "ST2 status error 0x%x\n", ret);
+- return -EINVAL;
+- }
+- }
+-
+- return 0;
++ /* Return with zero if the data is ready. */
++ return !data->def->ctrl_regs[ST1_DRDY];
+ }
+
+ /* Retrieve raw flux value for one of the x, y, or z axis. */
+@@ -734,6 +720,20 @@ static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val)
+ if (ret < 0)
+ goto exit;
+
++ /* Read out ST2 to release the lock on measurement data. */
++ ret = i2c_smbus_read_byte_data(client, data->def->ctrl_regs[ST2]);
++ if (ret < 0) {
++ dev_err(&client->dev, "Error in reading ST2\n");
++ goto exit;
++ }
++
++ if (ret & (data->def->ctrl_masks[ST2_DERR] |
++ data->def->ctrl_masks[ST2_HOFL])) {
++ dev_err(&client->dev, "ST2 status error 0x%x\n", ret);
++ ret = -EINVAL;
++ goto exit;
++ }
++
+ mutex_unlock(&data->lock);
+
+ pm_runtime_mark_last_busy(&data->client->dev);
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 49081b72961802..b21282a613ffeb 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -882,7 +882,7 @@ const struct bmp280_chip_info bme280_chip_info = {
+ .id_reg = BMP280_REG_ID,
+ .chip_id = bme280_chip_ids,
+ .num_chip_id = ARRAY_SIZE(bme280_chip_ids),
+- .regmap_config = &bmp280_regmap_config,
++ .regmap_config = &bme280_regmap_config,
+ .start_up_time = 2000,
+ .channels = bmp280_channels,
+ .num_channels = 3,
+@@ -1272,10 +1272,11 @@ static int bmp380_chip_config(struct bmp280_data *data)
+ }
+ /*
+ * Waits for measurement before checking configuration error
+- * flag. Selected longest measure time indicated in
+- * section 3.9.1 in the datasheet.
++ * flag. Selected the longest measurement time, calculated from the
++ * formula in datasheet section 3.9.2 with an offset of ~+15%,
++ * as also seen in table 3.9.1.
+ */
+- msleep(80);
++ msleep(150);
+
+ /* Check config error flag */
+ ret = regmap_read(data->regmap, BMP380_REG_ERROR, &tmp);
+diff --git a/drivers/iio/pressure/bmp280-regmap.c b/drivers/iio/pressure/bmp280-regmap.c
+index fa52839474b18b..d27d68edd90656 100644
+--- a/drivers/iio/pressure/bmp280-regmap.c
++++ b/drivers/iio/pressure/bmp280-regmap.c
+@@ -41,7 +41,7 @@ const struct regmap_config bmp180_regmap_config = {
+ };
+ EXPORT_SYMBOL_NS(bmp180_regmap_config, IIO_BMP280);
+
+-static bool bmp280_is_writeable_reg(struct device *dev, unsigned int reg)
++static bool bme280_is_writeable_reg(struct device *dev, unsigned int reg)
+ {
+ switch (reg) {
+ case BMP280_REG_CONFIG:
+@@ -54,7 +54,35 @@ static bool bmp280_is_writeable_reg(struct device *dev, unsigned int reg)
+ }
+ }
+
++static bool bmp280_is_writeable_reg(struct device *dev, unsigned int reg)
++{
++ switch (reg) {
++ case BMP280_REG_CONFIG:
++ case BMP280_REG_CTRL_MEAS:
++ case BMP280_REG_RESET:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ static bool bmp280_is_volatile_reg(struct device *dev, unsigned int reg)
++{
++ switch (reg) {
++ case BMP280_REG_TEMP_XLSB:
++ case BMP280_REG_TEMP_LSB:
++ case BMP280_REG_TEMP_MSB:
++ case BMP280_REG_PRESS_XLSB:
++ case BMP280_REG_PRESS_LSB:
++ case BMP280_REG_PRESS_MSB:
++ case BMP280_REG_STATUS:
++ return true;
++ default:
++ return false;
++ }
++}
++
++static bool bme280_is_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ switch (reg) {
+ case BME280_REG_HUMIDITY_LSB:
+@@ -71,7 +99,6 @@ static bool bmp280_is_volatile_reg(struct device *dev, unsigned int reg)
+ return false;
+ }
+ }
+-
+ static bool bmp380_is_writeable_reg(struct device *dev, unsigned int reg)
+ {
+ switch (reg) {
+@@ -167,7 +194,7 @@ const struct regmap_config bmp280_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
+- .max_register = BME280_REG_HUMIDITY_LSB,
++ .max_register = BMP280_REG_TEMP_XLSB,
+ .cache_type = REGCACHE_RBTREE,
+
+ .writeable_reg = bmp280_is_writeable_reg,
+@@ -175,6 +202,18 @@ const struct regmap_config bmp280_regmap_config = {
+ };
+ EXPORT_SYMBOL_NS(bmp280_regmap_config, IIO_BMP280);
+
++const struct regmap_config bme280_regmap_config = {
++ .reg_bits = 8,
++ .val_bits = 8,
++
++ .max_register = BME280_REG_HUMIDITY_LSB,
++ .cache_type = REGCACHE_RBTREE,
++
++ .writeable_reg = bme280_is_writeable_reg,
++ .volatile_reg = bme280_is_volatile_reg,
++};
++EXPORT_SYMBOL_NS(bme280_regmap_config, IIO_BMP280);
++
+ const struct regmap_config bmp380_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+diff --git a/drivers/iio/pressure/bmp280.h b/drivers/iio/pressure/bmp280.h
+index 7c30e4d523beb6..4cf1c28c18e392 100644
+--- a/drivers/iio/pressure/bmp280.h
++++ b/drivers/iio/pressure/bmp280.h
+@@ -464,6 +464,7 @@ extern const struct bmp280_chip_info bmp580_chip_info;
+ /* Regmap configurations */
+ extern const struct regmap_config bmp180_regmap_config;
+ extern const struct regmap_config bmp280_regmap_config;
++extern const struct regmap_config bme280_regmap_config;
+ extern const struct regmap_config bmp380_regmap_config;
+ extern const struct regmap_config bmp580_regmap_config;
+
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index d13abc954d2aa3..67c2d43135a8af 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -383,7 +383,7 @@ static int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem
+
+ create_req->length = umem->length;
+ create_req->offset_in_page = ib_umem_dma_offset(umem, page_sz);
+- create_req->gdma_page_type = order_base_2(page_sz) - PAGE_SHIFT;
++ create_req->gdma_page_type = order_base_2(page_sz) - MANA_PAGE_SHIFT;
+ create_req->page_count = num_pages_total;
+
+ ibdev_dbg(&dev->ib_dev, "size_dma_region %lu num_pages_total %lu\n",
+@@ -511,13 +511,13 @@ int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
+ PAGE_SHIFT;
+ prot = pgprot_writecombine(vma->vm_page_prot);
+
+- ret = rdma_user_mmap_io(ibcontext, vma, pfn, gc->db_page_size, prot,
++ ret = rdma_user_mmap_io(ibcontext, vma, pfn, PAGE_SIZE, prot,
+ NULL);
+ if (ret)
+ ibdev_dbg(ibdev, "can't rdma_user_mmap_io ret %d\n", ret);
+ else
+- ibdev_dbg(ibdev, "mapped I/O pfn 0x%llx page_size %u, ret %d\n",
+- pfn, gc->db_page_size, ret);
++ ibdev_dbg(ibdev, "mapped I/O pfn 0x%llx page_size %lu, ret %d\n",
++ pfn, PAGE_SIZE, ret);
+
+ return ret;
+ }
+diff --git a/drivers/input/keyboard/adp5589-keys.c b/drivers/input/keyboard/adp5589-keys.c
+index 8996e00cd63a82..922d3ab998f3a5 100644
+--- a/drivers/input/keyboard/adp5589-keys.c
++++ b/drivers/input/keyboard/adp5589-keys.c
+@@ -391,10 +391,17 @@ static int adp5589_gpio_get_value(struct gpio_chip *chip, unsigned off)
+ struct adp5589_kpad *kpad = gpiochip_get_data(chip);
+ unsigned int bank = kpad->var->bank(kpad->gpiomap[off]);
+ unsigned int bit = kpad->var->bit(kpad->gpiomap[off]);
++ int val;
+
+- return !!(adp5589_read(kpad->client,
+- kpad->var->reg(ADP5589_GPI_STATUS_A) + bank) &
+- bit);
++ mutex_lock(&kpad->gpio_lock);
++ if (kpad->dir[bank] & bit)
++ val = kpad->dat_out[bank];
++ else
++ val = adp5589_read(kpad->client,
++ kpad->var->reg(ADP5589_GPI_STATUS_A) + bank);
++ mutex_unlock(&kpad->gpio_lock);
++
++ return !!(val & bit);
+ }
+
+ static void adp5589_gpio_set_value(struct gpio_chip *chip,
+@@ -936,10 +943,9 @@ static int adp5589_keypad_add(struct adp5589_kpad *kpad, unsigned int revid)
+
+ static void adp5589_clear_config(void *data)
+ {
+- struct i2c_client *client = data;
+- struct adp5589_kpad *kpad = i2c_get_clientdata(client);
++ struct adp5589_kpad *kpad = data;
+
+- adp5589_write(client, kpad->var->reg(ADP5589_GENERAL_CFG), 0);
++ adp5589_write(kpad->client, kpad->var->reg(ADP5589_GENERAL_CFG), 0);
+ }
+
+ static int adp5589_probe(struct i2c_client *client)
+@@ -983,7 +989,7 @@ static int adp5589_probe(struct i2c_client *client)
+ }
+
+ error = devm_add_action_or_reset(&client->dev, adp5589_clear_config,
+- client);
++ kpad);
+ if (error)
+ return error;
+
+@@ -1010,8 +1016,6 @@ static int adp5589_probe(struct i2c_client *client)
+ if (error)
+ return error;
+
+- i2c_set_clientdata(client, kpad);
+-
+ dev_info(&client->dev, "Rev.%d keypad, irq %d\n", revid, client->irq);
+ return 0;
+ }
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f490385c13605c..473eb772ea2106 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -1012,7 +1012,8 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
+ used_bits[2] |=
+ cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
+ STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
+- STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
++ STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2S |
++ STRTAB_STE_2_S2R);
+ used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
+ }
+
+@@ -1184,8 +1185,8 @@ static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
+ {
+ size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
+
+- l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
+- &l1_desc->l2ptr_dma, GFP_KERNEL);
++ l1_desc->l2ptr = dma_alloc_coherent(smmu->dev, size,
++ &l1_desc->l2ptr_dma, GFP_KERNEL);
+ if (!l1_desc->l2ptr) {
+ dev_warn(smmu->dev,
+ "failed to allocate context descriptor table\n");
+@@ -1399,17 +1400,17 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master)
+ cd_table->num_l1_ents = DIV_ROUND_UP(max_contexts,
+ CTXDESC_L2_ENTRIES);
+
+- cd_table->l1_desc = devm_kcalloc(smmu->dev, cd_table->num_l1_ents,
+- sizeof(*cd_table->l1_desc),
+- GFP_KERNEL);
++ cd_table->l1_desc = kcalloc(cd_table->num_l1_ents,
++ sizeof(*cd_table->l1_desc),
++ GFP_KERNEL);
+ if (!cd_table->l1_desc)
+ return -ENOMEM;
+
+ l1size = cd_table->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+ }
+
+- cd_table->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cd_table->cdtab_dma,
+- GFP_KERNEL);
++ cd_table->cdtab = dma_alloc_coherent(smmu->dev, l1size,
++ &cd_table->cdtab_dma, GFP_KERNEL);
+ if (!cd_table->cdtab) {
+ dev_warn(smmu->dev, "failed to allocate context descriptor\n");
+ ret = -ENOMEM;
+@@ -1420,7 +1421,7 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master)
+
+ err_free_l1:
+ if (cd_table->l1_desc) {
+- devm_kfree(smmu->dev, cd_table->l1_desc);
++ kfree(cd_table->l1_desc);
+ cd_table->l1_desc = NULL;
+ }
+ return ret;
+@@ -1440,21 +1441,18 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_master *master)
+ if (!cd_table->l1_desc[i].l2ptr)
+ continue;
+
+- dmam_free_coherent(smmu->dev, size,
+- cd_table->l1_desc[i].l2ptr,
+- cd_table->l1_desc[i].l2ptr_dma);
++ dma_free_coherent(smmu->dev, size,
++ cd_table->l1_desc[i].l2ptr,
++ cd_table->l1_desc[i].l2ptr_dma);
+ }
+- devm_kfree(smmu->dev, cd_table->l1_desc);
+- cd_table->l1_desc = NULL;
++ kfree(cd_table->l1_desc);
+
+ l1size = cd_table->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+ } else {
+ l1size = cd_table->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
+ }
+
+- dmam_free_coherent(smmu->dev, l1size, cd_table->cdtab, cd_table->cdtab_dma);
+- cd_table->cdtab_dma = 0;
+- cd_table->cdtab = NULL;
++ dma_free_coherent(smmu->dev, l1size, cd_table->cdtab, cd_table->cdtab_dma);
+ }
+
+ /* Stream table manipulation functions */
+@@ -1646,6 +1644,7 @@ void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target,
+ STRTAB_STE_2_S2ENDI |
+ #endif
+ STRTAB_STE_2_S2PTW |
++ (master->stall_enabled ? STRTAB_STE_2_S2S : 0) |
+ STRTAB_STE_2_S2R);
+
+ target->data[3] = cpu_to_le64(pgtbl_cfg->arm_lpae_s2_cfg.vttbr &
+@@ -1739,10 +1738,6 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
+ return -EOPNOTSUPP;
+ }
+
+- /* Stage-2 is always pinned at the moment */
+- if (evt[1] & EVTQ_1_S2)
+- return -EFAULT;
+-
+ if (!(evt[1] & EVTQ_1_STALL))
+ return -EOPNOTSUPP;
+
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+index 14bca41a981b43..0dc7ad43c64c03 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+@@ -267,6 +267,7 @@ struct arm_smmu_ste {
+ #define STRTAB_STE_2_S2AA64 (1UL << 51)
+ #define STRTAB_STE_2_S2ENDI (1UL << 52)
+ #define STRTAB_STE_2_S2PTW (1UL << 54)
++#define STRTAB_STE_2_S2S (1UL << 57)
+ #define STRTAB_STE_2_S2R (1UL << 58)
+
+ #define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4)
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 1c8d3141cb55c0..01e157d89a1632 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1204,9 +1204,7 @@ static void free_iommu(struct intel_iommu *iommu)
+ */
+ static inline void reclaim_free_desc(struct q_inval *qi)
+ {
+- while (qi->desc_status[qi->free_tail] == QI_DONE ||
+- qi->desc_status[qi->free_tail] == QI_ABORT) {
+- qi->desc_status[qi->free_tail] = QI_FREE;
++ while (qi->desc_status[qi->free_tail] == QI_FREE && qi->free_tail != qi->free_head) {
+ qi->free_tail = (qi->free_tail + 1) % QI_LENGTH;
+ qi->free_cnt++;
+ }
+@@ -1463,8 +1461,16 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
+ raw_spin_lock(&qi->q_lock);
+ }
+
+- for (i = 0; i < count; i++)
+- qi->desc_status[(index + i) % QI_LENGTH] = QI_DONE;
++ /*
++ * The reclaim code can free descriptors from multiple submissions
++ * starting from the tail of the queue. When count == 0, the
++ * status of the standalone wait descriptor at the tail of the queue
++ * must be set to QI_FREE to allow the reclaim code to proceed.
++ * It is also possible that descriptors from one of the previous
++ * submissions have to be reclaimed by a subsequent submission.
++ */
++ for (i = 0; i <= count; i++)
++ qi->desc_status[(index + i) % QI_LENGTH] = QI_FREE;
+
+ reclaim_free_desc(qi);
+ raw_spin_unlock_irqrestore(&qi->q_lock, flags);
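
With the fix above, completion marks every slot a submission occupied as
QI_FREE, that is the count descriptors plus the trailing wait
descriptor, and reclaim advances the tail only across FREE slots and
never past the head. A compact model of that ring discipline; the queue
length and names are illustrative:

    #include <stdio.h>

    #define QI_LENGTH 16
    enum { QI_FREE, QI_IN_USE };

    static int status[QI_LENGTH];
    static int free_tail, free_cnt = QI_LENGTH;

    static void complete(int index, int count)
    {
            /* count descriptors plus the wait descriptor: i <= count */
            for (int i = 0; i <= count; i++)
                    status[(index + i) % QI_LENGTH] = QI_FREE;
    }

    static void reclaim(int free_head)
    {
            while (status[free_tail] == QI_FREE && free_tail != free_head) {
                    free_tail = (free_tail + 1) % QI_LENGTH;
                    free_cnt++;
            }
    }

    int main(void)
    {
            /* submit 3 descriptors + 1 wait at index 0, head moves to 4 */
            for (int i = 0; i < 4; i++) {
                    status[i] = QI_IN_USE;
                    free_cnt--;
            }
            complete(0, 3);
            reclaim(4);
            printf("free_tail=%d free_cnt=%d\n", free_tail, free_cnt); /* 4 16 */
            return 0;
    }
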
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 4aa070cf56e703..e3e513cabc86ac 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1447,10 +1447,10 @@ static int iommu_init_domains(struct intel_iommu *iommu)
+ * entry for first-level or pass-through translation modes should
+ * be programmed with a domain id different from those used for
+ * second-level or nested translation. We reserve a domain id for
+- * this purpose.
++ * this purpose. This domain id is also used for identity domain
++ * in legacy mode.
+ */
+- if (sm_supported(iommu))
+- set_bit(FLPT_DEFAULT_DID, iommu->domain_ids);
++ set_bit(FLPT_DEFAULT_DID, iommu->domain_ids);
+
+ return 0;
+ }
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index b51fc268dc845a..2e5fa0a232999f 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -264,9 +264,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
+ else
+ iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+
+- /* Device IOTLB doesn't need to be flushed in caching mode. */
+- if (!cap_caching_mode(iommu->cap))
+- devtlb_invalidation_with_pasid(iommu, dev, pasid);
++ devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ }
+
+ /*
+@@ -493,9 +491,7 @@ int intel_pasid_setup_dirty_tracking(struct intel_iommu *iommu,
+
+ iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+
+- /* Device IOTLB doesn't need to be flushed in caching mode. */
+- if (!cap_caching_mode(iommu->cap))
+- devtlb_invalidation_with_pasid(iommu, dev, pasid);
++ devtlb_invalidation_with_pasid(iommu, dev, pasid);
+
+ return 0;
+ }
+@@ -572,9 +568,7 @@ void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
+ pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+ qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
+
+- /* Device IOTLB doesn't need to be flushed in caching mode. */
+- if (!cap_caching_mode(iommu->cap))
+- devtlb_invalidation_with_pasid(iommu, dev, pasid);
++ devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ }
+
+ /**
+diff --git a/drivers/leds/leds-pca9532.c b/drivers/leds/leds-pca9532.c
+index 9f3fac66a11c7f..bcc3063bd4413b 100644
+--- a/drivers/leds/leds-pca9532.c
++++ b/drivers/leds/leds-pca9532.c
+@@ -215,8 +215,7 @@ static int pca9532_update_hw_blink(struct pca9532_led *led,
+ if (other->state == PCA9532_PWM1) {
+ if (other->ldev.blink_delay_on != delay_on ||
+ other->ldev.blink_delay_off != delay_off) {
+- dev_err(&led->client->dev,
+- "HW can handle only one blink configuration at a time\n");
++ /* HW can handle only one blink configuration at a time */
+ return -EINVAL;
+ }
+ }
+@@ -224,7 +223,7 @@ static int pca9532_update_hw_blink(struct pca9532_led *led,
+
+ psc = ((delay_on + delay_off) * PCA9532_PWM_PERIOD_DIV - 1) / 1000;
+ if (psc > U8_MAX) {
+- dev_err(&led->client->dev, "Blink period too long to be handled by hardware\n");
++ /* Blink period too long to be handled by hardware */
+ return -EINVAL;
+ }
+
+diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
+index 4eed972959279a..cbd9206cd7de34 100644
+--- a/drivers/mailbox/Kconfig
++++ b/drivers/mailbox/Kconfig
+@@ -25,6 +25,7 @@ config ARM_MHU_V2
+
+ config ARM_MHU_V3
+ tristate "ARM MHUv3 Mailbox"
++ depends on ARM64 || COMPILE_TEST
+ depends on HAS_IOMEM || COMPILE_TEST
+ depends on OF
+ help
+diff --git a/drivers/mailbox/bcm2835-mailbox.c b/drivers/mailbox/bcm2835-mailbox.c
+index fbfd0202047c37..ea12fb8d24015c 100644
+--- a/drivers/mailbox/bcm2835-mailbox.c
++++ b/drivers/mailbox/bcm2835-mailbox.c
+@@ -145,7 +145,8 @@ static int bcm2835_mbox_probe(struct platform_device *pdev)
+ spin_lock_init(&mbox->lock);
+
+ ret = devm_request_irq(dev, irq_of_parse_and_map(dev->of_node, 0),
+- bcm2835_mbox_irq, 0, dev_name(dev), mbox);
++ bcm2835_mbox_irq, IRQF_NO_SUSPEND, dev_name(dev),
++ mbox);
+ if (ret) {
+ dev_err(dev, "Failed to register a mailbox IRQ handler: %d\n",
+ ret);
+diff --git a/drivers/mailbox/rockchip-mailbox.c b/drivers/mailbox/rockchip-mailbox.c
+index 8ffad059e8984e..4d966cb2ed0367 100644
+--- a/drivers/mailbox/rockchip-mailbox.c
++++ b/drivers/mailbox/rockchip-mailbox.c
+@@ -159,7 +159,7 @@ static const struct of_device_id rockchip_mbox_of_match[] = {
+ { .compatible = "rockchip,rk3368-mailbox", .data = &rk3368_drv_data},
+ { },
+ };
+-MODULE_DEVICE_TABLE(of, rockchp_mbox_of_match);
++MODULE_DEVICE_TABLE(of, rockchip_mbox_of_match);
+
+ static int rockchip_mbox_probe(struct platform_device *pdev)
+ {
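
The hunk above fixes a misspelled table name. MODULE_DEVICE_TABLE(of, tbl) must reference the of_device_id array by its exact symbol name; on non-modular builds the macro expands to nothing, which is why the typo only broke =m builds. A minimal sketch, with hypothetical driver and compatible names:

/* Minimal sketch; driver and compatible names are hypothetical. */
#include <linux/mod_devicetable.h>
#include <linux/module.h>

static const struct of_device_id demo_of_match[] = {
	{ .compatible = "vendor,demo-ip" },
	{ /* sentinel */ },
};
/* The second argument must be the exact symbol name of the array
 * above: on modular builds the macro emits an alias that modpost
 * turns into module autoload info, and a typo there is a link
 * error. On built-in builds it expands to nothing, which is how
 * the misspelled name went unnoticed.
 */
MODULE_DEVICE_TABLE(of, demo_of_match);
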
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 0217392fcc0d9c..8b0de1cb08808b 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -2601,13 +2601,6 @@ int vb2_core_queue_init(struct vb2_queue *q)
+ if (WARN_ON(q->supports_requests && q->min_queued_buffers))
+ return -EINVAL;
+
+- /*
+- * The minimum requirement is 2: one buffer is used
+- * by the hardware while the other is being processed by userspace.
+- */
+- if (q->min_reqbufs_allocation < 2)
+- q->min_reqbufs_allocation = 2;
+-
+ /*
+ * If the driver needs 'min_queued_buffers' in the queue before
+ * calling start_streaming() then the minimum requirement is
+diff --git a/drivers/media/i2c/ar0521.c b/drivers/media/i2c/ar0521.c
+index 09331cf95c62d0..d557f3b3de3d33 100644
+--- a/drivers/media/i2c/ar0521.c
++++ b/drivers/media/i2c/ar0521.c
+@@ -844,7 +844,8 @@ static int ar0521_power_off(struct device *dev)
+ clk_disable_unprepare(sensor->extclk);
+
+ if (sensor->reset_gpio)
+- gpiod_set_value(sensor->reset_gpio, 1); /* assert RESET signal */
++ /* assert RESET signal */
++ gpiod_set_value_cansleep(sensor->reset_gpio, 1);
+
+ for (i = ARRAY_SIZE(ar0521_supply_names) - 1; i >= 0; i--) {
+ if (sensor->supplies[i])
+@@ -878,7 +879,7 @@ static int ar0521_power_on(struct device *dev)
+
+ if (sensor->reset_gpio)
+ /* deassert RESET signal */
+- gpiod_set_value(sensor->reset_gpio, 0);
++ gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+ usleep_range(4500, 5000); /* min 45000 clocks */
+
+ for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) {
+diff --git a/drivers/media/i2c/imx335.c b/drivers/media/i2c/imx335.c
+index 990d74214cc2e4..54a1de53d49730 100644
+--- a/drivers/media/i2c/imx335.c
++++ b/drivers/media/i2c/imx335.c
+@@ -997,7 +997,7 @@ static int imx335_parse_hw_config(struct imx335 *imx335)
+
+ /* Request optional reset pin */
+ imx335->reset_gpio = devm_gpiod_get_optional(imx335->dev, "reset",
+- GPIOD_OUT_LOW);
++ GPIOD_OUT_HIGH);
+ if (IS_ERR(imx335->reset_gpio)) {
+ dev_err(imx335->dev, "failed to get reset gpio %ld\n",
+ PTR_ERR(imx335->reset_gpio));
+@@ -1110,8 +1110,7 @@ static int imx335_power_on(struct device *dev)
+
+ usleep_range(500, 550); /* Tlow */
+
+- /* Set XCLR */
+- gpiod_set_value_cansleep(imx335->reset_gpio, 1);
++ gpiod_set_value_cansleep(imx335->reset_gpio, 0);
+
+ ret = clk_prepare_enable(imx335->inclk);
+ if (ret) {
+@@ -1124,7 +1123,7 @@ static int imx335_power_on(struct device *dev)
+ return 0;
+
+ error_reset:
+- gpiod_set_value_cansleep(imx335->reset_gpio, 0);
++ gpiod_set_value_cansleep(imx335->reset_gpio, 1);
+ regulator_bulk_disable(ARRAY_SIZE(imx335_supply_name), imx335->supplies);
+
+ return ret;
+@@ -1141,7 +1140,7 @@ static int imx335_power_off(struct device *dev)
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct imx335 *imx335 = to_imx335(sd);
+
+- gpiod_set_value_cansleep(imx335->reset_gpio, 0);
++ gpiod_set_value_cansleep(imx335->reset_gpio, 1);
+ clk_disable_unprepare(imx335->inclk);
+ regulator_bulk_disable(ARRAY_SIZE(imx335_supply_name), imx335->supplies);
+
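
The imx335 hunks above switch the XCLR handling to gpiod's logical-level convention: with the reset line flagged GPIO_ACTIVE_LOW in the device tree, logical 1 asserts reset (drives the pin low) and logical 0 releases it, so requesting the descriptor with GPIOD_OUT_HIGH holds the sensor in reset from probe onwards. A minimal sketch of that convention; names are hypothetical:

/* Sketch of the logical-polarity convention, assuming the reset line
 * is described as GPIO_ACTIVE_LOW in the device tree; names are
 * hypothetical.
 */
#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int demo_probe_reset(struct device *dev)
{
	struct gpio_desc *reset;

	/* Request asserted: logical 1 == physically low == in reset */
	reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
	if (IS_ERR(reset))
		return PTR_ERR(reset);

	/* ... enable regulators and the external clock ... */

	gpiod_set_value_cansleep(reset, 0);	/* deassert, leave reset */
	return 0;
}
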
+diff --git a/drivers/media/i2c/ov5675.c b/drivers/media/i2c/ov5675.c
+index 3641911bc73f68..5b5127f8953ff4 100644
+--- a/drivers/media/i2c/ov5675.c
++++ b/drivers/media/i2c/ov5675.c
+@@ -972,12 +972,10 @@ static int ov5675_set_stream(struct v4l2_subdev *sd, int enable)
+
+ static int ov5675_power_off(struct device *dev)
+ {
+- /* 512 xvclk cycles after the last SCCB transation or MIPI frame end */
+- u32 delay_us = DIV_ROUND_UP(512, OV5675_XVCLK_19_2 / 1000 / 1000);
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct ov5675 *ov5675 = to_ov5675(sd);
+
+- usleep_range(delay_us, delay_us * 2);
++ usleep_range(90, 100);
+
+ clk_disable_unprepare(ov5675->xvclk);
+ gpiod_set_value_cansleep(ov5675->reset_gpio, 1);
+@@ -988,7 +986,6 @@ static int ov5675_power_off(struct device *dev)
+
+ static int ov5675_power_on(struct device *dev)
+ {
+- u32 delay_us = DIV_ROUND_UP(8192, OV5675_XVCLK_19_2 / 1000 / 1000);
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct ov5675 *ov5675 = to_ov5675(sd);
+ int ret;
+@@ -1014,8 +1011,11 @@ static int ov5675_power_on(struct device *dev)
+
+ gpiod_set_value_cansleep(ov5675->reset_gpio, 0);
+
+- /* 8192 xvclk cycles prior to the first SCCB transation */
+- usleep_range(delay_us, delay_us * 2);
++	/* Worst case quiescence gap is 1.365 milliseconds @ 6 MHz XVCLK.
++	 * Add an additional grace period to ensure reset completion
++	 * before initiating our first I2C transaction.
++	 */
++ usleep_range(1500, 1600);
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c
+index cd72feca618ca4..3b8fc31d957c77 100644
+--- a/drivers/media/platform/qcom/camss/camss-video.c
++++ b/drivers/media/platform/qcom/camss/camss-video.c
+@@ -297,12 +297,6 @@ static void video_stop_streaming(struct vb2_queue *q)
+
+ ret = v4l2_subdev_call(subdev, video, s_stream, 0);
+
+- if (entity->use_count > 1) {
+- /* Don't stop if other instances of the pipeline are still running */
+- dev_dbg(video->camss->dev, "Video pipeline still used, don't stop streaming.\n");
+- return;
+- }
+-
+ if (ret) {
+ dev_err(video->camss->dev, "Video pipeline stop failed: %d\n", ret);
+ return;
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 51b1d3550421a4..d64985ca6e884f 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -2283,6 +2283,8 @@ static int camss_probe(struct platform_device *pdev)
+
+ v4l2_async_nf_init(&camss->notifier, &camss->v4l2_dev);
+
++ pm_runtime_enable(dev);
++
+ num_subdevs = camss_of_parse_ports(camss);
+ if (num_subdevs < 0) {
+ ret = num_subdevs;
+@@ -2323,8 +2325,6 @@ static int camss_probe(struct platform_device *pdev)
+ }
+ }
+
+- pm_runtime_enable(dev);
+-
+ return 0;
+
+ err_register_subdevs:
+@@ -2332,6 +2332,7 @@ static int camss_probe(struct platform_device *pdev)
+ err_v4l2_device_unregister:
+ v4l2_device_unregister(&camss->v4l2_dev);
+ v4l2_async_nf_cleanup(&camss->notifier);
++ pm_runtime_disable(dev);
+ err_genpd_cleanup:
+ camss_genpd_cleanup(camss);
+
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 165c947a670354..84e95a46dfc983 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -430,6 +430,7 @@ static void venus_remove(struct platform_device *pdev)
+ struct device *dev = core->dev;
+ int ret;
+
++ cancel_delayed_work_sync(&core->work);
+ ret = pm_runtime_get_sync(dev);
+ WARN_ON(ret < 0);
+
+diff --git a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
+index 097a3a08ef7d1f..dbb26c7b2f8d7a 100644
+--- a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
++++ b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
+@@ -39,6 +39,10 @@ static const struct media_entity_operations sun4i_csi_video_entity_ops = {
+ .link_validate = v4l2_subdev_link_validate,
+ };
+
++static const struct media_entity_operations sun4i_csi_subdev_entity_ops = {
++ .link_validate = v4l2_subdev_link_validate,
++};
++
+ static int sun4i_csi_notify_bound(struct v4l2_async_notifier *notifier,
+ struct v4l2_subdev *subdev,
+ struct v4l2_async_connection *asd)
+@@ -214,6 +218,7 @@ static int sun4i_csi_probe(struct platform_device *pdev)
+ subdev->internal_ops = &sun4i_csi_subdev_internal_ops;
+ subdev->flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+ subdev->entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
++ subdev->entity.ops = &sun4i_csi_subdev_entity_ops;
+ subdev->owner = THIS_MODULE;
+ snprintf(subdev->name, sizeof(subdev->name), "sun4i-csi-0");
+ v4l2_set_subdevdata(subdev, csi);
+diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
+index 57d9ae12fcfe1a..33d67d25171940 100644
+--- a/drivers/memory/tegra/tegra186-emc.c
++++ b/drivers/memory/tegra/tegra186-emc.c
+@@ -35,11 +35,6 @@ struct tegra186_emc {
+ struct icc_provider provider;
+ };
+
+-static inline struct tegra186_emc *to_tegra186_emc(struct icc_provider *provider)
+-{
+- return container_of(provider, struct tegra186_emc, provider);
+-}
+-
+ /*
+ * debugfs interface
+ *
+diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
+index dfdc039d92a6c1..01aacdcda26066 100644
+--- a/drivers/net/can/dev/netlink.c
++++ b/drivers/net/can/dev/netlink.c
+@@ -65,15 +65,6 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+ if (!data)
+ return 0;
+
+- if (data[IFLA_CAN_BITTIMING]) {
+- struct can_bittiming bt;
+-
+- memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+- err = can_validate_bittiming(&bt, extack);
+- if (err)
+- return err;
+- }
+-
+ if (data[IFLA_CAN_CTRLMODE]) {
+ struct can_ctrlmode *cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+ u32 tdc_flags = cm->flags & CAN_CTRLMODE_TDC_MASK;
+@@ -114,6 +105,15 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+ }
+ }
+
++ if (data[IFLA_CAN_BITTIMING]) {
++ struct can_bittiming bt;
++
++ memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
++ err = can_validate_bittiming(&bt, extack);
++ if (err)
++ return err;
++ }
++
+ if (is_can_fd) {
+ if (!data[IFLA_CAN_BITTIMING] || !data[IFLA_CAN_DATA_BITTIMING])
+ return -EOPNOTSUPP;
+@@ -195,48 +195,6 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ /* We need synchronization with dev->stop() */
+ ASSERT_RTNL();
+
+- if (data[IFLA_CAN_BITTIMING]) {
+- struct can_bittiming bt;
+-
+- /* Do not allow changing bittiming while running */
+- if (dev->flags & IFF_UP)
+- return -EBUSY;
+-
+- /* Calculate bittiming parameters based on
+- * bittiming_const if set, otherwise pass bitrate
+- * directly via do_set_bitrate(). Bail out if neither
+- * is given.
+- */
+- if (!priv->bittiming_const && !priv->do_set_bittiming &&
+- !priv->bitrate_const)
+- return -EOPNOTSUPP;
+-
+- memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+- err = can_get_bittiming(dev, &bt,
+- priv->bittiming_const,
+- priv->bitrate_const,
+- priv->bitrate_const_cnt,
+- extack);
+- if (err)
+- return err;
+-
+- if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
+- NL_SET_ERR_MSG_FMT(extack,
+- "arbitration bitrate %u bps surpasses transceiver capabilities of %u bps",
+- bt.bitrate, priv->bitrate_max);
+- return -EINVAL;
+- }
+-
+- memcpy(&priv->bittiming, &bt, sizeof(bt));
+-
+- if (priv->do_set_bittiming) {
+- /* Finally, set the bit-timing registers */
+- err = priv->do_set_bittiming(dev);
+- if (err)
+- return err;
+- }
+- }
+-
+ if (data[IFLA_CAN_CTRLMODE]) {
+ struct can_ctrlmode *cm;
+ u32 ctrlstatic;
+@@ -284,6 +242,48 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ priv->ctrlmode &= cm->flags | ~CAN_CTRLMODE_TDC_MASK;
+ }
+
++ if (data[IFLA_CAN_BITTIMING]) {
++ struct can_bittiming bt;
++
++ /* Do not allow changing bittiming while running */
++ if (dev->flags & IFF_UP)
++ return -EBUSY;
++
++ /* Calculate bittiming parameters based on
++ * bittiming_const if set, otherwise pass bitrate
++	 * directly via do_set_bittiming(). Bail out if neither
++ * is given.
++ */
++ if (!priv->bittiming_const && !priv->do_set_bittiming &&
++ !priv->bitrate_const)
++ return -EOPNOTSUPP;
++
++ memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
++ err = can_get_bittiming(dev, &bt,
++ priv->bittiming_const,
++ priv->bitrate_const,
++ priv->bitrate_const_cnt,
++ extack);
++ if (err)
++ return err;
++
++ if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
++ NL_SET_ERR_MSG_FMT(extack,
++ "arbitration bitrate %u bps surpasses transceiver capabilities of %u bps",
++ bt.bitrate, priv->bitrate_max);
++ return -EINVAL;
++ }
++
++ memcpy(&priv->bittiming, &bt, sizeof(bt));
++
++ if (priv->do_set_bittiming) {
++ /* Finally, set the bit-timing registers */
++ err = priv->do_set_bittiming(dev);
++ if (err)
++ return err;
++ }
++ }
++
+ if (data[IFLA_CAN_RESTART_MS]) {
+ /* Do not allow changing restart delay while running */
+ if (dev->flags & IFF_UP)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+index d0aecd1d735738..876b95306404e5 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+@@ -266,7 +266,7 @@ static void aq_ethtool_get_strings(struct net_device *ndev,
+ const int rx_stat_cnt = ARRAY_SIZE(aq_ethtool_queue_rx_stat_names);
+ const int tx_stat_cnt = ARRAY_SIZE(aq_ethtool_queue_tx_stat_names);
+ char tc_string[8];
+- int tc;
++ unsigned int tc;
+
+ memset(tc_string, 0, sizeof(tc_string));
+ memcpy(p, aq_ethtool_stat_names,
+@@ -275,7 +275,7 @@ static void aq_ethtool_get_strings(struct net_device *ndev,
+
+ for (tc = 0; tc < cfg->tcs; tc++) {
+ if (cfg->is_qos)
+- snprintf(tc_string, 8, "TC%d ", tc);
++ snprintf(tc_string, 8, "TC%u ", tc);
+
+ for (i = 0; i < cfg->vecs; i++) {
+ for (si = 0; si < rx_stat_cnt; si++) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 4cf9bf8b01b09d..ac06f4a4cf97ce 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -4157,7 +4157,7 @@ static void bnxt_get_pkgver(struct net_device *dev)
+
+ if (!bnxt_get_pkginfo(dev, buf, sizeof(buf))) {
+ len = strlen(bp->fw_ver_str);
+- snprintf(bp->fw_ver_str + len, FW_VER_STR_LEN - len - 1,
++ snprintf(bp->fw_ver_str + len, FW_VER_STR_LEN - len,
+ "/pkg %s", buf);
+ }
+ }
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index a19cb2a786fd28..1cca0425d49397 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -691,10 +691,19 @@ struct fec_enet_private {
+ /* XDP BPF Program */
+ struct bpf_prog *xdp_prog;
+
++ struct {
++ int pps_enable;
++ u64 ns_sys, ns_phc;
++ u32 at_corr;
++ u8 at_inc_corr;
++ } ptp_saved_state;
++
+ u64 ethtool_stats[];
+ };
+
+ void fec_ptp_init(struct platform_device *pdev, int irq_idx);
++void fec_ptp_restore_state(struct fec_enet_private *fep);
++void fec_ptp_save_state(struct fec_enet_private *fep);
+ void fec_ptp_stop(struct platform_device *pdev);
+ void fec_ptp_start_cyclecounter(struct net_device *ndev);
+ int fec_ptp_set(struct net_device *ndev, struct kernel_hwtstamp_config *config,
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index a923cb95cdc62c..570f8a14d975b5 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1077,6 +1077,8 @@ fec_restart(struct net_device *ndev)
+ u32 rcntl = OPT_FRAME_SIZE | 0x04;
+ u32 ecntl = FEC_ECR_ETHEREN;
+
++ fec_ptp_save_state(fep);
++
+ /* Whack a reset. We should wait for this.
+ * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+ * instead of reset MAC itself.
+@@ -1244,8 +1246,10 @@ fec_restart(struct net_device *ndev)
+ writel(ecntl, fep->hwp + FEC_ECNTRL);
+ fec_enet_active_rxring(ndev);
+
+- if (fep->bufdesc_ex)
++ if (fep->bufdesc_ex) {
+ fec_ptp_start_cyclecounter(ndev);
++ fec_ptp_restore_state(fep);
++ }
+
+ /* Enable interrupts we wish to service */
+ if (fep->link)
+@@ -1336,6 +1340,8 @@ fec_stop(struct net_device *ndev)
+ netdev_err(ndev, "Graceful transmit stop did not complete!\n");
+ }
+
++ fec_ptp_save_state(fep);
++
+ /* Whack a reset. We should wait for this.
+ * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+ * instead of reset MAC itself.
+@@ -1366,6 +1372,9 @@ fec_stop(struct net_device *ndev)
+ val = readl(fep->hwp + FEC_ECNTRL);
+ val |= FEC_ECR_EN1588;
+ writel(val, fep->hwp + FEC_ECNTRL);
++
++ fec_ptp_start_cyclecounter(ndev);
++ fec_ptp_restore_state(fep);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 2e4f3e1782a252..5e8fac50f945d4 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -770,6 +770,56 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
+ schedule_delayed_work(&fep->time_keep, HZ);
+ }
+
++void fec_ptp_save_state(struct fec_enet_private *fep)
++{
++ unsigned long flags;
++ u32 atime_inc_corr;
++
++ spin_lock_irqsave(&fep->tmreg_lock, flags);
++
++ fep->ptp_saved_state.pps_enable = fep->pps_enable;
++
++ fep->ptp_saved_state.ns_phc = timecounter_read(&fep->tc);
++ fep->ptp_saved_state.ns_sys = ktime_get_ns();
++
++ fep->ptp_saved_state.at_corr = readl(fep->hwp + FEC_ATIME_CORR);
++ atime_inc_corr = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_CORR_MASK;
++ fep->ptp_saved_state.at_inc_corr = (u8)(atime_inc_corr >> FEC_T_INC_CORR_OFFSET);
++
++ spin_unlock_irqrestore(&fep->tmreg_lock, flags);
++}
++
++/* Restore PTP functionality after a reset */
++void fec_ptp_restore_state(struct fec_enet_private *fep)
++{
++ u32 atime_inc = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_MASK;
++ unsigned long flags;
++ u32 counter;
++ u64 ns;
++
++ spin_lock_irqsave(&fep->tmreg_lock, flags);
++
++ /* Reset turned it off, so adjust our status flag */
++ fep->pps_enable = 0;
++
++ writel(fep->ptp_saved_state.at_corr, fep->hwp + FEC_ATIME_CORR);
++ atime_inc |= ((u32)fep->ptp_saved_state.at_inc_corr) << FEC_T_INC_CORR_OFFSET;
++ writel(atime_inc, fep->hwp + FEC_ATIME_INC);
++
++ ns = ktime_get_ns() - fep->ptp_saved_state.ns_sys + fep->ptp_saved_state.ns_phc;
++ counter = ns & fep->cc.mask;
++ writel(counter, fep->hwp + FEC_ATIME);
++ timecounter_init(&fep->tc, &fep->cc, ns);
++
++ spin_unlock_irqrestore(&fep->tmreg_lock, flags);
++
++ /* Restart PPS if needed */
++ if (fep->ptp_saved_state.pps_enable) {
++ /* Re-enable PPS */
++ fec_ptp_enable_pps(fep, 1);
++ }
++}
++
+ void fec_ptp_stop(struct platform_device *pdev)
+ {
+ struct net_device *ndev = platform_get_drvdata(pdev);
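
The restore helper above re-seeds the PHC by projecting the saved counter forward by the system time that elapsed across the reset: ns = ktime_get_ns() - ns_sys + ns_phc. A small userspace sketch of that projection, with illustrative numbers:

/* Userspace sketch of the projection in fec_ptp_restore_state();
 * the numbers are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t ns_phc = 1000000000ULL;  /* PHC time read at save */
	uint64_t ns_sys = 5000000000ULL;  /* ktime_get_ns() at save */
	uint64_t now    = 5003000000ULL;  /* ktime_get_ns() at restore */

	/* 3 ms of system time elapsed across the reset, so the PHC
	 * resumes at its saved value advanced by the same 3 ms.
	 */
	uint64_t ns = now - ns_sys + ns_phc;
	printf("restored PHC = %llu ns\n", (unsigned long long)ns);
	return 0;
}
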
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index b91e7a06b97f7d..beb815e5289b12 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -947,6 +947,7 @@ static int hip04_mac_probe(struct platform_device *pdev)
+ priv->tx_coalesce_timer.function = tx_done;
+
+ priv->map = syscon_node_to_regmap(arg.np);
++ of_node_put(arg.np);
+ if (IS_ERR(priv->map)) {
+ dev_warn(d, "no syscon hisilicon,hip04-ppe\n");
+ ret = PTR_ERR(priv->map);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+index f75668c4793519..616a2768e5048a 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -933,6 +933,7 @@ static int hns_mac_get_info(struct hns_mac_cb *mac_cb)
+ mac_cb->cpld_ctrl = NULL;
+ } else {
+ syscon = syscon_node_to_regmap(cpld_args.np);
++ of_node_put(cpld_args.np);
+ if (IS_ERR_OR_NULL(syscon)) {
+ dev_dbg(mac_cb->dev, "no cpld-syscon found!\n");
+ mac_cb->cpld_ctrl = NULL;
+diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c
+index ed73707176c1af..8a047145f0c50b 100644
+--- a/drivers/net/ethernet/hisilicon/hns_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns_mdio.c
+@@ -575,6 +575,7 @@ static int hns_mdio_probe(struct platform_device *pdev)
+ MDIO_SC_RESET_ST;
+ }
+ }
++ of_node_put(reg_args.np);
+ } else {
+ dev_warn(&pdev->dev, "find syscon ret = %#x\n", ret);
+ mdio_dev->subctrl_vbase = NULL;
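
The three hunks above (hip04, hns_dsaf, hns_mdio) fix the same reference leak: the phandle parser returns args.np with its refcount elevated, and the caller must drop it with of_node_put() once syscon_node_to_regmap() has taken what it needs. A minimal sketch of the pattern, with a hypothetical property name:

/* Sketch of the refcount rule; the property name is hypothetical. */
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

static struct regmap *demo_get_syscon(struct device_node *np)
{
	struct of_phandle_args args;
	struct regmap *map;
	int ret;

	ret = of_parse_phandle_with_fixed_args(np, "vendor,syscon",
					       1, 0, &args);
	if (ret)
		return ERR_PTR(ret);

	map = syscon_node_to_regmap(args.np);
	of_node_put(args.np);	/* drop the reference the parser took */
	return map;
}
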
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 360ee26557f770..f103249b12facf 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -6671,8 +6671,10 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ if (adapter->flags2 & FLAG2_HAS_PHY_WAKEUP) {
+ /* enable wakeup by the PHY */
+ retval = e1000_init_phy_wakeup(adapter, wufc);
+- if (retval)
+- return retval;
++ if (retval) {
++ e_err("Failed to enable wakeup\n");
++ goto skip_phy_configurations;
++ }
+ } else {
+ /* enable wakeup by the MAC */
+ ew32(WUFC, wufc);
+@@ -6693,8 +6695,10 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ * or broadcast.
+ */
+ retval = e1000_enable_ulp_lpt_lp(hw, !runtime);
+- if (retval)
+- return retval;
++ if (retval) {
++ e_err("Failed to enable ULP\n");
++ goto skip_phy_configurations;
++ }
+ }
+ }
+
+@@ -6726,6 +6730,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ hw->phy.ops.release(hw);
+ }
+
++skip_phy_configurations:
+ /* Release control of h/w to f/w. If f/w is AMT enabled, this
+ * would have already happened in close and is redundant.
+ */
+@@ -6968,15 +6973,13 @@ static int e1000e_pm_suspend(struct device *dev)
+ e1000e_pm_freeze(dev);
+
+ rc = __e1000_shutdown(pdev, false);
+- if (rc) {
+- e1000e_pm_thaw(dev);
+- } else {
++ if (!rc) {
+ /* Introduce S0ix implementation */
+ if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS)
+ e1000e_s0ix_entry_flow(adapter);
+ }
+
+- return rc;
++ return 0;
+ }
+
+ static int e1000e_pm_resume(struct device *dev)
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
+index ecf8f5d6029219..6ca13c5dcb14e7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.c
++++ b/drivers/net/ethernet/intel/ice/ice_sched.c
+@@ -28,9 +28,8 @@ ice_sched_add_root_node(struct ice_port_info *pi,
+ if (!root)
+ return -ENOMEM;
+
+- /* coverity[suspicious_sizeof] */
+ root->children = devm_kcalloc(ice_hw_to_dev(hw), hw->max_children[0],
+- sizeof(*root), GFP_KERNEL);
++ sizeof(*root->children), GFP_KERNEL);
+ if (!root->children) {
+ devm_kfree(ice_hw_to_dev(hw), root);
+ return -ENOMEM;
+@@ -186,10 +185,9 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+ if (!node)
+ return -ENOMEM;
+ if (hw->max_children[layer]) {
+- /* coverity[suspicious_sizeof] */
+ node->children = devm_kcalloc(ice_hw_to_dev(hw),
+ hw->max_children[layer],
+- sizeof(*node), GFP_KERNEL);
++ sizeof(*node->children), GFP_KERNEL);
+ if (!node->children) {
+ devm_kfree(ice_hw_to_dev(hw), node);
+ return -ENOMEM;
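
Both ice hunks replace sizeof(*node) with sizeof(*node->children): the allocation holds an array of child pointers, so each slot must be sized by the pointed-to element of that array, not by the containing struct. Sizing by the struct over-allocated, which is what the dropped Coverity suspicious_sizeof annotations had been papering over. A userspace illustration:

/* Userspace illustration of the sizeof(*ptr) idiom fixed above. */
#include <stdio.h>
#include <stdlib.h>

struct node {
	int data[32];
	struct node **children;	/* array of child pointers */
};

int main(void)
{
	struct node *root = calloc(1, sizeof(*root));
	size_t max_children = 8;

	/* Wrong: sizes each slot as a whole struct node (over-allocates) */
	printf("per-slot wrong: %zu\n", sizeof(*root));
	/* Right: each slot is a struct node * */
	root->children = calloc(max_children, sizeof(*root->children));
	printf("per-slot right: %zu\n", sizeof(*root->children));

	free(root->children);
	free(root);
	return 0;
}
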
+diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
+index 9e69848153864b..804914e7d9a83e 100644
+--- a/drivers/net/ethernet/lantiq_etop.c
++++ b/drivers/net/ethernet/lantiq_etop.c
+@@ -482,7 +482,9 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
+ unsigned long flags;
+ u32 byte_offset;
+
+- len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
++ if (skb_put_padto(skb, ETH_ZLEN))
++ return NETDEV_TX_OK;
++ len = skb->len;
+
+ if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
+ netdev_err(dev, "tx ring full\n");
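
The lantiq hunk replaces the old trick of inflating len to ETH_ZLEN, which made the DMA transmit whatever bytes happened to trail a short frame in memory, with real zero padding. Note that skb_put_padto() frees the skb on failure, so the error path must return NETDEV_TX_OK without touching it again. A minimal sketch of the pattern:

/* Sketch of the ndo_start_xmit padding pattern; the DMA hand-off is
 * elided.
 */
#include <linux/if_ether.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t demo_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* Pads short frames to 60 bytes with zeroes; on allocation
	 * failure the skb has already been freed, so just report OK.
	 */
	if (skb_put_padto(skb, ETH_ZLEN))
		return NETDEV_TX_OK;

	/* skb->len is now >= ETH_ZLEN and the padding is real memory,
	 * safe to map for DMA.
	 */
	return NETDEV_TX_OK;
}
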
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index e809f91c08fb9d..9e02e4367bec81 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -1088,7 +1088,7 @@ struct mvpp2 {
+ unsigned int max_port_rxqs;
+
+ /* Workqueue to gather hardware statistics */
+- char queue_name[30];
++ char queue_name[31];
+ struct workqueue_struct *stats_queue;
+
+ /* Debugfs root entry */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index d9e241423bc567..6cff0c45ff9810 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -627,7 +627,7 @@ struct mlx5e_shampo_hd {
+ struct mlx5e_dma_info *info;
+ struct mlx5e_frag_page *pages;
+ u16 curr_page_index;
+- u16 hd_per_wq;
++ u32 hd_per_wq;
+ u16 hd_per_wqe;
+ unsigned long *bitmap;
+ u16 pi;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
+index d4239e3b3c88ef..11f724ad90dbfb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
+@@ -23,6 +23,9 @@ struct mlx5e_tir_builder *mlx5e_tir_builder_alloc(bool modify)
+ struct mlx5e_tir_builder *builder;
+
+ builder = kvzalloc(sizeof(*builder), GFP_KERNEL);
++ if (!builder)
++ return NULL;
++
+ builder->modify = modify;
+
+ return builder;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 3d274599015be1..ca92e518be7669 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -67,7 +67,6 @@ static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
+ return;
+
+ spin_lock_bh(&x->lock);
+- xfrm_state_check_expire(x);
+ if (x->km.state == XFRM_STATE_EXPIRED) {
+ sa_entry->attrs.drop = true;
+ spin_unlock_bh(&x->lock);
+@@ -75,6 +74,13 @@ static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
+ mlx5e_accel_ipsec_fs_modify(sa_entry);
+ return;
+ }
++
++ if (x->km.state != XFRM_STATE_VALID) {
++ spin_unlock_bh(&x->lock);
++ return;
++ }
++
++ xfrm_state_check_expire(x);
+ spin_unlock_bh(&x->lock);
+
+ queue_delayed_work(sa_entry->ipsec->wq, &dwork->dwork,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index b09e9abd39f37f..f8c7912abe0e3f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -642,7 +642,6 @@ mlx5e_sq_xmit_mpwqe(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ return;
+
+ err_unmap:
+- mlx5e_dma_unmap_wqe_err(sq, 1);
+ sq->stats->dropped++;
+ dev_kfree_skb_any(skb);
+ mlx5e_tx_flush(sq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
+index d0b595ba611014..432c98f2626db9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
+@@ -24,6 +24,11 @@
+ pci_write_config_dword((dev)->pdev, (dev)->vsc_addr + (offset), (val))
+ #define VSC_MAX_RETRIES 2048
+
++/* Reading VSC registers can take a relatively long time.
++ * Yield the CPU every 128 register reads.
++ */
++#define VSC_GW_READ_BLOCK_COUNT 128
++
+ enum {
+ VSC_CTRL_OFFSET = 0x4,
+ VSC_COUNTER_OFFSET = 0x8,
+@@ -273,6 +278,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
+ {
+ unsigned int next_read_addr = 0;
+ unsigned int read_addr = 0;
++ unsigned int count = 0;
+
+ while (read_addr < length) {
+ if (mlx5_vsc_gw_read_fast(dev, read_addr, &next_read_addr,
+@@ -280,6 +286,10 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
+ return read_addr;
+
+ read_addr = next_read_addr;
++ if (++count == VSC_GW_READ_BLOCK_COUNT) {
++ cond_resched();
++ count = 0;
++ }
+ }
+ return length;
+ }
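
The pci_vsc hunk above inserts a scheduling point every 128 slow register reads so a long dump cannot monopolize the CPU, without paying for cond_resched() on every iteration. The shape of the pattern, with a hypothetical loop body:

/* Sketch of the yield-every-N pattern; the loop body is hypothetical. */
#include <linux/sched.h>

#define DEMO_BLOCK_COUNT 128

static void demo_long_loop(unsigned int iters)
{
	unsigned int i, count = 0;

	for (i = 0; i < iters; i++) {
		/* ... one slow register read ... */
		if (++count == DEMO_BLOCK_COUNT) {
			cond_resched();	/* let other tasks run */
			count = 0;
		}
	}
}
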
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+index f3f5fb4204689b..70427643f777c0 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+@@ -45,8 +45,12 @@ void sparx5_ifh_parse(u32 *ifh, struct frame_info *info)
+ fwd = (fwd >> 5);
+ info->src_port = FIELD_GET(GENMASK(7, 1), fwd);
+
++	/*
++	 * Bits 270-271 are occasionally set unexpectedly by the hardware;
++	 * clear them before extracting the timestamp.
++	 */
+ info->timestamp =
+- ((u64)xtr_hdr[2] << 24) |
++ ((u64)(xtr_hdr[2] & GENMASK(5, 0)) << 24) |
+ ((u64)xtr_hdr[3] << 16) |
+ ((u64)xtr_hdr[4] << 8) |
+ ((u64)xtr_hdr[5] << 0);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index 182ba0a8b095bc..6e0929af0f725b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -821,14 +821,13 @@ nfp_net_prepare_vector(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
+
+ snprintf(r_vec->name, sizeof(r_vec->name),
+ "%s-rxtx-%d", nfp_net_name(nn), idx);
+- err = request_irq(r_vec->irq_vector, r_vec->handler, 0, r_vec->name,
+- r_vec);
++ err = request_irq(r_vec->irq_vector, r_vec->handler, IRQF_NO_AUTOEN,
++ r_vec->name, r_vec);
+ if (err) {
+ nfp_net_napi_del(&nn->dp, r_vec);
+ nn_err(nn, "Error requesting IRQ %d\n", r_vec->irq_vector);
+ return err;
+ }
+- disable_irq(r_vec->irq_vector);
+
+ irq_set_affinity_hint(r_vec->irq_vector, &r_vec->affinity_mask);
+
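
The nfp hunk (and the mcr20a one further down) replaces the racy request_irq() + disable_irq() pair with IRQF_NO_AUTOEN, which registers the handler with the line already masked; previously the interrupt could fire in the window between the two calls, before the driver was ready. A minimal sketch, names hypothetical:

/* Sketch of the IRQF_NO_AUTOEN pattern; names are hypothetical. */
#include <linux/interrupt.h>

static irqreturn_t demo_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int demo_setup_irq(int irq, void *data)
{
	int err;

	/* The IRQ stays masked until we are ready for it */
	err = request_irq(irq, demo_handler, IRQF_NO_AUTOEN, "demo", data);
	if (err)
		return err;

	/* ... finish ring/hardware setup ... */

	enable_irq(irq);	/* now safe to take interrupts */
	return 0;
}
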
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 3507c2e28110d2..01e18f645c0eda 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -576,7 +576,34 @@ struct rtl8169_counters {
+ __le64 rx_broadcast;
+ __le32 rx_multicast;
+ __le16 tx_aborted;
+- __le16 tx_underun;
++ __le16 tx_underrun;
++ /* new since RTL8125 */
++ __le64 tx_octets;
++ __le64 rx_octets;
++ __le64 rx_multicast64;
++ __le64 tx_unicast64;
++ __le64 tx_broadcast64;
++ __le64 tx_multicast64;
++ __le32 tx_pause_on;
++ __le32 tx_pause_off;
++ __le32 tx_pause_all;
++ __le32 tx_deferred;
++ __le32 tx_late_collision;
++ __le32 tx_all_collision;
++ __le32 tx_aborted32;
++ __le32 align_errors32;
++ __le32 rx_frame_too_long;
++ __le32 rx_runt;
++ __le32 rx_pause_on;
++ __le32 rx_pause_off;
++ __le32 rx_pause_all;
++ __le32 rx_unknown_opcode;
++ __le32 rx_mac_error;
++ __le32 tx_underrun32;
++ __le32 rx_mac_missed;
++ __le32 rx_tcam_dropped;
++ __le32 tdu;
++ __le32 rdu;
+ };
+
+ struct rtl8169_tc_offsets {
+@@ -1841,7 +1868,7 @@ static void rtl8169_get_ethtool_stats(struct net_device *dev,
+ data[9] = le64_to_cpu(counters->rx_broadcast);
+ data[10] = le32_to_cpu(counters->rx_multicast);
+ data[11] = le16_to_cpu(counters->tx_aborted);
+- data[12] = le16_to_cpu(counters->tx_underun);
++ data[12] = le16_to_cpu(counters->tx_underrun);
+ }
+
+ static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 31c387cc5f269d..5e64cf15670b10 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/ethtool.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include "stmmac.h"
+ #include "stmmac_pcs.h"
+ #include "dwmac4.h"
+@@ -475,7 +476,7 @@ static int dwmac4_write_vlan_filter(struct net_device *dev,
+ u8 index, u32 data)
+ {
+ void __iomem *ioaddr = (void __iomem *)dev->base_addr;
+- int i, timeout = 10;
++ int ret;
+ u32 val;
+
+ if (index >= hw->num_vlan)
+@@ -491,16 +492,15 @@ static int dwmac4_write_vlan_filter(struct net_device *dev,
+
+ writel(val, ioaddr + GMAC_VLAN_TAG);
+
+- for (i = 0; i < timeout; i++) {
+- val = readl(ioaddr + GMAC_VLAN_TAG);
+- if (!(val & GMAC_VLAN_TAG_CTRL_OB))
+- return 0;
+- udelay(1);
++ ret = readl_poll_timeout(ioaddr + GMAC_VLAN_TAG, val,
++ !(val & GMAC_VLAN_TAG_CTRL_OB),
++ 1000, 500000);
++ if (ret) {
++ netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n");
++ return -EBUSY;
+ }
+
+- netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n");
+-
+- return -EBUSY;
++ return 0;
+ }
+
+ static int dwmac4_add_hw_vlan_rx_fltr(struct net_device *dev,
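
The dwmac4 hunk folds the open-coded read/udelay loop into readl_poll_timeout(), which also stretches the budget from ten 1 us tries to up to 500 ms of 1 ms sleeps. The helper's shape, with a hypothetical register and busy bit:

/* Sketch of the helper; the register and busy bit are hypothetical. */
#include <linux/bits.h>
#include <linux/iopoll.h>

#define DEMO_CTRL_BUSY	BIT(31)

static int demo_wait_idle(void __iomem *reg)
{
	u32 val;

	/* Re-read every 1000 us until the busy bit clears; gives up
	 * after 500000 us and returns -ETIMEDOUT.
	 */
	return readl_poll_timeout(reg, val, !(val & DEMO_CTRL_BUSY),
				  1000, 500000);
}
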
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 996f2bcd07a243..308ef42417684e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -396,6 +396,7 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ return ret;
+
+ priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_DCB;
++ return 0;
+ }
+
+ /* Final adjustments for HW */
+diff --git a/drivers/net/ieee802154/Kconfig b/drivers/net/ieee802154/Kconfig
+index 95da876c561384..1075e24b11defc 100644
+--- a/drivers/net/ieee802154/Kconfig
++++ b/drivers/net/ieee802154/Kconfig
+@@ -101,6 +101,7 @@ config IEEE802154_CA8210_DEBUGFS
+
+ config IEEE802154_MCR20A
+ tristate "MCR20A transceiver driver"
++ select REGMAP_SPI
+ depends on IEEE802154_DRIVERS && MAC802154
+ depends on SPI
+ help
+diff --git a/drivers/net/ieee802154/mcr20a.c b/drivers/net/ieee802154/mcr20a.c
+index 433fb583920310..020d392a98b69d 100644
+--- a/drivers/net/ieee802154/mcr20a.c
++++ b/drivers/net/ieee802154/mcr20a.c
+@@ -1302,16 +1302,13 @@ mcr20a_probe(struct spi_device *spi)
+ irq_type = IRQF_TRIGGER_FALLING;
+
+ ret = devm_request_irq(&spi->dev, spi->irq, mcr20a_irq_isr,
+- irq_type, dev_name(&spi->dev), lp);
++ irq_type | IRQF_NO_AUTOEN, dev_name(&spi->dev), lp);
+ if (ret) {
+ dev_err(&spi->dev, "could not request_irq for mcr20a\n");
+ ret = -ENODEV;
+ goto free_dev;
+ }
+
+- /* disable_irq by default and wait for starting hardware */
+- disable_irq(spi->irq);
+-
+ ret = ieee802154_register_hw(hw);
+ if (ret) {
+ dev_crit(&spi->dev, "ieee802154_register_hw failed\n");
+diff --git a/drivers/net/pcs/pcs-xpcs-wx.c b/drivers/net/pcs/pcs-xpcs-wx.c
+index 19c75886f070ea..5f5cd3596cb846 100644
+--- a/drivers/net/pcs/pcs-xpcs-wx.c
++++ b/drivers/net/pcs/pcs-xpcs-wx.c
+@@ -109,7 +109,7 @@ static void txgbe_pma_config_1g(struct dw_xpcs *xpcs)
+ txgbe_write_pma(xpcs, TXGBE_DFE_TAP_CTL0, 0);
+ val = txgbe_read_pma(xpcs, TXGBE_RX_GEN_CTL3);
+ val = u16_replace_bits(val, 0x4, TXGBE_RX_GEN_CTL3_LOS_TRSHLD0);
+- txgbe_write_pma(xpcs, TXGBE_RX_EQ_ATTN_CTL, val);
++ txgbe_write_pma(xpcs, TXGBE_RX_GEN_CTL3, val);
+
+ txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL0, 0x20);
+ txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL3, 0x46);
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 785182fa5fe01f..b88d857ea23b80 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -342,14 +342,19 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
+ if (mdio_phy_id_is_c45(mii_data->phy_id)) {
+ prtad = mdio_phy_id_prtad(mii_data->phy_id);
+ devad = mdio_phy_id_devad(mii_data->phy_id);
+- mii_data->val_out = mdiobus_c45_read(
+- phydev->mdio.bus, prtad, devad,
+- mii_data->reg_num);
++ ret = mdiobus_c45_read(phydev->mdio.bus, prtad, devad,
++ mii_data->reg_num);
++
+ } else {
+- mii_data->val_out = mdiobus_read(
+- phydev->mdio.bus, mii_data->phy_id,
+- mii_data->reg_num);
++ ret = mdiobus_read(phydev->mdio.bus, mii_data->phy_id,
++ mii_data->reg_num);
+ }
++
++ if (ret < 0)
++ return ret;
++
++ mii_data->val_out = ret;
++
+ return 0;
+
+ case SIOCSMIIREG:
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index 25e5bfbb6f89b8..c15d2f66ef0dc2 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -527,6 +527,9 @@ static int rtl8211f_led_hw_control_get(struct phy_device *phydev, u8 index,
+ {
+ int val;
+
++ if (index >= RTL8211F_LED_COUNT)
++ return -EINVAL;
++
+ val = phy_read_paged(phydev, 0xd04, RTL8211F_LEDCR);
+ if (val < 0)
+ return val;
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index eb9acfcaeb0974..9d2656afba660a 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -2269,7 +2269,7 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
+ if (!pchb)
+ goto out_rcu;
+
+- spin_lock(&pchb->downl);
++ spin_lock_bh(&pchb->downl);
+ if (!pchb->chan) {
+ /* channel got unregistered */
+ kfree_skb(skb);
+@@ -2281,7 +2281,7 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
+ kfree_skb(skb);
+
+ outl:
+- spin_unlock(&pchb->downl);
++ spin_unlock_bh(&pchb->downl);
+ out_rcu:
+ rcu_read_unlock();
+
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 040f0bb36c0ea4..dc3866a70da815 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -607,7 +607,9 @@ static void vrf_finish_direct(struct sk_buff *skb)
+ eth_zero_addr(eth->h_dest);
+ eth->h_proto = skb->protocol;
+
++ rcu_read_lock_bh();
+ dev_queue_xmit_nit(skb, vrf_dev);
++ rcu_read_unlock_bh();
+
+ skb_pull(skb, ETH_HLEN);
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 86485580dd8953..c087d8a0f5b255 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2697,7 +2697,7 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ if (unlikely(push_reason !=
+ HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION)) {
+ dev_kfree_skb_any(msdu);
+- ab->soc_stats.hal_reo_error[dp->reo_dst_ring[ring_id].ring_id]++;
++ ab->soc_stats.hal_reo_error[ring_id]++;
+ continue;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 14236d0a0c89db..91e3393f7b5f40 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2681,7 +2681,7 @@ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
+ if (push_reason !=
+ HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION) {
+ dev_kfree_skb_any(msdu);
+- ab->soc_stats.hal_reo_error[dp->reo_dst_ring[ring_id].ring_id]++;
++ ab->soc_stats.hal_reo_error[ring_id]++;
+ continue;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
+index bf3da631c69fda..51abc470125b3c 100644
+--- a/drivers/net/wireless/ath/ath9k/debug.c
++++ b/drivers/net/wireless/ath/ath9k/debug.c
+@@ -1325,11 +1325,11 @@ void ath9k_get_et_stats(struct ieee80211_hw *hw,
+ struct ath_softc *sc = hw->priv;
+ int i = 0;
+
+- data[i++] = (sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_pkts_all +
++ data[i++] = ((u64)sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_pkts_all +
+ sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BK)].tx_pkts_all +
+ sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VI)].tx_pkts_all +
+ sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VO)].tx_pkts_all);
+- data[i++] = (sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_bytes_all +
++ data[i++] = ((u64)sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_bytes_all +
+ sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BK)].tx_bytes_all +
+ sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VI)].tx_bytes_all +
+ sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VO)].tx_bytes_all);
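
The ath9k casts above (like the rtw89 ra_mask casts further down) force the arithmetic itself into 64 bits; without them the u32 counters are summed in 32-bit arithmetic and only the already-wrapped result is widened for the u64 destination. A userspace demonstration:

/* Userspace demonstration of the 32-bit summation wrap fixed above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t a = 3000000000u, b = 2000000000u;

	uint64_t wrong = a + b;		   /* sum wraps in 32 bits first */
	uint64_t right = (uint64_t)a + b;  /* widened before the add */

	printf("wrong = %llu, right = %llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}
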
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 0c7841f952287f..a3733c9b484e46 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -716,8 +716,7 @@ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+ }
+
+ resubmit:
+- skb_reset_tail_pointer(skb);
+- skb_trim(skb, 0);
++ __skb_set_length(skb, 0);
+
+ usb_anchor_urb(urb, &hif_dev->rx_submitted);
+ ret = usb_submit_urb(urb, GFP_ATOMIC);
+@@ -754,8 +753,7 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ case -ESHUTDOWN:
+ goto free_skb;
+ default:
+- skb_reset_tail_pointer(skb);
+- skb_trim(skb, 0);
++ __skb_set_length(skb, 0);
+
+ goto resubmit;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 8c8880b4482701..a7cea0a55b35af 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -357,6 +357,11 @@ int iwl_acpi_get_mcc(struct iwl_fw_runtime *fwrt, char *mcc)
+ }
+
+ mcc_val = wifi_pkg->package.elements[1].integer.value;
++ if (mcc_val != BIOS_MCC_CHINA) {
++ ret = -EINVAL;
++ IWL_DEBUG_RADIO(fwrt, "ACPI WRDD is supported only for CN\n");
++ goto out_free;
++ }
+
+ mcc[0] = (mcc_val >> 8) & 0xff;
+ mcc[1] = mcc_val & 0xff;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+index 8598031567bba5..0aefdf353b214f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+@@ -1132,6 +1132,19 @@ struct iwl_umac_scan_abort {
+ __le32 flags;
+ } __packed; /* SCAN_ABORT_CMD_UMAC_API_S_VER_1 */
+
++/**
++ * enum iwl_umac_scan_abort_status - status of a UMAC scan abort request
++ *
++ * @IWL_UMAC_SCAN_ABORT_STATUS_SUCCESS: scan was successfully aborted
++ * @IWL_UMAC_SCAN_ABORT_STATUS_IN_PROGRESS: scan abort is in progress
++ * @IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND: nothing to abort
++ */
++enum iwl_umac_scan_abort_status {
++ IWL_UMAC_SCAN_ABORT_STATUS_SUCCESS = 0,
++ IWL_UMAC_SCAN_ABORT_STATUS_IN_PROGRESS,
++ IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND,
++};
++
+ /**
+ * struct iwl_umac_scan_complete
+ * @uid: scan id, &enum iwl_umac_scan_uid_offsets
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h
+index e2c056f483c1c7..c5bd89e61d4a86 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h
+@@ -45,6 +45,8 @@
+ #define IWL_WTAS_ENABLE_IEC_MSK 0x4
+ #define IWL_WTAS_USA_UHB_MSK BIT(16)
+
++#define BIOS_MCC_CHINA 0x434e
++
+ /*
+ * The profile for revision 2 is a superset of revision 1, which is in
+ * turn a superset of revision 0. So we can store all revisions
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+index fb982d4fe85100..2cf878f237ac68 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+@@ -638,7 +638,7 @@ int iwl_uefi_get_mcc(struct iwl_fw_runtime *fwrt, char *mcc)
+ goto out;
+ }
+
+- if (data->mcc != UEFI_MCC_CHINA) {
++ if (data->mcc != BIOS_MCC_CHINA) {
+ ret = -EINVAL;
+ IWL_DEBUG_RADIO(fwrt, "UEFI WRDD is supported only for CN\n");
+ goto out;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+index 1f8884ca8997c4..e0ef981cd8f28d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+@@ -149,8 +149,6 @@ struct uefi_cnv_var_splc {
+ u32 default_pwr_limit;
+ } __packed;
+
+-#define UEFI_MCC_CHINA 0x434e
+-
+ /* struct uefi_cnv_var_wrdd - WRDD table as defined in UEFI
+ * @revision: the revision of the table
+ * @mcc: country identifier as defined in ISO/IEC 3166-1 Alpha 2 code
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 625ccf566e1c2a..1ebcc6417ecef1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -838,20 +838,10 @@ void iwl_mvm_mac_tx(struct ieee80211_hw *hw,
+ if (ieee80211_is_mgmt(hdr->frame_control))
+ sta = NULL;
+
+- /* If there is no sta, and it's not offchannel - send through AP */
++ /* this shouldn't even happen: just drop */
+ if (!sta && info->control.vif->type == NL80211_IFTYPE_STATION &&
+- !offchannel) {
+- struct iwl_mvm_vif *mvmvif =
+- iwl_mvm_vif_from_mac80211(info->control.vif);
+- u8 ap_sta_id = READ_ONCE(mvmvif->deflink.ap_sta_id);
+-
+- if (ap_sta_id < mvm->fw->ucode_capa.num_stations) {
+- /* mac80211 holds rcu read lock */
+- sta = rcu_dereference(mvm->fw_id_to_mac_id[ap_sta_id]);
+- if (IS_ERR_OR_NULL(sta))
+- goto drop;
+- }
+- }
++ !offchannel)
++ goto drop;
+
+ if (tmp_sta && !sta && link_id != IEEE80211_LINK_UNSPECIFIED &&
+ !ieee80211_is_probe_resp(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+index 8a38fc4b0b0f97..455f5f41750642 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+@@ -144,7 +144,7 @@ static void iwl_mvm_mld_update_sta_key(struct ieee80211_hw *hw,
+ if (sta != data->sta || key->link_id >= 0)
+ return;
+
+- err = iwl_mvm_send_cmd_pdu(mvm, cmd_id, CMD_ASYNC, sizeof(cmd), &cmd);
++ err = iwl_mvm_send_cmd_pdu(mvm, cmd_id, 0, sizeof(cmd), &cmd);
+
+ if (err)
+ data->err = err;
+@@ -162,8 +162,8 @@ int iwl_mvm_mld_update_sta_keys(struct iwl_mvm *mvm,
+ .new_sta_mask = new_sta_mask,
+ };
+
+- ieee80211_iter_keys_rcu(mvm->hw, vif, iwl_mvm_mld_update_sta_key,
+- &data);
++ ieee80211_iter_keys(mvm->hw, vif, iwl_mvm_mld_update_sta_key,
++ &data);
+ return data.err;
+ }
+
+@@ -402,7 +402,7 @@ void iwl_mvm_sec_key_remove_ap(struct iwl_mvm *mvm,
+ if (!sec_key_ver)
+ return;
+
+- ieee80211_iter_keys_rcu(mvm->hw, vif,
+- iwl_mvm_sec_key_remove_ap_iter,
+- (void *)(uintptr_t)link_id);
++ ieee80211_iter_keys(mvm->hw, vif,
++ iwl_mvm_sec_key_remove_ap_iter,
++ (void *)(uintptr_t)link_id);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 1cc9c426bb1599..3a9018595ea909 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -3313,13 +3313,23 @@ void iwl_mvm_rx_umac_scan_iter_complete_notif(struct iwl_mvm *mvm,
+ mvm->scan_start);
+ }
+
+-static int iwl_mvm_umac_scan_abort(struct iwl_mvm *mvm, int type)
++static int iwl_mvm_umac_scan_abort(struct iwl_mvm *mvm, int type, bool *wait)
+ {
+- struct iwl_umac_scan_abort cmd = {};
++ struct iwl_umac_scan_abort abort_cmd = {};
++ struct iwl_host_cmd cmd = {
++ .id = WIDE_ID(IWL_ALWAYS_LONG_GROUP, SCAN_ABORT_UMAC),
++ .len = { sizeof(abort_cmd), },
++ .data = { &abort_cmd, },
++ .flags = CMD_SEND_IN_RFKILL,
++ };
++
+ int uid, ret;
++ u32 status = IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND;
+
+ lockdep_assert_held(&mvm->mutex);
+
++ *wait = true;
++
+ /* We should always get a valid index here, because we already
+ * checked that this type of scan was running in the generic
+ * code.
+@@ -3328,17 +3338,28 @@ static int iwl_mvm_umac_scan_abort(struct iwl_mvm *mvm, int type)
+ if (WARN_ON_ONCE(uid < 0))
+ return uid;
+
+- cmd.uid = cpu_to_le32(uid);
++ abort_cmd.uid = cpu_to_le32(uid);
+
+ IWL_DEBUG_SCAN(mvm, "Sending scan abort, uid %u\n", uid);
+
+- ret = iwl_mvm_send_cmd_pdu(mvm,
+- WIDE_ID(IWL_ALWAYS_LONG_GROUP, SCAN_ABORT_UMAC),
+- CMD_SEND_IN_RFKILL, sizeof(cmd), &cmd);
++ ret = iwl_mvm_send_cmd_status(mvm, &cmd, &status);
++
++ IWL_DEBUG_SCAN(mvm, "Scan abort: ret=%d, status=%u\n", ret, status);
+ if (!ret)
+ mvm->scan_uid_status[uid] = type << IWL_MVM_SCAN_STOPPING_SHIFT;
+
+- IWL_DEBUG_SCAN(mvm, "Scan abort: ret=%d\n", ret);
++	/* Handle the case where the FW is no longer familiar with the scan
++	 * that is to be stopped. In that case, the scan complete notification
++	 * was most likely already received but not yet processed, so there is
++	 * no need to wait for it and the flow can continue as if the scan
++	 * had really been aborted.
++	 */
++ if (status == IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND) {
++ mvm->scan_uid_status[uid] = type << IWL_MVM_SCAN_STOPPING_SHIFT;
++ *wait = false;
++ }
++
+ return ret;
+ }
+
+@@ -3348,6 +3369,7 @@ static int iwl_mvm_scan_stop_wait(struct iwl_mvm *mvm, int type)
+ static const u16 scan_done_notif[] = { SCAN_COMPLETE_UMAC,
+ SCAN_OFFLOAD_COMPLETE, };
+ int ret;
++ bool wait = true;
+
+ lockdep_assert_held(&mvm->mutex);
+
+@@ -3359,7 +3381,7 @@ static int iwl_mvm_scan_stop_wait(struct iwl_mvm *mvm, int type)
+ IWL_DEBUG_SCAN(mvm, "Preparing to stop scan, type %x\n", type);
+
+ if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_UMAC_SCAN))
+- ret = iwl_mvm_umac_scan_abort(mvm, type);
++ ret = iwl_mvm_umac_scan_abort(mvm, type, &wait);
+ else
+ ret = iwl_mvm_lmac_scan_abort(mvm);
+
+@@ -3367,6 +3389,10 @@ static int iwl_mvm_scan_stop_wait(struct iwl_mvm *mvm, int type)
+ IWL_DEBUG_SCAN(mvm, "couldn't stop scan type %d\n", type);
+ iwl_remove_notification(&mvm->notif_wait, &wait_scan_done);
+ return ret;
++ } else if (!wait) {
++ IWL_DEBUG_SCAN(mvm, "no need to wait for scan type %d\n", type);
++ iwl_remove_notification(&mvm->notif_wait, &wait_scan_done);
++ return 0;
+ }
+
+ return iwl_wait_notification(&mvm->notif_wait, &wait_scan_done,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 7ff5ea5e7aca57..db926b2f4d8d51 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1203,6 +1203,9 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ bool is_ampdu = false;
+ int hdrlen;
+
++ if (WARN_ON_ONCE(!sta))
++ return -1;
++
+ mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ fc = hdr->frame_control;
+ hdrlen = ieee80211_hdrlen(fc);
+@@ -1210,9 +1213,6 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ if (IWL_MVM_NON_TRANSMITTING_AP && ieee80211_is_probe_resp(fc))
+ return -1;
+
+- if (WARN_ON_ONCE(!mvmsta))
+- return -1;
+-
+ if (WARN_ON_ONCE(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
+ return -1;
+
+@@ -1343,7 +1343,7 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct ieee80211_sta *sta)
+ {
+- struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
++ struct iwl_mvm_sta *mvmsta;
+ struct ieee80211_tx_info info;
+ struct sk_buff_head mpdus_skbs;
+ struct ieee80211_vif *vif;
+@@ -1352,9 +1352,11 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct sk_buff *orig_skb = skb;
+ const u8 *addr3;
+
+- if (WARN_ON_ONCE(!mvmsta))
++ if (WARN_ON_ONCE(!sta))
+ return -1;
+
++ mvmsta = iwl_mvm_sta_from_mac80211(sta);
++
+ if (WARN_ON_ONCE(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
+ return -1;
+
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 3adc447b715f68..5b072120e3f213 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -1587,7 +1587,7 @@ struct host_cmd_ds_802_11_scan_rsp {
+
+ struct host_cmd_ds_802_11_scan_ext {
+ u32 reserved;
+- u8 tlv_buffer[1];
++ u8 tlv_buffer[];
+ } __packed;
+
+ struct mwifiex_ie_types_bss_mode {
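
Turning tlv_buffer[1] into a C99 flexible array member removes the phantom trailing byte from sizeof(struct host_cmd_ds_802_11_scan_ext), which is why the scan.c hunk below can drop the compensating "- 1" from the buf_left computation. A userspace illustration:

/* Userspace illustration of the flexible-array-member conversion. */
#include <stdio.h>

struct old_hdr { unsigned int reserved; unsigned char tlv[1]; };
struct new_hdr { unsigned int reserved; unsigned char tlv[]; };

int main(void)
{
	/* The [1] array adds a byte (plus padding) to sizeof; the
	 * flexible array adds nothing, so length math needs no "- 1".
	 */
	printf("old: %zu, new: %zu\n", sizeof(struct old_hdr),
	       sizeof(struct new_hdr));
	return 0;
}
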
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index 0326b121747cb2..17ce84f5207e3a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -2530,8 +2530,7 @@ int mwifiex_ret_802_11_scan_ext(struct mwifiex_private *priv,
+ ext_scan_resp = &resp->params.ext_scan;
+
+ tlv = (void *)ext_scan_resp->tlv_buffer;
+- buf_left = le16_to_cpu(resp->size) - (sizeof(*ext_scan_resp) + S_DS_GEN
+- - 1);
++ buf_left = le16_to_cpu(resp->size) - (sizeof(*ext_scan_resp) + S_DS_GEN);
+
+ while (buf_left >= sizeof(struct mwifiex_ie_types_header)) {
+ type = le16_to_cpu(tlv->type);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 7bc3b4cd359255..6bef96e3d2a3d9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -400,6 +400,7 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
+ ieee80211_hw_set(hw, SUPPORTS_RX_DECAP_OFFLOAD);
+ ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
+ ieee80211_hw_set(hw, WANT_MONITOR_VIF);
++ ieee80211_hw_set(hw, SUPPORTS_TX_FRAG);
+
+ hw->max_tx_fragments = 4;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 8008ce3fa6c7ea..387d47e9fcd386 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1537,12 +1537,14 @@ void mt7915_mac_reset_work(struct work_struct *work)
+ set_bit(MT76_RESET, &phy2->mt76->state);
+ cancel_delayed_work_sync(&phy2->mt76->mac_work);
+ }
++
++ mutex_lock(&dev->mt76.mutex);
++
+ mt76_worker_disable(&dev->mt76.tx_worker);
+ mt76_for_each_q_rx(&dev->mt76, i)
+ napi_disable(&dev->mt76.napi[i]);
+ napi_disable(&dev->mt76.tx_napi);
+
+- mutex_lock(&dev->mt76.mutex);
+
+ if (mtk_wed_device_active(&dev->mt76.mmio.wed))
+ mtk_wed_device_stop(&dev->mt76.mmio.wed);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index efbb8b23e4719b..e094358005799d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -1577,6 +1577,12 @@ mt7915_twt_teardown_request(struct ieee80211_hw *hw,
+ mutex_unlock(&dev->mt76.mutex);
+ }
+
++static int
++mt7915_set_frag_threshold(struct ieee80211_hw *hw, u32 val)
++{
++ return 0;
++}
++
+ static int
+ mt7915_set_radar_background(struct ieee80211_hw *hw,
+ struct cfg80211_chan_def *chandef)
+@@ -1707,6 +1713,7 @@ const struct ieee80211_ops mt7915_ops = {
+ .sta_set_decap_offload = mt7915_sta_set_decap_offload,
+ .add_twt_setup = mt7915_mac_add_twt_setup,
+ .twt_teardown_request = mt7915_twt_teardown_request,
++ .set_frag_threshold = mt7915_set_frag_threshold,
+ CFG80211_TESTMODE_CMD(mt76_testmode_cmd)
+ CFG80211_TESTMODE_DUMP(mt76_testmode_dump)
+ #ifdef CONFIG_MAC80211_DEBUGFS
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 2185cd24e2e1cd..2f4755820b3cd7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -690,13 +690,17 @@ int mt7915_mcu_add_tx_ba(struct mt7915_dev *dev,
+ {
+ struct mt7915_sta *msta = (struct mt7915_sta *)params->sta->drv_priv;
+ struct mt7915_vif *mvif = msta->vif;
++ int ret;
+
++ mt76_worker_disable(&dev->mt76.tx_worker);
+ if (enable && !params->amsdu)
+ msta->wcid.amsdu = false;
++ ret = mt76_connac_mcu_sta_ba(&dev->mt76, &mvif->mt76, params,
++ MCU_EXT_CMD(STA_REC_UPDATE),
++ enable, true);
++ mt76_worker_enable(&dev->mt76.tx_worker);
+
+- return mt76_connac_mcu_sta_ba(&dev->mt76, &mvif->mt76, params,
+- MCU_EXT_CMD(STA_REC_UPDATE),
+- enable, true);
++ return ret;
+ }
+
+ int mt7915_mcu_add_rx_ba(struct mt7915_dev *dev,
+diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
+index 0043f7a0fdf97b..7999aeb76901f3 100644
+--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
++++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
+@@ -977,6 +977,9 @@ static int wilc_sdio_suspend(struct device *dev)
+
+ dev_info(dev, "sdio suspend\n");
+
++ if (!wilc->initialized)
++ return 0;
++
+ if (!IS_ERR(wilc->rtc_clk))
+ clk_disable_unprepare(wilc->rtc_clk);
+
+@@ -999,6 +1002,10 @@ static int wilc_sdio_resume(struct device *dev)
+ struct wilc *wilc = sdio_get_drvdata(func);
+
+ dev_info(dev, "sdio resume\n");
++
++ if (!wilc->initialized)
++ return 0;
++
+ wilc_sdio_init(wilc, true);
+ wilc_sdio_enable_interrupt(wilc);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/Kconfig b/drivers/net/wireless/realtek/rtw88/Kconfig
+index 22838ede03cd80..02b0d698413bec 100644
+--- a/drivers/net/wireless/realtek/rtw88/Kconfig
++++ b/drivers/net/wireless/realtek/rtw88/Kconfig
+@@ -12,6 +12,7 @@ if RTW88
+
+ config RTW88_CORE
+ tristate
++ select WANT_DEV_COREDUMP
+
+ config RTW88_PCI
+ tristate
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 9c282d84743b9e..46dfb0b294db94 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -3909,16 +3909,22 @@ struct rtw89_txpwr_conf {
+ const void *data;
+ };
+
++static inline bool rtw89_txpwr_entcpy(void *entry, const void *cursor, u8 size,
++ const struct rtw89_txpwr_conf *conf)
++{
++ u8 valid_size = min(size, conf->ent_sz);
++
++ memcpy(entry, cursor, valid_size);
++ return true;
++}
++
+ #define rtw89_txpwr_conf_valid(conf) (!!(conf)->data)
+
+ #define rtw89_for_each_in_txpwr_conf(entry, cursor, conf) \
+- for (typecheck(const void *, cursor), (cursor) = (conf)->data, \
+- memcpy(&(entry), cursor, \
+- min_t(u8, sizeof(entry), (conf)->ent_sz)); \
++ for (typecheck(const void *, cursor), (cursor) = (conf)->data; \
+ (cursor) < (conf)->data + (conf)->num_ents * (conf)->ent_sz; \
+- (cursor) += (conf)->ent_sz, \
+- memcpy(&(entry), cursor, \
+- min_t(u8, sizeof(entry), (conf)->ent_sz)))
++ (cursor) += (conf)->ent_sz) \
++ if (rtw89_txpwr_entcpy(&(entry), cursor, sizeof(entry), conf))
+
+ struct rtw89_txpwr_byrate_data {
+ struct rtw89_txpwr_conf conf;
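
The rewritten iterator above moves the entry memcpy out of the for-header and into an always-true inline helper that guards the loop body, so the copy happens only after the bounds test; the old form copied from the cursor before checking it, reading past the table on the final step (and on an empty table). A userspace sketch of the same macro shape:

/* Userspace sketch of the for-each-with-copy macro shape used above. */
#include <stdio.h>
#include <string.h>

struct conf {
	const unsigned char *data;
	unsigned char ent_sz;
	int num_ents;
};

static inline int entcpy(void *entry, const void *cur, unsigned char size,
			 const struct conf *c)
{
	memcpy(entry, cur, size < c->ent_sz ? size : c->ent_sz);
	return 1;	/* always true; exists only to guard the loop body */
}

/* The copy now happens after the bounds check, so an empty table is
 * never dereferenced and the cursor never reads past the last entry.
 */
#define for_each_ent(entry, cur, c)					\
	for ((cur) = (c)->data;						\
	     (cur) < (c)->data + (c)->num_ents * (c)->ent_sz;		\
	     (cur) += (c)->ent_sz)					\
		if (entcpy(&(entry), (cur), sizeof(entry), (c)))

int main(void)
{
	int ents[3] = { 1, 2, 3 }, entry;
	const unsigned char *cur;
	struct conf c = { (const unsigned char *)ents, sizeof(int), 3 };

	for_each_ent(entry, cur, &c)
		printf("entry = %d\n", entry);
	return 0;
}
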
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 9a4f23d83bf2a8..5c07db4f471d64 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -3788,7 +3788,7 @@ static int rtw89_mac_enable_cpu_ax(struct rtw89_dev *rtwdev, u8 boot_reason,
+
+ rtw89_write32(rtwdev, R_AX_WCPU_FW_CTRL, val);
+
+- if (rtwdev->chip->chip_id == RTL8852B)
++ if (rtw89_is_rtl885xb(rtwdev))
+ rtw89_write32_mask(rtwdev, R_AX_SEC_CTRL,
+ B_AX_SEC_IDMEM_SIZE_CONFIG_MASK, 0x2);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 1508693032cb20..de3e7e4c6e76bb 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -126,7 +126,9 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ rtwvif->rtwdev = rtwdev;
+ rtwvif->roc.state = RTW89_ROC_IDLE;
+ rtwvif->offchan = false;
+- list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
++ if (!rtw89_rtwvif_in_list(rtwdev, rtwvif))
++ list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
++
+ INIT_WORK(&rtwvif->update_beacon_work, rtw89_core_update_beacon_work);
+ INIT_DELAYED_WORK(&rtwvif->roc.roc_work, rtw89_roc_work);
+ rtw89_leave_ps_mode(rtwdev);
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index ad11d1414874ab..c038e5ca3e45a8 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -353,8 +353,8 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ csi_mode = RTW89_RA_RPT_MODE_HT;
+ ra_mask |= ((u64)sta->deflink.ht_cap.mcs.rx_mask[3] << 48) |
+ ((u64)sta->deflink.ht_cap.mcs.rx_mask[2] << 36) |
+- (sta->deflink.ht_cap.mcs.rx_mask[1] << 24) |
+- (sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
++ ((u64)sta->deflink.ht_cap.mcs.rx_mask[1] << 24) |
++ ((u64)sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
+ high_rate_masks = rtw89_ra_mask_ht_rates;
+ if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
+ stbc_en = 1;
+diff --git a/drivers/net/wireless/realtek/rtw89/util.h b/drivers/net/wireless/realtek/rtw89/util.h
+index e82e7df052d889..e669544cafd3f9 100644
+--- a/drivers/net/wireless/realtek/rtw89/util.h
++++ b/drivers/net/wireless/realtek/rtw89/util.h
+@@ -16,6 +16,24 @@
+ #define rtw89_for_each_rtwvif(rtwdev, rtwvif) \
+ list_for_each_entry(rtwvif, &(rtwdev)->rtwvifs_list, list)
+
++/* Before adding rtwvif to the list, we need to check if it already exists,
++ * because in some cases, such as when SER L2 happens during the WoWLAN flow,
++ * calling reconfig twice causes the list entry to be added twice.
++ */
++static inline bool rtw89_rtwvif_in_list(struct rtw89_dev *rtwdev,
++ struct rtw89_vif *new)
++{
++ struct rtw89_vif *rtwvif;
++
++ lockdep_assert_held(&rtwdev->mutex);
++
++ rtw89_for_each_rtwvif(rtwdev, rtwvif)
++ if (rtwvif == new)
++ return true;
++
++ return false;
++}
++
+ /* The result of negative dividend and positive divisor is undefined, but it
+ * should be one case of round-down or round-up. So, make it round-down if the
+ * result is round-up.
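rtw89_rtwvif_in_list() makes the vif insertion idempotent: reconfig paths that run twice (the SER-during-WoWLAN case described in the comment) can no longer corrupt the list by double-adding the same node. The same membership-check-before-insert idea in portable C, using a toy singly linked list rather than the kernel's list_head:

#include <stdio.h>

struct node {
	struct node *next;
	int id;
};

static int in_list(struct node *head, struct node *n)
{
	for (struct node *p = head; p; p = p->next)
		if (p == n)
			return 1;
	return 0;
}

/* Idempotent push: adding the same node twice is a no-op, so a
 * repeated reconfig cannot create a self-loop in the list. */
static void add_once(struct node **head, struct node *n)
{
	if (in_list(*head, n))
		return;
	n->next = *head;
	*head = n;
}

int main(void)
{
	struct node a = { .id = 1 }, *head = NULL;

	add_once(&head, &a);
	add_once(&head, &a);	/* ignored */
	printf("head=%d next=%p\n", head->id, (void *)head->next);
	return 0;
}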
+diff --git a/drivers/net/wwan/qcom_bam_dmux.c b/drivers/net/wwan/qcom_bam_dmux.c
+index 26ca719fa0de43..5dcb9a84a12e35 100644
+--- a/drivers/net/wwan/qcom_bam_dmux.c
++++ b/drivers/net/wwan/qcom_bam_dmux.c
+@@ -823,17 +823,17 @@ static int bam_dmux_probe(struct platform_device *pdev)
+ ret = devm_request_threaded_irq(dev, pc_ack_irq, NULL, bam_dmux_pc_ack_irq,
+ IRQF_ONESHOT, NULL, dmux);
+ if (ret)
+- return ret;
++ goto err_disable_pm;
+
+ ret = devm_request_threaded_irq(dev, dmux->pc_irq, NULL, bam_dmux_pc_irq,
+ IRQF_ONESHOT, NULL, dmux);
+ if (ret)
+- return ret;
++ goto err_disable_pm;
+
+ ret = irq_get_irqchip_state(dmux->pc_irq, IRQCHIP_STATE_LINE_LEVEL,
+ &dmux->pc_state);
+ if (ret)
+- return ret;
++ goto err_disable_pm;
+
+ /* Check if remote finished initialization before us */
+ if (dmux->pc_state) {
+@@ -844,6 +844,11 @@ static int bam_dmux_probe(struct platform_device *pdev)
+ }
+
+ return 0;
++
++err_disable_pm:
++ pm_runtime_disable(dev);
++ pm_runtime_dont_use_autosuspend(dev);
++ return ret;
+ }
+
+ static void bam_dmux_remove(struct platform_device *pdev)
+diff --git a/drivers/net/xen-netback/hash.c b/drivers/net/xen-netback/hash.c
+index ff96f22648efde..45ddce35f6d2c6 100644
+--- a/drivers/net/xen-netback/hash.c
++++ b/drivers/net/xen-netback/hash.c
+@@ -95,7 +95,7 @@ static u32 xenvif_new_hash(struct xenvif *vif, const u8 *data,
+
+ static void xenvif_flush_hash(struct xenvif *vif)
+ {
+- struct xenvif_hash_cache_entry *entry;
++ struct xenvif_hash_cache_entry *entry, *n;
+ unsigned long flags;
+
+ if (xenvif_hash_cache_size == 0)
+@@ -103,8 +103,7 @@ static void xenvif_flush_hash(struct xenvif *vif)
+
+ spin_lock_irqsave(&vif->hash.cache.lock, flags);
+
+- list_for_each_entry_rcu(entry, &vif->hash.cache.list, link,
+- lockdep_is_held(&vif->hash.cache.lock)) {
++ list_for_each_entry_safe(entry, n, &vif->hash.cache.list, link) {
+ list_del_rcu(&entry->link);
+ vif->hash.cache.count--;
+ kfree_rcu(entry, rcu);
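The flush path deletes every entry while walking the list, so the iterator must capture the next pointer before each node goes away; list_for_each_entry_safe() does exactly that with its extra cursor, and the RCU iteration variant (which protects concurrent readers, not the deleting walker itself) is unnecessary under the cache lock. The capture-next idiom in plain C:

#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

/* Free every node: save 'next' before freeing the current node. */
static void flush(struct node **head)
{
	struct node *n, *next;

	for (n = *head; n; n = next) {
		next = n->next;	/* captured before free() */
		free(n);
	}
	*head = NULL;
}

int main(void)
{
	struct node *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));
		n->next = head;
		head = n;
	}
	flush(&head);
	printf("head=%p\n", (void *)head);
	return 0;
}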
+diff --git a/drivers/nvme/common/keyring.c b/drivers/nvme/common/keyring.c
+index 6f7e7a8fa5ae47..ed5167f942d89b 100644
+--- a/drivers/nvme/common/keyring.c
++++ b/drivers/nvme/common/keyring.c
+@@ -20,6 +20,28 @@ key_serial_t nvme_keyring_id(void)
+ }
+ EXPORT_SYMBOL_GPL(nvme_keyring_id);
+
++static bool nvme_tls_psk_revoked(struct key *psk)
++{
++ return test_bit(KEY_FLAG_REVOKED, &psk->flags) ||
++ test_bit(KEY_FLAG_INVALIDATED, &psk->flags);
++}
++
++struct key *nvme_tls_key_lookup(key_serial_t key_id)
++{
++ struct key *key = key_lookup(key_id);
++
++ if (IS_ERR(key)) {
++ pr_err("key id %08x not found\n", key_id);
++ return key;
++ }
++ if (nvme_tls_psk_revoked(key)) {
++ pr_err("key id %08x revoked\n", key_id);
++ return ERR_PTR(-EKEYREVOKED);
++ }
++ return key;
++}
++EXPORT_SYMBOL_GPL(nvme_tls_key_lookup);
++
+ static void nvme_tls_psk_describe(const struct key *key, struct seq_file *m)
+ {
+ seq_puts(m, key->description);
+@@ -36,14 +58,12 @@ static bool nvme_tls_psk_match(const struct key *key,
+ pr_debug("%s: no key description\n", __func__);
+ return false;
+ }
+- match_len = strlen(key->description);
+- pr_debug("%s: id %s len %zd\n", __func__, key->description, match_len);
+-
+ if (!match_data->raw_data) {
+ pr_debug("%s: no match data\n", __func__);
+ return false;
+ }
+ match_id = match_data->raw_data;
++ match_len = strlen(match_id);
+ pr_debug("%s: match '%s' '%s' len %zd\n",
+ __func__, match_id, key->description, match_len);
+ return !memcmp(key->description, match_id, match_len);
+@@ -71,7 +91,7 @@ static struct key_type nvme_tls_psk_key_type = {
+
+ static struct key *nvme_tls_psk_lookup(struct key *keyring,
+ const char *hostnqn, const char *subnqn,
+- int hmac, bool generated)
++ u8 hmac, u8 psk_ver, bool generated)
+ {
+ char *identity;
+ size_t identity_len = (NVMF_NQN_SIZE) * 2 + 11;
+@@ -82,8 +102,8 @@ static struct key *nvme_tls_psk_lookup(struct key *keyring,
+ if (!identity)
+ return ERR_PTR(-ENOMEM);
+
+- snprintf(identity, identity_len, "NVMe0%c%02d %s %s",
+- generated ? 'G' : 'R', hmac, hostnqn, subnqn);
++ snprintf(identity, identity_len, "NVMe%u%c%02u %s %s",
++ psk_ver, generated ? 'G' : 'R', hmac, hostnqn, subnqn);
+
+ if (!keyring)
+ keyring = nvme_keyring;
+@@ -107,21 +127,38 @@ static struct key *nvme_tls_psk_lookup(struct key *keyring,
+ /*
+ * NVMe PSK priority list
+ *
+- * 'Retained' PSKs (ie 'generated == false')
+- * should be preferred to 'generated' PSKs,
+- * and SHA-384 should be preferred to SHA-256.
++ * 'Retained' PSKs (ie 'generated == false') should be preferred to 'generated'
++ * PSKs, PSKs with hash (psk_ver 1) should be preferred to PSKs without hash
++ * (psk_ver 0), and SHA-384 should be preferred to SHA-256.
+ */
+ static struct nvme_tls_psk_priority_list {
+ bool generated;
++ u8 psk_ver;
+ enum nvme_tcp_tls_cipher cipher;
+ } nvme_tls_psk_prio[] = {
+ { .generated = false,
++ .psk_ver = 1,
++ .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
++ { .generated = false,
++ .psk_ver = 1,
++ .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
++ { .generated = false,
++ .psk_ver = 0,
+ .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
+ { .generated = false,
++ .psk_ver = 0,
++ .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
++ { .generated = true,
++ .psk_ver = 1,
++ .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
++ { .generated = true,
++ .psk_ver = 1,
+ .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
+ { .generated = true,
++ .psk_ver = 0,
+ .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
+ { .generated = true,
++ .psk_ver = 0,
+ .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
+ };
+
+@@ -137,10 +174,11 @@ key_serial_t nvme_tls_psk_default(struct key *keyring,
+
+ for (prio = 0; prio < ARRAY_SIZE(nvme_tls_psk_prio); prio++) {
+ bool generated = nvme_tls_psk_prio[prio].generated;
++ u8 ver = nvme_tls_psk_prio[prio].psk_ver;
+ enum nvme_tcp_tls_cipher cipher = nvme_tls_psk_prio[prio].cipher;
+
+ tls_key = nvme_tls_psk_lookup(keyring, hostnqn, subnqn,
+- cipher, generated);
++ cipher, ver, generated);
+ if (!IS_ERR(tls_key)) {
+ tls_key_id = tls_key->serial;
+ key_put(tls_key);
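The lookup now builds a version-qualified identity of the form "NVMe<ver><G|R><hmac> <hostnqn> <subnqn>" and walks an eight-entry priority table: retained before generated, PSK version 1 before version 0, SHA-384 before SHA-256. A small sketch of just the identity formatting, with made-up NQNs and none of the keyring plumbing:

#include <stdio.h>

static void build_identity(char *buf, size_t len, unsigned psk_ver,
			   int generated, unsigned hmac,
			   const char *hostnqn, const char *subnqn)
{
	/* Same shape as the kernel's snprintf(): version digit,
	 * 'G'enerated or 'R'etained, two-digit hash id. */
	snprintf(buf, len, "NVMe%u%c%02u %s %s",
		 psk_ver, generated ? 'G' : 'R', hmac, hostnqn, subnqn);
}

int main(void)
{
	char id[256];

	build_identity(id, sizeof(id), 1, 0, 1,
		       "nqn.host.example", "nqn.subsys.example");
	printf("%s\n", id);	/* NVMe1R01 nqn.host.example ... */
	return 0;
}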
+diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
+index a3caef75aa0a83..486afe59818454 100644
+--- a/drivers/nvme/host/Kconfig
++++ b/drivers/nvme/host/Kconfig
+@@ -41,6 +41,7 @@ config NVME_HWMON
+
+ config NVME_FABRICS
+ select NVME_CORE
++ select NVME_KEYRING if NVME_TCP_TLS
+ tristate
+
+ config NVME_RDMA
+@@ -94,7 +95,6 @@ config NVME_TCP
+ config NVME_TCP_TLS
+ bool "NVMe over Fabrics TCP TLS encryption support"
+ depends on NVME_TCP
+- select NVME_KEYRING
+ select NET_HANDSHAKE
+ select KEYS
+ help
+@@ -109,6 +109,7 @@ config NVME_HOST_AUTH
+ bool "NVMe over Fabrics In-Band Authentication in host side"
+ depends on NVME_CORE
+ select NVME_AUTH
++ select NVME_KEYRING if NVME_TCP_TLS
+ help
+ This provides support for NVMe over Fabrics In-Band Authentication in
+ host side.
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 983909a600adb8..a6fb1359a7e148 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4678,7 +4678,6 @@ static void nvme_free_ctrl(struct device *dev)
+
+ if (!subsys || ctrl->instance != subsys->instance)
+ ida_free(&nvme_instance_ida, ctrl->instance);
+- key_put(ctrl->tls_key);
+ nvme_free_cels(ctrl);
+ nvme_mpath_uninit(ctrl);
+ cleanup_srcu_struct(&ctrl->srcu);
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index f5f545fa010354..432efcbf9e2f5f 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -665,7 +665,7 @@ static struct key *nvmf_parse_key(int key_id)
+ return ERR_PTR(-EINVAL);
+ }
+
+- key = key_lookup(key_id);
++ key = nvme_tls_key_lookup(key_id);
+ if (IS_ERR(key))
+ pr_err("key id %08x not found\n", key_id);
+ else
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index f1d58e70933f54..15c93ce07e2636 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -4,6 +4,7 @@
+ * Copyright (c) 2017-2021 Christoph Hellwig.
+ */
+ #include <linux/bio-integrity.h>
++#include <linux/blk-integrity.h>
+ #include <linux/ptrace.h> /* for force_successful_syscall_return */
+ #include <linux/nvme_ioctl.h>
+ #include <linux/io_uring/cmd.h>
+@@ -119,9 +120,14 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
+ struct request_queue *q = req->q;
+ struct nvme_ns *ns = q->queuedata;
+ struct block_device *bdev = ns ? ns->disk->part0 : NULL;
++ bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk);
++ bool has_metadata = meta_buffer && meta_len;
+ struct bio *bio = NULL;
+ int ret;
+
++ if (has_metadata && !supports_metadata)
++ return -EINVAL;
++
+ if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
+ struct iov_iter iter;
+
+@@ -143,15 +149,15 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
+ goto out;
+
+ bio = req->bio;
+- if (bdev) {
++ if (bdev)
+ bio_set_dev(bio, bdev);
+- if (meta_buffer && meta_len) {
+- ret = bio_integrity_map_user(bio, meta_buffer, meta_len,
+- meta_seed);
+- if (ret)
+- goto out_unmap;
+- req->cmd_flags |= REQ_INTEGRITY;
+- }
++
++ if (has_metadata) {
++ ret = bio_integrity_map_user(bio, meta_buffer, meta_len,
++ meta_seed);
++ if (ret)
++ goto out_unmap;
++ req->cmd_flags |= REQ_INTEGRITY;
+ }
+
+ return ret;
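The rework computes supports_metadata and has_metadata up front and rejects the "caller passed metadata, device has no integrity profile" combination with -EINVAL before anything is mapped. The validate-early pattern, reduced to its skeleton (illustrative names, no block-layer types):

#include <stdio.h>
#include <errno.h>

/* Reject a metadata buffer when the target cannot carry integrity
 * data, before any resources are mapped. */
static int map_request(const void *meta, size_t meta_len,
		       int dev_has_integrity)
{
	int has_metadata = meta && meta_len;

	if (has_metadata && !dev_has_integrity)
		return -EINVAL;
	/* ... map data, then metadata ... */
	return 0;
}

int main(void)
{
	char meta[16];

	printf("%d\n", map_request(meta, sizeof(meta), 0)); /* -EINVAL */
	printf("%d\n", map_request(NULL, 0, 0));            /* 0 */
	return 0;
}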
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index e01b1332d245a7..313a4f978a2cf3 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -377,7 +377,7 @@ struct nvme_ctrl {
+ struct nvme_dhchap_key *ctrl_key;
+ u16 transaction;
+ #endif
+- struct key *tls_key;
++ key_serial_t tls_pskid;
+
+ /* Power saving configuration */
+ u64 ps_max_latency_us;
+diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
+index ba05faaac562dc..72675b59a7a73b 100644
+--- a/drivers/nvme/host/sysfs.c
++++ b/drivers/nvme/host/sysfs.c
+@@ -670,9 +670,9 @@ static ssize_t tls_key_show(struct device *dev,
+ {
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+- if (!ctrl->tls_key)
++ if (!ctrl->tls_pskid)
+ return 0;
+- return sysfs_emit(buf, "%08x", key_serial(ctrl->tls_key));
++ return sysfs_emit(buf, "%08x", ctrl->tls_pskid);
+ }
+ static DEVICE_ATTR_RO(tls_key);
+ #endif
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index a2a47d3ab99f0d..e3d82e91151afa 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -165,6 +165,7 @@ struct nvme_tcp_queue {
+
+ bool hdr_digest;
+ bool data_digest;
++ bool tls_enabled;
+ struct ahash_request *rcv_hash;
+ struct ahash_request *snd_hash;
+ __le32 exp_ddgst;
+@@ -213,7 +214,21 @@ static inline int nvme_tcp_queue_id(struct nvme_tcp_queue *queue)
+ return queue - queue->ctrl->queues;
+ }
+
+-static inline bool nvme_tcp_tls(struct nvme_ctrl *ctrl)
++/*
++ * Check if the queue is TLS encrypted
++ */
++static inline bool nvme_tcp_queue_tls(struct nvme_tcp_queue *queue)
++{
++ if (!IS_ENABLED(CONFIG_NVME_TCP_TLS))
++ return 0;
++
++ return queue->tls_enabled;
++}
++
++/*
++ * Check if TLS is configured for the controller.
++ */
++static inline bool nvme_tcp_tls_configured(struct nvme_ctrl *ctrl)
+ {
+ if (!IS_ENABLED(CONFIG_NVME_TCP_TLS))
+ return 0;
+@@ -368,7 +383,7 @@ static inline bool nvme_tcp_queue_has_pending(struct nvme_tcp_queue *queue)
+
+ static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+ {
+- return !nvme_tcp_tls(&queue->ctrl->ctrl) &&
++ return !nvme_tcp_queue_tls(queue) &&
+ nvme_tcp_queue_has_pending(queue);
+ }
+
+@@ -1427,7 +1442,7 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ memset(&msg, 0, sizeof(msg));
+ iov.iov_base = icresp;
+ iov.iov_len = sizeof(*icresp);
+- if (nvme_tcp_tls(&queue->ctrl->ctrl)) {
++ if (nvme_tcp_queue_tls(queue)) {
+ msg.msg_control = cbuf;
+ msg.msg_controllen = sizeof(cbuf);
+ }
+@@ -1439,7 +1454,7 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ goto free_icresp;
+ }
+ ret = -ENOTCONN;
+- if (nvme_tcp_tls(&queue->ctrl->ctrl)) {
++ if (nvme_tcp_queue_tls(queue)) {
+ ctype = tls_get_record_type(queue->sock->sk,
+ (struct cmsghdr *)cbuf);
+ if (ctype != TLS_RECORD_TYPE_DATA) {
+@@ -1581,13 +1596,16 @@ static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid)
+ goto out_complete;
+ }
+
+- tls_key = key_lookup(pskid);
++ tls_key = nvme_tls_key_lookup(pskid);
+ if (IS_ERR(tls_key)) {
+ dev_warn(ctrl->ctrl.device, "queue %d: Invalid key %x\n",
+ qid, pskid);
+ queue->tls_err = -ENOKEY;
+ } else {
+- ctrl->ctrl.tls_key = tls_key;
++ queue->tls_enabled = true;
++ if (qid == 0)
++ ctrl->ctrl.tls_pskid = key_serial(tls_key);
++ key_put(tls_key);
+ queue->tls_err = 0;
+ }
+
+@@ -1768,7 +1786,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid,
+ }
+
+ /* If PSKs are configured try to start TLS */
+- if (IS_ENABLED(CONFIG_NVME_TCP_TLS) && pskid) {
++ if (nvme_tcp_tls_configured(nctrl) && pskid) {
+ ret = nvme_tcp_start_tls(nctrl, queue, pskid);
+ if (ret)
+ goto err_init_connect;
+@@ -1829,6 +1847,8 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ mutex_lock(&queue->queue_lock);
+ if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
+ __nvme_tcp_stop_queue(queue);
++ /* Stopping the queue will disable TLS */
++ queue->tls_enabled = false;
+ mutex_unlock(&queue->queue_lock);
+ }
+
+@@ -1925,16 +1945,17 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
+ int ret;
+ key_serial_t pskid = 0;
+
+- if (nvme_tcp_tls(ctrl)) {
++ if (nvme_tcp_tls_configured(ctrl)) {
+ if (ctrl->opts->tls_key)
+ pskid = key_serial(ctrl->opts->tls_key);
+- else
++ else {
+ pskid = nvme_tls_psk_default(ctrl->opts->keyring,
+ ctrl->opts->host->nqn,
+ ctrl->opts->subsysnqn);
+- if (!pskid) {
+- dev_err(ctrl->device, "no valid PSK found\n");
+- return -ENOKEY;
++ if (!pskid) {
++ dev_err(ctrl->device, "no valid PSK found\n");
++ return -ENOKEY;
++ }
+ }
+ }
+
+@@ -1957,13 +1978,14 @@ static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ {
+ int i, ret;
+
+- if (nvme_tcp_tls(ctrl) && !ctrl->tls_key) {
++ if (nvme_tcp_tls_configured(ctrl) && !ctrl->tls_pskid) {
+ dev_err(ctrl->device, "no PSK negotiated\n");
+ return -ENOKEY;
+ }
++
+ for (i = 1; i < ctrl->queue_count; i++) {
+ ret = nvme_tcp_alloc_queue(ctrl, i,
+- key_serial(ctrl->tls_key));
++ ctrl->tls_pskid);
+ if (ret)
+ goto out_free_queues;
+ }
+@@ -2144,6 +2166,11 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
+ if (remove)
+ nvme_unquiesce_admin_queue(ctrl);
+ nvme_tcp_destroy_admin_queue(ctrl, remove);
++ if (ctrl->tls_pskid) {
++ dev_dbg(ctrl->device, "Wipe negotiated TLS_PSK %08x\n",
++ ctrl->tls_pskid);
++ ctrl->tls_pskid = 0;
++ }
+ }
+
+ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index d669ce25b5f9c1..7e59283a44723c 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -8,6 +8,7 @@
+ #include <linux/logic_pio.h>
+ #include <linux/module.h>
+ #include <linux/of_address.h>
++#include <linux/overflow.h>
+ #include <linux/pci.h>
+ #include <linux/pci_regs.h>
+ #include <linux/sizes.h>
+@@ -1061,7 +1062,11 @@ static int __of_address_to_resource(struct device_node *dev, int index, int bar_
+ if (of_mmio_is_nonposted(dev))
+ flags |= IORESOURCE_MEM_NONPOSTED;
+
++ if (overflows_type(taddr, r->start))
++ return -EOVERFLOW;
+ r->start = taddr;
++ if (overflows_type(taddr + size - 1, r->end))
++ return -EOVERFLOW;
+ r->end = taddr + size - 1;
+ r->flags = flags;
+ r->name = name ? name : dev->full_name;
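overflows_type() rejects translated addresses that cannot be represented in the resource fields before the narrowing stores into r->start and r->end, returning -EOVERFLOW instead of silently truncating; this matters for 64-bit DT addresses on configurations with 32-bit resource_size_t. The same check written portably, with the widths pinned for illustration:

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

/* Pretend the resource field is 32 bits while the bus address is
 * 64 bits, as on a 32-bit kernel without 64-bit resources. The
 * example values below keep taddr + size from wrapping in 64 bits. */
typedef uint32_t resource_size_t;

static int set_range(resource_size_t *start, resource_size_t *end,
		     uint64_t taddr, uint64_t size)
{
	if (taddr > UINT32_MAX || taddr + size - 1 > UINT32_MAX)
		return -EOVERFLOW;	/* would truncate */
	*start = (resource_size_t)taddr;
	*end = (resource_size_t)(taddr + size - 1);
	return 0;
}

int main(void)
{
	resource_size_t s, e;

	printf("%d\n", set_range(&s, &e, 0x100000000ULL, 0x1000)); /* -EOVERFLOW */
	printf("%d\n", set_range(&s, &e, 0x80000000ULL, 0x1000));  /* 0 */
	return 0;
}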
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index 8fd63100ba8f08..36351ad6115eb1 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -357,8 +357,8 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ addr = of_get_property(device, "reg", &addr_len);
+
+ /* Prevent out-of-bounds read in case of longer interrupt parent address size */
+- if (addr_len > (3 * sizeof(__be32)))
+- addr_len = 3 * sizeof(__be32);
++ if (addr_len > sizeof(addr_buf))
++ addr_len = sizeof(addr_buf);
+ if (addr)
+ memcpy(addr_buf, addr, addr_len);
+
+@@ -716,8 +716,7 @@ struct irq_domain *of_msi_map_get_device_domain(struct device *dev, u32 id,
+ * @np: device node for @dev
+ * @token: bus type for this domain
+ *
+- * Parse the msi-parent property (both the simple and the complex
+- * versions), and returns the corresponding MSI domain.
++ * Parse the msi-parent property and return the corresponding MSI domain.
+ *
+ * Returns: the MSI domain for this device (or NULL on failure).
+ */
+@@ -725,33 +724,14 @@ struct irq_domain *of_msi_get_domain(struct device *dev,
+ struct device_node *np,
+ enum irq_domain_bus_token token)
+ {
+- struct device_node *msi_np;
++ struct of_phandle_iterator it;
+ struct irq_domain *d;
++ int err;
+
+- /* Check for a single msi-parent property */
+- msi_np = of_parse_phandle(np, "msi-parent", 0);
+- if (msi_np && !of_property_read_bool(msi_np, "#msi-cells")) {
+- d = irq_find_matching_host(msi_np, token);
+- if (!d)
+- of_node_put(msi_np);
+- return d;
+- }
+-
+- if (token == DOMAIN_BUS_PLATFORM_MSI) {
+- /* Check for the complex msi-parent version */
+- struct of_phandle_args args;
+- int index = 0;
+-
+- while (!of_parse_phandle_with_args(np, "msi-parent",
+- "#msi-cells",
+- index, &args)) {
+- d = irq_find_matching_host(args.np, token);
+- if (d)
+- return d;
+-
+- of_node_put(args.np);
+- index++;
+- }
++ of_for_each_phandle(&it, err, np, "msi-parent", "#msi-cells", 0) {
++ d = irq_find_matching_host(it.node, token);
++ if (d)
++ return d;
+ }
+
+ return NULL;
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 9100d82bfabc0d..3569050f9cf375 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -41,7 +41,7 @@
+
+ /*
+ * Cache if the event is allowed to trace Context information.
+- * This allows us to perform the check, i.e, perfmon_capable(),
++ * This allows us to perform the check, i.e, perf_allow_kernel(),
+ * in the context of the event owner, once, during the event_init().
+ */
+ #define SPE_PMU_HW_FLAGS_CX 0x00001
+@@ -50,7 +50,7 @@ static_assert((PERF_EVENT_FLAG_ARCH & SPE_PMU_HW_FLAGS_CX) == SPE_PMU_HW_FLAGS_C
+
+ static void set_spe_event_has_cx(struct perf_event *event)
+ {
+- if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
++ if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && !perf_allow_kernel(&event->attr))
+ event->hw.flags |= SPE_PMU_HW_FLAGS_CX;
+ }
+
+@@ -745,9 +745,8 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
+
+ set_spe_event_has_cx(event);
+ reg = arm_spe_event_to_pmscr(event);
+- if (!perfmon_capable() &&
+- (reg & (PMSCR_EL1_PA | PMSCR_EL1_PCT)))
+- return -EACCES;
++ if (reg & (PMSCR_EL1_PA | PMSCR_EL1_PCT))
++ return perf_allow_kernel(&event->attr);
+
+ return 0;
+ }
+diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
+index 04487ad7fba0b5..93c8e0fdb58985 100644
+--- a/drivers/perf/riscv_pmu_legacy.c
++++ b/drivers/perf/riscv_pmu_legacy.c
+@@ -22,13 +22,13 @@ static int pmu_legacy_ctr_get_idx(struct perf_event *event)
+ struct perf_event_attr *attr = &event->attr;
+
+ if (event->attr.type != PERF_TYPE_HARDWARE)
+- return -EOPNOTSUPP;
++ return -ENOENT;
+ if (attr->config == PERF_COUNT_HW_CPU_CYCLES)
+ return RISCV_PMU_LEGACY_CYCLE;
+ else if (attr->config == PERF_COUNT_HW_INSTRUCTIONS)
+ return RISCV_PMU_LEGACY_INSTRET;
+ else
+- return -EOPNOTSUPP;
++ return -ENOENT;
+ }
+
+ /* For legacy config & counter index are same */
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 25b1b699b3e279..671dc55cbd3a87 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -309,7 +309,7 @@ static void pmu_sbi_check_event(struct sbi_pmu_event_data *edata)
+ ret.value, 0x1, SBI_PMU_STOP_FLAG_RESET, 0, 0, 0);
+ } else if (ret.error == SBI_ERR_NOT_SUPPORTED) {
+ /* This event cannot be monitored by any counter */
+- edata->event_idx = -EINVAL;
++ edata->event_idx = -ENOENT;
+ }
+ }
+
+@@ -543,7 +543,7 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ }
+ break;
+ default:
+- ret = -EINVAL;
++ ret = -ENOENT;
+ break;
+ }
+
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 4ed9c7fd2b62af..9d18dfca6a673b 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -1774,6 +1774,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+
+ /* "event_list" sysfs to list events supported by the block */
+ attr = &pmc->block[blk_num].attr_event_list;
++ sysfs_attr_init(&attr->dev_attr.attr);
+ attr->dev_attr.attr.mode = 0444;
+ attr->dev_attr.show = mlxbf_pmc_event_list_show;
+ attr->nr = blk_num;
+@@ -1787,6 +1788,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ if (strstr(pmc->block_name[blk_num], "l3cache") ||
+ ((pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE))) {
+ attr = &pmc->block[blk_num].attr_enable;
++ sysfs_attr_init(&attr->dev_attr.attr);
+ attr->dev_attr.attr.mode = 0644;
+ attr->dev_attr.show = mlxbf_pmc_enable_show;
+ attr->dev_attr.store = mlxbf_pmc_enable_store;
+@@ -1814,6 +1816,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ /* "eventX" and "counterX" sysfs to program and read counter values */
+ for (j = 0; j < pmc->block[blk_num].counters; ++j) {
+ attr = &pmc->block[blk_num].attr_counter[j];
++ sysfs_attr_init(&attr->dev_attr.attr);
+ attr->dev_attr.attr.mode = 0644;
+ attr->dev_attr.show = mlxbf_pmc_counter_show;
+ attr->dev_attr.store = mlxbf_pmc_counter_store;
+@@ -1826,6 +1829,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ attr = NULL;
+
+ attr = &pmc->block[blk_num].attr_event[j];
++ sysfs_attr_init(&attr->dev_attr.attr);
+ attr->dev_attr.attr.mode = 0644;
+ attr->dev_attr.show = mlxbf_pmc_event_show;
+ attr->dev_attr.store = mlxbf_pmc_event_store;
+@@ -1861,6 +1865,7 @@ static int mlxbf_pmc_init_perftype_reg(struct device *dev, unsigned int blk_num)
+ while (count > 0) {
+ --count;
+ attr = &pmc->block[blk_num].attr_event[count];
++ sysfs_attr_init(&attr->dev_attr.attr);
+ attr->dev_attr.attr.mode = 0644;
+ attr->dev_attr.show = mlxbf_pmc_counter_show;
+ attr->dev_attr.store = mlxbf_pmc_counter_store;
+diff --git a/drivers/platform/x86/amd/pmf/pmf-quirks.c b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+index 48870ca52b4139..7cde5733b9cacf 100644
+--- a/drivers/platform/x86/amd/pmf/pmf-quirks.c
++++ b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+@@ -37,6 +37,14 @@ static const struct dmi_system_id fwbug_list[] = {
+ },
+ .driver_data = &quirk_no_sps_bug,
+ },
++ {
++ .ident = "ASUS TUF Gaming A14",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "FA401W"),
++ },
++ .driver_data = &quirk_no_sps_bug,
++ },
+ {}
+ };
+
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index 10e21563fa46ff..b51c061352da74 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -316,7 +316,9 @@ static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn
+ cpu >= nr_cpu_ids || cpu >= num_possible_cpus())
+ return NULL;
+
+- pkg_id = topology_physical_package_id(cpu);
++ pkg_id = topology_logical_package_id(cpu);
++ if (pkg_id >= topology_max_packages())
++ return NULL;
+
+ bus_number = isst_cpu_info[cpu].bus_info[bus_no];
+ if (bus_number < 0)
+diff --git a/drivers/platform/x86/lenovo-ymc.c b/drivers/platform/x86/lenovo-ymc.c
+index e0bbd6a14a89cb..bd9f95404c7cb0 100644
+--- a/drivers/platform/x86/lenovo-ymc.c
++++ b/drivers/platform/x86/lenovo-ymc.c
+@@ -43,6 +43,8 @@ struct lenovo_ymc_private {
+ };
+
+ static const struct key_entry lenovo_ymc_keymap[] = {
++ /* Ignore the uninitialized state */
++ { KE_IGNORE, 0x00 },
+ /* Laptop */
+ { KE_SW, 0x01, { .sw = { SW_TABLET_MODE, 0 } } },
+ /* Tablet */
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index f74af0a689f20c..0a39f68c641d15 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -840,6 +840,21 @@ static const struct ts_dmi_data rwc_nanote_p8_data = {
+ .properties = rwc_nanote_p8_props,
+ };
+
++static const struct property_entry rwc_nanote_next_props[] = {
++ PROPERTY_ENTRY_U32("touchscreen-min-x", 5),
++ PROPERTY_ENTRY_U32("touchscreen-min-y", 5),
++ PROPERTY_ENTRY_U32("touchscreen-size-x", 1785),
++ PROPERTY_ENTRY_U32("touchscreen-size-y", 1145),
++ PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++ PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-next.fw"),
++ { }
++};
++
++static const struct ts_dmi_data rwc_nanote_next_data = {
++ .acpi_name = "MSSL1680:00",
++ .properties = rwc_nanote_next_props,
++};
++
+ static const struct property_entry schneider_sct101ctm_props[] = {
+ PROPERTY_ENTRY_U32("touchscreen-size-x", 1715),
+ PROPERTY_ENTRY_U32("touchscreen-size-y", 1140),
+@@ -1589,6 +1604,17 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_SKU, "0001")
+ },
+ },
++ {
++ /* RWC NANOTE NEXT */
++ .driver_data = (void *)&rwc_nanote_next_data,
++ .matches = {
++ DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++ /* Above matches are too generic, add bios-version match */
++ DMI_MATCH(DMI_BIOS_VERSION, "S8A70R100-V005"),
++ },
++ },
+ {
+ /* Schneider SCT101CTM */
+ .driver_data = (void *)&schneider_sct101ctm_data,
+diff --git a/drivers/platform/x86/x86-android-tablets/core.c b/drivers/platform/x86/x86-android-tablets/core.c
+index 919ef447122958..2d62715359d811 100644
+--- a/drivers/platform/x86/x86-android-tablets/core.c
++++ b/drivers/platform/x86/x86-android-tablets/core.c
+@@ -390,8 +390,9 @@ static __init int x86_android_tablet_probe(struct platform_device *pdev)
+ for (i = 0; i < pdev_count; i++) {
+ pdevs[i] = platform_device_register_full(&dev_info->pdev_info[i]);
+ if (IS_ERR(pdevs[i])) {
++ ret = PTR_ERR(pdevs[i]);
+ x86_android_tablet_remove(pdev);
+- return PTR_ERR(pdevs[i]);
++ return ret;
+ }
+ }
+
+@@ -443,8 +444,9 @@ static __init int x86_android_tablet_probe(struct platform_device *pdev)
+ PLATFORM_DEVID_AUTO,
+ &pdata, sizeof(pdata));
+ if (IS_ERR(pdevs[pdev_count])) {
++ ret = PTR_ERR(pdevs[pdev_count]);
+ x86_android_tablet_remove(pdev);
+- return PTR_ERR(pdevs[pdev_count]);
++ return ret;
+ }
+ pdev_count++;
+ }
+diff --git a/drivers/platform/x86/x86-android-tablets/other.c b/drivers/platform/x86/x86-android-tablets/other.c
+index eb0e55c69dfedb..2549c348c8825a 100644
+--- a/drivers/platform/x86/x86-android-tablets/other.c
++++ b/drivers/platform/x86/x86-android-tablets/other.c
+@@ -670,7 +670,7 @@ static const struct software_node *ktd2026_node_group[] = {
+ * is controlled by the "pwm_soc_lpss_2" PWM output.
+ */
+ #define XIAOMI_MIPAD2_LED_PERIOD_NS 19200
+-#define XIAOMI_MIPAD2_LED_DEFAULT_DUTY 6000 /* From Android kernel */
++#define XIAOMI_MIPAD2_LED_MAX_DUTY_NS 6000 /* From Android kernel */
+
+ static struct pwm_device *xiaomi_mipad2_led_pwm;
+
+@@ -679,7 +679,7 @@ static int xiaomi_mipad2_brightness_set(struct led_classdev *led_cdev,
+ {
+ struct pwm_state state = {
+ .period = XIAOMI_MIPAD2_LED_PERIOD_NS,
+- .duty_cycle = val,
++ .duty_cycle = XIAOMI_MIPAD2_LED_MAX_DUTY_NS * val / LED_FULL,
+ /* Always set PWM enabled to avoid the pin floating */
+ .enabled = true,
+ };
+@@ -701,11 +701,11 @@ static int __init xiaomi_mipad2_init(struct device *dev)
+ return -ENOMEM;
+
+ led_cdev->name = "mipad2:white:touch-buttons-backlight";
+- led_cdev->max_brightness = XIAOMI_MIPAD2_LED_PERIOD_NS;
+- /* "input-events" trigger uses blink_brightness */
+- led_cdev->blink_brightness = XIAOMI_MIPAD2_LED_DEFAULT_DUTY;
++ led_cdev->max_brightness = LED_FULL;
+ led_cdev->default_trigger = "input-events";
+ led_cdev->brightness_set_blocking = xiaomi_mipad2_brightness_set;
++ /* Turn LED off during suspend */
++ led_cdev->flags = LED_CORE_SUSPENDRESUME;
+
+ ret = devm_led_classdev_register(dev, led_cdev);
+ if (ret)
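The LED fix decouples the sysfs brightness scale from the PWM period: user space still writes 0-255 (LED_FULL), and the driver maps that onto the 0-6000 ns duty range the Android kernel used as a maximum. The scaling in isolation, multiplying before dividing to keep integer precision:

#include <stdio.h>

#define LED_FULL	255U
#define MAX_DUTY_NS	6000U	/* maximum duty from the Android kernel */

/* Map a 0..LED_FULL brightness onto 0..MAX_DUTY_NS of duty cycle. */
static unsigned int brightness_to_duty(unsigned int val)
{
	return MAX_DUTY_NS * val / LED_FULL;
}

int main(void)
{
	printf("%u %u %u\n", brightness_to_duty(0),
	       brightness_to_duty(128), brightness_to_duty(LED_FULL));
	return 0;
}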
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index acdc3e7b2eae2a..3a4e59b93677c3 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -1758,7 +1758,6 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ genpd_lock(genpd);
+
+ genpd_set_cpumask(genpd, gpd_data->cpu);
+- dev_pm_domain_set(dev, &genpd->domain);
+
+ genpd->device_count++;
+ if (gd)
+@@ -1767,6 +1766,7 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
+
+ genpd_unlock(genpd);
++ dev_pm_domain_set(dev, &genpd->domain);
+ out:
+ if (ret)
+ genpd_free_dev_data(dev, gpd_data);
+@@ -1823,12 +1823,13 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
+ genpd->gd->max_off_time_changed = true;
+
+ genpd_clear_cpumask(genpd, gpd_data->cpu);
+- dev_pm_domain_set(dev, NULL);
+
+ list_del_init(&pdd->list_node);
+
+ genpd_unlock(genpd);
+
++ dev_pm_domain_set(dev, NULL);
++
+ if (genpd->detach_dev)
+ genpd->detach_dev(genpd, dev);
+
+@@ -3181,7 +3182,7 @@ static void rtpm_status_str(struct seq_file *s, struct device *dev)
+ else
+ WARN_ON(1);
+
+- seq_printf(s, "%-25s ", p);
++ seq_printf(s, "%-26s ", p);
+ }
+
+ static void mode_status_str(struct seq_file *s, struct device *dev)
+@@ -3190,7 +3191,7 @@ static void mode_status_str(struct seq_file *s, struct device *dev)
+
+ gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+
+- seq_printf(s, "%9s", gpd_data->hw_mode ? "HW" : "SW");
++ seq_printf(s, "%2s", gpd_data->hw_mode ? "HW" : "SW");
+ }
+
+ static void perf_status_str(struct seq_file *s, struct device *dev)
+@@ -3209,7 +3210,6 @@ static int genpd_summary_one(struct seq_file *s,
+ [GENPD_STATE_OFF] = "off"
+ };
+ struct pm_domain_data *pm_data;
+- const char *kobj_path;
+ struct gpd_link *link;
+ char state[16];
+ int ret;
+@@ -3226,7 +3226,7 @@ static int genpd_summary_one(struct seq_file *s,
+ else
+ snprintf(state, sizeof(state), "%s",
+ status_lookup[genpd->status]);
+- seq_printf(s, "%-30s %-49s %u", genpd->name, state, genpd->performance_state);
++ seq_printf(s, "%-30s %-30s %u", genpd->name, state, genpd->performance_state);
+
+ /*
+ * Modifications on the list require holding locks on both
+@@ -3242,17 +3242,10 @@ static int genpd_summary_one(struct seq_file *s,
+ }
+
+ list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
+- kobj_path = kobject_get_path(&pm_data->dev->kobj,
+- genpd_is_irq_safe(genpd) ?
+- GFP_ATOMIC : GFP_KERNEL);
+- if (kobj_path == NULL)
+- continue;
+-
+- seq_printf(s, "\n %-50s ", kobj_path);
++ seq_printf(s, "\n %-30s ", dev_name(pm_data->dev));
+ rtpm_status_str(s, pm_data->dev);
+ perf_status_str(s, pm_data->dev);
+ mode_status_str(s, pm_data->dev);
+- kfree(kobj_path);
+ }
+
+ seq_puts(s, "\n");
+@@ -3267,9 +3260,9 @@ static int summary_show(struct seq_file *s, void *data)
+ struct generic_pm_domain *genpd;
+ int ret = 0;
+
+- seq_puts(s, "domain status children performance\n");
+- seq_puts(s, " /device runtime status managed by\n");
+- seq_puts(s, "------------------------------------------------------------------------------------------------------------\n");
++ seq_puts(s, "domain status children performance\n");
++ seq_puts(s, " /device runtime status managed by\n");
++ seq_puts(s, "------------------------------------------------------------------------------\n");
+
+ ret = mutex_lock_interruptible(&gpd_list_lock);
+ if (ret)
+@@ -3421,23 +3414,14 @@ static int devices_show(struct seq_file *s, void *data)
+ {
+ struct generic_pm_domain *genpd = s->private;
+ struct pm_domain_data *pm_data;
+- const char *kobj_path;
+ int ret = 0;
+
+ ret = genpd_lock_interruptible(genpd);
+ if (ret)
+ return -ERESTARTSYS;
+
+- list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
+- kobj_path = kobject_get_path(&pm_data->dev->kobj,
+- genpd_is_irq_safe(genpd) ?
+- GFP_ATOMIC : GFP_KERNEL);
+- if (kobj_path == NULL)
+- continue;
+-
+- seq_printf(s, "%s\n", kobj_path);
+- kfree(kobj_path);
+- }
++ list_for_each_entry(pm_data, &genpd->dev_list, list_node)
++ seq_printf(s, "%s\n", dev_name(pm_data->dev));
+
+ genpd_unlock(genpd);
+ return ret;
+diff --git a/drivers/power/reset/brcmstb-reboot.c b/drivers/power/reset/brcmstb-reboot.c
+index 0f2944dc935516..a04713f191a112 100644
+--- a/drivers/power/reset/brcmstb-reboot.c
++++ b/drivers/power/reset/brcmstb-reboot.c
+@@ -62,9 +62,6 @@ static int brcmstb_restart_handler(struct notifier_block *this,
+ return NOTIFY_DONE;
+ }
+
+- while (1)
+- ;
+-
+ return NOTIFY_DONE;
+ }
+
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 8f6025acd10a08..b43c2557241e75 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -1232,11 +1232,7 @@ EXPORT_SYMBOL_GPL(power_supply_set_property);
+ int power_supply_property_is_writeable(struct power_supply *psy,
+ enum power_supply_property psp)
+ {
+- if (atomic_read(&psy->use_cnt) <= 0 ||
+- !psy->desc->property_is_writeable)
+- return -ENODEV;
+-
+- return psy->desc->property_is_writeable(psy, psp);
++ return psy->desc->property_is_writeable && psy->desc->property_is_writeable(psy, psp);
+ }
+ EXPORT_SYMBOL_GPL(power_supply_property_is_writeable);
+
+diff --git a/drivers/power/supply/power_supply_hwmon.c b/drivers/power/supply/power_supply_hwmon.c
+index baacefbdf768ab..6fbbfb1c685e6c 100644
+--- a/drivers/power/supply/power_supply_hwmon.c
++++ b/drivers/power/supply/power_supply_hwmon.c
+@@ -318,7 +318,8 @@ static const struct hwmon_channel_info * const power_supply_hwmon_info[] = {
+ HWMON_T_INPUT |
+ HWMON_T_MAX |
+ HWMON_T_MIN |
+- HWMON_T_MIN_ALARM,
++ HWMON_T_MIN_ALARM |
++ HWMON_T_MAX_ALARM,
+
+ HWMON_T_LABEL |
+ HWMON_T_INPUT |
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index 39a47540c59006..2992fd4eca6486 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -194,6 +194,10 @@ static void k3_r5_rproc_mbox_callback(struct mbox_client *client, void *data)
+ const char *name = kproc->rproc->name;
+ u32 msg = omap_mbox_message(data);
+
++ /* Do not forward message from a detached core */
++ if (kproc->rproc->state == RPROC_DETACHED)
++ return;
++
+ dev_dbg(dev, "mbox msg: 0x%x\n", msg);
+
+ switch (msg) {
+@@ -229,6 +233,10 @@ static void k3_r5_rproc_kick(struct rproc *rproc, int vqid)
+ mbox_msg_t msg = (mbox_msg_t)vqid;
+ int ret;
+
++ /* Do not forward message to a detached core */
++ if (kproc->rproc->state == RPROC_DETACHED)
++ return;
++
+ /* send the index of the triggered virtqueue in the mailbox payload */
+ ret = mbox_send_message(kproc->mbox, (void *)msg);
+ if (ret < 0)
+@@ -399,12 +407,9 @@ static int k3_r5_rproc_request_mbox(struct rproc *rproc)
+ client->knows_txdone = false;
+
+ kproc->mbox = mbox_request_channel(client, 0);
+- if (IS_ERR(kproc->mbox)) {
+- ret = -EBUSY;
+- dev_err(dev, "mbox_request_channel failed: %ld\n",
+- PTR_ERR(kproc->mbox));
+- return ret;
+- }
++ if (IS_ERR(kproc->mbox))
++ return dev_err_probe(dev, PTR_ERR(kproc->mbox),
++ "mbox_request_channel failed\n");
+
+ /*
+ * Ping the remote processor, this is only for sanity-sake for now;
+@@ -464,8 +469,6 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
+ ret);
+ return ret;
+ }
+- core->released_from_reset = true;
+- wake_up_interruptible(&cluster->core_transition);
+
+ /*
+ * Newer IP revisions like on J7200 SoCs support h/w auto-initialization
+@@ -552,10 +555,6 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ u32 boot_addr;
+ int ret;
+
+- ret = k3_r5_rproc_request_mbox(rproc);
+- if (ret)
+- return ret;
+-
+ boot_addr = rproc->bootaddr;
+ /* TODO: add boot_addr sanity checking */
+ dev_dbg(dev, "booting R5F core using boot addr = 0x%x\n", boot_addr);
+@@ -564,7 +563,7 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ core = kproc->core;
+ ret = ti_sci_proc_set_config(core->tsp, boot_addr, 0, 0);
+ if (ret)
+- goto put_mbox;
++ return ret;
+
+ /* unhalt/run all applicable cores */
+ if (cluster->mode == CLUSTER_MODE_LOCKSTEP) {
+@@ -580,13 +579,15 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ if (core != core0 && core0->rproc->state == RPROC_OFFLINE) {
+ dev_err(dev, "%s: can not start core 1 before core 0\n",
+ __func__);
+- ret = -EPERM;
+- goto put_mbox;
++ return -EPERM;
+ }
+
+ ret = k3_r5_core_run(core);
+ if (ret)
+- goto put_mbox;
++ return ret;
++
++ core->released_from_reset = true;
++ wake_up_interruptible(&cluster->core_transition);
+ }
+
+ return 0;
+@@ -596,8 +597,6 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ if (k3_r5_core_halt(core))
+ dev_warn(core->dev, "core halt back failed\n");
+ }
+-put_mbox:
+- mbox_free_channel(kproc->mbox);
+ return ret;
+ }
+
+@@ -658,8 +657,6 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ goto out;
+ }
+
+- mbox_free_channel(kproc->mbox);
+-
+ return 0;
+
+ unroll_core_halt:
+@@ -674,42 +671,22 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ /*
+ * Attach to a running R5F remote processor (IPC-only mode)
+ *
+- * The R5F attach callback only needs to request the mailbox, the remote
+- * processor is already booted, so there is no need to issue any TI-SCI
+- * commands to boot the R5F cores in IPC-only mode. This callback is invoked
+- * only in IPC-only mode.
++ * The R5F attach callback is a NOP. The remote processor is already booted, and
++ * all required resources have been acquired during the probe routine, so there
++ * is no need to issue any TI-SCI commands to boot the R5F cores in IPC-only
++ * mode. This callback is invoked only in IPC-only mode and exists because
++ * rproc_validate() checks for its existence.
+ */
+-static int k3_r5_rproc_attach(struct rproc *rproc)
+-{
+- struct k3_r5_rproc *kproc = rproc->priv;
+- struct device *dev = kproc->dev;
+- int ret;
+-
+- ret = k3_r5_rproc_request_mbox(rproc);
+- if (ret)
+- return ret;
+-
+- dev_info(dev, "R5F core initialized in IPC-only mode\n");
+- return 0;
+-}
++static int k3_r5_rproc_attach(struct rproc *rproc) { return 0; }
+
+ /*
+ * Detach from a running R5F remote processor (IPC-only mode)
+ *
+- * The R5F detach callback performs the opposite operation to attach callback
+- * and only needs to release the mailbox, the R5F cores are not stopped and
+- * will be left in booted state in IPC-only mode. This callback is invoked
+- * only in IPC-only mode.
++ * The R5F detach callback is a NOP. The R5F cores are not stopped and will be
++ * left in booted state in IPC-only mode. This callback is invoked only in
++ * IPC-only mode and exists for sanity's sake.
+ */
+-static int k3_r5_rproc_detach(struct rproc *rproc)
+-{
+- struct k3_r5_rproc *kproc = rproc->priv;
+- struct device *dev = kproc->dev;
+-
+- mbox_free_channel(kproc->mbox);
+- dev_info(dev, "R5F core deinitialized in IPC-only mode\n");
+- return 0;
+-}
++static int k3_r5_rproc_detach(struct rproc *rproc) { return 0; }
+
+ /*
+ * This function implements the .get_loaded_rsc_table() callback and is used
+@@ -1278,6 +1255,10 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ kproc->rproc = rproc;
+ core->rproc = rproc;
+
++ ret = k3_r5_rproc_request_mbox(rproc);
++ if (ret)
++ return ret;
++
+ ret = k3_r5_rproc_configure_mode(kproc);
+ if (ret < 0)
+ goto err_config;
+@@ -1332,7 +1313,7 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ dev_err(dev,
+ "Timed out waiting for %s core to power up!\n",
+ rproc->name);
+- return ret;
++ goto err_powerup;
+ }
+ }
+
+@@ -1348,6 +1329,7 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ }
+ }
+
++err_powerup:
+ rproc_del(rproc);
+ err_add:
+ k3_r5_reserved_mem_exit(kproc);
+@@ -1395,6 +1377,8 @@ static void k3_r5_cluster_rproc_exit(void *data)
+ }
+ }
+
++ mbox_free_channel(kproc->mbox);
++
+ rproc_del(rproc);
+
+ k3_r5_reserved_mem_exit(kproc);
+diff --git a/drivers/rtc/rtc-at91sam9.c b/drivers/rtc/rtc-at91sam9.c
+index f93bee96e36233..993c0878fb6606 100644
+--- a/drivers/rtc/rtc-at91sam9.c
++++ b/drivers/rtc/rtc-at91sam9.c
+@@ -368,6 +368,7 @@ static int at91_rtc_probe(struct platform_device *pdev)
+ return ret;
+
+ rtc->gpbr = syscon_node_to_regmap(args.np);
++ of_node_put(args.np);
+ rtc->gpbr_offset = args.args[0];
+ if (IS_ERR(rtc->gpbr)) {
+ dev_err(&pdev->dev, "failed to retrieve gpbr regmap, aborting.\n");
+diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
+index 00e245173320c3..4fcb73b727aa5d 100644
+--- a/drivers/scsi/NCR5380.c
++++ b/drivers/scsi/NCR5380.c
+@@ -1807,8 +1807,11 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ return;
+ case PHASE_MSGIN:
+ len = 1;
++ tmp = 0xff;
+ data = &tmp;
+ NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
++ if (tmp == 0xff)
++ break;
+ ncmd->message = tmp;
+
+ switch (tmp) {
+@@ -1996,6 +1999,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ break;
+ case PHASE_STATIN:
+ len = 1;
++ tmp = ncmd->status;
+ data = &tmp;
+ NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
+ ncmd->status = tmp;
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 7d5a155073c627..9b66fa29fb05ca 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -2029,8 +2029,8 @@ struct aac_srb_reply
+ };
+
+ struct aac_srb_unit {
+- struct aac_srb srb;
+ struct aac_srb_reply srb_reply;
++ struct aac_srb srb;
+ };
+
+ /*
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 7c147d6ea8a8ff..e5a9c5a323f8be 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -306,6 +306,14 @@ struct lpfc_stats {
+
+ struct lpfc_hba;
+
++/* Data structure to keep withheld FLOGI_ACC information */
++struct lpfc_defer_flogi_acc {
++ bool flag;
++ u16 rx_id;
++ u16 ox_id;
++ struct lpfc_nodelist *ndlp;
++
++};
+
+ #define LPFC_VMID_TIMER 300 /* timer interval in seconds */
+
+@@ -1430,9 +1438,7 @@ struct lpfc_hba {
+ uint16_t vlan_id;
+ struct list_head fcf_conn_rec_list;
+
+- bool defer_flogi_acc_flag;
+- uint16_t defer_flogi_acc_rx_id;
+- uint16_t defer_flogi_acc_ox_id;
++ struct lpfc_defer_flogi_acc defer_flogi_acc;
+
+ spinlock_t ct_ev_lock; /* synchronize access to ct_ev_waiters */
+ struct list_head ct_ev_waiters;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 929cbfc95163bf..e27f5d955edb41 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1390,7 +1390,7 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
+
+ /* Check for a deferred FLOGI ACC condition */
+- if (phba->defer_flogi_acc_flag) {
++ if (phba->defer_flogi_acc.flag) {
+ /* lookup ndlp for received FLOGI */
+ ndlp = lpfc_findnode_did(vport, 0);
+ if (!ndlp)
+@@ -1404,34 +1404,38 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ if (phba->sli_rev == LPFC_SLI_REV4) {
+ bf_set(wqe_ctxt_tag,
+ &defer_flogi_acc.wqe.xmit_els_rsp.wqe_com,
+- phba->defer_flogi_acc_rx_id);
++ phba->defer_flogi_acc.rx_id);
+ bf_set(wqe_rcvoxid,
+ &defer_flogi_acc.wqe.xmit_els_rsp.wqe_com,
+- phba->defer_flogi_acc_ox_id);
++ phba->defer_flogi_acc.ox_id);
+ } else {
+ icmd = &defer_flogi_acc.iocb;
+- icmd->ulpContext = phba->defer_flogi_acc_rx_id;
++ icmd->ulpContext = phba->defer_flogi_acc.rx_id;
+ icmd->unsli3.rcvsli3.ox_id =
+- phba->defer_flogi_acc_ox_id;
++ phba->defer_flogi_acc.ox_id;
+ }
+
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "3354 Xmit deferred FLOGI ACC: rx_id: x%x,"
+ " ox_id: x%x, hba_flag x%lx\n",
+- phba->defer_flogi_acc_rx_id,
+- phba->defer_flogi_acc_ox_id, phba->hba_flag);
++ phba->defer_flogi_acc.rx_id,
++ phba->defer_flogi_acc.ox_id, phba->hba_flag);
+
+ /* Send deferred FLOGI ACC */
+ lpfc_els_rsp_acc(vport, ELS_CMD_FLOGI, &defer_flogi_acc,
+ ndlp, NULL);
+
+- phba->defer_flogi_acc_flag = false;
+- vport->fc_myDID = did;
++ phba->defer_flogi_acc.flag = false;
+
+- /* Decrement ndlp reference count to indicate the node can be
+- * released when other references are removed.
++ /* Decrement the held ndlp that was incremented when the
++ * deferred flogi acc flag was set.
+ */
+- lpfc_nlp_put(ndlp);
++ if (phba->defer_flogi_acc.ndlp) {
++ lpfc_nlp_put(phba->defer_flogi_acc.ndlp);
++ phba->defer_flogi_acc.ndlp = NULL;
++ }
++
++ vport->fc_myDID = did;
+ }
+
+ return 0;
+@@ -5240,9 +5244,10 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ /* ACC to LOGO completes to NPort <nlp_DID> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "0109 ACC to LOGO completes to NPort x%x refcnt %d "
+- "Data: x%x x%x x%x\n",
+- ndlp->nlp_DID, kref_read(&ndlp->kref), ndlp->nlp_flag,
+- ndlp->nlp_state, ndlp->nlp_rpi);
++ "last els x%x Data: x%x x%x x%x\n",
++ ndlp->nlp_DID, kref_read(&ndlp->kref),
++ ndlp->nlp_last_elscmd, ndlp->nlp_flag, ndlp->nlp_state,
++ ndlp->nlp_rpi);
+
+ /* This clause allows the LOGO ACC to complete and free resources
+ * for the Fabric Domain Controller. It does deliberately skip
+@@ -5254,18 +5259,22 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ goto out;
+
+ if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
+- /* If PLOGI is being retried, PLOGI completion will cleanup the
+- * node. The NLP_NPR_2B_DISC flag needs to be retained to make
+- * progress on nodes discovered from last RSCN.
+- */
+- if ((ndlp->nlp_flag & NLP_DELAY_TMO) &&
+- (ndlp->nlp_last_elscmd == ELS_CMD_PLOGI))
+- goto out;
+-
+ if (ndlp->nlp_flag & NLP_RPI_REGISTERED)
+ lpfc_unreg_rpi(vport, ndlp);
+
++ /* If came from PRLO, then PRLO_ACC is done.
++ * Start rediscovery now.
++ */
++ if (ndlp->nlp_last_elscmd == ELS_CMD_PRLO) {
++ spin_lock_irq(&ndlp->lock);
++ ndlp->nlp_flag |= NLP_NPR_2B_DISC;
++ spin_unlock_irq(&ndlp->lock);
++ ndlp->nlp_prev_state = ndlp->nlp_state;
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
++ lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
++ }
+ }
++
+ out:
+ /*
+ * The driver received a LOGO from the rport and has ACK'd it.
+@@ -8454,9 +8463,9 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+
+ /* Defer ACC response until AFTER we issue a FLOGI */
+ if (!test_bit(HBA_FLOGI_ISSUED, &phba->hba_flag)) {
+- phba->defer_flogi_acc_rx_id = bf_get(wqe_ctxt_tag,
++ phba->defer_flogi_acc.rx_id = bf_get(wqe_ctxt_tag,
+ &wqe->xmit_els_rsp.wqe_com);
+- phba->defer_flogi_acc_ox_id = bf_get(wqe_rcvoxid,
++ phba->defer_flogi_acc.ox_id = bf_get(wqe_rcvoxid,
+ &wqe->xmit_els_rsp.wqe_com);
+
+ vport->fc_myDID = did;
+@@ -8464,11 +8473,17 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "3344 Deferring FLOGI ACC: rx_id: x%x,"
+ " ox_id: x%x, hba_flag x%lx\n",
+- phba->defer_flogi_acc_rx_id,
+- phba->defer_flogi_acc_ox_id, phba->hba_flag);
++ phba->defer_flogi_acc.rx_id,
++ phba->defer_flogi_acc.ox_id, phba->hba_flag);
+
+- phba->defer_flogi_acc_flag = true;
++ phba->defer_flogi_acc.flag = true;
+
++ /* This nlp_get is paired with nlp_puts that reset the
++ * defer_flogi_acc.flag back to false. We need to retain
++ * a kref on the ndlp until the deferred FLOGI ACC is
++ * processed or cancelled.
++ */
++ phba->defer_flogi_acc.ndlp = lpfc_nlp_get(ndlp);
+ return 0;
+ }
+
+@@ -10504,7 +10519,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+
+ lpfc_els_rcv_flogi(vport, elsiocb, ndlp);
+ /* retain node if our response is deferred */
+- if (phba->defer_flogi_acc_flag)
++ if (phba->defer_flogi_acc.flag)
+ break;
+ if (newnode)
+ lpfc_disc_state_machine(vport, ndlp, NULL,
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 6943f6c6395c41..35c9181c6608a5 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -175,7 +175,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ ndlp->nlp_state, ndlp->fc4_xpt_flags);
+
+ /* Don't schedule a worker thread event if the vport is going down. */
+- if (test_bit(FC_UNLOADING, &vport->load_flag)) {
++ if (test_bit(FC_UNLOADING, &vport->load_flag) ||
++ !test_bit(HBA_SETUP, &phba->hba_flag)) {
+ spin_lock_irqsave(&ndlp->lock, iflags);
+ ndlp->rport = NULL;
+
+@@ -1254,7 +1255,14 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ lpfc_scsi_dev_block(phba);
+ offline = pci_channel_offline(phba->pcidev);
+
+- phba->defer_flogi_acc_flag = false;
++ /* Decrement the held ndlp if there is a deferred flogi acc */
++ if (phba->defer_flogi_acc.flag) {
++ if (phba->defer_flogi_acc.ndlp) {
++ lpfc_nlp_put(phba->defer_flogi_acc.ndlp);
++ phba->defer_flogi_acc.ndlp = NULL;
++ }
++ }
++ phba->defer_flogi_acc.flag = false;
+
+ /* Clear external loopback plug detected flag */
+ phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
+@@ -1376,7 +1384,7 @@ lpfc_linkup_port(struct lpfc_vport *vport)
+ (vport != phba->pport))
+ return;
+
+- if (phba->defer_flogi_acc_flag) {
++ if (phba->defer_flogi_acc.flag) {
+ clear_bit(FC_ABORT_DISCOVERY, &vport->fc_flag);
+ clear_bit(FC_RSCN_MODE, &vport->fc_flag);
+ clear_bit(FC_NLP_MORE, &vport->fc_flag);
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index f6a53446e57f9d..4574716c8764fb 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -2652,8 +2652,26 @@ lpfc_rcv_prlo_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ /* flush the target */
+ lpfc_sli_abort_iocb(vport, ndlp->nlp_sid, 0, LPFC_CTX_TGT);
+
+- /* Treat like rcv logo */
+- lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_PRLO);
++ /* Send PRLO_ACC */
++ spin_lock_irq(&ndlp->lock);
++ ndlp->nlp_flag |= NLP_LOGO_ACC;
++ spin_unlock_irq(&ndlp->lock);
++ lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
++
++ /* Save ELS_CMD_PRLO as the last elscmd and then set to NPR.
++ * lpfc_cmpl_els_logo_acc is expected to restart discovery.
++ */
++ ndlp->nlp_last_elscmd = ELS_CMD_PRLO;
++ ndlp->nlp_prev_state = ndlp->nlp_state;
++
++ lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_ELS | LOG_DISCOVERY,
++ "3422 DID x%06x nflag x%x lastels x%x ref cnt %u\n",
++ ndlp->nlp_DID, ndlp->nlp_flag,
++ ndlp->nlp_last_elscmd,
++ kref_read(&ndlp->kref));
++
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
++
+ return ndlp->nlp_state;
+ }
+
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 9f0b59672e1915..0eaede8275dac4 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -5555,11 +5555,20 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+
+ iocb = &lpfc_cmd->cur_iocbq;
+ if (phba->sli_rev == LPFC_SLI_REV4) {
+- pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
+- if (!pring_s4) {
++ /* if the io_wq & pring are gone, the port was reset. */
++ if (!phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq ||
++ !phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring) {
++ lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
++ "2877 SCSI Layer I/O Abort Request "
++ "IO CMPL Status x%x ID %d LUN %llu "
++ "HBA_SETUP %d\n", FAILED,
++ cmnd->device->id,
++ (u64)cmnd->device->lun,
++ test_bit(HBA_SETUP, &phba->hba_flag));
+ ret = FAILED;
+ goto out_unlock_hba;
+ }
++ pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
+ spin_lock(&pring_s4->ring_lock);
+ }
+ /* the command is in process of being cancelled */
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 88debef2fb6db0..7dc34c71eb78cf 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -4687,6 +4687,17 @@ lpfc_sli_flush_io_rings(struct lpfc_hba *phba)
+ /* Look on all the FCP Rings for the iotag */
+ if (phba->sli_rev >= LPFC_SLI_REV4) {
+ for (i = 0; i < phba->cfg_hdw_queue; i++) {
++ if (!phba->sli4_hba.hdwq ||
++ !phba->sli4_hba.hdwq[i].io_wq) {
++ lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
++ "7777 hdwq's deleted %lx "
++ "%lx %x %x\n",
++ phba->pport->load_flag,
++ phba->hba_flag,
++ phba->link_state,
++ phba->sli.sli_flag);
++ return;
++ }
+ pring = phba->sli4_hba.hdwq[i].io_wq->pring;
+
+ spin_lock_irq(&pring->ring_lock);
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 1e63cb6cd8e327..33e1eba62ca12c 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -100,10 +100,12 @@ static void pm8001_map_queues(struct Scsi_Host *shost)
+ struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
+ struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
+
+- if (pm8001_ha->number_of_intr > 1)
++ if (pm8001_ha->number_of_intr > 1) {
+ blk_mq_pci_map_queues(qmap, pm8001_ha->pdev, 1);
++ return;
++ }
+
+- return blk_mq_map_queues(qmap);
++ blk_mq_map_queues(qmap);
+ }
+
+ /*
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index c1524fb334eb5c..4230714e5f3a12 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -5917,7 +5917,7 @@ static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info,
+ int rc;
+ struct pqi_scsi_dev *device;
+ struct pqi_stream_data *pqi_stream_data;
+- struct pqi_scsi_dev_raid_map_data rmd;
++ struct pqi_scsi_dev_raid_map_data rmd = { 0 };
+
+ if (!ctrl_info->enable_stream_detection)
+ return false;
+@@ -9428,6 +9428,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x152d, 0x8a37)
+ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x193d, 0x0462)
++ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x193d, 0x1104)
+@@ -9456,6 +9460,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x193d, 0x110b)
+ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x193d, 0x1110)
++ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x193d, 0x8460)
+@@ -9464,6 +9472,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x193d, 0x8461)
+ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x193d, 0x8462)
++ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x193d, 0xc460)
+@@ -9572,6 +9584,14 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1bd4, 0x0089)
+ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x00a1)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1f3a, 0x0104)
++ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x19e5, 0xd227)
+@@ -10164,6 +10184,110 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1137, 0x02fa)
+ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1137, 0x02fe)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1137, 0x02ff)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1137, 0x0300)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0045)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0046)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0047)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0048)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x004a)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x004b)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x004c)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x004f)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0051)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0052)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0053)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0054)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x006b)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x006c)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x006d)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x006f)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0070)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0071)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0072)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0086)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0087)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0088)
++ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x0089)
++ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1e93, 0x1000)
+@@ -10248,6 +10372,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1f51, 0x1045)
+ },
++ {
++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++ 0x1ff9, 0x00a3)
++ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_ANY_ID, PCI_ANY_ID)
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index 0d8ce1a92168c4..d50bad3a2ce92b 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -834,6 +834,9 @@ static int flush_buffer(struct scsi_tape *STp, int seek_next)
+ int backspace, result;
+ struct st_partstat *STps;
+
++ if (STp->ready != ST_READY)
++ return 0;
++
+ /*
+ * If there was a bus reset, block further access
+ * to this device.
+@@ -841,8 +844,6 @@ static int flush_buffer(struct scsi_tape *STp, int seek_next)
+ if (STp->pos_unknown)
+ return (-EIO);
+
+- if (STp->ready != ST_READY)
+- return 0;
+ STps = &(STp->ps[STp->partition]);
+ if (STps->rw == ST_WRITING) /* Writing */
+ return st_flush_write_buffer(STp);
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index 2fb8d4e55c7773..ef3a7226db125c 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -466,6 +466,7 @@ static const struct platform_device_id bcm63xx_spi_dev_match[] = {
+ {
+ },
+ };
++MODULE_DEVICE_TABLE(platform, bcm63xx_spi_dev_match);
+
+ static const struct of_device_id bcm63xx_spi_of_match[] = {
+ { .compatible = "brcm,bcm6348-spi", .data = &bcm6348_spi_reg_offsets },
+@@ -583,13 +584,15 @@ static int bcm63xx_spi_probe(struct platform_device *pdev)
+
+ bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS);
+
+- pm_runtime_enable(&pdev->dev);
++ ret = devm_pm_runtime_enable(&pdev->dev);
++ if (ret)
++ goto out_clk_disable;
+
+ /* register and we are done */
+ ret = devm_spi_register_controller(dev, host);
+ if (ret) {
+ dev_err(dev, "spi register failed\n");
+- goto out_pm_disable;
++ goto out_clk_disable;
+ }
+
+ dev_info(dev, "at %pr (irq %d, FIFOs size %d)\n",
+@@ -597,8 +600,6 @@ static int bcm63xx_spi_probe(struct platform_device *pdev)
+
+ return 0;
+
+-out_pm_disable:
+- pm_runtime_disable(&pdev->dev);
+ out_clk_disable:
+ clk_disable_unprepare(clk);
+ out_err:
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index e07e081de5ea41..3c87d2bf786a9e 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -678,8 +678,8 @@ static int cdns_spi_probe(struct platform_device *pdev)
+
+ clk_dis_all:
+ if (!spi_controller_is_target(ctlr)) {
+- pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_set_suspended(&pdev->dev);
+ }
+ remove_ctlr:
+ spi_controller_put(ctlr);
+@@ -701,8 +701,10 @@ static void cdns_spi_remove(struct platform_device *pdev)
+
+ cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE);
+
+- pm_runtime_set_suspended(&pdev->dev);
+- pm_runtime_disable(&pdev->dev);
++ if (!spi_controller_is_target(ctlr)) {
++ pm_runtime_disable(&pdev->dev);
++ pm_runtime_set_suspended(&pdev->dev);
++ }
+
+ spi_unregister_controller(ctlr);
+ }
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 85bd1a82a34eb4..4c31d36f3130a9 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1865,8 +1865,8 @@ static int spi_imx_probe(struct platform_device *pdev)
+ spi_imx_sdma_exit(spi_imx);
+ out_runtime_pm_put:
+ pm_runtime_dont_use_autosuspend(spi_imx->dev);
+- pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_disable(spi_imx->dev);
++ pm_runtime_set_suspended(&pdev->dev);
+
+ clk_disable_unprepare(spi_imx->clk_ipg);
+ out_put_per:
+diff --git a/drivers/spi/spi-rpc-if.c b/drivers/spi/spi-rpc-if.c
+index d3f07fd719bdb7..b468a95972bf72 100644
+--- a/drivers/spi/spi-rpc-if.c
++++ b/drivers/spi/spi-rpc-if.c
+@@ -198,9 +198,16 @@ static int __maybe_unused rpcif_spi_resume(struct device *dev)
+
+ static SIMPLE_DEV_PM_OPS(rpcif_spi_pm_ops, rpcif_spi_suspend, rpcif_spi_resume);
+
++static const struct platform_device_id rpc_if_spi_id_table[] = {
++ { .name = "rpc-if-spi" },
++ { /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(platform, rpc_if_spi_id_table);
++
+ static struct platform_driver rpcif_spi_driver = {
+ .probe = rpcif_spi_probe,
+ .remove_new = rpcif_spi_remove,
++ .id_table = rpc_if_spi_id_table,
+ .driver = {
+ .name = "rpc-if-spi",
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index 833c58c88e4086..6ab416a3396690 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -245,7 +245,7 @@ static void s3c64xx_flush_fifo(struct s3c64xx_spi_driver_data *sdd)
+ loops = msecs_to_loops(1);
+ do {
+ val = readl(regs + S3C64XX_SPI_STATUS);
+- } while (TX_FIFO_LVL(val, sdd) && loops--);
++ } while (TX_FIFO_LVL(val, sdd) && --loops);
+
+ if (loops == 0)
+ dev_warn(&sdd->pdev->dev, "Timed out flushing TX FIFO\n");
+@@ -258,7 +258,7 @@ static void s3c64xx_flush_fifo(struct s3c64xx_spi_driver_data *sdd)
+ readl(regs + S3C64XX_SPI_RX_DATA);
+ else
+ break;
+- } while (loops--);
++ } while (--loops);
+
+ if (loops == 0)
+ dev_warn(&sdd->pdev->dev, "Timed out flushing RX FIFO\n");
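
The two s3c64xx hunks fix a classic post-decrement bug: `while (cond && loops--)` tests the old value, so a timed-out loop exits with the counter wrapped below zero and the `if (loops == 0)` warning never fires. A self-contained illustration (plain C, not from the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned long loops;
        int fifo_busy = 1;            /* stands in for TX_FIFO_LVL() staying non-zero */

        loops = 3;
        while (fifo_busy && loops--)  /* old value tested, then decremented */
            ;
        /* exits with loops wrapped to ULONG_MAX, so a loops == 0 check misses */
        printf("post-decrement: loops == %lu\n", loops);

        loops = 3;
        while (fifo_busy && --loops)  /* decrement first, test the new value */
            ;
        printf("pre-decrement:  loops == %lu\n", loops); /* prints 0: timeout seen */
        return 0;
    }
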
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 006ffacf1c56cb..c50f3fb49a6686 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -1029,20 +1029,23 @@ vhost_scsi_get_req(struct vhost_virtqueue *vq, struct vhost_scsi_ctx *vc,
+ /* virtio-scsi spec requires byte 0 of the lun to be 1 */
+ vq_err(vq, "Illegal virtio-scsi lun: %u\n", *vc->lunp);
+ } else {
+- struct vhost_scsi_tpg **vs_tpg, *tpg;
+-
+- vs_tpg = vhost_vq_get_backend(vq); /* validated at handler entry */
+-
+- tpg = READ_ONCE(vs_tpg[*vc->target]);
+- if (unlikely(!tpg)) {
+- vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
+- } else {
+- if (tpgp)
+- *tpgp = tpg;
+- ret = 0;
++ struct vhost_scsi_tpg **vs_tpg, *tpg = NULL;
++
++ if (vc->target) {
++ /* validated at handler entry */
++ vs_tpg = vhost_vq_get_backend(vq);
++ tpg = READ_ONCE(vs_tpg[*vc->target]);
++ if (unlikely(!tpg)) {
++ vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
++ goto out;
++ }
+ }
+- }
+
++ if (tpgp)
++ *tpgp = tpg;
++ ret = 0;
++ }
++out:
+ return ret;
+ }
+
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 8dd82afb3452ba..595b8e27bea66d 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -561,15 +561,10 @@ static int efifb_probe(struct platform_device *dev)
+ break;
+ }
+
+- err = sysfs_create_groups(&dev->dev.kobj, efifb_groups);
+- if (err) {
+- pr_err("efifb: cannot add sysfs attrs\n");
+- goto err_unmap;
+- }
+ err = fb_alloc_cmap(&info->cmap, 256, 0);
+ if (err < 0) {
+ pr_err("efifb: cannot allocate colormap\n");
+- goto err_groups;
++ goto err_unmap;
+ }
+
+ err = devm_aperture_acquire_for_platform_device(dev, par->base, par->size);
+@@ -587,8 +582,6 @@ static int efifb_probe(struct platform_device *dev)
+
+ err_fb_dealloc_cmap:
+ fb_dealloc_cmap(&info->cmap);
+-err_groups:
+- sysfs_remove_groups(&dev->dev.kobj, efifb_groups);
+ err_unmap:
+ if (mem_flags & (EFI_MEMORY_UC | EFI_MEMORY_WC))
+ iounmap(info->screen_base);
+@@ -608,12 +601,12 @@ static void efifb_remove(struct platform_device *pdev)
+
+ /* efifb_destroy takes care of info cleanup */
+ unregister_framebuffer(info);
+- sysfs_remove_groups(&pdev->dev.kobj, efifb_groups);
+ }
+
+ static struct platform_driver efifb_driver = {
+ .driver = {
+ .name = "efi-framebuffer",
++ .dev_groups = efifb_groups,
+ },
+ .probe = efifb_probe,
+ .remove_new = efifb_remove,
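
The efifb hunks hand sysfs group lifetime over to the driver core: setting `.dev_groups` on the driver makes the core create the attribute groups before probe and remove them after remove, so the manual sysfs_create_groups()/sysfs_remove_groups() calls and their error path disappear. A minimal sketch of the same pattern for a hypothetical platform driver (the "demo" names are invented):

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static ssize_t demo_show(struct device *dev, struct device_attribute *attr,
                             char *buf)
    {
        return sysfs_emit(buf, "demo\n");
    }
    static DEVICE_ATTR_RO(demo);

    static struct attribute *demo_attrs[] = {
        &dev_attr_demo.attr,
        NULL
    };
    ATTRIBUTE_GROUPS(demo);

    static int demo_probe(struct platform_device *pdev)
    {
        return 0; /* no sysfs calls needed here */
    }

    static struct platform_driver demo_driver = {
        .probe = demo_probe,
        .driver = {
            .name = "demo",
            .dev_groups = demo_groups, /* created/removed by the driver core */
        },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");
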
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 2ef56fa28aff36..5ce02495cda638 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2403,6 +2403,7 @@ static void pxafb_remove(struct platform_device *dev)
+ info = &fbi->fb;
+
+ pxafb_overlay_exit(fbi);
++ cancel_work_sync(&fbi->task);
+ unregister_framebuffer(info);
+
+ pxafb_disable_controller(fbi);
+diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
+index 6fc7884ea0a11d..c86be0cd8ecd2f 100644
+--- a/drivers/virt/coco/sev-guest/sev-guest.c
++++ b/drivers/virt/coco/sev-guest/sev-guest.c
+@@ -1090,6 +1090,8 @@ static int __init sev_guest_probe(struct platform_device *pdev)
+ void __iomem *mapping;
+ int ret;
+
++ BUILD_BUG_ON(sizeof(struct snp_guest_msg) > PAGE_SIZE);
++
+ if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ return -ENODEV;
+
+diff --git a/fs/afs/file.c b/fs/afs/file.c
+index ec1be0091fdb56..290f60460ec751 100644
+--- a/fs/afs/file.c
++++ b/fs/afs/file.c
+@@ -404,6 +404,7 @@ const struct netfs_request_ops afs_req_ops = {
+ .begin_writeback = afs_begin_writeback,
+ .prepare_write = afs_prepare_write,
+ .issue_write = afs_issue_write,
++ .retry_request = afs_retry_request,
+ };
+
+ static void afs_add_open_mmap(struct afs_vnode *vnode)
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 3546b087e791d4..428721bbe4f6e3 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -201,7 +201,7 @@ void afs_wait_for_operation(struct afs_operation *op)
+ }
+ }
+
+- if (op->call_responded)
++ if (op->call_responded && op->server)
+ set_bit(AFS_SERVER_FL_RESPONDING, &op->server->flags);
+
+ if (!afs_op_error(op)) {
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index a2de5c05f97c89..57e327833ed1d6 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -3179,10 +3179,14 @@ void btrfs_backref_release_cache(struct btrfs_backref_cache *cache)
+ btrfs_backref_cleanup_node(cache, node);
+ }
+
+- cache->last_trans = 0;
+-
+- for (i = 0; i < BTRFS_MAX_LEVEL; i++)
+- ASSERT(list_empty(&cache->pending[i]));
++ for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
++ while (!list_empty(&cache->pending[i])) {
++ node = list_first_entry(&cache->pending[i],
++ struct btrfs_backref_node,
++ list);
++ btrfs_backref_cleanup_node(cache, node);
++ }
++ }
+ ASSERT(list_empty(&cache->pending_edge));
+ ASSERT(list_empty(&cache->useless_node));
+ ASSERT(list_empty(&cache->changed));
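
The backref hunk above turns per-level emptiness asserts into a drain loop, so nodes left on the pending lists after an error are actually cleaned up rather than just asserted against. The same take-first-until-empty idiom in self-contained C (a toy singly linked list stands in for the kernel's list_head):

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int val;
        struct node *next;
    };

    /* Drain-and-free: keep taking the first entry until the list is empty. */
    static void drain(struct node **head)
    {
        while (*head) {
            struct node *n = *head;   /* list_first_entry() equivalent */
            *head = n->next;          /* unlink */
            printf("cleaning node %d\n", n->val);
            free(n);
        }
    }

    int main(void)
    {
        struct node *head = NULL;

        for (int i = 0; i < 3; i++) {
            struct node *n = malloc(sizeof(*n));
            if (!n)
                return 1;
            n->val = i;
            n->next = head;
            head = n;
        }
        drain(&head);
        return 0;
    }
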
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index a6f5441e62d103..216b3b412aa0e1 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4265,6 +4265,17 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ /* clear out the rbtree of defraggable inodes */
+ btrfs_cleanup_defrag_inodes(fs_info);
+
++ /*
++ * Wait for any fixup workers to complete.
++ * If we don't wait for them here and they are still running by the time
++ * we call kthread_stop() against the cleaner kthread further below, we
++ * get a use-after-free on the cleaner because the fixup worker adds an
++ * inode to the list of delayed iputs and then attempts to wake up the
++ * cleaner kthread, which was already stopped and destroyed. We already
++ * parked the cleaner, but below we run all pending delayed iputs.
++ */
++ btrfs_flush_workqueue(fs_info->fixup_workers);
++
+ /*
+ * After we parked the cleaner kthread, ordered extents may have
+ * completed and created new delayed iputs. If one of the async reclaim
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 0533d0f82dc993..f3834f8d26b456 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -36,6 +36,7 @@
+ #include "relocation.h"
+ #include "super.h"
+ #include "tree-checker.h"
++#include "raid-stripe-tree.h"
+
+ /*
+ * Relocation overview
+@@ -231,70 +232,6 @@ static struct btrfs_backref_node *walk_down_backref(
+ return NULL;
+ }
+
+-static void update_backref_node(struct btrfs_backref_cache *cache,
+- struct btrfs_backref_node *node, u64 bytenr)
+-{
+- struct rb_node *rb_node;
+- rb_erase(&node->rb_node, &cache->rb_root);
+- node->bytenr = bytenr;
+- rb_node = rb_simple_insert(&cache->rb_root, node->bytenr, &node->rb_node);
+- if (rb_node)
+- btrfs_backref_panic(cache->fs_info, bytenr, -EEXIST);
+-}
+-
+-/*
+- * update backref cache after a transaction commit
+- */
+-static int update_backref_cache(struct btrfs_trans_handle *trans,
+- struct btrfs_backref_cache *cache)
+-{
+- struct btrfs_backref_node *node;
+- int level = 0;
+-
+- if (cache->last_trans == 0) {
+- cache->last_trans = trans->transid;
+- return 0;
+- }
+-
+- if (cache->last_trans == trans->transid)
+- return 0;
+-
+- /*
+- * detached nodes are used to avoid unnecessary backref
+- * lookup. transaction commit changes the extent tree.
+- * so the detached nodes are no longer useful.
+- */
+- while (!list_empty(&cache->detached)) {
+- node = list_entry(cache->detached.next,
+- struct btrfs_backref_node, list);
+- btrfs_backref_cleanup_node(cache, node);
+- }
+-
+- while (!list_empty(&cache->changed)) {
+- node = list_entry(cache->changed.next,
+- struct btrfs_backref_node, list);
+- list_del_init(&node->list);
+- BUG_ON(node->pending);
+- update_backref_node(cache, node, node->new_bytenr);
+- }
+-
+- /*
+- * some nodes can be left in the pending list if there were
+- * errors during processing the pending nodes.
+- */
+- for (level = 0; level < BTRFS_MAX_LEVEL; level++) {
+- list_for_each_entry(node, &cache->pending[level], list) {
+- BUG_ON(!node->pending);
+- if (node->bytenr == node->new_bytenr)
+- continue;
+- update_backref_node(cache, node, node->new_bytenr);
+- }
+- }
+-
+- cache->last_trans = 0;
+- return 1;
+-}
+-
+ static bool reloc_root_is_dead(const struct btrfs_root *root)
+ {
+ /*
+@@ -550,9 +487,6 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
+ struct btrfs_backref_edge *new_edge;
+ struct rb_node *rb_node;
+
+- if (cache->last_trans > 0)
+- update_backref_cache(trans, cache);
+-
+ rb_node = rb_simple_search(&cache->rb_root, src->commit_root->start);
+ if (rb_node) {
+ node = rb_entry(rb_node, struct btrfs_backref_node, rb_node);
+@@ -922,7 +856,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
+ btrfs_grab_root(reloc_root);
+
+ /* root->reloc_root will stay until current relocation finished */
+- if (fs_info->reloc_ctl->merge_reloc_tree &&
++ if (fs_info->reloc_ctl && fs_info->reloc_ctl->merge_reloc_tree &&
+ btrfs_root_refs(root_item) == 0) {
+ set_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
+ /*
+@@ -2965,21 +2899,34 @@ static int relocate_one_folio(struct reloc_control *rc,
+ u64 folio_end;
+ u64 cur;
+ int ret;
++ const bool use_rst = btrfs_need_stripe_tree_update(fs_info, rc->block_group->flags);
+
+ ASSERT(index <= last_index);
+ folio = filemap_lock_folio(inode->i_mapping, index);
+ if (IS_ERR(folio)) {
+- page_cache_sync_readahead(inode->i_mapping, ra, NULL,
+- index, last_index + 1 - index);
++
++ /*
++ * On relocation we're doing readahead on the relocation inode,
++ * but if the filesystem is backed by a RAID stripe tree we can
++ * get ENOENT (e.g. due to preallocated extents not being
++ * mapped in the RST) from the lookup.
++ *
++ * But readahead doesn't handle the error and submits invalid
++ * reads to the device, causing assertion failures.
++ */
++ if (!use_rst)
++ page_cache_sync_readahead(inode->i_mapping, ra, NULL,
++ index, last_index + 1 - index);
+ folio = __filemap_get_folio(inode->i_mapping, index,
+- FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
++ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
++ mask);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+ }
+
+ WARN_ON(folio_order(folio));
+
+- if (folio_test_readahead(folio))
++ if (folio_test_readahead(folio) && !use_rst)
+ page_cache_async_readahead(inode->i_mapping, ra, NULL,
+ folio, last_index + 1 - index);
+
+@@ -3684,11 +3631,9 @@ static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
+ break;
+ }
+ restart:
+- if (update_backref_cache(trans, &rc->backref_cache)) {
+- btrfs_end_transaction(trans);
+- trans = NULL;
+- continue;
+- }
++ if (rc->backref_cache.last_trans != trans->transid)
++ btrfs_backref_release_cache(&rc->backref_cache);
++ rc->backref_cache.last_trans = trans->transid;
+
+ ret = find_next_extent(rc, path, &key);
+ if (ret < 0)
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 619fa0b8b3f6f9..65db353e7e729f 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -346,8 +346,10 @@ struct name_cache_entry {
+ u64 parent_gen;
+ int ret;
+ int need_later_update;
++ /* Name length without NUL terminator. */
+ int name_len;
+- char name[] __counted_by(name_len);
++ /* Not NUL terminated. */
++ char name[] __counted_by(name_len) __nonstring;
+ };
+
+ /* See the comment at lru_cache.h about struct btrfs_lru_cache_entry. */
+@@ -2388,7 +2390,7 @@ static int __get_cur_name_and_parent(struct send_ctx *sctx,
+ /*
+ * Store the result of the lookup in the name cache.
+ */
+- nce = kmalloc(sizeof(*nce) + fs_path_len(dest) + 1, GFP_KERNEL);
++ nce = kmalloc(sizeof(*nce) + fs_path_len(dest), GFP_KERNEL);
+ if (!nce) {
+ ret = -ENOMEM;
+ goto out;
+@@ -2400,7 +2402,7 @@ static int __get_cur_name_and_parent(struct send_ctx *sctx,
+ nce->parent_gen = *parent_gen;
+ nce->name_len = fs_path_len(dest);
+ nce->ret = ret;
+- strcpy(nce->name, dest->start);
++ memcpy(nce->name, dest->start, nce->name_len);
+
+ if (ino < sctx->send_progress)
+ nce->need_later_update = 0;
+@@ -6187,8 +6189,29 @@ static int send_write_or_clone(struct send_ctx *sctx,
+ if (ret < 0)
+ return ret;
+
+- if (clone_root->offset + num_bytes == info.size)
++ if (clone_root->offset + num_bytes == info.size) {
++ /*
++ * The final size of our file matches the end offset, but it may
++ * be that its current size is larger, so we have to truncate it
++ * to any value between the start offset of the range and the
++ * final i_size; otherwise the clone operation is invalid
++ * because it's unaligned and it ends before the current EOF.
++ * We do this truncate to the final i_size when we finish
++ * processing the inode, but it's too late by then. And here we
++ * truncate to the start offset of the range because it's always
++ * sector size aligned while if it were the final i_size it
++ * would result in dirtying part of a page, filling part of a
++ * page with zeroes and then having the clone operation at the
++ * receiver trigger IO and wait for it due to the dirty page.
++ */
++ if (sctx->parent_root != NULL) {
++ ret = send_truncate(sctx, sctx->cur_ino,
++ sctx->cur_inode_gen, offset);
++ if (ret < 0)
++ return ret;
++ }
+ goto clone_data;
++ }
+
+ write_data:
+ ret = send_extent_data(sctx, path, offset, num_bytes);
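
The name_cache_entry hunks earlier in this file switch the cached name to a counted, non-NUL-terminated flexible array: the allocation drops the `+ 1` and strcpy() becomes memcpy(), since a terminator would now land one byte past the buffer. A standalone sketch of the pattern (generic C; the kernel's __counted_by/__nonstring annotations omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct name_entry {
        int name_len;   /* length without a NUL terminator */
        char name[];    /* not NUL-terminated */
    };

    static struct name_entry *name_entry_new(const char *src)
    {
        size_t len = strlen(src);
        struct name_entry *e = malloc(sizeof(*e) + len); /* no + 1 */

        if (!e)
            return NULL;
        e->name_len = (int)len;
        memcpy(e->name, src, len); /* strcpy() here would overflow by one byte */
        return e;
    }

    int main(void)
    {
        struct name_entry *e = name_entry_new("example");

        if (e) {
            printf("%.*s (%d bytes)\n", e->name_len, e->name, e->name_len);
            free(e);
        }
        return 0;
    }
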
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index f53977169db48c..2b3f9935dbb44d 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -595,14 +595,12 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
+ * write and readdir but not lookup or open).
+ */
+ touch_atime(&file->f_path);
+- dput(dentry);
+ return true;
+
+ check_failed:
+ fscache_cookie_lookup_negative(object->cookie);
+ cachefiles_unmark_inode_in_use(object, file);
+ fput(file);
+- dput(dentry);
+ if (ret == -ESTALE)
+ return cachefiles_create_file(object);
+ return false;
+@@ -611,7 +609,6 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
+ fput(file);
+ error:
+ cachefiles_do_unmark_inode_in_use(object, d_inode(dentry));
+- dput(dentry);
+ return false;
+ }
+
+@@ -654,7 +651,9 @@ bool cachefiles_look_up_object(struct cachefiles_object *object)
+ goto new_file;
+ }
+
+- if (!cachefiles_open_file(object, dentry))
++ ret = cachefiles_open_file(object, dentry);
++ dput(dentry);
++ if (!ret)
+ return false;
+
+ _leave(" = t [%lu]", file_inode(object->file)->i_ino);
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index c4744a02db753c..17f9368a8ca195 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -95,7 +95,6 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
+
+ /* dirty the head */
+ spin_lock(&ci->i_ceph_lock);
+- BUG_ON(ci->i_wr_ref == 0); // caller should hold Fw reference
+ if (__ceph_have_pending_cap_snap(ci)) {
+ struct ceph_cap_snap *capsnap =
+ list_last_entry(&ci->i_cap_snaps,
+@@ -474,8 +473,11 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
+ rreq->netfs_priv = priv;
+
+ out:
+- if (ret < 0)
++ if (ret < 0) {
++ if (got)
++ ceph_put_cap_refs(ceph_inode(inode), got);
+ kfree(priv);
++ }
+
+ return ret;
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 276e34ab3e2cca..2e4b3ee7446c8e 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -6015,6 +6015,18 @@ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
+ ceph_mdsmap_destroy(mdsc->mdsmap);
+ kfree(mdsc->sessions);
+ ceph_caps_finalize(mdsc);
++
++ if (mdsc->s_cap_auths) {
++ int i;
++
++ for (i = 0; i < mdsc->s_cap_auths_num; i++) {
++ kfree(mdsc->s_cap_auths[i].match.gids);
++ kfree(mdsc->s_cap_auths[i].match.path);
++ kfree(mdsc->s_cap_auths[i].match.fs_name);
++ }
++ kfree(mdsc->s_cap_auths);
++ }
++
+ ceph_pool_perm_destroy(mdsc);
+ }
+
+diff --git a/fs/dax.c b/fs/dax.c
+index becb4a6920c6aa..c62acd2812f8d4 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1305,11 +1305,15 @@ int dax_file_unshare(struct inode *inode, loff_t pos, loff_t len,
+ struct iomap_iter iter = {
+ .inode = inode,
+ .pos = pos,
+- .len = len,
+ .flags = IOMAP_WRITE | IOMAP_UNSHARE | IOMAP_DAX,
+ };
++ loff_t size = i_size_read(inode);
+ int ret;
+
++ if (pos < 0 || pos >= size)
++ return 0;
++
++ iter.len = min(len, size - pos);
+ while ((ret = iomap_iter(&iter, ops)) > 0)
+ iter.processed = dax_unshare_iter(&iter);
+ return ret;
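
The dax_file_unshare() hunk clamps the request against i_size before iterating: a start at or beyond EOF becomes a no-op, and the length is trimmed so the iterator never walks past the end of the file. A hypothetical standalone version of the clamp (names assumed, not the kernel API):

    #include <stdint.h>
    #include <stdio.h>

    /* Returns the usable length, or 0 when the range starts outside the file. */
    static int64_t clamp_range_to_size(int64_t pos, int64_t len, int64_t isize)
    {
        if (pos < 0 || pos >= isize)
            return 0;              /* nothing to unshare */
        if (len > isize - pos)
            len = isize - pos;     /* trim to EOF */
        return len;
    }

    int main(void)
    {
        printf("%lld\n", (long long)clamp_range_to_size(4096, 8192, 6000)); /* 1904 */
        printf("%lld\n", (long long)clamp_range_to_size(8000, 100, 6000));  /* 0 */
        return 0;
    }
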
+diff --git a/fs/exec.c b/fs/exec.c
+index 50e76cc633c4ba..dad402d55681b2 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -145,13 +145,11 @@ SYSCALL_DEFINE1(uselib, const char __user *, library)
+ goto out;
+
+ /*
+- * may_open() has already checked for this, so it should be
+- * impossible to trip now. But we need to be extra cautious
+- * and check again at the very end too.
++ * Check do_open_execat() for an explanation.
+ */
+ error = -EACCES;
+- if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode) ||
+- path_noexec(&file->f_path)))
++ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
++ path_noexec(&file->f_path))
+ goto exit;
+
+ error = -ENOEXEC;
+@@ -813,7 +811,8 @@ int setup_arg_pages(struct linux_binprm *bprm,
+ stack_base = calc_max_stack_size(stack_base);
+
+ /* Add space for stack randomization. */
+- stack_base += (STACK_RND_MASK << PAGE_SHIFT);
++ if (current->flags & PF_RANDOMIZE)
++ stack_base += (STACK_RND_MASK << PAGE_SHIFT);
+
+ /* Make sure we didn't let the argument array grow too large. */
+ if (vma->vm_end - vma->vm_start > stack_base)
+@@ -954,7 +953,6 @@ EXPORT_SYMBOL(transfer_args_to_stack);
+ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ {
+ struct file *file;
+- int err;
+ struct open_flags open_exec_flags = {
+ .open_flag = O_LARGEFILE | O_RDONLY | __FMODE_EXEC,
+ .acc_mode = MAY_EXEC,
+@@ -971,24 +969,20 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+
+ file = do_filp_open(fd, name, &open_exec_flags);
+ if (IS_ERR(file))
+- goto out;
++ return file;
+
+ /*
+- * may_open() has already checked for this, so it should be
+- * impossible to trip now. But we need to be extra cautious
+- * and check again at the very end too.
++ * In the past the regular type check was here. It moved to may_open() in
++ * 633fb6ac3980 ("exec: move S_ISREG() check earlier"). Since then it is
++ * an invariant that all non-regular files error out before we get here.
+ */
+- err = -EACCES;
+- if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode) ||
+- path_noexec(&file->f_path)))
+- goto exit;
++ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
++ path_noexec(&file->f_path)) {
++ fput(file);
++ return ERR_PTR(-EACCES);
++ }
+
+-out:
+ return file;
+-
+-exit:
+- fput(file);
+- return ERR_PTR(err);
+ }
+
+ /**
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index 0356c88252bd34..ce9be95c9172f6 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -91,11 +91,8 @@ int exfat_load_bitmap(struct super_block *sb)
+ return -EIO;
+
+ type = exfat_get_entry_type(ep);
+- if (type == TYPE_UNUSED)
+- break;
+- if (type != TYPE_BITMAP)
+- continue;
+- if (ep->dentry.bitmap.flags == 0x0) {
++ if (type == TYPE_BITMAP &&
++ ep->dentry.bitmap.flags == 0x0) {
+ int err;
+
+ err = exfat_allocate_bitmap(sb, ep);
+@@ -103,6 +100,9 @@ int exfat_load_bitmap(struct super_block *sb)
+ return err;
+ }
+ brelse(bh);
++
++ if (type == TYPE_UNUSED)
++ return -EINVAL;
+ }
+
+ if (exfat_get_next_cluster(sb, &clu.dir))
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index ff4514e4626bdb..b8b6b06015cd3b 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -279,12 +279,20 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
+ struct fscrypt_str de_name =
+ FSTR_INIT(de->name,
+ de->name_len);
++ u32 hash;
++ u32 minor_hash;
++
++ if (IS_CASEFOLDED(inode)) {
++ hash = EXT4_DIRENT_HASH(de);
++ minor_hash = EXT4_DIRENT_MINOR_HASH(de);
++ } else {
++ hash = 0;
++ minor_hash = 0;
++ }
+
+ /* Directory is encrypted */
+ err = fscrypt_fname_disk_to_usr(inode,
+- EXT4_DIRENT_HASH(de),
+- EXT4_DIRENT_MINOR_HASH(de),
+- &de_name, &fstr);
++ hash, minor_hash, &de_name, &fstr);
+ de_name = fstr;
+ fstr.len = save_len;
+ if (err)
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 08acd152261ed8..8bd302392d759a 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2462,6 +2462,7 @@ static inline __le16 ext4_rec_len_to_disk(unsigned len, unsigned blocksize)
+ #define DX_HASH_HALF_MD4_UNSIGNED 4
+ #define DX_HASH_TEA_UNSIGNED 5
+ #define DX_HASH_SIPHASH 6
++#define DX_HASH_LAST DX_HASH_SIPHASH
+
+ static inline u32 ext4_chksum(struct ext4_sb_info *sbi, u32 crc,
+ const void *address, unsigned int length)
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index e067f2dd0335cd..c64f7c1b1d9082 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -957,6 +957,8 @@ ext4_find_extent(struct inode *inode, ext4_lblk_t block,
+
+ ext4_ext_show_path(inode, path);
+
++ if (orig_path)
++ *orig_path = path;
+ return path;
+
+ err:
+@@ -1877,6 +1879,7 @@ static void ext4_ext_try_to_merge_up(handle_t *handle,
+ path[0].p_hdr->eh_max = cpu_to_le16(max_root);
+
+ brelse(path[1].p_bh);
++ path[1].p_bh = NULL;
+ ext4_free_blocks(handle, inode, NULL, blk, 1,
+ EXT4_FREE_BLOCKS_METADATA | EXT4_FREE_BLOCKS_FORGET);
+ }
+@@ -2103,6 +2106,7 @@ int ext4_ext_insert_extent(handle_t *handle, struct inode *inode,
+ ppath, newext);
+ if (err)
+ goto cleanup;
++ path = *ppath;
+ depth = ext_depth(inode);
+ eh = path[depth].p_hdr;
+
+@@ -3230,6 +3234,24 @@ static int ext4_split_extent_at(handle_t *handle,
+ if (err != -ENOSPC && err != -EDQUOT && err != -ENOMEM)
+ goto out;
+
++ /*
++ * Updating the path is required because the previous ext4_ext_insert_extent()
++ * may have freed or reallocated the path. Using EXT4_EX_NOFAIL
++ * guarantees that ext4_find_extent() will not return -ENOMEM,
++ * otherwise -ENOMEM will cause a retry in do_writepages(), and a
++ * WARN_ON may be triggered in ext4_da_update_reserve_space() due to
++ * an incorrect ee_len causing the i_reserved_data_blocks exception.
++ */
++ path = ext4_find_extent(inode, ee_block, ppath,
++ flags | EXT4_EX_NOFAIL);
++ if (IS_ERR(path)) {
++ EXT4_ERROR_INODE(inode, "Failed split extent on %u, err %ld",
++ split, PTR_ERR(path));
++ return PTR_ERR(path);
++ }
++ depth = ext_depth(inode);
++ ex = path[depth].p_ext;
++
+ if (EXT4_EXT_MAY_ZEROOUT & split_flag) {
+ if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
+ if (split_flag & EXT4_EXT_DATA_VALID1) {
+@@ -3282,12 +3304,12 @@ static int ext4_split_extent_at(handle_t *handle,
+ ext4_ext_dirty(handle, inode, path + path->p_depth);
+ return err;
+ out:
+- ext4_ext_show_leaf(inode, path);
++ ext4_ext_show_leaf(inode, *ppath);
+ return err;
+ }
+
+ /*
+- * ext4_split_extents() splits an extent and mark extent which is covered
++ * ext4_split_extent() splits an extent and mark extent which is covered
+ * by @map as split_flags indicates
+ *
+ * It may result in splitting the extent into multiple extents (up to three)
+@@ -3363,7 +3385,7 @@ static int ext4_split_extent(handle_t *handle,
+ goto out;
+ }
+
+- ext4_ext_show_leaf(inode, path);
++ ext4_ext_show_leaf(inode, *ppath);
+ out:
+ return err ? err : allocated;
+ }
+@@ -3828,14 +3850,13 @@ ext4_ext_handle_unwritten_extents(handle_t *handle, struct inode *inode,
+ struct ext4_ext_path **ppath, int flags,
+ unsigned int allocated, ext4_fsblk_t newblock)
+ {
+- struct ext4_ext_path __maybe_unused *path = *ppath;
+ int ret = 0;
+ int err = 0;
+
+ ext_debug(inode, "logical block %llu, max_blocks %u, flags 0x%x, allocated %u\n",
+ (unsigned long long)map->m_lblk, map->m_len, flags,
+ allocated);
+- ext4_ext_show_leaf(inode, path);
++ ext4_ext_show_leaf(inode, *ppath);
+
+ /*
+ * When writing into unwritten space, we should not fail to
+@@ -3932,7 +3953,7 @@ ext4_ext_handle_unwritten_extents(handle_t *handle, struct inode *inode,
+ if (allocated > map->m_len)
+ allocated = map->m_len;
+ map->m_len = allocated;
+- ext4_ext_show_leaf(inode, path);
++ ext4_ext_show_leaf(inode, *ppath);
+ out2:
+ return err ? err : allocated;
+ }
+@@ -5535,6 +5556,7 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
+ path = ext4_find_extent(inode, offset_lblk, NULL, 0);
+ if (IS_ERR(path)) {
+ up_write(&EXT4_I(inode)->i_data_sem);
++ ret = PTR_ERR(path);
+ goto out_stop;
+ }
+
+@@ -5880,7 +5902,7 @@ int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
+ int ext4_ext_replay_update_ex(struct inode *inode, ext4_lblk_t start,
+ int len, int unwritten, ext4_fsblk_t pblk)
+ {
+- struct ext4_ext_path *path = NULL, *ppath;
++ struct ext4_ext_path *path;
+ struct ext4_extent *ex;
+ int ret;
+
+@@ -5896,30 +5918,29 @@ int ext4_ext_replay_update_ex(struct inode *inode, ext4_lblk_t start,
+ if (le32_to_cpu(ex->ee_block) != start ||
+ ext4_ext_get_actual_len(ex) != len) {
+ /* We need to split this extent to match our extent first */
+- ppath = path;
+ down_write(&EXT4_I(inode)->i_data_sem);
+- ret = ext4_force_split_extent_at(NULL, inode, &ppath, start, 1);
++ ret = ext4_force_split_extent_at(NULL, inode, &path, start, 1);
+ up_write(&EXT4_I(inode)->i_data_sem);
+ if (ret)
+ goto out;
+- kfree(path);
+- path = ext4_find_extent(inode, start, NULL, 0);
++
++ path = ext4_find_extent(inode, start, &path, 0);
+ if (IS_ERR(path))
+- return -1;
+- ppath = path;
++ return PTR_ERR(path);
+ ex = path[path->p_depth].p_ext;
+ WARN_ON(le32_to_cpu(ex->ee_block) != start);
++
+ if (ext4_ext_get_actual_len(ex) != len) {
+ down_write(&EXT4_I(inode)->i_data_sem);
+- ret = ext4_force_split_extent_at(NULL, inode, &ppath,
++ ret = ext4_force_split_extent_at(NULL, inode, &path,
+ start + len, 1);
+ up_write(&EXT4_I(inode)->i_data_sem);
+ if (ret)
+ goto out;
+- kfree(path);
+- path = ext4_find_extent(inode, start, NULL, 0);
++
++ path = ext4_find_extent(inode, start, &path, 0);
+ if (IS_ERR(path))
+- return -EINVAL;
++ return PTR_ERR(path);
+ ex = path[path->p_depth].p_ext;
+ }
+ }
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 3926a05eceeed1..95667512010e10 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -339,22 +339,29 @@ void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handl
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ tid_t tid;
++ bool has_transaction = true;
++ bool is_ineligible;
+
+ if (ext4_fc_disabled(sb))
+ return;
+
+- ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+ if (handle && !IS_ERR(handle))
+ tid = handle->h_transaction->t_tid;
+ else {
+ read_lock(&sbi->s_journal->j_state_lock);
+- tid = sbi->s_journal->j_running_transaction ?
+- sbi->s_journal->j_running_transaction->t_tid : 0;
++ if (sbi->s_journal->j_running_transaction)
++ tid = sbi->s_journal->j_running_transaction->t_tid;
++ else
++ has_transaction = false;
+ read_unlock(&sbi->s_journal->j_state_lock);
+ }
+ spin_lock(&sbi->s_fc_lock);
+- if (tid_gt(tid, sbi->s_fc_ineligible_tid))
++ is_ineligible = ext4_test_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
++ if (has_transaction &&
++ (!is_ineligible ||
++ (is_ineligible && tid_gt(tid, sbi->s_fc_ineligible_tid))))
+ sbi->s_fc_ineligible_tid = tid;
++ ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+ spin_unlock(&sbi->s_fc_lock);
+ WARN_ON(reason >= EXT4_FC_REASON_MAX);
+ sbi->s_fc_stats.fc_ineligible_reason_count[reason]++;
+@@ -372,7 +379,7 @@ void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handl
+ */
+ static int ext4_fc_track_template(
+ handle_t *handle, struct inode *inode,
+- int (*__fc_track_fn)(struct inode *, void *, bool),
++ int (*__fc_track_fn)(handle_t *handle, struct inode *, void *, bool),
+ void *args, int enqueue)
+ {
+ bool update = false;
+@@ -389,7 +396,7 @@ static int ext4_fc_track_template(
+ ext4_fc_reset_inode(inode);
+ ei->i_sync_tid = tid;
+ }
+- ret = __fc_track_fn(inode, args, update);
++ ret = __fc_track_fn(handle, inode, args, update);
+ mutex_unlock(&ei->i_fc_lock);
+
+ if (!enqueue)
+@@ -413,7 +420,8 @@ struct __track_dentry_update_args {
+ };
+
+ /* __track_fn for directory entry updates. Called with ei->i_fc_lock. */
+-static int __track_dentry_update(struct inode *inode, void *arg, bool update)
++static int __track_dentry_update(handle_t *handle, struct inode *inode,
++ void *arg, bool update)
+ {
+ struct ext4_fc_dentry_update *node;
+ struct ext4_inode_info *ei = EXT4_I(inode);
+@@ -428,14 +436,14 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+
+ if (IS_ENCRYPTED(dir)) {
+ ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_ENCRYPTED_FILENAME,
+- NULL);
++ handle);
+ mutex_lock(&ei->i_fc_lock);
+ return -EOPNOTSUPP;
+ }
+
+ node = kmem_cache_alloc(ext4_fc_dentry_cachep, GFP_NOFS);
+ if (!node) {
+- ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, NULL);
++ ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, handle);
+ mutex_lock(&ei->i_fc_lock);
+ return -ENOMEM;
+ }
+@@ -447,7 +455,7 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ node->fcd_name.name = kmalloc(dentry->d_name.len, GFP_NOFS);
+ if (!node->fcd_name.name) {
+ kmem_cache_free(ext4_fc_dentry_cachep, node);
+- ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, NULL);
++ ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, handle);
+ mutex_lock(&ei->i_fc_lock);
+ return -ENOMEM;
+ }
+@@ -569,7 +577,8 @@ void ext4_fc_track_create(handle_t *handle, struct dentry *dentry)
+ }
+
+ /* __track_fn for inode tracking */
+-static int __track_inode(struct inode *inode, void *arg, bool update)
++static int __track_inode(handle_t *handle, struct inode *inode, void *arg,
++ bool update)
+ {
+ if (update)
+ return -EEXIST;
+@@ -607,7 +616,8 @@ struct __track_range_args {
+ };
+
+ /* __track_fn for tracking data updates */
+-static int __track_range(struct inode *inode, void *arg, bool update)
++static int __track_range(handle_t *handle, struct inode *inode, void *arg,
++ bool update)
+ {
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ ext4_lblk_t oldstart;
+@@ -1288,8 +1298,21 @@ static void ext4_fc_cleanup(journal_t *journal, int full, tid_t tid)
+ list_del_init(&iter->i_fc_list);
+ ext4_clear_inode_state(&iter->vfs_inode,
+ EXT4_STATE_FC_COMMITTING);
+- if (tid_geq(tid, iter->i_sync_tid))
++ if (tid_geq(tid, iter->i_sync_tid)) {
+ ext4_fc_reset_inode(&iter->vfs_inode);
++ } else if (full) {
++ /*
++ * We are called after a full commit, inode has been
++ * modified while the commit was running. Re-enqueue
++ * the inode into STAGING, which will then be spliced
++ * back into MAIN. This cannot happen during
++ * fastcommit because the journal is locked all the
++ * time in that case (and tid doesn't increase so
++ * tid check above isn't reliable).
++ */
++ list_add_tail(&EXT4_I(&iter->vfs_inode)->i_fc_list,
++ &sbi->s_fc_q[FC_Q_STAGING]);
++ }
+ /* Make sure EXT4_STATE_FC_COMMITTING bit is clear */
+ smp_mb();
+ #if (BITS_PER_LONG < 64)
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index c89e434db6b7ba..be061bb640672c 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -334,10 +334,10 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
+ * Clean up the inode after DIO or DAX extending write has completed and the
+ * inode size has been updated using ext4_handle_inode_extension().
+ */
+-static void ext4_inode_extension_cleanup(struct inode *inode, ssize_t count)
++static void ext4_inode_extension_cleanup(struct inode *inode, bool need_trunc)
+ {
+ lockdep_assert_held_write(&inode->i_rwsem);
+- if (count < 0) {
++ if (need_trunc) {
+ ext4_truncate_failed_write(inode);
+ /*
+ * If the truncate operation failed early, then the inode may
+@@ -586,7 +586,7 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ * writeback of delalloc blocks.
+ */
+ WARN_ON_ONCE(ret == -EIOCBQUEUED);
+- ext4_inode_extension_cleanup(inode, ret);
++ ext4_inode_extension_cleanup(inode, ret < 0);
+ }
+
+ out:
+@@ -670,7 +670,7 @@ ext4_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
+
+ if (extend) {
+ ret = ext4_handle_inode_extension(inode, offset, ret);
+- ext4_inode_extension_cleanup(inode, ret);
++ ext4_inode_extension_cleanup(inode, ret < (ssize_t)count);
+ }
+ out:
+ inode_unlock(inode);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 941c1c0d5c6ed9..a0fa5192db8ed3 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5279,8 +5279,9 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
+ {
+ unsigned offset;
+ journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
+- tid_t commit_tid = 0;
++ tid_t commit_tid;
+ int ret;
++ bool has_transaction;
+
+ offset = inode->i_size & (PAGE_SIZE - 1);
+ /*
+@@ -5305,12 +5306,14 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
+ folio_put(folio);
+ if (ret != -EBUSY)
+ return;
+- commit_tid = 0;
++ has_transaction = false;
+ read_lock(&journal->j_state_lock);
+- if (journal->j_committing_transaction)
++ if (journal->j_committing_transaction) {
+ commit_tid = journal->j_committing_transaction->t_tid;
++ has_transaction = true;
++ }
+ read_unlock(&journal->j_state_lock);
+- if (commit_tid)
++ if (has_transaction)
+ jbd2_log_wait_commit(journal, commit_tid);
+ }
+ }
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index d98ac2af8199f1..a5e1492bbaaa56 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -663,8 +663,8 @@ int ext4_ind_migrate(struct inode *inode)
+ if (unlikely(ret2 && !ret))
+ ret = ret2;
+ errout:
+- ext4_journal_stop(handle);
+ up_write(&EXT4_I(inode)->i_data_sem);
++ ext4_journal_stop(handle);
+ out_unlock:
+ ext4_writepages_up_write(inode->i_sb, alloc_ctx);
+ return ret;
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 204f53b236229f..c95e3e526390d7 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -36,7 +36,6 @@ get_ext_path(struct inode *inode, ext4_lblk_t lblock,
+ *ppath = NULL;
+ return -ENODATA;
+ }
+- *ppath = path;
+ return 0;
+ }
+
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 6a95713f9193b1..7a659a31f83766 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1482,7 +1482,7 @@ static bool ext4_match(struct inode *parent,
+ }
+
+ /*
+- * Returns 0 if not found, -1 on failure, and 1 on success
++ * Returns 0 if not found, -EFSCORRUPTED on failure, and 1 on success
+ */
+ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ struct inode *dir, struct ext4_filename *fname,
+@@ -1503,7 +1503,7 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ * a full check */
+ if (ext4_check_dir_entry(dir, NULL, de, bh, search_buf,
+ buf_size, offset))
+- return -1;
++ return -EFSCORRUPTED;
+ *res_dir = de;
+ return 1;
+ }
+@@ -1511,7 +1511,7 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ de_len = ext4_rec_len_from_disk(de->rec_len,
+ dir->i_sb->s_blocksize);
+ if (de_len <= 0)
+- return -1;
++ return -EFSCORRUPTED;
+ offset += de_len;
+ de = (struct ext4_dir_entry_2 *) ((char *) de + de_len);
+ }
+@@ -1663,8 +1663,10 @@ static struct buffer_head *__ext4_find_entry(struct inode *dir,
+ goto cleanup_and_exit;
+ } else {
+ brelse(bh);
+- if (i < 0)
++ if (i < 0) {
++ ret = ERR_PTR(i);
+ goto cleanup_and_exit;
++ }
+ }
+ next:
+ if (++block >= nblocks)
+@@ -1758,7 +1760,7 @@ static struct buffer_head * ext4_dx_find_entry(struct inode *dir,
+ if (retval == 1)
+ goto success;
+ brelse(bh);
+- if (retval == -1) {
++ if (retval < 0) {
+ bh = ERR_PTR(ERR_BAD_DX_DIR);
+ goto errout;
+ }
+@@ -1999,7 +2001,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ split = count/2;
+
+ hash2 = map[split].hash;
+- continued = hash2 == map[split - 1].hash;
++ continued = split > 0 ? hash2 == map[split - 1].hash : 0;
+ dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
+ (unsigned long)dx_get_block(frame->at),
+ hash2, split, count-split));
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 0ba9837d65cac9..f12ccaabf13d8b 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -230,8 +230,8 @@ struct ext4_new_flex_group_data {
+ #define MAX_RESIZE_BG 16384
+
+ /*
+- * alloc_flex_gd() allocates a ext4_new_flex_group_data with size of
+- * @flexbg_size.
++ * alloc_flex_gd() allocates an ext4_new_flex_group_data that satisfies the
++ * resizing from @o_group to @n_group; its size is typically @flexbg_size.
+ *
+ * Returns NULL on failure otherwise address of the allocated structure.
+ */
+@@ -239,25 +239,27 @@ static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned int flexbg_size,
+ ext4_group_t o_group, ext4_group_t n_group)
+ {
+ ext4_group_t last_group;
++ unsigned int max_resize_bg;
+ struct ext4_new_flex_group_data *flex_gd;
+
+ flex_gd = kmalloc(sizeof(*flex_gd), GFP_NOFS);
+ if (flex_gd == NULL)
+ goto out3;
+
+- if (unlikely(flexbg_size > MAX_RESIZE_BG))
+- flex_gd->resize_bg = MAX_RESIZE_BG;
+- else
+- flex_gd->resize_bg = flexbg_size;
++ max_resize_bg = umin(flexbg_size, MAX_RESIZE_BG);
++ flex_gd->resize_bg = max_resize_bg;
+
+ /* Avoid allocating large 'groups' array if not needed */
+ last_group = o_group | (flex_gd->resize_bg - 1);
+ if (n_group <= last_group)
+- flex_gd->resize_bg = 1 << fls(n_group - o_group + 1);
++ flex_gd->resize_bg = 1 << fls(n_group - o_group);
+ else if (n_group - last_group < flex_gd->resize_bg)
+- flex_gd->resize_bg = 1 << max(fls(last_group - o_group + 1),
++ flex_gd->resize_bg = 1 << max(fls(last_group - o_group),
+ fls(n_group - last_group));
+
++ if (WARN_ON_ONCE(flex_gd->resize_bg > max_resize_bg))
++ flex_gd->resize_bg = max_resize_bg;
++
+ flex_gd->groups = kmalloc_array(flex_gd->resize_bg,
+ sizeof(struct ext4_new_group_data),
+ GFP_NOFS);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 7e73e13741d1e2..687d406f47a92b 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5087,16 +5087,27 @@ static int ext4_load_super(struct super_block *sb, ext4_fsblk_t *lsb,
+ return ret;
+ }
+
+-static void ext4_hash_info_init(struct super_block *sb)
++static int ext4_hash_info_init(struct super_block *sb)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ struct ext4_super_block *es = sbi->s_es;
+ unsigned int i;
+
++ sbi->s_def_hash_version = es->s_def_hash_version;
++
++ if (sbi->s_def_hash_version > DX_HASH_LAST) {
++ ext4_msg(sb, KERN_ERR,
++ "Invalid default hash set in the superblock");
++ return -EINVAL;
++ } else if (sbi->s_def_hash_version == DX_HASH_SIPHASH) {
++ ext4_msg(sb, KERN_ERR,
++ "SIPHASH is not a valid default hash value");
++ return -EINVAL;
++ }
++
+ for (i = 0; i < 4; i++)
+ sbi->s_hash_seed[i] = le32_to_cpu(es->s_hash_seed[i]);
+
+- sbi->s_def_hash_version = es->s_def_hash_version;
+ if (ext4_has_feature_dir_index(sb)) {
+ i = le32_to_cpu(es->s_flags);
+ if (i & EXT2_FLAGS_UNSIGNED_HASH)
+@@ -5114,6 +5125,7 @@ static void ext4_hash_info_init(struct super_block *sb)
+ #endif
+ }
+ }
++ return 0;
+ }
+
+ static int ext4_block_group_meta_init(struct super_block *sb, int silent)
+@@ -5261,7 +5273,9 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ if (err)
+ goto failed_mount;
+
+- ext4_hash_info_init(sb);
++ err = ext4_hash_info_init(sb);
++ if (err)
++ goto failed_mount;
+
+ err = ext4_handle_clustersize(sb);
+ if (err)
+@@ -5319,6 +5333,8 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ INIT_LIST_HEAD(&sbi->s_orphan); /* unlinked but open files */
+ mutex_init(&sbi->s_orphan_lock);
+
++ spin_lock_init(&sbi->s_bdev_wb_lock);
++
+ ext4_fast_commit_init(sb);
+
+ sb->s_root = NULL;
+@@ -5540,7 +5556,6 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ * Save the original bdev mapping's wb_err value which could be
+ * used to detect the metadata async write error.
+ */
+- spin_lock_init(&sbi->s_bdev_wb_lock);
+ errseq_check_and_advance(&sb->s_bdev->bd_mapping->wb_err,
+ &sbi->s_bdev_wb_err);
+ EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS;
+@@ -5620,8 +5635,8 @@ failed_mount8: __maybe_unused
+ failed_mount3:
+ /* flush s_sb_upd_work before sbi destroy */
+ flush_work(&sbi->s_sb_upd_work);
+- del_timer_sync(&sbi->s_err_report);
+ ext4_stop_mmpd(sbi);
++ del_timer_sync(&sbi->s_err_report);
+ ext4_group_desc_free(sbi);
+ failed_mount:
+ if (sbi->s_chksum_driver)
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 46ce2f21fef9dc..aea9e3c405f1fb 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2559,6 +2559,8 @@ ext4_xattr_set(struct inode *inode, int name_index, const char *name,
+
+ error = ext4_xattr_set_handle(handle, inode, name_index, name,
+ value, value_len, flags);
++ ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR,
++ handle);
+ error2 = ext4_journal_stop(handle);
+ if (error == -ENOSPC &&
+ ext4_should_retry_alloc(sb, &retries))
+@@ -2566,7 +2568,6 @@ ext4_xattr_set(struct inode *inode, int name_index, const char *name,
+ if (error == 0)
+ error = error2;
+ }
+- ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR, NULL);
+
+ return error;
+ }
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 40e96d577982c0..3bdfa53b24bb9e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -134,6 +134,12 @@ typedef u32 nid_t;
+
+ #define COMPRESS_EXT_NUM 16
+
++enum blkzone_allocation_policy {
++ BLKZONE_ALLOC_PRIOR_SEQ, /* Prioritize writing to sequential zones */
++ BLKZONE_ALLOC_ONLY_SEQ, /* Only allow writing to sequential zones */
++ BLKZONE_ALLOC_PRIOR_CONV, /* Prioritize writing to conventional zones */
++};
++
+ /*
+ * An implementation of an rwsem that is explicitly unfair to readers. This
+ * prevents priority inversion when a low-priority reader acquires the read lock
+@@ -1295,6 +1301,7 @@ struct f2fs_gc_control {
+ bool no_bg_gc; /* check the space and stop bg_gc */
+ bool should_migrate_blocks; /* should migrate blocks */
+ bool err_gc_skipped; /* return EAGAIN if GC skipped */
++ bool one_time; /* require one time GC in one migration unit */
+ unsigned int nr_free_secs; /* # of free sections to do GC */
+ };
+
+@@ -1563,6 +1570,8 @@ struct f2fs_sb_info {
+ #ifdef CONFIG_BLK_DEV_ZONED
+ unsigned int blocks_per_blkz; /* F2FS blocks per zone */
+ unsigned int max_open_zones; /* max open zone resources of the zoned device */
++ /* For adjusting the priority of the data write position on zoned UFS */
++ unsigned int blkzone_alloc_policy;
+ #endif
+
+ /* for node-related operations */
+@@ -1689,6 +1698,8 @@ struct f2fs_sb_info {
+ unsigned int max_victim_search;
+ /* migration granularity of garbage collection, unit: segment */
+ unsigned int migration_granularity;
++ /* migration window granularity of garbage collection, unit: segment */
++ unsigned int migration_window_granularity;
+
+ /*
+ * for stat information.
+@@ -2862,13 +2873,26 @@ static inline bool is_inflight_io(struct f2fs_sb_info *sbi, int type)
+ return false;
+ }
+
++static inline bool is_inflight_read_io(struct f2fs_sb_info *sbi)
++{
++ return get_pages(sbi, F2FS_RD_DATA) || get_pages(sbi, F2FS_DIO_READ);
++}
++
+ static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
+ {
++ bool zoned_gc = (type == GC_TIME &&
++ F2FS_HAS_FEATURE(sbi, F2FS_FEATURE_BLKZONED));
++
+ if (sbi->gc_mode == GC_URGENT_HIGH)
+ return true;
+
+- if (is_inflight_io(sbi, type))
+- return false;
++ if (zoned_gc) {
++ if (is_inflight_read_io(sbi))
++ return false;
++ } else {
++ if (is_inflight_io(sbi, type))
++ return false;
++ }
+
+ if (sbi->gc_mode == GC_URGENT_MID)
+ return true;
+@@ -2877,6 +2901,9 @@ static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
+ (type == DISCARD_TIME || type == GC_TIME))
+ return true;
+
++ if (zoned_gc)
++ return true;
++
+ return f2fs_time_over(sbi, type);
+ }
+
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 724bbcb447d32e..938249e7819e4e 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -81,6 +81,8 @@ static int gc_thread_func(void *data)
+ continue;
+ }
+
++ gc_control.one_time = false;
++
+ /*
+ * [GC triggering condition]
+ * 0. GC is not conducted currently.
+@@ -116,15 +118,29 @@ static int gc_thread_func(void *data)
+ goto next;
+ }
+
+- if (has_enough_invalid_blocks(sbi))
++ if (f2fs_sb_has_blkzoned(sbi)) {
++ if (has_enough_free_blocks(sbi, LIMIT_NO_ZONED_GC)) {
++ wait_ms = gc_th->no_gc_sleep_time;
++ f2fs_up_write(&sbi->gc_lock);
++ goto next;
++ }
++ if (wait_ms == gc_th->no_gc_sleep_time)
++ wait_ms = gc_th->max_sleep_time;
++ }
++
++ if (need_to_boost_gc(sbi)) {
+ decrease_sleep_time(gc_th, &wait_ms);
+- else
++ if (f2fs_sb_has_blkzoned(sbi))
++ gc_control.one_time = true;
++ } else {
+ increase_sleep_time(gc_th, &wait_ms);
++ }
+ do_gc:
+ stat_inc_gc_call_count(sbi, foreground ?
+ FOREGROUND : BACKGROUND);
+
+- sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC;
++ sync_mode = (F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC) ||
++ gc_control.one_time;
+
+ /* foreground GC was been triggered via f2fs_balance_fs() */
+ if (foreground)
+@@ -179,9 +195,16 @@ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi)
+ return -ENOMEM;
+
+ gc_th->urgent_sleep_time = DEF_GC_THREAD_URGENT_SLEEP_TIME;
+- gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME;
+- gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
+- gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
++
++ if (f2fs_sb_has_blkzoned(sbi)) {
++ gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME_ZONED;
++ gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME_ZONED;
++ gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME_ZONED;
++ } else {
++ gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME;
++ gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
++ gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
++ }
+
+ gc_th->gc_wake = false;
+
+@@ -1684,31 +1707,49 @@ static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim,
+ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ unsigned int start_segno,
+ struct gc_inode_list *gc_list, int gc_type,
+- bool force_migrate)
++ bool force_migrate, bool one_time)
+ {
+ struct page *sum_page;
+ struct f2fs_summary_block *sum;
+ struct blk_plug plug;
+ unsigned int segno = start_segno;
+ unsigned int end_segno = start_segno + SEGS_PER_SEC(sbi);
++ unsigned int sec_end_segno;
+ int seg_freed = 0, migrated = 0;
+ unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ?
+ SUM_TYPE_DATA : SUM_TYPE_NODE;
+ unsigned char data_type = (type == SUM_TYPE_DATA) ? DATA : NODE;
+ int submitted = 0;
+
+- if (__is_large_section(sbi))
+- end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
++ if (__is_large_section(sbi)) {
++ sec_end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
+
+- /*
+- * zone-capacity can be less than zone-size in zoned devices,
+- * resulting in less than expected usable segments in the zone,
+- * calculate the end segno in the zone which can be garbage collected
+- */
+- if (f2fs_sb_has_blkzoned(sbi))
+- end_segno -= SEGS_PER_SEC(sbi) -
++ /*
++ * zone-capacity can be less than zone-size in zoned devices,
++ * resulting in less than expected usable segments in the zone,
++ * calculate the end segno in the zone which can be garbage
++ * collected
++ */
++ if (f2fs_sb_has_blkzoned(sbi))
++ sec_end_segno -= SEGS_PER_SEC(sbi) -
+ f2fs_usable_segs_in_sec(sbi, segno);
+
++ if (gc_type == BG_GC || one_time) {
++ unsigned int window_granularity =
++ sbi->migration_window_granularity;
++
++ if (f2fs_sb_has_blkzoned(sbi) &&
++ !has_enough_free_blocks(sbi,
++ LIMIT_BOOST_ZONED_GC))
++ window_granularity *= BOOST_GC_MULTIPLE;
++
++ end_segno = start_segno + window_granularity;
++ }
++
++ if (end_segno > sec_end_segno)
++ end_segno = sec_end_segno;
++ }
++
+ sanity_check_seg_type(sbi, get_seg_entry(sbi, segno)->type);
+
+ /* readahead multi ssa blocks those have contiguous address */
+@@ -1786,7 +1827,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+
+ if (__is_large_section(sbi))
+ sbi->next_victim_seg[gc_type] =
+- (segno + 1 < end_segno) ? segno + 1 : NULL_SEGNO;
++ (segno + 1 < sec_end_segno) ?
++ segno + 1 : NULL_SEGNO;
+ skip:
+ f2fs_put_page(sum_page, 0);
+ }
+@@ -1875,7 +1917,8 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ }
+
+ seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type,
+- gc_control->should_migrate_blocks);
++ gc_control->should_migrate_blocks,
++ gc_control->one_time);
+ if (seg_freed < 0)
+ goto stop;
+
+@@ -1886,6 +1929,9 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ total_sec_freed++;
+ }
+
++ if (gc_control->one_time)
++ goto stop;
++
+ if (gc_type == FG_GC) {
+ sbi->cur_victim_sec = NULL_SEGNO;
+
+@@ -2010,8 +2056,7 @@ int f2fs_gc_range(struct f2fs_sb_info *sbi,
+ .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ };
+
+- do_garbage_collect(sbi, segno, &gc_list, FG_GC,
+- dry_run_sections == 0);
++ do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
+ put_gc_inode(&gc_list);
+
+ if (!dry_run && get_valid_blocks(sbi, segno, true))
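
The do_garbage_collect() changes above bound each pass to a migration window, widen that window by BOOST_GC_MULTIPLE when a zoned device is short on free sections, and clamp it to the usable end of the section. A minimal userspace sketch of that window arithmetic follows; the inputs (segs_per_sec, usable_segs, the low-space flag) are invented stand-ins for the sbi fields, not the f2fs API.

#include <stdbool.h>
#include <stdio.h>

#define BOOST_GC_MULTIPLE 5

/* Window of segments one GC pass may migrate: round up to the section
 * end, drop the unusable tail of the zone, widen the window under
 * free-space pressure, then clamp the window to the section end. */
static unsigned int gc_window_end(unsigned int start, unsigned int segs_per_sec,
                                  unsigned int usable_segs, unsigned int window,
                                  bool low_on_free_space)
{
        unsigned int sec_end = (start / segs_per_sec + 1) * segs_per_sec;
        unsigned int end;

        sec_end -= segs_per_sec - usable_segs;  /* zone capacity < zone size */

        if (low_on_free_space)
                window *= BOOST_GC_MULTIPLE;    /* migrate more aggressively */

        end = start + window;
        return end > sec_end ? sec_end : end;
}

int main(void)
{
        /* 512 segments per section, 500 usable, 3-segment window */
        printf("%u\n", gc_window_end(0, 512, 500, 3, false));  /* 3 */
        printf("%u\n", gc_window_end(0, 512, 500, 3, true));   /* 15 */
        printf("%u\n", gc_window_end(498, 512, 500, 3, true)); /* 500, clamped */
        return 0;
}
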
+diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h
+index a8ea3301b815ad..78abeebd68b5ec 100644
+--- a/fs/f2fs/gc.h
++++ b/fs/f2fs/gc.h
+@@ -15,6 +15,11 @@
+ #define DEF_GC_THREAD_MAX_SLEEP_TIME 60000
+ #define DEF_GC_THREAD_NOGC_SLEEP_TIME 300000 /* wait 5 min */
+
++/* GC sleep parameters for zoned devices */
++#define DEF_GC_THREAD_MIN_SLEEP_TIME_ZONED 10
++#define DEF_GC_THREAD_MAX_SLEEP_TIME_ZONED 20
++#define DEF_GC_THREAD_NOGC_SLEEP_TIME_ZONED 60000
++
+ /* choose candidates from sections which has age of more than 7 days */
+ #define DEF_GC_THREAD_AGE_THRESHOLD (60 * 60 * 24 * 7)
+ #define DEF_GC_THREAD_CANDIDATE_RATIO 20 /* select 20% oldest sections as candidates */
+@@ -25,6 +30,11 @@
+ #define LIMIT_INVALID_BLOCK 40 /* percentage over total user space */
+ #define LIMIT_FREE_BLOCK 40 /* percentage over invalid + free space */
+
++#define LIMIT_NO_ZONED_GC 60 /* percentage over total user space of no gc for zoned devices */
++#define LIMIT_BOOST_ZONED_GC 25 /* percentage over total user space of boosted gc for zoned devices */
++#define DEF_MIGRATION_WINDOW_GRANULARITY_ZONED 3
++#define BOOST_GC_MULTIPLE 5
++
+ #define DEF_GC_FAILED_PINNED_FILES 2048
+ #define MAX_GC_FAILED_PINNED_FILES USHRT_MAX
+
+@@ -152,6 +162,12 @@ static inline void decrease_sleep_time(struct f2fs_gc_kthread *gc_th,
+ *wait -= min_time;
+ }
+
++static inline bool has_enough_free_blocks(struct f2fs_sb_info *sbi,
++ unsigned int limit_perc)
++{
++ return free_sections(sbi) > ((sbi->total_sections * limit_perc) / 100);
++}
++
+ static inline bool has_enough_invalid_blocks(struct f2fs_sb_info *sbi)
+ {
+ block_t user_block_count = sbi->user_block_count;
+@@ -167,3 +183,10 @@ static inline bool has_enough_invalid_blocks(struct f2fs_sb_info *sbi)
+ free_user_blocks(sbi) <
+ limit_free_user_blocks(invalid_user_blocks));
+ }
++
++static inline bool need_to_boost_gc(struct f2fs_sb_info *sbi)
++{
++ if (f2fs_sb_has_blkzoned(sbi))
++ return !has_enough_free_blocks(sbi, LIMIT_BOOST_ZONED_GC);
++ return has_enough_invalid_blocks(sbi);
++}
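
need_to_boost_gc() above switches zoned devices from the invalid-block heuristic to a plain free-section threshold, using the new LIMIT_*_ZONED_GC percentages. A small self-contained illustration of those thresholds; the harness and function names are ours, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Thresholds mirroring the patch: no GC above 60% free sections,
 * boosted GC below 25% free (zoned devices only). */
#define LIMIT_NO_ZONED_GC    60
#define LIMIT_BOOST_ZONED_GC 25

/* Mirrors has_enough_free_blocks(): free sections above limit_perc%? */
static bool has_enough_free(unsigned int total, unsigned int free_secs,
                            unsigned int limit_perc)
{
        return free_secs > ((total * limit_perc) / 100);
}

int main(void)
{
        const unsigned int total = 100;

        for (unsigned int free_secs = 0; free_secs <= 100; free_secs += 20) {
                bool no_gc = has_enough_free(total, free_secs, LIMIT_NO_ZONED_GC);
                bool boost = !has_enough_free(total, free_secs, LIMIT_BOOST_ZONED_GC);

                printf("free=%3u%%  no_gc=%d  boost=%d\n",
                       free_secs, no_gc, boost);
        }
        return 0;
}
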
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 418f2e663f6acb..c9320c98cb1425 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2701,22 +2701,47 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ goto got_it;
+ }
+
++#ifdef CONFIG_BLK_DEV_ZONED
+ /*
+ * If we format f2fs on zoned storage, let's try to get pinned sections
+ * from beginning of the storage, which should be a conventional one.
+ */
+ if (f2fs_sb_has_blkzoned(sbi)) {
+- segno = pinning ? 0 : max(first_zoned_segno(sbi), *newseg);
++ /* Prioritize writing to conventional zones */
++ if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning)
++ segno = 0;
++ else
++ segno = max(first_zoned_segno(sbi), *newseg);
+ hint = GET_SEC_FROM_SEG(sbi, segno);
+ }
++#endif
+
+ find_other_zone:
+ secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
++
++#ifdef CONFIG_BLK_DEV_ZONED
++ if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) {
++ /* Write only to sequential zones */
++ if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) {
++ hint = GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi));
++ secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
++ } else
++ secno = find_first_zero_bit(free_i->free_secmap,
++ MAIN_SECS(sbi));
++ if (secno >= MAIN_SECS(sbi)) {
++ ret = -ENOSPC;
++ f2fs_bug_on(sbi, 1);
++ goto out_unlock;
++ }
++ }
++#endif
++
+ if (secno >= MAIN_SECS(sbi)) {
+ secno = find_first_zero_bit(free_i->free_secmap,
+ MAIN_SECS(sbi));
+ if (secno >= MAIN_SECS(sbi)) {
+ ret = -ENOSPC;
++ f2fs_bug_on(sbi, 1);
+ goto out_unlock;
+ }
+ }
+@@ -2758,10 +2783,8 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ out_unlock:
+ spin_unlock(&free_i->segmap_lock);
+
+- if (ret == -ENOSPC) {
++ if (ret == -ENOSPC)
+ f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_NO_SEGMENT);
+- f2fs_bug_on(sbi, 1);
+- }
+ return ret;
+ }
+
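
With blkzone_alloc_policy, get_new_segment() above picks its search hint by policy and, for BLKZONE_ALLOC_ONLY_SEQ, retries only within the sequential-zone region instead of falling back to the whole device. A rough and heavily simplified sketch of that two-pass bitmap search; pick_section() and its arguments are illustrative stand-ins for the kernel helpers.

#include <stdbool.h>

enum { ALLOC_PRIOR_SEQ, ALLOC_ONLY_SEQ, ALLOC_PRIOR_CONV };

/* Find a free section honouring the zone-allocation policy: start the
 * search at a policy-dependent hint, then either retry from the first
 * sequential zone (ONLY_SEQ) or fall back to scanning from zero. */
static int pick_section(const bool *free_map, int nsecs,
                        int first_zoned, int policy, int hint)
{
        int start = (policy == ALLOC_PRIOR_CONV) ? 0 : hint;
        int s;

        for (s = start; s < nsecs; s++)         /* first pass from the hint */
                if (free_map[s])
                        return s;

        start = (policy == ALLOC_ONLY_SEQ) ? first_zoned : 0;
        for (s = start; s < nsecs; s++)         /* fallback pass */
                if (free_map[s])
                        return s;

        return -1;                              /* -ENOSPC in the kernel */
}

int main(void)
{
        bool free_map[8] = { true, false, false, true, false, true, false, false };
        /* sections 0..3 conventional, 4..7 sequential (first_zoned = 4);
         * ONLY_SEQ must skip the free conventional sections 0 and 3 */
        return pick_section(free_map, 8, 4, ALLOC_ONLY_SEQ, 6) == 5 ? 0 : 1;
}
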
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 29754dc50fa47a..0f6e2b3f6a4c51 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -707,6 +707,11 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ if (!strcmp(name, "on")) {
+ F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
+ } else if (!strcmp(name, "off")) {
++ if (f2fs_sb_has_blkzoned(sbi)) {
++ f2fs_warn(sbi, "zoned devices need bggc");
++ kfree(name);
++ return -EINVAL;
++ }
+ F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_OFF;
+ } else if (!strcmp(name, "sync")) {
+ F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_SYNC;
+@@ -3786,6 +3791,8 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
+ sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
+ sbi->max_victim_search = DEF_MAX_VICTIM_SEARCH;
+ sbi->migration_granularity = SEGS_PER_SEC(sbi);
++ sbi->migration_window_granularity = f2fs_sb_has_blkzoned(sbi) ?
++ DEF_MIGRATION_WINDOW_GRANULARITY_ZONED : SEGS_PER_SEC(sbi);
+ sbi->seq_file_ra_mul = MIN_RA_MUL;
+ sbi->max_fragment_chunk = DEF_FRAGMENT_SIZE;
+ sbi->max_fragment_hole = DEF_FRAGMENT_SIZE;
+@@ -4221,6 +4228,7 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ sbi->aligned_blksize = true;
+ #ifdef CONFIG_BLK_DEV_ZONED
+ sbi->max_open_zones = UINT_MAX;
++ sbi->blkzone_alloc_policy = BLKZONE_ALLOC_PRIOR_SEQ;
+ #endif
+
+ for (i = 0; i < max_devices; i++) {
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index fee7ee45ceaaab..9f26719425cf7b 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -561,6 +561,11 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ return -EINVAL;
+ }
+
++ if (!strcmp(a->attr.name, "migration_window_granularity")) {
++ if (t == 0 || t > SEGS_PER_SEC(sbi))
++ return -EINVAL;
++ }
++
+ if (!strcmp(a->attr.name, "gc_urgent")) {
+ if (t == 0) {
+ sbi->gc_mode = GC_NORMAL;
+@@ -627,6 +632,15 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ }
+ #endif
+
++#ifdef CONFIG_BLK_DEV_ZONED
++ if (!strcmp(a->attr.name, "blkzone_alloc_policy")) {
++ if (t < BLKZONE_ALLOC_PRIOR_SEQ || t > BLKZONE_ALLOC_PRIOR_CONV)
++ return -EINVAL;
++ sbi->blkzone_alloc_policy = t;
++ return count;
++ }
++#endif
++
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ if (!strcmp(a->attr.name, "compr_written_block") ||
+ !strcmp(a->attr.name, "compr_saved_block")) {
+@@ -1001,6 +1015,7 @@ F2FS_SBI_RW_ATTR(gc_pin_file_thresh, gc_pin_file_threshold);
+ F2FS_SBI_RW_ATTR(gc_reclaimed_segments, gc_reclaimed_segs);
+ F2FS_SBI_GENERAL_RW_ATTR(max_victim_search);
+ F2FS_SBI_GENERAL_RW_ATTR(migration_granularity);
++F2FS_SBI_GENERAL_RW_ATTR(migration_window_granularity);
+ F2FS_SBI_GENERAL_RW_ATTR(dir_level);
+ #ifdef CONFIG_F2FS_IOSTAT
+ F2FS_SBI_GENERAL_RW_ATTR(iostat_enable);
+@@ -1033,6 +1048,7 @@ F2FS_SBI_GENERAL_RW_ATTR(warm_data_age_threshold);
+ F2FS_SBI_GENERAL_RW_ATTR(last_age_weight);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
++F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy);
+ #endif
+
+ /* STAT_INFO ATTR */
+@@ -1140,6 +1156,7 @@ static struct attribute *f2fs_attrs[] = {
+ ATTR_LIST(min_ssr_sections),
+ ATTR_LIST(max_victim_search),
+ ATTR_LIST(migration_granularity),
++ ATTR_LIST(migration_window_granularity),
+ ATTR_LIST(dir_level),
+ ATTR_LIST(ram_thresh),
+ ATTR_LIST(ra_nid_pages),
+@@ -1187,6 +1204,7 @@ static struct attribute *f2fs_attrs[] = {
+ #endif
+ #ifdef CONFIG_BLK_DEV_ZONED
+ ATTR_LIST(unusable_blocks_per_sec),
++ ATTR_LIST(blkzone_alloc_policy),
+ #endif
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ ATTR_LIST(compr_written_block),
+diff --git a/fs/file.c b/fs/file.c
+index 655338effe9c72..c2403cde40e4a4 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -272,59 +272,45 @@ static inline bool fd_is_open(unsigned int fd, const struct fdtable *fdt)
+ return test_bit(fd, fdt->open_fds);
+ }
+
+-static unsigned int count_open_files(struct fdtable *fdt)
+-{
+- unsigned int size = fdt->max_fds;
+- unsigned int i;
+-
+- /* Find the last open fd */
+- for (i = size / BITS_PER_LONG; i > 0; ) {
+- if (fdt->open_fds[--i])
+- break;
+- }
+- i = (i + 1) * BITS_PER_LONG;
+- return i;
+-}
+-
+ /*
+ * Note that a sane fdtable size always has to be a multiple of
+ * BITS_PER_LONG, since we have bitmaps that are sized by this.
+ *
+- * 'max_fds' will normally already be properly aligned, but it
+- * turns out that in the close_range() -> __close_range() ->
+- * unshare_fd() -> dup_fd() -> sane_fdtable_size() we can end
+- * up having a 'max_fds' value that isn't already aligned.
+- *
+- * Rather than make close_range() have to worry about this,
+- * just make that BITS_PER_LONG alignment be part of a sane
+- * fdtable size. Becuase that's really what it is.
++ * punch_hole is optional - when close_range() is asked to unshare
++ * and close, we don't need to copy descriptors in that range, so
++ * a smaller cloned descriptor table might suffice if the last
++ * currently opened descriptor falls into that range.
+ */
+-static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
++static unsigned int sane_fdtable_size(struct fdtable *fdt, struct fd_range *punch_hole)
+ {
+- unsigned int count;
+-
+- count = count_open_files(fdt);
+- if (max_fds < NR_OPEN_DEFAULT)
+- max_fds = NR_OPEN_DEFAULT;
+- return ALIGN(min(count, max_fds), BITS_PER_LONG);
++ unsigned int last = find_last_bit(fdt->open_fds, fdt->max_fds);
++
++ if (last == fdt->max_fds)
++ return NR_OPEN_DEFAULT;
++ if (punch_hole && punch_hole->to >= last && punch_hole->from <= last) {
++ last = find_last_bit(fdt->open_fds, punch_hole->from);
++ if (last == punch_hole->from)
++ return NR_OPEN_DEFAULT;
++ }
++ return ALIGN(last + 1, BITS_PER_LONG);
+ }
+
+ /*
+- * Allocate a new files structure and copy contents from the
+- * passed in files structure.
+- * errorp will be valid only when the returned files_struct is NULL.
++ * Allocate a new descriptor table and copy contents from the passed in
++ * instance. Returns a pointer to cloned table on success, ERR_PTR()
++ * on failure. For 'punch_hole' see sane_fdtable_size().
+ */
+-struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int *errorp)
++struct files_struct *dup_fd(struct files_struct *oldf, struct fd_range *punch_hole)
+ {
+ struct files_struct *newf;
+ struct file **old_fds, **new_fds;
+ unsigned int open_files, i;
+ struct fdtable *old_fdt, *new_fdt;
++ int error;
+
+- *errorp = -ENOMEM;
+ newf = kmem_cache_alloc(files_cachep, GFP_KERNEL);
+ if (!newf)
+- goto out;
++ return ERR_PTR(-ENOMEM);
+
+ atomic_set(&newf->count, 1);
+
+@@ -341,7 +327,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+
+ spin_lock(&oldf->file_lock);
+ old_fdt = files_fdtable(oldf);
+- open_files = sane_fdtable_size(old_fdt, max_fds);
++ open_files = sane_fdtable_size(old_fdt, punch_hole);
+
+ /*
+ * Check whether we need to allocate a larger fd array and fd set.
+@@ -354,14 +340,14 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+
+ new_fdt = alloc_fdtable(open_files - 1);
+ if (!new_fdt) {
+- *errorp = -ENOMEM;
++ error = -ENOMEM;
+ goto out_release;
+ }
+
+ /* beyond sysctl_nr_open; nothing to do */
+ if (unlikely(new_fdt->max_fds < open_files)) {
+ __free_fdtable(new_fdt);
+- *errorp = -EMFILE;
++ error = -EMFILE;
+ goto out_release;
+ }
+
+@@ -372,7 +358,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ */
+ spin_lock(&oldf->file_lock);
+ old_fdt = files_fdtable(oldf);
+- open_files = sane_fdtable_size(old_fdt, max_fds);
++ open_files = sane_fdtable_size(old_fdt, punch_hole);
+ }
+
+ copy_fd_bitmaps(new_fdt, old_fdt, open_files / BITS_PER_LONG);
+@@ -406,8 +392,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+
+ out_release:
+ kmem_cache_free(files_cachep, newf);
+-out:
+- return NULL;
++ return ERR_PTR(error);
+ }
+
+ static struct fdtable *close_files(struct files_struct * files)
+@@ -748,37 +733,25 @@ int __close_range(unsigned fd, unsigned max_fd, unsigned int flags)
+ if (fd > max_fd)
+ return -EINVAL;
+
+- if (flags & CLOSE_RANGE_UNSHARE) {
+- int ret;
+- unsigned int max_unshare_fds = NR_OPEN_MAX;
++ if ((flags & CLOSE_RANGE_UNSHARE) && atomic_read(&cur_fds->count) > 1) {
++		struct fd_range range = {fd, max_fd}, *punch_hole = &range;
+
+ /*
+ * If the caller requested all fds to be made cloexec we always
+ * copy all of the file descriptors since they still want to
+ * use them.
+ */
+- if (!(flags & CLOSE_RANGE_CLOEXEC)) {
+- /*
+- * If the requested range is greater than the current
+- * maximum, we're closing everything so only copy all
+- * file descriptors beneath the lowest file descriptor.
+- */
+- rcu_read_lock();
+- if (max_fd >= last_fd(files_fdtable(cur_fds)))
+- max_unshare_fds = fd;
+- rcu_read_unlock();
+- }
+-
+- ret = unshare_fd(CLONE_FILES, max_unshare_fds, &fds);
+- if (ret)
+- return ret;
++ if (flags & CLOSE_RANGE_CLOEXEC)
++ punch_hole = NULL;
+
++ fds = dup_fd(cur_fds, punch_hole);
++ if (IS_ERR(fds))
++ return PTR_ERR(fds);
+ /*
+ * We used to share our file descriptor table, and have now
+ * created a private one, make sure we're using it below.
+ */
+- if (fds)
+- swap(cur_fds, fds);
++ swap(cur_fds, fds);
+ }
+
+ if (flags & CLOSE_RANGE_CLOEXEC)
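
The fs/file.c rework above replaces count_open_files() and the max_fds argument with one find_last_bit() scan plus an optional punch_hole, so close_range(CLOSE_RANGE_UNSHARE) can clone a smaller table when the highest open descriptor is about to be closed anyway. A compilable model of that sizing rule, assuming 64-bit longs; find_last_set() mimics find_last_bit()'s return-size-when-empty convention, and the demo values are invented.

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define NR_OPEN_DEFAULT 64UL
#define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

/* Highest set bit index < size, or size if none (find_last_bit semantics). */
static unsigned long find_last_set(const unsigned long *map, unsigned long size)
{
        for (unsigned long i = size; i-- > 0; )
                if (map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
                        return i;
        return size;
}

struct fd_range { unsigned int from, to; };

static unsigned long sane_size(const unsigned long *open_fds,
                               unsigned long max_fds,
                               const struct fd_range *punch_hole)
{
        unsigned long last = find_last_set(open_fds, max_fds);

        if (last == max_fds)
                return NR_OPEN_DEFAULT;         /* nothing open at all */
        if (punch_hole && punch_hole->to >= last && punch_hole->from <= last) {
                /* highest fd is being closed; look below the hole instead */
                last = find_last_set(open_fds, punch_hole->from);
                if (last == punch_hole->from)
                        return NR_OPEN_DEFAULT;
        }
        return ALIGN_UP(last + 1, BITS_PER_LONG);
}

int main(void)
{
        unsigned long open_fds[2] = { 0 };
        struct fd_range hole = { 64, 128 };     /* closing fds 64..128 */

        open_fds[1] = 1UL << 5;                 /* only fd 69 is open */
        printf("%lu\n", sane_size(open_fds, 128, NULL));   /* 128 */
        printf("%lu\n", sane_size(open_fds, 128, &hole));  /* 64 (default) */
        return 0;
}

The design point is that the hole is consulted only when it covers the highest open descriptor; holes strictly below it cannot shrink the table, so no second scan is needed.
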
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 12a769077ea0b3..4775c2cb8ae1b7 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -2249,6 +2249,7 @@ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
+ gfs2_free_dead_glocks(sdp);
+ glock_hash_walk(dump_glock_func, sdp);
+ destroy_workqueue(sdp->sd_glock_wq);
++ sdp->sd_glock_wq = NULL;
+ }
+
+ static const char *state2str(unsigned state)
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index ff1f3e3dc65c9d..e83d293c361423 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1307,7 +1307,8 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ fail_delete_wq:
+ destroy_workqueue(sdp->sd_delete_wq);
+ fail_glock_wq:
+- destroy_workqueue(sdp->sd_glock_wq);
++ if (sdp->sd_glock_wq)
++ destroy_workqueue(sdp->sd_glock_wq);
+ fail_free:
+ free_sbd(sdp);
+ sb->s_fs_info = NULL;
+diff --git a/fs/inode.c b/fs/inode.c
+index 7125b73b536753..30d42ab137f0a6 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -595,6 +595,7 @@ void dump_mapping(const struct address_space *mapping)
+ struct hlist_node *dentry_first;
+ struct dentry *dentry_ptr;
+ struct dentry dentry;
++ char fname[64] = {};
+ unsigned long ino;
+
+ /*
+@@ -631,11 +632,14 @@ void dump_mapping(const struct address_space *mapping)
+ return;
+ }
+
++ if (strncpy_from_kernel_nofault(fname, dentry.d_name.name, 63) < 0)
++ strscpy(fname, "<invalid>");
+ /*
+- * if dentry is corrupted, the %pd handler may still crash,
+- * but it's unlikely that we reach here with a corrupt mapping
++ * Even if strncpy_from_kernel_nofault() succeeded,
++ * the fname could be unreliable
+ */
+- pr_warn("aops:%ps ino:%lx dentry name:\"%pd\"\n", a_ops, ino, &dentry);
++ pr_warn("aops:%ps ino:%lx dentry name(?):\"%s\"\n",
++ a_ops, ino, fname);
+ }
+
+ void clear_inode(struct inode *inode)
+@@ -1574,9 +1578,7 @@ struct inode *ilookup(struct super_block *sb, unsigned long ino)
+ struct hlist_head *head = inode_hashtable + hash(sb, ino);
+ struct inode *inode;
+ again:
+- spin_lock(&inode_hash_lock);
+- inode = find_inode_fast(sb, head, ino, true);
+- spin_unlock(&inode_hash_lock);
++ inode = find_inode_fast(sb, head, ino, false);
+
+ if (inode) {
+ if (IS_ERR(inode))
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index f420c53d86acc5..8e6edb6628183a 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1241,7 +1241,15 @@ static int iomap_write_delalloc_release(struct inode *inode,
+ error = data_end;
+ goto out_unlock;
+ }
+- WARN_ON_ONCE(data_end <= start_byte);
++
++ /*
++ * If we race with post-direct I/O invalidation of the page cache,
++ * there might be no data left at start_byte.
++ */
++ if (data_end == start_byte)
++ continue;
++
++ WARN_ON_ONCE(data_end < start_byte);
+ WARN_ON_ONCE(data_end > scan_end_byte);
+
+ error = iomap_write_delalloc_scan(inode, &punch_start_byte,
+@@ -1382,11 +1390,15 @@ iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
+ struct iomap_iter iter = {
+ .inode = inode,
+ .pos = pos,
+- .len = len,
+ .flags = IOMAP_WRITE | IOMAP_UNSHARE,
+ };
++ loff_t size = i_size_read(inode);
+ int ret;
+
++ if (pos < 0 || pos >= size)
++ return 0;
++
++ iter.len = min(len, size - pos);
+ while ((ret = iomap_iter(&iter, ops)) > 0)
+ iter.processed = iomap_unshare_iter(&iter);
+ return ret;
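
iomap_file_unshare() above now rejects ranges starting at or beyond EOF and trims the length so the iterator never walks past i_size. The clamp in isolation, with int64_t approximating loff_t and a function name of our choosing:

#include <stdint.h>
#include <stdio.h>

/* Clamp [pos, pos+len) against the file size: reject ranges fully
 * beyond EOF, trim ones that cross it. */
static int64_t clamp_len(int64_t pos, int64_t len, int64_t size)
{
        if (pos < 0 || pos >= size)
                return 0;                       /* nothing inside the file */
        return len < size - pos ? len : size - pos;
}

int main(void)
{
        printf("%lld\n", (long long)clamp_len(10, 100, 50)); /* 40 */
        printf("%lld\n", (long long)clamp_len(60, 100, 50)); /* 0  */
        return 0;
}
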
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 951f78634adfaa..b3971e91e8eb80 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -79,17 +79,23 @@ __releases(&journal->j_state_lock)
+ if (space_left < nblocks) {
+ int chkpt = journal->j_checkpoint_transactions != NULL;
+ tid_t tid = 0;
++ bool has_transaction = false;
+
+- if (journal->j_committing_transaction)
++ if (journal->j_committing_transaction) {
+ tid = journal->j_committing_transaction->t_tid;
++ has_transaction = true;
++ }
+ spin_unlock(&journal->j_list_lock);
+ write_unlock(&journal->j_state_lock);
+ if (chkpt) {
+ jbd2_log_do_checkpoint(journal);
+- } else if (jbd2_cleanup_journal_tail(journal) == 0) {
+- /* We were able to recover space; yay! */
++ } else if (jbd2_cleanup_journal_tail(journal) <= 0) {
++ /*
++ * We were able to recover space or the
++ * journal was aborted due to an error.
++ */
+ ;
+- } else if (tid) {
++ } else if (has_transaction) {
+ /*
+ * jbd2_journal_commit_transaction() may want
+ * to take the checkpoint_mutex if JBD2_FLUSHED
+@@ -407,6 +413,7 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal,
+ tid_t tid = 0;
+ unsigned long nr_freed = 0;
+ unsigned long freed;
++ bool first_set = false;
+
+ again:
+ spin_lock(&journal->j_list_lock);
+@@ -426,8 +433,10 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal,
+ else
+ transaction = journal->j_checkpoint_transactions;
+
+- if (!first_tid)
++ if (!first_set) {
+ first_tid = transaction->t_tid;
++ first_set = true;
++ }
+ last_transaction = journal->j_checkpoint_transactions->t_cpprev;
+ next_transaction = transaction;
+ last_tid = last_transaction->t_tid;
+@@ -457,7 +466,7 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal,
+ spin_unlock(&journal->j_list_lock);
+ cond_resched();
+
+- if (*nr_to_scan && next_tid)
++ if (*nr_to_scan && journal->j_shrink_transaction)
+ goto again;
+ out:
+ trace_jbd2_shrink_checkpoint_list(journal, first_tid, tid, last_tid,
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 1ebf2393bfb762..33a975193f1f10 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -710,7 +710,7 @@ int jbd2_fc_begin_commit(journal_t *journal, tid_t tid)
+ return -EINVAL;
+
+ write_lock(&journal->j_state_lock);
+- if (tid <= journal->j_commit_sequence) {
++ if (tid_geq(journal->j_commit_sequence, tid)) {
+ write_unlock(&journal->j_state_lock);
+ return -EALREADY;
+ }
+@@ -740,9 +740,9 @@ EXPORT_SYMBOL(jbd2_fc_begin_commit);
+ */
+ static int __jbd2_fc_end_commit(journal_t *journal, tid_t tid, bool fallback)
+ {
+- jbd2_journal_unlock_updates(journal);
+ if (journal->j_fc_cleanup_callback)
+ journal->j_fc_cleanup_callback(journal, 0, tid);
++ jbd2_journal_unlock_updates(journal);
+ write_lock(&journal->j_state_lock);
+ journal->j_flags &= ~JBD2_FAST_COMMIT_ONGOING;
+ if (fallback)
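
jbd2_fc_begin_commit() above replaces a plain tid <= j_commit_sequence test with tid_geq(), which stays correct when the 32-bit transaction counter wraps around. A short demonstration of that serial-number comparison, modelled on jbd2's inline tid_geq(); the main() harness is ours.

#include <stdint.h>
#include <stdio.h>

typedef uint32_t tid_t;

/* Wraparound-safe "x >= y" for transaction IDs, in the style of jbd2's
 * tid_geq(): the signed difference is correct as long as the two IDs
 * are less than 2^31 apart. */
static int tid_geq(tid_t x, tid_t y)
{
        int32_t difference = (int32_t)(x - y);
        return difference >= 0;
}

int main(void)
{
        printf("%d\n", tid_geq(5, 3));                  /* 1 */
        printf("%d\n", tid_geq(3, 5));                  /* 0 */
        /* a naive 3 <= 0xFFFFFFF0 comparison gets this one wrong: */
        printf("%d\n", tid_geq(3, UINT32_MAX - 15));    /* 1: 3 is after the wrap */
        return 0;
}
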
+diff --git a/fs/jfs/jfs_discard.c b/fs/jfs/jfs_discard.c
+index 575cb2ba74fc86..5f4b305030ad5e 100644
+--- a/fs/jfs/jfs_discard.c
++++ b/fs/jfs/jfs_discard.c
+@@ -65,7 +65,7 @@ void jfs_issue_discard(struct inode *ip, u64 blkno, u64 nblocks)
+ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ {
+ struct inode *ipbmap = JFS_SBI(ip->i_sb)->ipbmap;
+- struct bmap *bmp = JFS_SBI(ip->i_sb)->bmap;
++ struct bmap *bmp;
+ struct super_block *sb = ipbmap->i_sb;
+ int agno, agno_end;
+ u64 start, end, minlen;
+@@ -83,10 +83,15 @@ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ if (minlen == 0)
+ minlen = 1;
+
++ down_read(&sb->s_umount);
++ bmp = JFS_SBI(ip->i_sb)->bmap;
++
+ if (minlen > bmp->db_agsize ||
+ start >= bmp->db_mapsize ||
+- range->len < sb->s_blocksize)
++ range->len < sb->s_blocksize) {
++ up_read(&sb->s_umount);
+ return -EINVAL;
++ }
+
+ if (end >= bmp->db_mapsize)
+ end = bmp->db_mapsize - 1;
+@@ -100,6 +105,8 @@ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ trimmed += dbDiscardAG(ip, agno, minlen);
+ agno++;
+ }
++
++ up_read(&sb->s_umount);
+ range->len = trimmed << sb->s_blocksize_bits;
+
+ return 0;
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 0625d1c0d0649a..974ecf5e0d9522 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -2944,9 +2944,10 @@ static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ static int dbFindLeaf(dmtree_t *tp, int l2nb, int *leafidx, bool is_ctl)
+ {
+ int ti, n = 0, k, x = 0;
+- int max_size;
++ int max_size, max_idx;
+
+ max_size = is_ctl ? CTLTREESIZE : TREESIZE;
++ max_idx = is_ctl ? LPERCTL : LPERDMAP;
+
+ /* first check the root of the tree to see if there is
+ * sufficient free space.
+@@ -2978,6 +2979,8 @@ static int dbFindLeaf(dmtree_t *tp, int l2nb, int *leafidx, bool is_ctl)
+ */
+ assert(n < 4);
+ }
++ if (le32_to_cpu(tp->dmt_leafidx) >= max_idx)
++ return -ENOSPC;
+
+ /* set the return to the leftmost leaf describing sufficient
+ * free space.
+@@ -3022,7 +3025,7 @@ static int dbFindBits(u32 word, int l2nb)
+
+ /* scan the word for nb free bits at nb alignments.
+ */
+- for (bitno = 0; mask != 0; bitno += nb, mask >>= nb) {
++ for (bitno = 0; mask != 0; bitno += nb, mask = (mask >> nb)) {
+ if ((mask & word) == mask)
+ break;
+ }
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 2999ed5d83f5e0..0fb05e314edf60 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -434,6 +434,8 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+ int rc;
+ int quota_allocation = 0;
+
++ memset(&ea_buf->new_ea, 0, sizeof(ea_buf->new_ea));
++
+ /* When fsck.jfs clears a bad ea, it doesn't clear the size */
+ if (ji->ea.flag == 0)
+ ea_size = 0;
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 3f7e37e50c7d02..b08673d97470c9 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -410,13 +410,17 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
+ folio_unlock(folio);
+
+ if (fgroup == NETFS_FOLIO_COPY_TO_CACHE) {
+- if (!fscache_resources_valid(&wreq->cache_resources)) {
++ if (!cache->avail) {
+ trace_netfs_folio(folio, netfs_folio_trace_cancel_copy);
+ netfs_issue_write(wreq, upload);
+ netfs_folio_written_back(folio);
+ return 0;
+ }
+ trace_netfs_folio(folio, netfs_folio_trace_store_copy);
++ } else if (!upload->avail && !cache->avail) {
++ trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
++ netfs_folio_written_back(folio);
++ return 0;
+ } else if (!upload->construct) {
+ trace_netfs_folio(folio, netfs_folio_trace_store);
+ } else {
+@@ -494,6 +498,30 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
+ return 0;
+ }
+
++/*
++ * End the issuing of writes, letting the collector know we're done.
++ */
++static void netfs_end_issue_write(struct netfs_io_request *wreq)
++{
++ bool needs_poke = true;
++
++ smp_wmb(); /* Write subreq lists before ALL_QUEUED. */
++ set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
++
++ for (int s = 0; s < NR_IO_STREAMS; s++) {
++ struct netfs_io_stream *stream = &wreq->io_streams[s];
++
++ if (!stream->active)
++ continue;
++ if (!list_empty(&stream->subrequests))
++ needs_poke = false;
++ netfs_issue_write(wreq, stream);
++ }
++
++ if (needs_poke)
++ netfs_wake_write_collector(wreq, false);
++}
++
+ /*
+ * Write some of the pending data back to the server
+ */
+@@ -541,10 +569,7 @@ int netfs_writepages(struct address_space *mapping,
+ break;
+ } while ((folio = writeback_iter(mapping, wbc, folio, &error)));
+
+- for (int s = 0; s < NR_IO_STREAMS; s++)
+- netfs_issue_write(wreq, &wreq->io_streams[s]);
+- smp_wmb(); /* Write lists before ALL_QUEUED. */
+- set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
++ netfs_end_issue_write(wreq);
+
+ mutex_unlock(&ictx->wb_lock);
+
+@@ -632,10 +657,7 @@ int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_contr
+ if (writethrough_cache)
+ netfs_write_folio(wreq, wbc, writethrough_cache);
+
+- netfs_issue_write(wreq, &wreq->io_streams[0]);
+- netfs_issue_write(wreq, &wreq->io_streams[1]);
+- smp_wmb(); /* Write lists before ALL_QUEUED. */
+- set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
++ netfs_end_issue_write(wreq);
+
+ mutex_unlock(&ictx->wb_lock);
+
+@@ -680,13 +702,7 @@ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t
+ break;
+ }
+
+- netfs_issue_write(wreq, upload);
+-
+- smp_wmb(); /* Write lists before ALL_QUEUED. */
+- set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+- if (list_empty(&upload->subrequests))
+- netfs_wake_write_collector(wreq, false);
+-
++ netfs_end_issue_write(wreq);
+ _leave(" = %d", error);
+ return error;
+ }
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 14ec1565632090..5cae26917436c0 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -148,6 +148,7 @@ struct nfsd_net {
+ u32 s2s_cp_cl_id;
+ struct idr s2s_cp_stateids;
+ spinlock_t s2s_cp_lock;
++ atomic_t pending_async_copies;
+
+ /*
+ * Version information
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 2e39cf2e502a33..5768b2ff1d1d13 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -751,15 +751,6 @@ nfsd4_access(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ &access->ac_supported);
+ }
+
+-static void gen_boot_verifier(nfs4_verifier *verifier, struct net *net)
+-{
+- __be32 *verf = (__be32 *)verifier->data;
+-
+- BUILD_BUG_ON(2*sizeof(*verf) != sizeof(verifier->data));
+-
+- nfsd_copy_write_verifier(verf, net_generic(net, nfsd_net_id));
+-}
+-
+ static __be32
+ nfsd4_commit(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ union nfsd4_op_u *u)
+@@ -1288,6 +1279,7 @@ static void nfs4_put_copy(struct nfsd4_copy *copy)
+ {
+ if (!refcount_dec_and_test(©->refcount))
+ return;
++	atomic_dec(&copy->cp_nn->pending_async_copies);
+ kfree(copy->cp_src);
+ kfree(copy);
+ }
+@@ -1630,7 +1622,6 @@ static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync)
+		test_bit(NFSD4_COPY_F_COMMITTED, &copy->cp_flags) ?
+ NFS_FILE_SYNC : NFS_UNSTABLE;
+ nfsd4_copy_set_sync(copy, sync);
+-	gen_boot_verifier(&copy->cp_res.wr_verifier, copy->cp_clp->net);
+ }
+
+ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy,
+@@ -1803,9 +1794,11 @@ static __be32
+ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ union nfsd4_op_u *u)
+ {
++ struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
++ struct nfsd4_copy *async_copy = NULL;
+ struct nfsd4_copy *copy = &u->copy;
++ struct nfsd42_write_res *result;
+ __be32 status;
+- struct nfsd4_copy *async_copy = NULL;
+
+ /*
+ * Currently, async COPY is not reliable. Force all COPY
+@@ -1814,6 +1807,9 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ */
+ nfsd4_copy_set_sync(copy, true);
+
++	result = &copy->cp_res;
++ nfsd_copy_write_verifier((__be32 *)&result->wr_verifier.data, nn);
++
+ copy->cp_clp = cstate->clp;
+ if (nfsd4_ssc_is_inter(copy)) {
+ trace_nfsd_copy_inter(copy);
+@@ -1838,12 +1834,16 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ memcpy(©->fh, &cstate->current_fh.fh_handle,
+ sizeof(struct knfsd_fh));
+ if (nfsd4_copy_is_async(copy)) {
+- struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+-
+- status = nfserrno(-ENOMEM);
+ async_copy = kzalloc(sizeof(struct nfsd4_copy), GFP_KERNEL);
+ if (!async_copy)
+ goto out_err;
++ async_copy->cp_nn = nn;
++ /* Arbitrary cap on number of pending async copy operations */
++ if (atomic_inc_return(&nn->pending_async_copies) >
++ (int)rqstp->rq_pool->sp_nrthreads) {
++ atomic_dec(&nn->pending_async_copies);
++ goto out_err;
++ }
+ INIT_LIST_HEAD(&async_copy->copies);
+ refcount_set(&async_copy->refcount, 1);
+ async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL);
+@@ -1851,8 +1851,8 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ goto out_err;
+ if (!nfs4_init_copy_state(nn, copy))
+ goto out_err;
+-		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.cs_stid,
+- sizeof(copy->cp_res.cb_stateid));
++		memcpy(&result->cb_stateid, &copy->cp_stateid.cs_stid,
++ sizeof(result->cb_stateid));
+ dup_copy_fields(copy, async_copy);
+ async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
+ async_copy, "%s", "copy thread");
+@@ -1883,7 +1883,7 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ }
+ if (async_copy)
+ cleanup_async_copy(async_copy);
+- status = nfserrno(-ENOMEM);
++ status = nfserr_jukebox;
+ goto out;
+ }
+
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fe06779ea527a1..3837f4e417247e 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1077,7 +1077,8 @@ static void nfs4_free_deleg(struct nfs4_stid *stid)
+ * When a delegation is recalled, the filehandle is stored in the "new"
+ * filter.
+ * Every 30 seconds we swap the filters and clear the "new" one,
+- * unless both are empty of course.
++ * unless both are empty of course. This results in delegations for a
++ * given filehandle being blocked for between 30 and 60 seconds.
+ *
+ * Each filter is 256 bits. We hash the filehandle to 32bit and use the
+ * low 3 bytes as hash-table indices.
+@@ -1106,9 +1107,9 @@ static int delegation_blocked(struct knfsd_fh *fh)
+ if (ktime_get_seconds() - bd->swap_time > 30) {
+ bd->entries -= bd->old_entries;
+ bd->old_entries = bd->entries;
++ bd->new = 1-bd->new;
+ memset(bd->set[bd->new], 0,
+ sizeof(bd->set[0]));
+- bd->new = 1-bd->new;
+ bd->swap_time = ktime_get_seconds();
+ }
+ spin_unlock(&blocked_delegations_lock);
+@@ -8574,6 +8575,7 @@ static int nfs4_state_create_net(struct net *net)
+ spin_lock_init(&nn->client_lock);
+ spin_lock_init(&nn->s2s_cp_lock);
+ idr_init(&nn->s2s_cp_stateids);
++ atomic_set(&nn->pending_async_copies, 0);
+
+ spin_lock_init(&nn->blocked_locks_lock);
+ INIT_LIST_HEAD(&nn->blocked_locks_lru);
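
delegation_blocked() keeps two generations of a 256-bit filter and flips which one is "new" every 30 seconds; the reordering above clears the generation after the flip, so entries age out over 30 to 60 seconds instead of the live set being wiped. A compact model of that rotate-then-clear scheme; the struct and helpers here are illustrative, not the nfsd code.

#include <stdbool.h>
#include <string.h>
#include <time.h>

/* Two-generation filter: entries land in the "new" set; every 30s the
 * roles flip and the set that just became "new" is cleared. */
struct aging_filter {
        unsigned char set[2][32];       /* 2 x 256-bit filters */
        int new_idx;
        time_t swap_time;
};

static void maybe_swap(struct aging_filter *f, time_t now)
{
        if (now - f->swap_time > 30) {
                f->new_idx = 1 - f->new_idx;    /* flip first... */
                memset(f->set[f->new_idx], 0, sizeof(f->set[0])); /* ...then clear */
                f->swap_time = now;
        }
}

static void remember(struct aging_filter *f, unsigned int hash)
{
        f->set[f->new_idx][(hash & 0xff) / 8] |= 1 << (hash & 7);
}

static bool blocked(const struct aging_filter *f, unsigned int hash)
{
        unsigned int byte = (hash & 0xff) / 8, bit = 1u << (hash & 7);

        return (f->set[0][byte] & bit) || (f->set[1][byte] & bit);
}

int main(void)
{
        struct aging_filter f = { .new_idx = 0, .swap_time = 0 };

        remember(&f, 0x1234);
        maybe_swap(&f, 31);     /* entry survives in the "old" generation */
        maybe_swap(&f, 62);     /* old generation cleared: entry forgotten */
        return blocked(&f, 0x1234);    /* 0 */
}

Clearing before the flip, as the old code did, wiped the set that still held the last 30 seconds of entries, which is exactly what the one-line reorder fixes.
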
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 97f58377797261..ebbae04837ef07 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -1245,14 +1245,6 @@ nfsd4_decode_putfh(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ return nfs_ok;
+ }
+
+-static __be32
+-nfsd4_decode_putpubfh(struct nfsd4_compoundargs *argp, union nfsd4_op_u *p)
+-{
+- if (argp->minorversion == 0)
+- return nfs_ok;
+- return nfserr_notsupp;
+-}
+-
+ static __be32
+ nfsd4_decode_read(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+@@ -2374,7 +2366,7 @@ static const nfsd4_dec nfsd4_dec_ops[] = {
+ [OP_OPEN_CONFIRM] = nfsd4_decode_open_confirm,
+ [OP_OPEN_DOWNGRADE] = nfsd4_decode_open_downgrade,
+ [OP_PUTFH] = nfsd4_decode_putfh,
+- [OP_PUTPUBFH] = nfsd4_decode_putpubfh,
++ [OP_PUTPUBFH] = nfsd4_decode_noop,
+ [OP_PUTROOTFH] = nfsd4_decode_noop,
+ [OP_READ] = nfsd4_decode_read,
+ [OP_READDIR] = nfsd4_decode_readdir,
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 34eb2c2cbcde34..e8704a4e848ca5 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1762,7 +1762,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info)
+ struct svc_pool *sp = &nn->nfsd_serv->sv_pools[i];
+
+ err = nla_put_u32(skb, NFSD_A_SERVER_THREADS,
+- atomic_read(&sp->sp_nrthreads));
++ sp->sp_nrthreads);
+ if (err)
+ goto err_unlock;
+ }
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 0bc8eaa5e0098e..8103c3c90cd11f 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -705,7 +705,7 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
+
+ if (serv)
+ for (i = 0; i < serv->sv_nrpools && i < n; i++)
+- nthreads[i] = atomic_read(&serv->sv_pools[i].sp_nrthreads);
++ nthreads[i] = serv->sv_pools[i].sp_nrthreads;
+ return 0;
+ }
+
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 29b1f3613800a3..911e5e0a17af7d 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -100,6 +100,7 @@ nfserrno (int errno)
+ { nfserr_io, -EUCLEAN },
+ { nfserr_perm, -ENOKEY },
+ { nfserr_no_grace, -ENOGRACE},
++ { nfserr_io, -EBADMSG },
+ };
+ int i;
+
+diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
+index fbdd42cde1fa5b..2a21a7662e030c 100644
+--- a/fs/nfsd/xdr4.h
++++ b/fs/nfsd/xdr4.h
+@@ -713,6 +713,7 @@ struct nfsd4_copy {
+ struct nfsd4_ssc_umount_item *ss_nsui;
+ struct nfs_fh c_fh;
+ nfs4_stateid stateid;
++ struct nfsd_net *cp_nn;
+ };
+
+ static inline void nfsd4_copy_set_sync(struct nfsd4_copy *copy, bool sync)
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 6be175a1ab3ce1..75991db2343e2d 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -156,9 +156,8 @@ int ocfs2_get_block(struct inode *inode, sector_t iblock,
+ err = ocfs2_extent_map_get_blocks(inode, iblock, &p_blkno, &count,
+ &ext_flags);
+ if (err) {
+- mlog(ML_ERROR, "Error %d from get_blocks(0x%p, %llu, 1, "
+- "%llu, NULL)\n", err, inode, (unsigned long long)iblock,
+- (unsigned long long)p_blkno);
++ mlog(ML_ERROR, "get_blocks() failed, inode: 0x%p, "
++ "block: %llu\n", inode, (unsigned long long)iblock);
+ goto bail;
+ }
+
+diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
+index cdb9b9bdea1f6d..8f714406528d62 100644
+--- a/fs/ocfs2/buffer_head_io.c
++++ b/fs/ocfs2/buffer_head_io.c
+@@ -235,7 +235,6 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ if (bhs[i] == NULL) {
+ bhs[i] = sb_getblk(sb, block++);
+ if (bhs[i] == NULL) {
+- ocfs2_metadata_cache_io_unlock(ci);
+ status = -ENOMEM;
+ mlog_errno(status);
+ /* Don't forget to put previous bh! */
+@@ -389,7 +388,8 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ /* Always set the buffer in the cache, even if it was
+ * a forced read, or read-ahead which hasn't yet
+ * completed. */
+- ocfs2_set_buffer_uptodate(ci, bh);
++ if (bh)
++ ocfs2_set_buffer_uptodate(ci, bh);
+ }
+ ocfs2_metadata_cache_io_unlock(ci);
+
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 530fba34f6d312..1bf188b6866a67 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -1055,7 +1055,7 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
+ if (!igrab(inode))
+ BUG();
+
+- num_running_trans = atomic_read(&(osb->journal->j_num_trans));
++ num_running_trans = atomic_read(&(journal->j_num_trans));
+ trace_ocfs2_journal_shutdown(num_running_trans);
+
+ /* Do a commit_cache here. It will flush our journal, *and*
+@@ -1074,9 +1074,10 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
+ osb->commit_task = NULL;
+ }
+
+- BUG_ON(atomic_read(&(osb->journal->j_num_trans)) != 0);
++ BUG_ON(atomic_read(&(journal->j_num_trans)) != 0);
+
+- if (ocfs2_mount_local(osb)) {
++ if (ocfs2_mount_local(osb) &&
++ (journal->j_journal->j_flags & JBD2_LOADED)) {
+ jbd2_journal_lock_updates(journal->j_journal);
+ status = jbd2_journal_flush(journal->j_journal, 0);
+ jbd2_journal_unlock_updates(journal->j_journal);
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 5df34561c551c6..8ac42ea81a17bd 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -1002,6 +1002,25 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ start = bit_off + 1;
+ }
+
++ /* clear the contiguous bits until the end boundary */
++ if (count) {
++ blkno = la_start_blk +
++ ocfs2_clusters_to_blocks(osb->sb,
++ start - count);
++
++ trace_ocfs2_sync_local_to_main_free(
++ count, start - count,
++ (unsigned long long)la_start_blk,
++ (unsigned long long)blkno);
++
++ status = ocfs2_release_clusters(handle,
++ main_bm_inode,
++ main_bm_bh, blkno,
++ count);
++ if (status < 0)
++ mlog_errno(status);
++ }
++
+ bail:
+ if (status)
+ mlog_errno(status);
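
The localalloc hunk above fixes a classic batching bug: the loop only released a contiguous run of clusters when it hit a clear bit, so a run extending to the end of the window was silently dropped. The shape of the bug and its tail-flush fix, reduced to a toy with a printf stand-in for ocfs2_release_clusters():

#include <stdio.h>

static void release_run(unsigned int start, unsigned int count)
{
        printf("release clusters [%u, %u)\n", start, start + count);
}

/* Batch set bits into (start, count) runs.  The final run must be
 * emitted after the loop, or the last batch is lost. */
static void sync_bits(const int *bits, unsigned int n)
{
        unsigned int count = 0;

        for (unsigned int i = 0; i < n; i++) {
                if (bits[i]) {
                        count++;
                        continue;
                }
                if (count)
                        release_run(i - count, count);
                count = 0;
        }
        if (count)                      /* the run that reaches the end */
                release_run(n - count, count);
}

int main(void)
{
        int bits[] = { 1, 1, 0, 1, 1, 1 };      /* trailing run of 3 */

        sync_bits(bits, 6);     /* [0,2) and, with the fix, [3,6) */
        return 0;
}
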
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 8ce462c64c51ba..73d3367c533b8a 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -692,7 +692,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ int status;
+ struct buffer_head *bh = NULL;
+ struct ocfs2_quota_recovery *rec;
+- int locked = 0;
++ int locked = 0, global_read = 0;
+
+ info->dqi_max_spc_limit = 0x7fffffffffffffffLL;
+ info->dqi_max_ino_limit = 0x7fffffffffffffffLL;
+@@ -700,6 +700,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ if (!oinfo) {
+ mlog(ML_ERROR, "failed to allocate memory for ocfs2 quota"
+ " info.");
++ status = -ENOMEM;
+ goto out_err;
+ }
+ info->dqi_priv = oinfo;
+@@ -712,6 +713,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ status = ocfs2_global_read_info(sb, type);
+ if (status < 0)
+ goto out_err;
++ global_read = 1;
+
+ status = ocfs2_inode_lock(lqinode, &oinfo->dqi_lqi_bh, 1);
+ if (status < 0) {
+@@ -782,10 +784,12 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ if (locked)
+ ocfs2_inode_unlock(lqinode, 1);
+ ocfs2_release_local_quota_bitmaps(&oinfo->dqi_chunk);
++ if (global_read)
++ cancel_delayed_work_sync(&oinfo->dqi_sync_work);
+ kfree(oinfo);
+ }
+ brelse(bh);
+- return -1;
++ return status;
+ }
+
+ /* Write local info to quota file */
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index 1f303b1adf1ab9..53a0961f114d11 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -25,6 +25,7 @@
+ #include "namei.h"
+ #include "ocfs2_trace.h"
+ #include "file.h"
++#include "symlink.h"
+
+ #include <linux/bio.h>
+ #include <linux/blkdev.h>
+@@ -4155,8 +4156,9 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ int ret;
+ struct inode *inode = d_inode(old_dentry);
+ struct buffer_head *new_bh = NULL;
++ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+
+- if (OCFS2_I(inode)->ip_flags & OCFS2_INODE_SYSTEM_FILE) {
++ if (oi->ip_flags & OCFS2_INODE_SYSTEM_FILE) {
+ ret = -EINVAL;
+ mlog_errno(ret);
+ goto out;
+@@ -4182,6 +4184,26 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ goto out_unlock;
+ }
+
++ if ((oi->ip_dyn_features & OCFS2_HAS_XATTR_FL) &&
++ (oi->ip_dyn_features & OCFS2_INLINE_XATTR_FL)) {
++ /*
++ * Adjust extent record count to reserve space for extended attribute.
++ * Inline data count had been adjusted in ocfs2_duplicate_inline_data().
++ */
++ struct ocfs2_inode_info *new_oi = OCFS2_I(new_inode);
++
++ if (!(new_oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) &&
++ !(ocfs2_inode_is_fast_symlink(new_inode))) {
++ struct ocfs2_dinode *new_di = (struct ocfs2_dinode *)new_bh->b_data;
++ struct ocfs2_dinode *old_di = (struct ocfs2_dinode *)old_bh->b_data;
++ struct ocfs2_extent_list *el = &new_di->id2.i_list;
++ int inline_size = le16_to_cpu(old_di->i_xattr_inline_size);
++
++ le16_add_cpu(&el->l_count, -(inline_size /
++ sizeof(struct ocfs2_extent_rec)));
++ }
++ }
++
+ ret = ocfs2_create_reflink_node(inode, old_bh,
+ new_inode, new_bh, preserve);
+ if (ret) {
+@@ -4189,7 +4211,7 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ goto inode_unlock;
+ }
+
+- if (OCFS2_I(inode)->ip_dyn_features & OCFS2_HAS_XATTR_FL) {
++ if (oi->ip_dyn_features & OCFS2_HAS_XATTR_FL) {
+ ret = ocfs2_reflink_xattrs(inode, old_bh,
+ new_inode, new_bh,
+ preserve);
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index 35c0cc2a51af82..fb1140c52f07cb 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -6520,16 +6520,7 @@ static int ocfs2_reflink_xattr_inline(struct ocfs2_xattr_reflink *args)
+ }
+
+ new_oi = OCFS2_I(args->new_inode);
+- /*
+- * Adjust extent record count to reserve space for extended attribute.
+- * Inline data count had been adjusted in ocfs2_duplicate_inline_data().
+- */
+- if (!(new_oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) &&
+- !(ocfs2_inode_is_fast_symlink(args->new_inode))) {
+- struct ocfs2_extent_list *el = &new_di->id2.i_list;
+- le16_add_cpu(&el->l_count, -(inline_size /
+- sizeof(struct ocfs2_extent_rec)));
+- }
++
+ spin_lock(&new_oi->ip_lock);
+ new_oi->ip_dyn_features |= OCFS2_HAS_XATTR_FL | OCFS2_INLINE_XATTR_FL;
+ new_di->i_dyn_features = cpu_to_le16(new_oi->ip_dyn_features);
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index a5ef2005a2cc54..051a802893a184 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -243,8 +243,24 @@ static int ovl_verify_area(loff_t pos, loff_t pos2, loff_t len, loff_t totlen)
+ return 0;
+ }
+
++static int ovl_sync_file(struct path *path)
++{
++ struct file *new_file;
++ int err;
++
++ new_file = ovl_path_open(path, O_LARGEFILE | O_RDONLY);
++ if (IS_ERR(new_file))
++ return PTR_ERR(new_file);
++
++ err = vfs_fsync(new_file, 0);
++ fput(new_file);
++
++ return err;
++}
++
+ static int ovl_copy_up_file(struct ovl_fs *ofs, struct dentry *dentry,
+- struct file *new_file, loff_t len)
++ struct file *new_file, loff_t len,
++ bool datasync)
+ {
+ struct path datapath;
+ struct file *old_file;
+@@ -342,7 +358,8 @@ static int ovl_copy_up_file(struct ovl_fs *ofs, struct dentry *dentry,
+
+ len -= bytes;
+ }
+- if (!error && ovl_should_sync(ofs))
++ /* call fsync once, either now or later along with metadata */
++ if (!error && ovl_should_sync(ofs) && datasync)
+ error = vfs_fsync(new_file, 0);
+ out_fput:
+ fput(old_file);
+@@ -574,6 +591,7 @@ struct ovl_copy_up_ctx {
+ bool indexed;
+ bool metacopy;
+ bool metacopy_digest;
++ bool metadata_fsync;
+ };
+
+ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+@@ -634,7 +652,8 @@ static int ovl_copy_up_data(struct ovl_copy_up_ctx *c, const struct path *temp)
+ if (IS_ERR(new_file))
+ return PTR_ERR(new_file);
+
+- err = ovl_copy_up_file(ofs, c->dentry, new_file, c->stat.size);
++ err = ovl_copy_up_file(ofs, c->dentry, new_file, c->stat.size,
++ !c->metadata_fsync);
+ fput(new_file);
+
+ return err;
+@@ -701,6 +720,10 @@ static int ovl_copy_up_metadata(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ err = ovl_set_attr(ofs, temp, &c->stat);
+ inode_unlock(temp->d_inode);
+
++ /* fsync metadata before moving it into upper dir */
++ if (!err && ovl_should_sync(ofs) && c->metadata_fsync)
++ err = ovl_sync_file(&upperpath);
++
+ return err;
+ }
+
+@@ -860,7 +883,8 @@ static int ovl_copy_up_tmpfile(struct ovl_copy_up_ctx *c)
+
+ temp = tmpfile->f_path.dentry;
+ if (!c->metacopy && c->stat.size) {
+- err = ovl_copy_up_file(ofs, c->dentry, tmpfile, c->stat.size);
++ err = ovl_copy_up_file(ofs, c->dentry, tmpfile, c->stat.size,
++ !c->metadata_fsync);
+ if (err)
+ goto out_fput;
+ }
+@@ -1135,6 +1159,17 @@ static int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
+ !kgid_has_mapping(current_user_ns(), ctx.stat.gid))
+ return -EOVERFLOW;
+
++ /*
++ * With metacopy disabled, we fsync after final metadata copyup, for
++ * both regular files and directories to get atomic copyup semantics
++ * on filesystems that do not use strict metadata ordering (e.g. ubifs).
++ *
++ * With metacopy enabled we want to avoid fsync on all meta copyup
++ * that will hurt performance of workloads such as chown -R, so we
++ * only fsync on data copyup as legacy behavior.
++ */
++ ctx.metadata_fsync = !OVL_FS(dentry->d_sb)->config.metacopy &&
++ (S_ISREG(ctx.stat.mode) || S_ISDIR(ctx.stat.mode));
+ ctx.metacopy = ovl_need_meta_copy_up(dentry, ctx.stat.mode, flags);
+
+ if (parent) {
+diff --git a/fs/overlayfs/params.c b/fs/overlayfs/params.c
+index d0568c09134126..e42546c6c5dfbe 100644
+--- a/fs/overlayfs/params.c
++++ b/fs/overlayfs/params.c
+@@ -755,11 +755,6 @@ int ovl_fs_params_verify(const struct ovl_fs_context *ctx,
+ {
+ struct ovl_opt_set set = ctx->set;
+
+- if (ctx->nr_data > 0 && !config->metacopy) {
+- pr_err("lower data-only dirs require metacopy support.\n");
+- return -EINVAL;
+- }
+-
+ /* Workdir/index are useless in non-upper mount */
+ if (!config->upperdir) {
+ if (config->workdir) {
+@@ -911,6 +906,39 @@ int ovl_fs_params_verify(const struct ovl_fs_context *ctx,
+ config->metacopy = false;
+ }
+
++ /*
++ * Fail if we don't have trusted xattr capability and a feature was
++ * explicitly requested that requires them.
++ */
++ if (!config->userxattr && !capable(CAP_SYS_ADMIN)) {
++ if (set.redirect &&
++ config->redirect_mode != OVL_REDIRECT_NOFOLLOW) {
++ pr_err("redirect_dir requires permission to access trusted xattrs\n");
++ return -EPERM;
++ }
++ if (config->metacopy && set.metacopy) {
++ pr_err("metacopy requires permission to access trusted xattrs\n");
++ return -EPERM;
++ }
++ if (config->verity_mode) {
++ pr_err("verity requires permission to access trusted xattrs\n");
++ return -EPERM;
++ }
++ if (ctx->nr_data > 0) {
++ pr_err("lower data-only dirs require permission to access trusted xattrs\n");
++ return -EPERM;
++ }
++ /*
++ * Other xattr-dependent features should be disabled without
++ * great disturbance to the user in ovl_make_workdir().
++ */
++ }
++
++ if (ctx->nr_data > 0 && !config->metacopy) {
++ pr_err("lower data-only dirs require metacopy support.\n");
++ return -EINVAL;
++ }
++
+ return 0;
+ }
+
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index 7ffdc88dfb527d..80675b6bf88459 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -120,6 +120,7 @@ static long pidfd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ struct nsproxy *nsp __free(put_nsproxy) = NULL;
+ struct pid *pid = pidfd_pid(file);
+ struct ns_common *ns_common = NULL;
++ struct pid_namespace *pid_ns;
+
+ if (arg)
+ return -EINVAL;
+@@ -202,7 +203,9 @@ static long pidfd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case PIDFD_GET_PID_NAMESPACE:
+ if (IS_ENABLED(CONFIG_PID_NS)) {
+ rcu_read_lock();
+- ns_common = to_ns_common( get_pid_ns(task_active_pid_ns(task)));
++ pid_ns = task_active_pid_ns(task);
++ if (pid_ns)
++ ns_common = to_ns_common(get_pid_ns(pid_ns));
+ rcu_read_unlock();
+ }
+ break;
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 72a1acd03675cc..f389c69767fa52 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -85,6 +85,7 @@
+ #include <linux/elf.h>
+ #include <linux/pid_namespace.h>
+ #include <linux/user_namespace.h>
++#include <linux/fs_parser.h>
+ #include <linux/fs_struct.h>
+ #include <linux/slab.h>
+ #include <linux/sched/autogroup.h>
+@@ -117,6 +118,40 @@
+ static u8 nlink_tid __ro_after_init;
+ static u8 nlink_tgid __ro_after_init;
+
++enum proc_mem_force {
++ PROC_MEM_FORCE_ALWAYS,
++ PROC_MEM_FORCE_PTRACE,
++ PROC_MEM_FORCE_NEVER
++};
++
++static enum proc_mem_force proc_mem_force_override __ro_after_init =
++ IS_ENABLED(CONFIG_PROC_MEM_NO_FORCE) ? PROC_MEM_FORCE_NEVER :
++ IS_ENABLED(CONFIG_PROC_MEM_FORCE_PTRACE) ? PROC_MEM_FORCE_PTRACE :
++ PROC_MEM_FORCE_ALWAYS;
++
++static const struct constant_table proc_mem_force_table[] __initconst = {
++ { "always", PROC_MEM_FORCE_ALWAYS },
++ { "ptrace", PROC_MEM_FORCE_PTRACE },
++ { "never", PROC_MEM_FORCE_NEVER },
++ { }
++};
++
++static int __init early_proc_mem_force_override(char *buf)
++{
++ if (!buf)
++ return -EINVAL;
++
++ /*
++	 * lookup_constant() defaults to proc_mem_force_override to preserve
++ * the initial Kconfig choice in case an invalid param gets passed.
++ */
++ proc_mem_force_override = lookup_constant(proc_mem_force_table,
++ buf, proc_mem_force_override);
++
++ return 0;
++}
++early_param("proc_mem.force_override", early_proc_mem_force_override);
++
+ struct pid_entry {
+ const char *name;
+ unsigned int len;
+@@ -835,6 +870,28 @@ static int mem_open(struct inode *inode, struct file *file)
+ return ret;
+ }
+
++static bool proc_mem_foll_force(struct file *file, struct mm_struct *mm)
++{
++ struct task_struct *task;
++ bool ptrace_active = false;
++
++ switch (proc_mem_force_override) {
++ case PROC_MEM_FORCE_NEVER:
++ return false;
++ case PROC_MEM_FORCE_PTRACE:
++ task = get_proc_task(file_inode(file));
++ if (task) {
++ ptrace_active = READ_ONCE(task->ptrace) &&
++ READ_ONCE(task->mm) == mm &&
++ READ_ONCE(task->parent) == current;
++ put_task_struct(task);
++ }
++ return ptrace_active;
++ default:
++ return true;
++ }
++}
++
+ static ssize_t mem_rw(struct file *file, char __user *buf,
+ size_t count, loff_t *ppos, int write)
+ {
+@@ -855,7 +912,9 @@ static ssize_t mem_rw(struct file *file, char __user *buf,
+ if (!mmget_not_zero(mm))
+ goto free;
+
+- flags = FOLL_FORCE | (write ? FOLL_WRITE : 0);
++ flags = write ? FOLL_WRITE : 0;
++ if (proc_mem_foll_force(file, mm))
++ flags |= FOLL_FORCE;
+
+ while (count > 0) {
+ size_t this_len = min_t(size_t, count, PAGE_SIZE);
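
early_proc_mem_force_override() above parses the proc_mem.force_override= boot parameter via fs_parser's lookup_constant(), keeping the Kconfig-chosen default when the string is unrecognized. A userspace approximation of that name-to-enum lookup; lookup() mirrors lookup_constant()'s (table, name, not_found) shape but is our stand-in.

#include <string.h>

enum force_mode { FORCE_ALWAYS, FORCE_PTRACE, FORCE_NEVER };

static const struct constant {
        const char *name;
        enum force_mode value;
} table[] = {
        { "always", FORCE_ALWAYS },
        { "ptrace", FORCE_PTRACE },
        { "never",  FORCE_NEVER  },
};

/* Map a string to an enum, returning not_found on no match so an
 * invalid parameter preserves the current setting. */
static enum force_mode lookup(const char *buf, enum force_mode not_found)
{
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
                if (!strcmp(buf, table[i].name))
                        return table[i].value;
        return not_found;
}

int main(void)
{
        enum force_mode mode = FORCE_ALWAYS;    /* Kconfig default */

        mode = lookup("ptrace", mode);          /* -> FORCE_PTRACE */
        mode = lookup("bogus", mode);           /* invalid: unchanged */
        return mode == FORCE_PTRACE ? 0 : 1;
}
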
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 9553e77c9d3189..d11ebc055ce0dd 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -29,8 +29,13 @@ static const struct inode_operations proc_sys_inode_operations;
+ static const struct file_operations proc_sys_dir_file_operations;
+ static const struct inode_operations proc_sys_dir_operations;
+
+-/* Support for permanently empty directories */
+-static struct ctl_table sysctl_mount_point[] = { };
++/*
++ * Support for permanently empty directories.
++ * Must be non-empty to avoid sharing an address with other tables.
++ */
++static struct ctl_table sysctl_mount_point[] = {
++ { }
++};
+
+ /**
+ * register_sysctl_mount_point() - registers a sysctl mount point
+@@ -42,7 +47,7 @@ static struct ctl_table sysctl_mount_point[] = { };
+ */
+ struct ctl_table_header *register_sysctl_mount_point(const char *path)
+ {
+- return register_sysctl(path, sysctl_mount_point);
++ return register_sysctl_sz(path, sysctl_mount_point, 0);
+ }
+ EXPORT_SYMBOL(register_sysctl_mount_point);
+
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 2a2523c93944de..33e2860010158c 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -313,8 +313,17 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ struct TCP_Server_Info *server = tcon->ses->server;
+ unsigned int xid;
+ int rc = 0;
++ const char *full_path;
++ void *page;
+
+ xid = get_xid();
++ page = alloc_dentry_path();
++
++ full_path = build_path_from_dentry(dentry, page);
++ if (IS_ERR(full_path)) {
++ rc = PTR_ERR(full_path);
++ goto statfs_out;
++ }
+
+ if (le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength) > 0)
+ buf->f_namelen =
+@@ -330,8 +339,10 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ buf->f_ffree = 0; /* unlimited */
+
+ if (server->ops->queryfs)
+- rc = server->ops->queryfs(xid, tcon, cifs_sb, buf);
++ rc = server->ops->queryfs(xid, tcon, full_path, cifs_sb, buf);
+
++statfs_out:
++ free_dentry_path(page);
+ free_xid(xid);
+ return rc;
+ }
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 9eae8649f90c38..59f649da5fdde7 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -482,7 +482,7 @@ struct smb_version_operations {
+ __u16 net_fid, struct cifsInodeInfo *cifs_inode);
+ /* query remote filesystem */
+ int (*queryfs)(const unsigned int, struct cifs_tcon *,
+- struct cifs_sb_info *, struct kstatfs *);
++ const char *, struct cifs_sb_info *, struct kstatfs *);
+ /* send mandatory brlock to the server */
+ int (*mand_lock)(const unsigned int, struct cifsFileInfo *, __u64,
+ __u64, __u32, int, int, bool);
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 73e2e6c230b735..54579a2003ac6a 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -800,10 +800,6 @@ static void cifs_open_info_to_fattr(struct cifs_fattr *fattr,
+ fattr->cf_mode = S_IFREG | cifs_sb->ctx->file_mode;
+ fattr->cf_dtype = DT_REG;
+
+- /* clear write bits if ATTR_READONLY is set */
+- if (fattr->cf_cifsattrs & ATTR_READONLY)
+- fattr->cf_mode &= ~(S_IWUGO);
+-
+ /*
+ * Don't accept zero nlink from non-unix servers unless
+ * delete is pending. Instead mark it as unknown.
+@@ -816,6 +812,10 @@ static void cifs_open_info_to_fattr(struct cifs_fattr *fattr,
+ }
+ }
+
++ /* clear write bits if ATTR_READONLY is set */
++ if (fattr->cf_cifsattrs & ATTR_READONLY)
++ fattr->cf_mode &= ~(S_IWUGO);
++
+ out_reparse:
+ if (S_ISLNK(fattr->cf_mode)) {
+ if (likely(data->symlink_target))
+@@ -1233,11 +1233,14 @@ static int cifs_get_fattr(struct cifs_open_info_data *data,
+ __func__, rc);
+ goto out;
+ }
+- }
+-
+- /* fill in remaining high mode bits e.g. SUID, VTX */
+- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)
++ } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)
++ /* fill in remaining high mode bits e.g. SUID, VTX */
+ cifs_sfu_mode(fattr, full_path, cifs_sb, xid);
++ else if (!(tcon->posix_extensions))
++ /* clear write bits if ATTR_READONLY is set */
++ if (fattr->cf_cifsattrs & ATTR_READONLY)
++ fattr->cf_mode &= ~(S_IWUGO);
++
+
+ /* check for Minshall+French symlinks */
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 48c27581ec511c..ad0e0de9a165d4 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -320,15 +320,21 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+ unsigned int len;
+ u64 type;
+
++ len = le16_to_cpu(buf->ReparseDataLength);
++ if (len < sizeof(buf->InodeType)) {
++ cifs_dbg(VFS, "srv returned malformed nfs buffer\n");
++ return -EIO;
++ }
++
++ len -= sizeof(buf->InodeType);
++
+ switch ((type = le64_to_cpu(buf->InodeType))) {
+ case NFS_SPECFILE_LNK:
+- len = le16_to_cpu(buf->ReparseDataLength);
+ data->symlink_target = cifs_strndup_from_utf16(buf->DataBuffer,
+ len, true,
+ cifs_sb->local_nls);
+ if (!data->symlink_target)
+ return -ENOMEM;
+- convert_delimiter(data->symlink_target, '/');
+ cifs_dbg(FYI, "%s: target path: %s\n",
+ __func__, data->symlink_target);
+ break;
+@@ -482,12 +488,18 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ u32 tag = data->reparse.tag;
+
+ if (tag == IO_REPARSE_TAG_NFS && buf) {
++ if (le16_to_cpu(buf->ReparseDataLength) < sizeof(buf->InodeType))
++ return false;
+ switch (le64_to_cpu(buf->InodeType)) {
+ case NFS_SPECFILE_CHR:
++ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
++ return false;
+ fattr->cf_mode |= S_IFCHR;
+ fattr->cf_rdev = reparse_nfs_mkdev(buf);
+ break;
+ case NFS_SPECFILE_BLK:
++ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
++ return false;
+ fattr->cf_mode |= S_IFBLK;
+ fattr->cf_rdev = reparse_nfs_mkdev(buf);
+ break;
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index e1f2feb56f45f6..8c03250d85ae0c 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -909,7 +909,7 @@ cifs_oplock_response(struct cifs_tcon *tcon, __u64 persistent_fid,
+
+ static int
+ cifs_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
++ const char *path, struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
+ {
+ int rc = -EOPNOTSUPP;
+
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 11a1c53c64e0bc..a6dab60e2c01ef 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -1205,9 +1205,12 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ struct cifsFileInfo *cfile;
+ struct inode *new = NULL;
++ int out_buftype[4] = {};
++ struct kvec out_iov[4] = {};
+ struct kvec in_iov[2];
+ int cmds[2];
+ int rc;
++ int i;
+
+ oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+ SYNCHRONIZE | DELETE |
+@@ -1228,7 +1231,7 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ cmds[1] = SMB2_OP_POSIX_QUERY_INFO;
+ cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
+ rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms,
+- in_iov, cmds, 2, cfile, NULL, NULL, NULL);
++ in_iov, cmds, 2, cfile, out_iov, out_buftype, NULL);
+ if (!rc) {
+ rc = smb311_posix_get_inode_info(&new, full_path,
+ data, sb, xid);
+@@ -1237,12 +1240,29 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ cmds[1] = SMB2_OP_QUERY_INFO;
+ cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
+ rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms,
+- in_iov, cmds, 2, cfile, NULL, NULL, NULL);
++ in_iov, cmds, 2, cfile, out_iov, out_buftype, NULL);
+ if (!rc) {
+ rc = cifs_get_inode_info(&new, full_path,
+ data, sb, xid, NULL);
+ }
+ }
++
++
++ /*
++ * If CREATE was successful but SMB2_OP_SET_REPARSE failed then
++	 * remove the intermediate object created by CREATE. Otherwise an
++	 * empty object stays on the server when the reparse call fails.
++ */
++ if (rc &&
++ out_iov[0].iov_base != NULL && out_buftype[0] != CIFS_NO_BUFFER &&
++ ((struct smb2_hdr *)out_iov[0].iov_base)->Status == STATUS_SUCCESS &&
++ (out_iov[1].iov_base == NULL || out_buftype[1] == CIFS_NO_BUFFER ||
++ ((struct smb2_hdr *)out_iov[1].iov_base)->Status != STATUS_SUCCESS))
++ smb2_unlink(xid, tcon, full_path, cifs_sb, NULL);
++
++ for (i = 0; i < ARRAY_SIZE(out_buftype); i++)
++ free_rsp_buf(out_buftype[i], out_iov[i].iov_base);
++
+ return rc ? ERR_PTR(rc) : new;
+ }
+
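
For context, the hunk above captures each sub-request's response (out_iov/out_buftype) so the caller can tell which step of the compound operation failed and roll back a half-finished create. A toy sketch of that create-then-configure-with-rollback shape, with made-up step names:

#include <stdio.h>

/* illustrative two-step operation with rollback, mirroring the
 * create-then-set-reparse compound above */
static int step_create(void)      { return 0; }   /* succeeds */
static int step_set_reparse(void) { return -1; }  /* fails */
static void undo_create(void)     { puts("unlink intermediate object"); }

int main(void)
{
	int rc_create = step_create();
	int rc_reparse = rc_create ? rc_create : step_set_reparse();

	/* roll back only when the first step succeeded and a later
	 * step failed, so no half-created object is left behind */
	if (!rc_create && rc_reparse)
		undo_create();
	return rc_reparse ? 1 : 0;
}
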
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index e6540072ffb0e1..2ba43327d3f4c8 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -2836,7 +2836,7 @@ smb2_query_info_compound(const unsigned int xid, struct cifs_tcon *tcon,
+
+ static int
+ smb2_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
++ const char *path, struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
+ {
+ struct smb2_query_info_rsp *rsp;
+ struct smb2_fs_full_size_info *info = NULL;
+@@ -2845,7 +2845,7 @@ smb2_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+ int rc;
+
+
+- rc = smb2_query_info_compound(xid, tcon, "",
++ rc = smb2_query_info_compound(xid, tcon, path,
+ FILE_READ_ATTRIBUTES,
+ FS_FULL_SIZE_INFORMATION,
+ SMB2_O_INFO_FILESYSTEM,
+@@ -2873,28 +2873,33 @@ smb2_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+
+ static int
+ smb311_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
++ const char *path, struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
+ {
+ int rc;
+- __le16 srch_path = 0; /* Null - open root of share */
++ __le16 *utf16_path = NULL;
+ u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ struct cifs_open_parms oparms;
+ struct cifs_fid fid;
+
+ if (!tcon->posix_extensions)
+- return smb2_queryfs(xid, tcon, cifs_sb, buf);
++ return smb2_queryfs(xid, tcon, path, cifs_sb, buf);
+
+ oparms = (struct cifs_open_parms) {
+ .tcon = tcon,
+- .path = "",
++ .path = path,
+ .desired_access = FILE_READ_ATTRIBUTES,
+ .disposition = FILE_OPEN,
+ .create_options = cifs_create_options(cifs_sb, 0),
+ .fid = &fid,
+ };
+
+- rc = SMB2_open(xid, &oparms, &srch_path, &oplock, NULL, NULL,
++ utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
++ if (utf16_path == NULL)
++ return -ENOMEM;
++
++ rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ NULL, NULL);
++ kfree(utf16_path);
+ if (rc)
+ return rc;
+
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index 7889df8112b4ee..cac80e7bfefc74 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -39,7 +39,8 @@ void ksmbd_conn_free(struct ksmbd_conn *conn)
+ xa_destroy(&conn->sessions);
+ kvfree(conn->request_buf);
+ kfree(conn->preauth_info);
+- kfree(conn);
++ if (atomic_dec_and_test(&conn->refcnt))
++ kfree(conn);
+ }
+
+ /**
+@@ -68,6 +69,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ conn->um = NULL;
+ atomic_set(&conn->req_running, 0);
+ atomic_set(&conn->r_count, 0);
++ atomic_set(&conn->refcnt, 1);
+ conn->total_credits = 1;
+ conn->outstanding_credits = 0;
+
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 5b947175c048eb..b379ae4fdcdffa 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -106,6 +106,7 @@ struct ksmbd_conn {
+ bool signing_negotiated;
+ __le16 signing_algorithm;
+ bool binding;
++ atomic_t refcnt;
+ };
+
+ struct ksmbd_conn_ops {
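
The refcnt field added above converts the connection to plain reference counting: ksmbd_conn_alloc() starts the count at 1, each oplock takes a reference, and whoever drops the count to zero frees the structure, so an oplock may safely outlive connection teardown. A minimal sketch of the pattern with C11 atomics (the kernel uses its own atomic_t API, not stdatomic.h):

#include <stdatomic.h>
#include <stdlib.h>

struct conn {
	atomic_int refcnt;
};

static struct conn *conn_alloc(void)
{
	struct conn *c = malloc(sizeof(*c));
	atomic_init(&c->refcnt, 1);	/* creator's reference */
	return c;
}

static void conn_get(struct conn *c)
{
	atomic_fetch_add(&c->refcnt, 1);
}

static void conn_put(struct conn *c)
{
	/* last put frees; mirrors atomic_dec_and_test() + kfree() */
	if (atomic_fetch_sub(&c->refcnt, 1) == 1)
		free(c);
}

int main(void)
{
	struct conn *c = conn_alloc();
	conn_get(c);	/* e.g. an oplock takes a reference */
	conn_put(c);	/* oplock released */
	conn_put(c);	/* creator's reference; object freed here */
	return 0;
}
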
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index e546ffa57b55ab..8ee86478287f93 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -51,6 +51,7 @@ static struct oplock_info *alloc_opinfo(struct ksmbd_work *work,
+ init_waitqueue_head(&opinfo->oplock_brk);
+ atomic_set(&opinfo->refcount, 1);
+ atomic_set(&opinfo->breaking_cnt, 0);
++ atomic_inc(&opinfo->conn->refcnt);
+
+ return opinfo;
+ }
+@@ -124,6 +125,8 @@ static void free_opinfo(struct oplock_info *opinfo)
+ {
+ if (opinfo->is_lease)
+ free_lease(opinfo);
++ if (opinfo->conn && atomic_dec_and_test(&opinfo->conn->refcnt))
++ kfree(opinfo->conn);
+ kfree(opinfo);
+ }
+
+@@ -163,9 +166,7 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ !atomic_inc_not_zero(&opinfo->refcount))
+ opinfo = NULL;
+ else {
+- atomic_inc(&opinfo->conn->r_count);
+ if (ksmbd_conn_releasing(opinfo->conn)) {
+- atomic_dec(&opinfo->conn->r_count);
+ atomic_dec(&opinfo->refcount);
+ opinfo = NULL;
+ }
+@@ -177,26 +178,11 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ return opinfo;
+ }
+
+-static void opinfo_conn_put(struct oplock_info *opinfo)
++void opinfo_put(struct oplock_info *opinfo)
+ {
+- struct ksmbd_conn *conn;
+-
+ if (!opinfo)
+ return;
+
+- conn = opinfo->conn;
+- /*
+- * Checking waitqueue to dropping pending requests on
+- * disconnection. waitqueue_active is safe because it
+- * uses atomic operation for condition.
+- */
+- if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+- wake_up(&conn->r_count_q);
+- opinfo_put(opinfo);
+-}
+-
+-void opinfo_put(struct oplock_info *opinfo)
+-{
+ if (!atomic_dec_and_test(&opinfo->refcount))
+ return;
+
+@@ -1127,14 +1113,11 @@ void smb_send_parent_lease_break_noti(struct ksmbd_file *fp,
+ if (!atomic_inc_not_zero(&opinfo->refcount))
+ continue;
+
+- atomic_inc(&opinfo->conn->r_count);
+- if (ksmbd_conn_releasing(opinfo->conn)) {
+- atomic_dec(&opinfo->conn->r_count);
++ if (ksmbd_conn_releasing(opinfo->conn))
+ continue;
+- }
+
+ oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE);
+- opinfo_conn_put(opinfo);
++ opinfo_put(opinfo);
+ }
+ }
+ up_read(&p_ci->m_lock);
+@@ -1167,13 +1150,10 @@ void smb_lazy_parent_lease_break_close(struct ksmbd_file *fp)
+ if (!atomic_inc_not_zero(&opinfo->refcount))
+ continue;
+
+- atomic_inc(&opinfo->conn->r_count);
+- if (ksmbd_conn_releasing(opinfo->conn)) {
+- atomic_dec(&opinfo->conn->r_count);
++ if (ksmbd_conn_releasing(opinfo->conn))
+ continue;
+- }
+ oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE);
+- opinfo_conn_put(opinfo);
++ opinfo_put(opinfo);
+ }
+ }
+ up_read(&p_ci->m_lock);
+@@ -1252,7 +1232,7 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ prev_opinfo = opinfo_get_list(ci);
+ if (!prev_opinfo ||
+ (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) {
+- opinfo_conn_put(prev_opinfo);
++ opinfo_put(prev_opinfo);
+ goto set_lev;
+ }
+ prev_op_has_lease = prev_opinfo->is_lease;
+@@ -1262,19 +1242,19 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ if (share_ret < 0 &&
+ prev_opinfo->level == SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+ err = share_ret;
+- opinfo_conn_put(prev_opinfo);
++ opinfo_put(prev_opinfo);
+ goto err_out;
+ }
+
+ if (prev_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH &&
+ prev_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+- opinfo_conn_put(prev_opinfo);
++ opinfo_put(prev_opinfo);
+ goto op_break_not_needed;
+ }
+
+ list_add(&work->interim_entry, &prev_opinfo->interim_list);
+ err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II);
+- opinfo_conn_put(prev_opinfo);
++ opinfo_put(prev_opinfo);
+ if (err == -ENOENT)
+ goto set_lev;
+ /* Check all oplock was freed by close */
+@@ -1337,14 +1317,14 @@ static void smb_break_all_write_oplock(struct ksmbd_work *work,
+ return;
+ if (brk_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH &&
+ brk_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+- opinfo_conn_put(brk_opinfo);
++ opinfo_put(brk_opinfo);
+ return;
+ }
+
+ brk_opinfo->open_trunc = is_trunc;
+ list_add(&work->interim_entry, &brk_opinfo->interim_list);
+ oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II);
+- opinfo_conn_put(brk_opinfo);
++ opinfo_put(brk_opinfo);
+ }
+
+ /**
+@@ -1376,11 +1356,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ if (!atomic_inc_not_zero(&brk_op->refcount))
+ continue;
+
+- atomic_inc(&brk_op->conn->r_count);
+- if (ksmbd_conn_releasing(brk_op->conn)) {
+- atomic_dec(&brk_op->conn->r_count);
++ if (ksmbd_conn_releasing(brk_op->conn))
+ continue;
+- }
+
+ rcu_read_unlock();
+ if (brk_op->is_lease && (brk_op->o_lease->state &
+@@ -1411,7 +1388,7 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ brk_op->open_trunc = is_trunc;
+ oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE);
+ next:
+- opinfo_conn_put(brk_op);
++ opinfo_put(brk_op);
+ rcu_read_lock();
+ }
+ rcu_read_unlock();
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 8bdc592514188a..065adfb985fe2a 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -3531,8 +3531,9 @@ int smb2_open(struct ksmbd_work *work)
+ memcpy(fp->create_guid, dh_info.CreateGuid,
+ SMB2_CREATE_GUID_SIZE);
+ if (dh_info.timeout)
+- fp->durable_timeout = min(dh_info.timeout,
+- DURABLE_HANDLE_MAX_TIMEOUT);
++ fp->durable_timeout =
++ min_t(unsigned int, dh_info.timeout,
++ DURABLE_HANDLE_MAX_TIMEOUT);
+ else
+ fp->durable_timeout = 60;
+ }
+diff --git a/fs/smb/server/vfs_cache.c b/fs/smb/server/vfs_cache.c
+index 4d4ee696e37cdf..a19f4e563c7e54 100644
+--- a/fs/smb/server/vfs_cache.c
++++ b/fs/smb/server/vfs_cache.c
+@@ -863,6 +863,8 @@ static bool session_fd_check(struct ksmbd_tree_connect *tcon,
+ list_for_each_entry_rcu(op, &ci->m_op_list, op_entry) {
+ if (op->conn != conn)
+ continue;
++ if (op->conn && atomic_dec_and_test(&op->conn->refcnt))
++ kfree(op->conn);
+ op->conn = NULL;
+ }
+ up_write(&ci->m_lock);
+@@ -965,6 +967,7 @@ int ksmbd_reopen_durable_fd(struct ksmbd_work *work, struct ksmbd_file *fp)
+ if (op->conn)
+ continue;
+ op->conn = fp->conn;
++ atomic_inc(&op->conn->refcnt);
+ }
+ up_write(&ci->m_lock);
+
+diff --git a/fs/smb/server/vfs_cache.h b/fs/smb/server/vfs_cache.h
+index b0f6d0f94cb8de..5bbb179736c29c 100644
+--- a/fs/smb/server/vfs_cache.h
++++ b/fs/smb/server/vfs_cache.h
+@@ -100,8 +100,8 @@ struct ksmbd_file {
+ struct list_head blocked_works;
+ struct list_head lock_list;
+
+- int durable_timeout;
+- int durable_scavenger_timeout;
++ unsigned int durable_timeout;
++ unsigned int durable_scavenger_timeout;
+
+ /* if ls is happening on directory, below is valid*/
+ struct ksmbd_readdir_data readdir_data;
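
The switch from int to unsigned int pairs with the min() to min_t() change earlier in this series: kernel min() rejects operands of mismatched signedness, and a plain C comparison would promote a negative signed timeout to a huge unsigned value. A small standalone demonstration of that promotion hazard (the constant's value here is invented):

#include <stdio.h>

#define DURABLE_HANDLE_MAX_TIMEOUT (300u * 1000)	/* hypothetical cap, ms */

int main(void)
{
	int bad = -1;	/* e.g. a bogus client-supplied timeout */

	/* usual arithmetic conversions turn -1 into 4294967295u,
	 * so the "is it below the cap?" test is false */
	if (bad < DURABLE_HANDLE_MAX_TIMEOUT)
		puts("below the cap, keep it");
	else
		puts("negative value looks enormous after promotion");
	return 0;
}
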
+diff --git a/include/crypto/internal/simd.h b/include/crypto/internal/simd.h
+index d2316242a98843..be97b97a75dd2d 100644
+--- a/include/crypto/internal/simd.h
++++ b/include/crypto/internal/simd.h
+@@ -14,11 +14,10 @@
+ struct simd_skcipher_alg;
+ struct skcipher_alg;
+
+-struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
++struct simd_skcipher_alg *simd_skcipher_create_compat(struct skcipher_alg *ialg,
++ const char *algname,
+ const char *drvname,
+ const char *basename);
+-struct simd_skcipher_alg *simd_skcipher_create(const char *algname,
+- const char *basename);
+ void simd_skcipher_free(struct simd_skcipher_alg *alg);
+
+ int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
+@@ -32,13 +31,6 @@ void simd_unregister_skciphers(struct skcipher_alg *algs, int count,
+ struct simd_aead_alg;
+ struct aead_alg;
+
+-struct simd_aead_alg *simd_aead_create_compat(const char *algname,
+- const char *drvname,
+- const char *basename);
+-struct simd_aead_alg *simd_aead_create(const char *algname,
+- const char *basename);
+-void simd_aead_free(struct simd_aead_alg *alg);
+-
+ int simd_register_aeads_compat(struct aead_alg *algs, int count,
+ struct simd_aead_alg **simd_algs);
+
+diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
+index 5d9dff5149c999..d2676831d765da 100644
+--- a/include/drm/drm_print.h
++++ b/include/drm/drm_print.h
+@@ -221,7 +221,8 @@ drm_vprintf(struct drm_printer *p, const char *fmt, va_list *va)
+
+ /**
+ * struct drm_print_iterator - local struct used with drm_printer_coredump
+- * @data: Pointer to the devcoredump output buffer
++ * @data: Pointer to the devcoredump output buffer, can be NULL if using
++ * drm_printer_coredump to determine size of devcoredump
+ * @start: The offset within the buffer to start writing
+ * @remain: The number of bytes to write for this iteration
+ */
+@@ -266,6 +267,57 @@ struct drm_print_iterator {
+ * coredump_read, ...)
+ * }
+ *
++ * The above example has a time complexity of O(N^2), where N is the size of the
++ * devcoredump. This is acceptable for small devcoredumps but scales poorly for
++ * larger ones.
++ *
++ * Another use case for drm_coredump_printer is to capture the devcoredump into
++ * a saved buffer before the dev_coredump() callback. This involves two passes:
++ * one to determine the size of the devcoredump and another to print it to a
++ * buffer. Then, in dev_coredump(), copy from the saved buffer into the
++ * devcoredump read buffer.
++ *
++ * For example::
++ *
++ * char *devcoredump_saved_buffer;
++ *
++ * ssize_t __coredump_print(char *buffer, ssize_t count, ...)
++ * {
++ * struct drm_print_iterator iter;
++ * struct drm_printer p;
++ *
++ * iter.data = buffer;
++ * iter.start = 0;
++ * iter.remain = count;
++ *
++ * p = drm_coredump_printer(&iter);
++ *
++ * drm_printf(p, "foo=%d\n", foo);
++ * ...
++ * return count - iter.remain;
++ * }
++ *
++ * void coredump_print(...)
++ * {
++ * ssize_t count;
++ *
++ * count = __coredump_print(NULL, INT_MAX, ...);
++ * devcoredump_saved_buffer = kvmalloc(count, GFP_KERNEL);
++ * __coredump_print(devcoredump_saved_buffer, count, ...);
++ * }
++ *
++ * void coredump_read(char *buffer, loff_t offset, size_t count,
++ * void *data, size_t datalen)
++ * {
++ * ...
++ * memcpy(buffer, devcoredump_saved_buffer + offset, count);
++ * ...
++ * }
++ *
++ * The above example has a time complexity of O(2*N), i.e. two passes over
++ * the data, where N is the size of the devcoredump. This scales better
++ * than the previous example for larger devcoredumps.
++ *
+ * RETURNS:
+ * The &drm_printer object
+ */
+diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
+index 5acc64954a8830..e28bc649b5c9b7 100644
+--- a/include/drm/gpu_scheduler.h
++++ b/include/drm/gpu_scheduler.h
+@@ -574,7 +574,7 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
+
+ void drm_sched_tdr_queue_imm(struct drm_gpu_scheduler *sched);
+ void drm_sched_job_cleanup(struct drm_sched_job *job);
+-void drm_sched_wakeup(struct drm_gpu_scheduler *sched, struct drm_sched_entity *entity);
++void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
+ bool drm_sched_wqueue_ready(struct drm_gpu_scheduler *sched);
+ void drm_sched_wqueue_stop(struct drm_gpu_scheduler *sched);
+ void drm_sched_wqueue_start(struct drm_gpu_scheduler *sched);
+diff --git a/include/dt-bindings/clock/exynos7885.h b/include/dt-bindings/clock/exynos7885.h
+index 255e3aa9432373..54cfccff85086a 100644
+--- a/include/dt-bindings/clock/exynos7885.h
++++ b/include/dt-bindings/clock/exynos7885.h
+@@ -136,12 +136,12 @@
+ #define CLK_MOUT_FSYS_MMC_CARD_USER 2
+ #define CLK_MOUT_FSYS_MMC_EMBD_USER 3
+ #define CLK_MOUT_FSYS_MMC_SDIO_USER 4
+-#define CLK_MOUT_FSYS_USB30DRD_USER 4
+ #define CLK_GOUT_MMC_CARD_ACLK 5
+ #define CLK_GOUT_MMC_CARD_SDCLKIN 6
+ #define CLK_GOUT_MMC_EMBD_ACLK 7
+ #define CLK_GOUT_MMC_EMBD_SDCLKIN 8
+ #define CLK_GOUT_MMC_SDIO_ACLK 9
+ #define CLK_GOUT_MMC_SDIO_SDCLKIN 10
++#define CLK_MOUT_FSYS_USB30DRD_USER 11
+
+ #endif /* _DT_BINDINGS_CLOCK_EXYNOS_7885_H */
+diff --git a/include/dt-bindings/clock/qcom,gcc-sc8180x.h b/include/dt-bindings/clock/qcom,gcc-sc8180x.h
+index 90c6e021a0356d..2569f874fe13c6 100644
+--- a/include/dt-bindings/clock/qcom,gcc-sc8180x.h
++++ b/include/dt-bindings/clock/qcom,gcc-sc8180x.h
+@@ -248,6 +248,7 @@
+ #define GCC_USB3_SEC_CLKREF_CLK 238
+ #define GCC_UFS_MEM_CLKREF_EN 239
+ #define GCC_UFS_CARD_CLKREF_EN 240
++#define GPLL9 241
+
+ #define GCC_EMAC_BCR 0
+ #define GCC_GPU_BCR 1
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index d4d2f4d1d7cbdb..aabec598f79afa 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -1113,10 +1113,9 @@ static inline int parse_perf_domain(int cpu, const char *list_name,
+ const char *cell_name,
+ struct of_phandle_args *args)
+ {
+- struct device_node *cpu_np;
+ int ret;
+
+- cpu_np = of_cpu_device_node_get(cpu);
++ struct device_node *cpu_np __free(device_node) = of_cpu_device_node_get(cpu);
+ if (!cpu_np)
+ return -ENODEV;
+
+@@ -1124,9 +1123,6 @@ static inline int parse_perf_domain(int cpu, const char *list_name,
+ args);
+ if (ret < 0)
+ return ret;
+-
+- of_node_put(cpu_np);
+-
+ return 0;
+ }
+
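
The cpufreq hunk above switches to the kernel's scoped-cleanup helper so that of_node_put() runs automatically on every return path. Outside the kernel the same idea can be sketched with the compiler's cleanup attribute (a GCC/Clang extension), which is what __free() is built on:

#include <stdio.h>
#include <stdlib.h>

static void free_buf(char **p)
{
	free(*p);
	puts("freed");
}

/* every exit from the scope runs free_buf(), like __free(device_node)
 * running of_node_put() in the patch above */
#define __cleanup_free __attribute__((cleanup(free_buf)))

static int work(int fail_early)
{
	char *buf __cleanup_free = malloc(16);

	if (!buf)
		return -1;
	if (fail_early)
		return -2;	/* no leak: cleanup still runs */
	return 0;
}

int main(void)
{
	printf("%d\n", work(1));
	printf("%d\n", work(0));
	return 0;
}
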
+diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h
+index 2944d4aa413b75..b1c5722f2b3ce4 100644
+--- a/include/linux/fdtable.h
++++ b/include/linux/fdtable.h
+@@ -22,7 +22,6 @@
+ * as this is the granularity returned by copy_fdset().
+ */
+ #define NR_OPEN_DEFAULT BITS_PER_LONG
+-#define NR_OPEN_MAX ~0U
+
+ struct fdtable {
+ unsigned int max_fds;
+@@ -106,7 +105,10 @@ struct task_struct;
+
+ void put_files_struct(struct files_struct *fs);
+ int unshare_files(void);
+-struct files_struct *dup_fd(struct files_struct *, unsigned, int *) __latent_entropy;
++struct fd_range {
++ unsigned int from, to;
++};
++struct files_struct *dup_fd(struct files_struct *, struct fd_range *) __latent_entropy;
+ void do_close_on_exec(struct files_struct *);
+ int iterate_fd(struct files_struct *, unsigned,
+ int (*)(const void *, struct file *, unsigned),
+@@ -115,8 +117,6 @@ int iterate_fd(struct files_struct *, unsigned,
+ extern int close_fd(unsigned int fd);
+ extern int __close_range(unsigned int fd, unsigned int max_fd, unsigned int flags);
+ extern struct file *file_close_fd(unsigned int fd);
+-extern int unshare_fd(unsigned long unshare_flags, unsigned int max_fds,
+- struct files_struct **new_fdp);
+
+ extern struct kmem_cache *files_cachep;
+
+diff --git a/include/linux/hdmi.h b/include/linux/hdmi.h
+index 3bb87bf6bc658b..455f855bc08484 100644
+--- a/include/linux/hdmi.h
++++ b/include/linux/hdmi.h
+@@ -59,6 +59,15 @@ enum hdmi_infoframe_type {
+ #define HDMI_DRM_INFOFRAME_SIZE 26
+ #define HDMI_VENDOR_INFOFRAME_SIZE 4
+
++/*
++ * HDMI 1.3a table 5-14 states that the largest InfoFrame_length is 27,
++ * not including the packet header or checksum byte. We include the
++ * checksum byte in HDMI_INFOFRAME_HEADER_SIZE, so this should allow
++ * HDMI_INFOFRAME_SIZE(MAX) to be the largest buffer we could ever need
++ * for any HDMI infoframe.
++ */
++#define HDMI_MAX_INFOFRAME_SIZE 27
++
+ #define HDMI_INFOFRAME_SIZE(type) \
+ (HDMI_INFOFRAME_HEADER_SIZE + HDMI_ ## type ## _INFOFRAME_SIZE)
+
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 45bf05ad5c53a1..3ddd69b64571a7 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -695,6 +695,9 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
+ nodemask_t *nmask, gfp_t gfp_mask,
+ bool allow_alloc_fallback);
++struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
++ nodemask_t *nmask, gfp_t gfp_mask);
++
+ int hugetlb_add_to_page_cache(struct folio *folio, struct address_space *mapping,
+ pgoff_t idx);
+ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
+@@ -1061,6 +1064,13 @@ static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ return NULL;
+ }
+
++static inline struct folio *
++alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
++ nodemask_t *nmask, gfp_t gfp_mask)
++{
++ return NULL;
++}
++
+ static inline struct folio *
+ alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
+ nodemask_t *nmask, gfp_t gfp_mask,
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 377def4972985c..388ce71a29a979 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -761,6 +761,9 @@ struct i2c_adapter {
+ struct regulator *bus_regulator;
+
+ struct dentry *debugfs;
++
++ /* 7bit address space */
++ DECLARE_BITMAP(addrs_in_instantiation, 1 << 7);
+ };
+ #define to_i2c_adapter(d) container_of(d, struct i2c_adapter, dev)
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 607009150b5fa2..b26954dc9ed773 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -356,7 +356,7 @@ struct napi_struct {
+
+ unsigned long state;
+ int weight;
+- int defer_hard_irqs_count;
++ u32 defer_hard_irqs_count;
+ unsigned long gro_bitmask;
+ int (*poll)(struct napi_struct *, int);
+ #ifdef CONFIG_NETPOLL
+@@ -2091,7 +2091,7 @@ struct net_device {
+ unsigned int real_num_rx_queues;
+ struct netdev_rx_queue *_rx;
+ unsigned long gro_flush_timeout;
+- int napi_defer_hard_irqs;
++ u32 napi_defer_hard_irqs;
+ unsigned int gro_max_size;
+ unsigned int gro_ipv4_max_size;
+ rx_handler_func_t __rcu *rx_handler;
+@@ -5026,6 +5026,24 @@ void netif_set_tso_max_segs(struct net_device *dev, unsigned int segs);
+ void netif_inherit_tso_max(struct net_device *to,
+ const struct net_device *from);
+
++static inline unsigned int
++netif_get_gro_max_size(const struct net_device *dev, const struct sk_buff *skb)
++{
++ /* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */
++ return skb->protocol == htons(ETH_P_IPV6) ?
++ READ_ONCE(dev->gro_max_size) :
++ READ_ONCE(dev->gro_ipv4_max_size);
++}
++
++static inline unsigned int
++netif_get_gso_max_size(const struct net_device *dev, const struct sk_buff *skb)
++{
++ /* pairs with WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */
++ return skb->protocol == htons(ETH_P_IPV6) ?
++ READ_ONCE(dev->gso_max_size) :
++ READ_ONCE(dev->gso_ipv4_max_size);
++}
++
+ static inline bool netif_is_macsec(const struct net_device *dev)
+ {
+ return dev->priv_flags & IFF_MACSEC;
+diff --git a/include/linux/nvme-keyring.h b/include/linux/nvme-keyring.h
+index e10333d78dbbe5..19d2b256180fd7 100644
+--- a/include/linux/nvme-keyring.h
++++ b/include/linux/nvme-keyring.h
+@@ -12,7 +12,7 @@ key_serial_t nvme_tls_psk_default(struct key *keyring,
+ const char *hostnqn, const char *subnqn);
+
+ key_serial_t nvme_keyring_id(void);
+-
++struct key *nvme_tls_key_lookup(key_serial_t key_id);
+ #else
+
+ static inline key_serial_t nvme_tls_psk_default(struct key *keyring,
+@@ -24,5 +24,9 @@ static inline key_serial_t nvme_keyring_id(void)
+ {
+ return 0;
+ }
++static inline struct key *nvme_tls_key_lookup(key_serial_t key_id)
++{
++ return ERR_PTR(-ENOTSUPP);
++}
+ #endif /* !CONFIG_NVME_KEYRING */
+ #endif /* _NVME_KEYRING_H */
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 1a8942277ddad2..e336306b8c08e8 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1602,13 +1602,7 @@ static inline int perf_is_paranoid(void)
+ return sysctl_perf_event_paranoid > -1;
+ }
+
+-static inline int perf_allow_kernel(struct perf_event_attr *attr)
+-{
+- if (sysctl_perf_event_paranoid > 1 && !perfmon_capable())
+- return -EACCES;
+-
+- return security_perf_event_open(attr, PERF_SECURITY_KERNEL);
+-}
++int perf_allow_kernel(struct perf_event_attr *attr);
+
+ static inline int perf_allow_cpu(struct perf_event_attr *attr)
+ {
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f8d150343d42d9..1c771ea4481dd8 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -639,6 +639,8 @@ struct sched_dl_entity {
+ *
+ * @dl_overrun tells if the task asked to be informed about runtime
+ * overruns.
++ *
++ * @dl_server tells if this is a server entity.
+ */
+ unsigned int dl_throttled : 1;
+ unsigned int dl_yielded : 1;
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index a7d0406b9ef59c..6811681033c0f0 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -33,9 +33,9 @@
+ * node traffic on multi-node NUMA NFS servers.
+ */
+ struct svc_pool {
+- unsigned int sp_id; /* pool id; also node id on NUMA */
++ unsigned int sp_id; /* pool id; also node id on NUMA */
+ struct lwq sp_xprts; /* pending transports */
+- atomic_t sp_nrthreads; /* # of threads in pool */
++ unsigned int sp_nrthreads; /* # of threads in pool */
+ struct list_head sp_all_threads; /* all server threads */
+ struct llist_head sp_idle_threads; /* idle server threads */
+
+diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
+index b503fafb7fb3ea..a270a5892ab4f5 100644
+--- a/include/linux/uprobes.h
++++ b/include/linux/uprobes.h
+@@ -76,6 +76,8 @@ struct uprobe_task {
+ struct uprobe *active_uprobe;
+ unsigned long xol_vaddr;
+
++ struct arch_uprobe *auprobe;
++
+ struct return_instance *return_instances;
+ unsigned int depth;
+ };
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 276ca543ef44d8..02a9f4dc594d02 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -103,8 +103,10 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+
+ if (!skb_partial_csum_set(skb, start, off))
+ return -EINVAL;
++ if (skb_transport_offset(skb) < nh_min_len)
++ return -EINVAL;
+
+- nh_min_len = max_t(u32, nh_min_len, skb_transport_offset(skb));
++ nh_min_len = skb_transport_offset(skb);
+ p_off = nh_min_len + thlen;
+ if (!pskb_may_pull(skb, p_off))
+ return -EINVAL;
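
The virtio_net change above turns a silent clamp into a hard failure: if the header metadata places the transport header before the end of the minimum-size network header, the packet is malformed, and the old max_t() quietly papered over that. A tiny illustration with made-up offsets:

#include <stdio.h>

static int check_hdr(unsigned int transport_off, unsigned int nh_min_len)
{
	/* after the fix: reject instead of nh_min_len = max(...) */
	if (transport_off < nh_min_len)
		return -1;	/* -EINVAL in the kernel */
	return 0;
}

int main(void)
{
	printf("%d\n", check_hdr(14, 34));	/* inside the header: reject */
	printf("%d\n", check_hdr(54, 34));	/* plausible offset: accept */
	return 0;
}
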
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index 606b4a0f92dae2..edcc3b3a3ecf83 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -141,6 +141,7 @@
+ EM(netfs_streaming_cont_filled_page, "mod-streamw-f+") \
+ /* The rest are for writeback */ \
+ EM(netfs_folio_trace_cancel_copy, "cancel-copy") \
++ EM(netfs_folio_trace_cancel_store, "cancel-store") \
+ EM(netfs_folio_trace_clear, "clear") \
+ EM(netfs_folio_trace_clear_cc, "clear-cc") \
+ EM(netfs_folio_trace_clear_g, "clear-g") \
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index b8e071abaea5ac..3eba3934512e60 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -132,6 +132,8 @@ static inline void cec_msg_init(struct cec_msg *msg,
+ * Set the msg destination to the orig initiator and the msg initiator to the
+ * orig destination. Note that msg and orig may be the same pointer, in which
+ * case the change is done in place.
++ *
++ * It also zeroes the reply, timeout and flags fields.
+ */
+ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ struct cec_msg *orig)
+@@ -139,7 +141,9 @@ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ /* The destination becomes the initiator and vice versa */
+ msg->msg[0] = (cec_msg_destination(orig) << 4) |
+ cec_msg_initiator(orig);
+- msg->reply = msg->timeout = 0;
++ msg->reply = 0;
++ msg->timeout = 0;
++ msg->flags = 0;
+ }
+
+ /**
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 639894ed1b9732..2f71d91462331d 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -1694,7 +1694,7 @@ enum nft_flowtable_flags {
+ *
+ * @NFTA_FLOWTABLE_TABLE: name of the table containing the expression (NLA_STRING)
+ * @NFTA_FLOWTABLE_NAME: name of this flow table (NLA_STRING)
+- * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration(NLA_U32)
++ * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration (NLA_NESTED)
+ * @NFTA_FLOWTABLE_USE: number of references to this flow table (NLA_U32)
+ * @NFTA_FLOWTABLE_HANDLE: object handle (NLA_U64)
+ * @NFTA_FLOWTABLE_FLAGS: flags (NLA_U32)
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 25112cf78e2b36..7a166120a45c3d 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -321,7 +321,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ sizeof(struct io_kiocb));
+ ret |= io_futex_cache_init(ctx);
+ if (ret)
+- goto err;
++ goto free_ref;
+ init_completion(&ctx->ref_comp);
+ xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
+ mutex_init(&ctx->uring_lock);
+@@ -349,6 +349,9 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ io_napi_init(ctx);
+
+ return ctx;
++
++free_ref:
++ percpu_ref_exit(&ctx->refs);
+ err:
+ io_alloc_cache_free(&ctx->rsrc_node_cache, kfree);
+ io_alloc_cache_free(&ctx->apoll_cache, kfree);
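
The io_uring fix is an unwind-ordering bug: percpu_ref_init() had already succeeded when the later cache setup failed, but the old error label skipped the matching percpu_ref_exit(). The general shape, sketched with hypothetical init/teardown pairs:

#include <stdio.h>

static int  init_a(void) { return 0; }	/* succeeds */
static void exit_a(void) { puts("exit_a"); }
static int  init_b(void) { return -1; }	/* fails */

static int setup(void)
{
	int ret;

	ret = init_a();
	if (ret)
		goto err;
	ret = init_b();
	if (ret)
		goto free_a;	/* must unwind what already succeeded */
	return 0;

free_a:
	exit_a();	/* the missing percpu_ref_exit() equivalent */
err:
	return ret;
}

int main(void)
{
	printf("%d\n", setup());
	return 0;
}
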
+diff --git a/io_uring/net.c b/io_uring/net.c
+index d08abcca89cc5b..ab2b9f172705f2 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -1126,6 +1126,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ int ret, min_ret = 0;
+ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ size_t len = sr->len;
++ bool mshot_finished;
+
+ if (!(req->flags & REQ_F_POLLED) &&
+ (sr->flags & IORING_RECVSEND_POLL_FIRST))
+@@ -1180,6 +1181,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ req_set_fail(req);
+ }
+
++ mshot_finished = ret <= 0;
+ if (ret > 0)
+ ret += sr->done_io;
+ else if (sr->done_io)
+@@ -1187,7 +1189,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ else
+ io_kbuf_recycle(req, issue_flags);
+
+- if (!io_recv_finish(req, &ret, kmsg, ret <= 0, issue_flags))
++ if (!io_recv_finish(req, &ret, kmsg, mshot_finished, issue_flags))
+ goto retry_multishot;
+
+ return ret;
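
The io_recv() fix snapshots the multishot-finished predicate before ret is rewritten with the accumulated done_io byte count; testing ret <= 0 afterwards would see the positive total and wrongly keep the multishot armed. A small standalone illustration (values invented):

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	int ret = 0;		/* e.g. recv returned no more data */
	int done_io = 512;	/* bytes accumulated by earlier retries */

	bool finished = ret <= 0;	/* snapshot first (the fix) */
	if (ret > 0)
		ret += done_io;
	else if (done_io)
		ret = done_io;	/* ret is now 512 ... */

	/* ... so testing ret <= 0 here would wrongly say "not finished" */
	printf("finished=%d ret=%d\n", finished, ret);
	return 0;
}
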
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 8c07efa3905b9a..d5215cb1747f16 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8029,6 +8029,15 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
+ return 0;
+ }
+
++static struct bpf_reg_state *get_iter_from_state(struct bpf_verifier_state *cur_st,
++ struct bpf_kfunc_call_arg_meta *meta)
++{
++ int iter_frameno = meta->iter.frameno;
++ int iter_spi = meta->iter.spi;
++
++ return &cur_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
++}
++
+ /* process_iter_next_call() is called when verifier gets to iterator's next
+ * "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer
+ * to it as just "iter_next()" in comments below.
+@@ -8113,12 +8122,10 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
+ struct bpf_verifier_state *cur_st = env->cur_state, *queued_st, *prev_st;
+ struct bpf_func_state *cur_fr = cur_st->frame[cur_st->curframe], *queued_fr;
+ struct bpf_reg_state *cur_iter, *queued_iter;
+- int iter_frameno = meta->iter.frameno;
+- int iter_spi = meta->iter.spi;
+
+ BTF_TYPE_EMIT(struct bpf_iter);
+
+- cur_iter = &env->cur_state->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
++ cur_iter = get_iter_from_state(cur_st, meta);
+
+ if (cur_iter->iter.state != BPF_ITER_STATE_ACTIVE &&
+ cur_iter->iter.state != BPF_ITER_STATE_DRAINED) {
+@@ -8146,7 +8153,7 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
+ if (!queued_st)
+ return -ENOMEM;
+
+- queued_iter = &queued_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
++ queued_iter = get_iter_from_state(queued_st, meta);
+ queued_iter->iter.state = BPF_ITER_STATE_ACTIVE;
+ queued_iter->iter.depth++;
+ if (prev_st)
+@@ -12692,6 +12699,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ regs[BPF_REG_0].btf = desc_btf;
+ regs[BPF_REG_0].type = PTR_TO_BTF_ID;
+ regs[BPF_REG_0].btf_id = ptr_type_id;
++
++ if (is_iter_next_kfunc(&meta)) {
++ struct bpf_reg_state *cur_iter;
++
++ cur_iter = get_iter_from_state(env->cur_state, &meta);
++
++ if (cur_iter->type & MEM_RCU) /* KF_RCU_PROTECTED */
++ regs[BPF_REG_0].type |= MEM_RCU;
++ else
++ regs[BPF_REG_0].type |= PTR_TRUSTED;
++ }
+ }
+
+ if (is_kfunc_ret_null(&meta)) {
+@@ -19935,13 +19953,46 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ /* Convert BPF_CLASS(insn->code) == BPF_ALU64 to 32-bit ALU */
+ insn->code = BPF_ALU | BPF_OP(insn->code) | BPF_SRC(insn->code);
+
+- /* Make divide-by-zero exceptions impossible. */
++ /* Make sdiv/smod divide-by-minus-one exceptions impossible. */
++ if ((insn->code == (BPF_ALU64 | BPF_MOD | BPF_K) ||
++ insn->code == (BPF_ALU64 | BPF_DIV | BPF_K) ||
++ insn->code == (BPF_ALU | BPF_MOD | BPF_K) ||
++ insn->code == (BPF_ALU | BPF_DIV | BPF_K)) &&
++ insn->off == 1 && insn->imm == -1) {
++ bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
++ bool isdiv = BPF_OP(insn->code) == BPF_DIV;
++ struct bpf_insn *patchlet;
++ struct bpf_insn chk_and_sdiv[] = {
++ BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++ BPF_NEG | BPF_K, insn->dst_reg,
++ 0, 0, 0),
++ };
++ struct bpf_insn chk_and_smod[] = {
++ BPF_MOV32_IMM(insn->dst_reg, 0),
++ };
++
++ patchlet = isdiv ? chk_and_sdiv : chk_and_smod;
++ cnt = isdiv ? ARRAY_SIZE(chk_and_sdiv) : ARRAY_SIZE(chk_and_smod);
++
++ new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
++ if (!new_prog)
++ return -ENOMEM;
++
++ delta += cnt - 1;
++ env->prog = prog = new_prog;
++ insn = new_prog->insnsi + i + delta;
++ goto next_insn;
++ }
++
++ /* Make divide-by-zero and divide-by-minus-one exceptions impossible. */
+ if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
+ insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+ insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+ insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+ bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+ bool isdiv = BPF_OP(insn->code) == BPF_DIV;
++ bool is_sdiv = isdiv && insn->off == 1;
++ bool is_smod = !isdiv && insn->off == 1;
+ struct bpf_insn *patchlet;
+ struct bpf_insn chk_and_div[] = {
+ /* [R,W]x div 0 -> 0 */
+@@ -19961,10 +20012,62 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+ BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
+ };
++ struct bpf_insn chk_and_sdiv[] = {
++ /* [R,W]x sdiv 0 -> 0
++ * LLONG_MIN sdiv -1 -> LLONG_MIN
++ * INT_MIN sdiv -1 -> INT_MIN
++ */
++ BPF_MOV64_REG(BPF_REG_AX, insn->src_reg),
++ BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++ BPF_ADD | BPF_K, BPF_REG_AX,
++ 0, 0, 1),
++ BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++ BPF_JGT | BPF_K, BPF_REG_AX,
++ 0, 4, 1),
++ BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++ BPF_JEQ | BPF_K, BPF_REG_AX,
++ 0, 1, 0),
++ BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++ BPF_MOV | BPF_K, insn->dst_reg,
++ 0, 0, 0),
++ /* BPF_NEG(LLONG_MIN) == -LLONG_MIN == LLONG_MIN */
++ BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++ BPF_NEG | BPF_K, insn->dst_reg,
++ 0, 0, 0),
++ BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++ *insn,
++ };
++ struct bpf_insn chk_and_smod[] = {
++ /* [R,W]x mod 0 -> [R,W]x */
++ /* [R,W]x mod -1 -> 0 */
++ BPF_MOV64_REG(BPF_REG_AX, insn->src_reg),
++ BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++ BPF_ADD | BPF_K, BPF_REG_AX,
++ 0, 0, 1),
++ BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++ BPF_JGT | BPF_K, BPF_REG_AX,
++ 0, 3, 1),
++ BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++ BPF_JEQ | BPF_K, BPF_REG_AX,
++ 0, 3 + (is64 ? 0 : 1), 1),
++ BPF_MOV32_IMM(insn->dst_reg, 0),
++ BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++ *insn,
++ BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++ BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
++ };
+
+- patchlet = isdiv ? chk_and_div : chk_and_mod;
+- cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
+- ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
++ if (is_sdiv) {
++ patchlet = chk_and_sdiv;
++ cnt = ARRAY_SIZE(chk_and_sdiv);
++ } else if (is_smod) {
++ patchlet = chk_and_smod;
++ cnt = ARRAY_SIZE(chk_and_smod) - (is64 ? 2 : 0);
++ } else {
++ patchlet = isdiv ? chk_and_div : chk_and_mod;
++ cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
++ ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
++ }
+
+ new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
+ if (!new_prog)
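
Background for the patchlets above: a hardware signed divide of LLONG_MIN by -1 traps on x86, so the verifier rewrites a constant divide by -1 into a plain negate (BPF_NEG) and a constant smod by -1 into a move of 0, which is why BPF_NEG(LLONG_MIN) == LLONG_MIN is called out in the comment. The intended arithmetic, written as plain C with the wrap made explicit via unsigned math:

#include <stdio.h>
#include <limits.h>

/* BPF semantics for division by the constant -1: just negate.
 * Unsigned arithmetic keeps the LLONG_MIN wrap well-defined;
 * -LLONG_MIN wraps back to LLONG_MIN, exactly what the patch wants. */
static long long bpf_sdiv_minus1(long long x)
{
	return (long long)(0ULL - (unsigned long long)x);
}

static long long bpf_smod_minus1(long long x)
{
	(void)x;
	return 0;	/* x smod -1 is defined as 0 */
}

int main(void)
{
	printf("%lld\n", bpf_sdiv_minus1(7));		/* -7 */
	printf("%lld\n", bpf_sdiv_minus1(LLONG_MIN));	/* LLONG_MIN */
	printf("%lld\n", bpf_smod_minus1(LLONG_MIN));	/* 0 */
	return 0;
}
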
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8a6c6bbcd658ad..e9ebd6312bf574 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -264,6 +264,7 @@ static void event_function_call(struct perf_event *event, event_f func, void *da
+ {
+ struct perf_event_context *ctx = event->ctx;
+ struct task_struct *task = READ_ONCE(ctx->task); /* verified in event_function */
++ struct perf_cpu_context *cpuctx;
+ struct event_function_struct efs = {
+ .event = event,
+ .func = func,
+@@ -291,22 +292,25 @@ static void event_function_call(struct perf_event *event, event_f func, void *da
+ if (!task_function_call(task, event_function, &efs))
+ return;
+
+- raw_spin_lock_irq(&ctx->lock);
++ local_irq_disable();
++ cpuctx = this_cpu_ptr(&perf_cpu_context);
++ perf_ctx_lock(cpuctx, ctx);
+ /*
+ * Reload the task pointer, it might have been changed by
+ * a concurrent perf_event_context_sched_out().
+ */
+ task = ctx->task;
+- if (task == TASK_TOMBSTONE) {
+- raw_spin_unlock_irq(&ctx->lock);
+- return;
+- }
++ if (task == TASK_TOMBSTONE)
++ goto unlock;
+ if (ctx->is_active) {
+- raw_spin_unlock_irq(&ctx->lock);
++ perf_ctx_unlock(cpuctx, ctx);
++ local_irq_enable();
+ goto again;
+ }
+ func(event, NULL, ctx, data);
+- raw_spin_unlock_irq(&ctx->lock);
++unlock:
++ perf_ctx_unlock(cpuctx, ctx);
++ local_irq_enable();
+ }
+
+ /*
+@@ -4093,7 +4097,11 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count, bo
+ period = perf_calculate_period(event, nsec, count);
+
+ delta = (s64)(period - hwc->sample_period);
+- delta = (delta + 7) / 8; /* low pass filter */
++ if (delta >= 0)
++ delta += 7;
++ else
++ delta -= 7;
++ delta /= 8; /* low pass filter */
+
+ sample_period = hwc->sample_period + delta;
+
+@@ -13358,6 +13366,15 @@ const struct perf_event_attr *perf_event_attrs(struct perf_event *event)
+ return &event->attr;
+ }
+
++int perf_allow_kernel(struct perf_event_attr *attr)
++{
++ if (sysctl_perf_event_paranoid > 1 && !perfmon_capable())
++ return -EACCES;
++
++ return security_perf_event_open(attr, PERF_SECURITY_KERNEL);
++}
++EXPORT_SYMBOL_GPL(perf_allow_kernel);
++
+ /*
+ * Inherit an event from parent task to child task.
+ *
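
The perf_adjust_period() hunk exists because C division truncates toward zero: with the old (delta + 7) / 8, every small negative delta in [-7, -1] rounded to 0 and the sample period could never be nudged downward. A quick check of both variants:

#include <stdio.h>

static long lowpass_old(long delta) { return (delta + 7) / 8; }

static long lowpass_new(long delta)
{
	/* round away from zero before dividing, as in the fix above */
	if (delta >= 0)
		delta += 7;
	else
		delta -= 7;
	return delta / 8;
}

int main(void)
{
	long d = -3;	/* small downward adjustment */
	printf("old: %ld, new: %ld\n", lowpass_old(d), lowpass_new(d));
	/* old: 0 (adjustment lost), new: -1 (period still shrinks) */
	return 0;
}
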
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 50d7949be2b175..56cd0c7f516d3e 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1500,7 +1500,7 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
+
+ area->xol_mapping.name = "[uprobes]";
+ area->xol_mapping.pages = area->pages;
+- area->pages[0] = alloc_page(GFP_HIGHUSER);
++ area->pages[0] = alloc_page(GFP_HIGHUSER | __GFP_ZERO);
+ if (!area->pages[0])
+ goto free_bitmap;
+ area->pages[1] = NULL;
+@@ -2081,6 +2081,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
+ bool need_prep = false; /* prepare return uprobe, when needed */
+
+ down_read(&uprobe->register_rwsem);
++ current->utask->auprobe = &uprobe->arch;
+ for (uc = uprobe->consumers; uc; uc = uc->next) {
+ int rc = 0;
+
+@@ -2095,6 +2096,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
+
+ remove &= rc;
+ }
++ current->utask->auprobe = NULL;
+
+ if (need_prep && !remove)
+ prepare_uretprobe(uprobe, regs); /* put bp at return */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index cc760491f20127..6b97fb2ac4af56 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1754,33 +1754,30 @@ static int copy_files(unsigned long clone_flags, struct task_struct *tsk,
+ int no_files)
+ {
+ struct files_struct *oldf, *newf;
+- int error = 0;
+
+ /*
+ * A background process may not have any files ...
+ */
+ oldf = current->files;
+ if (!oldf)
+- goto out;
++ return 0;
+
+ if (no_files) {
+ tsk->files = NULL;
+- goto out;
++ return 0;
+ }
+
+ if (clone_flags & CLONE_FILES) {
+ atomic_inc(&oldf->count);
+- goto out;
++ return 0;
+ }
+
+- newf = dup_fd(oldf, NR_OPEN_MAX, &error);
+- if (!newf)
+- goto out;
++ newf = dup_fd(oldf, NULL);
++ if (IS_ERR(newf))
++ return PTR_ERR(newf);
+
+ tsk->files = newf;
+- error = 0;
+-out:
+- return error;
++ return 0;
+ }
+
+ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
+@@ -3232,17 +3229,16 @@ static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp)
+ /*
+ * Unshare file descriptor table if it is being shared
+ */
+-int unshare_fd(unsigned long unshare_flags, unsigned int max_fds,
+- struct files_struct **new_fdp)
++static int unshare_fd(unsigned long unshare_flags, struct files_struct **new_fdp)
+ {
+ struct files_struct *fd = current->files;
+- int error = 0;
+
+ if ((unshare_flags & CLONE_FILES) &&
+ (fd && atomic_read(&fd->count) > 1)) {
+- *new_fdp = dup_fd(fd, max_fds, &error);
+- if (!*new_fdp)
+- return error;
++ fd = dup_fd(fd, NULL);
++ if (IS_ERR(fd))
++ return PTR_ERR(fd);
++ *new_fdp = fd;
+ }
+
+ return 0;
+@@ -3300,7 +3296,7 @@ int ksys_unshare(unsigned long unshare_flags)
+ err = unshare_fs(unshare_flags, &new_fs);
+ if (err)
+ goto bad_unshare_out;
+- err = unshare_fd(unshare_flags, NR_OPEN_MAX, &new_fd);
++ err = unshare_fd(unshare_flags, &new_fd);
+ if (err)
+ goto bad_unshare_cleanup_fs;
+ err = unshare_userns(unshare_flags, &new_cred);
+@@ -3392,7 +3388,7 @@ int unshare_files(void)
+ struct files_struct *old, *copy = NULL;
+ int error;
+
+- error = unshare_fd(CLONE_FILES, NR_OPEN_MAX, &copy);
++ error = unshare_fd(CLONE_FILES, &copy);
+ if (error || !copy)
+ return error;
+
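
dup_fd() now reports failure through the pointer itself, per the kernel's ERR_PTR convention: error codes live in the top few kilobytes of the address space, so one return value can carry either a valid pointer or a negative errno. A simplified standalone version of the err.h helpers used by the callers above:

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

static void *dup_thing(int fail)
{
	static int thing;
	return fail ? ERR_PTR(-ENOMEM) : &thing;
}

int main(void)
{
	void *p = dup_thing(1);
	if (IS_ERR(p))	/* mirrors: if (IS_ERR(newf)) return PTR_ERR(newf); */
		printf("error %ld\n", PTR_ERR(p));
	return 0;
}
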
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index 6dc76b590703ed..93a822d3c468ca 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -168,7 +168,7 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
+ jump_label_update(key);
+ /*
+ * Ensure that when static_key_fast_inc_not_disabled() or
+- * static_key_slow_try_dec() observe the positive value,
++ * static_key_dec_not_one() observe the positive value,
+ * they must also observe all the text changes.
+ */
+ atomic_set_release(&key->enabled, 1);
+@@ -250,7 +250,7 @@ void static_key_disable(struct static_key *key)
+ }
+ EXPORT_SYMBOL_GPL(static_key_disable);
+
+-static bool static_key_slow_try_dec(struct static_key *key)
++static bool static_key_dec_not_one(struct static_key *key)
+ {
+ int v;
+
+@@ -274,6 +274,14 @@ static bool static_key_slow_try_dec(struct static_key *key)
+ * enabled. This suggests an ordering problem on the user side.
+ */
+ WARN_ON_ONCE(v < 0);
++
++ /*
++ * Warn about underflow, and lie about success in an attempt to
++ * not make things worse.
++ */
++ if (WARN_ON_ONCE(v == 0))
++ return true;
++
+ if (v <= 1)
+ return false;
+ } while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
+@@ -284,15 +292,27 @@ static bool static_key_slow_try_dec(struct static_key *key)
+ static void __static_key_slow_dec_cpuslocked(struct static_key *key)
+ {
+ lockdep_assert_cpus_held();
++ int val;
+
+- if (static_key_slow_try_dec(key))
++ if (static_key_dec_not_one(key))
+ return;
+
+ guard(mutex)(&jump_label_mutex);
+- if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
++ val = atomic_read(&key->enabled);
++ /*
++ * It should be impossible to observe -1 with jump_label_mutex held,
++ * see static_key_slow_inc_cpuslocked().
++ */
++ if (WARN_ON_ONCE(val == -1))
++ return;
++ /*
++ * Cannot already be 0, something went sideways.
++ */
++ if (WARN_ON_ONCE(val == 0))
++ return;
++
++ if (atomic_dec_and_test(&key->enabled))
+ jump_label_update(key);
+- else
+- WARN_ON_ONCE(!static_key_slow_try_dec(key));
+ }
+
+ static void __static_key_slow_dec(struct static_key *key)
+@@ -329,7 +349,7 @@ void __static_key_slow_dec_deferred(struct static_key *key,
+ {
+ STATIC_KEY_CHECK_USE(key);
+
+- if (static_key_slow_try_dec(key))
++ if (static_key_dec_not_one(key))
+ return;
+
+ schedule_delayed_work(work, timeout);
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index b53a9e8f5904f1..f88c75b3cea3b8 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -499,7 +499,7 @@ rcu_scale_writer(void *arg)
+ schedule_timeout_idle(torture_random(&tr) % writer_holdoff_jiffies + 1);
+ wdp = &wdpp[i];
+ *wdp = ktime_get_mono_fast_ns();
+- if (gp_async) {
++ if (gp_async && !WARN_ON_ONCE(!cur_ops->async)) {
+ retry:
+ if (!rhp)
+ rhp = kmalloc(sizeof(*rhp), GFP_KERNEL);
+@@ -555,7 +555,7 @@ rcu_scale_writer(void *arg)
+ i++;
+ rcu_scale_wait_shutdown();
+ } while (!torture_must_stop());
+- if (gp_async) {
++ if (gp_async && cur_ops->async) {
+ cur_ops->gp_barrier();
+ }
+ writer_n_durations[me] = i_max + 1;
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index ba3440a45b6dd4..bc8429ada7a51d 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -34,6 +34,7 @@ typedef void (*postgp_func_t)(struct rcu_tasks *rtp);
+ * @rtp_blkd_tasks: List of tasks blocked as readers.
+ * @rtp_exit_list: List of tasks in the latter portion of do_exit().
+ * @cpu: CPU number corresponding to this entry.
++ * @index: Index of this CPU in rtpcp_array of the rcu_tasks structure.
+ * @rtpp: Pointer to the rcu_tasks structure.
+ */
+ struct rcu_tasks_percpu {
+@@ -49,6 +50,7 @@ struct rcu_tasks_percpu {
+ struct list_head rtp_blkd_tasks;
+ struct list_head rtp_exit_list;
+ int cpu;
++ int index;
+ struct rcu_tasks *rtpp;
+ };
+
+@@ -76,6 +78,7 @@ struct rcu_tasks_percpu {
+ * @call_func: This flavor's call_rcu()-equivalent function.
+ * @wait_state: Task state for synchronous grace-period waits (default TASK_UNINTERRUPTIBLE).
+ * @rtpcpu: This flavor's rcu_tasks_percpu structure.
++ * @rtpcp_array: Array of pointers to rcu_tasks_percpu structure of CPUs in cpu_possible_mask.
+ * @percpu_enqueue_shift: Shift down CPU ID this much when enqueuing callbacks.
+ * @percpu_enqueue_lim: Number of per-CPU callback queues in use for enqueuing.
+ * @percpu_dequeue_lim: Number of per-CPU callback queues in use for dequeuing.
+@@ -110,6 +113,7 @@ struct rcu_tasks {
+ call_rcu_func_t call_func;
+ unsigned int wait_state;
+ struct rcu_tasks_percpu __percpu *rtpcpu;
++ struct rcu_tasks_percpu **rtpcp_array;
+ int percpu_enqueue_shift;
+ int percpu_enqueue_lim;
+ int percpu_dequeue_lim;
+@@ -182,6 +186,8 @@ module_param(rcu_task_collapse_lim, int, 0444);
+ static int rcu_task_lazy_lim __read_mostly = 32;
+ module_param(rcu_task_lazy_lim, int, 0444);
+
++static int rcu_task_cpu_ids;
++
+ /* RCU tasks grace-period state for debugging. */
+ #define RTGS_INIT 0
+ #define RTGS_WAIT_WAIT_CBS 1
+@@ -245,6 +251,8 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ int cpu;
+ int lim;
+ int shift;
++ int maxcpu;
++ int index = 0;
+
+ if (rcu_task_enqueue_lim < 0) {
+ rcu_task_enqueue_lim = 1;
+@@ -254,14 +262,9 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ }
+ lim = rcu_task_enqueue_lim;
+
+- if (lim > nr_cpu_ids)
+- lim = nr_cpu_ids;
+- shift = ilog2(nr_cpu_ids / lim);
+- if (((nr_cpu_ids - 1) >> shift) >= lim)
+- shift++;
+- WRITE_ONCE(rtp->percpu_enqueue_shift, shift);
+- WRITE_ONCE(rtp->percpu_dequeue_lim, lim);
+- smp_store_release(&rtp->percpu_enqueue_lim, lim);
++ rtp->rtpcp_array = kcalloc(num_possible_cpus(), sizeof(struct rcu_tasks_percpu *), GFP_KERNEL);
++ BUG_ON(!rtp->rtpcp_array);
++
+ for_each_possible_cpu(cpu) {
+ struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+
+@@ -273,14 +276,29 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ INIT_WORK(&rtpcp->rtp_work, rcu_tasks_invoke_cbs_wq);
+ rtpcp->cpu = cpu;
+ rtpcp->rtpp = rtp;
++ rtpcp->index = index;
++ rtp->rtpcp_array[index] = rtpcp;
++ index++;
+ if (!rtpcp->rtp_blkd_tasks.next)
+ INIT_LIST_HEAD(&rtpcp->rtp_blkd_tasks);
+ if (!rtpcp->rtp_exit_list.next)
+ INIT_LIST_HEAD(&rtpcp->rtp_exit_list);
++ maxcpu = cpu;
+ }
+
+- pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d.\n", rtp->name,
+- data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim), rcu_task_cb_adjust);
++ rcu_task_cpu_ids = maxcpu + 1;
++ if (lim > rcu_task_cpu_ids)
++ lim = rcu_task_cpu_ids;
++ shift = ilog2(rcu_task_cpu_ids / lim);
++ if (((rcu_task_cpu_ids - 1) >> shift) >= lim)
++ shift++;
++ WRITE_ONCE(rtp->percpu_enqueue_shift, shift);
++ WRITE_ONCE(rtp->percpu_dequeue_lim, lim);
++ smp_store_release(&rtp->percpu_enqueue_lim, lim);
++
++ pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d rcu_task_cpu_ids=%d.\n",
++ rtp->name, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim),
++ rcu_task_cb_adjust, rcu_task_cpu_ids);
+ }
+
+ // Compute wakeup time for lazy callback timer.
+@@ -348,7 +366,7 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
+ rtpcp->rtp_n_lock_retries = 0;
+ }
+ if (rcu_task_cb_adjust && ++rtpcp->rtp_n_lock_retries > rcu_task_contend_lim &&
+- READ_ONCE(rtp->percpu_enqueue_lim) != nr_cpu_ids)
++ READ_ONCE(rtp->percpu_enqueue_lim) != rcu_task_cpu_ids)
+ needadjust = true; // Defer adjustment to avoid deadlock.
+ }
+ // Queuing callbacks before initialization not yet supported.
+@@ -368,10 +386,10 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
+ raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+ if (unlikely(needadjust)) {
+ raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
+- if (rtp->percpu_enqueue_lim != nr_cpu_ids) {
++ if (rtp->percpu_enqueue_lim != rcu_task_cpu_ids) {
+ WRITE_ONCE(rtp->percpu_enqueue_shift, 0);
+- WRITE_ONCE(rtp->percpu_dequeue_lim, nr_cpu_ids);
+- smp_store_release(&rtp->percpu_enqueue_lim, nr_cpu_ids);
++ WRITE_ONCE(rtp->percpu_dequeue_lim, rcu_task_cpu_ids);
++ smp_store_release(&rtp->percpu_enqueue_lim, rcu_task_cpu_ids);
+ pr_info("Switching %s to per-CPU callback queuing.\n", rtp->name);
+ }
+ raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
+@@ -444,6 +462,8 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+
+ dequeue_limit = smp_load_acquire(&rtp->percpu_dequeue_lim);
+ for (cpu = 0; cpu < dequeue_limit; cpu++) {
++ if (!cpu_possible(cpu))
++ continue;
+ struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+
+ /* Advance and accelerate any new callbacks. */
+@@ -481,7 +501,7 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ if (rcu_task_cb_adjust && ncbs <= rcu_task_collapse_lim) {
+ raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
+ if (rtp->percpu_enqueue_lim > 1) {
+- WRITE_ONCE(rtp->percpu_enqueue_shift, order_base_2(nr_cpu_ids));
++ WRITE_ONCE(rtp->percpu_enqueue_shift, order_base_2(rcu_task_cpu_ids));
+ smp_store_release(&rtp->percpu_enqueue_lim, 1);
+ rtp->percpu_dequeue_gpseq = get_state_synchronize_rcu();
+ gpdone = false;
+@@ -496,7 +516,9 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ pr_info("Completing switch %s to CPU-0 callback queuing.\n", rtp->name);
+ }
+ if (rtp->percpu_dequeue_lim == 1) {
+- for (cpu = rtp->percpu_dequeue_lim; cpu < nr_cpu_ids; cpu++) {
++ for (cpu = rtp->percpu_dequeue_lim; cpu < rcu_task_cpu_ids; cpu++) {
++ if (!cpu_possible(cpu))
++ continue;
+ struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+
+ WARN_ON_ONCE(rcu_segcblist_n_cbs(&rtpcp->cblist));
+@@ -511,30 +533,32 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ // Advance callbacks and invoke any that are ready.
+ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu *rtpcp)
+ {
+- int cpu;
+- int cpunext;
+ int cpuwq;
+ unsigned long flags;
+ int len;
++ int index;
+ struct rcu_head *rhp;
+ struct rcu_cblist rcl = RCU_CBLIST_INITIALIZER(rcl);
+ struct rcu_tasks_percpu *rtpcp_next;
+
+- cpu = rtpcp->cpu;
+- cpunext = cpu * 2 + 1;
+- if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+- rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+- cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
+- queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+- cpunext++;
+- if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+- rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+- cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
++ index = rtpcp->index * 2 + 1;
++ if (index < num_possible_cpus()) {
++ rtpcp_next = rtp->rtpcp_array[index];
++ if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
++ cpuwq = rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : WORK_CPU_UNBOUND;
+ queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
++ index++;
++ if (index < num_possible_cpus()) {
++ rtpcp_next = rtp->rtpcp_array[index];
++ if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
++ cpuwq = rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : WORK_CPU_UNBOUND;
++ queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
++ }
++ }
+ }
+ }
+
+- if (rcu_segcblist_empty(&rtpcp->cblist) || !cpu_possible(cpu))
++ if (rcu_segcblist_empty(&rtpcp->cblist))
+ return;
+ raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+ rcu_segcblist_advance(&rtpcp->cblist, rcu_seq_current(&rtp->tasks_gp_seq));
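
The new ->index and rtpcp_array plumbing above keeps the 2i+1 / 2i+2 workqueue fan-out working when possible CPU numbers are sparse: the tree is built over dense array slots, and each slot maps back to a real CPU. A toy version of that kick-off (sequential recursion here, where the kernel queues the two children as parallel work items):

#include <stdio.h>

/* dense slots 0..n-1 stand for rcu_tasks_percpu entries; the real CPU
 * ids behind them may be sparse, which is why indexing by CPU broke */
static void kick(int index, int n)
{
	if (index >= n)
		return;
	printf("slot %d: invoke callbacks\n", index);
	kick(2 * index + 1, n);	/* first child */
	kick(2 * index + 2, n);	/* second child */
}

int main(void)
{
	kick(0, 6);	/* the parallel form reaches all slots in O(log n) rounds */
	return 0;
}
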
+diff --git a/kernel/resource.c b/kernel/resource.c
+index a83040fde236fb..1681ab5012e12b 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -540,20 +540,62 @@ static int __region_intersects(struct resource *parent, resource_size_t start,
+ size_t size, unsigned long flags,
+ unsigned long desc)
+ {
+- struct resource res;
++ resource_size_t ostart, oend;
+ int type = 0; int other = 0;
+- struct resource *p;
++ struct resource *p, *dp;
++ bool is_type, covered;
++ struct resource res;
+
+ res.start = start;
+ res.end = start + size - 1;
+
+ for (p = parent->child; p ; p = p->sibling) {
+- bool is_type = (((p->flags & flags) == flags) &&
+- ((desc == IORES_DESC_NONE) ||
+- (desc == p->desc)));
+-
+- if (resource_overlaps(p, &res))
+- is_type ? type++ : other++;
++ if (!resource_overlaps(p, &res))
++ continue;
++ is_type = (p->flags & flags) == flags &&
++ (desc == IORES_DESC_NONE || desc == p->desc);
++ if (is_type) {
++ type++;
++ continue;
++ }
++ /*
++ * Continue to search in descendant resources as if the
++ * matched descendant resources cover some ranges of 'p'.
++ *
++ * |------------- "CXL Window 0" ------------|
++ * |-- "System RAM" --|
++ *
++ * will behave similarly to the following fake resource
++ * tree when searching "System RAM".
++ *
++ * |-- "System RAM" --||-- "CXL Window 0a" --|
++ */
++ covered = false;
++ ostart = max(res.start, p->start);
++ oend = min(res.end, p->end);
++ for_each_resource(p, dp, false) {
++ if (!resource_overlaps(dp, &res))
++ continue;
++ is_type = (dp->flags & flags) == flags &&
++ (desc == IORES_DESC_NONE || desc == dp->desc);
++ if (is_type) {
++ type++;
++ /*
++ * Range from 'ostart' to 'dp->start'
++ * isn't covered by matched resource.
++ */
++ if (dp->start > ostart)
++ break;
++ if (dp->end >= oend) {
++ covered = true;
++ break;
++ }
++ /* Remove covered range */
++ ostart = max(ostart, dp->end + 1);
++ }
++ }
++ if (!covered)
++ other++;
+ }
+
+ if (type == 0)
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index f3951e4a55e5b6..1af59cf714cd3d 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5789,6 +5789,14 @@ static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
+ #endif
+
+ put_prev_task(rq, prev);
++
++ /*
++ * We've updated @prev and no longer need the server link, clear it.
++ * Must be done before ->pick_next_task() because that can (re)set
++ * ->dl_server.
++ */
++ if (prev->dl_server)
++ prev->dl_server = NULL;
+ }
+
+ /*
+@@ -5819,6 +5827,13 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ p = pick_next_task_idle(rq);
+ }
+
++ /*
++ * This is a normal CFS pick, but the previous could be a DL pick.
++ * Clear it as previous is no longer picked.
++ */
++ if (prev->dl_server)
++ prev->dl_server = NULL;
++
+ /*
+ * This is the fast path; it cannot be a DL server pick;
+ * therefore even if @p == @prev, ->dl_server must be NULL.
+@@ -5832,14 +5847,6 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ restart:
+ put_prev_task_balance(rq, prev, rf);
+
+- /*
+- * We've updated @prev and no longer need the server link, clear it.
+- * Must be done before ->pick_next_task() because that can (re)set
+- * ->dl_server.
+- */
+- if (prev->dl_server)
+- prev->dl_server = NULL;
+-
+ for_each_class(class) {
+ p = class->pick_next_task(rq);
+ if (p)
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 020d58967d4e8e..84dad1511d1e48 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -769,12 +769,13 @@ static void record_times(struct psi_group_cpu *groupc, u64 now)
+ }
+
+ static void psi_group_change(struct psi_group *group, int cpu,
+- unsigned int clear, unsigned int set, u64 now,
++ unsigned int clear, unsigned int set,
+ bool wake_clock)
+ {
+ struct psi_group_cpu *groupc;
+ unsigned int t, m;
+ u32 state_mask;
++ u64 now;
+
+ lockdep_assert_rq_held(cpu_rq(cpu));
+ groupc = per_cpu_ptr(group->pcpu, cpu);
+@@ -789,6 +790,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ * SOME and FULL time these may have resulted in.
+ */
+ write_seqcount_begin(&groupc->seq);
++ now = cpu_clock(cpu);
+
+ /*
+ * Start with TSK_ONCPU, which doesn't have a corresponding
+@@ -899,18 +901,15 @@ void psi_task_change(struct task_struct *task, int clear, int set)
+ {
+ int cpu = task_cpu(task);
+ struct psi_group *group;
+- u64 now;
+
+ if (!task->pid)
+ return;
+
+ psi_flags_change(task, clear, set);
+
+- now = cpu_clock(cpu);
+-
+ group = task_psi_group(task);
+ do {
+- psi_group_change(group, cpu, clear, set, now, true);
++ psi_group_change(group, cpu, clear, set, true);
+ } while ((group = group->parent));
+ }
+
+@@ -919,7 +918,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ {
+ struct psi_group *group, *common = NULL;
+ int cpu = task_cpu(prev);
+- u64 now = cpu_clock(cpu);
+
+ if (next->pid) {
+ psi_flags_change(next, 0, TSK_ONCPU);
+@@ -936,7 +934,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ break;
+ }
+
+- psi_group_change(group, cpu, 0, TSK_ONCPU, now, true);
++ psi_group_change(group, cpu, 0, TSK_ONCPU, true);
+ } while ((group = group->parent));
+ }
+
+@@ -974,7 +972,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ do {
+ if (group == common)
+ break;
+- psi_group_change(group, cpu, clear, set, now, wake_clock);
++ psi_group_change(group, cpu, clear, set, wake_clock);
+ } while ((group = group->parent));
+
+ /*
+@@ -986,7 +984,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
+ clear &= ~TSK_ONCPU;
+ for (; group; group = group->parent)
+- psi_group_change(group, cpu, clear, set, now, wake_clock);
++ psi_group_change(group, cpu, clear, set, wake_clock);
+ }
+ }
+ }
+@@ -997,8 +995,8 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ int cpu = task_cpu(curr);
+ struct psi_group *group;
+ struct psi_group_cpu *groupc;
+- u64 now, irq;
+ s64 delta;
++ u64 irq;
+
+ if (static_branch_likely(&psi_disabled))
+ return;
+@@ -1011,7 +1009,6 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ if (prev && task_psi_group(prev) == group)
+ return;
+
+- now = cpu_clock(cpu);
+ irq = irq_time_read(cpu);
+ delta = (s64)(irq - rq->psi_irq_time);
+ if (delta < 0)
+@@ -1019,12 +1016,15 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ rq->psi_irq_time = irq;
+
+ do {
++ u64 now;
++
+ if (!group->enabled)
+ continue;
+
+ groupc = per_cpu_ptr(group->pcpu, cpu);
+
+ write_seqcount_begin(&groupc->seq);
++ now = cpu_clock(cpu);
+
+ record_times(groupc, now);
+ groupc->times[PSI_IRQ_FULL] += delta;
+@@ -1223,11 +1223,9 @@ void psi_cgroup_restart(struct psi_group *group)
+ for_each_possible_cpu(cpu) {
+ struct rq *rq = cpu_rq(cpu);
+ struct rq_flags rf;
+- u64 now;
+
+ rq_lock_irq(rq, &rf);
+- now = cpu_clock(cpu);
+- psi_group_change(group, cpu, 0, 0, now, true);
++ psi_group_change(group, cpu, 0, 0, true);
+ rq_unlock_irq(rq, &rf);
+ }
+ }
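
The psi hunks all serve one point: the timestamp fed to record_times() is now sampled via cpu_clock(cpu) inside the write_seqcount_begin()/end() section rather than snapshotted by callers before the per-CPU group state is locked, so a timestamp can never predate the state it is paired with. A userspace model of the idea, assuming a plain mutex as a stand-in for the kernel's seqcount (the lockless reader side is elided):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* The mutex models write_seqcount_begin()/end(); the point being shown
 * is *where* the clock is sampled, not the seqcount machinery itself. */
struct groupc {
	pthread_mutex_t lock;
	uint64_t state_ts;	/* timestamp paired with ->state */
	unsigned int state;
};

static uint64_t cpu_clock_now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static void group_change(struct groupc *g, unsigned int set)
{
	pthread_mutex_lock(&g->lock);
	g->state_ts = cpu_clock_now();	/* sampled inside the critical section */
	g->state |= set;
	pthread_mutex_unlock(&g->lock);
}

int main(void)
{
	struct groupc g = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

	group_change(&g, 1);
	printf("ts=%llu state=%u\n", (unsigned long long)g.state_ts, g.state);
	return 0;
}
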
+diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
+index 639397b5491ca0..5259cda486d058 100644
+--- a/kernel/static_call_inline.c
++++ b/kernel/static_call_inline.c
+@@ -411,6 +411,17 @@ static void static_call_del_module(struct module *mod)
+
+ for (site = start; site < stop; site++) {
+ key = static_call_key(site);
++
++ /*
++ * If the key was not updated due to a memory allocation
++ * failure in __static_call_init() then treating key::sites
++ * as key::mods in the code below would cause random memory
++ * access and #GP. In that case all subsequent sites have
++ * not been touched either, so stop iterating.
++ */
++ if (!static_call_key_has_mods(key))
++ break;
++
+ if (key == prev_key)
+ continue;
+
+@@ -442,7 +453,7 @@ static int static_call_module_notify(struct notifier_block *nb,
+ case MODULE_STATE_COMING:
+ ret = static_call_add_module(mod);
+ if (ret) {
+- WARN(1, "Failed to allocate memory for static calls");
++ pr_warn("Failed to allocate memory for static calls\n");
+ static_call_del_module(mod);
+ }
+ break;
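
static_call_del_module() can now encounter a key whose sites were never converted because __static_call_init() failed part-way; interpreting that field as a mods list would chase unrelated memory (the "random memory access and #GP" in the comment), and since initialization proceeds in order, every later site is equally untouched, hence the break rather than continue. A userspace sketch of discriminating a union by a tag before walking it; the explicit tag field and type names are illustrative, not the kernel's actual encoding:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct mod_entry { struct mod_entry *next; const char *name; };

struct key {
	bool has_mods;			/* tag: which union member is live */
	union {
		const char *raw_sites;	/* untransformed initial state */
		struct mod_entry *mods;	/* list built by successful init */
	};
};

static void del_module(struct key *keys, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		/* Init never ran for this key, and it runs in order, so no
		 * later key was touched either: stop instead of walking
		 * ->mods, which would really dereference ->raw_sites. */
		if (!keys[i].has_mods)
			break;
		for (struct mod_entry *m = keys[i].mods; m; m = m->next)
			printf("dropping %s\n", m->name);
	}
}

int main(void)
{
	struct mod_entry e = { NULL, "demo_site" };
	struct key keys[2] = {
		{ .has_mods = true,  .mods = &e },
		{ .has_mods = false, .raw_sites = "never converted" },
	};

	del_module(keys, 2);
	return 0;
}
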
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index b791524a6536ac..3bd6071441ade9 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -520,6 +520,8 @@ static void hwlat_hotplug_workfn(struct work_struct *dummy)
+ if (!hwlat_busy || hwlat_data.thread_mode != MODE_PER_CPU)
+ goto out_unlock;
+
++ if (!cpu_online(cpu))
++ goto out_unlock;
+ if (!cpumask_test_cpu(cpu, tr->tracing_cpumask))
+ goto out_unlock;
+
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 7e75c1214b3674..6ed4008e6d621e 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1953,12 +1953,8 @@ static void stop_kthread(unsigned int cpu)
+ {
+ struct task_struct *kthread;
+
+- mutex_lock(&interface_lock);
+- kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
++ kthread = xchg_relaxed(&(per_cpu(per_cpu_osnoise_var, cpu).kthread), NULL);
+ if (kthread) {
+- per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
+- mutex_unlock(&interface_lock);
+-
+ if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask) &&
+ !WARN_ON(!test_bit(OSN_WORKLOAD, &osnoise_options))) {
+ kthread_stop(kthread);
+@@ -1972,7 +1968,6 @@ static void stop_kthread(unsigned int cpu)
+ put_task_struct(kthread);
+ }
+ } else {
+- mutex_unlock(&interface_lock);
+ /* if no workload, just return */
+ if (!test_bit(OSN_WORKLOAD, &osnoise_options)) {
+ /*
+@@ -1994,8 +1989,12 @@ static void stop_per_cpu_kthreads(void)
+ {
+ int cpu;
+
+- for_each_possible_cpu(cpu)
++ cpus_read_lock();
++
++ for_each_online_cpu(cpu)
+ stop_kthread(cpu);
++
++ cpus_read_unlock();
+ }
+
+ /*
+@@ -2007,6 +2006,10 @@ static int start_kthread(unsigned int cpu)
+ void *main = osnoise_main;
+ char comm[24];
+
++ /* Do not start a new thread if it is already running */
++ if (per_cpu(per_cpu_osnoise_var, cpu).kthread)
++ return 0;
++
+ if (timerlat_enabled()) {
+ snprintf(comm, 24, "timerlat/%d", cpu);
+ main = timerlat_main;
+@@ -2061,11 +2064,10 @@ static int start_per_cpu_kthreads(void)
+ if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask)) {
+ struct task_struct *kthread;
+
+- kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
++ kthread = xchg_relaxed(&(per_cpu(per_cpu_osnoise_var, cpu).kthread), NULL);
+ if (!WARN_ON(!kthread))
+ kthread_stop(kthread);
+ }
+- per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
+ }
+
+ for_each_cpu(cpu, current_mask) {
+@@ -2095,6 +2097,8 @@ static void osnoise_hotplug_workfn(struct work_struct *dummy)
+ mutex_lock(&interface_lock);
+ cpus_read_lock();
+
++ if (!cpu_online(cpu))
++ goto out_unlock;
+ if (!cpumask_test_cpu(cpu, &osnoise_cpumask))
+ goto out_unlock;
+
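
stop_kthread() now claims the per-CPU kthread pointer with xchg_relaxed(..., NULL) instead of read-then-clear under interface_lock, so even if two stop paths race, exactly one of them observes the non-NULL pointer and performs the stop. A userspace model using C11 atomic_exchange; the per-CPU variable and thread type are stand-ins:

#include <stdatomic.h>
#include <stdio.h>

struct kthread { const char *comm; };

static _Atomic(struct kthread *) per_cpu_kthread;

/* Whoever swaps out the non-NULL pointer owns stopping the thread. */
static void stop_kthread(void)
{
	struct kthread *k;

	k = atomic_exchange_explicit(&per_cpu_kthread, NULL,
				     memory_order_relaxed);
	if (k)
		printf("stopping %s\n", k->comm);	/* kthread_stop() here */
	else
		printf("nothing to do, someone else claimed it\n");
}

int main(void)
{
	static struct kthread k = { "osnoise/0" };

	atomic_store(&per_cpu_kthread, &k);
	stop_kthread();		/* claims and stops the thread */
	stop_kthread();		/* sees NULL: safe no-op */
	return 0;
}
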
+diff --git a/lib/buildid.c b/lib/buildid.c
+index e02b5507418b40..26007cc99a38f6 100644
+--- a/lib/buildid.c
++++ b/lib/buildid.c
+@@ -18,31 +18,37 @@ static int parse_build_id_buf(unsigned char *build_id,
+ const void *note_start,
+ Elf32_Word note_size)
+ {
+- Elf32_Word note_offs = 0, new_offs;
+-
+- while (note_offs + sizeof(Elf32_Nhdr) < note_size) {
+- Elf32_Nhdr *nhdr = (Elf32_Nhdr *)(note_start + note_offs);
++ const char note_name[] = "GNU";
++ const size_t note_name_sz = sizeof(note_name);
++ u64 note_off = 0, new_off, name_sz, desc_sz;
++ const char *data;
++
++ while (note_off + sizeof(Elf32_Nhdr) < note_size &&
++ note_off + sizeof(Elf32_Nhdr) > note_off /* overflow */) {
++ Elf32_Nhdr *nhdr = (Elf32_Nhdr *)(note_start + note_off);
++
++ name_sz = READ_ONCE(nhdr->n_namesz);
++ desc_sz = READ_ONCE(nhdr->n_descsz);
++
++ new_off = note_off + sizeof(Elf32_Nhdr);
++ if (check_add_overflow(new_off, ALIGN(name_sz, 4), &new_off) ||
++ check_add_overflow(new_off, ALIGN(desc_sz, 4), &new_off) ||
++ new_off > note_size)
++ break;
+
+ if (nhdr->n_type == BUILD_ID &&
+- nhdr->n_namesz == sizeof("GNU") &&
+- !strcmp((char *)(nhdr + 1), "GNU") &&
+- nhdr->n_descsz > 0 &&
+- nhdr->n_descsz <= BUILD_ID_SIZE_MAX) {
+- memcpy(build_id,
+- note_start + note_offs +
+- ALIGN(sizeof("GNU"), 4) + sizeof(Elf32_Nhdr),
+- nhdr->n_descsz);
+- memset(build_id + nhdr->n_descsz, 0,
+- BUILD_ID_SIZE_MAX - nhdr->n_descsz);
++ name_sz == note_name_sz &&
++ memcmp(nhdr + 1, note_name, note_name_sz) == 0 &&
++ desc_sz > 0 && desc_sz <= BUILD_ID_SIZE_MAX) {
++ data = note_start + note_off + ALIGN(note_name_sz, 4);
++ memcpy(build_id, data, desc_sz);
++ memset(build_id + desc_sz, 0, BUILD_ID_SIZE_MAX - desc_sz);
+ if (size)
+- *size = nhdr->n_descsz;
++ *size = desc_sz;
+ return 0;
+ }
+- new_offs = note_offs + sizeof(Elf32_Nhdr) +
+- ALIGN(nhdr->n_namesz, 4) + ALIGN(nhdr->n_descsz, 4);
+- if (new_offs <= note_offs) /* overflow */
+- break;
+- note_offs = new_offs;
++
++ note_off = new_off;
+ }
+
+ return -EINVAL;
+@@ -71,7 +77,7 @@ static int get_build_id_32(const void *page_addr, unsigned char *build_id,
+ {
+ Elf32_Ehdr *ehdr = (Elf32_Ehdr *)page_addr;
+ Elf32_Phdr *phdr;
+- int i;
++ __u32 i, phnum;
+
+ /*
+ * FIXME
+@@ -80,18 +86,19 @@ static int get_build_id_32(const void *page_addr, unsigned char *build_id,
+ */
+ if (ehdr->e_phoff != sizeof(Elf32_Ehdr))
+ return -EINVAL;
++
++ phnum = READ_ONCE(ehdr->e_phnum);
+ /* only supports phdr that fits in one page */
+- if (ehdr->e_phnum >
+- (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr))
++ if (phnum > (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr))
+ return -EINVAL;
+
+ phdr = (Elf32_Phdr *)(page_addr + sizeof(Elf32_Ehdr));
+
+- for (i = 0; i < ehdr->e_phnum; ++i) {
++ for (i = 0; i < phnum; ++i) {
+ if (phdr[i].p_type == PT_NOTE &&
+ !parse_build_id(page_addr, build_id, size,
+- page_addr + phdr[i].p_offset,
+- phdr[i].p_filesz))
++ page_addr + READ_ONCE(phdr[i].p_offset),
++ READ_ONCE(phdr[i].p_filesz)))
+ return 0;
+ }
+ return -EINVAL;
+@@ -103,7 +110,7 @@ static int get_build_id_64(const void *page_addr, unsigned char *build_id,
+ {
+ Elf64_Ehdr *ehdr = (Elf64_Ehdr *)page_addr;
+ Elf64_Phdr *phdr;
+- int i;
++ __u32 i, phnum;
+
+ /*
+ * FIXME
+@@ -112,18 +119,19 @@ static int get_build_id_64(const void *page_addr, unsigned char *build_id,
+ */
+ if (ehdr->e_phoff != sizeof(Elf64_Ehdr))
+ return -EINVAL;
++
++ phnum = READ_ONCE(ehdr->e_phnum);
+ /* only supports phdr that fits in one page */
+- if (ehdr->e_phnum >
+- (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr))
++ if (phnum > (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr))
+ return -EINVAL;
+
+ phdr = (Elf64_Phdr *)(page_addr + sizeof(Elf64_Ehdr));
+
+- for (i = 0; i < ehdr->e_phnum; ++i) {
++ for (i = 0; i < phnum; ++i) {
+ if (phdr[i].p_type == PT_NOTE &&
+ !parse_build_id(page_addr, build_id, size,
+- page_addr + phdr[i].p_offset,
+- phdr[i].p_filesz))
++ page_addr + READ_ONCE(phdr[i].p_offset),
++ READ_ONCE(phdr[i].p_filesz)))
+ return 0;
+ }
+ return -EINVAL;
+@@ -152,6 +160,10 @@ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
+ page = find_get_page(vma->vm_file->f_mapping, 0);
+ if (!page)
+ return -EFAULT; /* page not mapped */
++ if (!PageUptodate(page)) {
++ put_page(page);
++ return -EFAULT;
++ }
+
+ ret = -EINVAL;
+ page_addr = kmap_local_page(page);
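
The rewritten parse_build_id_buf() loop computes the next note offset in 64-bit arithmetic with check_add_overflow() and validates it against note_size before trusting n_namesz/n_descsz, so corrupt or hostile ELF notes can neither wrap the offset nor read past the buffer. A userspace sketch of the same bounds discipline, assuming the GCC/Clang __builtin_add_overflow builtin and a simplified note header:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct nhdr { uint32_t n_namesz, n_descsz, n_type; };

#define ALIGN4(x) (((uint64_t)(x) + 3) & ~(uint64_t)3)

/* Return the offset of the first note of @type whose geometry checks
 * out, or -1. Mirrors the patched loop's overflow discipline. */
static int64_t find_note(const unsigned char *buf, uint64_t size, uint32_t type)
{
	uint64_t off = 0;

	while (off + sizeof(struct nhdr) < size &&
	       off + sizeof(struct nhdr) > off /* overflow */) {
		struct nhdr h;
		uint64_t next;

		memcpy(&h, buf + off, sizeof(h));	/* untrusted input */

		next = off + sizeof(h);
		if (__builtin_add_overflow(next, ALIGN4(h.n_namesz), &next) ||
		    __builtin_add_overflow(next, ALIGN4(h.n_descsz), &next) ||
		    next > size)
			return -1;	/* would wrap or run past the buffer */

		if (h.n_type == type)
			return (int64_t)(off + sizeof(h) + ALIGN4(h.n_namesz));

		off = next;
	}
	return -1;
}

int main(void)
{
	unsigned char buf[32] = { 0 };
	struct nhdr h = { .n_namesz = 4, .n_descsz = 8, .n_type = 3 };

	memcpy(buf, &h, sizeof(h));
	printf("desc at offset %lld\n", (long long)find_note(buf, sizeof(buf), 3));
	return 0;
}
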
+diff --git a/mm/Kconfig b/mm/Kconfig
+index b72e7d040f789f..03395624bc709b 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -146,12 +146,15 @@ config ZSWAP_ZPOOL_DEFAULT_ZBUD
+ help
+ Use the zbud allocator as the default allocator.
+
+-config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
+- bool "z3fold"
+- select Z3FOLD
++config ZSWAP_ZPOOL_DEFAULT_Z3FOLD_DEPRECATED
++	bool "z3fold (DEPRECATED)"
++ select Z3FOLD_DEPRECATED
+ help
+ Use the z3fold allocator as the default allocator.
+
++ Deprecated and scheduled for removal in a few cycles,
++ see CONFIG_Z3FOLD_DEPRECATED.
++
+ config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+ bool "zsmalloc"
+ depends on HAVE_ZSMALLOC
+@@ -164,7 +167,7 @@ config ZSWAP_ZPOOL_DEFAULT
+ string
+ depends on ZSWAP
+ default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
+- default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
++ default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD_DEPRECATED
+ default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+ default ""
+
+@@ -178,15 +181,25 @@ config ZBUD
+ deterministic reclaim properties that make it preferable to a higher
+ density approach when reclaim will be used.
+
+-config Z3FOLD
+- tristate "3:1 compression allocator (z3fold)"
++config Z3FOLD_DEPRECATED
++ tristate "3:1 compression allocator (z3fold) (DEPRECATED)"
+ depends on ZSWAP
+ help
++ Deprecated and scheduled for removal in a few cycles. If you have
++ a good reason for using Z3FOLD over ZSMALLOC, please contact
++ linux-mm@kvack.org and the zswap maintainers.
++
+ A special purpose allocator for storing compressed pages.
+ It is designed to store up to three compressed pages per physical
+ page. It is a ZBUD derivative so the simplicity and determinism are
+ still there.
+
++config Z3FOLD
++ tristate
++ default y if Z3FOLD_DEPRECATED=y
++ default m if Z3FOLD_DEPRECATED=m
++ depends on Z3FOLD_DEPRECATED
++
+ config HAVE_ZSMALLOC
+ def_bool y
+ depends on MMU
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 0ca9c1377b686f..a6bc35830a34c3 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2181,6 +2181,10 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
+ if (xa_is_value(folio))
+ goto update_start;
+
++ /* If we landed in the middle of a THP, continue at its end. */
++ if (xa_is_sibling(folio))
++ goto update_start;
++
+ if (!folio_try_get(folio))
+ goto retry;
+
+diff --git a/mm/gup.c b/mm/gup.c
+index 54d0dc3831fbaa..947881ff5e8f1f 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -3703,6 +3703,7 @@ long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
+ ret = PTR_ERR(folio);
+ if (ret != -EEXIST)
+ goto err;
++ folio = NULL;
+ }
+ }
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c4176574ed9e8b..c6b852ef95238b 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2564,6 +2564,23 @@ struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
+ return folio;
+ }
+
++struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
++ nodemask_t *nmask, gfp_t gfp_mask)
++{
++ struct folio *folio;
++
++ spin_lock_irq(&hugetlb_lock);
++ folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, preferred_nid,
++ nmask);
++ if (folio) {
++ VM_BUG_ON(!h->resv_huge_pages);
++ h->resv_huge_pages--;
++ }
++
++ spin_unlock_irq(&hugetlb_lock);
++ return folio;
++}
++
+ /* folio migration callback function */
+ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
+ nodemask_t *nmask, gfp_t gfp_mask, bool allow_alloc_fallback)
+diff --git a/mm/memfd.c b/mm/memfd.c
+index e7b7c5294d5963..c17c3ea701a17e 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -79,23 +79,25 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
+ * alloc from. Also, the folio will be pinned for an indefinite
+ * amount of time, so it is not expected to be migrated away.
+ */
+- gfp_mask = htlb_alloc_mask(hstate_file(memfd));
++ struct hstate *h = hstate_file(memfd);
++
++ gfp_mask = htlb_alloc_mask(h);
+ gfp_mask &= ~(__GFP_HIGHMEM | __GFP_MOVABLE);
++ idx >>= huge_page_order(h);
+
+- folio = alloc_hugetlb_folio_nodemask(hstate_file(memfd),
+- numa_node_id(),
+- NULL,
+- gfp_mask,
+- false);
+- if (folio && folio_try_get(folio)) {
++ folio = alloc_hugetlb_folio_reserve(h,
++ numa_node_id(),
++ NULL,
++ gfp_mask);
++ if (folio) {
+ err = hugetlb_add_to_page_cache(folio,
+ memfd->f_mapping,
+ idx);
+ if (err) {
+ folio_put(folio);
+- free_huge_folio(folio);
+ return ERR_PTR(err);
+ }
++ folio_unlock(folio);
+ return folio;
+ }
+ return ERR_PTR(-ENOMEM);
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 40b582a014b8f2..cff602cedf8e63 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -1273,6 +1273,13 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
+
+ /* If the object still fits, repoison it precisely. */
+ if (ks >= new_size) {
++ /* Zero out spare memory. */
++ if (want_init_on_alloc(flags)) {
++ kasan_disable_current();
++ memset((void *)p + new_size, 0, ks - new_size);
++ kasan_enable_current();
++ }
++
+ p = kasan_krealloc((void *)p, new_size, flags);
+ return (void *)p;
+ }
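
The krealloc hunk covers the shrink-in-place case: when the old allocation is reused for a smaller size under want_init_on_alloc(), the now-unused tail must be re-zeroed, otherwise a later in-place grow could hand back stale bytes despite the init-on-alloc guarantee. (The patch brackets the memset with kasan_disable_current()/kasan_enable_current() because the spare bytes sit outside the reported object size.) A tiny userspace model of the rule:

#include <stdio.h>
#include <string.h>

/* Shrink an allocation in place while keeping the init-on-alloc
 * guarantee: the spare tail must go back to zero so a later in-place
 * grow cannot leak stale bytes. @ks is the backing capacity. */
static void shrink_in_place(void *p, size_t ks, size_t new_size,
			    int want_init_on_alloc)
{
	if (want_init_on_alloc && ks > new_size)
		memset((char *)p + new_size, 0, ks - new_size);
}

int main(void)
{
	char buf[16];

	memset(buf, 0xAA, sizeof(buf));		/* old object contents */
	shrink_in_place(buf, sizeof(buf), 8, 1);
	printf("byte 8 after shrink: %#x\n", (unsigned char)buf[8]);	/* 0 */
	return 0;
}
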
+diff --git a/mm/slub.c b/mm/slub.c
+index a77f354f83251e..fede2121ec1f21 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -756,6 +756,50 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
+ return false;
+ }
+
++/*
++ * kmalloc caches have fixed sizes (mostly powers of 2), and the kmalloc()
++ * API family rounds the real request size up to these fixed ones, so
++ * there can be extra space beyond what was requested. Save the original
++ * request size in the metadata area, for better debugging and sanity checks.
++ */
++static inline void set_orig_size(struct kmem_cache *s,
++ void *object, unsigned int orig_size)
++{
++ void *p = kasan_reset_tag(object);
++ unsigned int kasan_meta_size;
++
++ if (!slub_debug_orig_size(s))
++ return;
++
++ /*
++	 * KASAN can save its free metadata inside the object at offset 0.
++	 * If this metadata size is larger than 'orig_size', it will overlap
++	 * the data redzone in [orig_size+1, object_size]. Thus, we adjust
++	 * 'orig_size' to be at least as big as KASAN's metadata.
++ */
++ kasan_meta_size = kasan_metadata_size(s, true);
++ if (kasan_meta_size > orig_size)
++ orig_size = kasan_meta_size;
++
++ p += get_info_end(s);
++ p += sizeof(struct track) * 2;
++
++ *(unsigned int *)p = orig_size;
++}
++
++static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
++{
++ void *p = kasan_reset_tag(object);
++
++ if (!slub_debug_orig_size(s))
++ return s->object_size;
++
++ p += get_info_end(s);
++ p += sizeof(struct track) * 2;
++
++ return *(unsigned int *)p;
++}
++
+ #ifdef CONFIG_SLUB_DEBUG
+ static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
+ static DEFINE_SPINLOCK(object_map_lock);
+@@ -985,50 +1029,6 @@ static void print_slab_info(const struct slab *slab)
+ &slab->__page_flags);
+ }
+
+-/*
+- * kmalloc caches has fixed sizes (mostly power of 2), and kmalloc() API
+- * family will round up the real request size to these fixed ones, so
+- * there could be an extra area than what is requested. Save the original
+- * request size in the meta data area, for better debug and sanity check.
+- */
+-static inline void set_orig_size(struct kmem_cache *s,
+- void *object, unsigned int orig_size)
+-{
+- void *p = kasan_reset_tag(object);
+- unsigned int kasan_meta_size;
+-
+- if (!slub_debug_orig_size(s))
+- return;
+-
+- /*
+- * KASAN can save its free meta data inside of the object at offset 0.
+- * If this meta data size is larger than 'orig_size', it will overlap
+- * the data redzone in [orig_size+1, object_size]. Thus, we adjust
+- * 'orig_size' to be as at least as big as KASAN's meta data.
+- */
+- kasan_meta_size = kasan_metadata_size(s, true);
+- if (kasan_meta_size > orig_size)
+- orig_size = kasan_meta_size;
+-
+- p += get_info_end(s);
+- p += sizeof(struct track) * 2;
+-
+- *(unsigned int *)p = orig_size;
+-}
+-
+-static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
+-{
+- void *p = kasan_reset_tag(object);
+-
+- if (!slub_debug_orig_size(s))
+- return s->object_size;
+-
+- p += get_info_end(s);
+- p += sizeof(struct track) * 2;
+-
+- return *(unsigned int *)p;
+-}
+-
+ void skip_orig_size_check(struct kmem_cache *s, const void *object)
+ {
+ set_orig_size(s, (void *)object, s->object_size);
+@@ -1894,7 +1894,6 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node,
+ int objects) {}
+ static inline void dec_slabs_node(struct kmem_cache *s, int node,
+ int objects) {}
+-
+ #ifndef CONFIG_SLUB_TINY
+ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
+ void **freelist, void *nextfree)
+@@ -2243,14 +2242,21 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
+ */
+ if (unlikely(init)) {
+ int rsize;
+- unsigned int inuse;
++ unsigned int inuse, orig_size;
+
+ inuse = get_info_end(s);
++ orig_size = get_orig_size(s, x);
+ if (!kasan_has_integrated_init())
+- memset(kasan_reset_tag(x), 0, s->object_size);
++ memset(kasan_reset_tag(x), 0, orig_size);
+ rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
+ memset((char *)kasan_reset_tag(x) + inuse, 0,
+ s->size - inuse - rsize);
++ /*
++		 * Restore orig_size, otherwise the kmalloc redzone
++		 * would be reported as overwritten
++ */
++ set_orig_size(s, x, orig_size);
++
+ }
+ /* KASAN might put x into memory quarantine, delaying its reuse. */
+ return !kasan_slab_free(s, x, init);
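
Hoisting set_orig_size()/get_orig_size() out of the CONFIG_SLUB_DEBUG-only region lets slab_free_hook() zero just the originally requested size on init-on-free and then restore the stored value, instead of wiping s->object_size and tripping the kmalloc redzone check. Per the code above, the stored size sits at a fixed offset past the object: after the in-use area (get_info_end()) and the two struct track records. An illustrative userspace model of that trailing-metadata layout (simplified; the real offsets come from the cache's debug configuration):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct track { void *addr; };	/* stand-in for SLUB's alloc/free tracks */

/* Layout model: [payload(object_size)][struct track x2][unsigned orig_size] */
static size_t orig_size_off(size_t object_size)
{
	return object_size + 2 * sizeof(struct track);
}

static void set_orig_size(char *obj, size_t object_size, unsigned int orig)
{
	memcpy(obj + orig_size_off(object_size), &orig, sizeof(orig));
}

static unsigned int get_orig_size(const char *obj, size_t object_size)
{
	unsigned int orig;

	memcpy(&orig, obj + orig_size_off(object_size), sizeof(orig));
	return orig;
}

int main(void)
{
	size_t object_size = 64;	/* the cache's rounded-up slot size */
	char *obj = calloc(1, orig_size_off(object_size) + sizeof(unsigned int));

	if (!obj)
		return 1;
	set_orig_size(obj, object_size, 40);	/* kmalloc(40) landed here */
	/* init-on-free only needs to clear what the caller actually asked for */
	memset(obj, 0, get_orig_size(obj, object_size));
	printf("orig_size=%u\n", get_orig_size(obj, object_size));
	free(obj);
	return 0;
}
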
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index d6976db02c06c7..b2f8f9c5b61066 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3782,6 +3782,8 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+
+ hci_dev_lock(hdev);
+ conn = hci_conn_hash_lookup_handle(hdev, handle);
++ if (conn && hci_dev_test_flag(hdev, HCI_MGMT))
++ mgmt_device_connected(hdev, conn, NULL, 0);
+ hci_dev_unlock(hdev);
+
+ if (conn) {
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 1c82dcdf6e8fc7..561c8cb87473ef 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3706,7 +3706,7 @@ static void hci_remote_features_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
+- if (!ev->status && !test_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) {
++ if (!ev->status) {
+ struct hci_cp_remote_name_req cp;
+ memset(&cp, 0, sizeof(cp));
+ bacpy(&cp.bdaddr, &conn->dst);
+@@ -5324,19 +5324,16 @@ static void hci_user_confirm_request_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
+- /* If no side requires MITM protection; auto-accept */
++	/* If no side requires MITM protection, use JUST_CFM method */
+ if ((!loc_mitm || conn->remote_cap == HCI_IO_NO_INPUT_OUTPUT) &&
+ (!rem_mitm || conn->io_capability == HCI_IO_NO_INPUT_OUTPUT)) {
+
+- /* If we're not the initiators request authorization to
+- * proceed from user space (mgmt_user_confirm with
+- * confirm_hint set to 1). The exception is if neither
+- * side had MITM or if the local IO capability is
+- * NoInputNoOutput, in which case we do auto-accept
++ /* If we're not the initiator of request authorization and the
++ * local IO capability is not NoInputNoOutput, use JUST_WORKS
++ * method (mgmt_user_confirm with confirm_hint set to 1).
+ */
+ if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) &&
+- conn->io_capability != HCI_IO_NO_INPUT_OUTPUT &&
+- (loc_mitm || rem_mitm)) {
++ conn->io_capability != HCI_IO_NO_INPUT_OUTPUT) {
+ bt_dev_dbg(hdev, "Confirming auto-accept as acceptor");
+ confirm_hint = 1;
+ goto confirm;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 9988ba382b686a..6544c1ed714344 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4066,17 +4066,9 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+ static int l2cap_connect_req(struct l2cap_conn *conn,
+ struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data)
+ {
+- struct hci_dev *hdev = conn->hcon->hdev;
+- struct hci_conn *hcon = conn->hcon;
+-
+ if (cmd_len < sizeof(struct l2cap_conn_req))
+ return -EPROTO;
+
+- hci_dev_lock(hdev);
+- if (hci_dev_test_flag(hdev, HCI_MGMT))
+- mgmt_device_connected(hdev, hcon, NULL, 0);
+- hci_dev_unlock(hdev);
+-
+ l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP);
+ return 0;
+ }
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index e4f564d6f6fbfb..4157d9f23f46ef 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1453,10 +1453,15 @@ static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
+
+ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ {
+- if (cmd->cmd_complete) {
+- u8 *status = data;
++ struct cmd_lookup *match = data;
++
++ /* dequeue cmd_sync entries using cmd as data as that is about to be
++ * removed/freed.
++ */
++ hci_cmd_sync_dequeue(match->hdev, NULL, cmd, NULL);
+
+- cmd->cmd_complete(cmd, *status);
++ if (cmd->cmd_complete) {
++ cmd->cmd_complete(cmd, match->mgmt_status);
+ mgmt_pending_remove(cmd);
+
+ return;
+@@ -9394,12 +9399,12 @@ void mgmt_index_added(struct hci_dev *hdev)
+ void mgmt_index_removed(struct hci_dev *hdev)
+ {
+ struct mgmt_ev_ext_index ev;
+- u8 status = MGMT_STATUS_INVALID_INDEX;
++ struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX };
+
+ if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+ return;
+
+- mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
++ mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+
+ if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+ mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
+@@ -9450,7 +9455,7 @@ void mgmt_power_on(struct hci_dev *hdev, int err)
+ void __mgmt_power_off(struct hci_dev *hdev)
+ {
+ struct cmd_lookup match = { NULL, hdev };
+- u8 status, zero_cod[] = { 0, 0, 0 };
++ u8 zero_cod[] = { 0, 0, 0 };
+
+ mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
+
+@@ -9462,11 +9467,11 @@ void __mgmt_power_off(struct hci_dev *hdev)
+ * status responses.
+ */
+ if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
+- status = MGMT_STATUS_INVALID_INDEX;
++ match.mgmt_status = MGMT_STATUS_INVALID_INDEX;
+ else
+- status = MGMT_STATUS_NOT_POWERED;
++ match.mgmt_status = MGMT_STATUS_NOT_POWERED;
+
+- mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
++ mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+
+ if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
+ mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
+diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
+index bc37e47ad8299e..1a52a0bca086d6 100644
+--- a/net/bridge/br_mdb.c
++++ b/net/bridge/br_mdb.c
+@@ -1674,7 +1674,7 @@ int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
+ spin_lock_bh(&br->multicast_lock);
+
+ mp = br_mdb_ip_get(br, &group);
+- if (!mp) {
++ if (!mp || (!mp->ports && !mp->host_joined)) {
+ NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
+ err = -ENOENT;
+ goto unlock;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index f66e6140788328..dd87f5fb2f3a7d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3504,7 +3504,7 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
+ if (gso_segs > READ_ONCE(dev->gso_max_segs))
+ return features & ~NETIF_F_GSO_MASK;
+
+- if (unlikely(skb->len >= READ_ONCE(dev->gso_max_size)))
++ if (unlikely(skb->len >= netif_get_gso_max_size(dev, skb)))
+ return features & ~NETIF_F_GSO_MASK;
+
+ if (!skb_shinfo(skb)->gso_type) {
+@@ -3750,7 +3750,7 @@ static void qdisc_pkt_len_init(struct sk_buff *skb)
+ sizeof(_tcphdr), &_tcphdr);
+ if (likely(th))
+ hdr_len += __tcp_hdrlen(th);
+- } else {
++ } else if (shinfo->gso_type & SKB_GSO_UDP_L4) {
+ struct udphdr _udphdr;
+
+ if (skb_header_pointer(skb, hdr_len,
+@@ -3758,10 +3758,14 @@ static void qdisc_pkt_len_init(struct sk_buff *skb)
+ hdr_len += sizeof(struct udphdr);
+ }
+
+- if (shinfo->gso_type & SKB_GSO_DODGY)
+- gso_segs = DIV_ROUND_UP(skb->len - hdr_len,
+- shinfo->gso_size);
++ if (unlikely(shinfo->gso_type & SKB_GSO_DODGY)) {
++ int payload = skb->len - hdr_len;
+
++ /* Malicious packet. */
++ if (payload <= 0)
++ return;
++ gso_segs = DIV_ROUND_UP(payload, shinfo->gso_size);
++ }
+ qdisc_skb_cb(skb)->pkt_len += (gso_segs - 1) * hdr_len;
+ }
+ }
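
For untrusted (SKB_GSO_DODGY) packets, qdisc_pkt_len_init() recomputes gso_segs from the payload length, and the new guard bails out when the claimed headers consume the whole packet: skb->len - hdr_len would be zero or negative, and DIV_ROUND_UP() on that would yield a nonsense segment count. A small sketch of the guarded recomputation:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Recompute the segment count for an untrusted GSO packet. Returns 0
 * (leave pkt_len alone) for malicious geometry. */
static unsigned int dodgy_gso_segs(int skb_len, int hdr_len, int gso_size)
{
	int payload = skb_len - hdr_len;

	if (payload <= 0)	/* headers claim the whole packet: bogus */
		return 0;
	return DIV_ROUND_UP(payload, gso_size);
}

int main(void)
{
	printf("%u\n", dodgy_gso_segs(3000, 52, 1448));	/* 3 segments */
	printf("%u\n", dodgy_gso_segs(40, 52, 1448));	/* malicious: 0 */
	return 0;
}
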
+diff --git a/net/core/gro.c b/net/core/gro.c
+index b3b43de1a65027..87708483a5f460 100644
+--- a/net/core/gro.c
++++ b/net/core/gro.c
+@@ -98,7 +98,6 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
+ unsigned int headlen = skb_headlen(skb);
+ unsigned int len = skb_gro_len(skb);
+ unsigned int delta_truesize;
+- unsigned int gro_max_size;
+ unsigned int new_truesize;
+ struct sk_buff *lp;
+ int segs;
+@@ -112,12 +111,8 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
+ if (p->pp_recycle != skb->pp_recycle)
+ return -ETOOMANYREFS;
+
+- /* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */
+- gro_max_size = p->protocol == htons(ETH_P_IPV6) ?
+- READ_ONCE(p->dev->gro_max_size) :
+- READ_ONCE(p->dev->gro_ipv4_max_size);
+-
+- if (unlikely(p->len + len >= gro_max_size || NAPI_GRO_CB(skb)->flush))
++ if (unlikely(p->len + len >= netif_get_gro_max_size(p->dev, p) ||
++ NAPI_GRO_CB(skb)->flush))
+ return -E2BIG;
+
+ if (unlikely(p->len + len >= GRO_LEGACY_MAX_SIZE)) {
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 291fdf4a328b3d..93dd5d54368497 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -32,6 +32,7 @@
+ #ifdef CONFIG_SYSFS
+ static const char fmt_hex[] = "%#x\n";
+ static const char fmt_dec[] = "%d\n";
++static const char fmt_uint[] = "%u\n";
+ static const char fmt_ulong[] = "%lu\n";
+ static const char fmt_u64[] = "%llu\n";
+
+@@ -425,6 +426,9 @@ NETDEVICE_SHOW_RW(gro_flush_timeout, fmt_ulong);
+
+ static int change_napi_defer_hard_irqs(struct net_device *dev, unsigned long val)
+ {
++ if (val > S32_MAX)
++ return -ERANGE;
++
+ WRITE_ONCE(dev->napi_defer_hard_irqs, val);
+ return 0;
+ }
+@@ -438,7 +442,7 @@ static ssize_t napi_defer_hard_irqs_store(struct device *dev,
+
+ return netdev_store(dev, attr, buf, len, change_napi_defer_hard_irqs);
+ }
+-NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_dec);
++NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_uint);
+
+ static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t len)
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 05f9515d2c05c1..a17d7eaeb00192 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -216,10 +216,12 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ rtnl_lock();
+
+ napi = napi_by_id(napi_id);
+- if (napi)
++ if (napi) {
+ err = netdev_nl_napi_fill_one(rsp, napi, info);
+- else
+- err = -EINVAL;
++ } else {
++ NL_SET_BAD_ATTR(info->extack, info->attrs[NETDEV_A_NAPI_ID]);
++ err = -ENOENT;
++ }
+
+ rtnl_unlock();
+
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index d657b042d5a048..930acc87c8c085 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -624,12 +624,9 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ const struct net_device_ops *ops;
+ int err;
+
+- np->dev = ndev;
+- strscpy(np->dev_name, ndev->name, IFNAMSIZ);
+-
+ if (ndev->priv_flags & IFF_DISABLE_NETPOLL) {
+ np_err(np, "%s doesn't support polling, aborting\n",
+- np->dev_name);
++ ndev->name);
+ err = -ENOTSUPP;
+ goto out;
+ }
+@@ -647,7 +644,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+
+ refcount_set(&npinfo->refcnt, 1);
+
+- ops = np->dev->netdev_ops;
++ ops = ndev->netdev_ops;
+ if (ops->ndo_netpoll_setup) {
+ err = ops->ndo_netpoll_setup(ndev, npinfo);
+ if (err)
+@@ -658,6 +655,8 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ refcount_inc(&npinfo->refcnt);
+ }
+
++ np->dev = ndev;
++ strscpy(np->dev_name, ndev->name, IFNAMSIZ);
+ npinfo->netpoll = np;
+
+ /* last thing to do is link it to the net device structure */
+@@ -675,6 +674,7 @@ EXPORT_SYMBOL_GPL(__netpoll_setup);
+ int netpoll_setup(struct netpoll *np)
+ {
+ struct net_device *ndev = NULL;
++ bool ip_overwritten = false;
+ struct in_device *in_dev;
+ int err;
+
+@@ -739,6 +739,7 @@ int netpoll_setup(struct netpoll *np)
+ }
+
+ np->local_ip.ip = ifa->ifa_local;
++ ip_overwritten = true;
+ np_info(np, "local IP %pI4\n", &np->local_ip.ip);
+ } else {
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -755,6 +756,7 @@ int netpoll_setup(struct netpoll *np)
+ !!(ipv6_addr_type(&np->remote_ip.in6) & IPV6_ADDR_LINKLOCAL))
+ continue;
+ np->local_ip.in6 = ifp->addr;
++ ip_overwritten = true;
+ err = 0;
+ break;
+ }
+@@ -785,6 +787,9 @@ int netpoll_setup(struct netpoll *np)
+ return 0;
+
+ put:
++ DEBUG_NET_WARN_ON_ONCE(np->dev);
++ if (ip_overwritten)
++ memset(&np->local_ip, 0, sizeof(np->local_ip));
+ netdev_put(ndev, &np->dev_tracker);
+ unlock:
+ rtnl_unlock();
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 83f8cd8aa2d16a..de2a044cc66567 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -314,8 +314,8 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+ fragsz = SKB_DATA_ALIGN(fragsz);
+
+ local_lock_nested_bh(&napi_alloc_cache.bh_lock);
+- data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC,
+- align_mask);
++ data = __page_frag_alloc_align(&nc->page, fragsz,
++ GFP_ATOMIC | __GFP_NOWARN, align_mask);
+ local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
+ return data;
+
+@@ -330,7 +330,8 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+ struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
+
+ fragsz = SKB_DATA_ALIGN(fragsz);
+- data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC,
++ data = __page_frag_alloc_align(nc, fragsz,
++ GFP_ATOMIC | __GFP_NOWARN,
+ align_mask);
+ } else {
+ local_bh_disable();
+@@ -349,7 +350,7 @@ static struct sk_buff *napi_skb_cache_get(void)
+ local_lock_nested_bh(&napi_alloc_cache.bh_lock);
+ if (unlikely(!nc->skb_count)) {
+ nc->skb_count = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache,
+- GFP_ATOMIC,
++ GFP_ATOMIC | __GFP_NOWARN,
+ NAPI_SKB_CACHE_BULK,
+ nc->skb_cache);
+ if (unlikely(!nc->skb_count)) {
+@@ -418,7 +419,8 @@ struct sk_buff *slab_build_skb(void *data)
+ struct sk_buff *skb;
+ unsigned int size;
+
+- skb = kmem_cache_alloc(net_hotdata.skbuff_cache, GFP_ATOMIC);
++ skb = kmem_cache_alloc(net_hotdata.skbuff_cache,
++ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb))
+ return NULL;
+
+@@ -469,7 +471,8 @@ struct sk_buff *__build_skb(void *data, unsigned int frag_size)
+ {
+ struct sk_buff *skb;
+
+- skb = kmem_cache_alloc(net_hotdata.skbuff_cache, GFP_ATOMIC);
++ skb = kmem_cache_alloc(net_hotdata.skbuff_cache,
++ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb))
+ return NULL;
+
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 668c729946ea67..1664547deffd07 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -1577,6 +1577,7 @@ EXPORT_SYMBOL_GPL(dsa_unregister_switch);
+ void dsa_switch_shutdown(struct dsa_switch *ds)
+ {
+ struct net_device *conduit, *user_dev;
++ LIST_HEAD(close_list);
+ struct dsa_port *dp;
+
+ mutex_lock(&dsa2_mutex);
+@@ -1586,10 +1587,16 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+
+ rtnl_lock();
+
++ dsa_switch_for_each_cpu_port(dp, ds)
++ list_add(&dp->conduit->close_list, &close_list);
++
++ dev_close_many(&close_list, true);
++
+ dsa_switch_for_each_user_port(dp, ds) {
+ conduit = dsa_port_to_conduit(dp);
+ user_dev = dp->user;
+
++ netif_device_detach(user_dev);
+ netdev_upper_dev_unlink(conduit, user_dev);
+ }
+
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index d96f3e452fef6d..ddab1511645425 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -574,10 +574,6 @@ static int inet_set_ifa(struct net_device *dev, struct in_ifaddr *ifa)
+
+ ASSERT_RTNL();
+
+- if (!in_dev) {
+- inet_free_ifa(ifa);
+- return -ENOBUFS;
+- }
+ ipv4_devconf_setall(in_dev);
+ neigh_parms_data_state_setall(in_dev->arp_parms);
+ if (ifa->ifa_dev != in_dev) {
+@@ -1184,6 +1180,8 @@ int devinet_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr)
+
+ if (!ifa) {
+ ret = -ENOBUFS;
++ if (!in_dev)
++ break;
+ ifa = inet_alloc_ifa();
+ if (!ifa)
+ break;
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 7ad2cafb927634..da540ddb7af651 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1343,7 +1343,7 @@ static void nl_fib_lookup(struct net *net, struct fib_result_nl *frn)
+ struct flowi4 fl4 = {
+ .flowi4_mark = frn->fl_mark,
+ .daddr = frn->fl_addr,
+- .flowi4_tos = frn->fl_tos,
++ .flowi4_tos = frn->fl_tos & IPTOS_RT_MASK,
+ .flowi4_scope = frn->fl_scope,
+ };
+ struct fib_table *tb;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index ba205473522e4e..868ef18ad656c1 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -661,11 +661,11 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ if (skb_cow_head(skb, 0))
+ goto free_skb;
+
+- tnl_params = (const struct iphdr *)skb->data;
+-
+- if (!pskb_network_may_pull(skb, pull_len))
++ if (!pskb_may_pull(skb, pull_len))
+ goto free_skb;
+
++ tnl_params = (const struct iphdr *)skb->data;
++
+ /* ip_tunnel_xmit() needs skb->data pointing to gre header. */
+ skb_pull(skb, pull_len);
+ skb_reset_mac_header(skb);
+diff --git a/net/ipv4/netfilter/nf_dup_ipv4.c b/net/ipv4/netfilter/nf_dup_ipv4.c
+index 6cc5743c553a02..9a21175693db58 100644
+--- a/net/ipv4/netfilter/nf_dup_ipv4.c
++++ b/net/ipv4/netfilter/nf_dup_ipv4.c
+@@ -52,8 +52,9 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ {
+ struct iphdr *iph;
+
++ local_bh_disable();
+ if (this_cpu_read(nf_skb_duplicated))
+- return;
++ goto out;
+ /*
+ * Copy the skb, and route the copy. Will later return %XT_CONTINUE for
+ * the original skb, which should continue on its way as if nothing has
+@@ -61,7 +62,7 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ */
+ skb = pskb_copy(skb, GFP_ATOMIC);
+ if (skb == NULL)
+- return;
++ goto out;
+
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK)
+ /* Avoid counting cloned packets towards the original connection. */
+@@ -90,6 +91,8 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ } else {
+ kfree_skb(skb);
+ }
++out:
++ local_bh_enable();
+ }
+ EXPORT_SYMBOL_GPL(nf_dup_ipv4);
+
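
nf_skb_duplicated is a per-CPU recursion guard, so testing and setting it is only meaningful while the code can neither migrate CPUs nor be re-entered from softirq; bracketing the whole function with local_bh_disable()/local_bh_enable() and routing every exit through the out label provides exactly that (the IPv6 counterpart below gets the same treatment). A userspace model where a thread-local flag stands in for the per-CPU variable; BH disabling itself has no userspace equivalent:

#include <stdbool.h>
#include <stdio.h>

static _Thread_local bool skb_duplicated;	/* per-CPU flag in the kernel */

static void dup_packet(int depth)
{
	/* kernel: local_bh_disable() pins us to this CPU's flag here */
	if (skb_duplicated)
		goto out;	/* re-entered while duplicating: bail */

	skb_duplicated = true;
	printf("duplicating at depth %d\n", depth);
	if (depth < 3)
		dup_packet(depth + 1);	/* recursion is caught above */
	skb_duplicated = false;
out:
	;	/* kernel: local_bh_enable() */
}

int main(void)
{
	dup_packet(0);	/* prints only depth 0: the guard stops re-entry */
	return 0;
}
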
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a4e510846905e6..5087e12209a19f 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -120,6 +120,9 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ struct tcp_sock *tp = tcp_sk(sk);
+ int ts_recent_stamp;
+
++ if (tw->tw_substate == TCP_FIN_WAIT2)
++ reuse = 0;
++
+ if (reuse == 2) {
+ /* Still does not detect *everything* that goes through
+ * lo, since we require a loopback src or dst address
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index e4ad3311e14895..2308665b51c538 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -101,8 +101,14 @@ static struct sk_buff *tcp4_gso_segment(struct sk_buff *skb,
+ if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
+ return ERR_PTR(-EINVAL);
+
+- if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
+- return __tcp4_gso_segment_list(skb, features);
++ if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
++ struct tcphdr *th = tcp_hdr(skb);
++
++ if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
++ return __tcp4_gso_segment_list(skb, features);
++
++ skb->ip_summed = CHECKSUM_NONE;
++ }
+
+ if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
+ const struct iphdr *iph = ip_hdr(skb);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index d842303587af93..a5be6e4ed326fb 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -296,8 +296,26 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ return NULL;
+ }
+
+- if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
+- return __udp_gso_segment_list(gso_skb, features, is_ipv6);
++ if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) {
++ /* Detect modified geometry and pass those to skb_segment. */
++ if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
++ return __udp_gso_segment_list(gso_skb, features, is_ipv6);
++
++ /* Setup csum, as fraglist skips this in udp4_gro_receive. */
++ gso_skb->csum_start = skb_transport_header(gso_skb) - gso_skb->head;
++ gso_skb->csum_offset = offsetof(struct udphdr, check);
++ gso_skb->ip_summed = CHECKSUM_PARTIAL;
++
++ uh = udp_hdr(gso_skb);
++ if (is_ipv6)
++ uh->check = ~udp_v6_check(gso_skb->len,
++ &ipv6_hdr(gso_skb)->saddr,
++ &ipv6_hdr(gso_skb)->daddr, 0);
++ else
++ uh->check = ~udp_v4_check(gso_skb->len,
++ ip_hdr(gso_skb)->saddr,
++ ip_hdr(gso_skb)->daddr, 0);
++ }
+
+ skb_pull(gso_skb, sizeof(*uh));
+
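
A fraglist GSO skb may only take the cheap __udp_gso_segment_list() path while its head still holds exactly one gso_size worth of payload; once something reshaped the packet, it must fall through to full segmentation, which expects CHECKSUM_PARTIAL state and a seeded pseudo-header checksum, hence the csum setup above. A sketch of just the geometry test (field names illustrative):

#include <stdbool.h>
#include <stdio.h>

/* True if the head skb still carries exactly one segment of payload,
 * i.e. the fraglist fast path is safe. */
static bool fraglist_geometry_ok(unsigned int pagelen,
				 unsigned int hdrlen,
				 unsigned int gso_size)
{
	return pagelen - hdrlen == gso_size;
}

int main(void)
{
	/* 8-byte UDP header + one 1400-byte segment in the head: fast path */
	printf("%d\n", fraglist_geometry_ok(1408, 8, 1400));
	/* head was grown/trimmed somewhere: must do full segmentation */
	printf("%d\n", fraglist_geometry_ok(1508, 8, 1400));
	return 0;
}
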
+diff --git a/net/ipv6/netfilter/nf_dup_ipv6.c b/net/ipv6/netfilter/nf_dup_ipv6.c
+index a0a2de30be3e7b..0c39c77fe8a8a4 100644
+--- a/net/ipv6/netfilter/nf_dup_ipv6.c
++++ b/net/ipv6/netfilter/nf_dup_ipv6.c
+@@ -47,11 +47,12 @@ static bool nf_dup_ipv6_route(struct net *net, struct sk_buff *skb,
+ void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ const struct in6_addr *gw, int oif)
+ {
++ local_bh_disable();
+ if (this_cpu_read(nf_skb_duplicated))
+- return;
++ goto out;
+ skb = pskb_copy(skb, GFP_ATOMIC);
+ if (skb == NULL)
+- return;
++ goto out;
+
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK)
+ nf_reset_ct(skb);
+@@ -69,6 +70,8 @@ void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ } else {
+ kfree_skb(skb);
+ }
++out:
++ local_bh_enable();
+ }
+ EXPORT_SYMBOL_GPL(nf_dup_ipv6);
+
+diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
+index 23971903e66de8..a45bf17cb2a172 100644
+--- a/net/ipv6/tcpv6_offload.c
++++ b/net/ipv6/tcpv6_offload.c
+@@ -159,8 +159,14 @@ static struct sk_buff *tcp6_gso_segment(struct sk_buff *skb,
+ if (!pskb_may_pull(skb, sizeof(*th)))
+ return ERR_PTR(-EINVAL);
+
+- if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
+- return __tcp6_gso_segment_list(skb, features);
++ if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
++ struct tcphdr *th = tcp_hdr(skb);
++
++ if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
++ return __tcp6_gso_segment_list(skb, features);
++
++ skb->ip_summed = CHECKSUM_NONE;
++ }
+
+ if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
+ const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 2e86f520f79941..ee8133f77b64cd 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -152,7 +152,7 @@ static void l2tp_session_free(struct l2tp_session *session)
+ trace_free_session(session);
+ if (session->tunnel)
+ l2tp_tunnel_dec_refcount(session->tunnel);
+- kfree(session);
++ kfree_rcu(session, rcu);
+ }
+
+ struct l2tp_tunnel *l2tp_sk_to_tunnel(struct sock *sk)
+@@ -254,7 +254,14 @@ struct l2tp_session *l2tp_v3_session_get(const struct net *net, struct sock *sk,
+
+ hash_for_each_possible_rcu(pn->l2tp_v3_session_htable, session,
+ hlist, key) {
+- if (session->tunnel->sock == sk &&
++ /* session->tunnel may be NULL if another thread is in
++ * l2tp_session_register and has added an item to
++ * l2tp_v3_session_htable but hasn't yet added the
++ * session to its tunnel's session_list.
++ */
++ struct l2tp_tunnel *tunnel = READ_ONCE(session->tunnel);
++
++ if (tunnel && tunnel->sock == sk &&
+ refcount_inc_not_zero(&session->ref_count)) {
+ rcu_read_unlock_bh();
+ return session;
+@@ -387,12 +394,12 @@ static int l2tp_session_collision_add(struct l2tp_net *pn,
+
+ /* If existing session isn't already in the session hlist, add it. */
+ if (!hash_hashed(&session2->hlist))
+- hash_add(pn->l2tp_v3_session_htable, &session2->hlist,
+- session2->hlist_key);
++ hash_add_rcu(pn->l2tp_v3_session_htable, &session2->hlist,
++ session2->hlist_key);
+
+ /* Add new session to the hlist and collision list */
+- hash_add(pn->l2tp_v3_session_htable, &session1->hlist,
+- session1->hlist_key);
++ hash_add_rcu(pn->l2tp_v3_session_htable, &session1->hlist,
++ session1->hlist_key);
+ refcount_inc(&clist->ref_count);
+ l2tp_session_coll_list_add(clist, session1);
+
+@@ -408,7 +415,7 @@ static void l2tp_session_collision_del(struct l2tp_net *pn,
+
+ lockdep_assert_held(&pn->l2tp_session_idr_lock);
+
+- hash_del(&session->hlist);
++ hash_del_rcu(&session->hlist);
+
+ if (clist) {
+ /* Remove session from its collision list. If there
+@@ -482,7 +489,8 @@ int l2tp_session_register(struct l2tp_session *session,
+ }
+
+ l2tp_tunnel_inc_refcount(tunnel);
+- list_add(&session->list, &tunnel->session_list);
++ WRITE_ONCE(session->tunnel, tunnel);
++ list_add_rcu(&session->list, &tunnel->session_list);
+
+ if (tunnel->version == L2TP_HDR_VER_3) {
+ if (!other_session)
+@@ -797,7 +805,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
+ if (!session->lns_mode && !session->send_seq) {
+ trace_session_seqnum_lns_enable(session);
+ session->send_seq = 1;
+- l2tp_session_set_header_len(session, tunnel->version);
++ l2tp_session_set_header_len(session, tunnel->version,
++ tunnel->encap);
+ }
+ } else {
+ /* No sequence numbers.
+@@ -818,7 +827,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
+ if (!session->lns_mode && session->send_seq) {
+ trace_session_seqnum_lns_disable(session);
+ session->send_seq = 0;
+- l2tp_session_set_header_len(session, tunnel->version);
++ l2tp_session_set_header_len(session, tunnel->version,
++ tunnel->encap);
+ } else if (session->send_seq) {
+ pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+ session->name);
+@@ -1288,8 +1298,6 @@ static void l2tp_session_unhash(struct l2tp_session *session)
+
+ spin_unlock_bh(&pn->l2tp_session_idr_lock);
+ spin_unlock_bh(&tunnel->list_lock);
+-
+- synchronize_rcu();
+ }
+ }
+
+@@ -1663,7 +1671,8 @@ EXPORT_SYMBOL_GPL(l2tp_session_delete);
+ /* We come here whenever a session's send_seq, cookie_len or
+ * l2specific_type parameters are set.
+ */
+-void l2tp_session_set_header_len(struct l2tp_session *session, int version)
++void l2tp_session_set_header_len(struct l2tp_session *session, int version,
++ enum l2tp_encap_type encap)
+ {
+ if (version == L2TP_HDR_VER_2) {
+ session->hdr_len = 6;
+@@ -1672,7 +1681,7 @@ void l2tp_session_set_header_len(struct l2tp_session *session, int version)
+ } else {
+ session->hdr_len = 4 + session->cookie_len;
+ session->hdr_len += l2tp_get_l2specific_len(session);
+- if (session->tunnel->encap == L2TP_ENCAPTYPE_UDP)
++ if (encap == L2TP_ENCAPTYPE_UDP)
+ session->hdr_len += 4;
+ }
+ }
+@@ -1686,7 +1695,6 @@ struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunn
+ session = kzalloc(sizeof(*session) + priv_size, GFP_KERNEL);
+ if (session) {
+ session->magic = L2TP_SESSION_MAGIC;
+- session->tunnel = tunnel;
+
+ session->session_id = session_id;
+ session->peer_session_id = peer_session_id;
+@@ -1724,7 +1732,7 @@ struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunn
+ memcpy(&session->peer_cookie[0], &cfg->peer_cookie[0], cfg->peer_cookie_len);
+ }
+
+- l2tp_session_set_header_len(session, tunnel->version);
++ l2tp_session_set_header_len(session, tunnel->version, tunnel->encap);
+
+ refcount_set(&session->ref_count, 1);
+
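
Taken together, the l2tp changes make the v3 session lookup safe without heavy locking: sessions are freed via kfree_rcu(), hash insertion/removal use the _rcu variants, and session->tunnel is only written (with WRITE_ONCE()) once registration is far enough along, so the lockless lookup can READ_ONCE() it and simply skip half-registered entries. A userspace model of that publish/skip protocol, using a C11 atomic pointer in place of WRITE_ONCE/READ_ONCE (RCU itself is elided):

#include <stdatomic.h>
#include <stdio.h>

struct tunnel { int sock; };

struct session {
	_Atomic(struct tunnel *) tunnel;	/* published last */
	int id;
};

static void register_session(struct session *s, struct tunnel *t)
{
	/* ...hash-table insertion happens first in the kernel... */
	atomic_store_explicit(&s->tunnel, t, memory_order_release);
}

static struct session *lookup(struct session *s, int sock)
{
	struct tunnel *t = atomic_load_explicit(&s->tunnel,
						memory_order_acquire);

	if (!t)		/* half-registered entry: skip it */
		return NULL;
	return t->sock == sock ? s : NULL;
}

int main(void)
{
	struct tunnel t = { .sock = 42 };
	struct session s = { .id = 1 };	/* ->tunnel starts out NULL */

	printf("before publish: %p\n", (void *)lookup(&s, 42));
	register_session(&s, &t);
	printf("after publish:  %p\n", (void *)lookup(&s, 42));
	return 0;
}
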
+diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
+index 8ac81bc1bc6fa3..d0e3460089d902 100644
+--- a/net/l2tp/l2tp_core.h
++++ b/net/l2tp/l2tp_core.h
+@@ -67,6 +67,7 @@ struct l2tp_session_coll_list {
+ struct l2tp_session {
+ int magic; /* should be L2TP_SESSION_MAGIC */
+ long dead;
++ struct rcu_head rcu;
+
+ struct l2tp_tunnel *tunnel; /* back pointer to tunnel context */
+ u32 session_id;
+@@ -260,7 +261,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
+ int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb);
+
+ /* Transmit path helpers for sending packets over the tunnel socket. */
+-void l2tp_session_set_header_len(struct l2tp_session *session, int version);
++void l2tp_session_set_header_len(struct l2tp_session *session, int version,
++ enum l2tp_encap_type encap);
+ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb);
+
+ /* Pseudowire management.
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index d105030520f954..fc43ecbd128cc5 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -692,8 +692,10 @@ static int l2tp_nl_cmd_session_modify(struct sk_buff *skb, struct genl_info *inf
+ session->recv_seq = nla_get_u8(info->attrs[L2TP_ATTR_RECV_SEQ]);
+
+ if (info->attrs[L2TP_ATTR_SEND_SEQ]) {
++ struct l2tp_tunnel *tunnel = session->tunnel;
++
+ session->send_seq = nla_get_u8(info->attrs[L2TP_ATTR_SEND_SEQ]);
+- l2tp_session_set_header_len(session, session->tunnel->version);
++ l2tp_session_set_header_len(session, tunnel->version, tunnel->encap);
+ }
+
+ if (info->attrs[L2TP_ATTR_LNS_MODE])
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index 3596290047b28e..4f25c1212cacb1 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -1205,7 +1205,8 @@ static int pppol2tp_session_setsockopt(struct sock *sk,
+ po->chan.hdrlen = val ? PPPOL2TP_L2TP_HDR_SIZE_SEQ :
+ PPPOL2TP_L2TP_HDR_SIZE_NOSEQ;
+ }
+- l2tp_session_set_header_len(session, session->tunnel->version);
++ l2tp_session_set_header_len(session, session->tunnel->version,
++ session->tunnel->encap);
+ break;
+
+ case PPPOL2TP_SO_LNSMODE:
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index e8567723e94d5e..b72e4036526bfa 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -286,7 +286,9 @@ ieee80211_get_max_required_bw(struct ieee80211_link_data *link)
+ enum nl80211_chan_width max_bw = NL80211_CHAN_WIDTH_20_NOHT;
+ struct sta_info *sta;
+
+- list_for_each_entry_rcu(sta, &sdata->local->sta_list, list) {
++ lockdep_assert_wiphy(sdata->local->hw.wiphy);
++
++ list_for_each_entry(sta, &sdata->local->sta_list, list) {
+ if (sdata != sta->sdata &&
+ !(sta->sdata->bss && sta->sdata->bss == sdata->bss))
+ continue;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 9e3d2ed9cf6780..746f51ac03068b 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1231,7 +1231,7 @@ static bool ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ bool disable_mu_mimo = false;
+ struct ieee80211_sub_if_data *other;
+
+- list_for_each_entry_rcu(other, &local->interfaces, list) {
++ list_for_each_entry(other, &local->interfaces, list) {
+ if (other->vif.bss_conf.mu_mimo_owner) {
+ disable_mu_mimo = true;
+ break;
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 1c5d99975ad04d..3b2bde6360bcb6 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -504,7 +504,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
+ * the scan was in progress; if there was none this will
+ * just be a no-op for the particular interface.
+ */
+- list_for_each_entry_rcu(sdata, &local->interfaces, list) {
++ list_for_each_entry(sdata, &local->interfaces, list) {
+ if (ieee80211_sdata_running(sdata))
+ wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
+ }
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index c7ad9bc5973a0e..aed72794d9fe30 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -751,7 +751,9 @@ static void __iterate_interfaces(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata;
+ bool active_only = iter_flags & IEEE80211_IFACE_ITER_ACTIVE;
+
+- list_for_each_entry_rcu(sdata, &local->interfaces, list) {
++ list_for_each_entry_rcu(sdata, &local->interfaces, list,
++ lockdep_is_held(&local->iflist_mtx) ||
++ lockdep_is_held(&local->hw.wiphy->mtx)) {
+ switch (sdata->vif.type) {
+ case NL80211_IFTYPE_MONITOR:
+ if (!(sdata->u.mntr.flags & MONITOR_FLAG_ACTIVE))
+diff --git a/net/mac802154/scan.c b/net/mac802154/scan.c
+index 1c0eeaa76560cd..a6dab3cc3ad858 100644
+--- a/net/mac802154/scan.c
++++ b/net/mac802154/scan.c
+@@ -176,6 +176,7 @@ void mac802154_scan_worker(struct work_struct *work)
+ struct ieee802154_local *local =
+ container_of(work, struct ieee802154_local, scan_work.work);
+ struct cfg802154_scan_request *scan_req;
++ enum nl802154_scan_types scan_req_type;
+ struct ieee802154_sub_if_data *sdata;
+ unsigned int scan_duration = 0;
+ struct wpan_phy *wpan_phy;
+@@ -209,6 +210,7 @@ void mac802154_scan_worker(struct work_struct *work)
+ }
+
+ wpan_phy = scan_req->wpan_phy;
++ scan_req_type = scan_req->type;
+ scan_req_duration = scan_req->duration;
+
+ /* Look for the next valid chan */
+@@ -246,7 +248,7 @@ void mac802154_scan_worker(struct work_struct *work)
+ goto end_scan;
+ }
+
+- if (scan_req->type == NL802154_SCAN_ACTIVE) {
++ if (scan_req_type == NL802154_SCAN_ACTIVE) {
+ ret = mac802154_transmit_beacon_req(local, sdata);
+ if (ret)
+ dev_err(&sdata->dev->dev,
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index 5ecf611c882009..5cf55bde366d18 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -1954,6 +1954,8 @@ void ncsi_unregister_dev(struct ncsi_dev *nd)
+ list_del_rcu(&ndp->node);
+ spin_unlock_irqrestore(&ncsi_dev_lock, flags);
+
++ disable_work_sync(&ndp->work);
++
+ kfree(ndp);
+ }
+ EXPORT_SYMBOL_GPL(ncsi_unregister_dev);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 472f211472db48..e792f153f9587b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -10795,7 +10795,10 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ break;
+ }
+ te = nft_trans_container_elem(trans);
+- nft_setelem_remove(net, te->set, te->elem_priv);
++ if (!te->set->ops->abort ||
++ nft_setelem_is_catchall(te->set, te->elem_priv))
++ nft_setelem_remove(net, te->set, te->elem_priv);
++
+ if (!nft_setelem_is_catchall(te->set, te->elem_priv))
+ atomic_dec(&te->set->nelems);
+
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 08de24658f4fa9..3f8197d5926ef5 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -1058,7 +1058,7 @@ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+ int rxrpc_io_thread(void *data);
+ static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
+ {
+- wake_up_process(local->io_thread);
++ wake_up_process(READ_ONCE(local->io_thread));
+ }
+
+ static inline bool rxrpc_protocol_error(struct sk_buff *skb, enum rxrpc_abort_reason why)
+diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
+index 0300baa9afcd39..07c74c77d80214 100644
+--- a/net/rxrpc/io_thread.c
++++ b/net/rxrpc/io_thread.c
+@@ -27,11 +27,17 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
+ {
+ struct sk_buff_head *rx_queue;
+ struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
++ struct task_struct *io_thread;
+
+ if (unlikely(!local)) {
+ kfree_skb(skb);
+ return 0;
+ }
++ io_thread = READ_ONCE(local->io_thread);
++ if (!io_thread) {
++ kfree_skb(skb);
++ return 0;
++ }
+ if (skb->tstamp == 0)
+ skb->tstamp = ktime_get_real();
+
+@@ -47,7 +53,7 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
+ #endif
+
+ skb_queue_tail(rx_queue, skb);
+- rxrpc_wake_up_io_thread(local);
++ wake_up_process(io_thread);
+ return 0;
+ }
+
+@@ -565,7 +571,7 @@ int rxrpc_io_thread(void *data)
+ __set_current_state(TASK_RUNNING);
+ rxrpc_see_local(local, rxrpc_local_stop);
+ rxrpc_destroy_local(local);
+- local->io_thread = NULL;
++ WRITE_ONCE(local->io_thread, NULL);
+ rxrpc_see_local(local, rxrpc_local_stopped);
+ return 0;
+ }
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index 504453c688d751..f9623ace22016f 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -232,7 +232,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ }
+
+ wait_for_completion(&local->io_thread_ready);
+- local->io_thread = io_thread;
++ WRITE_ONCE(local->io_thread, io_thread);
+ _leave(" = 0");
+ return 0;
+
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index cc2df9f8c14a68..8498d0606b248b 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1952,7 +1952,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ goto unlock;
+ }
+
+- rcu_assign_pointer(q->admin_sched, new_admin);
++ /* Not going to race against advance_sched(), but still */
++ admin = rcu_replace_pointer(q->admin_sched, new_admin,
++ lockdep_rtnl_is_held());
+ if (admin)
+ call_rcu(&admin->rcu, taprio_free_sched_cb);
+ } else {
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 32f76f1298da81..078bcb3858c79c 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8557,8 +8557,10 @@ static int sctp_listen_start(struct sock *sk, int backlog)
+ */
+ inet_sk_set_state(sk, SCTP_SS_LISTENING);
+ if (!ep->base.bind_addr.port) {
+- if (sctp_autobind(sk))
++ if (sctp_autobind(sk)) {
++ inet_sk_set_state(sk, SCTP_SS_CLOSED);
+ return -EAGAIN;
++ }
+ } else {
+ if (sctp_get_port(sk, inet_sk(sk)->inet_num)) {
+ inet_sk_set_state(sk, SCTP_SS_CLOSED);
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 88a59cfa5583c0..df06b152ed94ee 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -713,7 +713,7 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
+ serv->sv_nrthreads += 1;
+ spin_unlock_bh(&serv->sv_lock);
+
+- atomic_inc(&pool->sp_nrthreads);
++ pool->sp_nrthreads += 1;
+
+ /* Protected by whatever lock the service uses when calling
+ * svc_set_num_threads()
+@@ -768,31 +768,22 @@ svc_pool_victim(struct svc_serv *serv, struct svc_pool *target_pool,
+ struct svc_pool *pool;
+ unsigned int i;
+
+-retry:
+ pool = target_pool;
+
+- if (pool != NULL) {
+- if (atomic_inc_not_zero(&pool->sp_nrthreads))
+- goto found_pool;
+- return NULL;
+- } else {
++ if (!pool) {
+ for (i = 0; i < serv->sv_nrpools; i++) {
+ pool = &serv->sv_pools[--(*state) % serv->sv_nrpools];
+- if (atomic_inc_not_zero(&pool->sp_nrthreads))
+- goto found_pool;
++ if (pool->sp_nrthreads)
++ break;
+ }
+- return NULL;
+ }
+
+-found_pool:
+- set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
+- set_bit(SP_NEED_VICTIM, &pool->sp_flags);
+- if (!atomic_dec_and_test(&pool->sp_nrthreads))
++ if (pool && pool->sp_nrthreads) {
++ set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
++ set_bit(SP_NEED_VICTIM, &pool->sp_flags);
+ return pool;
+- /* Nothing left in this pool any more */
+- clear_bit(SP_NEED_VICTIM, &pool->sp_flags);
+- clear_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
+- goto retry;
++ }
++ return NULL;
+ }
+
+ static int
+@@ -871,7 +862,7 @@ svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ if (!pool)
+ nrservs -= serv->sv_nrthreads;
+ else
+- nrservs -= atomic_read(&pool->sp_nrthreads);
++ nrservs -= pool->sp_nrthreads;
+
+ if (nrservs > 0)
+ return svc_start_kthreads(serv, pool, nrservs);
+@@ -959,7 +950,7 @@ svc_exit_thread(struct svc_rqst *rqstp)
+
+ list_del_rcu(&rqstp->rq_all);
+
+- atomic_dec(&pool->sp_nrthreads);
++ pool->sp_nrthreads -= 1;
+
+ spin_lock_bh(&serv->sv_lock);
+ serv->sv_nrthreads -= 1;
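Dropping the atomic works because sp_nrthreads is now only written under the serialization noted in the comment above; readers that race see at worst a slightly stale count. A short sketch of the discipline, with a hypothetical lock standing in for "whatever lock the service uses":

	mutex_lock(&serv_mutex);	/* hypothetical per-service lock */
	pool->sp_nrthreads += 1;	/* plain int: writers serialized */
	mutex_unlock(&serv_mutex);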
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 5a526ebafeb4b7..3c9e25f6a1d222 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -163,8 +163,12 @@ static int bearer_name_validate(const char *name,
+
+ /* return bearer name components, if necessary */
+ if (name_parts) {
+- strcpy(name_parts->media_name, media_name);
+- strcpy(name_parts->if_name, if_name);
++ if (strscpy(name_parts->media_name, media_name,
++ TIPC_MAX_MEDIA_NAME) < 0)
++ return 0;
++ if (strscpy(name_parts->if_name, if_name,
++ TIPC_MAX_IF_NAME) < 0)
++ return 0;
+ }
+ return 1;
+ }
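strscpy() is the safer replacement because it takes the destination size, NUL-terminates even when it truncates, and returns -E2BIG when the source does not fit; that is what lets oversized names fail validation here instead of overflowing the fixed-size fields. A minimal usage sketch (the source string is hypothetical):

	char buf[TIPC_MAX_MEDIA_NAME];

	if (strscpy(buf, name, sizeof(buf)) < 0)
		return 0;	/* too long: reject, as above */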
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 1d83bc3de5ca5d..f18e1716339e0d 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -10144,7 +10144,20 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
+
+ err = rdev_start_radar_detection(rdev, dev, &chandef, cac_time_ms);
+ if (!err) {
+- wdev->links[0].ap.chandef = chandef;
++ switch (wdev->iftype) {
++ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
++ wdev->links[0].ap.chandef = chandef;
++ break;
++ case NL80211_IFTYPE_ADHOC:
++ wdev->u.ibss.chandef = chandef;
++ break;
++ case NL80211_IFTYPE_MESH_POINT:
++ wdev->u.mesh.chandef = chandef;
++ break;
++ default:
++ break;
++ }
+ wdev->cac_started = true;
+ wdev->cac_start_time = jiffies;
+ wdev->cac_time_ms = cac_time_ms;
+diff --git a/rust/Makefile b/rust/Makefile
+index f168d2c98a15f1..2aa93007aacae0 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -8,16 +8,16 @@ always-$(CONFIG_RUST) += exports_core_generated.h
+
+ # Missing prototypes are expected in the helpers since these are exported
+ # for Rust only, thus there is no header nor prototypes.
+-obj-$(CONFIG_RUST) += helpers.o
+-CFLAGS_REMOVE_helpers.o = -Wmissing-prototypes -Wmissing-declarations
++obj-$(CONFIG_RUST) += helpers/helpers.o
++CFLAGS_REMOVE_helpers/helpers.o = -Wmissing-prototypes -Wmissing-declarations
+
+ always-$(CONFIG_RUST) += libmacros.so
+ no-clean-files += libmacros.so
+
+ always-$(CONFIG_RUST) += bindings/bindings_generated.rs bindings/bindings_helpers_generated.rs
+ obj-$(CONFIG_RUST) += alloc.o bindings.o kernel.o
+-always-$(CONFIG_RUST) += exports_alloc_generated.h exports_bindings_generated.h \
+- exports_kernel_generated.h
++always-$(CONFIG_RUST) += exports_alloc_generated.h exports_helpers_generated.h \
++ exports_bindings_generated.h exports_kernel_generated.h
+
+ always-$(CONFIG_RUST) += uapi/uapi_generated.rs
+ obj-$(CONFIG_RUST) += uapi.o
+@@ -299,7 +299,7 @@ $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_cflags = \
+ -I$(objtree)/$(obj) -Wno-missing-prototypes -Wno-missing-declarations
+ $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_extra = ; \
+ sed -Ei 's/pub fn rust_helper_([a-zA-Z0-9_]*)/#[link_name="rust_helper_\1"]\n pub fn \1/g' $@
+-$(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers.c FORCE
++$(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers/helpers.c FORCE
+ $(call if_changed_dep,bindgen)
+
+ quiet_cmd_exports = EXPORTS $@
+@@ -313,6 +313,18 @@ $(obj)/exports_core_generated.h: $(obj)/core.o FORCE
+ $(obj)/exports_alloc_generated.h: $(obj)/alloc.o FORCE
+ $(call if_changed,exports)
+
++# Even though Rust kernel modules should never use the bindings directly,
++# symbols from the `bindings` crate and the C helpers need to be exported
++# because Rust generics and inlined functions may not get their code generated
++# in the crate where they are defined. Other helpers, called from non-inline
++# functions, may not be exported, in principle. However, in general, the Rust
++# compiler does not guarantee codegen will be performed for a non-inline
++# function either. Therefore, we export all symbols from helpers and bindings.
++# In the future, this may be revisited to reduce the number of exports after
++# the compiler is informed about the places codegen is required.
++$(obj)/exports_helpers_generated.h: $(obj)/helpers/helpers.o FORCE
++ $(call if_changed,exports)
++
+ $(obj)/exports_bindings_generated.h: $(obj)/bindings.o FORCE
+ $(call if_changed,exports)
+
+diff --git a/rust/exports.c b/rust/exports.c
+index 3803c21d1403ef..e5695f3b45b7aa 100644
+--- a/rust/exports.c
++++ b/rust/exports.c
+@@ -17,6 +17,7 @@
+
+ #include "exports_core_generated.h"
+ #include "exports_alloc_generated.h"
++#include "exports_helpers_generated.h"
+ #include "exports_bindings_generated.h"
+ #include "exports_kernel_generated.h"
+
+diff --git a/rust/helpers.c b/rust/helpers.c
+deleted file mode 100644
+index 92d3c03ae1bd53..00000000000000
+--- a/rust/helpers.c
++++ /dev/null
+@@ -1,239 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Non-trivial C macros cannot be used in Rust. Similarly, inlined C functions
+- * cannot be called either. This file explicitly creates functions ("helpers")
+- * that wrap those so that they can be called from Rust.
+- *
+- * Even though Rust kernel modules should never use the bindings directly, some
+- * of these helpers need to be exported because Rust generics and inlined
+- * functions may not get their code generated in the crate where they are
+- * defined. Other helpers, called from non-inline functions, may not be
+- * exported, in principle. However, in general, the Rust compiler does not
+- * guarantee codegen will be performed for a non-inline function either.
+- * Therefore, this file exports all the helpers. In the future, this may be
+- * revisited to reduce the number of exports after the compiler is informed
+- * about the places codegen is required.
+- *
+- * All symbols are exported as GPL-only to guarantee no GPL-only feature is
+- * accidentally exposed.
+- *
+- * Sorted alphabetically.
+- */
+-
+-#include <kunit/test-bug.h>
+-#include <linux/bug.h>
+-#include <linux/build_bug.h>
+-#include <linux/device.h>
+-#include <linux/err.h>
+-#include <linux/errname.h>
+-#include <linux/gfp.h>
+-#include <linux/highmem.h>
+-#include <linux/mutex.h>
+-#include <linux/refcount.h>
+-#include <linux/sched/signal.h>
+-#include <linux/slab.h>
+-#include <linux/spinlock.h>
+-#include <linux/wait.h>
+-#include <linux/workqueue.h>
+-
+-__noreturn void rust_helper_BUG(void)
+-{
+- BUG();
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_BUG);
+-
+-unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
+- unsigned long n)
+-{
+- return copy_from_user(to, from, n);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_copy_from_user);
+-
+-unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
+- unsigned long n)
+-{
+- return copy_to_user(to, from, n);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_copy_to_user);
+-
+-void rust_helper_mutex_lock(struct mutex *lock)
+-{
+- mutex_lock(lock);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_mutex_lock);
+-
+-void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
+- struct lock_class_key *key)
+-{
+-#ifdef CONFIG_DEBUG_SPINLOCK
+- __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
+-#else
+- spin_lock_init(lock);
+-#endif
+-}
+-EXPORT_SYMBOL_GPL(rust_helper___spin_lock_init);
+-
+-void rust_helper_spin_lock(spinlock_t *lock)
+-{
+- spin_lock(lock);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_spin_lock);
+-
+-void rust_helper_spin_unlock(spinlock_t *lock)
+-{
+- spin_unlock(lock);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_spin_unlock);
+-
+-void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
+-{
+- init_wait(wq_entry);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_init_wait);
+-
+-int rust_helper_signal_pending(struct task_struct *t)
+-{
+- return signal_pending(t);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
+-
+-struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+-{
+- return alloc_pages(gfp_mask, order);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
+-
+-void *rust_helper_kmap_local_page(struct page *page)
+-{
+- return kmap_local_page(page);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
+-
+-void rust_helper_kunmap_local(const void *addr)
+-{
+- kunmap_local(addr);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
+-
+-refcount_t rust_helper_REFCOUNT_INIT(int n)
+-{
+- return (refcount_t)REFCOUNT_INIT(n);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_REFCOUNT_INIT);
+-
+-void rust_helper_refcount_inc(refcount_t *r)
+-{
+- refcount_inc(r);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_refcount_inc);
+-
+-bool rust_helper_refcount_dec_and_test(refcount_t *r)
+-{
+- return refcount_dec_and_test(r);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);
+-
+-__force void *rust_helper_ERR_PTR(long err)
+-{
+- return ERR_PTR(err);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_ERR_PTR);
+-
+-bool rust_helper_IS_ERR(__force const void *ptr)
+-{
+- return IS_ERR(ptr);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_IS_ERR);
+-
+-long rust_helper_PTR_ERR(__force const void *ptr)
+-{
+- return PTR_ERR(ptr);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_PTR_ERR);
+-
+-const char *rust_helper_errname(int err)
+-{
+- return errname(err);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_errname);
+-
+-struct task_struct *rust_helper_get_current(void)
+-{
+- return current;
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_get_current);
+-
+-void rust_helper_get_task_struct(struct task_struct *t)
+-{
+- get_task_struct(t);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_get_task_struct);
+-
+-void rust_helper_put_task_struct(struct task_struct *t)
+-{
+- put_task_struct(t);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_put_task_struct);
+-
+-struct kunit *rust_helper_kunit_get_current_test(void)
+-{
+- return kunit_get_current_test();
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_kunit_get_current_test);
+-
+-void rust_helper_init_work_with_key(struct work_struct *work, work_func_t func,
+- bool onstack, const char *name,
+- struct lock_class_key *key)
+-{
+- __init_work(work, onstack);
+- work->data = (atomic_long_t)WORK_DATA_INIT();
+- lockdep_init_map(&work->lockdep_map, name, key, 0);
+- INIT_LIST_HEAD(&work->entry);
+- work->func = func;
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_init_work_with_key);
+-
+-void * __must_check __realloc_size(2)
+-rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
+-{
+- return krealloc(objp, new_size, flags);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_krealloc);
+-
+-/*
+- * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
+- * use it in contexts where Rust expects a `usize` like slice (array) indices.
+- * `usize` is defined to be the same as C's `uintptr_t` type (can hold any
+- * pointer) but not necessarily the same as `size_t` (can hold the size of any
+- * single object). Most modern platforms use the same concrete integer type for
+- * both of them, but in case we find ourselves on a platform where
+- * that's not true, fail early instead of risking ABI or
+- * integer-overflow issues.
+- *
+- * If your platform fails this assertion, it means that you are in
+- * danger of integer-overflow bugs (even if you attempt to add
+- * `--no-size_t-is-usize`). It may be easiest to change the kernel ABI on
+- * your platform such that `size_t` matches `uintptr_t` (i.e., to increase
+- * `size_t`, because `uintptr_t` has to be at least as big as `size_t`).
+- */
+-static_assert(
+- sizeof(size_t) == sizeof(uintptr_t) &&
+- __alignof__(size_t) == __alignof__(uintptr_t),
+- "Rust code expects C `size_t` to match Rust `usize`"
+-);
+-
+-// This will soon be moved to a separate file, so no need to merge with above.
+-#include <linux/blk-mq.h>
+-#include <linux/blkdev.h>
+-
+-void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
+-{
+- return blk_mq_rq_to_pdu(rq);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_blk_mq_rq_to_pdu);
+-
+-struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
+-{
+- return blk_mq_rq_from_pdu(pdu);
+-}
+-EXPORT_SYMBOL_GPL(rust_helper_blk_mq_rq_from_pdu);
+diff --git a/rust/helpers/blk.c b/rust/helpers/blk.c
+new file mode 100644
+index 00000000000000..cc9f4e6a2d2346
+--- /dev/null
++++ b/rust/helpers/blk.c
+@@ -0,0 +1,14 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/blk-mq.h>
++#include <linux/blkdev.h>
++
++void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
++{
++ return blk_mq_rq_to_pdu(rq);
++}
++
++struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
++{
++ return blk_mq_rq_from_pdu(pdu);
++}
+diff --git a/rust/helpers/bug.c b/rust/helpers/bug.c
+new file mode 100644
+index 00000000000000..e2d13babc73710
+--- /dev/null
++++ b/rust/helpers/bug.c
+@@ -0,0 +1,8 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/bug.h>
++
++__noreturn void rust_helper_BUG(void)
++{
++ BUG();
++}
+diff --git a/rust/helpers/build_assert.c b/rust/helpers/build_assert.c
+new file mode 100644
+index 00000000000000..6a54b2680b1457
+--- /dev/null
++++ b/rust/helpers/build_assert.c
+@@ -0,0 +1,25 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/build_bug.h>
++
++/*
++ * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
++ * use it in contexts where Rust expects a `usize` like slice (array) indices.
++ * `usize` is defined to be the same as C's `uintptr_t` type (can hold any
++ * pointer) but not necessarily the same as `size_t` (can hold the size of any
++ * single object). Most modern platforms use the same concrete integer type for
++ * both of them, but in case we find ourselves on a platform where
++ * that's not true, fail early instead of risking ABI or
++ * integer-overflow issues.
++ *
++ * If your platform fails this assertion, it means that you are in
++ * danger of integer-overflow bugs (even if you attempt to add
++ * `--no-size_t-is-usize`). It may be easiest to change the kernel ABI on
++ * your platform such that `size_t` matches `uintptr_t` (i.e., to increase
++ * `size_t`, because `uintptr_t` has to be at least as big as `size_t`).
++ */
++static_assert(
++ sizeof(size_t) == sizeof(uintptr_t) &&
++ __alignof__(size_t) == __alignof__(uintptr_t),
++ "Rust code expects C `size_t` to match Rust `usize`"
++);
+diff --git a/rust/helpers/build_bug.c b/rust/helpers/build_bug.c
+new file mode 100644
+index 00000000000000..e994f7b5928c01
+--- /dev/null
++++ b/rust/helpers/build_bug.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/errname.h>
++
++const char *rust_helper_errname(int err)
++{
++ return errname(err);
++}
+diff --git a/rust/helpers/err.c b/rust/helpers/err.c
+new file mode 100644
+index 00000000000000..be3d45ef78a257
+--- /dev/null
++++ b/rust/helpers/err.c
+@@ -0,0 +1,19 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/err.h>
++#include <linux/export.h>
++
++__force void *rust_helper_ERR_PTR(long err)
++{
++ return ERR_PTR(err);
++}
++
++bool rust_helper_IS_ERR(__force const void *ptr)
++{
++ return IS_ERR(ptr);
++}
++
++long rust_helper_PTR_ERR(__force const void *ptr)
++{
++ return PTR_ERR(ptr);
++}
+diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
+new file mode 100644
+index 00000000000000..173533616c917c
+--- /dev/null
++++ b/rust/helpers/helpers.c
+@@ -0,0 +1,25 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Non-trivial C macros cannot be used in Rust. Similarly, inlined C functions
++ * cannot be called either. This file explicitly creates functions ("helpers")
++ * that wrap those so that they can be called from Rust.
++ *
++ * Sorted alphabetically.
++ */
++
++#include "blk.c"
++#include "bug.c"
++#include "build_assert.c"
++#include "build_bug.c"
++#include "err.c"
++#include "kunit.c"
++#include "mutex.c"
++#include "page.c"
++#include "refcount.c"
++#include "signal.c"
++#include "slab.c"
++#include "spinlock.c"
++#include "task.c"
++#include "uaccess.c"
++#include "wait.c"
++#include "workqueue.c"
+diff --git a/rust/helpers/kunit.c b/rust/helpers/kunit.c
+new file mode 100644
+index 00000000000000..9d725067eb3bc1
+--- /dev/null
++++ b/rust/helpers/kunit.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <kunit/test-bug.h>
++#include <linux/export.h>
++
++struct kunit *rust_helper_kunit_get_current_test(void)
++{
++ return kunit_get_current_test();
++}
+diff --git a/rust/helpers/mutex.c b/rust/helpers/mutex.c
+new file mode 100644
+index 00000000000000..a17ca8cdb50ca0
+--- /dev/null
++++ b/rust/helpers/mutex.c
+@@ -0,0 +1,15 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/mutex.h>
++
++void rust_helper_mutex_lock(struct mutex *lock)
++{
++ mutex_lock(lock);
++}
++
++void rust_helper___mutex_init(struct mutex *mutex, const char *name,
++ struct lock_class_key *key)
++{
++ __mutex_init(mutex, name, key);
++}
+diff --git a/rust/helpers/page.c b/rust/helpers/page.c
+new file mode 100644
+index 00000000000000..b3f2b8fbf87fc9
+--- /dev/null
++++ b/rust/helpers/page.c
+@@ -0,0 +1,19 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/gfp.h>
++#include <linux/highmem.h>
++
++struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
++{
++ return alloc_pages(gfp_mask, order);
++}
++
++void *rust_helper_kmap_local_page(struct page *page)
++{
++ return kmap_local_page(page);
++}
++
++void rust_helper_kunmap_local(const void *addr)
++{
++ kunmap_local(addr);
++}
+diff --git a/rust/helpers/refcount.c b/rust/helpers/refcount.c
+new file mode 100644
+index 00000000000000..f47afc148ec36c
+--- /dev/null
++++ b/rust/helpers/refcount.c
+@@ -0,0 +1,19 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/refcount.h>
++
++refcount_t rust_helper_REFCOUNT_INIT(int n)
++{
++ return (refcount_t)REFCOUNT_INIT(n);
++}
++
++void rust_helper_refcount_inc(refcount_t *r)
++{
++ refcount_inc(r);
++}
++
++bool rust_helper_refcount_dec_and_test(refcount_t *r)
++{
++ return refcount_dec_and_test(r);
++}
+diff --git a/rust/helpers/signal.c b/rust/helpers/signal.c
+new file mode 100644
+index 00000000000000..63c407f80c26b3
+--- /dev/null
++++ b/rust/helpers/signal.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/sched/signal.h>
++
++int rust_helper_signal_pending(struct task_struct *t)
++{
++ return signal_pending(t);
++}
+diff --git a/rust/helpers/slab.c b/rust/helpers/slab.c
+new file mode 100644
+index 00000000000000..f043e087f9d666
+--- /dev/null
++++ b/rust/helpers/slab.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/slab.h>
++
++void * __must_check __realloc_size(2)
++rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
++{
++ return krealloc(objp, new_size, flags);
++}
+diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c
+new file mode 100644
+index 00000000000000..acc1376b833c78
+--- /dev/null
++++ b/rust/helpers/spinlock.c
+@@ -0,0 +1,24 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/spinlock.h>
++
++void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
++ struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_SPINLOCK
++ __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
++#else
++ spin_lock_init(lock);
++#endif
++}
++
++void rust_helper_spin_lock(spinlock_t *lock)
++{
++ spin_lock(lock);
++}
++
++void rust_helper_spin_unlock(spinlock_t *lock)
++{
++ spin_unlock(lock);
++}
+diff --git a/rust/helpers/task.c b/rust/helpers/task.c
+new file mode 100644
+index 00000000000000..7ac789232d11cd
+--- /dev/null
++++ b/rust/helpers/task.c
+@@ -0,0 +1,19 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/sched/task.h>
++
++struct task_struct *rust_helper_get_current(void)
++{
++ return current;
++}
++
++void rust_helper_get_task_struct(struct task_struct *t)
++{
++ get_task_struct(t);
++}
++
++void rust_helper_put_task_struct(struct task_struct *t)
++{
++ put_task_struct(t);
++}
+diff --git a/rust/helpers/uaccess.c b/rust/helpers/uaccess.c
+new file mode 100644
+index 00000000000000..f49076f813cd64
+--- /dev/null
++++ b/rust/helpers/uaccess.c
+@@ -0,0 +1,15 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/uaccess.h>
++
++unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
++ unsigned long n)
++{
++ return copy_from_user(to, from, n);
++}
++
++unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
++ unsigned long n)
++{
++ return copy_to_user(to, from, n);
++}
+diff --git a/rust/helpers/wait.c b/rust/helpers/wait.c
+new file mode 100644
+index 00000000000000..c7336bbf275071
+--- /dev/null
++++ b/rust/helpers/wait.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/wait.h>
++
++void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
++{
++ init_wait(wq_entry);
++}
+diff --git a/rust/helpers/workqueue.c b/rust/helpers/workqueue.c
+new file mode 100644
+index 00000000000000..f59427acc32373
+--- /dev/null
++++ b/rust/helpers/workqueue.c
+@@ -0,0 +1,15 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/export.h>
++#include <linux/workqueue.h>
++
++void rust_helper_init_work_with_key(struct work_struct *work, work_func_t func,
++ bool onstack, const char *name,
++ struct lock_class_key *key)
++{
++ __init_work(work, onstack);
++ work->data = (atomic_long_t)WORK_DATA_INIT();
++ lockdep_init_map(&work->lockdep_map, name, key, 0);
++ INIT_LIST_HEAD(&work->entry);
++ work->func = func;
++}
+diff --git a/rust/kernel/sync/locked_by.rs b/rust/kernel/sync/locked_by.rs
+index babc731bd5f626..ce2ee8d8786587 100644
+--- a/rust/kernel/sync/locked_by.rs
++++ b/rust/kernel/sync/locked_by.rs
+@@ -83,8 +83,12 @@ pub struct LockedBy<T: ?Sized, U: ?Sized> {
+ // SAFETY: `LockedBy` can be transferred across thread boundaries iff the data it protects can.
+ unsafe impl<T: ?Sized + Send, U: ?Sized> Send for LockedBy<T, U> {}
+
+-// SAFETY: `LockedBy` serialises the interior mutability it provides, so it is `Sync` as long as the
+-// data it protects is `Send`.
++// SAFETY: If `T` is not `Sync`, then parallel shared access to this `LockedBy` allows you to use
++// `access_mut` to hand out `&mut T` on one thread at the time. The requirement that `T: Send` is
++// sufficient to allow that.
++//
++// If `T` is `Sync`, then the `access` method also becomes available, which allows you to obtain
++// several `&T` from several threads at once. However, this is okay as `T` is `Sync`.
+ unsafe impl<T: ?Sized + Send, U: ?Sized> Sync for LockedBy<T, U> {}
+
+ impl<T, U> LockedBy<T, U> {
+@@ -118,7 +122,10 @@ impl<T: ?Sized, U> LockedBy<T, U> {
+ ///
+ /// Panics if `owner` is different from the data protected by the lock used in
+ /// [`new`](LockedBy::new).
+- pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
++ pub fn access<'a>(&'a self, owner: &'a U) -> &'a T
++ where
++ T: Sync,
++ {
+ build_assert!(
+ size_of::<U>() > 0,
+ "`U` cannot be a ZST because `owner` wouldn't be unique"
+@@ -127,7 +134,10 @@ pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
+ panic!("mismatched owners");
+ }
+
+- // SAFETY: `owner` is evidence that the owner is locked.
++ // SAFETY: `owner` is evidence that there are only shared references to the owner for the
++ // duration of 'a, so it's not possible to use `Self::access_mut` to obtain a mutable
++ // reference to the inner value that aliases with this shared reference. The type is `Sync`
++ // so there are no other requirements.
+ unsafe { &*self.data.get() }
+ }
+
+diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py
+index 43c687e7a69de6..65dd1bd129641d 100644
+--- a/scripts/gdb/linux/proc.py
++++ b/scripts/gdb/linux/proc.py
+@@ -18,6 +18,7 @@ from linux import utils
+ from linux import tasks
+ from linux import lists
+ from linux import vfs
++from linux import rbtree
+ from struct import *
+
+
+@@ -172,8 +173,7 @@ values of that process namespace"""
+ gdb.write("{:^18} {:^15} {:>9} {} {} options\n".format(
+ "mount", "super_block", "devname", "pathname", "fstype"))
+
+- for mnt in lists.list_for_each_entry(namespace['list'],
+- mount_ptr_type, "mnt_list"):
++ for mnt in rbtree.rb_inorder_for_each_entry(namespace['mounts'], mount_ptr_type, "mnt_node"):
+ devname = mnt['mnt_devname'].string()
+ devname = devname if devname else "none"
+
+diff --git a/scripts/gdb/linux/rbtree.py b/scripts/gdb/linux/rbtree.py
+index fe462855eefda3..fcbcc5f4153cdf 100644
+--- a/scripts/gdb/linux/rbtree.py
++++ b/scripts/gdb/linux/rbtree.py
+@@ -9,6 +9,18 @@ from linux import utils
+ rb_root_type = utils.CachedType("struct rb_root")
+ rb_node_type = utils.CachedType("struct rb_node")
+
++def rb_inorder_for_each(root):
++ def inorder(node):
++ if node:
++ yield from inorder(node['rb_left'])
++ yield node
++ yield from inorder(node['rb_right'])
++
++ yield from inorder(root['rb_node'])
++
++def rb_inorder_for_each_entry(root, gdbtype, member):
++ for node in rb_inorder_for_each(root):
++ yield utils.container_of(node, gdbtype, member)
+
+ def rb_first(root):
+ if root.type == rb_root_type.get_type():
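The recursive generator added above yields nodes in the same order kernel C code obtains iteratively from rb_first()/rb_next(). For comparison, an in-kernel sketch of the walk proc.py now performs, reusing the mount/mnt_node names from the previous hunk ("ns" is a hypothetical struct mnt_namespace pointer):

	struct rb_node *node;

	for (node = rb_first(&ns->mounts); node; node = rb_next(node)) {
		struct mount *mnt = rb_entry(node, struct mount, mnt_node);
		/* visit mounts in key order */
	}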
+diff --git a/scripts/gdb/linux/timerlist.py b/scripts/gdb/linux/timerlist.py
+index 64bc87191003de..98445671fe8389 100644
+--- a/scripts/gdb/linux/timerlist.py
++++ b/scripts/gdb/linux/timerlist.py
+@@ -87,21 +87,22 @@ def print_cpu(hrtimer_bases, cpu, max_clock_bases):
+ text += "\n"
+
+ if constants.LX_CONFIG_TICK_ONESHOT:
+- fmts = [(" .{} : {}", 'nohz_mode'),
+- (" .{} : {} nsecs", 'last_tick'),
+- (" .{} : {}", 'tick_stopped'),
+- (" .{} : {}", 'idle_jiffies'),
+- (" .{} : {}", 'idle_calls'),
+- (" .{} : {}", 'idle_sleeps'),
+- (" .{} : {} nsecs", 'idle_entrytime'),
+- (" .{} : {} nsecs", 'idle_waketime'),
+- (" .{} : {} nsecs", 'idle_exittime'),
+- (" .{} : {} nsecs", 'idle_sleeptime'),
+- (" .{}: {} nsecs", 'iowait_sleeptime'),
+- (" .{} : {}", 'last_jiffies'),
+- (" .{} : {}", 'next_timer'),
+- (" .{} : {} nsecs", 'idle_expires')]
+- text += "\n".join([s.format(f, ts[f]) for s, f in fmts])
++ TS_FLAG_STOPPED = 1 << 1
++ TS_FLAG_NOHZ = 1 << 4
++ text += f" .{'nohz':15s}: {int(bool(ts['flags'] & TS_FLAG_NOHZ))}\n"
++ text += f" .{'last_tick':15s}: {ts['last_tick']}\n"
++ text += f" .{'tick_stopped':15s}: {int(bool(ts['flags'] & TS_FLAG_STOPPED))}\n"
++ text += f" .{'idle_jiffies':15s}: {ts['idle_jiffies']}\n"
++ text += f" .{'idle_calls':15s}: {ts['idle_calls']}\n"
++ text += f" .{'idle_sleeps':15s}: {ts['idle_sleeps']}\n"
++ text += f" .{'idle_entrytime':15s}: {ts['idle_entrytime']} nsecs\n"
++ text += f" .{'idle_waketime':15s}: {ts['idle_waketime']} nsecs\n"
++ text += f" .{'idle_exittime':15s}: {ts['idle_exittime']} nsecs\n"
++ text += f" .{'idle_sleeptime':15s}: {ts['idle_sleeptime']} nsecs\n"
++ text += f" .{'iowait_sleeptime':15s}: {ts['iowait_sleeptime']} nsecs\n"
++ text += f" .{'last_jiffies':15s}: {ts['last_jiffies']}\n"
++ text += f" .{'next_timer':15s}: {ts['next_timer']}\n"
++ text += f" .{'idle_expires':15s}: {ts['idle_expires']} nsecs\n"
+ text += "\njiffies: {}\n".format(jiffies)
+
+ text += "\n"
+diff --git a/scripts/kconfig/parser.y b/scripts/kconfig/parser.y
+index 61900feb4254a3..add1ce4b5091d6 100644
+--- a/scripts/kconfig/parser.y
++++ b/scripts/kconfig/parser.y
+@@ -158,8 +158,14 @@ config_stmt: config_entry_start config_option_list
+ yynerrs++;
+ }
+
+-	list_add_tail(&current_entry->sym->choice_link,
+-		      &current_choice->choice_members);
++ /*
++ * If the same symbol appears twice in a choice block, the list
++ * node would be added twice, leading to a broken linked list.
++	 * list_empty() ensures that this symbol has not yet been added.
++ */
++	if (list_empty(&current_entry->sym->choice_link))
++		list_add_tail(&current_entry->sym->choice_link,
++			      &current_choice->choice_members);
+ }
+
+ printd(DEBUG_PARSE, "%s:%d:endconfig\n", cur_filename, cur_lineno);
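The guard leans on a kernel list convention: a node set up with INIT_LIST_HEAD() points at itself, so list_empty() applied to the node answers "is this entry linked anywhere yet?". A standalone sketch of the same double-add protection (the types are hypothetical):

	struct member {
		struct list_head link;	/* INIT_LIST_HEAD() at creation */
	};

	static void add_once(struct member *m, struct list_head *head)
	{
		if (list_empty(&m->link))
			list_add_tail(&m->link, head);
	}

Linking an already-linked node a second time rewrites its neighbors' pointers inconsistently, which is the broken list the comment describes.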
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index 7d239c032b3d67..5e9f810b9e7f74 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -1166,7 +1166,7 @@ void ConfigInfoView::clicked(const QUrl &url)
+ {
+ QByteArray str = url.toEncoded();
+ const std::size_t count = str.size();
+- char *data = new char[count + 1];
++ char *data = new char[count + 2]; // '$' + '\0'
+ struct symbol **result;
+ struct menu *m = NULL;
+
+@@ -1505,6 +1505,8 @@ ConfigMainWindow::ConfigMainWindow(void)
+ connect(helpText, &ConfigInfoView::menuSelected,
+ this, &ConfigMainWindow::setMenuLink);
+
++ conf_read(NULL);
++
+ QString listMode = configSettings->value("/listMode", "symbol").toString();
+ if (listMode == "single")
+ showSingleView();
+@@ -1906,8 +1908,6 @@ int main(int ac, char** av)
+ configApp->connect(configApp, SIGNAL(lastWindowClosed()), SLOT(quit()));
+ configApp->connect(configApp, SIGNAL(aboutToQuit()), v, SLOT(saveSettings()));
+
+- conf_read(NULL);
+-
+ v->show();
+ configApp->exec();
+
+diff --git a/security/Kconfig b/security/Kconfig
+index 412e76f1575d0d..a93c1a9b7c283b 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -19,6 +19,38 @@ config SECURITY_DMESG_RESTRICT
+
+ If you are unsure how to answer this question, answer N.
+
++choice
++ prompt "Allow /proc/pid/mem access override"
++ default PROC_MEM_ALWAYS_FORCE
++ help
++ Traditionally /proc/pid/mem allows users to override memory
++	  permissions for tracers like ptrace, assuming they have ptrace
++ capability.
++
++ This allows people to limit that - either never override, or
++ require actual active ptrace attachment.
++
++	  Defaults to the traditional behavior (for now).
++
++config PROC_MEM_ALWAYS_FORCE
++ bool "Traditional /proc/pid/mem behavior"
++ help
++ This allows /proc/pid/mem accesses to override memory mapping
++ permissions if you have ptrace access rights.
++
++config PROC_MEM_FORCE_PTRACE
++ bool "Require active ptrace() use for access override"
++ help
++ This allows /proc/pid/mem accesses to override memory mapping
++ permissions for active ptracers like gdb.
++
++config PROC_MEM_NO_FORCE
++ bool "Never"
++ help
++	  Never override memory mapping permissions.
++
++endchoice
++
+ config SECURITY
+ bool "Enable different security models"
+ depends on SYSFS
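Exactly one symbol in a choice block is set, so code can branch on the result with IS_ENABLED() at compile time. A hypothetical sketch of how such a three-way policy is typically consumed (the real /proc/pid/mem check lives elsewhere in this patch set, and task_is_traced() merely stands in for the real predicate):

	static bool proc_mem_may_force(struct task_struct *task)
	{
		if (IS_ENABLED(CONFIG_PROC_MEM_NO_FORCE))
			return false;			/* never override */
		if (IS_ENABLED(CONFIG_PROC_MEM_FORCE_PTRACE))
			return task_is_traced(task);	/* stand-in check */
		return true;	/* PROC_MEM_ALWAYS_FORCE: traditional */
	}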
+diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
+index 90b53500a236bd..aed9e3ef2c9ecb 100644
+--- a/security/tomoyo/domain.c
++++ b/security/tomoyo/domain.c
+@@ -723,10 +723,13 @@ int tomoyo_find_next_domain(struct linux_binprm *bprm)
+ ee->r.obj = &ee->obj;
+ ee->obj.path1 = bprm->file->f_path;
+ /* Get symlink's pathname of program. */
+- retval = -ENOENT;
+ exename.name = tomoyo_realpath_nofollow(original_name);
+- if (!exename.name)
+- goto out;
++ if (!exename.name) {
++ /* Fallback to realpath if symlink's pathname does not exist. */
++ exename.name = tomoyo_realpath_from_path(&bprm->file->f_path);
++ if (!exename.name)
++ goto out;
++ }
+ tomoyo_fill_path_info(&exename);
+ retry:
+ /* Check 'aggregator' directive. */
+diff --git a/sound/core/control.c b/sound/core/control.c
+index f64a555f404f0a..63b35c1fbdac1b 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -1167,10 +1167,7 @@ static int __snd_ctl_elem_info(struct snd_card *card,
+ #ifdef CONFIG_SND_DEBUG
+ info->access = 0;
+ #endif
+- result = snd_power_ref_and_wait(card);
+- if (!result)
+- result = kctl->info(kctl, info);
+- snd_power_unref(card);
++ result = kctl->info(kctl, info);
+ if (result >= 0) {
+ snd_BUG_ON(info->access);
+ index_offset = snd_ctl_get_ioff(kctl, &info->id);
+@@ -1208,12 +1205,17 @@ static int snd_ctl_elem_info(struct snd_ctl_file *ctl,
+ static int snd_ctl_elem_info_user(struct snd_ctl_file *ctl,
+ struct snd_ctl_elem_info __user *_info)
+ {
++ struct snd_card *card = ctl->card;
+ struct snd_ctl_elem_info info;
+ int result;
+
+ if (copy_from_user(&info, _info, sizeof(info)))
+ return -EFAULT;
++ result = snd_power_ref_and_wait(card);
++ if (result)
++ return result;
+ result = snd_ctl_elem_info(ctl, &info);
++ snd_power_unref(card);
+ if (result < 0)
+ return result;
+ /* drop internal access flags */
+@@ -1257,10 +1259,7 @@ static int snd_ctl_elem_read(struct snd_card *card,
+
+ if (!snd_ctl_skip_validation(&info))
+ fill_remaining_elem_value(control, &info, pattern);
+- ret = snd_power_ref_and_wait(card);
+- if (!ret)
+- ret = kctl->get(kctl, control);
+- snd_power_unref(card);
++ ret = kctl->get(kctl, control);
+ if (ret < 0)
+ return ret;
+ if (!snd_ctl_skip_validation(&info) &&
+@@ -1285,7 +1284,11 @@ static int snd_ctl_elem_read_user(struct snd_card *card,
+ if (IS_ERR(control))
+ return PTR_ERR(no_free_ptr(control));
+
++ result = snd_power_ref_and_wait(card);
++ if (result)
++ return result;
+ result = snd_ctl_elem_read(card, control);
++ snd_power_unref(card);
+ if (result < 0)
+ return result;
+
+@@ -1300,7 +1303,7 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file,
+ struct snd_kcontrol *kctl;
+ struct snd_kcontrol_volatile *vd;
+ unsigned int index_offset;
+- int result;
++ int result = 0;
+
+ down_write(&card->controls_rwsem);
+ kctl = snd_ctl_find_id_locked(card, &control->id);
+@@ -1318,9 +1321,8 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file,
+ }
+
+ snd_ctl_build_ioff(&control->id, kctl, index_offset);
+- result = snd_power_ref_and_wait(card);
+ /* validate input values */
+- if (IS_ENABLED(CONFIG_SND_CTL_INPUT_VALIDATION) && !result) {
++ if (IS_ENABLED(CONFIG_SND_CTL_INPUT_VALIDATION)) {
+ struct snd_ctl_elem_info info;
+
+ memset(&info, 0, sizeof(info));
+@@ -1332,7 +1334,6 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file,
+ }
+ if (!result)
+ result = kctl->put(kctl, control);
+- snd_power_unref(card);
+ if (result < 0) {
+ up_write(&card->controls_rwsem);
+ return result;
+@@ -1361,7 +1362,11 @@ static int snd_ctl_elem_write_user(struct snd_ctl_file *file,
+ return PTR_ERR(no_free_ptr(control));
+
+ card = file->card;
++ result = snd_power_ref_and_wait(card);
++ if (result < 0)
++ return result;
+ result = snd_ctl_elem_write(card, file, control);
++ snd_power_unref(card);
+ if (result < 0)
+ return result;
+
+@@ -1830,7 +1835,7 @@ static int call_tlv_handler(struct snd_ctl_file *file, int op_flag,
+ {SNDRV_CTL_TLV_OP_CMD, SNDRV_CTL_ELEM_ACCESS_TLV_COMMAND},
+ };
+ struct snd_kcontrol_volatile *vd = &kctl->vd[snd_ctl_get_ioff(kctl, id)];
+- int i, ret;
++ int i;
+
+ /* Check support of the request for this element. */
+ for (i = 0; i < ARRAY_SIZE(pairs); ++i) {
+@@ -1848,11 +1853,7 @@ static int call_tlv_handler(struct snd_ctl_file *file, int op_flag,
+ vd->owner != NULL && vd->owner != file)
+ return -EPERM;
+
+- ret = snd_power_ref_and_wait(file->card);
+- if (!ret)
+- ret = kctl->tlv.c(kctl, op_flag, size, buf);
+- snd_power_unref(file->card);
+- return ret;
++ return kctl->tlv.c(kctl, op_flag, size, buf);
+ }
+
+ static int read_tlv_buf(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id,
+@@ -1965,16 +1966,28 @@ static long snd_ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg
+ case SNDRV_CTL_IOCTL_SUBSCRIBE_EVENTS:
+ return snd_ctl_subscribe_events(ctl, ip);
+ case SNDRV_CTL_IOCTL_TLV_READ:
+- scoped_guard(rwsem_read, &ctl->card->controls_rwsem)
++ err = snd_power_ref_and_wait(card);
++ if (err < 0)
++ return err;
++ scoped_guard(rwsem_read, &card->controls_rwsem)
+ err = snd_ctl_tlv_ioctl(ctl, argp, SNDRV_CTL_TLV_OP_READ);
++ snd_power_unref(card);
+ return err;
+ case SNDRV_CTL_IOCTL_TLV_WRITE:
+- scoped_guard(rwsem_write, &ctl->card->controls_rwsem)
++ err = snd_power_ref_and_wait(card);
++ if (err < 0)
++ return err;
++ scoped_guard(rwsem_write, &card->controls_rwsem)
+ err = snd_ctl_tlv_ioctl(ctl, argp, SNDRV_CTL_TLV_OP_WRITE);
++ snd_power_unref(card);
+ return err;
+ case SNDRV_CTL_IOCTL_TLV_COMMAND:
+- scoped_guard(rwsem_write, &ctl->card->controls_rwsem)
++ err = snd_power_ref_and_wait(card);
++ if (err < 0)
++ return err;
++ scoped_guard(rwsem_write, &card->controls_rwsem)
+ err = snd_ctl_tlv_ioctl(ctl, argp, SNDRV_CTL_TLV_OP_CMD);
++ snd_power_unref(card);
+ return err;
+ case SNDRV_CTL_IOCTL_POWER:
+ return -ENOPROTOOPT;
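The recurring move in this file is hoisting snd_power_ref_and_wait()/snd_power_unref() from the per-element callbacks up to the entry points, so the power reference is taken exactly once per operation before any locked worker runs. The resulting wrapper shape, sketched with a hypothetical worker function:

	static int ctl_op_user(struct snd_ctl_file *ctl, void __user *arg)
	{
		struct snd_card *card = ctl->card;
		int err;

		err = snd_power_ref_and_wait(card);	/* resume if suspended */
		if (err < 0)
			return err;
		err = do_ctl_op(ctl, arg);		/* hypothetical worker */
		snd_power_unref(card);
		return err;
	}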
+diff --git a/sound/core/control_compat.c b/sound/core/control_compat.c
+index 934bb945e702a2..ff0031cc7dfb8c 100644
+--- a/sound/core/control_compat.c
++++ b/sound/core/control_compat.c
+@@ -79,6 +79,7 @@ struct snd_ctl_elem_info32 {
+ static int snd_ctl_elem_info_compat(struct snd_ctl_file *ctl,
+ struct snd_ctl_elem_info32 __user *data32)
+ {
++ struct snd_card *card = ctl->card;
+ struct snd_ctl_elem_info *data __free(kfree) = NULL;
+ int err;
+
+@@ -95,7 +96,11 @@ static int snd_ctl_elem_info_compat(struct snd_ctl_file *ctl,
+ if (get_user(data->value.enumerated.item, &data32->value.enumerated.item))
+ return -EFAULT;
+
++ err = snd_power_ref_and_wait(card);
++ if (err < 0)
++ return err;
+ err = snd_ctl_elem_info(ctl, data);
++ snd_power_unref(card);
+ if (err < 0)
+ return err;
+ /* restore info to 32bit */
+@@ -175,10 +180,7 @@ static int get_ctl_type(struct snd_card *card, struct snd_ctl_elem_id *id,
+ if (info == NULL)
+ return -ENOMEM;
+ info->id = *id;
+- err = snd_power_ref_and_wait(card);
+- if (!err)
+- err = kctl->info(kctl, info);
+- snd_power_unref(card);
++ err = kctl->info(kctl, info);
+ if (err >= 0) {
+ err = info->type;
+ *countp = info->count;
+@@ -275,8 +277,8 @@ static int copy_ctl_value_to_user(void __user *userdata,
+ return 0;
+ }
+
+-static int ctl_elem_read_user(struct snd_card *card,
+- void __user *userdata, void __user *valuep)
++static int __ctl_elem_read_user(struct snd_card *card,
++ void __user *userdata, void __user *valuep)
+ {
+ struct snd_ctl_elem_value *data __free(kfree) = NULL;
+ int err, type, count;
+@@ -296,8 +298,21 @@ static int ctl_elem_read_user(struct snd_card *card,
+ return copy_ctl_value_to_user(userdata, valuep, data, type, count);
+ }
+
+-static int ctl_elem_write_user(struct snd_ctl_file *file,
+- void __user *userdata, void __user *valuep)
++static int ctl_elem_read_user(struct snd_card *card,
++ void __user *userdata, void __user *valuep)
++{
++ int err;
++
++ err = snd_power_ref_and_wait(card);
++ if (err < 0)
++ return err;
++ err = __ctl_elem_read_user(card, userdata, valuep);
++ snd_power_unref(card);
++ return err;
++}
++
++static int __ctl_elem_write_user(struct snd_ctl_file *file,
++ void __user *userdata, void __user *valuep)
+ {
+ struct snd_ctl_elem_value *data __free(kfree) = NULL;
+ struct snd_card *card = file->card;
+@@ -318,6 +333,20 @@ static int ctl_elem_write_user(struct snd_ctl_file *file,
+ return copy_ctl_value_to_user(userdata, valuep, data, type, count);
+ }
+
++static int ctl_elem_write_user(struct snd_ctl_file *file,
++ void __user *userdata, void __user *valuep)
++{
++ struct snd_card *card = file->card;
++ int err;
++
++ err = snd_power_ref_and_wait(card);
++ if (err < 0)
++ return err;
++ err = __ctl_elem_write_user(file, userdata, valuep);
++ snd_power_unref(card);
++ return err;
++}
++
+ static int snd_ctl_elem_read_user_compat(struct snd_card *card,
+ struct snd_ctl_elem_value32 __user *data32)
+ {
+diff --git a/sound/core/init.c b/sound/core/init.c
+index b9b708cf980d6d..27e7569ace99b4 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -654,13 +654,19 @@ void snd_card_free(struct snd_card *card)
+ }
+ EXPORT_SYMBOL(snd_card_free);
+
+/* check if the character is in the valid ASCII range */
++static inline bool safe_ascii_char(char c)
++{
++ return isascii(c) && isalnum(c);
++}
++
+ /* retrieve the last word of shortname or longname */
+ static const char *retrieve_id_from_card_name(const char *name)
+ {
+ const char *spos = name;
+
+ while (*name) {
+- if (isspace(*name) && isalnum(name[1]))
++ if (isspace(*name) && safe_ascii_char(name[1]))
+ spos = name + 1;
+ name++;
+ }
+@@ -687,12 +693,12 @@ static void copy_valid_id_string(struct snd_card *card, const char *src,
+ {
+ char *id = card->id;
+
+- while (*nid && !isalnum(*nid))
++ while (*nid && !safe_ascii_char(*nid))
+ nid++;
+ if (isdigit(*nid))
+ *id++ = isalpha(*src) ? *src : 'D';
+ while (*nid && (size_t)(id - card->id) < sizeof(card->id) - 1) {
+- if (isalnum(*nid))
++ if (safe_ascii_char(*nid))
+ *id++ = *nid;
+ nid++;
+ }
+@@ -787,7 +793,7 @@ static ssize_t id_store(struct device *dev, struct device_attribute *attr,
+
+ for (idx = 0; idx < copy; idx++) {
+ c = buf[idx];
+- if (!isalnum(c) && c != '_' && c != '-')
++ if (!safe_ascii_char(c) && c != '_' && c != '-')
+ return -EINVAL;
+ }
+ memcpy(buf1, buf, copy);
+diff --git a/sound/core/oss/mixer_oss.c b/sound/core/oss/mixer_oss.c
+index 6a0508093ea688..81af725ea40e5d 100644
+--- a/sound/core/oss/mixer_oss.c
++++ b/sound/core/oss/mixer_oss.c
+@@ -901,8 +901,8 @@ static void snd_mixer_oss_slot_free(struct snd_mixer_oss_slot *chn)
+ struct slot *p = chn->private_data;
+ if (p) {
+ if (p->allocated && p->assigned) {
+- kfree_const(p->assigned->name);
+- kfree_const(p->assigned);
++ kfree(p->assigned->name);
++ kfree(p->assigned);
+ }
+ kfree(p);
+ }
+diff --git a/sound/isa/gus/gus_pcm.c b/sound/isa/gus/gus_pcm.c
+index 850544725da796..d55c3dc229c0e8 100644
+--- a/sound/isa/gus/gus_pcm.c
++++ b/sound/isa/gus/gus_pcm.c
+@@ -378,7 +378,7 @@ static int snd_gf1_pcm_playback_copy(struct snd_pcm_substream *substream,
+
+ bpos = get_bpos(pcmp, voice, pos, len);
+ if (bpos < 0)
+- return pos;
++ return bpos;
+ if (copy_from_iter(runtime->dma_area + bpos, len, src) != len)
+ return -EFAULT;
+ return playback_copy_ack(substream, bpos, len);
+@@ -395,7 +395,7 @@ static int snd_gf1_pcm_playback_silence(struct snd_pcm_substream *substream,
+
+ bpos = get_bpos(pcmp, voice, pos, len);
+ if (bpos < 0)
+- return pos;
++ return bpos;
+ snd_pcm_format_set_silence(runtime->format, runtime->dma_area + bpos,
+ bytes_to_samples(runtime, count));
+ return playback_copy_ack(substream, bpos, len);
+diff --git a/sound/pci/asihpi/hpimsgx.c b/sound/pci/asihpi/hpimsgx.c
+index d0caef2994818e..b68e6bfbbfbab5 100644
+--- a/sound/pci/asihpi/hpimsgx.c
++++ b/sound/pci/asihpi/hpimsgx.c
+@@ -708,7 +708,7 @@ static u16 HPIMSGX__init(struct hpi_message *phm,
+ phr->error = HPI_ERROR_PROCESSING_MESSAGE;
+ return phr->error;
+ }
+- if (hr.error == 0) {
++ if (hr.error == 0 && hr.u.s.adapter_index < HPI_MAX_ADAPTERS) {
+ /* the adapter was created successfully;
+ save the mapping for future use */
+ hpi_entry_points[hr.u.s.adapter_index] = entry_point_func;
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index 68c883f202ca5b..c2d0109866e62e 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -28,7 +28,7 @@
+ #else
+ #define AZX_DCAPS_I915_COMPONENT 0 /* NOP */
+ #endif
+-#define AZX_DCAPS_AMD_ALLOC_FIX (1 << 14) /* AMD allocation workaround */
++/* 14 unused */
+ #define AZX_DCAPS_CTX_WORKAROUND (1 << 15) /* X-Fi workaround */
+ #define AZX_DCAPS_POSFIX_LPIB (1 << 16) /* Use LPIB as default */
+ #define AZX_DCAPS_AMD_WORKAROUND (1 << 17) /* AMD-specific workaround */
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 9cff87dfbecbb1..b34d84fedcc8ab 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -1383,7 +1383,7 @@ static int try_assign_dacs(struct hda_codec *codec, int num_outs,
+ struct nid_path *path;
+ hda_nid_t pin = pins[i];
+
+- if (!spec->obey_preferred_dacs) {
++ if (!spec->preferred_dacs) {
+ path = snd_hda_get_path_from_idx(codec, path_idx[i]);
+ if (path) {
+ badness += assign_out_path_ctls(codec, path);
+@@ -1395,7 +1395,7 @@ static int try_assign_dacs(struct hda_codec *codec, int num_outs,
+ if (dacs[i]) {
+ if (is_dac_already_used(codec, dacs[i]))
+ badness += bad->shared_primary;
+- } else if (spec->obey_preferred_dacs) {
++ } else if (spec->preferred_dacs) {
+ badness += BAD_NO_PRIMARY_DAC;
+ }
+
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 97d33a48ff17c8..b33602e64d174c 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -40,7 +40,6 @@
+
+ #ifdef CONFIG_X86
+ /* for snoop control */
+-#include <linux/dma-map-ops.h>
+ #include <asm/set_memory.h>
+ #include <asm/cpufeature.h>
+ #endif
+@@ -307,7 +306,7 @@ enum {
+
+ /* quirks for ATI HDMI with snoop off */
+ #define AZX_DCAPS_PRESET_ATI_HDMI_NS \
+- (AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_AMD_ALLOC_FIX)
++ (AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_SNOOP_OFF)
+
+ /* quirks for AMD SB */
+ #define AZX_DCAPS_PRESET_AMD_SB \
+@@ -1703,13 +1702,6 @@ static void azx_check_snoop_available(struct azx *chip)
+ if (chip->driver_caps & AZX_DCAPS_SNOOP_OFF)
+ snoop = false;
+
+-#ifdef CONFIG_X86
+- /* check the presence of DMA ops (i.e. IOMMU), disable snoop conditionally */
+- if ((chip->driver_caps & AZX_DCAPS_AMD_ALLOC_FIX) &&
+- !get_dma_ops(chip->card->dev))
+- snoop = false;
+-#endif
+-
+ chip->snoop = snoop;
+ if (!snoop) {
+ dev_info(chip->card->dev, "Force to non-snoop mode\n");
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index e851785ff05814..4a2c8274c3df7e 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -816,6 +816,23 @@ static const struct hda_pintbl cxt_pincfg_sws_js201d[] = {
+ {}
+ };
+
++/* pincfg quirk for Tuxedo Sirius;
++ * unfortunately the (PCI) SSID conflicts with System76 Pangolin pang14,
++ * which has incompatible pin setup, so we check the codec SSID (luckily
++ * different one!) and conditionally apply the quirk here
++ */
++static void cxt_fixup_sirius_top_speaker(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ /* ignore for incorrectly picked-up pang14 */
++ if (codec->core.subsystem_id == 0x278212b3)
++ return;
++ /* set up the top speaker pin */
++ if (action == HDA_FIXUP_ACT_PRE_PROBE)
++ snd_hda_codec_set_pincfg(codec, 0x1d, 0x82170111);
++}
++
+ static const struct hda_fixup cxt_fixups[] = {
+ [CXT_PINCFG_LENOVO_X200] = {
+ .type = HDA_FIXUP_PINS,
+@@ -976,11 +993,8 @@ static const struct hda_fixup cxt_fixups[] = {
+ .v.pins = cxt_pincfg_sws_js201d,
+ },
+ [CXT_PINCFG_TOP_SPEAKER] = {
+- .type = HDA_FIXUP_PINS,
+- .v.pins = (const struct hda_pintbl[]) {
+- { 0x1d, 0x82170111 },
+- { }
+- },
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cxt_fixup_sirius_top_speaker,
+ },
+ };
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 452c6e7c20e209..a2737c1ff92040 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -125,6 +125,7 @@ struct alc_spec {
+ unsigned int has_hs_key:1;
+ unsigned int no_internal_mic_pin:1;
+ unsigned int en_3kpull_low:1;
++ int num_speaker_amps;
+
+ /* for PLL fix */
+ hda_nid_t pll_nid;
+@@ -586,6 +587,7 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ switch (codec->core.vendor_id) {
+ case 0x10ec0236:
+ case 0x10ec0256:
++ case 0x10ec0257:
+ case 0x19e58326:
+ case 0x10ec0283:
+ case 0x10ec0285:
+@@ -4802,7 +4804,133 @@ static void alc298_fixup_samsung_amp(struct hda_codec *codec,
+ }
+ }
+
+-#include "samsung_helper.c"
++struct alc298_samsung_v2_amp_desc {
++ unsigned short nid;
++ int init_seq_size;
++ unsigned short init_seq[18][2];
++};
++
++static const struct alc298_samsung_v2_amp_desc
++alc298_samsung_v2_amp_desc_tbl[] = {
++ { 0x38, 18, {
++ { 0x23e1, 0x0000 }, { 0x2012, 0x006f }, { 0x2014, 0x0000 },
++ { 0x201b, 0x0001 }, { 0x201d, 0x0001 }, { 0x201f, 0x00fe },
++ { 0x2021, 0x0000 }, { 0x2022, 0x0010 }, { 0x203d, 0x0005 },
++ { 0x203f, 0x0003 }, { 0x2050, 0x002c }, { 0x2076, 0x000e },
++ { 0x207c, 0x004a }, { 0x2081, 0x0003 }, { 0x2399, 0x0003 },
++ { 0x23a4, 0x00b5 }, { 0x23a5, 0x0001 }, { 0x23ba, 0x0094 }
++ }},
++ { 0x39, 18, {
++ { 0x23e1, 0x0000 }, { 0x2012, 0x006f }, { 0x2014, 0x0000 },
++ { 0x201b, 0x0002 }, { 0x201d, 0x0002 }, { 0x201f, 0x00fd },
++ { 0x2021, 0x0001 }, { 0x2022, 0x0010 }, { 0x203d, 0x0005 },
++ { 0x203f, 0x0003 }, { 0x2050, 0x002c }, { 0x2076, 0x000e },
++ { 0x207c, 0x004a }, { 0x2081, 0x0003 }, { 0x2399, 0x0003 },
++ { 0x23a4, 0x00b5 }, { 0x23a5, 0x0001 }, { 0x23ba, 0x0094 }
++ }},
++ { 0x3c, 15, {
++ { 0x23e1, 0x0000 }, { 0x2012, 0x006f }, { 0x2014, 0x0000 },
++ { 0x201b, 0x0001 }, { 0x201d, 0x0001 }, { 0x201f, 0x00fe },
++ { 0x2021, 0x0000 }, { 0x2022, 0x0010 }, { 0x203d, 0x0005 },
++ { 0x203f, 0x0003 }, { 0x2050, 0x002c }, { 0x2076, 0x000e },
++ { 0x207c, 0x004a }, { 0x2081, 0x0003 }, { 0x23ba, 0x008d }
++ }},
++ { 0x3d, 15, {
++ { 0x23e1, 0x0000 }, { 0x2012, 0x006f }, { 0x2014, 0x0000 },
++ { 0x201b, 0x0002 }, { 0x201d, 0x0002 }, { 0x201f, 0x00fd },
++ { 0x2021, 0x0001 }, { 0x2022, 0x0010 }, { 0x203d, 0x0005 },
++ { 0x203f, 0x0003 }, { 0x2050, 0x002c }, { 0x2076, 0x000e },
++ { 0x207c, 0x004a }, { 0x2081, 0x0003 }, { 0x23ba, 0x008d }
++ }}
++};
++
++static void alc298_samsung_v2_enable_amps(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++ static const unsigned short enable_seq[][2] = {
++ { 0x203a, 0x0081 }, { 0x23ff, 0x0001 },
++ };
++ int i, j;
++
++ for (i = 0; i < spec->num_speaker_amps; i++) {
++ alc_write_coef_idx(codec, 0x22, alc298_samsung_v2_amp_desc_tbl[i].nid);
++ for (j = 0; j < ARRAY_SIZE(enable_seq); j++)
++ alc298_samsung_write_coef_pack(codec, enable_seq[j]);
++ codec_dbg(codec, "alc298_samsung_v2: Enabled speaker amp 0x%02x\n",
++ alc298_samsung_v2_amp_desc_tbl[i].nid);
++ }
++}
++
++static void alc298_samsung_v2_disable_amps(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++ static const unsigned short disable_seq[][2] = {
++ { 0x23ff, 0x0000 }, { 0x203a, 0x0080 },
++ };
++ int i, j;
++
++ for (i = 0; i < spec->num_speaker_amps; i++) {
++ alc_write_coef_idx(codec, 0x22, alc298_samsung_v2_amp_desc_tbl[i].nid);
++ for (j = 0; j < ARRAY_SIZE(disable_seq); j++)
++ alc298_samsung_write_coef_pack(codec, disable_seq[j]);
++ codec_dbg(codec, "alc298_samsung_v2: Disabled speaker amp 0x%02x\n",
++ alc298_samsung_v2_amp_desc_tbl[i].nid);
++ }
++}
++
++static void alc298_samsung_v2_playback_hook(struct hda_pcm_stream *hinfo,
++ struct hda_codec *codec,
++ struct snd_pcm_substream *substream,
++ int action)
++{
++ /* Dynamically enable/disable speaker amps before and after playback */
++ if (action == HDA_GEN_PCM_ACT_OPEN)
++ alc298_samsung_v2_enable_amps(codec);
++ if (action == HDA_GEN_PCM_ACT_CLOSE)
++ alc298_samsung_v2_disable_amps(codec);
++}
++
++static void alc298_samsung_v2_init_amps(struct hda_codec *codec,
++ int num_speaker_amps)
++{
++ struct alc_spec *spec = codec->spec;
++ int i, j;
++
++ /* Set spec's num_speaker_amps before doing anything else */
++ spec->num_speaker_amps = num_speaker_amps;
++
++ /* Disable speaker amps before init to prevent any physical damage */
++ alc298_samsung_v2_disable_amps(codec);
++
++ /* Initialize the speaker amps */
++ for (i = 0; i < spec->num_speaker_amps; i++) {
++ alc_write_coef_idx(codec, 0x22, alc298_samsung_v2_amp_desc_tbl[i].nid);
++ for (j = 0; j < alc298_samsung_v2_amp_desc_tbl[i].init_seq_size; j++) {
++ alc298_samsung_write_coef_pack(codec,
++ alc298_samsung_v2_amp_desc_tbl[i].init_seq[j]);
++ }
++ alc_write_coef_idx(codec, 0x89, 0x0);
++ codec_dbg(codec, "alc298_samsung_v2: Initialized speaker amp 0x%02x\n",
++ alc298_samsung_v2_amp_desc_tbl[i].nid);
++ }
++
++ /* register hook to enable speaker amps only when they are needed */
++ spec->gen.pcm_playback_hook = alc298_samsung_v2_playback_hook;
++}
++
++static void alc298_fixup_samsung_amp_v2_2_amps(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_PROBE)
++ alc298_samsung_v2_init_amps(codec, 2);
++}
++
++static void alc298_fixup_samsung_amp_v2_4_amps(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_PROBE)
++ alc298_samsung_v2_init_amps(codec, 4);
++}
+
+ #if IS_REACHABLE(CONFIG_INPUT)
+ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+@@ -7540,7 +7668,8 @@ enum {
+ ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ ALC236_FIXUP_LENOVO_INV_DMIC,
+ ALC298_FIXUP_SAMSUNG_AMP,
+- ALC298_FIXUP_SAMSUNG_AMP2,
++ ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS,
++ ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS,
+ ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -9175,9 +9304,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET
+ },
+- [ALC298_FIXUP_SAMSUNG_AMP2] = {
++ [ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc298_fixup_samsung_amp_v2_2_amps
++ },
++ [ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS] = {
+ .type = HDA_FIXUP_FUNC,
+- .v.func = alc298_fixup_samsung_amp2
++ .v.func = alc298_fixup_samsung_amp_v2_4_amps
+ },
+ [ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
+ .type = HDA_FIXUP_VERBS,
+@@ -10255,6 +10388,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x88dd, "HP Pavilion 15z-ec200", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x890e, "HP 255 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x8919, "HP Pavilion Aero Laptop 13-be0xxx", ALC287_FIXUP_HP_GPIO_LED),
+@@ -10396,6 +10530,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8caf, "HP Elite mt645 G8 Mobile Thin Client", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8cbd, "HP Pavilion Aero Laptop 13-bg0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ SND_PCI_QUIRK(0x103c, 0x8cdd, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8cde, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10557,8 +10692,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP),
+- SND_PCI_QUIRK(0x144d, 0xc1ca, "Samsung Galaxy Book3 Pro 360 (NP960QFG-KB1US)", ALC298_FIXUP_SAMSUNG_AMP2),
+- SND_PCI_QUIRK(0x144d, 0xc1cc, "Samsung Galaxy Book3 Ultra (NT960XFH-XD92G))", ALC298_FIXUP_SAMSUNG_AMP2),
++ SND_PCI_QUIRK(0x144d, 0xc870, "Samsung Galaxy Book2 Pro (NP950XED)", ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS),
++ SND_PCI_QUIRK(0x144d, 0xc886, "Samsung Galaxy Book3 Pro (NP964XFG)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
++ SND_PCI_QUIRK(0x144d, 0xc1ca, "Samsung Galaxy Book3 Pro 360 (NP960QFG)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
++ SND_PCI_QUIRK(0x144d, 0xc1cc, "Samsung Galaxy Book3 Ultra (NT960XFH)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -10757,6 +10894,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x38cd, "Y790 VECO DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38d2, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x38d7, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN),
++ SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+@@ -10789,8 +10927,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1854, 0x0440, "LG CQ6", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+ SND_PCI_QUIRK(0x1854, 0x0441, "LG CQ6 AIO", ALC256_FIXUP_HEADPHONE_AMP_VOL),
++ SND_PCI_QUIRK(0x1854, 0x0488, "LG gram 16 (16Z90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
++ SND_PCI_QUIRK(0x1854, 0x048a, "LG gram 17 (17ZD90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x19e5, 0x3212, "Huawei KLV-WX9 ", ALC256_FIXUP_ACER_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+@@ -10999,7 +11140,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ {.id = ALC298_FIXUP_SAMSUNG_AMP, .name = "alc298-samsung-amp"},
+- {.id = ALC298_FIXUP_SAMSUNG_AMP2, .name = "alc298-samsung-amp2"},
++ {.id = ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS, .name = "alc298-samsung-amp-v2-2-amps"},
++ {.id = ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS, .name = "alc298-samsung-amp-v2-4-amps"},
+ {.id = ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc256-samsung-headphone"},
+ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ {.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+diff --git a/sound/pci/hda/samsung_helper.c b/sound/pci/hda/samsung_helper.c
+deleted file mode 100644
+index a40175b690157b..00000000000000
+--- a/sound/pci/hda/samsung_helper.c
++++ /dev/null
+@@ -1,310 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-or-later
+-/* Helper functions for Samsung Galaxy Book3 audio initialization */
+-
+-struct alc298_samsung_coeff_fixup_desc {
+- unsigned char coeff_idx;
+- unsigned short coeff_value;
+-};
+-
+-struct alc298_samsung_coeff_seq_desc {
+- unsigned short coeff_0x23;
+- unsigned short coeff_0x24;
+- unsigned short coeff_0x25;
+- unsigned short coeff_0x26;
+-};
+-
+-
+-static inline void alc298_samsung_write_coef_pack2(struct hda_codec *codec,
+- const struct alc298_samsung_coeff_seq_desc *seq)
+-{
+- int i;
+-
+- for (i = 0; i < 100; i++) {
+- if ((alc_read_coef_idx(codec, 0x26) & 0x0010) == 0)
+- break;
+-
+- usleep_range(500, 1000);
+- }
+-
+- alc_write_coef_idx(codec, 0x23, seq->coeff_0x23);
+- alc_write_coef_idx(codec, 0x24, seq->coeff_0x24);
+- alc_write_coef_idx(codec, 0x25, seq->coeff_0x25);
+- alc_write_coef_idx(codec, 0x26, seq->coeff_0x26);
+-}
+-
+-static inline void alc298_samsung_write_coef_pack_seq(
+- struct hda_codec *codec,
+- unsigned char target,
+- const struct alc298_samsung_coeff_seq_desc seq[],
+- int count)
+-{
+- alc_write_coef_idx(codec, 0x22, target);
+- for (int i = 0; i < count; i++)
+- alc298_samsung_write_coef_pack2(codec, &seq[i]);
+-}
+-
+-static void alc298_fixup_samsung_amp2(struct hda_codec *codec,
+- const struct hda_fixup *fix, int action)
+-{
+- int i;
+- static const struct alc298_samsung_coeff_fixup_desc fixups1[] = {
+- { 0x99, 0x8000 }, { 0x82, 0x4408 }, { 0x32, 0x3f00 }, { 0x0e, 0x6f80 },
+- { 0x10, 0x0e21 }, { 0x55, 0x8000 }, { 0x08, 0x2fcf }, { 0x08, 0x2fcf },
+- { 0x2d, 0xc020 }, { 0x19, 0x0017 }, { 0x50, 0x1000 }, { 0x0e, 0x6f80 },
+- { 0x08, 0x2fcf }, { 0x80, 0x0011 }, { 0x2b, 0x0c10 }, { 0x2d, 0xc020 },
+- { 0x03, 0x0042 }, { 0x0f, 0x0062 }, { 0x08, 0x2fcf },
+- };
+-
+- static const struct alc298_samsung_coeff_seq_desc amp_0x38[] = {
+- { 0x2000, 0x0000, 0x0001, 0xb011 }, { 0x23ff, 0x0000, 0x0000, 0xb011 },
+- { 0x203a, 0x0000, 0x0080, 0xb011 }, { 0x23e1, 0x0000, 0x0000, 0xb011 },
+- { 0x2012, 0x0000, 0x006f, 0xb011 }, { 0x2014, 0x0000, 0x0000, 0xb011 },
+- { 0x201b, 0x0000, 0x0001, 0xb011 }, { 0x201d, 0x0000, 0x0001, 0xb011 },
+- { 0x201f, 0x0000, 0x00fe, 0xb011 }, { 0x2021, 0x0000, 0x0000, 0xb011 },
+- { 0x2022, 0x0000, 0x0010, 0xb011 }, { 0x203d, 0x0000, 0x0005, 0xb011 },
+- { 0x203f, 0x0000, 0x0003, 0xb011 }, { 0x2050, 0x0000, 0x002c, 0xb011 },
+- { 0x2076, 0x0000, 0x000e, 0xb011 }, { 0x207c, 0x0000, 0x004a, 0xb011 },
+- { 0x2081, 0x0000, 0x0003, 0xb011 }, { 0x2399, 0x0000, 0x0003, 0xb011 },
+- { 0x23a4, 0x0000, 0x00b5, 0xb011 }, { 0x23a5, 0x0000, 0x0001, 0xb011 },
+- { 0x23ba, 0x0000, 0x0094, 0xb011 }, { 0x2100, 0x00d0, 0x950e, 0xb017 },
+- { 0x2104, 0x0061, 0xd4e2, 0xb017 }, { 0x2108, 0x00d0, 0x950e, 0xb017 },
+- { 0x210c, 0x0075, 0xf4e2, 0xb017 }, { 0x2110, 0x00b4, 0x4b0d, 0xb017 },
+- { 0x2114, 0x000a, 0x1000, 0xb017 }, { 0x2118, 0x0015, 0x2000, 0xb017 },
+- { 0x211c, 0x000a, 0x1000, 0xb017 }, { 0x2120, 0x0075, 0xf4e2, 0xb017 },
+- { 0x2124, 0x00b4, 0x4b0d, 0xb017 }, { 0x2128, 0x0000, 0x0010, 0xb017 },
+- { 0x212c, 0x0000, 0x0000, 0xb017 }, { 0x2130, 0x0000, 0x0000, 0xb017 },
+- { 0x2134, 0x0000, 0x0000, 0xb017 }, { 0x2138, 0x0000, 0x0000, 0xb017 },
+- { 0x213c, 0x0000, 0x0010, 0xb017 }, { 0x2140, 0x0000, 0x0000, 0xb017 },
+- { 0x2144, 0x0000, 0x0000, 0xb017 }, { 0x2148, 0x0000, 0x0000, 0xb017 },
+- { 0x214c, 0x0000, 0x0000, 0xb017 }, { 0x2150, 0x0000, 0x0010, 0xb017 },
+- { 0x2154, 0x0000, 0x0000, 0xb017 }, { 0x2158, 0x0000, 0x0000, 0xb017 },
+- { 0x215c, 0x0000, 0x0000, 0xb017 }, { 0x2160, 0x0000, 0x0000, 0xb017 },
+- { 0x2164, 0x0000, 0x0010, 0xb017 }, { 0x2168, 0x0000, 0x0000, 0xb017 },
+- { 0x216c, 0x0000, 0x0000, 0xb017 }, { 0x2170, 0x0000, 0x0000, 0xb017 },
+- { 0x2174, 0x0000, 0x0000, 0xb017 }, { 0x2178, 0x0000, 0x0010, 0xb017 },
+- { 0x217c, 0x0000, 0x0000, 0xb017 }, { 0x2180, 0x0000, 0x0000, 0xb017 },
+- { 0x2184, 0x0000, 0x0000, 0xb017 }, { 0x2188, 0x0000, 0x0000, 0xb017 },
+- { 0x218c, 0x0064, 0x5800, 0xb017 }, { 0x2190, 0x00c8, 0xb000, 0xb017 },
+- { 0x2194, 0x0064, 0x5800, 0xb017 }, { 0x2198, 0x003d, 0x5be7, 0xb017 },
+- { 0x219c, 0x0054, 0x060a, 0xb017 }, { 0x21a0, 0x00c8, 0xa310, 0xb017 },
+- { 0x21a4, 0x0029, 0x4de5, 0xb017 }, { 0x21a8, 0x0032, 0x420c, 0xb017 },
+- { 0x21ac, 0x0029, 0x4de5, 0xb017 }, { 0x21b0, 0x00fa, 0xe50c, 0xb017 },
+- { 0x21b4, 0x0000, 0x0010, 0xb017 }, { 0x21b8, 0x0000, 0x0000, 0xb017 },
+- { 0x21bc, 0x0000, 0x0000, 0xb017 }, { 0x21c0, 0x0000, 0x0000, 0xb017 },
+- { 0x21c4, 0x0000, 0x0000, 0xb017 }, { 0x21c8, 0x0056, 0xc50f, 0xb017 },
+- { 0x21cc, 0x007b, 0xd7e1, 0xb017 }, { 0x21d0, 0x0077, 0xa70e, 0xb017 },
+- { 0x21d4, 0x00e0, 0xbde1, 0xb017 }, { 0x21d8, 0x0032, 0x530e, 0xb017 },
+- { 0x2204, 0x00fb, 0x7e0f, 0xb017 }, { 0x2208, 0x000b, 0x02e1, 0xb017 },
+- { 0x220c, 0x00fb, 0x7e0f, 0xb017 }, { 0x2210, 0x00d5, 0x17e1, 0xb017 },
+- { 0x2214, 0x00c0, 0x130f, 0xb017 }, { 0x2218, 0x00e5, 0x0a00, 0xb017 },
+- { 0x221c, 0x00cb, 0x1500, 0xb017 }, { 0x2220, 0x00e5, 0x0a00, 0xb017 },
+- { 0x2224, 0x00d5, 0x17e1, 0xb017 }, { 0x2228, 0x00c0, 0x130f, 0xb017 },
+- { 0x222c, 0x00f5, 0xdb0e, 0xb017 }, { 0x2230, 0x0017, 0x48e2, 0xb017 },
+- { 0x2234, 0x00f5, 0xdb0e, 0xb017 }, { 0x2238, 0x00ef, 0x5ce2, 0xb017 },
+- { 0x223c, 0x00c1, 0xcc0d, 0xb017 }, { 0x2240, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x2244, 0x0017, 0x48e2, 0xb017 }, { 0x2248, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x224c, 0x00ef, 0x5ce2, 0xb017 }, { 0x2250, 0x00c1, 0xcc0d, 0xb017 },
+- { 0x2254, 0x00f5, 0xdb0e, 0xb017 }, { 0x2258, 0x0017, 0x48e2, 0xb017 },
+- { 0x225c, 0x00f5, 0xdb0e, 0xb017 }, { 0x2260, 0x00ef, 0x5ce2, 0xb017 },
+- { 0x2264, 0x00c1, 0xcc0d, 0xb017 }, { 0x2268, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x226c, 0x0017, 0x48e2, 0xb017 }, { 0x2270, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x2274, 0x00ef, 0x5ce2, 0xb017 }, { 0x2278, 0x00c1, 0xcc0d, 0xb017 },
+- { 0x227c, 0x00f5, 0xdb0e, 0xb017 }, { 0x2280, 0x0017, 0x48e2, 0xb017 },
+- { 0x2284, 0x00f5, 0xdb0e, 0xb017 }, { 0x2288, 0x00ef, 0x5ce2, 0xb017 },
+- { 0x228c, 0x00c1, 0xcc0d, 0xb017 }, { 0x22cc, 0x00e8, 0x8d00, 0xb017 },
+- { 0x22d0, 0x0000, 0x0000, 0xb017 }, { 0x22d4, 0x0018, 0x72ff, 0xb017 },
+- { 0x22d8, 0x00ce, 0x25e1, 0xb017 }, { 0x22dc, 0x002f, 0xe40e, 0xb017 },
+- { 0x238e, 0x0000, 0x0099, 0xb011 }, { 0x238f, 0x0000, 0x0011, 0xb011 },
+- { 0x2390, 0x0000, 0x0056, 0xb011 }, { 0x2391, 0x0000, 0x0004, 0xb011 },
+- { 0x2392, 0x0000, 0x00bb, 0xb011 }, { 0x2393, 0x0000, 0x006d, 0xb011 },
+- { 0x2394, 0x0000, 0x0010, 0xb011 }, { 0x2395, 0x0000, 0x0064, 0xb011 },
+- { 0x2396, 0x0000, 0x00b6, 0xb011 }, { 0x2397, 0x0000, 0x0028, 0xb011 },
+- { 0x2398, 0x0000, 0x000b, 0xb011 }, { 0x239a, 0x0000, 0x0099, 0xb011 },
+- { 0x239b, 0x0000, 0x000d, 0xb011 }, { 0x23a6, 0x0000, 0x0064, 0xb011 },
+- { 0x23a7, 0x0000, 0x0078, 0xb011 }, { 0x23b9, 0x0000, 0x0000, 0xb011 },
+- { 0x23e0, 0x0000, 0x0021, 0xb011 }, { 0x23e1, 0x0000, 0x0001, 0xb011 },
+- };
+-
+- static const struct alc298_samsung_coeff_seq_desc amp_0x39[] = {
+- { 0x2000, 0x0000, 0x0001, 0xb011 }, { 0x23ff, 0x0000, 0x0000, 0xb011 },
+- { 0x203a, 0x0000, 0x0080, 0xb011 }, { 0x23e1, 0x0000, 0x0000, 0xb011 },
+- { 0x2012, 0x0000, 0x006f, 0xb011 }, { 0x2014, 0x0000, 0x0000, 0xb011 },
+- { 0x201b, 0x0000, 0x0002, 0xb011 }, { 0x201d, 0x0000, 0x0002, 0xb011 },
+- { 0x201f, 0x0000, 0x00fd, 0xb011 }, { 0x2021, 0x0000, 0x0001, 0xb011 },
+- { 0x2022, 0x0000, 0x0010, 0xb011 }, { 0x203d, 0x0000, 0x0005, 0xb011 },
+- { 0x203f, 0x0000, 0x0003, 0xb011 }, { 0x2050, 0x0000, 0x002c, 0xb011 },
+- { 0x2076, 0x0000, 0x000e, 0xb011 }, { 0x207c, 0x0000, 0x004a, 0xb011 },
+- { 0x2081, 0x0000, 0x0003, 0xb011 }, { 0x2399, 0x0000, 0x0003, 0xb011 },
+- { 0x23a4, 0x0000, 0x00b5, 0xb011 }, { 0x23a5, 0x0000, 0x0001, 0xb011 },
+- { 0x23ba, 0x0000, 0x0094, 0xb011 }, { 0x2100, 0x00d0, 0x950e, 0xb017 },
+- { 0x2104, 0x0061, 0xd4e2, 0xb017 }, { 0x2108, 0x00d0, 0x950e, 0xb017 },
+- { 0x210c, 0x0075, 0xf4e2, 0xb017 }, { 0x2110, 0x00b4, 0x4b0d, 0xb017 },
+- { 0x2114, 0x000a, 0x1000, 0xb017 }, { 0x2118, 0x0015, 0x2000, 0xb017 },
+- { 0x211c, 0x000a, 0x1000, 0xb017 }, { 0x2120, 0x0075, 0xf4e2, 0xb017 },
+- { 0x2124, 0x00b4, 0x4b0d, 0xb017 }, { 0x2128, 0x0000, 0x0010, 0xb017 },
+- { 0x212c, 0x0000, 0x0000, 0xb017 }, { 0x2130, 0x0000, 0x0000, 0xb017 },
+- { 0x2134, 0x0000, 0x0000, 0xb017 }, { 0x2138, 0x0000, 0x0000, 0xb017 },
+- { 0x213c, 0x0000, 0x0010, 0xb017 }, { 0x2140, 0x0000, 0x0000, 0xb017 },
+- { 0x2144, 0x0000, 0x0000, 0xb017 }, { 0x2148, 0x0000, 0x0000, 0xb017 },
+- { 0x214c, 0x0000, 0x0000, 0xb017 }, { 0x2150, 0x0000, 0x0010, 0xb017 },
+- { 0x2154, 0x0000, 0x0000, 0xb017 }, { 0x2158, 0x0000, 0x0000, 0xb017 },
+- { 0x215c, 0x0000, 0x0000, 0xb017 }, { 0x2160, 0x0000, 0x0000, 0xb017 },
+- { 0x2164, 0x0000, 0x0010, 0xb017 }, { 0x2168, 0x0000, 0x0000, 0xb017 },
+- { 0x216c, 0x0000, 0x0000, 0xb017 }, { 0x2170, 0x0000, 0x0000, 0xb017 },
+- { 0x2174, 0x0000, 0x0000, 0xb017 }, { 0x2178, 0x0000, 0x0010, 0xb017 },
+- { 0x217c, 0x0000, 0x0000, 0xb017 }, { 0x2180, 0x0000, 0x0000, 0xb017 },
+- { 0x2184, 0x0000, 0x0000, 0xb017 }, { 0x2188, 0x0000, 0x0000, 0xb017 },
+- { 0x218c, 0x0064, 0x5800, 0xb017 }, { 0x2190, 0x00c8, 0xb000, 0xb017 },
+- { 0x2194, 0x0064, 0x5800, 0xb017 }, { 0x2198, 0x003d, 0x5be7, 0xb017 },
+- { 0x219c, 0x0054, 0x060a, 0xb017 }, { 0x21a0, 0x00c8, 0xa310, 0xb017 },
+- { 0x21a4, 0x0029, 0x4de5, 0xb017 }, { 0x21a8, 0x0032, 0x420c, 0xb017 },
+- { 0x21ac, 0x0029, 0x4de5, 0xb017 }, { 0x21b0, 0x00fa, 0xe50c, 0xb017 },
+- { 0x21b4, 0x0000, 0x0010, 0xb017 }, { 0x21b8, 0x0000, 0x0000, 0xb017 },
+- { 0x21bc, 0x0000, 0x0000, 0xb017 }, { 0x21c0, 0x0000, 0x0000, 0xb017 },
+- { 0x21c4, 0x0000, 0x0000, 0xb017 }, { 0x21c8, 0x0056, 0xc50f, 0xb017 },
+- { 0x21cc, 0x007b, 0xd7e1, 0xb017 }, { 0x21d0, 0x0077, 0xa70e, 0xb017 },
+- { 0x21d4, 0x00e0, 0xbde1, 0xb017 }, { 0x21d8, 0x0032, 0x530e, 0xb017 },
+- { 0x2204, 0x00fb, 0x7e0f, 0xb017 }, { 0x2208, 0x000b, 0x02e1, 0xb017 },
+- { 0x220c, 0x00fb, 0x7e0f, 0xb017 }, { 0x2210, 0x00d5, 0x17e1, 0xb017 },
+- { 0x2214, 0x00c0, 0x130f, 0xb017 }, { 0x2218, 0x00e5, 0x0a00, 0xb017 },
+- { 0x221c, 0x00cb, 0x1500, 0xb017 }, { 0x2220, 0x00e5, 0x0a00, 0xb017 },
+- { 0x2224, 0x00d5, 0x17e1, 0xb017 }, { 0x2228, 0x00c0, 0x130f, 0xb017 },
+- { 0x222c, 0x00f5, 0xdb0e, 0xb017 }, { 0x2230, 0x0017, 0x48e2, 0xb017 },
+- { 0x2234, 0x00f5, 0xdb0e, 0xb017 }, { 0x2238, 0x00ef, 0x5ce2, 0xb017 },
+- { 0x223c, 0x00c1, 0xcc0d, 0xb017 }, { 0x2240, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x2244, 0x0017, 0x48e2, 0xb017 }, { 0x2248, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x224c, 0x00ef, 0x5ce2, 0xb017 }, { 0x2250, 0x00c1, 0xcc0d, 0xb017 },
+- { 0x2254, 0x00f5, 0xdb0e, 0xb017 }, { 0x2258, 0x0017, 0x48e2, 0xb017 },
+- { 0x225c, 0x00f5, 0xdb0e, 0xb017 }, { 0x2260, 0x00ef, 0x5ce2, 0xb017 },
+- { 0x2264, 0x00c1, 0xcc0d, 0xb017 }, { 0x2268, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x226c, 0x0017, 0x48e2, 0xb017 }, { 0x2270, 0x00f5, 0xdb0e, 0xb017 },
+- { 0x2274, 0x00ef, 0x5ce2, 0xb017 }, { 0x2278, 0x00c1, 0xcc0d, 0xb017 },
+- { 0x227c, 0x00f5, 0xdb0e, 0xb017 }, { 0x2280, 0x0017, 0x48e2, 0xb017 },
+- { 0x2284, 0x00f5, 0xdb0e, 0xb017 }, { 0x2288, 0x00ef, 0x5ce2, 0xb017 },
+- { 0x228c, 0x00c1, 0xcc0d, 0xb017 }, { 0x22cc, 0x00e8, 0x8d00, 0xb017 },
+- { 0x22d0, 0x0000, 0x0000, 0xb017 }, { 0x22d4, 0x0018, 0x72ff, 0xb017 },
+- { 0x22d8, 0x00ce, 0x25e1, 0xb017 }, { 0x22dc, 0x002f, 0xe40e, 0xb017 },
+- { 0x238e, 0x0000, 0x0099, 0xb011 }, { 0x238f, 0x0000, 0x0011, 0xb011 },
+- { 0x2390, 0x0000, 0x0056, 0xb011 }, { 0x2391, 0x0000, 0x0004, 0xb011 },
+- { 0x2392, 0x0000, 0x00bb, 0xb011 }, { 0x2393, 0x0000, 0x006d, 0xb011 },
+- { 0x2394, 0x0000, 0x0010, 0xb011 }, { 0x2395, 0x0000, 0x0064, 0xb011 },
+- { 0x2396, 0x0000, 0x00b6, 0xb011 }, { 0x2397, 0x0000, 0x0028, 0xb011 },
+- { 0x2398, 0x0000, 0x000b, 0xb011 }, { 0x239a, 0x0000, 0x0099, 0xb011 },
+- { 0x239b, 0x0000, 0x000d, 0xb011 }, { 0x23a6, 0x0000, 0x0064, 0xb011 },
+- { 0x23a7, 0x0000, 0x0078, 0xb011 }, { 0x23b9, 0x0000, 0x0000, 0xb011 },
+- { 0x23e0, 0x0000, 0x0021, 0xb011 }, { 0x23e1, 0x0000, 0x0001, 0xb011 },
+- };
+-
+- static const struct alc298_samsung_coeff_seq_desc amp_0x3c[] = {
+- { 0x2000, 0x0000, 0x0001, 0xb011 }, { 0x23ff, 0x0000, 0x0000, 0xb011 },
+- { 0x203a, 0x0000, 0x0080, 0xb011 }, { 0x23e1, 0x0000, 0x0000, 0xb011 },
+- { 0x2012, 0x0000, 0x006f, 0xb011 }, { 0x2014, 0x0000, 0x0000, 0xb011 },
+- { 0x201b, 0x0000, 0x0001, 0xb011 }, { 0x201d, 0x0000, 0x0001, 0xb011 },
+- { 0x201f, 0x0000, 0x00fe, 0xb011 }, { 0x2021, 0x0000, 0x0000, 0xb011 },
+- { 0x2022, 0x0000, 0x0010, 0xb011 }, { 0x203d, 0x0000, 0x0005, 0xb011 },
+- { 0x203f, 0x0000, 0x0003, 0xb011 }, { 0x2050, 0x0000, 0x002c, 0xb011 },
+- { 0x2076, 0x0000, 0x000e, 0xb011 }, { 0x207c, 0x0000, 0x004a, 0xb011 },
+- { 0x2081, 0x0000, 0x0003, 0xb011 }, { 0x23ba, 0x0000, 0x008d, 0xb011 },
+- { 0x2128, 0x0005, 0x460d, 0xb017 }, { 0x212c, 0x00f6, 0x73e5, 0xb017 },
+- { 0x2130, 0x0005, 0x460d, 0xb017 }, { 0x2134, 0x00c0, 0xe9e5, 0xb017 },
+- { 0x2138, 0x00d5, 0x010b, 0xb017 }, { 0x213c, 0x009d, 0x7809, 0xb017 },
+- { 0x2140, 0x00c5, 0x0eed, 0xb017 }, { 0x2144, 0x009d, 0x7809, 0xb017 },
+- { 0x2148, 0x00c4, 0x4ef0, 0xb017 }, { 0x214c, 0x003a, 0x3106, 0xb017 },
+- { 0x2150, 0x00af, 0x750e, 0xb017 }, { 0x2154, 0x008c, 0x1ff1, 0xb017 },
+- { 0x2158, 0x009e, 0x360c, 0xb017 }, { 0x215c, 0x008c, 0x1ff1, 0xb017 },
+- { 0x2160, 0x004d, 0xac0a, 0xb017 }, { 0x2164, 0x007d, 0xa00f, 0xb017 },
+- { 0x2168, 0x00e1, 0x9ce3, 0xb017 }, { 0x216c, 0x00e8, 0x590e, 0xb017 },
+- { 0x2170, 0x00e1, 0x9ce3, 0xb017 }, { 0x2174, 0x0066, 0xfa0d, 0xb017 },
+- { 0x2178, 0x0000, 0x0010, 0xb017 }, { 0x217c, 0x0000, 0x0000, 0xb017 },
+- { 0x2180, 0x0000, 0x0000, 0xb017 }, { 0x2184, 0x0000, 0x0000, 0xb017 },
+- { 0x2188, 0x0000, 0x0000, 0xb017 }, { 0x218c, 0x0000, 0x0010, 0xb017 },
+- { 0x2190, 0x0000, 0x0000, 0xb017 }, { 0x2194, 0x0000, 0x0000, 0xb017 },
+- { 0x2198, 0x0000, 0x0000, 0xb017 }, { 0x219c, 0x0000, 0x0000, 0xb017 },
+- { 0x21a0, 0x0000, 0x0010, 0xb017 }, { 0x21a4, 0x0000, 0x0000, 0xb017 },
+- { 0x21a8, 0x0000, 0x0000, 0xb017 }, { 0x21ac, 0x0000, 0x0000, 0xb017 },
+- { 0x21b0, 0x0000, 0x0000, 0xb017 }, { 0x21b4, 0x0000, 0x0010, 0xb017 },
+- { 0x21b8, 0x0000, 0x0000, 0xb017 }, { 0x21bc, 0x0000, 0x0000, 0xb017 },
+- { 0x21c0, 0x0000, 0x0000, 0xb017 }, { 0x21c4, 0x0000, 0x0000, 0xb017 },
+- { 0x23b9, 0x0000, 0x0000, 0xb011 }, { 0x23e0, 0x0000, 0x0020, 0xb011 },
+- { 0x23e1, 0x0000, 0x0001, 0xb011 },
+- };
+-
+- static const struct alc298_samsung_coeff_seq_desc amp_0x3d[] = {
+- { 0x2000, 0x0000, 0x0001, 0xb011 }, { 0x23ff, 0x0000, 0x0000, 0xb011 },
+- { 0x203a, 0x0000, 0x0080, 0xb011 }, { 0x23e1, 0x0000, 0x0000, 0xb011 },
+- { 0x2012, 0x0000, 0x006f, 0xb011 }, { 0x2014, 0x0000, 0x0000, 0xb011 },
+- { 0x201b, 0x0000, 0x0002, 0xb011 }, { 0x201d, 0x0000, 0x0002, 0xb011 },
+- { 0x201f, 0x0000, 0x00fd, 0xb011 }, { 0x2021, 0x0000, 0x0001, 0xb011 },
+- { 0x2022, 0x0000, 0x0010, 0xb011 }, { 0x203d, 0x0000, 0x0005, 0xb011 },
+- { 0x203f, 0x0000, 0x0003, 0xb011 }, { 0x2050, 0x0000, 0x002c, 0xb011 },
+- { 0x2076, 0x0000, 0x000e, 0xb011 }, { 0x207c, 0x0000, 0x004a, 0xb011 },
+- { 0x2081, 0x0000, 0x0003, 0xb011 }, { 0x23ba, 0x0000, 0x008d, 0xb011 },
+- { 0x2128, 0x0005, 0x460d, 0xb017 }, { 0x212c, 0x00f6, 0x73e5, 0xb017 },
+- { 0x2130, 0x0005, 0x460d, 0xb017 }, { 0x2134, 0x00c0, 0xe9e5, 0xb017 },
+- { 0x2138, 0x00d5, 0x010b, 0xb017 }, { 0x213c, 0x009d, 0x7809, 0xb017 },
+- { 0x2140, 0x00c5, 0x0eed, 0xb017 }, { 0x2144, 0x009d, 0x7809, 0xb017 },
+- { 0x2148, 0x00c4, 0x4ef0, 0xb017 }, { 0x214c, 0x003a, 0x3106, 0xb017 },
+- { 0x2150, 0x00af, 0x750e, 0xb017 }, { 0x2154, 0x008c, 0x1ff1, 0xb017 },
+- { 0x2158, 0x009e, 0x360c, 0xb017 }, { 0x215c, 0x008c, 0x1ff1, 0xb017 },
+- { 0x2160, 0x004d, 0xac0a, 0xb017 }, { 0x2164, 0x007d, 0xa00f, 0xb017 },
+- { 0x2168, 0x00e1, 0x9ce3, 0xb017 }, { 0x216c, 0x00e8, 0x590e, 0xb017 },
+- { 0x2170, 0x00e1, 0x9ce3, 0xb017 }, { 0x2174, 0x0066, 0xfa0d, 0xb017 },
+- { 0x2178, 0x0000, 0x0010, 0xb017 }, { 0x217c, 0x0000, 0x0000, 0xb017 },
+- { 0x2180, 0x0000, 0x0000, 0xb017 }, { 0x2184, 0x0000, 0x0000, 0xb017 },
+- { 0x2188, 0x0000, 0x0000, 0xb017 }, { 0x218c, 0x0000, 0x0010, 0xb017 },
+- { 0x2190, 0x0000, 0x0000, 0xb017 }, { 0x2194, 0x0000, 0x0000, 0xb017 },
+- { 0x2198, 0x0000, 0x0000, 0xb017 }, { 0x219c, 0x0000, 0x0000, 0xb017 },
+- { 0x21a0, 0x0000, 0x0010, 0xb017 }, { 0x21a4, 0x0000, 0x0000, 0xb017 },
+- { 0x21a8, 0x0000, 0x0000, 0xb017 }, { 0x21ac, 0x0000, 0x0000, 0xb017 },
+- { 0x21b0, 0x0000, 0x0000, 0xb017 }, { 0x21b4, 0x0000, 0x0010, 0xb017 },
+- { 0x21b8, 0x0000, 0x0000, 0xb017 }, { 0x21bc, 0x0000, 0x0000, 0xb017 },
+- { 0x21c0, 0x0000, 0x0000, 0xb017 }, { 0x21c4, 0x0000, 0x0000, 0xb017 },
+- { 0x23b9, 0x0000, 0x0000, 0xb011 }, { 0x23e0, 0x0000, 0x0020, 0xb011 },
+- { 0x23e1, 0x0000, 0x0001, 0xb011 },
+- };
+-
+- static const struct alc298_samsung_coeff_seq_desc amp_seq1[] = {
+- { 0x23ff, 0x0000, 0x0000, 0xb011 }, { 0x203a, 0x0000, 0x0080, 0xb011 },
+- };
+-
+- static const struct alc298_samsung_coeff_fixup_desc fixups2[] = {
+- { 0x4f, 0xb029 }, { 0x05, 0x2be0 }, { 0x30, 0x2421 },
+- };
+-
+-
+- static const struct alc298_samsung_coeff_seq_desc amp_seq2[] = {
+- { 0x203a, 0x0000, 0x0081, 0xb011 }, { 0x23ff, 0x0000, 0x0001, 0xb011 },
+- };
+-
+- if (action != HDA_FIXUP_ACT_INIT)
+- return;
+-
+- // First set of fixups
+- for (i = 0; i < ARRAY_SIZE(fixups1); i++)
+- alc_write_coef_idx(codec, fixups1[i].coeff_idx, fixups1[i].coeff_value);
+-
+- // First set of writes
+- alc298_samsung_write_coef_pack_seq(codec, 0x38, amp_0x38, ARRAY_SIZE(amp_0x38));
+- alc298_samsung_write_coef_pack_seq(codec, 0x39, amp_0x39, ARRAY_SIZE(amp_0x39));
+- alc298_samsung_write_coef_pack_seq(codec, 0x3c, amp_0x3c, ARRAY_SIZE(amp_0x3c));
+- alc298_samsung_write_coef_pack_seq(codec, 0x3d, amp_0x3d, ARRAY_SIZE(amp_0x3d));
+-
+- // Second set of writes
+- alc298_samsung_write_coef_pack_seq(codec, 0x38, amp_seq1, ARRAY_SIZE(amp_seq1));
+- alc298_samsung_write_coef_pack_seq(codec, 0x39, amp_seq1, ARRAY_SIZE(amp_seq1));
+- alc298_samsung_write_coef_pack_seq(codec, 0x3c, amp_seq1, ARRAY_SIZE(amp_seq1));
+- alc298_samsung_write_coef_pack_seq(codec, 0x3d, amp_seq1, ARRAY_SIZE(amp_seq1));
+-
+- // Second set of fixups
+- for (i = 0; i < ARRAY_SIZE(fixups2); i++)
+- alc_write_coef_idx(codec, fixups2[i].coeff_idx, fixups2[i].coeff_value);
+-
+- // Third set of writes
+- alc298_samsung_write_coef_pack_seq(codec, 0x38, amp_seq2, ARRAY_SIZE(amp_seq2));
+- alc298_samsung_write_coef_pack_seq(codec, 0x39, amp_seq2, ARRAY_SIZE(amp_seq2));
+- alc298_samsung_write_coef_pack_seq(codec, 0x3c, amp_seq2, ARRAY_SIZE(amp_seq2));
+- alc298_samsung_write_coef_pack_seq(codec, 0x3d, amp_seq2, ARRAY_SIZE(amp_seq2));
+-
+- // Final fixup
+- alc_write_coef_idx(codec, 0x10, 0x0F21);
+-}
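
The file removed above carried the per-device coefficient tables for the old ALC298_FIXUP_SAMSUNG_AMP2 fixup; the patch_realtek.c hunks earlier in this patch replace it with the parameterized ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS / _V2_4_AMPS fixups. The core of the deleted helper is a poll-then-write handshake: wait (at most 100 polls) for a busy bit in coef 0x26 to clear, then push one packet through coefs 0x23..0x26. A minimal userspace sketch of that pattern, with read_coef()/write_coef() as stand-ins for the driver's alc_read_coef_idx()/alc_write_coef_idx():

    #include <stdio.h>
    #include <unistd.h>

    static unsigned int regs[0x100]; /* stand-in coef space; the real helper talks to the codec */

    static unsigned int read_coef(unsigned char idx) { return regs[idx]; }
    static void write_coef(unsigned char idx, unsigned int val) { regs[idx] = val; }

    #define COEF_BUSY 0x0010 /* busy bit polled in coef 0x26 before each packet */

    /* Bounded poll, then push one 4-register packet, as the deleted helper did. */
    static void write_coef_packet(unsigned short c23, unsigned short c24,
                                  unsigned short c25, unsigned short c26)
    {
        int i;

        for (i = 0; i < 100; i++) {     /* give up after 100 polls */
            if (!(read_coef(0x26) & COEF_BUSY))
                break;
            usleep(500);                /* kernel version uses usleep_range(500, 1000) */
        }
        write_coef(0x23, c23);
        write_coef(0x24, c24);
        write_coef(0x25, c25);
        write_coef(0x26, c26);
    }

    int main(void)
    {
        write_coef_packet(0x2000, 0x0000, 0x0001, 0xb011);
        printf("coef 0x26 = 0x%04x\n", read_coef(0x26));
        return 0;
    }
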
+diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
+index e7d1b43471a291..713ca262a0e979 100644
+--- a/sound/pci/rme9652/hdsp.c
++++ b/sound/pci/rme9652/hdsp.c
+@@ -1298,8 +1298,10 @@ static int snd_hdsp_midi_output_possible (struct hdsp *hdsp, int id)
+
+ static void snd_hdsp_flush_midi_input (struct hdsp *hdsp, int id)
+ {
+- while (snd_hdsp_midi_input_available (hdsp, id))
+- snd_hdsp_midi_read_byte (hdsp, id);
++ int count = 256;
++
++ while (snd_hdsp_midi_input_available(hdsp, id) && --count)
++ snd_hdsp_midi_read_byte(hdsp, id);
+ }
+
+ static int snd_hdsp_midi_output_write (struct hdsp_midi *hmidi)
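
The hunk above puts an upper bound on a flush loop that previously trusted snd_hdsp_midi_input_available() to eventually report an empty FIFO; a wedged device whose status bit never clears would spin the CPU forever. With count initialized to 256, the --count predecrement stops the drain after at most 255 reads per call. The same fix is applied to hdspm.c just below. A self-contained sketch of the bounded-drain pattern, with input_available()/read_byte() as stand-ins:

    #include <stdio.h>

    static int fifo_level = 1000; /* pretend FIFO; a wedged device might never reach 0 */

    static int input_available(void) { return fifo_level > 0; }
    static void read_byte(void) { if (fifo_level > 0) fifo_level--; }

    static void flush_input(void)
    {
        int count = 256; /* upper bound: at most 255 reads, then give up */

        while (input_available() && --count)
            read_byte();
    }

    int main(void)
    {
        flush_input();
        printf("%d byte(s) left after one bounded flush\n", fifo_level);
        return 0;
    }
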
+diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
+index 267c7848974aee..74215f57f4fc9d 100644
+--- a/sound/pci/rme9652/hdspm.c
++++ b/sound/pci/rme9652/hdspm.c
+@@ -1838,8 +1838,10 @@ static inline int snd_hdspm_midi_output_possible (struct hdspm *hdspm, int id)
+
+ static void snd_hdspm_flush_midi_input(struct hdspm *hdspm, int id)
+ {
+- while (snd_hdspm_midi_input_available (hdspm, id))
+- snd_hdspm_midi_read_byte (hdspm, id);
++ int count = 256;
++
++ while (snd_hdspm_midi_input_available(hdspm, id) && --count)
++ snd_hdspm_midi_read_byte(hdspm, id);
+ }
+
+ static int snd_hdspm_midi_output_write (struct hdspm_midi *hmidi)
+diff --git a/sound/soc/atmel/mchp-pdmc.c b/sound/soc/atmel/mchp-pdmc.c
+index dcc4e14b3dde27..206bbb5aaab5d9 100644
+--- a/sound/soc/atmel/mchp-pdmc.c
++++ b/sound/soc/atmel/mchp-pdmc.c
+@@ -285,6 +285,9 @@ static int mchp_pdmc_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ if (!substream)
+ return -ENODEV;
+
++ if (!substream->runtime)
++ return 0; /* just for avoiding error from alsactl restore */
++
+ map = mchp_pdmc_chmap_get(substream, info);
+ if (!map)
+ return -EINVAL;
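
The guard added above covers a control .put() arriving while no PCM stream is open: alsactl restore writes saved values at boot, before substream->runtime exists, and the channel-map lookup below it depends on a running stream. Returning 0 (no change) keeps the restore path quiet instead of failing it. A sketch of the shape, with hypothetical types standing in for the ALSA ones:

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>

    struct runtime { int channels; };
    struct substream { struct runtime *runtime; };

    /* Mirrors the fixed put handler: tolerate a not-yet-open stream. */
    static int chmap_put(struct substream *ss, int first_channel)
    {
        if (!ss)
            return -ENODEV;
        if (!ss->runtime)
            return 0; /* stream closed (e.g. alsactl restore at boot): accept silently */
        ss->runtime->channels = first_channel;
        return 1;     /* value changed */
    }

    int main(void)
    {
        struct substream closed = { .runtime = NULL };

        printf("put on closed stream -> %d\n", chmap_put(&closed, 2));
        return 0;
    }
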
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index 3e4fdaa3f44fb1..53f6de43405486 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -997,15 +997,19 @@ static const struct reg_sequence reg_init[] = {
+ {WSA883X_GMAMP_SUP1, 0xE2},
+ };
+
+-static void wsa883x_init(struct wsa883x_priv *wsa883x)
++static int wsa883x_init(struct wsa883x_priv *wsa883x)
+ {
+ struct regmap *regmap = wsa883x->regmap;
+- int variant, version;
++ int variant, version, ret;
+
+- regmap_read(regmap, WSA883X_OTP_REG_0, &variant);
++ ret = regmap_read(regmap, WSA883X_OTP_REG_0, &variant);
++ if (ret)
++ return ret;
+ wsa883x->variant = variant & WSA883X_ID_MASK;
+
+- regmap_read(regmap, WSA883X_CHIP_ID0, &version);
++ ret = regmap_read(regmap, WSA883X_CHIP_ID0, &version);
++ if (ret)
++ return ret;
+ wsa883x->version = version;
+
+ switch (wsa883x->variant) {
+@@ -1040,6 +1044,8 @@ static void wsa883x_init(struct wsa883x_priv *wsa883x)
+ WSA883X_DRE_OFFSET_MASK,
+ wsa883x->comp_offset);
+ }
++
++ return 0;
+ }
+
+ static int wsa883x_update_status(struct sdw_slave *slave,
+@@ -1048,7 +1054,7 @@ static int wsa883x_update_status(struct sdw_slave *slave,
+ struct wsa883x_priv *wsa883x = dev_get_drvdata(&slave->dev);
+
+ if (status == SDW_SLAVE_ATTACHED && slave->dev_num > 0)
+- wsa883x_init(wsa883x);
++ return wsa883x_init(wsa883x);
+
+ return 0;
+ }
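
wsa883x_init() used to ignore regmap_read() failures and then parse whatever was left in the variant/version variables. The hunk gives it an int return so that the first SoundWire read error aborts initialization and propagates out of wsa883x_update_status(). The pattern, as a standalone sketch (read_reg() stands in for regmap_read()):

    #include <errno.h>
    #include <stdio.h>

    /* Stand-in for regmap_read(): fails for one "broken" register. */
    static int read_reg(unsigned int reg, unsigned int *val)
    {
        if (reg == 0xdead)
            return -EIO;
        *val = reg & 0xff;
        return 0;
    }

    /* Propagate the first read failure instead of parsing uninitialized data. */
    static int device_init(unsigned int id_reg, unsigned int rev_reg)
    {
        unsigned int variant, version;
        int ret;

        ret = read_reg(id_reg, &variant);
        if (ret)
            return ret;
        ret = read_reg(rev_reg, &version);
        if (ret)
            return ret;
        printf("variant %u, version %u\n", variant, version);
        return 0;
    }

    int main(void)
    {
        printf("good init -> %d\n", device_init(0x10, 0x11));
        printf("bad init  -> %d\n", device_init(0xdead, 0x11));
        return 0;
    }
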
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 0e18ccabe28c31..ce0d8cec375a85 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -713,6 +713,7 @@ static int imx_card_probe(struct platform_device *pdev)
+
+ data->plat_data = plat_data;
+ data->card.dev = &pdev->dev;
++ data->card.owner = THIS_MODULE;
+
+ dev_set_drvdata(&pdev->dev, &data->card);
+ snd_soc_card_set_drvdata(&data->card, data);
+diff --git a/sound/soc/intel/boards/bytcht_cx2072x.c b/sound/soc/intel/boards/bytcht_cx2072x.c
+index df3c2a7b64d23c..8c2b4ab764bbaf 100644
+--- a/sound/soc/intel/boards/bytcht_cx2072x.c
++++ b/sound/soc/intel/boards/bytcht_cx2072x.c
+@@ -255,7 +255,11 @@ static int snd_byt_cht_cx2072x_probe(struct platform_device *pdev)
+ snprintf(codec_name, sizeof(codec_name), "i2c-%s",
+ acpi_dev_name(adev));
+ byt_cht_cx2072x_dais[dai_index].codecs->name = codec_name;
++ } else {
++ dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++ return -ENOENT;
+ }
++
+ acpi_dev_put(adev);
+
+ /* override platform name, if required */
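
This change and the similar ones in the Intel board drivers below all close the same gap: the ACPI lookup for the codec (acpi_dev_get_first_match_dev() upstream) can return NULL when the expected node is missing from the platform tables, and several probes previously fell through with an unset codec name. The fix logs the failure and returns -ENOENT; the drivers that already bailed out are switched from -ENXIO to -ENOENT for consistency. A sketch of the lookup shape, with find_acpi_dev() as a hypothetical stand-in:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    struct acpi_device { const char *name; };

    /* Stand-in for acpi_dev_get_first_match_dev(); NULL when the node is absent. */
    static struct acpi_device *find_acpi_dev(const char *hid)
    {
        static struct acpi_device codec = { .name = "CSC3436:00" };

        return strcmp(hid, "CSC3436") ? NULL : &codec;
    }

    static int probe(const char *hid, char *codec_name, size_t len)
    {
        struct acpi_device *adev = find_acpi_dev(hid);

        if (adev) {
            snprintf(codec_name, len, "i2c-%s", adev->name);
        } else {
            fprintf(stderr, "Error cannot find '%s' dev\n", hid);
            return -ENOENT; /* fail the probe instead of keeping a bogus codec name */
        }
        return 0;
    }

    int main(void)
    {
        char name[32];

        printf("present -> %d\n", probe("CSC3436", name, sizeof(name)));
        printf("missing -> %d\n", probe("NOPE0000", name, sizeof(name)));
        return 0;
    }
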
+diff --git a/sound/soc/intel/boards/bytcht_da7213.c b/sound/soc/intel/boards/bytcht_da7213.c
+index 08c598b7e1eeeb..9178bbe8d99506 100644
+--- a/sound/soc/intel/boards/bytcht_da7213.c
++++ b/sound/soc/intel/boards/bytcht_da7213.c
+@@ -258,7 +258,11 @@ static int bytcht_da7213_probe(struct platform_device *pdev)
+ snprintf(codec_name, sizeof(codec_name),
+ "i2c-%s", acpi_dev_name(adev));
+ dailink[dai_index].codecs->name = codec_name;
++ } else {
++ dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++ return -ENOENT;
+ }
++
+ acpi_dev_put(adev);
+
+ /* override platform name, if required */
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index 77b91ea4dc32ca..3539c9ff0fd2ca 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -562,7 +562,7 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ byt_cht_es8316_dais[dai_index].codecs->name = codec_name;
+ } else {
+ dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+- return -ENXIO;
++ return -ENOENT;
+ }
+
+ codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index db4a33680d9488..4479825c08b5e3 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -1693,7 +1693,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ byt_rt5640_dais[dai_index].codecs->name = byt_rt5640_codec_name;
+ } else {
+ dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+- return -ENXIO;
++ return -ENOENT;
+ }
+
+ codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index 8514b79f389bb5..1f54da98aacf47 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -926,7 +926,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ byt_rt5651_dais[dai_index].codecs->name = byt_rt5651_codec_name;
+ } else {
+ dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+- return -ENXIO;
++ return -ENOENT;
+ }
+
+ codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5645.c b/sound/soc/intel/boards/cht_bsw_rt5645.c
+index 1da9ceee4d593e..ac23a8b7cafca2 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5645.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5645.c
+@@ -582,7 +582,11 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ snprintf(cht_rt5645_codec_name, sizeof(cht_rt5645_codec_name),
+ "i2c-%s", acpi_dev_name(adev));
+ cht_dailink[dai_index].codecs->name = cht_rt5645_codec_name;
++ } else {
++ dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++ return -ENOENT;
+ }
++
+ /* acpi_get_first_physical_node() returns a borrowed ref, no need to deref */
+ codec_dev = acpi_get_first_physical_node(adev);
+ acpi_dev_put(adev);
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5672.c b/sound/soc/intel/boards/cht_bsw_rt5672.c
+index d68e5bc755dee5..c6c469d51243ef 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5672.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5672.c
+@@ -479,7 +479,11 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ snprintf(drv->codec_name, sizeof(drv->codec_name),
+ "i2c-%s", acpi_dev_name(adev));
+ cht_dailink[dai_index].codecs->name = drv->codec_name;
++ } else {
++ dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++ return -ENOENT;
+ }
++
+ acpi_dev_put(adev);
+
+ /* Use SSP0 on Bay Trail CR devices */
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index 2a88efaa6d26bd..b45d0501f1090c 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -681,7 +681,7 @@ static int sof_es8336_probe(struct platform_device *pdev)
+ dai_links[0].codecs->dai_name = "ES8326 HiFi";
+ } else {
+ dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+- return -ENXIO;
++ return -ENOENT;
+ }
+
+ codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/sof_wm8804.c b/sound/soc/intel/boards/sof_wm8804.c
+index b2d02cc92a6a8d..0a5ce34d7f7bbb 100644
+--- a/sound/soc/intel/boards/sof_wm8804.c
++++ b/sound/soc/intel/boards/sof_wm8804.c
+@@ -270,7 +270,11 @@ static int sof_wm8804_probe(struct platform_device *pdev)
+ snprintf(codec_name, sizeof(codec_name),
+ "%s%s", "i2c-", acpi_dev_name(adev));
+ dailink[dai_index].codecs->name = codec_name;
++ } else {
++ dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++ return -ENOENT;
+ }
++
+ acpi_dev_put(adev);
+
+ snd_soc_card_set_drvdata(card, ctx);
+diff --git a/sound/soc/intel/common/soc-acpi-intel-rpl-match.c b/sound/soc/intel/common/soc-acpi-intel-rpl-match.c
+index bc8817633b81b0..b83ac2e6337cf3 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-rpl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-rpl-match.c
+@@ -198,6 +198,7 @@ static const struct snd_soc_acpi_link_adr rpl_cs42l43_l0[] = {
+ .num_adr = ARRAY_SIZE(cs42l43_0_adr),
+ .adr_d = cs42l43_0_adr,
+ },
++ {}
+ };
+
+ static const struct snd_soc_acpi_link_adr rpl_sdca_3_in_1[] = {
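
The one-line rpl-match fix adds the empty-struct terminator that snd_soc_acpi link arrays are expected to end with: consumers walk the array until they hit an all-zero entry, so a missing sentinel sends the walk past the end of rpl_cs42l43_l0[]. A self-contained illustration of sentinel-terminated iteration:

    #include <stdio.h>

    struct link_adr {
        unsigned long long mask;
        int num_adr;
    };

    /* Walk until the all-zero sentinel, the convention the fixed table relies on. */
    static int count_links(const struct link_adr *links)
    {
        int n = 0;

        for (; links->num_adr; links++)
            n++;
        return n;
    }

    int main(void)
    {
        static const struct link_adr table[] = {
            { .mask = 0x1, .num_adr = 1 },
            {} /* terminator; omitting it makes count_links() read past the array */
        };

        printf("%d link(s)\n", count_links(table));
        return 0;
    }
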
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index af5d42b57be7eb..3d82570293b29f 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -889,7 +889,7 @@ static int soc_tplg_dbytes_create(struct soc_tplg *tplg, size_t size)
+ return ret;
+
+ /* register dynamic object */
+- sbe = (struct soc_bytes_ext *)&kc.private_value;
++ sbe = (struct soc_bytes_ext *)kc.private_value;
+
+ INIT_LIST_HEAD(&sbe->dobj.list);
+ sbe->dobj.type = SND_SOC_DOBJ_BYTES;
+@@ -923,7 +923,7 @@ static int soc_tplg_dmixer_create(struct soc_tplg *tplg, size_t size)
+ return ret;
+
+ /* register dynamic object */
+- sm = (struct soc_mixer_control *)&kc.private_value;
++ sm = (struct soc_mixer_control *)kc.private_value;
+
+ INIT_LIST_HEAD(&sm->dobj.list);
+ sm->dobj.type = SND_SOC_DOBJ_MIXER;
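
The two soc-topology hunks fix a subtle cast. kc.private_value already holds the pointer to the freshly allocated soc_bytes_ext/soc_mixer_control, stored as an unsigned long; taking &kc.private_value therefore yielded a pointer to the local variable's storage rather than to the object, and the INIT_LIST_HEAD() that follows scribbled on the wrong memory. A compact demonstration of the difference:

    #include <stdio.h>
    #include <stdlib.h>

    struct payload { int magic; };
    struct kcontrol { unsigned long private_value; };

    int main(void)
    {
        struct payload *obj = malloc(sizeof(*obj));
        struct kcontrol kc;

        obj->magic = 42;
        kc.private_value = (unsigned long)obj; /* pointer stashed as an integer */

        struct payload *right = (struct payload *)kc.private_value;  /* the object */
        struct payload *wrong = (struct payload *)&kc.private_value; /* the stack slot! */

        printf("right->magic = %d\n", right->magic);
        printf("wrong points %s the object\n",
               (void *)wrong == (void *)obj ? "at" : "away from");
        free(obj);
        return 0;
    }
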
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index bdb04fa37a71df..8f01a4b1fa0fa6 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -382,6 +382,12 @@ static const struct usb_audio_device_name usb_audio_names[] = {
+ /* Creative/Toshiba Multimedia Center SB-0500 */
+ DEVICE_NAME(0x041e, 0x3048, "Toshiba", "SB-0500"),
+
++ /* Logitech Audio Devices */
++ DEVICE_NAME(0x046d, 0x0867, "Logitech, Inc.", "Logi-MeetUp"),
++ DEVICE_NAME(0x046d, 0x0874, "Logitech, Inc.", "Logi-Tap-Audio"),
++ DEVICE_NAME(0x046d, 0x087c, "Logitech, Inc.", "Logi-Huddle"),
++ DEVICE_NAME(0x046d, 0x0898, "Logitech, Inc.", "Logi-RB-Audio"),
++ DEVICE_NAME(0x046d, 0x08d2, "Logitech, Inc.", "Logi-RBM-Audio"),
+ DEVICE_NAME(0x046d, 0x0990, "Logitech, Inc.", "QuickCam Pro 9000"),
+
+ DEVICE_NAME(0x05e1, 0x0408, "Syntek", "STK1160"),
+diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c
+index ffd8c157a28139..70de08635f54cb 100644
+--- a/sound/usb/line6/podhd.c
++++ b/sound/usb/line6/podhd.c
+@@ -507,7 +507,7 @@ static const struct line6_properties podhd_properties_table[] = {
+ [LINE6_PODHD500X] = {
+ .id = "PODHD500X",
+ .name = "POD HD500X",
+- .capabilities = LINE6_CAP_CONTROL
++ .capabilities = LINE6_CAP_CONTROL | LINE6_CAP_HWMON_CTL
+ | LINE6_CAP_PCM | LINE6_CAP_HWMON,
+ .altsetting = 1,
+ .ep_ctrl_r = 0x81,
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index f7ce8e8c3c3eaa..2d27d729c3bea8 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1377,6 +1377,19 @@ static int get_min_max_with_quirks(struct usb_mixer_elem_info *cval,
+
+ #define get_min_max(cval, def) get_min_max_with_quirks(cval, def, NULL)
+
++/* get the max value advertised via control API */
++static int get_max_exposed(struct usb_mixer_elem_info *cval)
++{
++ if (!cval->max_exposed) {
++ if (cval->res)
++ cval->max_exposed =
++ DIV_ROUND_UP(cval->max - cval->min, cval->res);
++ else
++ cval->max_exposed = cval->max - cval->min;
++ }
++ return cval->max_exposed;
++}
++
+ /* get a feature/mixer unit info */
+ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_info *uinfo)
+@@ -1389,11 +1402,8 @@ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol,
+ else
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ uinfo->count = cval->channels;
+- if (cval->val_type == USB_MIXER_BOOLEAN ||
+- cval->val_type == USB_MIXER_INV_BOOLEAN) {
+- uinfo->value.integer.min = 0;
+- uinfo->value.integer.max = 1;
+- } else {
++ if (cval->val_type != USB_MIXER_BOOLEAN &&
++ cval->val_type != USB_MIXER_INV_BOOLEAN) {
+ if (!cval->initialized) {
+ get_min_max_with_quirks(cval, 0, kcontrol);
+ if (cval->initialized && cval->dBmin >= cval->dBmax) {
+@@ -1405,10 +1415,10 @@ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol,
+ &kcontrol->id);
+ }
+ }
+- uinfo->value.integer.min = 0;
+- uinfo->value.integer.max =
+- DIV_ROUND_UP(cval->max - cval->min, cval->res);
+ }
++
++ uinfo->value.integer.min = 0;
++ uinfo->value.integer.max = get_max_exposed(cval);
+ return 0;
+ }
+
+@@ -1449,6 +1459,7 @@ static int mixer_ctl_feature_put(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+ {
+ struct usb_mixer_elem_info *cval = kcontrol->private_data;
++ int max_val = get_max_exposed(cval);
+ int c, cnt, val, oval, err;
+ int changed = 0;
+
+@@ -1461,6 +1472,8 @@ static int mixer_ctl_feature_put(struct snd_kcontrol *kcontrol,
+ if (err < 0)
+ return filter_error(cval, err);
+ val = ucontrol->value.integer.value[cnt];
++ if (val < 0 || val > max_val)
++ return -EINVAL;
+ val = get_abs_value(cval, val);
+ if (oval != val) {
+ snd_usb_set_cur_mix_value(cval, c + 1, cnt, val);
+@@ -1474,6 +1487,8 @@ static int mixer_ctl_feature_put(struct snd_kcontrol *kcontrol,
+ if (err < 0)
+ return filter_error(cval, err);
+ val = ucontrol->value.integer.value[0];
++ if (val < 0 || val > max_val)
++ return -EINVAL;
+ val = get_abs_value(cval, val);
+ if (val != oval) {
+ snd_usb_set_cur_mix_value(cval, 0, 0, val);
+@@ -2337,6 +2352,8 @@ static int mixer_ctl_procunit_put(struct snd_kcontrol *kcontrol,
+ if (err < 0)
+ return filter_error(cval, err);
+ val = ucontrol->value.integer.value[0];
++ if (val < 0 || val > get_max_exposed(cval))
++ return -EINVAL;
+ val = get_abs_value(cval, val);
+ if (val != oval) {
+ set_cur_ctl_value(cval, cval->control << 8, val);
+@@ -2699,6 +2716,8 @@ static int mixer_ctl_selector_put(struct snd_kcontrol *kcontrol,
+ if (err < 0)
+ return filter_error(cval, err);
+ val = ucontrol->value.enumerated.item[0];
++ if (val < 0 || val >= cval->max) /* here cval->max = # elements */
++ return -EINVAL;
+ val = get_abs_value(cval, val);
+ if (val != oval) {
+ set_cur_ctl_value(cval, cval->control << 8, val);
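
The mixer.c changes are two halves of one fix: get_max_exposed() memoizes the range the control API advertises (0..DIV_ROUND_UP(max - min, res), matching what the info callback reports), and every .put() handler now rejects values outside that range with -EINVAL instead of scaling them and sending them to the device. The arithmetic, sketched in plain C:

    #include <errno.h>
    #include <stdio.h>

    struct elem { int min, max, res, max_exposed; };

    /* Range advertised to userspace: res-sized steps from min to max, memoized. */
    static int get_max_exposed(struct elem *e)
    {
        if (!e->max_exposed)
            e->max_exposed = e->res ? (e->max - e->min + e->res - 1) / e->res
                                    : e->max - e->min;
        return e->max_exposed;
    }

    /* Put path: validate against the exposed range before mapping to device units. */
    static int put_value(struct elem *e, int user_val, int *dev_val)
    {
        if (user_val < 0 || user_val > get_max_exposed(e))
            return -EINVAL;
        *dev_val = e->min + user_val * (e->res ? e->res : 1);
        return 0;
    }

    int main(void)
    {
        struct elem vol = { .min = -60, .max = 0, .res = 2 };
        int dev;

        printf("max exposed = %d\n", get_max_exposed(&vol));  /* 30 */
        printf("put(31) -> %d\n", put_value(&vol, 31, &dev)); /* -EINVAL */
        printf("put(30) -> %d, dev = %d\n", put_value(&vol, 30, &dev), dev);
        return 0;
    }
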
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index d43895c1ae5c6c..167fbfcf01ace9 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -88,6 +88,7 @@ struct usb_mixer_elem_info {
+ int channels;
+ int val_type;
+ int min, max, res;
++ int max_exposed; /* control API exposes the value in 0..max_exposed */
+ int dBmin, dBmax;
+ int cached;
+ int cache_val[MAX_CHANNELS];
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 2bc344cf54a831..5f09f9f205cea0 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -14,6 +14,7 @@
+ * Przemek Rudy (prudy1@o2.pl)
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/hid.h>
+ #include <linux/init.h>
+ #include <linux/math64.h>
+@@ -2925,6 +2926,415 @@ static int snd_bbfpro_controls_create(struct usb_mixer_interface *mixer)
+ return 0;
+ }
+
++/*
++ * RME Digiface USB
++ */
++
++#define RME_DIGIFACE_READ_STATUS 17
++#define RME_DIGIFACE_STATUS_REG0L 0
++#define RME_DIGIFACE_STATUS_REG0H 1
++#define RME_DIGIFACE_STATUS_REG1L 2
++#define RME_DIGIFACE_STATUS_REG1H 3
++#define RME_DIGIFACE_STATUS_REG2L 4
++#define RME_DIGIFACE_STATUS_REG2H 5
++#define RME_DIGIFACE_STATUS_REG3L 6
++#define RME_DIGIFACE_STATUS_REG3H 7
++
++#define RME_DIGIFACE_CTL_REG1 16
++#define RME_DIGIFACE_CTL_REG2 18
++
++/* Reg is overloaded, 0-7 for status halfwords or 16 or 18 for control registers */
++#define RME_DIGIFACE_REGISTER(reg, mask) (((reg) << 16) | (mask))
++#define RME_DIGIFACE_INVERT BIT(31)
++
++/* Nonconst helpers */
++#define field_get(_mask, _reg) (((_reg) & (_mask)) >> (ffs(_mask) - 1))
++#define field_prep(_mask, _val) (((_val) << (ffs(_mask) - 1)) & (_mask))
++
++static int snd_rme_digiface_write_reg(struct snd_kcontrol *kcontrol, int item, u16 mask, u16 val)
++{
++ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++ struct snd_usb_audio *chip = list->mixer->chip;
++ struct usb_device *dev = chip->dev;
++ int err;
++
++ err = snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++ item,
++ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++ val, mask, NULL, 0);
++ if (err < 0)
++ dev_err(&dev->dev,
++ "unable to issue control set request %d (ret = %d)",
++ item, err);
++ return err;
++}
++
++static int snd_rme_digiface_read_status(struct snd_kcontrol *kcontrol, u32 status[4])
++{
++ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++ struct snd_usb_audio *chip = list->mixer->chip;
++ struct usb_device *dev = chip->dev;
++ __le32 buf[4];
++ int err;
++
++ err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0),
++ RME_DIGIFACE_READ_STATUS,
++ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++ 0, 0,
++ buf, sizeof(buf));
++ if (err < 0) {
++ dev_err(&dev->dev,
++ "unable to issue status read request (ret = %d)",
++ err);
++ } else {
++ for (int i = 0; i < ARRAY_SIZE(buf); i++)
++ status[i] = le32_to_cpu(buf[i]);
++ }
++ return err;
++}
++
++static int snd_rme_digiface_get_status_val(struct snd_kcontrol *kcontrol)
++{
++ int err;
++ u32 status[4];
++ bool invert = kcontrol->private_value & RME_DIGIFACE_INVERT;
++ u8 reg = (kcontrol->private_value >> 16) & 0xff;
++ u16 mask = kcontrol->private_value & 0xffff;
++ u16 val;
++
++ err = snd_rme_digiface_read_status(kcontrol, status);
++ if (err < 0)
++ return err;
++
++ switch (reg) {
++ /* Status register halfwords */
++ case RME_DIGIFACE_STATUS_REG0L ... RME_DIGIFACE_STATUS_REG3H:
++ break;
++ case RME_DIGIFACE_CTL_REG1: /* Control register 1, present in halfword 3L */
++ reg = RME_DIGIFACE_STATUS_REG3L;
++ break;
++ case RME_DIGIFACE_CTL_REG2: /* Control register 2, present in halfword 3H */
++ reg = RME_DIGIFACE_STATUS_REG3H;
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ if (reg & 1)
++ val = status[reg >> 1] >> 16;
++ else
++ val = status[reg >> 1] & 0xffff;
++
++ if (invert)
++ val ^= mask;
++
++ return field_get(mask, val);
++}
++
++static int snd_rme_digiface_rate_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int freq = snd_rme_digiface_get_status_val(kcontrol);
++
++ if (freq < 0)
++ return freq;
++ if (freq >= ARRAY_SIZE(snd_rme_rate_table))
++ return -EIO;
++
++ ucontrol->value.integer.value[0] = snd_rme_rate_table[freq];
++ return 0;
++}
++
++static int snd_rme_digiface_enum_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int val = snd_rme_digiface_get_status_val(kcontrol);
++
++ if (val < 0)
++ return val;
++
++ ucontrol->value.enumerated.item[0] = val;
++ return 0;
++}
++
++static int snd_rme_digiface_enum_put(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ bool invert = kcontrol->private_value & RME_DIGIFACE_INVERT;
++ u8 reg = (kcontrol->private_value >> 16) & 0xff;
++ u16 mask = kcontrol->private_value & 0xffff;
++ u16 val = field_prep(mask, ucontrol->value.enumerated.item[0]);
++
++ if (invert)
++ val ^= mask;
++
++ return snd_rme_digiface_write_reg(kcontrol, reg, mask, val);
++}
++
++static int snd_rme_digiface_current_sync_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int ret = snd_rme_digiface_enum_get(kcontrol, ucontrol);
++
++ /* 7 means internal for current sync */
++ if (ucontrol->value.enumerated.item[0] == 7)
++ ucontrol->value.enumerated.item[0] = 0;
++
++ return ret;
++}
++
++static int snd_rme_digiface_sync_state_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ u32 status[4];
++ int err;
++ bool valid, sync;
++
++ err = snd_rme_digiface_read_status(kcontrol, status);
++ if (err < 0)
++ return err;
++
++ valid = status[0] & BIT(kcontrol->private_value);
++ sync = status[0] & BIT(5 + kcontrol->private_value);
++
++ if (!valid)
++ ucontrol->value.enumerated.item[0] = SND_RME_CLOCK_NOLOCK;
++ else if (!sync)
++ ucontrol->value.enumerated.item[0] = SND_RME_CLOCK_LOCK;
++ else
++ ucontrol->value.enumerated.item[0] = SND_RME_CLOCK_SYNC;
++ return 0;
++}
++
++
++static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ static const char *const format[] = {
++ "ADAT", "S/PDIF"
++ };
++
++ return snd_ctl_enum_info(uinfo, 1,
++ ARRAY_SIZE(format), format);
++}
++
++
++static int snd_rme_digiface_sync_source_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ static const char *const sync_sources[] = {
++ "Internal", "Input 1", "Input 2", "Input 3", "Input 4"
++ };
++
++ return snd_ctl_enum_info(uinfo, 1,
++ ARRAY_SIZE(sync_sources), sync_sources);
++}
++
++static int snd_rme_digiface_rate_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
++ uinfo->count = 1;
++ uinfo->value.integer.min = 0;
++ uinfo->value.integer.max = 200000;
++ uinfo->value.integer.step = 0;
++ return 0;
++}
++
++static const struct snd_kcontrol_new snd_rme_digiface_controls[] = {
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 1 Sync",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_sync_state_info,
++ .get = snd_rme_digiface_sync_state_get,
++ .private_value = 0,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 1 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0H, BIT(0)) |
++ RME_DIGIFACE_INVERT,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 1 Rate",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_rate_info,
++ .get = snd_rme_digiface_rate_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(3, 0)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 2 Sync",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_sync_state_info,
++ .get = snd_rme_digiface_sync_state_get,
++ .private_value = 1,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 2 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, BIT(13)) |
++ RME_DIGIFACE_INVERT,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 2 Rate",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_rate_info,
++ .get = snd_rme_digiface_rate_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(7, 4)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 3 Sync",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_sync_state_info,
++ .get = snd_rme_digiface_sync_state_get,
++ .private_value = 2,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 3 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, BIT(14)) |
++ RME_DIGIFACE_INVERT,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 3 Rate",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_rate_info,
++ .get = snd_rme_digiface_rate_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(11, 8)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 4 Sync",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_sync_state_info,
++ .get = snd_rme_digiface_sync_state_get,
++ .private_value = 3,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 4 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, GENMASK(15, 12)) |
++ RME_DIGIFACE_INVERT,
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Input 4 Rate",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_rate_info,
++ .get = snd_rme_digiface_rate_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(3, 0)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Output 1 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .put = snd_rme_digiface_enum_put,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(0)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Output 2 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .put = snd_rme_digiface_enum_put,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(1)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Output 3 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .put = snd_rme_digiface_enum_put,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(3)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Output 4 Format",
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .info = snd_rme_digiface_format_info,
++ .get = snd_rme_digiface_enum_get,
++ .put = snd_rme_digiface_enum_put,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(4)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Sync Source",
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .info = snd_rme_digiface_sync_source_info,
++ .get = snd_rme_digiface_enum_get,
++ .put = snd_rme_digiface_enum_put,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG1, GENMASK(2, 0)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Current Sync Source",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_digiface_sync_source_info,
++ .get = snd_rme_digiface_current_sync_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, GENMASK(12, 10)),
++ },
++ {
++ /*
++ * This is writeable, but it is only set by the PCM rate.
++ * Mixer apps currently need to drive the mixer using raw USB requests,
++ * so they can also change this that way to configure the rate for
++ * stand-alone operation when the PCM is closed.
++ */
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "System Rate",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_rate_info,
++ .get = snd_rme_digiface_rate_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG1, GENMASK(6, 3)),
++ },
++ {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .name = "Current Rate",
++ .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++ .info = snd_rme_rate_info,
++ .get = snd_rme_digiface_rate_get,
++ .private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1H, GENMASK(7, 4)),
++ }
++};
++
++static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
++{
++ int err, i;
++
++ for (i = 0; i < ARRAY_SIZE(snd_rme_digiface_controls); ++i) {
++ err = add_single_ctl_with_resume(mixer, 0,
++ NULL,
++ &snd_rme_digiface_controls[i],
++ NULL);
++ if (err < 0)
++ return err;
++ }
++
++ return 0;
++}
++
+ /*
+ * Pioneer DJ DJM Mixers
+ *
+@@ -3483,6 +3893,9 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
+ err = snd_bbfpro_controls_create(mixer);
+ break;
++ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ err = snd_rme_digiface_controls_create(mixer);
++ break;
+ case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+ err = snd_djm_controls_create(mixer, SND_DJM_250MK2_IDX);
+ break;
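
The new RME Digiface controls pack everything their get/put handlers need into kcontrol->private_value: bits 15:0 carry the field mask, bits 23:16 the register index (status halfwords 0-7 or control registers 16/18), and bit 31 requests logical inversion (see the RME_DIGIFACE_REGISTER() and RME_DIGIFACE_INVERT macros above). The open-coded field_get()/field_prep() exist because the FIELD_GET()/FIELD_PREP() macros from linux/bitfield.h require a compile-time constant mask, while here the mask only arrives at runtime. A userspace sketch of the unpacking:

    #include <stdio.h>
    #include <strings.h> /* ffs() */

    #define REG(reg, mask)  (((unsigned long)(reg) << 16) | (mask))
    #define INVERT          (1UL << 31)

    /* Runtime-mask variant of FIELD_GET (the mask need not be constant). */
    static unsigned int field_get(unsigned int mask, unsigned int reg)
    {
        return (reg & mask) >> (ffs(mask) - 1);
    }

    int main(void)
    {
        unsigned long pv = REG(2, 0x00f0) | INVERT; /* reg 2, bits 7:4, inverted */
        unsigned int reg  = (pv >> 16) & 0xff;
        unsigned int mask = pv & 0xffff;
        unsigned int raw  = 0x0030;                 /* pretend halfword read back */

        if (pv & INVERT)
            raw ^= mask;
        printf("reg %u field = %u\n", reg, field_get(mask, raw));
        return 0;
    }
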
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index aaa6a515d0f8a4..24c981c9b2405d 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -35,10 +35,87 @@
+ .bInterfaceClass = USB_CLASS_AUDIO, \
+ .bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL
+
++/* Quirk .driver_info, followed by the definition of the quirk entry;
++ * put like QUIRK_DRIVER_INFO { ... } in each entry of the quirk table
++ */
++#define QUIRK_DRIVER_INFO \
++ .driver_info = (unsigned long)&(const struct snd_usb_audio_quirk)
++
++/*
++ * Macros for quirk data entries
++ */
++
++/* Quirk data entry for ignoring the interface */
++#define QUIRK_DATA_IGNORE(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_IGNORE_INTERFACE
++/* Quirk data entry for a standard audio interface */
++#define QUIRK_DATA_STANDARD_AUDIO(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_AUDIO_STANDARD_INTERFACE
++/* Quirk data entry for a standard MIDI interface */
++#define QUIRK_DATA_STANDARD_MIDI(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_MIDI_STANDARD_INTERFACE
++/* Quirk data entry for a standard mixer interface */
++#define QUIRK_DATA_STANDARD_MIXER(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_AUDIO_STANDARD_MIXER
++
++/* Quirk data entry for Yamaha MIDI */
++#define QUIRK_DATA_MIDI_YAMAHA(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_MIDI_YAMAHA
++/* Quirk data entry for Edirol UAxx */
++#define QUIRK_DATA_EDIROL_UAXX(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_AUDIO_EDIROL_UAXX
++/* Quirk data entry for raw bytes interface */
++#define QUIRK_DATA_RAW_BYTES(_ifno) \
++ .ifnum = (_ifno), .type = QUIRK_MIDI_RAW_BYTES
++
++/* Quirk composite array terminator */
++#define QUIRK_COMPOSITE_END { .ifnum = -1 }
++
++/* Quirk data entry for composite quirks;
++ * followed by the quirk array that is terminated with QUIRK_COMPOSITE_END
++ * e.g. QUIRK_DATA_COMPOSITE { { quirk1 }, { quirk2 },..., QUIRK_COMPOSITE_END }
++ */
++#define QUIRK_DATA_COMPOSITE \
++ .ifnum = QUIRK_ANY_INTERFACE, \
++ .type = QUIRK_COMPOSITE, \
++ .data = &(const struct snd_usb_audio_quirk[])
++
++/* Quirk data entry for a fixed audio endpoint;
++ * followed by audioformat definition
++ * e.g. QUIRK_DATA_AUDIOFORMAT(n) { .formats = xxx, ... }
++ */
++#define QUIRK_DATA_AUDIOFORMAT(_ifno) \
++ .ifnum = (_ifno), \
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT, \
++ .data = &(const struct audioformat)
++
++/* Quirk data entry for a fixed MIDI endpoint;
++ * followed by snd_usb_midi_endpoint_info definition
++ * e.g. QUIRK_DATA_MIDI_FIXED_ENDPOINT(n) { .out_cables = x, .in_cables = y }
++ */
++#define QUIRK_DATA_MIDI_FIXED_ENDPOINT(_ifno) \
++ .ifnum = (_ifno), \
++ .type = QUIRK_MIDI_FIXED_ENDPOINT, \
++ .data = &(const struct snd_usb_midi_endpoint_info)
++/* Quirk data entry for a MIDIMAN MIDI endpoint */
++#define QUIRK_DATA_MIDI_MIDIMAN(_ifno) \
++ .ifnum = (_ifno), \
++ .type = QUIRK_MIDI_MIDIMAN, \
++ .data = &(const struct snd_usb_midi_endpoint_info)
++/* Quirk data entry for a EMAGIC MIDI endpoint */
++#define QUIRK_DATA_MIDI_EMAGIC(_ifno) \
++ .ifnum = (_ifno), \
++ .type = QUIRK_MIDI_EMAGIC, \
++ .data = &(const struct snd_usb_midi_endpoint_info)
++
++/*
++ * Here we go... the quirk table definition begins:
++ */
++
+ /* FTDI devices */
+ {
+ USB_DEVICE(0x0403, 0xb8d8),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "STARR LABS", */
+ /* .product_name = "Starr Labs MIDI USB device", */
+ .ifnum = 0,
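
The macro block added at the top of quirks-table.h is a pure refactor: each QUIRK_DATA_* macro expands to exactly the designated initializers it replaces, and the macros that take a trailing brace block (QUIRK_DATA_COMPOSITE, QUIRK_DATA_AUDIOFORMAT, the MIDI endpoint variants) do so by ending in ".data = &(const struct ...)" so that the caller's braces complete a compound literal. The conversions in the hunks below should therefore not change the compiled table at all. A minimal compilable demonstration of the compound-literal-in-macro trick, with hypothetical types:

    #include <stdio.h>

    struct payload { int rate; };
    struct quirk { int ifnum; int type; const void *data; };

    #define QUIRK_TYPE_FIXED 3

    /* Same trick as QUIRK_DATA_AUDIOFORMAT: the macro supplies .ifnum/.type and
     * opens a compound literal for .data; the caller's braces complete it. */
    #define QUIRK_DATA_FIXED(_ifno) \
        .ifnum = (_ifno), .type = QUIRK_TYPE_FIXED, .data = &(const struct payload)

    static const struct quirk q = {
        QUIRK_DATA_FIXED(1) { .rate = 48000 }
    };

    int main(void)
    {
        const struct payload *p = q.data;

        printf("ifnum %d type %d rate %d\n", q.ifnum, q.type, p->rate);
        return 0;
    }
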
+@@ -49,10 +126,8 @@
+ {
+ /* Creative BT-D1 */
+ USB_DEVICE(0x041e, 0x0005),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .iface = 1,
+@@ -87,18 +162,11 @@
+ */
+ {
+ USB_AUDIO_DEVICE(0x041e, 0x4095),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(2) },
+ {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(3) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .fmt_bits = 16,
+@@ -114,9 +182,7 @@
+ .rate_table = (unsigned int[]) { 48000 },
+ },
+ },
+- {
+- .ifnum = -1
+- },
++ QUIRK_COMPOSITE_END
+ },
+ },
+ },
+@@ -128,31 +194,18 @@
+ */
+ {
+ USB_DEVICE(0x0424, 0xb832),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Standard Microsystems Corp.",
+ .product_name = "HP Wireless Audio",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ /* Mixer */
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE,
+- },
++ { QUIRK_DATA_IGNORE(0) },
+ /* Playback */
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE,
+- },
++ { QUIRK_DATA_IGNORE(1) },
+ /* Capture */
+- {
+- .ifnum = 2,
+- .type = QUIRK_IGNORE_INTERFACE,
+- },
++ { QUIRK_DATA_IGNORE(2) },
+ /* HID Device, .ifnum = 3 */
+- {
+- .ifnum = -1,
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -175,20 +228,18 @@
+
+ #define YAMAHA_DEVICE(id, name) { \
+ USB_DEVICE(0x0499, id), \
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { \
++ QUIRK_DRIVER_INFO { \
+ .vendor_name = "Yamaha", \
+ .product_name = name, \
+- .ifnum = QUIRK_ANY_INTERFACE, \
+- .type = QUIRK_MIDI_YAMAHA \
++ QUIRK_DATA_MIDI_YAMAHA(QUIRK_ANY_INTERFACE) \
+ } \
+ }
+ #define YAMAHA_INTERFACE(id, intf, name) { \
+ USB_DEVICE_VENDOR_SPEC(0x0499, id), \
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { \
++ QUIRK_DRIVER_INFO { \
+ .vendor_name = "Yamaha", \
+ .product_name = name, \
+- .ifnum = intf, \
+- .type = QUIRK_MIDI_YAMAHA \
++ QUIRK_DATA_MIDI_YAMAHA(intf) \
+ } \
+ }
+ YAMAHA_DEVICE(0x1000, "UX256"),
+@@ -276,135 +327,67 @@ YAMAHA_DEVICE(0x105d, NULL),
+ YAMAHA_DEVICE(0x1718, "P-125"),
+ {
+ USB_DEVICE(0x0499, 0x1503),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Yamaha", */
+ /* .product_name = "MOX6/MOX8", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_YAMAHA
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_MIDI_YAMAHA(3) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0499, 0x1507),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Yamaha", */
+ /* .product_name = "THR10", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_YAMAHA
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_MIDI_YAMAHA(3) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0499, 0x1509),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Yamaha", */
+ /* .product_name = "Steinberg UR22", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_YAMAHA
+- },
+- {
+- .ifnum = 4,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_MIDI_YAMAHA(3) },
++ { QUIRK_DATA_IGNORE(4) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0499, 0x150a),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Yamaha", */
+ /* .product_name = "THR5A", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_YAMAHA
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_MIDI_YAMAHA(3) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0499, 0x150c),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Yamaha", */
+ /* .product_name = "THR10C", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_YAMAHA
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_MIDI_YAMAHA(3) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -438,7 +421,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ USB_DEVICE_ID_MATCH_INT_CLASS,
+ .idVendor = 0x0499,
+ .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_AUTODETECT
+ }
+@@ -449,16 +432,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ */
+ {
+ USB_DEVICE(0x0582, 0x0000),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "UA-100",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 4,
+ .iface = 0,
+@@ -473,9 +452,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .iface = 1,
+@@ -490,106 +467,66 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0007,
+ .in_cables = 0x0007
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0002),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UM-4",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x000f,
+ .in_cables = 0x000f
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0003),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "SC-8850",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x003f,
+ .in_cables = 0x003f
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0004),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "U-8",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0005,
+ .in_cables = 0x0005
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -597,152 +534,92 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Has ID 0x0099 when not in "Advanced Driver" mode.
+ * The UM-2EX has only one input, but we cannot detect this. */
+ USB_DEVICE(0x0582, 0x0005),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UM-2",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0003
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0007),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "SC-8820",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0013,
+ .in_cables = 0x0013
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0008),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "PC-300",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x009d when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0009),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UM-1",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x000b),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "SK-500",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0013,
+ .in_cables = 0x0013
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -750,31 +627,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* thanks to Emiliano Grilli <emillo@libero.it>
+ * for helping researching this data */
+ USB_DEVICE(0x0582, 0x000c),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "SC-D70",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0007,
+ .in_cables = 0x0007
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -788,35 +653,23 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * the 96kHz sample rate.
+ */
+ USB_DEVICE(0x0582, 0x0010),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-5",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x0013 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0012),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "XV-5050",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -825,12 +678,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0015 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0014),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UM-880",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x01ff,
+ .in_cables = 0x01ff
+ }
+@@ -839,74 +690,48 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0017 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0016),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "SD-90",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x000f,
+ .in_cables = 0x000f
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x001c when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x001b),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "MMP-2",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x001e when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x001d),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "V-SYNTH",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -915,12 +740,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0024 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0023),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UM-550",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x003f,
+ .in_cables = 0x003f
+ }
+@@ -933,20 +756,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * and no MIDI.
+ */
+ USB_DEVICE(0x0582, 0x0025),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-20",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 2,
+ .iface = 1,
+@@ -961,9 +777,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 2,
+ .iface = 2,
+@@ -978,28 +792,22 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x0028 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0027),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "SD-20",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0007
+ }
+@@ -1008,12 +816,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x002a when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0029),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "SD-80",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x000f,
+ .in_cables = 0x000f
+ }
+@@ -1026,39 +832,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * but offers only 16-bit PCM and no MIDI.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x0582, 0x002b),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-700",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_EDIROL_UAXX(1) },
++ { QUIRK_DATA_EDIROL_UAXX(2) },
++ { QUIRK_DATA_EDIROL_UAXX(3) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x002e when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x002d),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "XV-2020",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1067,12 +858,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0030 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x002f),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "VariOS",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0007,
+ .in_cables = 0x0007
+ }
+@@ -1081,12 +870,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0034 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0033),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "PCR",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0007
+ }
+@@ -1098,12 +885,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * later revisions use IDs 0x0054 and 0x00a2.
+ */
+ USB_DEVICE(0x0582, 0x0037),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "Digital Piano",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1116,39 +901,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * and no MIDI.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x0582, 0x003b),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "BOSS",
+ .product_name = "GS-10",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_STANDARD_MIDI(3) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x0041 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0040),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "GI-20",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1157,12 +927,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0043 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0042),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "RS-70",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1171,36 +939,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0049 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0047),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "EDIROL", */
+ /* .product_name = "UR-80", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ /* in the 96 kHz modes, only interface 1 is there */
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x004a when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0048),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "EDIROL", */
+ /* .product_name = "UR-80", */
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0007
+ }
+@@ -1209,35 +965,23 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x004e when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x004c),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "PCR-A",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x004f when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x004d),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "PCR-A",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0007
+ }
+@@ -1249,76 +993,52 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * is standard compliant, but has only 16-bit PCM.
+ */
+ USB_DEVICE(0x0582, 0x0050),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-3FX",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0052),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UM-1SX",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
++ QUIRK_DATA_STANDARD_MIDI(0)
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0060),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "EXR Series",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
++ QUIRK_DATA_STANDARD_MIDI(0)
+ }
+ },
+ {
+ /* has ID 0x0066 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0064),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "EDIROL", */
+ /* .product_name = "PCR-1", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x0067 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0065),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "EDIROL", */
+ /* .product_name = "PCR-1", */
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0003
+ }
+@@ -1327,12 +1047,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x006e when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x006d),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "FANTOM-X",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1345,39 +1063,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * offers only 16-bit PCM at 44.1 kHz and no MIDI.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x0582, 0x0074),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-25",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_EDIROL_UAXX(0) },
++ { QUIRK_DATA_EDIROL_UAXX(1) },
++ { QUIRK_DATA_EDIROL_UAXX(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* has ID 0x0076 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0075),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "BOSS",
+ .product_name = "DR-880",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1386,12 +1089,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x007b when not in "Advanced Driver" mode */
+ USB_DEVICE_VENDOR_SPEC(0x0582, 0x007a),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ /* "RD" or "RD-700SX"? */
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0003
+ }
+@@ -1400,12 +1101,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x0081 when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x0080),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Roland",
+ .product_name = "G-70",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1414,12 +1113,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* has ID 0x008c when not in "Advanced Driver" mode */
+ USB_DEVICE(0x0582, 0x008b),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "PC-50",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1431,56 +1128,31 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * is standard compliant, but has only 16-bit PCM and no MIDI.
+ */
+ USB_DEVICE(0x0582, 0x00a3),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-4FX",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_EDIROL_UAXX(0) },
++ { QUIRK_DATA_EDIROL_UAXX(1) },
++ { QUIRK_DATA_EDIROL_UAXX(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* Edirol M-16DX */
+ USB_DEVICE(0x0582, 0x00c4),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
+ {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -1490,37 +1162,22 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * offers only 16-bit PCM at 44.1 kHz and no MIDI.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x0582, 0x00e6),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "EDIROL",
+ .product_name = "UA-25EX",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_EDIROL_UAXX
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_EDIROL_UAXX(0) },
++ { QUIRK_DATA_EDIROL_UAXX(1) },
++ { QUIRK_DATA_EDIROL_UAXX(2) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* Edirol UM-3G */
+ USB_DEVICE_VENDOR_SPEC(0x0582, 0x0108),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = 0,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ .out_cables = 0x0007,
+ .in_cables = 0x0007
+ }
+@@ -1529,45 +1186,29 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* BOSS ME-25 */
+ USB_DEVICE(0x0582, 0x0113),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* only 44.1 kHz works at the moment */
+ USB_DEVICE(0x0582, 0x0120),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Roland", */
+ /* .product_name = "OCTO-CAPTURE", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 10,
+ .iface = 0,
+@@ -1583,9 +1224,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 12,
+ .iface = 1,
+@@ -1601,40 +1240,26 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = 3,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 4,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_IGNORE(3) },
++ { QUIRK_DATA_IGNORE(4) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* only 44.1 kHz works at the moment */
+ USB_DEVICE(0x0582, 0x012f),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Roland", */
+ /* .product_name = "QUAD-CAPTURE", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 4,
+ .iface = 0,
+@@ -1650,9 +1275,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 6,
+ .iface = 1,
+@@ -1668,54 +1291,32 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = 3,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 4,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_IGNORE(3) },
++ { QUIRK_DATA_IGNORE(4) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0582, 0x0159),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Roland", */
+ /* .product_name = "UA-22", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -1723,19 +1324,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* UA101 and co are supported by another driver */
+ {
+ USB_DEVICE(0x0582, 0x0044), /* UA-1000 high speed */
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .ifnum = QUIRK_NODEV_INTERFACE
+ },
+ },
+ {
+ USB_DEVICE(0x0582, 0x007d), /* UA-101 high speed */
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .ifnum = QUIRK_NODEV_INTERFACE
+ },
+ },
+ {
+ USB_DEVICE(0x0582, 0x008d), /* UA-101 full speed */
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .ifnum = QUIRK_NODEV_INTERFACE
+ },
+ },
+@@ -1746,7 +1347,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ USB_DEVICE_ID_MATCH_INT_CLASS,
+ .idVendor = 0x0582,
+ .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_AUTODETECT
+ }
+@@ -1761,12 +1362,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * compliant USB MIDI ports for external MIDI and controls.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x06f8, 0xb000),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Hercules",
+ .product_name = "DJ Console (WE)",
+- .ifnum = 4,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(4) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1776,12 +1375,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Midiman/M-Audio devices */
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x1002),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "MidiSport 2x2",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0003
+ }
+@@ -1789,12 +1386,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x1011),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "MidiSport 1x1",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1802,12 +1397,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x1015),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "Keystation",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1815,12 +1408,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x1021),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "MidiSport 4x4",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x000f,
+ .in_cables = 0x000f
+ }
+@@ -1833,12 +1424,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * Thanks to Olaf Giesbrecht <Olaf_Giesbrecht@yahoo.de>
+ */
+ USB_DEVICE_VER(0x0763, 0x1031, 0x0100, 0x0109),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "MidiSport 8x8",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x01ff,
+ .in_cables = 0x01ff
+ }
+@@ -1846,12 +1435,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x1033),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "MidiSport 8x8",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x01ff,
+ .in_cables = 0x01ff
+ }
+@@ -1859,12 +1446,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x1041),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "MidiSport 2x4",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ .out_cables = 0x000f,
+ .in_cables = 0x0003
+ }
+@@ -1872,76 +1457,41 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2001),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "Quattro",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ /*
+ * Interfaces 0-2 are "Windows-compatible", 16-bit only,
+ * and share endpoints with the other interfaces.
+ * Ignore them. The other interfaces can do 24 bits,
+ * but captured samples are big-endian (see usbaudio.c).
+ */
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 4,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 5,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 6,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 7,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 8,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 9,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
++ { QUIRK_DATA_IGNORE(2) },
++ { QUIRK_DATA_IGNORE(3) },
++ { QUIRK_DATA_STANDARD_AUDIO(4) },
++ { QUIRK_DATA_STANDARD_AUDIO(5) },
++ { QUIRK_DATA_IGNORE(6) },
++ { QUIRK_DATA_STANDARD_AUDIO(7) },
++ { QUIRK_DATA_STANDARD_AUDIO(8) },
++ {
++ QUIRK_DATA_MIDI_MIDIMAN(9) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2003),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "AudioPhile",
+- .ifnum = 6,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(6) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1949,12 +1499,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2008),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "Ozone",
+- .ifnum = 3,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(3) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+@@ -1962,93 +1510,45 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x200d),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "M-Audio",
+ .product_name = "OmniStudio",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 4,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 5,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 6,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 7,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 8,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 9,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
++ { QUIRK_DATA_IGNORE(2) },
++ { QUIRK_DATA_IGNORE(3) },
++ { QUIRK_DATA_STANDARD_AUDIO(4) },
++ { QUIRK_DATA_STANDARD_AUDIO(5) },
++ { QUIRK_DATA_IGNORE(6) },
++ { QUIRK_DATA_STANDARD_AUDIO(7) },
++ { QUIRK_DATA_STANDARD_AUDIO(8) },
++ {
++ QUIRK_DATA_MIDI_MIDIMAN(9) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x0763, 0x2019),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "M-Audio", */
+ /* .product_name = "Ozone Academic", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(3) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -2058,21 +1558,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2030),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "M-Audio", */
+ /* .product_name = "Fast Track C400", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(1) },
+ /* Playback */
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 6,
+ .iface = 2,
+@@ -2096,9 +1589,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ /* Capture */
+ {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(3) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4,
+ .iface = 3,
+@@ -2120,30 +1611,21 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .clock = 0x80,
+ }
+ },
+- /* MIDI */
+- {
+- .ifnum = -1 /* Interface = 4 */
+- }
++ /* MIDI: Interface = 4 */
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2031),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "M-Audio", */
+ /* .product_name = "Fast Track C600", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(1) },
+ /* Playback */
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 2,
+@@ -2167,9 +1649,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ /* Capture */
+ {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(3) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 6,
+ .iface = 3,
+@@ -2191,29 +1671,20 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .clock = 0x80,
+ }
+ },
+- /* MIDI */
+- {
+- .ifnum = -1 /* Interface = 4 */
+- }
++ /* MIDI: Interface = 4 */
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2080),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "M-Audio", */
+ /* .product_name = "Fast Track Ultra", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 1,
+@@ -2235,9 +1706,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 2,
+@@ -2259,28 +1728,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ /* interface 3 (MIDI) is standard compliant */
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0763, 0x2081),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "M-Audio", */
+ /* .product_name = "Fast Track Ultra 8R", */
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 1,
+@@ -2302,9 +1762,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 2,
+@@ -2326,9 +1784,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ /* interface 3 (MIDI) is standard compliant */
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -2336,21 +1792,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Casio devices */
+ {
+ USB_DEVICE(0x07cf, 0x6801),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Casio",
+ .product_name = "PL-40R",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_YAMAHA
++ QUIRK_DATA_MIDI_YAMAHA(0)
+ }
+ },
+ {
+ /* this ID is used by several devices without a product ID */
+ USB_DEVICE(0x07cf, 0x6802),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Casio",
+ .product_name = "Keyboard",
+- .ifnum = 0,
+- .type = QUIRK_MIDI_YAMAHA
++ QUIRK_DATA_MIDI_YAMAHA(0)
+ }
+ },
+
+@@ -2363,23 +1817,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .idVendor = 0x07fd,
+ .idProduct = 0x0001,
+ .bDeviceSubClass = 2,
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "MOTU",
+ .product_name = "Fastlane",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_MIDI_RAW_BYTES
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_RAW_BYTES(0) },
++ { QUIRK_DATA_IGNORE(1) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -2387,12 +1831,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Emagic devices */
+ {
+ USB_DEVICE(0x086a, 0x0001),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Emagic",
+ .product_name = "Unitor8",
+- .ifnum = 2,
+- .type = QUIRK_MIDI_EMAGIC,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_EMAGIC(2) {
+ .out_cables = 0x80ff,
+ .in_cables = 0x80ff
+ }
+@@ -2400,12 +1842,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE(0x086a, 0x0002),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Emagic",
+ /* .product_name = "AMT8", */
+- .ifnum = 2,
+- .type = QUIRK_MIDI_EMAGIC,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_EMAGIC(2) {
+ .out_cables = 0x80ff,
+ .in_cables = 0x80ff
+ }
+@@ -2413,12 +1853,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE(0x086a, 0x0003),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Emagic",
+ /* .product_name = "MT4", */
+- .ifnum = 2,
+- .type = QUIRK_MIDI_EMAGIC,
+- .data = & (const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_EMAGIC(2) {
+ .out_cables = 0x800f,
+ .in_cables = 0x8003
+ }
+@@ -2428,38 +1866,35 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* KORG devices */
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0944, 0x0200),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "KORG, Inc.",
+ /* .product_name = "PANDORA PX5D", */
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE,
++ QUIRK_DATA_STANDARD_MIDI(3)
+ }
+ },
+
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0944, 0x0201),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "KORG, Inc.",
+ /* .product_name = "ToneLab ST", */
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE,
++ QUIRK_DATA_STANDARD_MIDI(3)
+ }
+ },
+
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0944, 0x0204),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "KORG, Inc.",
+ /* .product_name = "ToneLab EX", */
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE,
++ QUIRK_DATA_STANDARD_MIDI(3)
+ }
+ },
+
+ /* AKAI devices */
+ {
+ USB_DEVICE(0x09e8, 0x0062),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "AKAI",
+ .product_name = "MPD16",
+ .ifnum = 0,
+@@ -2470,89 +1905,49 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* Akai MPC Element */
+ USB_DEVICE(0x09e8, 0x0021),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_STANDARD_MIDI(1) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+-},
+-
+-/* Steinberg devices */
+-{
+- /* Steinberg MI2 */
+- USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x2040),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++},
++
++/* Steinberg devices */
++{
++ /* Steinberg MI2 */
++ USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x2040),
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
+ {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = &(const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* Steinberg MI4 */
+ USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x4040),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = & (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = &(const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -2560,34 +1955,31 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* TerraTec devices */
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0012),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "TerraTec",
+ .product_name = "PHASE 26",
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
++ QUIRK_DATA_STANDARD_MIDI(3)
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0013),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "TerraTec",
+ .product_name = "PHASE 26",
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
++ QUIRK_DATA_STANDARD_MIDI(3)
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0014),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "TerraTec",
+ .product_name = "PHASE 26",
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
++ QUIRK_DATA_STANDARD_MIDI(3)
+ }
+ },
+ {
+ USB_DEVICE(0x0ccd, 0x0035),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Miditech",
+ .product_name = "Play'n Roll",
+ .ifnum = 0,
+@@ -2602,7 +1994,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Novation EMS devices */
+ {
+ USB_DEVICE_VENDOR_SPEC(0x1235, 0x0001),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Novation",
+ .product_name = "ReMOTE Audio/XStation",
+ .ifnum = 4,
+@@ -2611,7 +2003,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x1235, 0x0002),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Novation",
+ .product_name = "Speedio",
+ .ifnum = 3,
+@@ -2620,38 +2012,29 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ USB_DEVICE(0x1235, 0x000a),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Novation", */
+ /* .product_name = "Nocturn", */
+- .ifnum = 0,
+- .type = QUIRK_MIDI_RAW_BYTES
++ QUIRK_DATA_RAW_BYTES(0)
+ }
+ },
+ {
+ USB_DEVICE(0x1235, 0x000e),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Novation", */
+ /* .product_name = "Launchpad", */
+- .ifnum = 0,
+- .type = QUIRK_MIDI_RAW_BYTES
++ QUIRK_DATA_RAW_BYTES(0)
+ }
+ },
+ {
+ USB_DEVICE(0x1235, 0x0010),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Focusrite",
+ .product_name = "Saffire 6 USB",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4,
+ .iface = 0,
+@@ -2678,9 +2061,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 2,
+ .iface = 0,
+@@ -2702,28 +2083,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = 1,
+- .type = QUIRK_MIDI_RAW_BYTES
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_RAW_BYTES(1) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE(0x1235, 0x0018),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Novation",
+ .product_name = "Twitch",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = & (const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4,
+ .iface = 0,
+@@ -2742,19 +2114,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = 1,
+- .type = QUIRK_MIDI_RAW_BYTES
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_RAW_BYTES(1) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ USB_DEVICE_VENDOR_SPEC(0x1235, 0x4661),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Novation",
+ .product_name = "ReMOTE25",
+ .ifnum = 0,
+@@ -2766,25 +2133,16 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* VirusTI Desktop */
+ USB_DEVICE_VENDOR_SPEC(0x133e, 0x0815),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 3,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = &(const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ .out_cables = 0x0003,
+ .in_cables = 0x0003
+ }
+ },
+- {
+- .ifnum = 4,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_IGNORE(4) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -2812,7 +2170,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* QinHeng devices */
+ {
+ USB_DEVICE(0x1a86, 0x752d),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "QinHeng",
+ .product_name = "CH345",
+ .ifnum = 1,
+@@ -2826,7 +2184,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Miditech devices */
+ {
+ USB_DEVICE(0x4752, 0x0011),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Miditech",
+ .product_name = "Midistart-2",
+ .ifnum = 0,
+@@ -2838,7 +2196,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* this ID used by both Miditech MidiStudio-2 and CME UF-x */
+ USB_DEVICE(0x7104, 0x2202),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .ifnum = 0,
+ .type = QUIRK_MIDI_CME
+ }
+@@ -2848,20 +2206,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ /* Thanks to Clemens Ladisch <clemens@ladisch.de> */
+ USB_DEVICE(0x0dba, 0x1000),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Digidesign",
+ .product_name = "MBox",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]){
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE{
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ .channels = 2,
+ .iface = 1,
+@@ -2882,9 +2233,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ .channels = 2,
+ .iface = 1,
+@@ -2905,9 +2254,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -2915,24 +2262,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* DIGIDESIGN MBOX 2 */
+ {
+ USB_DEVICE(0x0dba, 0x3000),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Digidesign",
+ .product_name = "Mbox 2",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ .channels = 2,
+ .iface = 2,
+@@ -2950,15 +2287,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
++ { QUIRK_DATA_IGNORE(3) },
+ {
+- .ifnum = 3,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 4,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
+- .formats = SNDRV_PCM_FMTBIT_S24_3BE,
++ QUIRK_DATA_AUDIOFORMAT(4) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ .channels = 2,
+ .iface = 4,
+ .altsetting = 2,
+@@ -2975,14 +2307,9 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
++ { QUIRK_DATA_IGNORE(5) },
+ {
+- .ifnum = 5,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 6,
+- .type = QUIRK_MIDI_MIDIMAN,
+- .data = &(const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_MIDIMAN(6) {
+ .out_ep = 0x02,
+ .out_cables = 0x0001,
+ .in_ep = 0x81,
+@@ -2990,33 +2317,21 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ /* DIGIDESIGN MBOX 3 */
+ {
+ USB_DEVICE(0x0dba, 0x5000),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Digidesign",
+ .product_name = "Mbox 3",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_IGNORE(1) },
+ {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .fmt_bits = 24,
+ .channels = 4,
+@@ -3043,9 +2358,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(3) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .fmt_bits = 24,
+ .channels = 4,
+@@ -3069,36 +2382,25 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 4,
+- .type = QUIRK_MIDI_FIXED_ENDPOINT,
+- .data = &(const struct snd_usb_midi_endpoint_info) {
++ QUIRK_DATA_MIDI_FIXED_ENDPOINT(4) {
+ .out_cables = 0x0001,
+ .in_cables = 0x0001
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ {
+ /* Tascam US122 MKII - playback-only support */
+ USB_DEVICE_VENDOR_SPEC(0x0644, 0x8021),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "TASCAM",
+ .product_name = "US122 MKII",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 2,
+ .iface = 1,
+@@ -3119,9 +2421,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3129,20 +2429,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Denon DN-X1600 */
+ {
+ USB_AUDIO_DEVICE(0x154e, 0x500e),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Denon",
+ .product_name = "DN-X1600",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]){
++ QUIRK_DATA_COMPOSITE{
++ { QUIRK_DATA_IGNORE(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE,
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 1,
+@@ -3163,9 +2456,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 2,
+@@ -3185,13 +2476,8 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = 4,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE,
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_STANDARD_MIDI(4) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3200,17 +2486,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ USB_DEVICE(0x045e, 0x0283),
+ .bInterfaceClass = USB_CLASS_PER_INTERFACE,
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Microsoft",
+ .product_name = "XboxLive Headset/Xbox Communicator",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
+ {
+ /* playback */
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 1,
+ .iface = 0,
+@@ -3226,9 +2508,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ /* capture */
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 1,
+ .iface = 1,
+@@ -3242,9 +2522,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_max = 16000
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3253,18 +2531,11 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ USB_DEVICE(0x200c, 0x100b),
+ .bInterfaceClass = USB_CLASS_PER_INTERFACE,
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4,
+ .iface = 1,
+@@ -3283,9 +2554,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3298,28 +2567,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * enabled in create_standard_audio_quirk().
+ */
+ USB_DEVICE(0x1686, 0x00dd),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- /* Playback */
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE,
+- },
+- {
+- /* Capture */
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE,
+- },
+- {
+- /* Midi */
+- .ifnum = 3,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- },
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) }, /* Playback */
++ { QUIRK_DATA_STANDARD_AUDIO(2) }, /* Capture */
++ { QUIRK_DATA_STANDARD_MIDI(3) }, /* Midi */
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3333,18 +2586,16 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ USB_DEVICE_ID_MATCH_INT_SUBCLASS,
+ .bInterfaceClass = USB_CLASS_AUDIO,
+ .bInterfaceSubClass = USB_SUBCLASS_MIDISTREAMING,
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_MIDI_STANDARD_INTERFACE
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_STANDARD_MIDI(QUIRK_ANY_INTERFACE)
+ }
+ },
+
+ /* Rane SL-1 */
+ {
+ USB_DEVICE(0x13e5, 0x0001),
+- .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_STANDARD_AUDIO(QUIRK_ANY_INTERFACE)
+ }
+ },
+
+@@ -3360,24 +2611,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * and only the 48 kHz sample rate works for the playback interface.
+ */
+ USB_DEVICE(0x0a12, 0x1243),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
+- /* Capture */
+- {
+- .ifnum = 1,
+- .type = QUIRK_IGNORE_INTERFACE,
+- },
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
++ { QUIRK_DATA_IGNORE(1) }, /* Capture */
+ /* Playback */
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .iface = 2,
+@@ -3396,9 +2636,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3411,19 +2649,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * even on windows.
+ */
+ USB_DEVICE(0x19b5, 0x0021),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ /* Playback */
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .iface = 1,
+@@ -3442,29 +2673,20 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+ /* MOTU Microbook II */
+ {
+ USB_DEVICE_VENDOR_SPEC(0x07fd, 0x0004),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "MOTU",
+ .product_name = "MicroBookII",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(0) },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ .channels = 6,
+ .iface = 0,
+@@ -3485,9 +2707,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ .channels = 8,
+ .iface = 0,
+@@ -3508,9 +2728,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3522,14 +2740,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * The feedback for the output is the input.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0023),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 12,
+ .iface = 0,
+@@ -3546,9 +2760,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 10,
+ .iface = 0,
+@@ -3566,9 +2778,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3611,14 +2821,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * but not for DVS (Digital Vinyl Systems) like in Mixxx.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0017),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8, // outputs
+ .iface = 0,
+@@ -3635,9 +2841,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8, // inputs
+ .iface = 0,
+@@ -3655,9 +2859,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 48000 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3668,14 +2870,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * The feedback for the output is the dummy input.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000e),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4,
+ .iface = 0,
+@@ -3692,9 +2890,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 2,
+ .iface = 0,
+@@ -3712,9 +2908,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3725,14 +2919,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * PCM is 6 channels out & 4 channels in @ 44.1 fixed
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000d),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 6, //Master, Headphones & Booth
+ .iface = 0,
+@@ -3749,9 +2939,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4, //2x RCA inputs (CH1 & CH2)
+ .iface = 0,
+@@ -3769,9 +2957,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3783,14 +2969,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * The Feedback for the output is the input
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x001e),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 4,
+ .iface = 0,
+@@ -3807,9 +2989,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 6,
+ .iface = 0,
+@@ -3827,9 +3007,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3840,14 +3018,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * 10 channels playback & 12 channels capture @ 44.1/48/96kHz S24LE
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000a),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 10,
+ .iface = 0,
+@@ -3868,9 +3042,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 12,
+ .iface = 0,
+@@ -3892,9 +3064,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3906,14 +3076,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * The Feedback for the output is the input
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0029),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 6,
+ .iface = 0,
+@@ -3930,9 +3096,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 6,
+ .iface = 0,
+@@ -3950,9 +3114,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -3970,20 +3132,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ */
+ {
+ USB_AUDIO_DEVICE(0x534d, 0x0021),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "MacroSilicon",
+ .product_name = "MS210x",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(2) },
+ {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(3) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .iface = 3,
+@@ -3998,9 +3153,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_max = 48000,
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4018,20 +3171,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ */
+ {
+ USB_AUDIO_DEVICE(0x534d, 0x2109),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "MacroSilicon",
+ .product_name = "MS2109",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_MIXER,
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_MIXER(2) },
+ {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(3) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .iface = 3,
+@@ -4046,9 +3192,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_max = 48000,
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4058,14 +3202,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * 8 channels playback & 8 channels capture @ 44.1/48/96kHz S24LE
+ */
+ USB_DEVICE_VENDOR_SPEC(0x08e4, 0x017f),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 0,
+@@ -4084,9 +3224,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 0,
+@@ -4106,9 +3244,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100, 48000, 96000 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4118,14 +3254,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * 10 channels playback & 12 channels capture @ 48kHz S24LE
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x001b),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 10,
+ .iface = 0,
+@@ -4144,9 +3276,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 12,
+ .iface = 0,
+@@ -4164,9 +3294,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 48000 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4178,14 +3306,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * Capture on EP 0x86
+ */
+ USB_DEVICE_VENDOR_SPEC(0x08e4, 0x0163),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 0,
+@@ -4205,9 +3329,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8,
+ .iface = 0,
+@@ -4227,9 +3349,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 44100, 48000, 96000 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4240,14 +3360,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * and 8 channels in @ 48 fixed (endpoint 0x82).
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0013),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8, // outputs
+ .iface = 0,
+@@ -4264,9 +3380,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(0) {
+ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ .channels = 8, // inputs
+ .iface = 0,
+@@ -4284,9 +3398,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .rate_table = (unsigned int[]) { 48000 }
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4297,28 +3409,15 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ */
+ USB_DEVICE(0x1395, 0x0300),
+ .bInterfaceClass = USB_CLASS_PER_INTERFACE,
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
+ // Communication
+- {
+- .ifnum = 3,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ { QUIRK_DATA_STANDARD_AUDIO(3) },
+ // Recording
+- {
+- .ifnum = 4,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ { QUIRK_DATA_STANDARD_AUDIO(4) },
+ // Main
+- {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
+- {
+- .ifnum = -1
+- }
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4327,21 +3426,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * Fiero SC-01 (firmware v1.0.0 @ 48 kHz)
+ */
+ USB_DEVICE(0x2b53, 0x0023),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Fiero",
+ .product_name = "SC-01",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
+ /* Playback */
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 2,
+ .fmt_bits = 24,
+@@ -4361,9 +3453,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ /* Capture */
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 2,
+ .fmt_bits = 24,
+@@ -4382,9 +3472,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .clock = 0x29
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4393,21 +3481,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * Fiero SC-01 (firmware v1.0.0 @ 96 kHz)
+ */
+ USB_DEVICE(0x2b53, 0x0024),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Fiero",
+ .product_name = "SC-01",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
+ /* Playback */
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 2,
+ .fmt_bits = 24,
+@@ -4427,9 +3508,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ /* Capture */
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 2,
+ .fmt_bits = 24,
+@@ -4448,9 +3527,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .clock = 0x29
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4459,21 +3536,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * Fiero SC-01 (firmware v1.1.0)
+ */
+ USB_DEVICE(0x2b53, 0x0031),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Fiero",
+ .product_name = "SC-01",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = &(const struct snd_usb_audio_quirk[]) {
+- {
+- .ifnum = 0,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE
+- },
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(0) },
+ /* Playback */
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(1) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 2,
+ .fmt_bits = 24,
+@@ -4494,9 +3564,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ /* Capture */
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+- .data = &(const struct audioformat) {
++ QUIRK_DATA_AUDIOFORMAT(2) {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .channels = 2,
+ .fmt_bits = 24,
+@@ -4516,9 +3584,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .clock = 0x29
+ }
+ },
+- {
+- .ifnum = -1
+- }
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+@@ -4527,30 +3593,187 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ * For the standard mode, Mythware XA001AU has ID ffad:a001
+ */
+ USB_DEVICE_VENDOR_SPEC(0xffad, 0xa001),
+- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ QUIRK_DRIVER_INFO {
+ .vendor_name = "Mythware",
+ .product_name = "XA001AU",
+- .ifnum = QUIRK_ANY_INTERFACE,
+- .type = QUIRK_COMPOSITE,
+- .data = (const struct snd_usb_audio_quirk[]) {
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_IGNORE(0) },
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ QUIRK_COMPOSITE_END
++ }
++ }
++},
++{
++ /* Only claim interface 0 */
++ .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
++ USB_DEVICE_ID_MATCH_PRODUCT |
++ USB_DEVICE_ID_MATCH_INT_CLASS |
++ USB_DEVICE_ID_MATCH_INT_NUMBER,
++ .idVendor = 0x2a39,
++ .idProduct = 0x3f8c,
++ .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
++ .bInterfaceNumber = 0,
++ QUIRK_DRIVER_INFO {
++ QUIRK_DATA_COMPOSITE {
++ /*
++ * Three modes depending on sample rate band,
++ * with different channel counts for in/out
++ */
++ { QUIRK_DATA_STANDARD_MIXER(0) },
++ {
++ QUIRK_DATA_AUDIOFORMAT(0) {
++ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .channels = 34, // outputs
++ .fmt_bits = 24,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x02,
++ .ep_idx = 1,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_32000 |
++ SNDRV_PCM_RATE_44100 |
++ SNDRV_PCM_RATE_48000,
++ .rate_min = 32000,
++ .rate_max = 48000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) {
++ 32000, 44100, 48000,
++ },
++ .sync_ep = 0x81,
++ .sync_iface = 0,
++ .sync_altsetting = 1,
++ .sync_ep_idx = 0,
++ .implicit_fb = 1,
++ },
++ },
++ {
++ QUIRK_DATA_AUDIOFORMAT(0) {
++ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .channels = 18, // outputs
++ .fmt_bits = 24,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x02,
++ .ep_idx = 1,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_64000 |
++ SNDRV_PCM_RATE_88200 |
++ SNDRV_PCM_RATE_96000,
++ .rate_min = 64000,
++ .rate_max = 96000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) {
++ 64000, 88200, 96000,
++ },
++ .sync_ep = 0x81,
++ .sync_iface = 0,
++ .sync_altsetting = 1,
++ .sync_ep_idx = 0,
++ .implicit_fb = 1,
++ },
++ },
+ {
+- .ifnum = 0,
+- .type = QUIRK_IGNORE_INTERFACE,
++ QUIRK_DATA_AUDIOFORMAT(0) {
++ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .channels = 10, // outputs
++ .fmt_bits = 24,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x02,
++ .ep_idx = 1,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_KNOT |
++ SNDRV_PCM_RATE_176400 |
++ SNDRV_PCM_RATE_192000,
++ .rate_min = 128000,
++ .rate_max = 192000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) {
++ 128000, 176400, 192000,
++ },
++ .sync_ep = 0x81,
++ .sync_iface = 0,
++ .sync_altsetting = 1,
++ .sync_ep_idx = 0,
++ .implicit_fb = 1,
++ },
+ },
+ {
+- .ifnum = 1,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE,
++ QUIRK_DATA_AUDIOFORMAT(0) {
++ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .channels = 32, // inputs
++ .fmt_bits = 24,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x81,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_32000 |
++ SNDRV_PCM_RATE_44100 |
++ SNDRV_PCM_RATE_48000,
++ .rate_min = 32000,
++ .rate_max = 48000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) {
++ 32000, 44100, 48000,
++ }
++ }
+ },
+ {
+- .ifnum = 2,
+- .type = QUIRK_AUDIO_STANDARD_INTERFACE,
++ QUIRK_DATA_AUDIOFORMAT(0) {
++ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .channels = 16, // inputs
++ .fmt_bits = 24,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x81,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_64000 |
++ SNDRV_PCM_RATE_88200 |
++ SNDRV_PCM_RATE_96000,
++ .rate_min = 64000,
++ .rate_max = 96000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) {
++ 64000, 88200, 96000,
++ }
++ }
+ },
+ {
+- .ifnum = -1
+- }
++ QUIRK_DATA_AUDIOFORMAT(0) {
++ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .channels = 8, // inputs
++ .fmt_bits = 24,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x81,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_KNOT |
++ SNDRV_PCM_RATE_176400 |
++ SNDRV_PCM_RATE_192000,
++ .rate_min = 128000,
++ .rate_max = 192000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) {
++ 128000, 176400, 192000,
++ }
++ }
++ },
++ QUIRK_COMPOSITE_END
+ }
+ }
+ },
+-
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
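The quirks-table.h hunks above are a mechanical conversion: each open-coded
.driver_info initializer is replaced by a helper macro. Judging purely from
the before/after pairs in this diff, the macros presumably expand along the
following lines (a sketch reconstructed from the hunks, not copied from the
header; apart from the new RME Digiface USB entry near the end, every hunk
only condenses notation, which is why the file shrinks by several hundred
lines):

    /* Inferred expansions: the names are real, the bodies are
     * reconstructed from the -/+ line pairs in the diff above. */
    #define QUIRK_DRIVER_INFO \
            .driver_info = (unsigned long)&(const struct snd_usb_audio_quirk)

    #define QUIRK_DATA_COMPOSITE \
            .ifnum = QUIRK_ANY_INTERFACE, .type = QUIRK_COMPOSITE, \
            .data = &(const struct snd_usb_audio_quirk[])
    #define QUIRK_COMPOSITE_END     { .ifnum = -1 }

    #define QUIRK_DATA_STANDARD_AUDIO(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_AUDIO_STANDARD_INTERFACE
    #define QUIRK_DATA_STANDARD_MIDI(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_MIDI_STANDARD_INTERFACE
    #define QUIRK_DATA_STANDARD_MIXER(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_AUDIO_STANDARD_MIXER
    #define QUIRK_DATA_IGNORE(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_IGNORE_INTERFACE
    #define QUIRK_DATA_RAW_BYTES(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_MIDI_RAW_BYTES

    #define QUIRK_DATA_AUDIOFORMAT(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_AUDIO_FIXED_ENDPOINT, \
            .data = &(const struct audioformat)
    #define QUIRK_DATA_MIDI_FIXED_ENDPOINT(_ifnum) \
            .ifnum = (_ifnum), .type = QUIRK_MIDI_FIXED_ENDPOINT, \
            .data = &(const struct snd_usb_midi_endpoint_info)

The remaining variants (QUIRK_DATA_MIDI_MIDIMAN, QUIRK_DATA_MIDI_EMAGIC)
evidently follow the same pattern with their respective .type values; the
braces that follow each macro invocation in the table supply the compound
literal that .data points at.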
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index e7b68c67852e92..f4c68eb7e07a12 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1389,6 +1389,27 @@ static int snd_usb_motu_m_series_boot_quirk(struct usb_device *dev)
+ return 0;
+ }
+
++static int snd_usb_rme_digiface_boot_quirk(struct usb_device *dev)
++{
++ /* Disable mixer, internal clock, all outputs ADAT, 48kHz, TMS off */
++ snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++ 16, 0x40, 0x2410, 0x7fff, NULL, 0);
++ snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++ 18, 0x40, 0x0104, 0xffff, NULL, 0);
++
++ /* Disable loopback for all inputs */
++ for (int ch = 0; ch < 32; ch++)
++ snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++ 22, 0x40, 0x400, ch, NULL, 0);
++
++ /* Unity gain for all outputs */
++ for (int ch = 0; ch < 34; ch++)
++ snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++ 21, 0x40, 0x9000, 0x100 + ch, NULL, 0);
++
++ return 0;
++}
++
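The new boot quirk issues raw vendor control transfers. Assuming
snd_usb_ctl_msg() keeps the usual (request, requesttype, value, index)
argument order of usb_control_msg(), the first call is roughly equivalent
to the sketch below (the register meanings come from the comments in the
hunk, not from any published datasheet):

    /* Sketch only: bmRequestType 0x40 is
     * USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE;
     * wValue/wIndex carry device-specific configuration. */
    usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
                    16,                 /* bRequest */
                    USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                    0x2410,             /* wValue */
                    0x7fff,             /* wIndex */
                    NULL, 0, 1000);     /* no data stage */

The two loops then address per-channel registers through wIndex: indices
0..31 for the 32 input loopback switches and 0x100..0x121 for the 34
output gain cells, matching the 34-out/32-in channel counts declared in
the new quirk table entry above.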
+ /*
+ * Setup quirks
+ */
+@@ -1616,6 +1637,8 @@ int snd_usb_apply_boot_quirk(struct usb_device *dev,
+ get_iface_desc(intf->altsetting)->bInterfaceNumber < 3)
+ return snd_usb_motu_microbookii_boot_quirk(dev);
+ break;
++ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ return snd_usb_rme_digiface_boot_quirk(dev);
+ }
+
+ return 0;
+@@ -1771,6 +1794,38 @@ static void mbox3_set_format_quirk(struct snd_usb_substream *subs,
+ dev_warn(&subs->dev->dev, "MBOX3: Couldn't set the sample rate");
+ }
+
++static const int rme_digiface_rate_table[] = {
++ 32000, 44100, 48000, 0,
++ 64000, 88200, 96000, 0,
++ 128000, 176400, 192000, 0,
++};
++
++static int rme_digiface_set_format_quirk(struct snd_usb_substream *subs)
++{
++ unsigned int cur_rate = subs->data_endpoint->cur_rate;
++ u16 val;
++ int speed_mode;
++ int id;
++
++ for (id = 0; id < ARRAY_SIZE(rme_digiface_rate_table); id++) {
++ if (rme_digiface_rate_table[id] == cur_rate)
++ break;
++ }
++
++ if (id >= ARRAY_SIZE(rme_digiface_rate_table))
++ return -EINVAL;
++
++ /* 2, 3, 4 for 1x, 2x, 4x */
++ speed_mode = (id >> 2) + 2;
++ val = (id << 3) | (speed_mode << 12);
++
++ /* Set the sample rate */
++ snd_usb_ctl_msg(subs->stream->chip->dev,
++ usb_sndctrlpipe(subs->stream->chip->dev, 0),
++ 16, 0x40, val, 0x7078, NULL, 0);
++ return 0;
++}
++
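The rate encoding packs the table index and the speed band into one
16-bit value: bits 3..6 hold the index into rme_digiface_rate_table
(rows of four per band) and bits 12..14 hold the speed mode. A worked
example, as a standalone sketch assumed to mirror the function above:

    /* cur_rate = 96000 is entry 6 of the table (2x band, third column) */
    int id = 6;
    int speed_mode = (id >> 2) + 2;              /* 1 + 2 = 3, i.e. 2x */
    unsigned int val = (id << 3) | (speed_mode << 12);
                                                 /* 0x30 | 0x3000 = 0x3030 */
    /* result: bRequest 16, wValue 0x3030, wIndex 0x7078 */

The zero padding entries make each band exactly four slots wide, so
id >> 2 selects the band and the unused fourth slot can never match a
real rate.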
+ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ const struct audioformat *fmt)
+ {
+@@ -1795,6 +1850,9 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x0dba, 0x5000):
+ mbox3_set_format_quirk(subs, fmt); /* Digidesign Mbox 3 */
+ break;
++ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ rme_digiface_set_format_quirk(subs);
++ break;
+ }
+ }
+
+@@ -2163,6 +2221,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_DISABLE_AUTOSUSPEND),
+ DEVICE_FLG(0x17aa, 0x104d, /* Lenovo ThinkStation P620 Internal Speaker + Front Headset */
+ QUIRK_FLAG_DISABLE_AUTOSUSPEND),
++ DEVICE_FLG(0x1852, 0x5062, /* Luxman D-08u */
++ QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ DEVICE_FLG(0x1852, 0x5065, /* Luxman DA-06 */
+ QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */
+@@ -2221,6 +2281,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */
+ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++ DEVICE_FLG(0x2d95, 0x8011, /* VIVO USB-C HEADSET */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
+index 24b7d017ec2c11..b7965dfff33a9a 100644
+--- a/tools/arch/x86/kcpuid/kcpuid.c
++++ b/tools/arch/x86/kcpuid/kcpuid.c
+@@ -7,7 +7,8 @@
+ #include <string.h>
+ #include <getopt.h>
+
+-#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
++#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
++#define min(a, b) (((a) < (b)) ? (a) : (b))
+
+ typedef unsigned int u32;
+ typedef unsigned long long u64;
+@@ -207,12 +208,9 @@ static void raw_dump_range(struct cpuid_range *range)
+ #define MAX_SUBLEAF_NUM 32
+ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ {
+- u32 max_func, idx_func;
+- int subleaf;
++ u32 max_func, idx_func, subleaf, max_subleaf;
++ u32 eax, ebx, ecx, edx, f = input_eax;
+ struct cpuid_range *range;
+- u32 eax, ebx, ecx, edx;
+- u32 f = input_eax;
+- int max_subleaf;
+ bool allzero;
+
+ eax = input_eax;
+@@ -258,7 +256,7 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ * others have to be tried (0xf)
+ */
+ if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18)
+- max_subleaf = (eax & 0xff) + 1;
++ max_subleaf = min((eax & 0xff) + 1, max_subleaf);
+
+ if (f == 0xb)
+ max_subleaf = 2;
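Two notes on the kcpuid hunk: max_subleaf is presumably already
initialized (to MAX_SUBLEAF_NUM, defined as 32 in the context above)
before this point, so the change merely clamps it, and a CPUID leaf
advertising more subleaves than the tool's buffers hold no longer walks
past them. The min() helper added at the top of the file is the classic
macro form:

    #define min(a, b) (((a) < (b)) ? (a) : (b))
    /* Caveat of any such macro: both arguments are evaluated twice,
     * so avoid side effects; min(i++, limit) would be a bug. Both
     * arguments at this call site are plain expressions. */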
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index 968714b4c3d45b..0f2106218e1f0d 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -482,9 +482,9 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
+ if (prog_flags[i] || json_output) {
+ NET_START_ARRAY("prog_flags", "%s ");
+ for (j = 0; prog_flags[i] && j < 32; j++) {
+- if (!(prog_flags[i] & (1 << j)))
++ if (!(prog_flags[i] & (1U << j)))
+ continue;
+- NET_DUMP_UINT_ONLY(1 << j);
++ NET_DUMP_UINT_ONLY(1U << j);
+ }
+ NET_END_ARRAY("");
+ }
+@@ -493,9 +493,9 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
+ if (link_flags[i] || json_output) {
+ NET_START_ARRAY("link_flags", "%s ");
+ for (j = 0; link_flags[i] && j < 32; j++) {
+- if (!(link_flags[i] & (1 << j)))
++ if (!(link_flags[i] & (1U << j)))
+ continue;
+- NET_DUMP_UINT_ONLY(1 << j);
++ NET_DUMP_UINT_ONLY(1U << j);
+ }
+ NET_END_ARRAY("");
+ }
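The switch from 1 << j to 1U << j in the bpftool hunks is not cosmetic:
for j == 31, shifting a signed int 1 into the sign bit is undefined
behaviour in C, and the flag value is printed as unsigned anyway. A
minimal illustration:

    unsigned int ok = 1U << 31;   /* well-defined: 0x80000000 */
    /* int bad = 1 << 31;            undefined: shift into the sign bit */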
+@@ -824,6 +824,9 @@ static void show_link_netfilter(void)
+ nf_link_count++;
+ }
+
++ if (!nf_link_info)
++ return;
++
+ qsort(nf_link_info, nf_link_count, sizeof(*nf_link_info), netfilter_link_compar);
+
+ for (id = 0; id < nf_link_count; id++) {
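The early return covers the case where no netfilter links exist: the
info array may never have been allocated, and handing a null base
pointer to qsort() is at best questionable (arguably undefined) even
with a zero element count, so the guard avoids that and skips the empty
print loop in one move.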
+diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
+index 3ce316cc9f970b..7a00f3066a9807 100644
+--- a/tools/hv/hv_fcopy_uio_daemon.c
++++ b/tools/hv/hv_fcopy_uio_daemon.c
+@@ -296,6 +296,13 @@ static int hv_fcopy_start(struct hv_start_fcopy *smsg_in)
+ file_name = (char *)malloc(file_size * sizeof(char));
+ path_name = (char *)malloc(path_size * sizeof(char));
+
++ if (!file_name || !path_name) {
++ free(file_name);
++ free(path_name);
++ syslog(LOG_ERR, "Can't allocate memory for file name and/or path name");
++ return HV_E_FAIL;
++ }
++
+ wcstoutf8(file_name, (__u16 *)in_file_name, file_size);
+ wcstoutf8(path_name, (__u16 *)in_path_name, path_size);
+
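Worth noting why the added error path can free both pointers without
checking which allocation failed: the C standard defines free(NULL) as
a no-op, so the pattern is safe even on partial failure. A condensed
sketch of the same idiom:

    char *file_name = malloc(file_size);
    char *path_name = malloc(path_size);
    if (!file_name || !path_name) {
            free(file_name);    /* no-op if this one is NULL */
            free(path_name);
            return HV_E_FAIL;
    }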
+diff --git a/tools/include/nolibc/arch-powerpc.h b/tools/include/nolibc/arch-powerpc.h
+index ac212e6185b26d..41ebd394b90c7a 100644
+--- a/tools/include/nolibc/arch-powerpc.h
++++ b/tools/include/nolibc/arch-powerpc.h
+@@ -172,7 +172,7 @@
+ _ret; \
+ })
+
+-#ifndef __powerpc64__
++#if !defined(__powerpc64__) && !defined(__clang__)
+ /* FIXME: For 32-bit PowerPC, with newer gcc compilers (e.g. gcc 13.1.0),
+ * "omit-frame-pointer" fails with __attribute__((no_stack_protector)) but
+ * works with __attribute__((__optimize__("-fno-stack-protector")))
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index f028f113c4fd4e..9eb998c5999830 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -636,7 +636,12 @@ static struct hist_entry *hists__findnew_entry(struct hists *hists,
+ * mis-adjust symbol addresses when computing
+ * the history counter to increment.
+ */
+- if (he->ms.map != entry->ms.map) {
++ if (hists__has(hists, sym) && he->ms.map != entry->ms.map) {
++ if (he->ms.sym) {
++ u64 addr = he->ms.sym->start;
++ he->ms.sym = map__find_symbol(entry->ms.map, addr);
++ }
++
+ map__put(he->ms.map);
+ he->ms.map = map__get(entry->ms.map);
+ }
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 8477edefc29978..706be5e4a07617 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2270,8 +2270,12 @@ static void save_lbr_cursor_node(struct thread *thread,
+ cursor->curr = cursor->first;
+ else
+ cursor->curr = cursor->curr->next;
++
++ map_symbol__exit(&lbr_stitch->prev_lbr_cursor[idx].ms);
+ memcpy(&lbr_stitch->prev_lbr_cursor[idx], cursor->curr,
+ sizeof(struct callchain_cursor_node));
++ lbr_stitch->prev_lbr_cursor[idx].ms.maps = maps__get(cursor->curr->ms.maps);
++ lbr_stitch->prev_lbr_cursor[idx].ms.map = map__get(cursor->curr->ms.map);
+
+ lbr_stitch->prev_lbr_cursor[idx].valid = true;
+ cursor->pos++;
+@@ -2482,6 +2486,9 @@ static bool has_stitched_lbr(struct thread *thread,
+ memcpy(&stitch_node->cursor, &lbr_stitch->prev_lbr_cursor[i],
+ sizeof(struct callchain_cursor_node));
+
++ stitch_node->cursor.ms.maps = maps__get(lbr_stitch->prev_lbr_cursor[i].ms.maps);
++ stitch_node->cursor.ms.map = map__get(lbr_stitch->prev_lbr_cursor[i].ms.map);
++
+ if (callee)
+ list_add(&stitch_node->node, &lbr_stitch->lists);
+ else
+@@ -2505,6 +2512,8 @@ static bool alloc_lbr_stitch(struct thread *thread, unsigned int max_lbr)
+ if (!thread__lbr_stitch(thread)->prev_lbr_cursor)
+ goto free_lbr_stitch;
+
++ thread__lbr_stitch(thread)->prev_lbr_cursor_size = max_lbr + 1;
++
+ INIT_LIST_HEAD(&thread__lbr_stitch(thread)->lists);
+ INIT_LIST_HEAD(&thread__lbr_stitch(thread)->free_lists);
+
+@@ -2560,8 +2569,12 @@ static int resolve_lbr_callchain_sample(struct thread *thread,
+ max_lbr, callee);
+
+ if (!stitched_lbr && !list_empty(&lbr_stitch->lists)) {
+- list_replace_init(&lbr_stitch->lists,
+- &lbr_stitch->free_lists);
++ struct stitch_list *stitch_node;
++
++ list_for_each_entry(stitch_node, &lbr_stitch->lists, node)
++ map_symbol__exit(&stitch_node->cursor.ms);
++
++ list_splice_init(&lbr_stitch->lists, &lbr_stitch->free_lists);
+ }
+ memcpy(&lbr_stitch->prev_sample, sample, sizeof(*sample));
+ }
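The three perf hunks above all enforce the same ownership rule: any
copied callchain cursor node must take its own references on the maps
and map it points at, and map_symbol__exit() must drop them before the
slot is reused or freed. Schematically (assumed semantics of the
get/put helpers, inferred from the hunks themselves):

    map_symbol__exit(&dst->ms);             /* drop old refs, if any */
    memcpy(dst, src, sizeof(*src));
    dst->ms.maps = maps__get(src->ms.maps); /* take a new reference */
    dst->ms.map  = map__get(src->ms.map);   /* take a new reference */

Without the extra gets, the memcpy()'d pointers would alias references
owned by the source node and be dropped twice (or never), depending on
which side is torn down first.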
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 142e9d447ce721..649550e9b7aa8c 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -17,7 +17,7 @@ src_feature_tests = getenv('srctree') + '/tools/build/feature'
+
+ def clang_has_option(option):
+ cc_output = Popen([cc, cc_options + option, path.join(src_feature_tests, "test-hello.c") ], stderr=PIPE).stderr.readlines()
+- return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o))] == [ ]
++ return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o) or (b"unknown warning option" in o))] == [ ]
+
+ if cc_is_clang:
+ from sysconfig import get_config_vars
+@@ -63,6 +63,8 @@ cflags = getenv('CFLAGS', '').split()
+ cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter', '-Wno-redundant-decls' ]
+ if cc_is_clang:
+ cflags += ["-Wno-unused-command-line-argument" ]
++ if clang_has_option("-Wno-cast-function-type-mismatch"):
++ cflags += ["-Wno-cast-function-type-mismatch" ]
+ else:
+ cflags += ['-Wno-cast-function-type' ]
+
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 87c59aa9fe38bf..0ffdd52d86d707 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -476,6 +476,7 @@ void thread__free_stitch_list(struct thread *thread)
+ return;
+
+ list_for_each_entry_safe(pos, tmp, &lbr_stitch->lists, node) {
++ map_symbol__exit(&pos->cursor.ms);
+ list_del_init(&pos->node);
+ free(pos);
+ }
+@@ -485,6 +486,9 @@ void thread__free_stitch_list(struct thread *thread)
+ free(pos);
+ }
+
++ for (unsigned int i = 0 ; i < lbr_stitch->prev_lbr_cursor_size; i++)
++ map_symbol__exit(&lbr_stitch->prev_lbr_cursor[i].ms);
++
+ zfree(&lbr_stitch->prev_lbr_cursor);
+ free(thread__lbr_stitch(thread));
+ thread__set_lbr_stitch(thread, NULL);
+diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
+index 8b4a3c69bad19c..6cbf6eb2812e05 100644
+--- a/tools/perf/util/thread.h
++++ b/tools/perf/util/thread.h
+@@ -26,6 +26,7 @@ struct lbr_stitch {
+ struct list_head free_lists;
+ struct perf_sample prev_sample;
+ struct callchain_cursor_node *prev_lbr_cursor;
++ unsigned int prev_lbr_cursor_size;
+ };
+
+ DECLARE_RC_STRUCT(thread) {
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+index fd28c1157bd3d0..72f565af4f829e 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+@@ -477,6 +477,7 @@ static void testmod_unregister_uprobe(void)
+ if (uprobe.offset) {
+ uprobe_unregister(d_real_inode(uprobe.path.dentry),
+ uprobe.offset, &uprobe.consumer);
++ path_put(&uprobe.path);
+ uprobe.offset = 0;
+ }
+
+diff --git a/tools/testing/selftests/breakpoints/step_after_suspend_test.c b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+index dfec31fb9b30d1..8d275f03e977f5 100644
+--- a/tools/testing/selftests/breakpoints/step_after_suspend_test.c
++++ b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+@@ -152,7 +152,10 @@ void suspend(void)
+ if (err < 0)
+ ksft_exit_fail_msg("timerfd_settime() failed\n");
+
+- if (write(power_state_fd, "mem", strlen("mem")) != strlen("mem"))
++ system("(echo mem > /sys/power/state) 2> /dev/null");
++
++ timerfd_gettime(timerfd, &spec);
++ if (spec.it_value.tv_sec != 0 || spec.it_value.tv_nsec != 0)
+ ksft_exit_fail_msg("Failed to enter Suspend state\n");
+
+ close(timerfd);
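The replacement suspend detection is indirect but appears more robust:
the wakeup timer armed a few lines earlier is expected to expire while
the system is suspended, so after the now-unchecked write to
/sys/power/state the test asks timerfd_gettime() how much time remains.
A non-zero remainder means the write returned without the machine ever
sleeping, which is reported as a failure; routing the write through
system() with stderr discarded avoids a spurious failure when the write
itself reports an error despite a successful suspend/resume cycle.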
+diff --git a/tools/testing/selftests/devices/probe/test_discoverable_devices.py b/tools/testing/selftests/devices/probe/test_discoverable_devices.py
+index d94a74b8a05487..d7a2bb91c80796 100755
+--- a/tools/testing/selftests/devices/probe/test_discoverable_devices.py
++++ b/tools/testing/selftests/devices/probe/test_discoverable_devices.py
+@@ -45,7 +45,7 @@ def find_pci_controller_dirs():
+
+
+ def find_usb_controller_dirs():
+- usb_controller_sysfs_dir = "usb[\d]+"
++ usb_controller_sysfs_dir = r"usb[\d]+"
+
+ dir_regex = re.compile(usb_controller_sysfs_dir)
+ for d in os.scandir(sysfs_usb_devices):
+@@ -91,7 +91,7 @@ def get_acpi_uid(sysfs_dev_dir):
+
+
+ def get_usb_version(sysfs_dev_dir):
+- re_usb_version = re.compile("PRODUCT=.*/(\d)/.*")
++ re_usb_version = re.compile(r"PRODUCT=.*/(\d)/.*")
+ with open(os.path.join(sysfs_dev_dir, "uevent")) as f:
+ return int(re_usb_version.search(f.read()).group(1))
+
+diff --git a/tools/testing/selftests/hid/Makefile b/tools/testing/selftests/hid/Makefile
+index 2b5ea18bde38bd..346328e2295c30 100644
+--- a/tools/testing/selftests/hid/Makefile
++++ b/tools/testing/selftests/hid/Makefile
+@@ -17,6 +17,8 @@ TEST_PROGS += hid-tablet.sh
+ TEST_PROGS += hid-usb_crash.sh
+ TEST_PROGS += hid-wacom.sh
+
++TEST_FILES := run-hid-tools-tests.sh
++
+ CXX ?= $(CROSS_COMPILE)g++
+
+ HOSTPKG_CONFIG := pkg-config
+diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+index d680c00d2853ac..67df7b47087f03 100755
+--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+@@ -254,7 +254,7 @@ function cleanup_hugetlb_memory() {
+ local cgroup="$1"
+ if [[ "$(pgrep -f write_to_hugetlbfs)" != "" ]]; then
+ echo killing write_to_hugetlbfs
+- killall -2 write_to_hugetlbfs
++ killall -2 --wait write_to_hugetlbfs
+ wait_for_hugetlb_memory_to_get_depleted $cgroup
+ fi
+ set -e
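The --wait flag makes killall block until the signalled
write_to_hugetlbfs processes have actually exited, closing the race in
which the script went on to check hugetlb counters while a writer still
held its mapping.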
+diff --git a/tools/testing/selftests/mm/pagemap_ioctl.c b/tools/testing/selftests/mm/pagemap_ioctl.c
+index fc90af2a97b80a..bcc73b4e805c68 100644
+--- a/tools/testing/selftests/mm/pagemap_ioctl.c
++++ b/tools/testing/selftests/mm/pagemap_ioctl.c
+@@ -15,7 +15,7 @@
+ #include <sys/ioctl.h>
+ #include <sys/stat.h>
+ #include <math.h>
+-#include <asm-generic/unistd.h>
++#include <asm/unistd.h>
+ #include <pthread.h>
+ #include <sys/resource.h>
+ #include <assert.h>
+diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
+index 6a2caba19ee1d9..1289d311efd705 100644
+--- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
++++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
+@@ -28,7 +28,7 @@ enum method {
+
+ /* Global variables. */
+ static const char *self;
+-static char *shmaddr;
++static int *shmaddr;
+ static int shmid;
+
+ /*
+@@ -47,15 +47,17 @@ void sig_handler(int signo)
+ {
+ printf("Received %d.\n", signo);
+ if (signo == SIGINT) {
+- printf("Deleting the memory\n");
+- if (shmdt((const void *)shmaddr) != 0) {
+- perror("Detach failure");
++ if (shmaddr) {
++ printf("Deleting the memory\n");
++ if (shmdt((const void *)shmaddr) != 0) {
++ perror("Detach failure");
++ shmctl(shmid, IPC_RMID, NULL);
++ exit(4);
++ }
++
+ shmctl(shmid, IPC_RMID, NULL);
+- exit(4);
++ printf("Done deleting the memory\n");
+ }
+-
+- shmctl(shmid, IPC_RMID, NULL);
+- printf("Done deleting the memory\n");
+ }
+ exit(2);
+ }
+@@ -211,7 +213,8 @@ int main(int argc, char **argv)
+ shmctl(shmid, IPC_RMID, NULL);
+ exit(2);
+ }
+- printf("shmaddr: %p\n", ptr);
++ shmaddr = ptr;
++ printf("shmaddr: %p\n", shmaddr);
+
+ break;
+ default:
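[For reference, the pattern these hunks establish — record the attach address only after shmat() succeeds, and make the signal-time cleanup a no-op until then — looks like this in isolation (sketch with hypothetical names, not the patch itself):

    #include <stdio.h>
    #include <sys/shm.h>

    static int *shmaddr;    /* stays NULL until shmat() has succeeded */
    static int shmid;

    static void cleanup_shm(void)
    {
            if (!shmaddr)                   /* nothing attached yet */
                    return;
            if (shmdt(shmaddr) != 0)        /* detach before removal */
                    perror("shmdt");
            shmctl(shmid, IPC_RMID, NULL);  /* mark segment for removal */
            shmaddr = NULL;
    }
]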
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+index bd9317bf5adafb..dc056fec993bd3 100644
+--- a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
++++ b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+@@ -207,6 +207,7 @@ static int conntrack_data_generate_v6(struct mnl_socket *sock,
+ static int count_entries(const struct nlmsghdr *nlh, void *data)
+ {
+ reply_counter++;
++ return MNL_CB_OK;
+ }
+
+ static int conntracK_count_zone(struct mnl_socket *sock, uint16_t zone)
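[The one-line fix above matters because libmnl callbacks drive mnl_cb_run(): a callback is expected to return MNL_CB_OK to keep consuming the dump, MNL_CB_STOP to end it, or MNL_CB_ERROR to abort, and falling off the end of the function returns an indeterminate value. A minimal well-formed counter callback (sketch):

    #include <libmnl/libmnl.h>

    static unsigned int reply_counter;

    static int count_entries(const struct nlmsghdr *nlh, void *data)
    {
            reply_counter++;
            return MNL_CB_OK;       /* keep walking the dump */
    }
]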
+diff --git a/tools/testing/selftests/net/netfilter/nft_audit.sh b/tools/testing/selftests/net/netfilter/nft_audit.sh
+index 902f8114bc80fc..87f2b4c725aa02 100755
+--- a/tools/testing/selftests/net/netfilter/nft_audit.sh
++++ b/tools/testing/selftests/net/netfilter/nft_audit.sh
+@@ -48,12 +48,31 @@ logread_pid=$!
+ trap 'kill $logread_pid; rm -f $logfile $rulefile' EXIT
+ exec 3<"$logfile"
+
++lsplit='s/^\(.*\) entries=\([^ ]*\) \(.*\)$/pfx="\1"\nval="\2"\nsfx="\3"/'
++summarize_logs() {
++ sum=0
++ while read line; do
++ eval $(sed "$lsplit" <<< "$line")
++ [[ $sum -gt 0 ]] && {
++ [[ "$pfx $sfx" == "$tpfx $tsfx" ]] && {
++ let "sum += val"
++ continue
++ }
++ echo "$tpfx entries=$sum $tsfx"
++ }
++ tpfx="$pfx"
++ tsfx="$sfx"
++ sum=$val
++ done
++ echo "$tpfx entries=$sum $tsfx"
++}
++
+ do_test() { # (cmd, log)
+ echo -n "testing for cmd: $1 ... "
+ cat <&3 >/dev/null
+ $1 >/dev/null || exit 1
+ sleep 0.1
+- res=$(diff -a -u <(echo "$2") - <&3)
++ res=$(diff -a -u <(echo "$2") <(summarize_logs <&3))
+ [ $? -eq 0 ] && { echo "OK"; return; }
+ echo "FAIL"
+ grep -v '^\(---\|+++\|@@\)' <<< "$res"
+@@ -152,31 +171,17 @@ do_test 'nft reset rules t1 c2' \
+ 'table=t1 family=2 entries=3 op=nft_reset_rule'
+
+ do_test 'nft reset rules table t1' \
+-'table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule'
++'table=t1 family=2 entries=9 op=nft_reset_rule'
+
+ do_test 'nft reset rules t2 c3' \
+-'table=t2 family=2 entries=189 op=nft_reset_rule
+-table=t2 family=2 entries=188 op=nft_reset_rule
+-table=t2 family=2 entries=126 op=nft_reset_rule'
++'table=t2 family=2 entries=503 op=nft_reset_rule'
+
+ do_test 'nft reset rules t2' \
+-'table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=186 op=nft_reset_rule
+-table=t2 family=2 entries=188 op=nft_reset_rule
+-table=t2 family=2 entries=129 op=nft_reset_rule'
++'table=t2 family=2 entries=509 op=nft_reset_rule'
+
+ do_test 'nft reset rules' \
+-'table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=180 op=nft_reset_rule
+-table=t2 family=2 entries=188 op=nft_reset_rule
+-table=t2 family=2 entries=135 op=nft_reset_rule'
++'table=t1 family=2 entries=9 op=nft_reset_rule
++table=t2 family=2 entries=509 op=nft_reset_rule'
+
+ # resetting sets and elements
+
+@@ -200,13 +205,11 @@ do_test 'nft reset counters t1' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj'
+
+ do_test 'nft reset counters t2' \
+-'table=t2 family=2 entries=342 op=nft_reset_obj
+-table=t2 family=2 entries=158 op=nft_reset_obj'
++'table=t2 family=2 entries=500 op=nft_reset_obj'
+
+ do_test 'nft reset counters' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj
+-table=t2 family=2 entries=341 op=nft_reset_obj
+-table=t2 family=2 entries=159 op=nft_reset_obj'
++table=t2 family=2 entries=500 op=nft_reset_obj'
+
+ # resetting quotas
+
+@@ -217,13 +220,11 @@ do_test 'nft reset quotas t1' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj'
+
+ do_test 'nft reset quotas t2' \
+-'table=t2 family=2 entries=315 op=nft_reset_obj
+-table=t2 family=2 entries=185 op=nft_reset_obj'
++'table=t2 family=2 entries=500 op=nft_reset_obj'
+
+ do_test 'nft reset quotas' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj
+-table=t2 family=2 entries=314 op=nft_reset_obj
+-table=t2 family=2 entries=186 op=nft_reset_obj'
++table=t2 family=2 entries=500 op=nft_reset_obj'
+
+ # deleting rules
+
+diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
+index 093d0512f4c57a..8cbb51dca0cd6b 100644
+--- a/tools/testing/selftests/nolibc/nolibc-test.c
++++ b/tools/testing/selftests/nolibc/nolibc-test.c
+@@ -542,7 +542,7 @@ int expect_strzr(const char *expr, int llen)
+ {
+ int ret = 0;
+
+- llen += printf(" = <%s> ", expr);
++ llen += printf(" = <%s> ", expr ? expr : "(null)");
+ if (expr) {
+ ret = 1;
+ result(llen, FAIL);
+@@ -561,7 +561,7 @@ int expect_strnz(const char *expr, int llen)
+ {
+ int ret = 0;
+
+- llen += printf(" = <%s> ", expr);
++ llen += printf(" = <%s> ", expr ? expr : "(null)");
+ if (!expr) {
+ ret = 1;
+ result(llen, FAIL);
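[Background for these two hunks: passing a null pointer to printf's %s is undefined behavior; glibc happens to print "(null)", but nolibc's printf may dereference the pointer and crash. The portable guard, in isolation (sketch):

    #include <stdio.h>

    static void print_str(const char *s)
    {
            /* %s with NULL is UB, so substitute a visible placeholder */
            printf("<%s>\n", s ? s : "(null)");
    }
]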
+diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
+index 4ae417372e9ebc..7dd5668ea8a6e3 100644
+--- a/tools/testing/selftests/vDSO/parse_vdso.c
++++ b/tools/testing/selftests/vDSO/parse_vdso.c
+@@ -36,6 +36,12 @@
+ #define ELF_BITS_XFORM(bits, x) ELF_BITS_XFORM2(bits, x)
+ #define ELF(x) ELF_BITS_XFORM(ELF_BITS, x)
+
++#ifdef __s390x__
++#define ELF_HASH_ENTRY ELF(Xword)
++#else
++#define ELF_HASH_ENTRY ELF(Word)
++#endif
++
+ static struct vdso_info
+ {
+ bool valid;
+@@ -47,8 +53,8 @@ static struct vdso_info
+ /* Symbol table */
+ ELF(Sym) *symtab;
+ const char *symstrings;
+- ELF(Word) *bucket, *chain;
+- ELF(Word) nbucket, nchain;
++ ELF_HASH_ENTRY *bucket, *chain;
++ ELF_HASH_ENTRY nbucket, nchain;
+
+ /* Version table */
+ ELF(Versym) *versym;
+@@ -115,7 +121,7 @@ void vdso_init_from_sysinfo_ehdr(uintptr_t base)
+ /*
+ * Fish out the useful bits of the dynamic table.
+ */
+- ELF(Word) *hash = 0;
++ ELF_HASH_ENTRY *hash = 0;
+ vdso_info.symstrings = 0;
+ vdso_info.symtab = 0;
+ vdso_info.versym = 0;
+@@ -133,7 +139,7 @@ void vdso_init_from_sysinfo_ehdr(uintptr_t base)
+ + vdso_info.load_offset);
+ break;
+ case DT_HASH:
+- hash = (ELF(Word) *)
++ hash = (ELF_HASH_ENTRY *)
+ ((uintptr_t)dyn[i].d_un.d_ptr
+ + vdso_info.load_offset);
+ break;
+@@ -216,7 +222,8 @@ void *vdso_sym(const char *version, const char *name)
+ ELF(Sym) *sym = &vdso_info.symtab[chain];
+
+ /* Check for a defined global or weak function w/ right name. */
+- if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC)
++ if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC &&
++ ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE)
+ continue;
+ if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL &&
+ ELF64_ST_BIND(sym->st_info) != STB_WEAK)
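[Context for the ELF_HASH_ENTRY change: the generic ELF ABI defines DT_HASH entries as 32-bit words even on 64-bit targets, but s390x (like Alpha) uses 64-bit entries, so reading them through ELF(Word) mis-parses the table there. A sketch of the bucket lookup with the width abstracted (assuming the standard DT_HASH layout of nbucket, nchain, then the two arrays):

    #ifdef __s390x__
    typedef unsigned long elf_hash_t;   /* 64-bit hash entries on s390x */
    #else
    typedef unsigned int elf_hash_t;    /* 32-bit per the generic ELF ABI */
    #endif

    static unsigned long first_in_chain(const elf_hash_t *hash, unsigned long h)
    {
            elf_hash_t nbucket = hash[0];           /* hash[1] is nchain */
            const elf_hash_t *bucket = &hash[2];

            return bucket[h % nbucket];     /* index of the chain's first symbol */
    }
]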
+diff --git a/tools/testing/selftests/vDSO/vdso_config.h b/tools/testing/selftests/vDSO/vdso_config.h
+index 7b543e7f04d7b0..fe0b3ec48c8d8d 100644
+--- a/tools/testing/selftests/vDSO/vdso_config.h
++++ b/tools/testing/selftests/vDSO/vdso_config.h
+@@ -18,18 +18,18 @@
+ #elif defined(__aarch64__)
+ #define VDSO_VERSION 3
+ #define VDSO_NAMES 0
+-#elif defined(__powerpc__)
++#elif defined(__powerpc64__)
+ #define VDSO_VERSION 1
+ #define VDSO_NAMES 0
+-#define VDSO_32BIT 1
+-#elif defined(__powerpc64__)
++#elif defined(__powerpc__)
+ #define VDSO_VERSION 1
+ #define VDSO_NAMES 0
+-#elif defined (__s390__)
++#define VDSO_32BIT 1
++#elif defined (__s390__) && !defined(__s390x__)
+ #define VDSO_VERSION 2
+ #define VDSO_NAMES 0
+ #define VDSO_32BIT 1
+-#elif defined (__s390X__)
++#elif defined (__s390x__)
+ #define VDSO_VERSION 2
+ #define VDSO_NAMES 0
+ #elif defined(__mips__)
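[The reordering above is load-bearing: compilers define __powerpc__ on 64-bit PowerPC as well, and __s390__ on s390x, so in an #elif ladder the superset macro must be tested last (or masked out), otherwise the 32-bit branch shadows the 64-bit one — which is exactly what the old order did, compounded by the __s390X__ case typo that could never match. The rule in miniature (sketch):

    #if defined(__powerpc64__)
    /* 64-bit PowerPC; note __powerpc__ is ALSO defined here */
    #elif defined(__powerpc__)
    /* reached only on 32-bit PowerPC, because the 64-bit case ran first */
    #endif
]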
+diff --git a/tools/testing/selftests/vDSO/vdso_test_correctness.c b/tools/testing/selftests/vDSO/vdso_test_correctness.c
+index e691a3cf149112..cdb697ae8343cc 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_correctness.c
++++ b/tools/testing/selftests/vDSO/vdso_test_correctness.c
+@@ -114,6 +114,12 @@ static void fill_function_pointers()
+ if (!vdso)
+ vdso = dlopen("linux-gate.so.1",
+ RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
++ if (!vdso)
++ vdso = dlopen("linux-vdso32.so.1",
++ RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
++ if (!vdso)
++ vdso = dlopen("linux-vdso64.so.1",
++ RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
+ if (!vdso) {
+ printf("[WARN]\tfailed to find vDSO\n");
+ return;
+diff --git a/tools/tracing/rtla/Makefile.rtla b/tools/tracing/rtla/Makefile.rtla
+index 3ff0b8970896f7..cc1d6b615475f0 100644
+--- a/tools/tracing/rtla/Makefile.rtla
++++ b/tools/tracing/rtla/Makefile.rtla
+@@ -38,7 +38,7 @@ BINDIR := /usr/bin
+ .PHONY: install
+ install: doc_install
+ @$(MKDIR) -p $(DESTDIR)$(BINDIR)
+- $(call QUIET_INSTALL,rtla)$(INSTALL) rtla -m 755 $(DESTDIR)$(BINDIR)
++ $(call QUIET_INSTALL,rtla)$(INSTALL) $(RTLA) -m 755 $(DESTDIR)$(BINDIR)
+ @$(STRIP) $(DESTDIR)$(BINDIR)/rtla
+ @test ! -f $(DESTDIR)$(BINDIR)/osnoise || $(RM) $(DESTDIR)$(BINDIR)/osnoise
+ @$(LN) rtla $(DESTDIR)$(BINDIR)/osnoise
+diff --git a/tools/tracing/rtla/src/osnoise_top.c b/tools/tracing/rtla/src/osnoise_top.c
+index 2f756628613dd8..30e3853076a0da 100644
+--- a/tools/tracing/rtla/src/osnoise_top.c
++++ b/tools/tracing/rtla/src/osnoise_top.c
+@@ -442,7 +442,7 @@ struct osnoise_top_params *osnoise_top_parse_args(int argc, char **argv)
+ case 'd':
+ params->duration = parse_seconds_duration(optarg);
+ if (!params->duration)
+- osnoise_top_usage(params, "Invalid -D duration\n");
++ osnoise_top_usage(params, "Invalid -d duration\n");
+ break;
+ case 'e':
+ tevent = trace_event_alloc(optarg);
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 8c16419fe22aa7..210b0f533534ab 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -459,7 +459,7 @@ static void timerlat_top_usage(char *usage)
+ " -c/--cpus cpus: run the tracer only on the given cpus",
+ " -H/--house-keeping cpus: run rtla control threads only on the given cpus",
+ " -C/--cgroup[=cgroup_name]: set cgroup, if no cgroup_name is passed, the rtla's cgroup will be inherited",
+- " -d/--duration time[m|h|d]: duration of the session in seconds",
++ " -d/--duration time[s|m|h|d]: duration of the session",
+ " -D/--debug: print debug info",
+ " --dump-tasks: prints the task running on all CPUs if stop conditions are met (depends on !--no-aa)",
+ " -t/--trace[file]: save the stopped trace to [file|timerlat_trace.txt]",
+@@ -613,7 +613,7 @@ static struct timerlat_top_params
+ case 'd':
+ params->duration = parse_seconds_duration(optarg);
+ if (!params->duration)
+- timerlat_top_usage("Invalid -D duration\n");
++ timerlat_top_usage("Invalid -d duration\n");
+ break;
+ case 'e':
+ tevent = trace_event_alloc(optarg);
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-17 14:04 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-17 14:04 UTC (permalink / raw
To: gentoo-commits
commit: 6b673de40ecee748d176e9fb3662007838f2ebd2
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 17 14:04:40 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 17 14:04:40 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6b673de4
Linux patch 6.11.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-6.11.4.patch | 15888 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 15892 insertions(+)
diff --git a/0000_README b/0000_README
index 7b982874..9e83ebe0 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-6.11.3.patch
From: https://www.kernel.org
Desc: Linux 6.11.3
+Patch: 1003_linux-6.11.4.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.4
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1003_linux-6.11.4.patch b/1003_linux-6.11.4.patch
new file mode 100644
index 00000000..d1b84005
--- /dev/null
+++ b/1003_linux-6.11.4.patch
@@ -0,0 +1,15888 @@
+diff --git a/Makefile b/Makefile
+index 108d314ea95b52..50c615983e4405 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/loongarch/pci/acpi.c b/arch/loongarch/pci/acpi.c
+index 365f7de771cbb9..1da4dc46df43e5 100644
+--- a/arch/loongarch/pci/acpi.c
++++ b/arch/loongarch/pci/acpi.c
+@@ -225,6 +225,7 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ if (bus) {
+ memcpy(bus->sysdata, info->cfg, sizeof(struct pci_config_window));
+ kfree(info);
++ kfree(root_ops);
+ } else {
+ struct pci_bus *child;
+
+diff --git a/arch/riscv/include/asm/sparsemem.h b/arch/riscv/include/asm/sparsemem.h
+index 63acaecc337478..2f901a410586d0 100644
+--- a/arch/riscv/include/asm/sparsemem.h
++++ b/arch/riscv/include/asm/sparsemem.h
+@@ -7,7 +7,7 @@
+ #ifdef CONFIG_64BIT
+ #define MAX_PHYSMEM_BITS 56
+ #else
+-#define MAX_PHYSMEM_BITS 34
++#define MAX_PHYSMEM_BITS 32
+ #endif /* CONFIG_64BIT */
+ #define SECTION_SIZE_BITS 27
+ #endif /* CONFIG_SPARSEMEM */
+diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
+index a96b1fea24fe43..5ba77f60bf0b58 100644
+--- a/arch/riscv/include/asm/string.h
++++ b/arch/riscv/include/asm/string.h
+@@ -19,6 +19,7 @@ extern asmlinkage void *__memcpy(void *, const void *, size_t);
+ extern asmlinkage void *memmove(void *, const void *, size_t);
+ extern asmlinkage void *__memmove(void *, const void *, size_t);
+
++#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
+ #define __HAVE_ARCH_STRCMP
+ extern asmlinkage int strcmp(const char *cs, const char *ct);
+
+@@ -27,6 +28,7 @@ extern asmlinkage __kernel_size_t strlen(const char *);
+
+ #define __HAVE_ARCH_STRNCMP
+ extern asmlinkage int strncmp(const char *cs, const char *ct, size_t count);
++#endif
+
+ /* For those files which don't want to check by kasan. */
+ #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+diff --git a/arch/riscv/kernel/elf_kexec.c b/arch/riscv/kernel/elf_kexec.c
+index 11c0d2e0becfe9..3c37661801f95d 100644
+--- a/arch/riscv/kernel/elf_kexec.c
++++ b/arch/riscv/kernel/elf_kexec.c
+@@ -451,6 +451,12 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ *(u32 *)loc = CLEAN_IMM(CJTYPE, *(u32 *)loc) |
+ ENCODE_CJTYPE_IMM(val - addr);
+ break;
++ case R_RISCV_ADD16:
++ *(u16 *)loc += val;
++ break;
++ case R_RISCV_SUB16:
++ *(u16 *)loc -= val;
++ break;
+ case R_RISCV_ADD32:
+ *(u32 *)loc += val;
+ break;
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index ac2e908d4418d0..fefb8e7d957a01 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -239,8 +239,8 @@ SYM_CODE_START(ret_from_fork)
+ jalr s0
+ 1:
+ move a0, sp /* pt_regs */
+- la ra, ret_from_exception
+- tail syscall_exit_to_user_mode
++ call syscall_exit_to_user_mode
++ j ret_from_exception
+ SYM_CODE_END(ret_from_fork)
+
+ #ifdef CONFIG_IRQ_STACKS
+diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
+index a72879b4249a5d..5ab1c7e1a6ed5d 100644
+--- a/arch/riscv/kernel/riscv_ksyms.c
++++ b/arch/riscv/kernel/riscv_ksyms.c
+@@ -12,9 +12,6 @@
+ EXPORT_SYMBOL(memset);
+ EXPORT_SYMBOL(memcpy);
+ EXPORT_SYMBOL(memmove);
+-EXPORT_SYMBOL(strcmp);
+-EXPORT_SYMBOL(strlen);
+-EXPORT_SYMBOL(strncmp);
+ EXPORT_SYMBOL(__memset);
+ EXPORT_SYMBOL(__memcpy);
+ EXPORT_SYMBOL(__memmove);
+diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
+index 2b369f51b0a5ed..8eec6b69a875f8 100644
+--- a/arch/riscv/lib/Makefile
++++ b/arch/riscv/lib/Makefile
+@@ -3,9 +3,11 @@ lib-y += delay.o
+ lib-y += memcpy.o
+ lib-y += memset.o
+ lib-y += memmove.o
++ifeq ($(CONFIG_KASAN_GENERIC)$(CONFIG_KASAN_SW_TAGS),)
+ lib-y += strcmp.o
+ lib-y += strlen.o
+ lib-y += strncmp.o
++endif
+ lib-y += csum.o
+ ifeq ($(CONFIG_MMU), y)
+ lib-$(CONFIG_RISCV_ISA_V) += uaccess_vector.o
+diff --git a/arch/riscv/lib/strcmp.S b/arch/riscv/lib/strcmp.S
+index 687b2bea5c438c..542301a67a2ff7 100644
+--- a/arch/riscv/lib/strcmp.S
++++ b/arch/riscv/lib/strcmp.S
+@@ -120,3 +120,4 @@ strcmp_zbb:
+ .option pop
+ #endif
+ SYM_FUNC_END(strcmp)
++EXPORT_SYMBOL(strcmp)
+diff --git a/arch/riscv/lib/strlen.S b/arch/riscv/lib/strlen.S
+index 8ae3064e45ff00..962983b73251e3 100644
+--- a/arch/riscv/lib/strlen.S
++++ b/arch/riscv/lib/strlen.S
+@@ -131,3 +131,4 @@ strlen_zbb:
+ #endif
+ SYM_FUNC_END(strlen)
+ SYM_FUNC_ALIAS(__pi_strlen, strlen)
++EXPORT_SYMBOL(strlen)
+diff --git a/arch/riscv/lib/strncmp.S b/arch/riscv/lib/strncmp.S
+index aba5b3148621d8..0f359ea2f55b29 100644
+--- a/arch/riscv/lib/strncmp.S
++++ b/arch/riscv/lib/strncmp.S
+@@ -136,3 +136,4 @@ strncmp_zbb:
+ .option pop
+ #endif
+ SYM_FUNC_END(strncmp)
++EXPORT_SYMBOL(strncmp)
+diff --git a/arch/riscv/purgatory/Makefile b/arch/riscv/purgatory/Makefile
+index f11945ee249031..fb9c917c9b4573 100644
+--- a/arch/riscv/purgatory/Makefile
++++ b/arch/riscv/purgatory/Makefile
+@@ -1,7 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+ purgatory-y := purgatory.o sha256.o entry.o string.o ctype.o memcpy.o memset.o
++ifeq ($(CONFIG_KASAN_GENERIC)$(CONFIG_KASAN_SW_TAGS),)
+ purgatory-y += strcmp.o strlen.o strncmp.o
++endif
+
+ targets += $(purgatory-y)
+ PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
+diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h
+index b7d234838a3667..65ebf86506cdea 100644
+--- a/arch/s390/include/asm/facility.h
++++ b/arch/s390/include/asm/facility.h
+@@ -59,8 +59,10 @@ static inline int test_facility(unsigned long nr)
+ unsigned long facilities_als[] = { FACILITIES_ALS };
+
+ if (__builtin_constant_p(nr) && nr < sizeof(facilities_als) * 8) {
+- if (__test_facility(nr, &facilities_als))
+- return 1;
++ if (__test_facility(nr, &facilities_als)) {
++ if (!__is_defined(__DECOMPRESSOR))
++ return 1;
++ }
+ }
+ return __test_facility(nr, &stfle_fac_list);
+ }
+diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
+index 0fbc992d7a5ea7..fc9933a743d692 100644
+--- a/arch/s390/include/asm/io.h
++++ b/arch/s390/include/asm/io.h
+@@ -16,8 +16,10 @@
+ #include <asm/pci_io.h>
+
+ #define xlate_dev_mem_ptr xlate_dev_mem_ptr
++#define kc_xlate_dev_mem_ptr xlate_dev_mem_ptr
+ void *xlate_dev_mem_ptr(phys_addr_t phys);
+ #define unxlate_dev_mem_ptr unxlate_dev_mem_ptr
++#define kc_unxlate_dev_mem_ptr unxlate_dev_mem_ptr
+ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
+
+ #define IO_SPACE_LIMIT 0
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index ee051ad81c7119..0242fa78c91896 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -177,8 +177,21 @@ static __init void setup_topology(void)
+
+ void __do_early_pgm_check(struct pt_regs *regs)
+ {
+- if (!fixup_exception(regs))
+- disabled_wait();
++ struct lowcore *lc = get_lowcore();
++ unsigned long ip;
++
++ regs->int_code = lc->pgm_int_code;
++ regs->int_parm_long = lc->trans_exc_code;
++ ip = __rewind_psw(regs->psw, regs->int_code >> 16);
++
++ /* Monitor Event? Might be a warning */
++ if ((regs->int_code & PGM_INT_CODE_MASK) == 0x40) {
++ if (report_bug(ip, regs) == BUG_TRAP_TYPE_WARN)
++ return;
++ }
++ if (fixup_exception(regs))
++ return;
++ disabled_wait();
+ }
+
+ static noinline __init void setup_lowcore_early(void)
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 736c1d9632dd55..48fa9471660a89 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1463,7 +1463,7 @@ static int aux_output_begin(struct perf_output_handle *handle,
+ unsigned long range, i, range_scan, idx, head, base, offset;
+ struct hws_trailer_entry *te;
+
+- if (WARN_ON_ONCE(handle->head & ~PAGE_MASK))
++ if (handle->head & ~PAGE_MASK)
+ return -EINVAL;
+
+ aux->head = handle->head >> PAGE_SHIFT;
+@@ -1642,7 +1642,7 @@ static void hw_collect_aux(struct cpu_hw_sf *cpuhw)
+ unsigned long num_sdb;
+
+ aux = perf_get_aux(handle);
+- if (WARN_ON_ONCE(!aux))
++ if (!aux)
+ return;
+
+ /* Inform user space new data arrived */
+@@ -1661,7 +1661,7 @@ static void hw_collect_aux(struct cpu_hw_sf *cpuhw)
+ num_sdb);
+ break;
+ }
+- if (WARN_ON_ONCE(!aux))
++ if (!aux)
+ return;
+
+ /* Update head and alert_mark to new position */
+@@ -1896,12 +1896,8 @@ static void cpumsf_pmu_start(struct perf_event *event, int flags)
+ {
+ struct cpu_hw_sf *cpuhw = this_cpu_ptr(&cpu_hw_sf);
+
+- if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
++ if (!(event->hw.state & PERF_HES_STOPPED))
+ return;
+-
+- if (flags & PERF_EF_RELOAD)
+- WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
+-
+ perf_pmu_disable(event->pmu);
+ event->hw.state = 0;
+ cpuhw->lsctl.cs = 1;
+diff --git a/arch/s390/mm/cmm.c b/arch/s390/mm/cmm.c
+index 75d15bf41d97c3..d01724a715d0fc 100644
+--- a/arch/s390/mm/cmm.c
++++ b/arch/s390/mm/cmm.c
+@@ -95,11 +95,12 @@ static long cmm_alloc_pages(long nr, long *counter,
+ (*counter)++;
+ spin_unlock(&cmm_lock);
+ nr--;
++ cond_resched();
+ }
+ return nr;
+ }
+
+-static long cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
++static long __cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
+ {
+ struct cmm_page_array *pa;
+ unsigned long addr;
+@@ -123,6 +124,21 @@ static long cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
+ return nr;
+ }
+
++static long cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
++{
++ long inc = 0;
++
++ while (nr) {
++ inc = min(256L, nr);
++ nr -= inc;
++ inc = __cmm_free_pages(inc, counter, list);
++ if (inc)
++ break;
++ cond_resched();
++ }
++ return nr + inc;
++}
++
+ static int cmm_oom_notify(struct notifier_block *self,
+ unsigned long dummy, void *parm)
+ {
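[The shape of this fix is a common kernel pattern: bound each batch of work and call cond_resched() between batches so a huge request cannot hog the CPU in a tight loop. In isolation (simplified sketch; the real code above also stops early when a batch cannot be freed in full):

    static long free_in_batches(long nr)
    {
            while (nr) {
                    long chunk = min(256L, nr);

                    nr -= chunk;
                    /* ... release `chunk` pages here ... */
                    cond_resched();  /* give other tasks a chance to run */
            }
            return nr;      /* 0 on full success */
    }
]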
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 059e5c16af054e..61eadde085114d 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -26,6 +26,7 @@
+ #define PCI_DEVICE_ID_AMD_19H_M70H_ROOT 0x14e8
+ #define PCI_DEVICE_ID_AMD_1AH_M00H_ROOT 0x153a
+ #define PCI_DEVICE_ID_AMD_1AH_M20H_ROOT 0x1507
++#define PCI_DEVICE_ID_AMD_1AH_M60H_ROOT 0x1122
+ #define PCI_DEVICE_ID_AMD_MI200_ROOT 0x14bb
+ #define PCI_DEVICE_ID_AMD_MI300_ROOT 0x14f8
+
+@@ -63,6 +64,7 @@ static const struct pci_device_id amd_root_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M70H_ROOT) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M00H_ROOT) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_ROOT) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_ROOT) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI200_ROOT) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI300_ROOT) },
+ {}
+@@ -95,6 +97,7 @@ static const struct pci_device_id amd_nb_misc_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M00H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_DF_F3) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M70H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI200_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI300_DF_F3) },
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 074b41fafbe3ff..06b080b61aa578 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -64,6 +64,56 @@ static bool is_imm8(int value)
+ return value <= 127 && value >= -128;
+ }
+
++/*
++ * Let us limit the positive offset to be <= 123.
++ * This is to ensure eventual jit convergence For the following patterns:
++ * ...
++ * pass4, final_proglen=4391:
++ * ...
++ * 20e: 48 85 ff test rdi,rdi
++ * 211: 74 7d je 0x290
++ * 213: 48 8b 77 00 mov rsi,QWORD PTR [rdi+0x0]
++ * ...
++ * 289: 48 85 ff test rdi,rdi
++ * 28c: 74 17 je 0x2a5
++ * 28e: e9 7f ff ff ff jmp 0x212
++ * 293: bf 03 00 00 00 mov edi,0x3
++ * Note that insn at 0x211 is 2-byte cond jump insn for offset 0x7d (-125)
++ * and insn at 0x28e is 5-byte jmp insn with offset -129.
++ *
++ * pass5, final_proglen=4392:
++ * ...
++ * 20e: 48 85 ff test rdi,rdi
++ * 211: 0f 84 80 00 00 00 je 0x297
++ * 217: 48 8b 77 00 mov rsi,QWORD PTR [rdi+0x0]
++ * ...
++ * 28d: 48 85 ff test rdi,rdi
++ * 290: 74 1a je 0x2ac
++ * 292: eb 84 jmp 0x218
++ * 294: bf 03 00 00 00 mov edi,0x3
++ * Note that insn at 0x211 is 6-byte cond jump insn now since its offset
++ * becomes 0x80 based on previous round (0x293 - 0x213 = 0x80).
++ * At the same time, insn at 0x292 is a 2-byte insn since its offset is
++ * -124.
++ *
++ * pass6 will repeat the same code as in pass4 and this will prevent
++ * eventual convergence.
++ *
++ * To fix this issue, we need to break je (2->6 bytes) <-> jmp (5->2 bytes)
++ * cycle in the above. In the above example je offset <= 0x7c should work.
++ *
++ * For other cases, je <-> je needs offset <= 0x7b to avoid no convergence
++ * issue. For jmp <-> je and jmp <-> jmp cases, jmp offset <= 0x7c should
++ * avoid no convergence issue.
++ *
++ * Overall, let us limit the positive offset for 8bit cond/uncond jmp insn
++ * to maximum 123 (0x7b). This way, the jit pass can eventually converge.
++ */
++static bool is_imm8_jmp_offset(int value)
++{
++ return value <= 123 && value >= -128;
++}
++
+ static bool is_simm32(s64 value)
+ {
+ return value == (s64)(s32)value;
+@@ -2231,7 +2281,7 @@ st: if (is_imm8(insn->off))
+ return -EFAULT;
+ }
+ jmp_offset = addrs[i + insn->off] - addrs[i];
+- if (is_imm8(jmp_offset)) {
++ if (is_imm8_jmp_offset(jmp_offset)) {
+ if (jmp_padding) {
+ /* To keep the jmp_offset valid, the extra bytes are
+ * padded before the jump insn, so we subtract the
+@@ -2313,7 +2363,7 @@ st: if (is_imm8(insn->off))
+ break;
+ }
+ emit_jmp:
+- if (is_imm8(jmp_offset)) {
++ if (is_imm8_jmp_offset(jmp_offset)) {
+ if (jmp_padding) {
+ /* To avoid breaking jmp_offset, the extra bytes
+ * are padded before the actual jmp insn, so
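[To make the convergence comment concrete, the emitter's size decision reduces to a range check, and capping the positive side at 123 is what breaks the 2-byte/5-byte oscillation described above (sketch; helper name hypothetical):

    static int jmp_insn_size(int off)
    {
            /* rel8 encodes -128..127, but allowing up to 127 lets a later
             * pass push the target just past the limit and flip this jump
             * to 5 bytes, moving everything after it -- and back again.
             * The 123 cap leaves slack so sizes settle after a few passes. */
            if (off >= -128 && off <= 123)
                    return 2;       /* short jmp: EB rel8 */
            return 5;               /* near jmp: E9 rel32 */
    }
]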
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 2c12ae42dc8bd9..d6818c6cafda16 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1032,6 +1032,10 @@ static u64 xen_do_read_msr(unsigned int msr, int *err)
+ switch (msr) {
+ case MSR_IA32_APICBASE:
+ val &= ~X2APIC_ENABLE;
++ if (smp_processor_id() == 0)
++ val |= MSR_IA32_APICBASE_BSP;
++ else
++ val &= ~MSR_IA32_APICBASE_BSP;
+ break;
+ }
+ return val;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 3d74ebe9dbd804..0eb52e37246773 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -483,38 +483,17 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ },
+ },
+ {
+- /* Asus ExpertBook B2402CBA */
++ /* Asus ExpertBook B2402 (B2402CBA / B2402FBA / B2402CVA / B2402FVA) */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "B2402CBA"),
++ DMI_MATCH(DMI_BOARD_NAME, "B2402"),
+ },
+ },
+ {
+- /* Asus ExpertBook B2402FBA */
++ /* Asus ExpertBook B2502 (B2502CBA / B2502FBA / B2502CVA / B2502FVA) */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "B2402FBA"),
+- },
+- },
+- {
+- /* Asus ExpertBook B2502 */
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "B2502CBA"),
+- },
+- },
+- {
+- /* Asus ExpertBook B2502FBA */
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "B2502FBA"),
+- },
+- },
+- {
+- /* Asus ExpertBook B2502CVA */
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "B2502CVA"),
++ DMI_MATCH(DMI_BOARD_NAME, "B2502"),
+ },
+ },
+ {
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 7df9ec9f924c44..002e1e9501d1a4 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -4059,10 +4059,20 @@ static void ata_eh_handle_port_suspend(struct ata_port *ap)
+
+ WARN_ON(ap->pflags & ATA_PFLAG_SUSPENDED);
+
+- /* Set all devices attached to the port in standby mode */
+- ata_for_each_link(link, ap, HOST_FIRST) {
+- ata_for_each_dev(dev, link, ENABLED)
+- ata_dev_power_set_standby(dev);
++ /*
++ * We will reach this point for all of the PM events:
++ * PM_EVENT_SUSPEND (if runtime pm, PM_EVENT_AUTO will also be set)
++ * PM_EVENT_FREEZE, and PM_EVENT_HIBERNATE.
++ *
++ * We do not want to perform disk spin down for PM_EVENT_FREEZE.
++ * (Spin down will be performed by the subsequent PM_EVENT_HIBERNATE.)
++ */
++ if (!(ap->pm_mesg.event & PM_EVENT_FREEZE)) {
++ /* Set all devices attached to the port in standby mode */
++ ata_for_each_link(link, ap, HOST_FIRST) {
++ ata_for_each_dev(dev, link, ENABLED)
++ ata_dev_power_set_standby(dev);
++ }
+ }
+
+ /*
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index ffea0728b8b2fb..6a68734e7ebd15 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -152,7 +152,8 @@ static ssize_t bus_attr_show(struct kobject *kobj, struct attribute *attr,
+ {
+ struct bus_attribute *bus_attr = to_bus_attr(attr);
+ struct subsys_private *subsys_priv = to_subsys_private(kobj);
+- ssize_t ret = 0;
++ /* return -EIO for reading a bus attribute without show() */
++ ssize_t ret = -EIO;
+
+ if (bus_attr->show)
+ ret = bus_attr->show(subsys_priv->bus, buf);
+@@ -164,7 +165,8 @@ static ssize_t bus_attr_store(struct kobject *kobj, struct attribute *attr,
+ {
+ struct bus_attribute *bus_attr = to_bus_attr(attr);
+ struct subsys_private *subsys_priv = to_subsys_private(kobj);
+- ssize_t ret = 0;
++ /* return -EIO for writing a bus attribute without store() */
++ ssize_t ret = -EIO;
+
+ if (bus_attr->store)
+ ret = bus_attr->store(subsys_priv->bus, buf, count);
+@@ -920,6 +922,8 @@ int bus_register(const struct bus_type *bus)
+ bus_remove_file(bus, &bus_attr_uevent);
+ bus_uevent_fail:
+ kset_unregister(&priv->subsys);
++ /* Above kset_unregister() will kfree @priv */
++ priv = NULL;
+ out:
+ kfree(priv);
+ return retval;
+diff --git a/drivers/base/power/common.c b/drivers/base/power/common.c
+index 327d168dd37a80..1a9dccc2137b9c 100644
+--- a/drivers/base/power/common.c
++++ b/drivers/base/power/common.c
+@@ -195,6 +195,7 @@ int dev_pm_domain_attach_list(struct device *dev,
+ struct device *pd_dev = NULL;
+ int ret, i, num_pds = 0;
+ bool by_id = true;
++ size_t size;
+ u32 pd_flags = data ? data->pd_flags : 0;
+ u32 link_flags = pd_flags & PD_FLAG_NO_DEV_LINK ? 0 :
+ DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME;
+@@ -217,19 +218,17 @@ int dev_pm_domain_attach_list(struct device *dev,
+ if (num_pds <= 0)
+ return 0;
+
+- pds = devm_kzalloc(dev, sizeof(*pds), GFP_KERNEL);
++ pds = kzalloc(sizeof(*pds), GFP_KERNEL);
+ if (!pds)
+ return -ENOMEM;
+
+- pds->pd_devs = devm_kcalloc(dev, num_pds, sizeof(*pds->pd_devs),
+- GFP_KERNEL);
+- if (!pds->pd_devs)
+- return -ENOMEM;
+-
+- pds->pd_links = devm_kcalloc(dev, num_pds, sizeof(*pds->pd_links),
+- GFP_KERNEL);
+- if (!pds->pd_links)
+- return -ENOMEM;
++ size = sizeof(*pds->pd_devs) + sizeof(*pds->pd_links);
++ pds->pd_devs = kcalloc(num_pds, size, GFP_KERNEL);
++ if (!pds->pd_devs) {
++ ret = -ENOMEM;
++ goto free_pds;
++ }
++ pds->pd_links = (void *)(pds->pd_devs + num_pds);
+
+ if (link_flags && pd_flags & PD_FLAG_DEV_LINK_ON)
+ link_flags |= DL_FLAG_RPM_ACTIVE;
+@@ -272,6 +271,9 @@ int dev_pm_domain_attach_list(struct device *dev,
+ device_link_del(pds->pd_links[i]);
+ dev_pm_domain_detach(pds->pd_devs[i], true);
+ }
++ kfree(pds->pd_devs);
++free_pds:
++ kfree(pds);
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_domain_attach_list);
+@@ -318,6 +320,9 @@ void dev_pm_domain_detach_list(struct dev_pm_domain_list *list)
+ device_link_del(list->pd_links[i]);
+ dev_pm_domain_detach(list->pd_devs[i], true);
+ }
++
++ kfree(list->pd_devs);
++ kfree(list);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_domain_detach_list);
+
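[The allocation rework above also shows a compact idiom: size one kcalloc() for two parallel arrays and carve the second out of the tail of the first, so a single kfree() releases both. Reduced to its essentials (sketch with hypothetical names):

    struct pd_set {
            struct device **pd_devs;
            struct device_link **pd_links;
    };

    static int alloc_pd_arrays(struct pd_set *pds, int n)
    {
            size_t size = sizeof(*pds->pd_devs) + sizeof(*pds->pd_links);

            pds->pd_devs = kcalloc(n, size, GFP_KERNEL);    /* room for both */
            if (!pds->pd_devs)
                    return -ENOMEM;
            pds->pd_links = (void *)(pds->pd_devs + n);     /* tail of same block */
            return 0;       /* kfree(pds->pd_devs) frees both arrays */
    }
]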
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index efcb8d9d274c31..f25b1670e91caa 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1989,6 +1989,13 @@ static void zram_destroy_comps(struct zram *zram)
+ zcomp_destroy(comp);
+ zram->num_active_comps--;
+ }
++
++ for (prio = ZRAM_PRIMARY_COMP; prio < ZRAM_MAX_COMPS; prio++) {
++ /* Do not free statically defined compression algorithms */
++ if (zram->comp_algs[prio] != default_compressor)
++ kfree(zram->comp_algs[prio]);
++ zram->comp_algs[prio] = NULL;
++ }
+ }
+
+ static void zram_reset_device(struct zram *zram)
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 93dbeb8b348d5a..a1e9b052bc8476 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -4092,16 +4092,29 @@ static void btusb_disconnect(struct usb_interface *intf)
+ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ struct btusb_data *data = usb_get_intfdata(intf);
++ int err;
+
+ BT_DBG("intf %p", intf);
+
+- /* Don't suspend if there are connections */
+- if (hci_conn_count(data->hdev))
++ /* Don't auto-suspend if there are connections; external suspend calls
++ * shall never fail.
++ */
++ if (PMSG_IS_AUTO(message) && hci_conn_count(data->hdev))
+ return -EBUSY;
+
+ if (data->suspend_count++)
+ return 0;
+
++ /* Notify Host stack to suspend; this has to be done before stopping
++ * the traffic since the hci_suspend_dev itself may generate some
++ * traffic.
++ */
++ err = hci_suspend_dev(data->hdev);
++ if (err) {
++ data->suspend_count--;
++ return err;
++ }
++
+ spin_lock_irq(&data->txlock);
+ if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) {
+ set_bit(BTUSB_SUSPENDING, &data->flags);
+@@ -4109,6 +4122,7 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
+ } else {
+ spin_unlock_irq(&data->txlock);
+ data->suspend_count--;
++ hci_resume_dev(data->hdev);
+ return -EBUSY;
+ }
+
+@@ -4229,6 +4243,8 @@ static int btusb_resume(struct usb_interface *intf)
+ spin_unlock_irq(&data->txlock);
+ schedule_work(&data->work);
+
++ hci_resume_dev(data->hdev);
++
+ return 0;
+
+ failed:
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index de7d720d99fa94..bcb05fc44c998d 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -2007,25 +2007,27 @@ static int virtcons_probe(struct virtio_device *vdev)
+ multiport = true;
+ }
+
+- err = init_vqs(portdev);
+- if (err < 0) {
+- dev_err(&vdev->dev, "Error %d initializing vqs\n", err);
+- goto free_chrdev;
+- }
+-
+ spin_lock_init(&portdev->ports_lock);
+ INIT_LIST_HEAD(&portdev->ports);
+ INIT_LIST_HEAD(&portdev->list);
+
+- virtio_device_ready(portdev->vdev);
+-
+ INIT_WORK(&portdev->config_work, &config_work_handler);
+ INIT_WORK(&portdev->control_work, &control_work_handler);
+
+ if (multiport) {
+ spin_lock_init(&portdev->c_ivq_lock);
+ spin_lock_init(&portdev->c_ovq_lock);
++ }
+
++ err = init_vqs(portdev);
++ if (err < 0) {
++ dev_err(&vdev->dev, "Error %d initializing vqs\n", err);
++ goto free_chrdev;
++ }
++
++ virtio_device_ready(portdev->vdev);
++
++ if (multiport) {
+ err = fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
+ if (err < 0) {
+ dev_err(&vdev->dev,
+diff --git a/drivers/clk/bcm/clk-bcm53573-ilp.c b/drivers/clk/bcm/clk-bcm53573-ilp.c
+index 84f2af736ee8a6..83ef41d618be37 100644
+--- a/drivers/clk/bcm/clk-bcm53573-ilp.c
++++ b/drivers/clk/bcm/clk-bcm53573-ilp.c
+@@ -112,7 +112,7 @@ static void bcm53573_ilp_init(struct device_node *np)
+ goto err_free_ilp;
+ }
+
+- ilp->regmap = syscon_node_to_regmap(of_get_parent(np));
++ ilp->regmap = syscon_node_to_regmap(np->parent);
+ if (IS_ERR(ilp->regmap)) {
+ err = PTR_ERR(ilp->regmap);
+ goto err_free_ilp;
+diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
+index 2b77d1fc7bb946..1e1296e748357b 100644
+--- a/drivers/clk/imx/clk-imx7d.c
++++ b/drivers/clk/imx/clk-imx7d.c
+@@ -498,9 +498,9 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ hws[IMX7D_ENET_AXI_ROOT_SRC] = imx_clk_hw_mux2_flags("enet_axi_src", base + 0x8900, 24, 3, enet_axi_sel, ARRAY_SIZE(enet_axi_sel), CLK_SET_PARENT_GATE);
+ hws[IMX7D_NAND_USDHC_BUS_ROOT_SRC] = imx_clk_hw_mux2_flags("nand_usdhc_src", base + 0x8980, 24, 3, nand_usdhc_bus_sel, ARRAY_SIZE(nand_usdhc_bus_sel), CLK_SET_PARENT_GATE);
+ hws[IMX7D_DRAM_PHYM_ROOT_SRC] = imx_clk_hw_mux2_flags("dram_phym_src", base + 0x9800, 24, 1, dram_phym_sel, ARRAY_SIZE(dram_phym_sel), CLK_SET_PARENT_GATE);
+- hws[IMX7D_DRAM_ROOT_SRC] = imx_clk_hw_mux2_flags("dram_src", base + 0x9880, 24, 1, dram_sel, ARRAY_SIZE(dram_sel), CLK_SET_PARENT_GATE);
++ hws[IMX7D_DRAM_ROOT_SRC] = imx_clk_hw_mux2("dram_src", base + 0x9880, 24, 1, dram_sel, ARRAY_SIZE(dram_sel));
+ hws[IMX7D_DRAM_PHYM_ALT_ROOT_SRC] = imx_clk_hw_mux2_flags("dram_phym_alt_src", base + 0xa000, 24, 3, dram_phym_alt_sel, ARRAY_SIZE(dram_phym_alt_sel), CLK_SET_PARENT_GATE);
+- hws[IMX7D_DRAM_ALT_ROOT_SRC] = imx_clk_hw_mux2_flags("dram_alt_src", base + 0xa080, 24, 3, dram_alt_sel, ARRAY_SIZE(dram_alt_sel), CLK_SET_PARENT_GATE);
++ hws[IMX7D_DRAM_ALT_ROOT_SRC] = imx_clk_hw_mux2("dram_alt_src", base + 0xa080, 24, 3, dram_alt_sel, ARRAY_SIZE(dram_alt_sel));
+ hws[IMX7D_USB_HSIC_ROOT_SRC] = imx_clk_hw_mux2_flags("usb_hsic_src", base + 0xa100, 24, 3, usb_hsic_sel, ARRAY_SIZE(usb_hsic_sel), CLK_SET_PARENT_GATE);
+ hws[IMX7D_PCIE_CTRL_ROOT_SRC] = imx_clk_hw_mux2_flags("pcie_ctrl_src", base + 0xa180, 24, 3, pcie_ctrl_sel, ARRAY_SIZE(pcie_ctrl_sel), CLK_SET_PARENT_GATE);
+ hws[IMX7D_PCIE_PHY_ROOT_SRC] = imx_clk_hw_mux2_flags("pcie_phy_src", base + 0xa200, 24, 3, pcie_phy_sel, ARRAY_SIZE(pcie_phy_sel), CLK_SET_PARENT_GATE);
+diff --git a/drivers/comedi/drivers/ni_routing/tools/convert_c_to_py.c b/drivers/comedi/drivers/ni_routing/tools/convert_c_to_py.c
+index d55521b5bdcb2d..892a66b2cea665 100644
+--- a/drivers/comedi/drivers/ni_routing/tools/convert_c_to_py.c
++++ b/drivers/comedi/drivers/ni_routing/tools/convert_c_to_py.c
+@@ -140,6 +140,11 @@ int main(void)
+ {
+ FILE *fp = fopen("ni_values.py", "w");
+
++ if (fp == NULL) {
++ fprintf(stderr, "Could not open file!");
++ return -1;
++ }
++
+ /* write route register values */
+ fprintf(fp, "ni_route_values = {\n");
+ for (int i = 0; ni_all_route_values[i]; ++i)
+diff --git a/drivers/dax/device.c b/drivers/dax/device.c
+index 2051e4f73c8a43..cb09b52a0e326f 100644
+--- a/drivers/dax/device.c
++++ b/drivers/dax/device.c
+@@ -86,7 +86,7 @@ static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
+ nr_pages = 1;
+
+ pgoff = linear_page_index(vmf->vma,
+- ALIGN(vmf->address, fault_size));
++ ALIGN_DOWN(vmf->address, fault_size));
+
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
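[The fix above flips the rounding direction, which changes which page the mapping index is computed from. With a 2 MiB fault_size the difference is easy to see (worked example; ALIGN rounds up, ALIGN_DOWN rounds down):

    /* addr = 0x201000, inside the 2 MiB page starting at 0x200000:
     *   ALIGN(0x201000, 0x200000)      == 0x400000  (start of the NEXT page)
     *   ALIGN_DOWN(0x201000, 0x200000) == 0x200000  (start of the containing
     *                                                page -- what pgoff needs)
     */
]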
+diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
+index 04c03402db6ddf..ea40ad43a79baa 100644
+--- a/drivers/gpio/gpio-aspeed.c
++++ b/drivers/gpio/gpio-aspeed.c
+@@ -406,6 +406,8 @@ static void __aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
+ gpio->dcache[GPIO_BANK(offset)] = reg;
+
+ iowrite32(reg, addr);
++ /* Flush write */
++ ioread32(addr);
+ }
+
+ static void aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
+@@ -1191,7 +1193,7 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev)
+ if (!gpio_id)
+ return -EINVAL;
+
+- gpio->clk = of_clk_get(pdev->dev.of_node, 0);
++ gpio->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(gpio->clk)) {
+ dev_warn(&pdev->dev,
+ "Failed to get clock from devicetree, debouncing disabled\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 11672bfe4fad69..a485281ad1ad87 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1438,8 +1438,8 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
+ list_add_tail(&vm->vm_list_node,
+ &(vm->process_info->vm_list_head));
+ vm->process_info->n_vms++;
+-
+- *ef = dma_fence_get(&vm->process_info->eviction_fence->base);
++ if (ef)
++ *ef = dma_fence_get(&vm->process_info->eviction_fence->base);
+ mutex_unlock(&vm->process_info->lock);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 9e29b92eb523d0..e44892109f71b0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1676,12 +1676,15 @@ int kfd_process_device_init_vm(struct kfd_process_device *pdd,
+
+ ret = amdgpu_amdkfd_gpuvm_acquire_process_vm(dev->adev, avm,
+ &p->kgd_process_info,
+- &ef);
++ p->ef ? NULL : &ef);
+ if (ret) {
+ dev_err(dev->adev->dev, "Failed to create process VM object\n");
+ return ret;
+ }
+- RCU_INIT_POINTER(p->ef, ef);
++
++ if (!p->ef)
++ RCU_INIT_POINTER(p->ef, ef);
++
+ pdd->drm_priv = drm_file->private_data;
+
+ ret = kfd_process_device_reserve_ib_mem(pdd);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 1ab7cd8a6b6aed..4f19e9736a676b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2950,10 +2950,11 @@ static int dm_suspend(void *handle)
+
+ hpd_rx_irq_work_suspend(dm);
+
+- if (adev->dm.dc->caps.ips_support)
+- dc_allow_idle_optimizations(adev->dm.dc, true);
+-
+ dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
++
++ if (dm->dc->caps.ips_support && adev->in_s0ix)
++ dc_allow_idle_optimizations(dm->dc, true);
++
+ dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D3);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 9e05d77453ac38..db8c6bec712fce 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1832,7 +1832,7 @@ bool dc_validate_boot_timing(const struct dc *dc,
+ if (crtc_timing->pix_clk_100hz != pix_clk_100hz)
+ return false;
+
+- if (!se->funcs->dp_get_pixel_format)
++ if (!se || !se->funcs->dp_get_pixel_format)
+ return false;
+
+ if (!se->funcs->dp_get_pixel_format(
+@@ -5120,11 +5120,26 @@ static bool update_planes_and_stream_v3(struct dc *dc,
+ return true;
+ }
+
++static void clear_update_flags(struct dc_surface_update *srf_updates,
++ int surface_count, struct dc_stream_state *stream)
++{
++ int i;
++
++ if (stream)
++ stream->update_flags.raw = 0;
++
++ for (i = 0; i < surface_count; i++)
++ if (srf_updates[i].surface)
++ srf_updates[i].surface->update_flags.raw = 0;
++}
++
+ bool dc_update_planes_and_stream(struct dc *dc,
+ struct dc_surface_update *srf_updates, int surface_count,
+ struct dc_stream_state *stream,
+ struct dc_stream_update *stream_update)
+ {
++ bool ret = false;
++
+ dc_exit_ips_for_hw_access(dc);
+ /*
+ * update planes and stream version 3 separates FULL and FAST updates
+@@ -5141,10 +5156,16 @@ bool dc_update_planes_and_stream(struct dc *dc,
+ * features as they are now transparent to the new sequence.
+ */
+ if (dc->ctx->dce_version >= DCN_VERSION_4_01)
+- return update_planes_and_stream_v3(dc, srf_updates,
++ ret = update_planes_and_stream_v3(dc, srf_updates,
+ surface_count, stream, stream_update);
+- return update_planes_and_stream_v2(dc, srf_updates,
++ else
++ ret = update_planes_and_stream_v2(dc, srf_updates,
+ surface_count, stream, stream_update);
++
++ if (ret)
++ clear_update_flags(srf_updates, surface_count, stream);
++
++ return ret;
+ }
+
+ void dc_commit_updates_for_stream(struct dc *dc,
+@@ -5154,6 +5175,8 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ struct dc_stream_update *stream_update,
+ struct dc_state *state)
+ {
++ bool ret = false;
++
+ dc_exit_ips_for_hw_access(dc);
+ /* TODO: Since change commit sequence can have a huge impact,
+ * we decided to only enable it for DCN3x. However, as soon as
+@@ -5161,17 +5184,17 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ * the new sequence for all ASICs.
+ */
+ if (dc->ctx->dce_version >= DCN_VERSION_4_01) {
+- update_planes_and_stream_v3(dc, srf_updates, surface_count,
++ ret = update_planes_and_stream_v3(dc, srf_updates, surface_count,
+ stream, stream_update);
+- return;
+- }
+- if (dc->ctx->dce_version >= DCN_VERSION_3_2) {
+- update_planes_and_stream_v2(dc, srf_updates, surface_count,
++ } else if (dc->ctx->dce_version >= DCN_VERSION_3_2) {
++ ret = update_planes_and_stream_v2(dc, srf_updates, surface_count,
+ stream, stream_update);
+- return;
+- }
+- update_planes_and_stream_v1(dc, srf_updates, surface_count, stream,
+- stream_update, state);
++ } else
++ ret = update_planes_and_stream_v1(dc, srf_updates, surface_count, stream,
++ stream_update, state);
++
++ if (ret)
++ clear_update_flags(srf_updates, surface_count, stream);
+ }
+
+ uint8_t dc_get_current_stream_count(struct dc *dc)
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
+index 9118fcddbf1167..227bf0e84a130b 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
++++ b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
+@@ -60,7 +60,7 @@ struct vi_dpm_level {
+
+ struct vi_dpm_table {
+ uint32_t count;
+- struct vi_dpm_level dpm_level[] __counted_by(count);
++ struct vi_dpm_level dpm_level[];
+ };
+
+ #define PCIE_PERF_REQ_REMOVE_REGISTRY 0
+@@ -91,7 +91,7 @@ struct phm_set_power_state_input {
+
+ struct phm_clock_array {
+ uint32_t count;
+- uint32_t values[] __counted_by(count);
++ uint32_t values[];
+ };
+
+ struct phm_clock_voltage_dependency_record {
+@@ -123,7 +123,7 @@ struct phm_acpclock_voltage_dependency_record {
+
+ struct phm_clock_voltage_dependency_table {
+ uint32_t count;
+- struct phm_clock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_clock_voltage_dependency_record entries[];
+ };
+
+ struct phm_phase_shedding_limits_record {
+@@ -140,7 +140,7 @@ struct phm_uvd_clock_voltage_dependency_record {
+
+ struct phm_uvd_clock_voltage_dependency_table {
+ uint8_t count;
+- struct phm_uvd_clock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_uvd_clock_voltage_dependency_record entries[];
+ };
+
+ struct phm_acp_clock_voltage_dependency_record {
+@@ -150,7 +150,7 @@ struct phm_acp_clock_voltage_dependency_record {
+
+ struct phm_acp_clock_voltage_dependency_table {
+ uint32_t count;
+- struct phm_acp_clock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_acp_clock_voltage_dependency_record entries[];
+ };
+
+ struct phm_vce_clock_voltage_dependency_record {
+@@ -161,32 +161,32 @@ struct phm_vce_clock_voltage_dependency_record {
+
+ struct phm_phase_shedding_limits_table {
+ uint32_t count;
+- struct phm_phase_shedding_limits_record entries[] __counted_by(count);
++ struct phm_phase_shedding_limits_record entries[];
+ };
+
+ struct phm_vceclock_voltage_dependency_table {
+ uint8_t count;
+- struct phm_vceclock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_vceclock_voltage_dependency_record entries[];
+ };
+
+ struct phm_uvdclock_voltage_dependency_table {
+ uint8_t count;
+- struct phm_uvdclock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_uvdclock_voltage_dependency_record entries[];
+ };
+
+ struct phm_samuclock_voltage_dependency_table {
+ uint8_t count;
+- struct phm_samuclock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_samuclock_voltage_dependency_record entries[];
+ };
+
+ struct phm_acpclock_voltage_dependency_table {
+ uint32_t count;
+- struct phm_acpclock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_acpclock_voltage_dependency_record entries[];
+ };
+
+ struct phm_vce_clock_voltage_dependency_table {
+ uint8_t count;
+- struct phm_vce_clock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_vce_clock_voltage_dependency_record entries[];
+ };
+
+
+@@ -393,7 +393,7 @@ union phm_cac_leakage_record {
+
+ struct phm_cac_leakage_table {
+ uint32_t count;
+- union phm_cac_leakage_record entries[] __counted_by(count);
++ union phm_cac_leakage_record entries[];
+ };
+
+ struct phm_samu_clock_voltage_dependency_record {
+@@ -404,7 +404,7 @@ struct phm_samu_clock_voltage_dependency_record {
+
+ struct phm_samu_clock_voltage_dependency_table {
+ uint8_t count;
+- struct phm_samu_clock_voltage_dependency_record entries[] __counted_by(count);
++ struct phm_samu_clock_voltage_dependency_record entries[];
+ };
+
+ struct phm_cac_tdp_table {
+diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
+index b0602c4f362830..51c2d742d19980 100644
+--- a/drivers/gpu/drm/drm_fbdev_dma.c
++++ b/drivers/gpu/drm/drm_fbdev_dma.c
+@@ -50,7 +50,8 @@ static void drm_fbdev_dma_fb_destroy(struct fb_info *info)
+ if (!fb_helper->dev)
+ return;
+
+- fb_deferred_io_cleanup(info);
++ if (info->fbdefio)
++ fb_deferred_io_cleanup(info);
+ drm_fb_helper_fini(fb_helper);
+
+ drm_client_buffer_vunmap(fb_helper->buffer);
+diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c
+index 3ebe035f382ec9..b0440cc59c2345 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
+@@ -1089,7 +1089,8 @@ static void intel_hdcp_update_value(struct intel_connector *connector,
+ hdcp->value = value;
+ if (update_property) {
+ drm_connector_get(&connector->base);
+- queue_work(i915->unordered_wq, &hdcp->prop_work);
++ if (!queue_work(i915->unordered_wq, &hdcp->prop_work))
++ drm_connector_put(&connector->base);
+ }
+ }
+
+@@ -2517,7 +2518,8 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
+ mutex_lock(&hdcp->mutex);
+ hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
+ drm_connector_get(&connector->base);
+- queue_work(i915->unordered_wq, &hdcp->prop_work);
++ if (!queue_work(i915->unordered_wq, &hdcp->prop_work))
++ drm_connector_put(&connector->base);
+ mutex_unlock(&hdcp->mutex);
+ }
+
+@@ -2534,7 +2536,9 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
+ */
+ if (!desired_and_not_enabled && !content_protection_type_changed) {
+ drm_connector_get(&connector->base);
+- queue_work(i915->unordered_wq, &hdcp->prop_work);
++ if (!queue_work(i915->unordered_wq, &hdcp->prop_work))
++ drm_connector_put(&connector->base);
++
+ }
+ }
+
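[All three i915 hunks apply the same rule: a reference is taken on behalf of the queued work, and queue_work() returning false means the work was already pending, so no new execution owns that reference and it must be dropped immediately or it leaks. The pattern generically (sketch with hypothetical types):

    struct foo {
            struct kref ref;
            struct work_struct work;
    };

    static void foo_release(struct kref *ref)
    {
            kfree(container_of(ref, struct foo, ref));
    }

    static void foo_schedule(struct workqueue_struct *wq, struct foo *f)
    {
            kref_get(&f->ref);              /* reference owned by the work item */
            if (!queue_work(wq, &f->work))  /* already queued: no new run */
                    kref_put(&f->ref, foo_release);
    }
]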
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+index 4310ad71870b10..8fed62e002fea0 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+@@ -1157,7 +1157,7 @@ nv04_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
+ chan = drm->channel;
+ if (!chan)
+ return -ENODEV;
+- cli = (void *)chan->user.client;
++ cli = chan->cli;
+ push = chan->chan.push;
+
+ s = kzalloc(sizeof(*s), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c b/drivers/gpu/drm/nouveau/nouveau_abi16.c
+index d56909071de660..0dd38e73676dab 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_abi16.c
++++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c
+@@ -356,7 +356,7 @@ nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS)
+ list_add(&chan->head, &abi16->channels);
+
+ /* create channel object and initialise dma and fence management */
+- ret = nouveau_channel_new(drm, device, false, runm, init->fb_ctxdma_handle,
++ ret = nouveau_channel_new(cli, false, runm, init->fb_ctxdma_handle,
+ init->tt_ctxdma_handle, &chan->chan);
+ if (ret)
+ goto done;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 70fb003a66669a..933356e9389039 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -859,7 +859,7 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict,
+ {
+ struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
+ struct nouveau_channel *chan = drm->ttm.chan;
+- struct nouveau_cli *cli = (void *)chan->user.client;
++ struct nouveau_cli *cli = chan->cli;
+ struct nouveau_fence *fence;
+ int ret;
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
+index 7c97b288680760..cee36b1efd3917 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
++++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
+@@ -52,7 +52,7 @@ static int
+ nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc)
+ {
+ struct nouveau_channel *chan = container_of(event, typeof(*chan), kill);
+- struct nouveau_cli *cli = (void *)chan->user.client;
++ struct nouveau_cli *cli = chan->cli;
+
+ NV_PRINTK(warn, cli, "channel %d killed!\n", chan->chid);
+
+@@ -66,7 +66,7 @@ int
+ nouveau_channel_idle(struct nouveau_channel *chan)
+ {
+ if (likely(chan && chan->fence && !atomic_read(&chan->killed))) {
+- struct nouveau_cli *cli = (void *)chan->user.client;
++ struct nouveau_cli *cli = chan->cli;
+ struct nouveau_fence *fence = NULL;
+ int ret;
+
+@@ -142,10 +142,11 @@ nouveau_channel_wait(struct nvif_push *push, u32 size)
+ }
+
+ static int
+-nouveau_channel_prep(struct nouveau_drm *drm, struct nvif_device *device,
++nouveau_channel_prep(struct nouveau_cli *cli,
+ u32 size, struct nouveau_channel **pchan)
+ {
+- struct nouveau_cli *cli = (void *)device->object.client;
++ struct nouveau_drm *drm = cli->drm;
++ struct nvif_device *device = &cli->device;
+ struct nv_dma_v0 args = {};
+ struct nouveau_channel *chan;
+ u32 target;
+@@ -155,6 +156,7 @@ nouveau_channel_prep(struct nouveau_drm *drm, struct nvif_device *device,
+ if (!chan)
+ return -ENOMEM;
+
++ chan->cli = cli;
+ chan->device = device;
+ chan->drm = drm;
+ chan->vmm = nouveau_cli_vmm(cli);
+@@ -254,7 +256,7 @@ nouveau_channel_prep(struct nouveau_drm *drm, struct nvif_device *device,
+ }
+
+ static int
+-nouveau_channel_ctor(struct nouveau_drm *drm, struct nvif_device *device, bool priv, u64 runm,
++nouveau_channel_ctor(struct nouveau_cli *cli, bool priv, u64 runm,
+ struct nouveau_channel **pchan)
+ {
+ const struct nvif_mclass hosts[] = {
+@@ -279,7 +281,7 @@ nouveau_channel_ctor(struct nouveau_drm *drm, struct nvif_device *device, bool p
+ struct nvif_chan_v0 chan;
+ char name[TASK_COMM_LEN+16];
+ } args;
+- struct nouveau_cli *cli = (void *)device->object.client;
++ struct nvif_device *device = &cli->device;
+ struct nouveau_channel *chan;
+ const u64 plength = 0x10000;
+ const u64 ioffset = plength;
+@@ -298,7 +300,7 @@ nouveau_channel_ctor(struct nouveau_drm *drm, struct nvif_device *device, bool p
+ size = ioffset + ilength;
+
+ /* allocate dma push buffer */
+- ret = nouveau_channel_prep(drm, device, size, &chan);
++ ret = nouveau_channel_prep(cli, size, &chan);
+ *pchan = chan;
+ if (ret)
+ return ret;
+@@ -493,13 +495,12 @@ nouveau_channel_init(struct nouveau_channel *chan, u32 vram, u32 gart)
+ }
+
+ int
+-nouveau_channel_new(struct nouveau_drm *drm, struct nvif_device *device,
++nouveau_channel_new(struct nouveau_cli *cli,
+ bool priv, u64 runm, u32 vram, u32 gart, struct nouveau_channel **pchan)
+ {
+- struct nouveau_cli *cli = (void *)device->object.client;
+ int ret;
+
+- ret = nouveau_channel_ctor(drm, device, priv, runm, pchan);
++ ret = nouveau_channel_ctor(cli, priv, runm, pchan);
+ if (ret) {
+ NV_PRINTK(dbg, cli, "channel create, %d\n", ret);
+ return ret;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.h b/drivers/gpu/drm/nouveau/nouveau_chan.h
+index 5de2ef4e98c2bb..260febd634ee21 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_chan.h
++++ b/drivers/gpu/drm/nouveau/nouveau_chan.h
+@@ -12,6 +12,7 @@ struct nouveau_channel {
+ struct nvif_push *push;
+ } chan;
+
++ struct nouveau_cli *cli;
+ struct nvif_device *device;
+ struct nouveau_drm *drm;
+ struct nouveau_vmm *vmm;
+@@ -62,7 +63,7 @@ struct nouveau_channel {
+ int nouveau_channels_init(struct nouveau_drm *);
+ void nouveau_channels_fini(struct nouveau_drm *);
+
+-int nouveau_channel_new(struct nouveau_drm *, struct nvif_device *, bool priv, u64 runm,
++int nouveau_channel_new(struct nouveau_cli *, bool priv, u64 runm,
+ u32 vram, u32 gart, struct nouveau_channel **);
+ void nouveau_channel_del(struct nouveau_channel **);
+ int nouveau_channel_idle(struct nouveau_channel *);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
+index 6fb65b01d77804..097bd3af0719e0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
+@@ -193,7 +193,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
+ if (!spage || !(src & MIGRATE_PFN_MIGRATE))
+ goto done;
+
+- dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
++ dpage = alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vmf->vma, vmf->address);
+ if (!dpage)
+ goto done;
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index a58c31089613eb..bfba4e374df445 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -356,7 +356,7 @@ nouveau_accel_ce_init(struct nouveau_drm *drm)
+ return;
+ }
+
+- ret = nouveau_channel_new(drm, device, false, runm, NvDmaFB, NvDmaTT, &drm->cechan);
++ ret = nouveau_channel_new(&drm->client, true, runm, NvDmaFB, NvDmaTT, &drm->cechan);
+ if (ret)
+ NV_ERROR(drm, "failed to create ce channel, %d\n", ret);
+ }
+@@ -384,7 +384,7 @@ nouveau_accel_gr_init(struct nouveau_drm *drm)
+ return;
+ }
+
+- ret = nouveau_channel_new(drm, device, false, runm, NvDmaFB, NvDmaTT, &drm->channel);
++ ret = nouveau_channel_new(&drm->client, false, runm, NvDmaFB, NvDmaTT, &drm->channel);
+ if (ret) {
+ NV_ERROR(drm, "failed to create kernel channel, %d\n", ret);
+ nouveau_accel_gr_fini(drm);
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index b7d0b02e1a95dd..e0bd48e3610a88 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -289,6 +289,11 @@ void v3d_perfmon_open_file(struct v3d_file_priv *v3d_priv)
+ static int v3d_perfmon_idr_del(int id, void *elem, void *data)
+ {
+ struct v3d_perfmon *perfmon = elem;
++ struct v3d_dev *v3d = (struct v3d_dev *)data;
++
++ /* If the active perfmon is being destroyed, stop it first */
++ if (perfmon == v3d->active_perfmon)
++ v3d_perfmon_stop(v3d, perfmon, false);
+
+ v3d_perfmon_put(perfmon);
+
+@@ -297,8 +302,10 @@ static int v3d_perfmon_idr_del(int id, void *elem, void *data)
+
+ void v3d_perfmon_close_file(struct v3d_file_priv *v3d_priv)
+ {
++ struct v3d_dev *v3d = v3d_priv->v3d;
++
+ mutex_lock(&v3d_priv->perfmon.lock);
+- idr_for_each(&v3d_priv->perfmon.idr, v3d_perfmon_idr_del, NULL);
++ idr_for_each(&v3d_priv->perfmon.idr, v3d_perfmon_idr_del, v3d);
+ idr_destroy(&v3d_priv->perfmon.idr);
+ mutex_unlock(&v3d_priv->perfmon.lock);
+ mutex_destroy(&v3d_priv->perfmon.lock);
+diff --git a/drivers/gpu/drm/vc4/vc4_perfmon.c b/drivers/gpu/drm/vc4/vc4_perfmon.c
+index c4ac2c94623815..c00a5cc2316d20 100644
+--- a/drivers/gpu/drm/vc4/vc4_perfmon.c
++++ b/drivers/gpu/drm/vc4/vc4_perfmon.c
+@@ -116,6 +116,11 @@ void vc4_perfmon_open_file(struct vc4_file *vc4file)
+ static int vc4_perfmon_idr_del(int id, void *elem, void *data)
+ {
+ struct vc4_perfmon *perfmon = elem;
++ struct vc4_dev *vc4 = (struct vc4_dev *)data;
++
++ /* If the active perfmon is being destroyed, stop it first */
++ if (perfmon == vc4->active_perfmon)
++ vc4_perfmon_stop(vc4, perfmon, false);
+
+ vc4_perfmon_put(perfmon);
+
+@@ -130,7 +135,7 @@ void vc4_perfmon_close_file(struct vc4_file *vc4file)
+ return;
+
+ mutex_lock(&vc4file->perfmon.lock);
+- idr_for_each(&vc4file->perfmon.idr, vc4_perfmon_idr_del, NULL);
++ idr_for_each(&vc4file->perfmon.idr, vc4_perfmon_idr_del, vc4);
+ idr_destroy(&vc4file->perfmon.idr);
+ mutex_unlock(&vc4file->perfmon.lock);
+ mutex_destroy(&vc4file->perfmon.lock);
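Both the v3d and vc4 fixes above lean on the same idr_for_each() property: the third argument is passed through verbatim as the data pointer of every callback invocation, which is what lets the destroy path reach the device and stop an active perfmon before the final put. A minimal sketch of that pattern, with hypothetical names (my_dev, my_obj, my_obj_stop, my_obj_put are illustrations, not driver API):

    #include <linux/idr.h>

    /* Called once per IDR entry; @data is whatever was passed as the
     * third argument to idr_for_each() below. */
    static int my_idr_del(int id, void *elem, void *data)
    {
        struct my_dev *dev = data;
        struct my_obj *obj = elem;

        if (obj == dev->active)     /* stop before dropping the last ref */
            my_obj_stop(dev, obj);

        my_obj_put(obj);
        return 0;                   /* non-zero would abort the walk */
    }

    static void my_close(struct my_priv *priv)
    {
        idr_for_each(&priv->idr, my_idr_del, priv->dev);
        idr_destroy(&priv->idr);
    }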
+diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c
+index a13e0b3a169ed8..ef777dbdf4ecc3 100644
+--- a/drivers/gpu/drm/xe/xe_bb.c
++++ b/drivers/gpu/drm/xe/xe_bb.c
+@@ -65,7 +65,8 @@ __xe_bb_create_job(struct xe_exec_queue *q, struct xe_bb *bb, u64 *addr)
+ {
+ u32 size = drm_suballoc_size(bb->bo);
+
+- bb->cs[bb->len++] = MI_BATCH_BUFFER_END;
++ if (bb->len == 0 || bb->cs[bb->len - 1] != MI_BATCH_BUFFER_END)
++ bb->cs[bb->len++] = MI_BATCH_BUFFER_END;
+
+ xe_gt_assert(q->gt, bb->len * 4 + bb_prefetch(q->gt) <= size);
+
+diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
+index 1011e5d281fa97..c87e6bca64d86a 100644
+--- a/drivers/gpu/drm/xe/xe_debugfs.c
++++ b/drivers/gpu/drm/xe/xe_debugfs.c
+@@ -190,7 +190,7 @@ void xe_debugfs_register(struct xe_device *xe)
+ debugfs_create_file("forcewake_all", 0400, root, xe,
+ &forcewake_all_fops);
+
+- debugfs_create_file("wedged_mode", 0400, root, xe,
++ debugfs_create_file("wedged_mode", 0600, root, xe,
+ &wedged_mode_fops);
+
+ for (mem_type = XE_PL_VRAM0; mem_type <= XE_PL_VRAM1; ++mem_type) {
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index cb9df15e713767..0062a5e4d5fac7 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -874,7 +874,9 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
+ int ret = 0;
+
+ if ((!xe_uc_fw_is_available(&gt->uc.gsc.fw) ||
+- xe_uc_fw_is_loaded(&gt->uc.gsc.fw)) && XE_WA(gt, 22019338487))
++ xe_uc_fw_is_loaded(&gt->uc.gsc.fw) ||
++ xe_uc_fw_is_in_error_state(&gt->uc.gsc.fw)) &&
++ XE_WA(gt, 22019338487))
+ ret = xe_guc_pc_restore_stashed_freq(&gt->uc.guc.pc);
+
+ return ret;
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
+index 64afc90ad2c519..cd9918e3896c09 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.c
++++ b/drivers/gpu/drm/xe/xe_guc_ct.c
+@@ -658,16 +658,12 @@ static int __guc_ct_send_locked(struct xe_guc_ct *ct, const u32 *action,
+ num_g2h = 1;
+
+ if (g2h_fence_needs_alloc(g2h_fence)) {
+- void *ptr;
+-
+ g2h_fence->seqno = next_ct_seqno(ct, true);
+- ptr = xa_store(&ct->fence_lookup,
+- g2h_fence->seqno,
+- g2h_fence, GFP_ATOMIC);
+- if (IS_ERR(ptr)) {
+- ret = PTR_ERR(ptr);
++ ret = xa_err(xa_store(&ct->fence_lookup,
++ g2h_fence->seqno, g2h_fence,
++ GFP_ATOMIC));
++ if (ret)
+ goto out;
+- }
+ }
+
+ seqno = g2h_fence->seqno;
+@@ -870,14 +866,11 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+ retry_same_fence:
+ ret = guc_ct_send(ct, action, len, 0, 0, &g2h_fence);
+ if (unlikely(ret == -ENOMEM)) {
+- void *ptr;
+-
+ /* Retry allocation /w GFP_KERNEL */
+- ptr = xa_store(&ct->fence_lookup,
+- g2h_fence.seqno,
+- &g2h_fence, GFP_KERNEL);
+- if (IS_ERR(ptr))
+- return PTR_ERR(ptr);
++ ret = xa_err(xa_store(&ct->fence_lookup, g2h_fence.seqno,
++ &g2h_fence, GFP_KERNEL));
++ if (ret)
++ return ret;
+
+ goto retry_same_fence;
+ } else if (unlikely(ret)) {
+@@ -894,16 +887,26 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+ }
+
+ ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
++
++ /*
++ * Ensure we serialize with the completion side to prevent a UAF with the fence going out of
++ * scope on the stack, since we have no way of knowing whether it will fire after the timeout
++ * but before we can erase it from the xa. Also we have some dependent loads and stores below
++ * for which we need the correct ordering, and we lack the needed barriers.
++ */
++ mutex_lock(&ct->lock);
+ if (!ret) {
+- xe_gt_err(gt, "Timed out wait for G2H, fence %u, action %04x",
+- g2h_fence.seqno, action[0]);
++ xe_gt_err(gt, "Timed out wait for G2H, fence %u, action %04x, done %s",
++ g2h_fence.seqno, action[0], str_yes_no(g2h_fence.done));
+ xa_erase_irq(&ct->fence_lookup, g2h_fence.seqno);
++ mutex_unlock(&ct->lock);
+ return -ETIME;
+ }
+
+ if (g2h_fence.retry) {
+ xe_gt_dbg(gt, "H2G action %#x retrying: reason %#x\n",
+ action[0], g2h_fence.reason);
++ mutex_unlock(&ct->lock);
+ goto retry;
+ }
+ if (g2h_fence.fail) {
+@@ -912,7 +915,12 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+ ret = -EIO;
+ }
+
+- return ret > 0 ? response_buffer ? g2h_fence.response_len : g2h_fence.response_data : ret;
++ if (ret > 0)
++ ret = response_buffer ? g2h_fence.response_len : g2h_fence.response_data;
++
++ mutex_unlock(&ct->lock);
++
++ return ret;
+ }
+
+ /**
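The comment added above guards a subtle lifetime problem: the g2h fence lives on the waiter's stack while the completion side finds it through an xarray, so erase-on-timeout and completion must run under one lock or the completer can dereference a dead stack frame. A hedged sketch of the shape of that race, with hypothetical names (struct ctx, struct wait_fence and send_and_wait are illustrations, not the driver's API):

    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/mutex.h>
    #include <linux/types.h>
    #include <linux/wait.h>
    #include <linux/xarray.h>

    struct ctx {
        struct mutex lock;          /* shared with the completion side */
        wait_queue_head_t wq;
        struct xarray lookup;       /* seqno -> fence */
    };

    struct wait_fence {
        u32 seqno;
        bool done;
    };

    static int send_and_wait(struct ctx *ctx, struct wait_fence *fence)
    {
        long ret = wait_event_timeout(ctx->wq, fence->done, HZ);

        /* After taking the lock, the completer has either finished with
         * the fence already or will never find it again. */
        mutex_lock(&ctx->lock);
        if (!ret && !fence->done) {
            xa_erase(&ctx->lookup, fence->seqno);
            mutex_unlock(&ctx->lock);
            return -ETIME;          /* fence may now safely leave scope */
        }
        mutex_unlock(&ctx->lock);
        return 0;
    }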
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 62c3982ad7fdc9..690f821f8bf5ad 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -393,7 +393,6 @@ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa
+ static int alloc_guc_id(struct xe_guc *guc, struct xe_exec_queue *q)
+ {
+ int ret;
+- void *ptr;
+ int i;
+
+ /*
+@@ -413,12 +412,10 @@ static int alloc_guc_id(struct xe_guc *guc, struct xe_exec_queue *q)
+ q->guc->id = ret;
+
+ for (i = 0; i < q->width; ++i) {
+- ptr = xa_store(&guc->submission_state.exec_queue_lookup,
+- q->guc->id + i, q, GFP_NOWAIT);
+- if (IS_ERR(ptr)) {
+- ret = PTR_ERR(ptr);
++ ret = xa_err(xa_store(&guc->submission_state.exec_queue_lookup,
++ q->guc->id + i, q, GFP_NOWAIT));
++ if (ret)
+ goto err_release;
+- }
+ }
+
+ return 0;
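The xa_store() conversions here and in xe_guc_ct.c fix the same misuse: on success xa_store() returns the previous entry at that index, and on failure it returns an XArray-internal error encoding rather than an ERR_PTR, so IS_ERR()/PTR_ERR() is the wrong way to check it. xa_err() is the documented helper: it yields the negative errno for an error entry and 0 for anything else. A minimal sketch:

    #include <linux/xarray.h>

    static int store_entry(struct xarray *xa, unsigned long index, void *entry)
    {
        /* xa_err() maps an error-encoded return to -errno (e.g. -ENOMEM)
         * and a normal return (the previous entry, possibly NULL) to 0. */
        int err = xa_err(xa_store(xa, index, entry, GFP_KERNEL));

        return err;
    }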
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index 4b59687ff5d821..3438d392920fad 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -236,9 +236,9 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ cl_data->in_data = in_data;
+
+ for (i = 0; i < cl_data->num_hid_devices; i++) {
+- in_data->sensor_virt_addr[i] = dma_alloc_coherent(dev, sizeof(int) * 8,
+- &cl_data->sensor_dma_addr[i],
+- GFP_KERNEL);
++ in_data->sensor_virt_addr[i] = dmam_alloc_coherent(dev, sizeof(int) * 8,
++ &cl_data->sensor_dma_addr[i],
++ GFP_KERNEL);
+ if (!in_data->sensor_virt_addr[i]) {
+ rc = -ENOMEM;
+ goto cleanup;
+@@ -331,7 +331,6 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ int amd_sfh_hid_client_deinit(struct amd_mp2_dev *privdata)
+ {
+ struct amdtp_cl_data *cl_data = privdata->cl_data;
+- struct amd_input_data *in_data = cl_data->in_data;
+ int i, status;
+
+ for (i = 0; i < cl_data->num_hid_devices; i++) {
+@@ -351,12 +350,5 @@ int amd_sfh_hid_client_deinit(struct amd_mp2_dev *privdata)
+ cancel_delayed_work_sync(&cl_data->work_buffer);
+ amdtp_hid_remove(cl_data);
+
+- for (i = 0; i < cl_data->num_hid_devices; i++) {
+- if (in_data->sensor_virt_addr[i]) {
+- dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int),
+- in_data->sensor_virt_addr[i],
+- cl_data->sensor_dma_addr[i]);
+- }
+- }
+ return 0;
+ }
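The switch to dmam_alloc_coherent() is what allows the whole dma_free_coherent() loop in amd_sfh_hid_client_deinit() to be deleted: the managed variant ties the buffer's lifetime to the struct device, and the devres core frees it automatically on driver detach or probe failure. A minimal sketch (struct my_state is hypothetical):

    #include <linux/dma-mapping.h>

    struct my_state {
        int *buf;
        dma_addr_t dma_addr;
    };

    static int my_alloc(struct device *dev, struct my_state *st)
    {
        /* Released by devres when @dev is unbound; no explicit
         * dma_free_coherent() needed in the remove path. */
        st->buf = dmam_alloc_coherent(dev, 8 * sizeof(int),
                                      &st->dma_addr, GFP_KERNEL);
        return st->buf ? 0 : -ENOMEM;
    }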
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 06104a4e0fdc15..8a991b30e3c6d2 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -505,6 +505,7 @@
+ #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100
+
+ #define I2C_VENDOR_ID_GOODIX 0x27c6
++#define I2C_DEVICE_ID_GOODIX_01E0 0x01e0
+ #define I2C_DEVICE_ID_GOODIX_01E8 0x01e8
+ #define I2C_DEVICE_ID_GOODIX_01E9 0x01e9
+ #define I2C_DEVICE_ID_GOODIX_01F0 0x01f0
+@@ -1035,6 +1036,8 @@
+ #define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES 0xc056
+ #define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES 0xc057
+ #define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES 0xc058
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3325_SERIES 0x430c
++#define USB_DEVICE_ID_PLANTRONICS_ENCOREPRO_500_SERIES 0x431e
+
+ #define USB_VENDOR_ID_PANASONIC 0x04da
+ #define USB_DEVICE_ID_PANABOARD_UBT780 0x1044
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index c4a6908bbe5404..847462650549e9 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1446,7 +1446,8 @@ static __u8 *mt_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ {
+ if (hdev->vendor == I2C_VENDOR_ID_GOODIX &&
+ (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) {
++ hdev->product == I2C_DEVICE_ID_GOODIX_01E9 ||
++ hdev->product == I2C_DEVICE_ID_GOODIX_01E0)) {
+ if (rdesc[607] == 0x15) {
+ rdesc[607] = 0x25;
+ dev_info(
+@@ -2065,7 +2066,10 @@ static const struct hid_device_id mt_devices[] = {
+ I2C_DEVICE_ID_GOODIX_01E8) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E8) },
++ I2C_DEVICE_ID_GOODIX_01E9) },
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
++ I2C_DEVICE_ID_GOODIX_01E0) },
+
+ /* GoodTouch panels */
+ { .driver_data = MT_CLS_NSMU,
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index 3d414ae194acbd..25cfd964dc25d9 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -38,8 +38,10 @@
+ (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER)
+
+ #define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0)
++#define PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS BIT(1)
+
+ #define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */
++#define PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT 220 /* ms */
+
+ struct plt_drv_data {
+ unsigned long device_type;
+@@ -137,6 +139,21 @@ static int plantronics_event(struct hid_device *hdev, struct hid_field *field,
+
+ drv_data->last_volume_key_ts = cur_ts;
+ }
++ if (drv_data->quirks & PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS) {
++ unsigned long prev_ts, cur_ts;
++
++ /* Usages are filtered in plantronics_usages. */
++
++ if (!value) /* Handle key presses only. */
++ return 0;
++
++ prev_ts = drv_data->last_volume_key_ts;
++ cur_ts = jiffies;
++ if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT)
++ return 1; /* Ignore the followed opposite volume key. */
++
++ drv_data->last_volume_key_ts = cur_ts;
++ }
+
+ return 0;
+ }
+@@ -210,6 +227,12 @@ static const struct hid_device_id plantronics_devices[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+ USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES),
+ .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
++ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++ USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3325_SERIES),
++ .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS },
++ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++ USB_DEVICE_ID_PLANTRONICS_ENCOREPRO_500_SERIES),
++ .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ { }
+ };
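The new PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS quirk is a time-window filter: these headsets appear to send a spurious opposite volume key shortly after the real one, so any volume key arriving within 220 ms of the previous one is dropped. Reduced to its core, the jiffies arithmetic looks like this (within_window is a hypothetical helper, not driver API):

    #include <linux/jiffies.h>

    static bool within_window(unsigned long *last_ts, unsigned int window_ms)
    {
        unsigned long now = jiffies;

        /* Unsigned subtraction makes the delta wraparound-safe. */
        if (jiffies_to_msecs(now - *last_ts) <= window_ms)
            return true;    /* suppress the event */

        *last_ts = now;     /* genuine key press: remember its time */
        return false;
    }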
+diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+index e157863a8b250b..b3c3cfcd97fc54 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
++++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+@@ -635,7 +635,7 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ const struct firmware *fw,
+ const struct shim_fw_info fw_info)
+ {
+- int rv;
++ int rv = 0;
+ void *dma_buf;
+ dma_addr_t dma_buf_phy;
+ u32 fragment_offset, fragment_size, payload_max_size;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index e86a37c3cf9c3d..7a16c65f20148a 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2501,6 +2501,8 @@ static void wacom_wac_pen_report(struct hid_device *hdev,
+ /* Going into range select tool */
+ if (wacom_wac->hid_data.invert_state)
+ wacom_wac->tool[0] = BTN_TOOL_RUBBER;
++ else if (wacom_wac->features.quirks & WACOM_QUIRK_AESPEN)
++ wacom_wac->tool[0] = BTN_TOOL_PEN;
+ else if (wacom_wac->id[0])
+ wacom_wac->tool[0] = wacom_intuos_get_tool_type(wacom_wac->id[0]);
+ else
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index b60fe2e58ad6cb..778e584c3a75ce 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -162,6 +162,7 @@ config SENSORS_ADM9240
+ tristate "Analog Devices ADM9240 and compatibles"
+ depends on I2C
+ select HWMON_VID
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for Analog Devices ADM9240,
+ Dallas DS1780, National Semiconductor LM81 sensor chips.
+@@ -223,6 +224,7 @@ config SENSORS_ADT7462
+ config SENSORS_ADT7470
+ tristate "Analog Devices ADT7470"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for the Analog Devices
+ ADT7470 temperature monitoring chips.
+@@ -999,6 +1001,7 @@ config SENSORS_LTC2990
+ config SENSORS_LTC2991
+ tristate "Analog Devices LTC2991"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for Analog Devices LTC2991
+ Octal I2C Voltage, Current, and Temperature Monitor. The LTC2991
+@@ -1275,6 +1278,7 @@ config SENSORS_MAX31790
+ config SENSORS_MC34VR500
+ tristate "NXP MC34VR500 hardware monitoring driver"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for the temperature and input
+ voltage sensors of the NXP MC34VR500.
+@@ -2288,6 +2292,7 @@ config SENSORS_TMP464
+ config SENSORS_TMP513
+ tristate "Texas Instruments TMP513 and compatibles"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for Texas Instruments TMP512,
+ and TMP513 temperature and power supply sensor chips.
+diff --git a/drivers/hwmon/intel-m10-bmc-hwmon.c b/drivers/hwmon/intel-m10-bmc-hwmon.c
+index ca2dff15892515..96397ae6ff18fc 100644
+--- a/drivers/hwmon/intel-m10-bmc-hwmon.c
++++ b/drivers/hwmon/intel-m10-bmc-hwmon.c
+@@ -358,7 +358,7 @@ static const struct m10bmc_sdata n6000bmc_temp_tbl[] = {
+ { 0x4f0, 0x4f4, 0x4f8, 0x52c, 0x0, 500, "Board Top Near FPGA Temperature" },
+ { 0x4fc, 0x500, 0x504, 0x52c, 0x0, 500, "Board Bottom Near CVL Temperature" },
+ { 0x508, 0x50c, 0x510, 0x52c, 0x0, 500, "Board Top East Near VRs Temperature" },
+- { 0x514, 0x518, 0x51c, 0x52c, 0x0, 500, "Columbiaville Die Temperature" },
++ { 0x514, 0x518, 0x51c, 0x52c, 0x0, 500, "CVL Die Temperature" },
+ { 0x520, 0x524, 0x528, 0x52c, 0x0, 500, "Board Rear Side Temperature" },
+ { 0x530, 0x534, 0x538, 0x52c, 0x0, 500, "Board Front Side Temperature" },
+ { 0x53c, 0x540, 0x544, 0x0, 0x0, 500, "QSFP1 Case Temperature" },
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 543526bac04254..f96b91e4331261 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -548,6 +548,7 @@ static const struct pci_device_id k10temp_id_table[] = {
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_1AH_M00H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_1AH_M20H_DF_F3) },
++ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_1AH_M60H_DF_F3) },
+ { PCI_VDEVICE(HYGON, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ {}
+ };
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 328c0dab6b147e..299fe9d3afab0a 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1763,8 +1763,15 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+
+ i801_add_tco(priv);
+
++ /*
++ * adapter.name is used by platform code to find the main I801 adapter
++ * to instantiate i2c_clients, do not change.
++ */
+ snprintf(priv->adapter.name, sizeof(priv->adapter.name),
+- "SMBus I801 adapter at %04lx", priv->smba);
++ "SMBus %s adapter at %04lx",
++ (priv->features & FEATURE_IDF) ? "I801 IDF" : "I801",
++ priv->smba);
++
+ err = i2c_add_adapter(&priv->adapter);
+ if (err) {
+ platform_device_unregister(priv->tco_pdev);
+diff --git a/drivers/i3c/master/i3c-master-cdns.c b/drivers/i3c/master/i3c-master-cdns.c
+index c1627f3552ce3e..c2d26beb3da2bc 100644
+--- a/drivers/i3c/master/i3c-master-cdns.c
++++ b/drivers/i3c/master/i3c-master-cdns.c
+@@ -1666,6 +1666,7 @@ static void cdns_i3c_master_remove(struct platform_device *pdev)
+ {
+ struct cdns_i3c_master *master = platform_get_drvdata(pdev);
+
++ cancel_work_sync(&master->hj_work);
+ i3c_master_unregister(&master->base);
+
+ clk_disable_unprepare(master->sysclk);
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index 7439e47ff951d4..70708fea12962c 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -2616,14 +2616,16 @@ static int retry_send(struct ib_mad_send_wr_private *mad_send_wr)
+
+ static void timeout_sends(struct work_struct *work)
+ {
++ struct ib_mad_send_wr_private *mad_send_wr, *n;
+ struct ib_mad_agent_private *mad_agent_priv;
+- struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_mad_send_wc mad_send_wc;
++ struct list_head local_list;
+ unsigned long flags, delay;
+
+ mad_agent_priv = container_of(work, struct ib_mad_agent_private,
+ timed_work.work);
+ mad_send_wc.vendor_err = 0;
++ INIT_LIST_HEAD(&local_list);
+
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ while (!list_empty(&mad_agent_priv->wait_list)) {
+@@ -2641,13 +2643,16 @@ static void timeout_sends(struct work_struct *work)
+ break;
+ }
+
+- list_del(&mad_send_wr->agent_list);
++ list_del_init(&mad_send_wr->agent_list);
+ if (mad_send_wr->status == IB_WC_SUCCESS &&
+ !retry_send(mad_send_wr))
+ continue;
+
+- spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
++ list_add_tail(&mad_send_wr->agent_list, &local_list);
++ }
++ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
++ list_for_each_entry_safe(mad_send_wr, n, &local_list, agent_list) {
+ if (mad_send_wr->status == IB_WC_SUCCESS)
+ mad_send_wc.status = IB_WC_RESP_TIMEOUT_ERR;
+ else
+@@ -2655,11 +2660,8 @@ static void timeout_sends(struct work_struct *work)
+ mad_send_wc.send_buf = &mad_send_wr->send_buf;
+ mad_agent_priv->agent.send_handler(&mad_agent_priv->agent,
+ &mad_send_wc);
+-
+ deref_mad_agent(mad_agent_priv);
+- spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ }
+- spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ }
+
+ /*
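The timeout_sends() rework is the classic detach-then-process pattern: entries are moved onto a stack-local list while the agent spinlock is held, and the send handlers then run with the lock dropped, instead of releasing and re-acquiring it once per entry. In reduced form, with hypothetical names (the real code moves only the entries that actually timed out):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct my_req {
        struct list_head list;
    };

    struct my_agent {
        spinlock_t lock;
        struct list_head wait_list;
        void (*handler)(struct my_req *req);
    };

    static void drain_expired(struct my_agent *agent)
    {
        struct my_req *req, *n;
        unsigned long flags;
        LIST_HEAD(local);

        spin_lock_irqsave(&agent->lock, flags);
        list_splice_init(&agent->wait_list, &local);
        spin_unlock_irqrestore(&agent->lock, flags);

        /* Handlers run unlocked; no per-entry lock drop/retake. */
        list_for_each_entry_safe(req, n, &local, list)
            agent->handler(req);
    }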
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index a524181f34df95..3a4605fda6d57e 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -733,24 +733,31 @@ static int pagefault_dmabuf_mr(struct mlx5_ib_mr *mr, size_t bcnt,
+ * >0: Number of pages mapped
+ */
+ static int pagefault_mr(struct mlx5_ib_mr *mr, u64 io_virt, size_t bcnt,
+- u32 *bytes_mapped, u32 flags)
++ u32 *bytes_mapped, u32 flags, bool permissive_fault)
+ {
+ struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem);
+
+- if (unlikely(io_virt < mr->ibmr.iova))
++ if (unlikely(io_virt < mr->ibmr.iova) && !permissive_fault)
+ return -EFAULT;
+
+ if (mr->umem->is_dmabuf)
+ return pagefault_dmabuf_mr(mr, bcnt, bytes_mapped, flags);
+
+ if (!odp->is_implicit_odp) {
++ u64 offset = io_virt < mr->ibmr.iova ? 0 : io_virt - mr->ibmr.iova;
+ u64 user_va;
+
+- if (check_add_overflow(io_virt - mr->ibmr.iova,
+- (u64)odp->umem.address, &user_va))
++ if (check_add_overflow(offset, (u64)odp->umem.address,
++ &user_va))
+ return -EFAULT;
+- if (unlikely(user_va >= ib_umem_end(odp) ||
+- ib_umem_end(odp) - user_va < bcnt))
++
++ if (permissive_fault) {
++ if (user_va < ib_umem_start(odp))
++ user_va = ib_umem_start(odp);
++ if ((user_va + bcnt) > ib_umem_end(odp))
++ bcnt = ib_umem_end(odp) - user_va;
++ } else if (unlikely(user_va >= ib_umem_end(odp) ||
++ ib_umem_end(odp) - user_va < bcnt))
+ return -EFAULT;
+ return pagefault_real_mr(mr, odp, user_va, bcnt, bytes_mapped,
+ flags);
+@@ -857,7 +864,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ case MLX5_MKEY_MR:
+ mr = container_of(mmkey, struct mlx5_ib_mr, mmkey);
+
+- ret = pagefault_mr(mr, io_virt, bcnt, bytes_mapped, 0);
++ ret = pagefault_mr(mr, io_virt, bcnt, bytes_mapped, 0, false);
+ if (ret < 0)
+ goto end;
+
+@@ -1710,7 +1717,7 @@ static void mlx5_ib_prefetch_mr_work(struct work_struct *w)
+ for (i = 0; i < work->num_sge; ++i) {
+ ret = pagefault_mr(work->frags[i].mr, work->frags[i].io_virt,
+ work->frags[i].length, &bytes_mapped,
+- work->pf_flags);
++ work->pf_flags, false);
+ if (ret <= 0)
+ continue;
+ mlx5_update_odp_stats(work->frags[i].mr, prefetch, ret);
+@@ -1761,7 +1768,7 @@ static int mlx5_ib_prefetch_sg_list(struct ib_pd *pd,
+ if (IS_ERR(mr))
+ return PTR_ERR(mr);
+ ret = pagefault_mr(mr, sg_list[i].addr, sg_list[i].length,
+- &bytes_mapped, pf_flags);
++ &bytes_mapped, pf_flags, false);
+ if (ret < 0) {
+ mlx5r_deref_odp_mkey(&mr->mmkey);
+ return ret;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 94ac99a4f696e7..758a3d9c2844d1 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -931,12 +931,11 @@ static void rtrs_srv_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
+ if (err)
+ goto close;
+
+-out:
+ rtrs_iu_free(iu, srv_path->s.dev->ib_dev, 1);
+ return;
+ close:
++ rtrs_iu_free(iu, srv_path->s.dev->ib_dev, 1);
+ close_path(srv_path);
+- goto out;
+ }
+
+ static int post_recv_info_req(struct rtrs_srv_con *con)
+@@ -987,6 +986,16 @@ static int post_recv_path(struct rtrs_srv_path *srv_path)
+ q_size = SERVICE_CON_QUEUE_DEPTH;
+ else
+ q_size = srv->queue_depth;
++ if (srv_path->state != RTRS_SRV_CONNECTING) {
++ rtrs_err(s, "Path state invalid. state %s\n",
++ rtrs_srv_state_str(srv_path->state));
++ return -EIO;
++ }
++
++ if (!srv_path->s.con[cid]) {
++ rtrs_err(s, "Conn not set for %d\n", cid);
++ return -EIO;
++ }
+
+ err = post_recv_io(to_srv_con(srv_path->s.con[cid]), q_size);
+ if (err) {
+diff --git a/drivers/md/dm-vdo/dedupe.c b/drivers/md/dm-vdo/dedupe.c
+index 39ac68614419fe..80628ae93fbacc 100644
+--- a/drivers/md/dm-vdo/dedupe.c
++++ b/drivers/md/dm-vdo/dedupe.c
+@@ -729,6 +729,7 @@ static void process_update_result(struct data_vio *agent)
+ !change_context_state(context, DEDUPE_CONTEXT_COMPLETE, DEDUPE_CONTEXT_IDLE))
+ return;
+
++ agent->dedupe_context = NULL;
+ release_context(context);
+ }
+
+@@ -1648,6 +1649,7 @@ static void process_query_result(struct data_vio *agent)
+
+ if (change_context_state(context, DEDUPE_CONTEXT_COMPLETE, DEDUPE_CONTEXT_IDLE)) {
+ agent->is_duplicate = decode_uds_advice(context);
++ agent->dedupe_context = NULL;
+ release_context(context);
+ }
+ }
+@@ -2321,6 +2323,7 @@ static void timeout_index_operations_callback(struct vdo_completion *completion)
+ * send its requestor on its way.
+ */
+ list_del_init(&context->list_entry);
++ context->requestor->dedupe_context = NULL;
+ continue_data_vio(context->requestor);
+ timed_out++;
+ }
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 8b0de1cb08808b..97605c2e25dacc 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -311,6 +311,10 @@ static void __vb2_plane_dmabuf_put(struct vb2_buffer *vb, struct vb2_plane *p)
+ p->mem_priv = NULL;
+ p->dbuf = NULL;
+ p->dbuf_mapped = 0;
++ p->bytesused = 0;
++ p->length = 0;
++ p->m.fd = 0;
++ p->data_offset = 0;
+ }
+
+ /*
+@@ -1420,10 +1424,6 @@ static int __prepare_dmabuf(struct vb2_buffer *vb)
+
+ /* Release previously acquired memory if present */
+ __vb2_plane_dmabuf_put(vb, &vb->planes[plane]);
+- vb->planes[plane].bytesused = 0;
+- vb->planes[plane].length = 0;
+- vb->planes[plane].m.fd = 0;
+- vb->planes[plane].data_offset = 0;
+
+ /* Acquire each plane's memory */
+ mem_priv = call_ptr_memop(attach_dmabuf,
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index 1362b3f64ade61..1d8cdc4d5819b6 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -424,6 +424,19 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x5ac4), (kernel_ulong_t)&bxt_spi_info },
+ { PCI_VDEVICE(INTEL, 0x5ac6), (kernel_ulong_t)&bxt_spi_info },
+ { PCI_VDEVICE(INTEL, 0x5aee), (kernel_ulong_t)&bxt_uart_info },
++ /* ARL-H */
++ { PCI_VDEVICE(INTEL, 0x7725), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x7726), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x7727), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0x7730), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0x7746), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0x7750), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x7751), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x7752), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x7778), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x7779), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x777a), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x777b), (kernel_ulong_t)&bxt_i2c_info },
+ /* RPL-S */
+ { PCI_VDEVICE(INTEL, 0x7a28), (kernel_ulong_t)&bxt_uart_info },
+ { PCI_VDEVICE(INTEL, 0x7a29), (kernel_ulong_t)&bxt_uart_info },
+@@ -594,6 +607,32 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0xa879), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0xa87a), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0xa87b), (kernel_ulong_t)&ehl_i2c_info },
++ /* PTL-H */
++ { PCI_VDEVICE(INTEL, 0xe325), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0xe326), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0xe327), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0xe330), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0xe346), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0xe350), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe351), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe352), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0xe378), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe379), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe37a), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe37b), (kernel_ulong_t)&ehl_i2c_info },
++ /* PTL-P */
++ { PCI_VDEVICE(INTEL, 0xe425), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0xe426), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0xe427), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0xe430), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0xe446), (kernel_ulong_t)&tgl_spi_info },
++ { PCI_VDEVICE(INTEL, 0xe450), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe451), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe452), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0xe478), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe479), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe47a), (kernel_ulong_t)&ehl_i2c_info },
++ { PCI_VDEVICE(INTEL, 0xe47b), (kernel_ulong_t)&ehl_i2c_info },
+ { }
+ };
+ MODULE_DEVICE_TABLE(pci, intel_lpss_pci_ids);
+diff --git a/drivers/mfd/intel_soc_pmic_chtwc.c b/drivers/mfd/intel_soc_pmic_chtwc.c
+index 7fce3ef7ab453d..2a83f540d4c9d8 100644
+--- a/drivers/mfd/intel_soc_pmic_chtwc.c
++++ b/drivers/mfd/intel_soc_pmic_chtwc.c
+@@ -178,7 +178,6 @@ static const struct dmi_system_id cht_wc_model_dmi_ids[] = {
+ .driver_data = (void *)(long)INTEL_CHT_WC_LENOVO_YT3_X90,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ },
+ },
+diff --git a/drivers/mmc/host/mvsdio.c b/drivers/mmc/host/mvsdio.c
+index af7f21888e273b..ca01b7d204ba66 100644
+--- a/drivers/mmc/host/mvsdio.c
++++ b/drivers/mmc/host/mvsdio.c
+@@ -38,9 +38,8 @@ struct mvsd_host {
+ unsigned int xfer_mode;
+ unsigned int intr_en;
+ unsigned int ctrl;
+- bool use_pio;
+- struct sg_mapping_iter sg_miter;
+ unsigned int pio_size;
++ void *pio_ptr;
+ unsigned int sg_frags;
+ unsigned int ns_per_clk;
+ unsigned int clock;
+@@ -115,18 +114,11 @@ static int mvsd_setup_data(struct mvsd_host *host, struct mmc_data *data)
+ * data when the buffer is not aligned on a 64 byte
+ * boundary.
+ */
+- unsigned int miter_flags = SG_MITER_ATOMIC; /* Used from IRQ */
+-
+- if (data->flags & MMC_DATA_READ)
+- miter_flags |= SG_MITER_TO_SG;
+- else
+- miter_flags |= SG_MITER_FROM_SG;
+-
+ host->pio_size = data->blocks * data->blksz;
+- sg_miter_start(&host->sg_miter, data->sg, data->sg_len, miter_flags);
++ host->pio_ptr = sg_virt(data->sg);
+ if (!nodma)
+- dev_dbg(host->dev, "fallback to PIO for data\n");
+- host->use_pio = true;
++ dev_dbg(host->dev, "fallback to PIO for data at 0x%p size %d\n",
++ host->pio_ptr, host->pio_size);
+ return 1;
+ } else {
+ dma_addr_t phys_addr;
+@@ -137,7 +129,6 @@ static int mvsd_setup_data(struct mvsd_host *host, struct mmc_data *data)
+ phys_addr = sg_dma_address(data->sg);
+ mvsd_write(MVSD_SYS_ADDR_LOW, (u32)phys_addr & 0xffff);
+ mvsd_write(MVSD_SYS_ADDR_HI, (u32)phys_addr >> 16);
+- host->use_pio = false;
+ return 0;
+ }
+ }
+@@ -297,8 +288,8 @@ static u32 mvsd_finish_data(struct mvsd_host *host, struct mmc_data *data,
+ {
+ void __iomem *iobase = host->base;
+
+- if (host->use_pio) {
+- sg_miter_stop(&host->sg_miter);
++ if (host->pio_ptr) {
++ host->pio_ptr = NULL;
+ host->pio_size = 0;
+ } else {
+ dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->sg_frags,
+@@ -353,12 +344,9 @@ static u32 mvsd_finish_data(struct mvsd_host *host, struct mmc_data *data,
+ static irqreturn_t mvsd_irq(int irq, void *dev)
+ {
+ struct mvsd_host *host = dev;
+- struct sg_mapping_iter *sgm = &host->sg_miter;
+ void __iomem *iobase = host->base;
+ u32 intr_status, intr_done_mask;
+ int irq_handled = 0;
+- u16 *p;
+- int s;
+
+ intr_status = mvsd_read(MVSD_NOR_INTR_STATUS);
+ dev_dbg(host->dev, "intr 0x%04x intr_en 0x%04x hw_state 0x%04x\n",
+@@ -382,36 +370,15 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ spin_lock(&host->lock);
+
+ /* PIO handling, if needed. Messy business... */
+- if (host->use_pio) {
+- /*
+- * As we set sgm->consumed this always gives a valid buffer
+- * position.
+- */
+- if (!sg_miter_next(sgm)) {
+- /* This should not happen */
+- dev_err(host->dev, "ran out of scatter segments\n");
+- spin_unlock(&host->lock);
+- host->intr_en &=
+- ~(MVSD_NOR_RX_READY | MVSD_NOR_RX_FIFO_8W |
+- MVSD_NOR_TX_AVAIL | MVSD_NOR_TX_FIFO_8W);
+- mvsd_write(MVSD_NOR_INTR_EN, host->intr_en);
+- return IRQ_HANDLED;
+- }
+- p = sgm->addr;
+- s = sgm->length;
+- if (s > host->pio_size)
+- s = host->pio_size;
+- }
+-
+- if (host->use_pio &&
++ if (host->pio_size &&
+ (intr_status & host->intr_en &
+ (MVSD_NOR_RX_READY | MVSD_NOR_RX_FIFO_8W))) {
+-
++ u16 *p = host->pio_ptr;
++ int s = host->pio_size;
+ while (s >= 32 && (intr_status & MVSD_NOR_RX_FIFO_8W)) {
+ readsw(iobase + MVSD_FIFO, p, 16);
+ p += 16;
+ s -= 32;
+- sgm->consumed += 32;
+ intr_status = mvsd_read(MVSD_NOR_INTR_STATUS);
+ }
+ /*
+@@ -424,7 +391,6 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ put_unaligned(mvsd_read(MVSD_FIFO), p++);
+ put_unaligned(mvsd_read(MVSD_FIFO), p++);
+ s -= 4;
+- sgm->consumed += 4;
+ intr_status = mvsd_read(MVSD_NOR_INTR_STATUS);
+ }
+ if (s && s < 4 && (intr_status & MVSD_NOR_RX_READY)) {
+@@ -432,13 +398,10 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ val[0] = mvsd_read(MVSD_FIFO);
+ val[1] = mvsd_read(MVSD_FIFO);
+ memcpy(p, ((void *)&val) + 4 - s, s);
+- sgm->consumed += s;
+ s = 0;
+ intr_status = mvsd_read(MVSD_NOR_INTR_STATUS);
+ }
+- /* PIO transfer done */
+- host->pio_size -= sgm->consumed;
+- if (host->pio_size == 0) {
++ if (s == 0) {
+ host->intr_en &=
+ ~(MVSD_NOR_RX_READY | MVSD_NOR_RX_FIFO_8W);
+ mvsd_write(MVSD_NOR_INTR_EN, host->intr_en);
+@@ -450,10 +413,14 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ }
+ dev_dbg(host->dev, "pio %d intr 0x%04x hw_state 0x%04x\n",
+ s, intr_status, mvsd_read(MVSD_HW_STATE));
++ host->pio_ptr = p;
++ host->pio_size = s;
+ irq_handled = 1;
+- } else if (host->use_pio &&
++ } else if (host->pio_size &&
+ (intr_status & host->intr_en &
+ (MVSD_NOR_TX_AVAIL | MVSD_NOR_TX_FIFO_8W))) {
++ u16 *p = host->pio_ptr;
++ int s = host->pio_size;
+ /*
+ * The TX_FIFO_8W bit is unreliable. When set, bursting
+ * 16 halfwords all at once in the FIFO drops data. Actually
+@@ -464,7 +431,6 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ mvsd_write(MVSD_FIFO, get_unaligned(p++));
+ mvsd_write(MVSD_FIFO, get_unaligned(p++));
+ s -= 4;
+- sgm->consumed += 4;
+ intr_status = mvsd_read(MVSD_NOR_INTR_STATUS);
+ }
+ if (s < 4) {
+@@ -473,13 +439,10 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ memcpy(((void *)&val) + 4 - s, p, s);
+ mvsd_write(MVSD_FIFO, val[0]);
+ mvsd_write(MVSD_FIFO, val[1]);
+- sgm->consumed += s;
+ s = 0;
+ intr_status = mvsd_read(MVSD_NOR_INTR_STATUS);
+ }
+- /* PIO transfer done */
+- host->pio_size -= sgm->consumed;
+- if (host->pio_size == 0) {
++ if (s == 0) {
+ host->intr_en &=
+ ~(MVSD_NOR_TX_AVAIL | MVSD_NOR_TX_FIFO_8W);
+ mvsd_write(MVSD_NOR_INTR_EN, host->intr_en);
+@@ -487,6 +450,8 @@ static irqreturn_t mvsd_irq(int irq, void *dev)
+ }
+ dev_dbg(host->dev, "pio %d intr 0x%04x hw_state 0x%04x\n",
+ s, intr_status, mvsd_read(MVSD_HW_STATE));
++ host->pio_ptr = p;
++ host->pio_size = s;
+ irq_handled = 1;
+ }
+
+diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
+index e79aa4b3b6c360..c23e5cf47eed0b 100644
+--- a/drivers/mmc/host/sdhci-of-dwcmshc.c
++++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
+@@ -746,6 +746,14 @@ static void th1520_sdhci_reset(struct sdhci_host *host, u8 mask)
+
+ sdhci_reset(host, mask);
+
++ /* The T-Head 1520 SoC does not comply with the SDHCI specification
++ * regarding the "Software Reset for CMD line should clear 'Command
++ * Complete' in the Normal Interrupt Status Register." Clear the bit
++ * here to compensate for this quirk.
++ */
++ if (mask & SDHCI_RESET_CMD)
++ sdhci_writel(host, SDHCI_INT_RESPONSE, SDHCI_INT_STATUS);
++
+ if (priv->flags & FLAG_IO_FIXED_1V8) {
+ ctrl_2 = sdhci_readw(host, SDHCI_HOST_CONTROL2);
+ if (!(ctrl_2 & SDHCI_CTRL_VDD_180)) {
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 0783fc121bbbf9..c39cb119e760db 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -27,6 +27,7 @@
+ #include <linux/phylink.h>
+ #include <linux/etherdevice.h>
+ #include <linux/if_bridge.h>
++#include <linux/if_vlan.h>
+ #include <net/dsa.h>
+
+ #include "b53_regs.h"
+@@ -224,6 +225,9 @@ static const struct b53_mib_desc b53_mibs_58xx[] = {
+
+ #define B53_MIBS_58XX_SIZE ARRAY_SIZE(b53_mibs_58xx)
+
++#define B53_MAX_MTU_25 (1536 - ETH_HLEN - VLAN_HLEN - ETH_FCS_LEN)
++#define B53_MAX_MTU (9720 - ETH_HLEN - VLAN_HLEN - ETH_FCS_LEN)
++
+ static int b53_do_vlan_op(struct b53_device *dev, u8 op)
+ {
+ unsigned int i;
+@@ -2254,20 +2258,25 @@ static int b53_change_mtu(struct dsa_switch *ds, int port, int mtu)
+ bool allow_10_100;
+
+ if (is5325(dev) || is5365(dev))
+- return -EOPNOTSUPP;
++ return 0;
+
+ if (!dsa_is_cpu_port(ds, port))
+ return 0;
+
+- enable_jumbo = (mtu >= JMS_MIN_SIZE);
+- allow_10_100 = (dev->chip_id == BCM583XX_DEVICE_ID);
++ enable_jumbo = (mtu > ETH_DATA_LEN);
++ allow_10_100 = !is63xx(dev);
+
+ return b53_set_jumbo(dev, enable_jumbo, allow_10_100);
+ }
+
+ static int b53_get_max_mtu(struct dsa_switch *ds, int port)
+ {
+- return JMS_MAX_SIZE;
++ struct b53_device *dev = ds->priv;
++
++ if (is5325(dev) || is5365(dev))
++ return B53_MAX_MTU_25;
++
++ return B53_MAX_MTU;
+ }
+
+ static const struct phylink_mac_ops b53_phylink_mac_ops = {
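For reference, the new MTU bounds subtract the full non-payload overhead from the frame size. With the standard kernel values ETH_HLEN = 14, VLAN_HLEN = 4 and ETH_FCS_LEN = 4:

    B53_MAX_MTU_25 = 1536 - 14 - 4 - 4 = 1514
    B53_MAX_MTU    = 9720 - 14 - 4 - 4 = 9698

so the 5325/5365 parts advertise a plain Ethernet MTU while newer switches allow jumbo payloads up to 9698 bytes.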
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index 268949939636a0..d246f95d57ecf8 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -6,6 +6,7 @@
+ #include <linux/module.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/regmap.h>
++#include <linux/iopoll.h>
+ #include <linux/mutex.h>
+ #include <linux/mii.h>
+ #include <linux/of.h>
+@@ -839,6 +840,8 @@ static void lan9303_handle_reset(struct lan9303 *chip)
+ if (!chip->reset_gpio)
+ return;
+
++ gpiod_set_value_cansleep(chip->reset_gpio, 1);
++
+ if (chip->reset_duration != 0)
+ msleep(chip->reset_duration);
+
+@@ -864,8 +867,34 @@ static int lan9303_disable_processing(struct lan9303 *chip)
+ static int lan9303_check_device(struct lan9303 *chip)
+ {
+ int ret;
++ int err;
+ u32 reg;
+
++ /* In I2C-managed configurations this polling loop will clash with the
++ * switch's reading of the EEPROM right after reset, and this behaviour
++ * is not configurable. While lan9303_read() already has a quite long
++ * retry timeout, it seems not all cases are detected as arbitration errors.
++ *
++ * According to datasheet, EEPROM loader has 30ms timeout (in case of
++ * missing EEPROM).
++ *
++ * Loading of the largest supported EEPROM is expected to take at least
++ * 5.9s.
++ */
++ err = read_poll_timeout(lan9303_read, ret,
++ !ret && reg & LAN9303_HW_CFG_READY,
++ 20000, 6000000, false,
++ chip->regmap, LAN9303_HW_CFG, &reg);
++ if (ret) {
++ dev_err(chip->dev, "failed to read HW_CFG reg: %pe\n",
++ ERR_PTR(ret));
++ return ret;
++ }
++ if (err) {
++ dev_err(chip->dev, "HW_CFG not ready: 0x%08x\n", reg);
++ return err;
++ }
++
+ ret = lan9303_read(chip->regmap, LAN9303_CHIP_REV, &reg);
+ if (ret) {
+ dev_err(chip->dev, "failed to read chip revision register: %d\n",
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index c7282ce3d11c54..6eec3be8557163 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -3164,7 +3164,6 @@ static int sja1105_setup(struct dsa_switch *ds)
+ * TPID is ETH_P_SJA1105, and the VLAN ID is the port pvid.
+ */
+ ds->vlan_filtering_is_global = true;
+- ds->untag_bridge_pvid = true;
+ ds->fdb_isolation = true;
+ ds->max_num_bridges = DSA_TAG_8021Q_MAX_NUM_BRIDGES;
+
+diff --git a/drivers/net/ethernet/adi/adin1110.c b/drivers/net/ethernet/adi/adin1110.c
+index 0713f1e2c7f38b..bf2e513295bb77 100644
+--- a/drivers/net/ethernet/adi/adin1110.c
++++ b/drivers/net/ethernet/adi/adin1110.c
+@@ -318,11 +318,11 @@ static int adin1110_read_fifo(struct adin1110_port_priv *port_priv)
+ * from the ADIN1110 frame header.
+ */
+ if (frame_size < ADIN1110_FRAME_HEADER_LEN + ADIN1110_FEC_LEN)
+- return ret;
++ return -EINVAL;
+
+ round_len = adin1110_round_len(frame_size);
+ if (round_len < 0)
+- return ret;
++ return -EINVAL;
+
+ frame_size_no_fcs = frame_size - ADIN1110_FRAME_HEADER_LEN - ADIN1110_FEC_LEN;
+ memset(priv->data, 0, ADIN1110_RD_HEADER_LEN);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 570f8a14d975b5..910089b57a998a 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1077,7 +1077,8 @@ fec_restart(struct net_device *ndev)
+ u32 rcntl = OPT_FRAME_SIZE | 0x04;
+ u32 ecntl = FEC_ECR_ETHEREN;
+
+- fec_ptp_save_state(fep);
++ if (fep->bufdesc_ex)
++ fec_ptp_save_state(fep);
+
+ /* Whack a reset. We should wait for this.
+ * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+@@ -1340,7 +1341,8 @@ fec_stop(struct net_device *ndev)
+ netdev_err(ndev, "Graceful transmit stop did not complete!\n");
+ }
+
+- fec_ptp_save_state(fep);
++ if (fep->bufdesc_ex)
++ fec_ptp_save_state(fep);
+
+ /* Whack a reset. We should wait for this.
+ * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+diff --git a/drivers/net/ethernet/ibm/emac/mal.c b/drivers/net/ethernet/ibm/emac/mal.c
+index d92dd9c83031ee..99d5f83f7c60b5 100644
+--- a/drivers/net/ethernet/ibm/emac/mal.c
++++ b/drivers/net/ethernet/ibm/emac/mal.c
+@@ -578,7 +578,7 @@ static int mal_probe(struct platform_device *ofdev)
+ printk(KERN_ERR "%pOF: Support for 405EZ not enabled!\n",
+ ofdev->dev.of_node);
+ err = -ENODEV;
+- goto fail;
++ goto fail_unmap;
+ #endif
+ }
+
+@@ -742,6 +742,8 @@ static void mal_remove(struct platform_device *ofdev)
+
+ free_netdev(mal->dummy_dev);
+
++ dcr_unmap(mal->dcr_host, 0x100);
++
+ dma_free_coherent(&ofdev->dev,
+ sizeof(struct mal_descriptor) *
+ (NUM_TX_BUFF * mal->num_tx_chans +
+diff --git a/drivers/net/ethernet/intel/e1000e/hw.h b/drivers/net/ethernet/intel/e1000e/hw.h
+index 4b6e7536170abc..fc8ed38aa09554 100644
+--- a/drivers/net/ethernet/intel/e1000e/hw.h
++++ b/drivers/net/ethernet/intel/e1000e/hw.h
+@@ -108,8 +108,8 @@ struct e1000_hw;
+ #define E1000_DEV_ID_PCH_RPL_I219_V22 0x0DC8
+ #define E1000_DEV_ID_PCH_MTP_I219_LM18 0x550A
+ #define E1000_DEV_ID_PCH_MTP_I219_V18 0x550B
+-#define E1000_DEV_ID_PCH_MTP_I219_LM19 0x550C
+-#define E1000_DEV_ID_PCH_MTP_I219_V19 0x550D
++#define E1000_DEV_ID_PCH_ADP_I219_LM19 0x550C
++#define E1000_DEV_ID_PCH_ADP_I219_V19 0x550D
+ #define E1000_DEV_ID_PCH_LNP_I219_LM20 0x550E
+ #define E1000_DEV_ID_PCH_LNP_I219_V20 0x550F
+ #define E1000_DEV_ID_PCH_LNP_I219_LM21 0x5510
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index f103249b12facf..07e90334635824 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -7899,10 +7899,10 @@ static const struct pci_device_id e1000_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_adp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_adp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_adp },
++ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM19), board_pch_adp },
++ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V19), board_pch_adp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_mtp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_mtp },
+- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_mtp },
+- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_mtp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_mtp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_mtp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_mtp },
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index cbcfada7b357a8..f7d4b5f79422b1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1734,6 +1734,7 @@ struct i40e_mac_filter *i40e_add_mac_filter(struct i40e_vsi *vsi,
+ struct hlist_node *h;
+ int bkt;
+
++ lockdep_assert_held(&vsi->mac_filter_hash_lock);
+ if (vsi->info.pvid)
+ return i40e_add_filter(vsi, macaddr,
+ le16_to_cpu(vsi->info.pvid));
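lockdep_assert_held() documents and enforces the locking contract in one line: it compiles away without lockdep, and with lockdep enabled it warns whenever the function runs without the named lock held, which is how the unprotected i40e_del_mac_filter() call fixed in the next hunk could have been caught. A minimal sketch (struct my_vsi is hypothetical):

    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    struct my_vsi {
        spinlock_t hash_lock;
        /* ... filter hash ... */
    };

    static void touch_filters(struct my_vsi *vsi)
    {
        /* No-op unless lockdep is enabled; then it warns if the
         * caller does not hold hash_lock. */
        lockdep_assert_held(&vsi->hash_lock);
        /* ... mutate the filter table ... */
    }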
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 662622f01e3125..dfa785e39458db 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2213,8 +2213,10 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ vfres->vsi_res[0].qset_handle
+ = le16_to_cpu(vsi->info.qs_handle[0]);
+ if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) && !vf->pf_set_mac) {
++ spin_lock_bh(&vsi->mac_filter_hash_lock);
+ i40e_del_mac_filter(vsi, vf->default_lan_addr.addr);
+ eth_zero_addr(vf->default_lan_addr.addr);
++ spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ }
+ ether_addr_copy(vfres->vsi_res[0].default_mac_addr,
+ vf->default_lan_addr.addr);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
+index f182179529b7de..6b60b7c4de0933 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
+@@ -31,7 +31,7 @@ static const struct ice_tunnel_type_scan tnls[] = {
+ * Verifies various attributes of the package file, including length, format
+ * version, and the requirement of at least one segment.
+ */
+-static enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
++static enum ice_ddp_state ice_verify_pkg(const struct ice_pkg_hdr *pkg, u32 len)
+ {
+ u32 seg_count;
+ u32 i;
+@@ -57,13 +57,13 @@ static enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
+ /* all segments must fit within length */
+ for (i = 0; i < seg_count; i++) {
+ u32 off = le32_to_cpu(pkg->seg_offset[i]);
+- struct ice_generic_seg_hdr *seg;
++ const struct ice_generic_seg_hdr *seg;
+
+ /* segment header must fit */
+ if (len < off + sizeof(*seg))
+ return ICE_DDP_PKG_INVALID_FILE;
+
+- seg = (struct ice_generic_seg_hdr *)((u8 *)pkg + off);
++ seg = (void *)pkg + off;
+
+ /* segment body must fit */
+ if (len < off + le32_to_cpu(seg->seg_size))
+@@ -119,13 +119,13 @@ static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver)
+ *
+ * This helper function validates a buffer's header.
+ */
+-static struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf)
++static const struct ice_buf_hdr *ice_pkg_val_buf(const struct ice_buf *buf)
+ {
+- struct ice_buf_hdr *hdr;
++ const struct ice_buf_hdr *hdr;
+ u16 section_count;
+ u16 data_end;
+
+- hdr = (struct ice_buf_hdr *)buf->buf;
++ hdr = (const struct ice_buf_hdr *)buf->buf;
+ /* verify data */
+ section_count = le16_to_cpu(hdr->section_count);
+ if (section_count < ICE_MIN_S_COUNT || section_count > ICE_MAX_S_COUNT)
+@@ -165,8 +165,8 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
+ * unexpected value has been detected (for example an invalid section count or
+ * an invalid buffer end value).
+ */
+-static struct ice_buf_hdr *ice_pkg_enum_buf(struct ice_seg *ice_seg,
+- struct ice_pkg_enum *state)
++static const struct ice_buf_hdr *ice_pkg_enum_buf(struct ice_seg *ice_seg,
++ struct ice_pkg_enum *state)
+ {
+ if (ice_seg) {
+ state->buf_table = ice_find_buf_table(ice_seg);
+@@ -1800,9 +1800,9 @@ int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ */
+-static struct ice_generic_seg_hdr *
++static const struct ice_generic_seg_hdr *
+ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
+- struct ice_pkg_hdr *pkg_hdr)
++ const struct ice_pkg_hdr *pkg_hdr)
+ {
+ u32 i;
+
+@@ -1813,11 +1813,9 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
+
+ /* Search all package segments for the requested segment type */
+ for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) {
+- struct ice_generic_seg_hdr *seg;
++ const struct ice_generic_seg_hdr *seg;
+
+- seg = (struct ice_generic_seg_hdr
+- *)((u8 *)pkg_hdr +
+- le32_to_cpu(pkg_hdr->seg_offset[i]));
++ seg = (void *)pkg_hdr + le32_to_cpu(pkg_hdr->seg_offset[i]);
+
+ if (le32_to_cpu(seg->seg_type) == seg_type)
+ return seg;
+@@ -2354,12 +2352,12 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
+ *
+ * Return: zero when update was successful, negative values otherwise.
+ */
+-int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
++int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
+ {
+- u8 *current_topo, *new_topo = NULL;
+- struct ice_run_time_cfg_seg *seg;
+- struct ice_buf_hdr *section;
+- struct ice_pkg_hdr *pkg_hdr;
++ u8 *new_topo = NULL, *topo __free(kfree) = NULL;
++ const struct ice_run_time_cfg_seg *seg;
++ const struct ice_buf_hdr *section;
++ const struct ice_pkg_hdr *pkg_hdr;
+ enum ice_ddp_state state;
+ u16 offset, size = 0;
+ u32 reg = 0;
+@@ -2375,15 +2373,13 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
+ return -EOPNOTSUPP;
+ }
+
+- current_topo = kzalloc(ICE_AQ_MAX_BUF_LEN, GFP_KERNEL);
+- if (!current_topo)
++ topo = kzalloc(ICE_AQ_MAX_BUF_LEN, GFP_KERNEL);
++ if (!topo)
+ return -ENOMEM;
+
+- /* Get the current Tx topology */
+- status = ice_get_set_tx_topo(hw, current_topo, ICE_AQ_MAX_BUF_LEN, NULL,
+- &flags, false);
+-
+- kfree(current_topo);
++ /* Get the current Tx topology flags */
++ status = ice_get_set_tx_topo(hw, topo, ICE_AQ_MAX_BUF_LEN, NULL, &flags,
++ false);
+
+ if (status) {
+ ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n");
+@@ -2419,7 +2415,7 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
+ goto update_topo;
+ }
+
+- pkg_hdr = (struct ice_pkg_hdr *)buf;
++ pkg_hdr = (const struct ice_pkg_hdr *)buf;
+ state = ice_verify_pkg(pkg_hdr, len);
+ if (state) {
+ ice_debug(hw, ICE_DBG_INIT, "Failed to verify pkg (err: %d)\n",
+@@ -2428,7 +2424,7 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
+ }
+
+ /* Find runtime configuration segment */
+- seg = (struct ice_run_time_cfg_seg *)
++ seg = (const struct ice_run_time_cfg_seg *)
+ ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE_RUN_TIME_CFG, pkg_hdr);
+ if (!seg) {
+ ice_debug(hw, ICE_DBG_INIT, "5 layer topology segment is missing\n");
+@@ -2461,8 +2457,10 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
+ return -EIO;
+ }
+
+- /* Get the new topology buffer */
+- new_topo = ((u8 *)section) + offset;
++ /* Get the new topology buffer, reuse current topo copy mem */
++ static_assert(ICE_PKG_BUF_SIZE == ICE_AQ_MAX_BUF_LEN);
++ new_topo = topo;
++ memcpy(new_topo, (u8 *)section + offset, size);
+
+ update_topo:
+ /* Acquire global lock to make sure that set topology issued
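The "topo __free(kfree)" declaration above uses the scope-based cleanup helpers from linux/cleanup.h: the annotated pointer is handed to kfree() automatically when it goes out of scope, which removes the explicit kfree(current_topo) and keeps every early-return path leak-free. A minimal sketch:

    #include <linux/cleanup.h>
    #include <linux/slab.h>

    static int use_scoped_buf(size_t len)
    {
        /* kfree() runs automatically on every exit path once
         * buf leaves scope; NULL is safe to free. */
        u8 *buf __free(kfree) = kzalloc(len, GFP_KERNEL);

        if (!buf)
            return -ENOMEM;

        /* ... fill and consume buf ... */
        return 0;
    }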
+diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.h b/drivers/net/ethernet/intel/ice/ice_ddp.h
+index 622543f08b4313..00840e5a107790 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ddp.h
++++ b/drivers/net/ethernet/intel/ice/ice_ddp.h
+@@ -430,7 +430,7 @@ struct ice_pkg_enum {
+ u32 buf_idx;
+
+ u32 type;
+- struct ice_buf_hdr *buf;
++ const struct ice_buf_hdr *buf;
+ u32 sect_idx;
+ void *sect;
+ u32 sect_type;
+@@ -454,6 +454,6 @@ u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld);
+ void *ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
+ u32 sect_type);
+
+-int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len);
++int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len);
+
+ #endif
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
+index e92be6f130a3da..cc35d29ac9e6c2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
+@@ -651,6 +651,8 @@ ice_dpll_output_state_set(const struct dpll_pin *pin, void *pin_priv,
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+
++ if (state == DPLL_PIN_STATE_SELECTABLE)
++ return -EINVAL;
+ if (!enable && p->state[d->dpll_idx] == DPLL_PIN_STATE_DISCONNECTED)
+ return 0;
+
+@@ -1626,6 +1628,8 @@ ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ struct dpll_pin *parent;
+ int ret, i;
+
++ if (WARN_ON((!vsi || !vsi->netdev)))
++ return -EINVAL;
+ ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF,
+ pf->dplls.clock_id);
+ if (ret)
+@@ -1641,8 +1645,6 @@ ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ if (ret)
+ goto unregister_pins;
+ }
+- if (WARN_ON((!vsi || !vsi->netdev)))
+- return -EINVAL;
+ dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin);
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+index f5aceb32bf4dd2..cccb7ddf61c975 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+@@ -582,10 +582,13 @@ ice_eswitch_br_switchdev_event(struct notifier_block *nb,
+ return NOTIFY_DONE;
+ }
+
+-static void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge)
++void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge)
+ {
+ struct ice_esw_br_fdb_entry *entry, *tmp;
+
++ if (!bridge)
++ return;
++
+ list_for_each_entry_safe(entry, tmp, &bridge->fdb_list, list)
+ ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, entry);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+index c15c7344d7f853..66a2c804338f0d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+@@ -117,5 +117,6 @@ void
+ ice_eswitch_br_offloads_deinit(struct ice_pf *pf);
+ int
+ ice_eswitch_br_offloads_init(struct ice_pf *pf);
++void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge);
+
+ #endif /* _ICE_ESWITCH_BR_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index ea780d468579fa..03b72c0e043a7c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -86,7 +86,8 @@ ice_indr_setup_tc_cb(struct net_device *netdev, struct Qdisc *sch,
+
+ bool netif_is_ice(const struct net_device *dev)
+ {
+- return dev && (dev->netdev_ops == &ice_netdev_ops);
++ return dev && (dev->netdev_ops == &ice_netdev_ops ||
++ dev->netdev_ops == &ice_netdev_safe_mode_ops);
+ }
+
+ /**
+@@ -519,25 +520,6 @@ static void ice_pf_dis_all_vsi(struct ice_pf *pf, bool locked)
+ pf->vf_agg_node[node].num_vsis = 0;
+ }
+
+-/**
+- * ice_clear_sw_switch_recipes - clear switch recipes
+- * @pf: board private structure
+- *
+- * Mark switch recipes as not created in sw structures. There are cases where
+- * rules (especially advanced rules) need to be restored, either re-read from
+- * hardware or added again. For example after the reset. 'recp_created' flag
+- * prevents from doing that and need to be cleared upfront.
+- */
+-static void ice_clear_sw_switch_recipes(struct ice_pf *pf)
+-{
+- struct ice_sw_recipe *recp;
+- u8 i;
+-
+- recp = pf->hw.switch_info->recp_list;
+- for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
+- recp[i].recp_created = false;
+-}
+-
+ /**
+ * ice_prepare_for_reset - prep for reset
+ * @pf: board private structure
+@@ -574,8 +556,9 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
+ mutex_unlock(&pf->vfs.table_lock);
+
+ if (ice_is_eswitch_mode_switchdev(pf)) {
+- if (reset_type != ICE_RESET_PFR)
+- ice_clear_sw_switch_recipes(pf);
++ rtnl_lock();
++ ice_eswitch_br_fdb_flush(pf->eswitch.br_offloads->bridge);
++ rtnl_unlock();
+ }
+
+ /* release ADQ specific HW and SW resources */
+@@ -4548,16 +4531,10 @@ ice_init_tx_topology(struct ice_hw *hw, const struct firmware *firmware)
+ u8 num_tx_sched_layers = hw->num_tx_sched_layers;
+ struct ice_pf *pf = hw->back;
+ struct device *dev;
+- u8 *buf_copy;
+ int err;
+
+ dev = ice_pf_to_dev(pf);
+- /* ice_cfg_tx_topo buf argument is not a constant,
+- * so we have to make a copy
+- */
+- buf_copy = kmemdup(firmware->data, firmware->size, GFP_KERNEL);
+-
+- err = ice_cfg_tx_topo(hw, buf_copy, firmware->size);
++ err = ice_cfg_tx_topo(hw, firmware->data, firmware->size);
+ if (!err) {
+ if (hw->num_tx_sched_layers > num_tx_sched_layers)
+ dev_info(dev, "Tx scheduling layers switching feature disabled\n");
+@@ -4785,14 +4762,12 @@ int ice_init_dev(struct ice_pf *pf)
+ ice_init_feature_support(pf);
+
+ err = ice_init_ddp_config(hw, pf);
+- if (err)
+- return err;
+
+ /* if ice_init_ddp_config fails, ICE_FLAG_ADV_FEATURES bit won't be
+ * set in pf->state, which will cause ice_is_safe_mode to return
+ * true
+ */
+- if (ice_is_safe_mode(pf)) {
++ if (err || ice_is_safe_mode(pf)) {
+ /* we already got function/device capabilities but these don't
+ * reflect what the driver needs to do in safe mode. Instead of
+ * adding conditional logic everywhere to ignore these
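For reference, the hunk above makes a DDP config failure fall through into safe-mode setup instead of returning early. A minimal userspace sketch of that control flow, with init_ddp_config() and enter_safe_mode() as hypothetical stand-ins for the driver's routines:

#include <stdbool.h>
#include <stdio.h>

static int init_ddp_config(void) { return -1; /* simulate a DDP failure */ }
static bool is_safe_mode(void) { return false; }
static void enter_safe_mode(void) { puts("running in safe mode"); }

int main(void)
{
	int err = init_ddp_config();

	/* No early return on err: a failed DDP load must still land the
	 * device in safe mode rather than aborting initialization. */
	if (err || is_safe_mode())
		enter_safe_mode();
	return 0;
}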
+diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
+index 55ef33208456a4..7a7cad20541b93 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
++++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
+@@ -1096,8 +1096,10 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ return -ENOENT;
+
+ vsi = ice_get_vf_vsi(vf);
+- if (!vsi)
++ if (!vsi) {
++ ice_put_vf(vf);
+ return -ENOENT;
++ }
+
+ prev_msix = vf->num_msix;
+ prev_queues = vf->num_vf_qs;
+@@ -1119,7 +1121,10 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ if (vf->first_vector_idx < 0)
+ goto unroll;
+
+- if (ice_vf_reconfig_vsi(vf) || ice_vf_init_host_cfg(vf, vsi)) {
++ vsi->req_txq = queues;
++ vsi->req_rxq = queues;
++
++ if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT)) {
+ /* Try to rebuild with previous values */
+ needs_rebuild = true;
+ goto unroll;
+@@ -1142,12 +1147,16 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ vf->num_msix = prev_msix;
+ vf->num_vf_qs = prev_queues;
+ vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+- if (vf->first_vector_idx < 0)
++ if (vf->first_vector_idx < 0) {
++ ice_put_vf(vf);
+ return -EINVAL;
++ }
+
+ if (needs_rebuild) {
+- ice_vf_reconfig_vsi(vf);
+- ice_vf_init_host_cfg(vf, vsi);
++ vsi->req_txq = prev_queues;
++ vsi->req_rxq = prev_queues;
++
++ ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
+ }
+
+ ice_ena_vf_mappings(vf);
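The rule this fix enforces is that every early return taken after the VF reference is acquired must also drop it. A small sketch of that get/put balancing under assumed names (struct vf and put_vf() are illustrative, not the driver's types):

#include <stdio.h>

struct vf { int refcnt; };

static void put_vf(struct vf *v) { v->refcnt--; }

static int set_msix_count(struct vf *v, int have_vsi)
{
	v->refcnt++;			/* the "get" */
	if (!have_vsi) {
		put_vf(v);		/* pair the get on error paths too */
		return -1;
	}
	put_vf(v);
	return 0;
}

int main(void)
{
	struct vf v = { 0 };

	set_msix_count(&v, 0);
	printf("refcnt after error path: %d\n", v.refcnt);	/* 0, balanced */
	return 0;
}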
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 79d91e95358ca1..0e740342e2947e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -6322,8 +6322,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+ if (!itr->vsi_list_info ||
+ !test_bit(vsi_handle, itr->vsi_list_info->vsi_map))
+ continue;
+- /* Clearing it so that the logic can add it back */
+- clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+ f_entry.fltr_info.vsi_handle = vsi_handle;
+ f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ /* update the src in case it is VSI num */
+diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+index e6923f8121a994..ea39b999a0d000 100644
+--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+@@ -819,6 +819,17 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
+ rule_info.sw_act.flag |= ICE_FLTR_TX;
+ rule_info.sw_act.src = vsi->idx;
+ rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
++	/* This is a specific case. The destination VSI index is
++	 * overwritten by the source VSI index. This type of filter
++	 * should allow the packet to go to the LAN, not to the
++	 * VSI passed here. It should set the LAN_EN bit only. However,
++	 * the VSI must be a valid one, so setting the source VSI index
++	 * here is safe. Even if the switch result sets both LAN_EN
++	 * and LB_EN (which would normally pass the packet to this VSI),
++	 * the packet won't be seen on the VSI, because local loopback
++	 * is turned off.
++	 */
++ rule_info.sw_act.vsi_handle = vsi->idx;
+ } else {
+ /* VF to VF */
+ rule_info.sw_act.flag |= ICE_FLTR_TX;
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 5635e9da2212ba..f8fbd49e231056 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -256,7 +256,7 @@ static void ice_vf_pre_vsi_rebuild(struct ice_vf *vf)
+ *
+ * It brings the VSI down and then reconfigures it with the hardware.
+ */
+-int ice_vf_reconfig_vsi(struct ice_vf *vf)
++static int ice_vf_reconfig_vsi(struct ice_vf *vf)
+ {
+ struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ struct ice_pf *pf = vf->pf;
+@@ -335,6 +335,13 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi)
+
+ err = vlan_ops->add_vlan(vsi, &vf->port_vlan_info);
+ } else {
++ /* clear possible previous port vlan config */
++ err = ice_vsi_clear_port_vlan(vsi);
++ if (err) {
++ dev_err(dev, "failed to clear port VLAN via VSI parameters for VF %u, error %d\n",
++ vf->vf_id, err);
++ return err;
++ }
+ err = ice_vsi_add_vlan_zero(vsi);
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+index 91ba7fe0eaee1a..0c7e77c0a09fa6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+@@ -23,7 +23,6 @@
+ #warning "Only include ice_vf_lib_private.h in CONFIG_PCI_IOV virtualization files"
+ #endif
+
+-int ice_vf_reconfig_vsi(struct ice_vf *vf);
+ void ice_initialize_vf_entry(struct ice_vf *vf);
+ void ice_dis_vf_qs(struct ice_vf *vf);
+ int ice_check_vf_init(struct ice_vf *vf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
+index 6e8f2aab608015..5291f2888ef89a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
+@@ -787,3 +787,60 @@ int ice_vsi_clear_outer_port_vlan(struct ice_vsi *vsi)
+ kfree(ctxt);
+ return err;
+ }
++
++int ice_vsi_clear_port_vlan(struct ice_vsi *vsi)
++{
++ struct ice_hw *hw = &vsi->back->hw;
++ struct ice_vsi_ctx *ctxt;
++ int err;
++
++ ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
++ if (!ctxt)
++ return -ENOMEM;
++
++ ctxt->info = vsi->info;
++
++ ctxt->info.port_based_outer_vlan = 0;
++ ctxt->info.port_based_inner_vlan = 0;
++
++ ctxt->info.inner_vlan_flags =
++ FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_TX_MODE_M,
++ ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL);
++ if (ice_is_dvm_ena(hw)) {
++ ctxt->info.inner_vlan_flags |=
++ FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M,
++ ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING);
++ ctxt->info.outer_vlan_flags =
++ FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M,
++ ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL);
++ ctxt->info.outer_vlan_flags |=
++ FIELD_PREP(ICE_AQ_VSI_OUTER_TAG_TYPE_M,
++ ICE_AQ_VSI_OUTER_TAG_VLAN_8100);
++ ctxt->info.outer_vlan_flags |=
++ ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING <<
++ ICE_AQ_VSI_OUTER_VLAN_EMODE_S;
++ }
++
++ ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
++ ctxt->info.valid_sections =
++ cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID |
++ ICE_AQ_VSI_PROP_VLAN_VALID |
++ ICE_AQ_VSI_PROP_SW_VALID);
++
++ err = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
++ if (err) {
++ dev_err(ice_pf_to_dev(vsi->back), "update VSI for clearing port based VLAN failed, err %d aq_err %s\n",
++ err, ice_aq_str(hw->adminq.sq_last_status));
++ } else {
++ vsi->info.port_based_outer_vlan =
++ ctxt->info.port_based_outer_vlan;
++ vsi->info.port_based_inner_vlan =
++ ctxt->info.port_based_inner_vlan;
++ vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags;
++ vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags;
++ vsi->info.sw_flags2 = ctxt->info.sw_flags2;
++ }
++
++ kfree(ctxt);
++ return err;
++}
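ice_vsi_clear_port_vlan() above builds its VLAN flag words with FIELD_PREP(). A self-contained sketch of that bit-packing, using hypothetical userspace reimplementations of GENMASK()/FIELD_PREP() and an assumed field layout:

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel helpers, for illustration only. */
#define GENMASK(h, l)		((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_PREP(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))

#define INNER_VLAN_TX_MODE_M	GENMASK(6, 5)	/* assumed bit positions */
#define INNER_VLAN_TX_MODE_ALL	0x3

int main(void)
{
	uint32_t flags = FIELD_PREP(INNER_VLAN_TX_MODE_M,
				    INNER_VLAN_TX_MODE_ALL);

	printf("inner_vlan_flags = 0x%02x\n", flags);	/* prints 0x60 */
	return 0;
}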
+diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h
+index f0d84d11bd5b1f..12b227621a7ddc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h
+@@ -36,5 +36,6 @@ int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid);
+ int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi);
+ int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan);
+ int ice_vsi_clear_outer_port_vlan(struct ice_vsi *vsi);
++int ice_vsi_clear_port_vlan(struct ice_vsi *vsi);
+
+ #endif /* _ICE_VSI_VLAN_LIB_H_ */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index 70986e12da28e3..3c0f97650d72fd 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -666,7 +666,7 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter,
+
+ if (ctlq_msg->data_len) {
+ payload = ctlq_msg->ctx.indirect.payload->va;
+- payload_size = ctlq_msg->ctx.indirect.payload->size;
++ payload_size = ctlq_msg->data_len;
+ }
+
+ xn->reply_sz = payload_size;
+@@ -1295,10 +1295,6 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
+ err = reply_sz;
+ goto free_vport_params;
+ }
+- if (reply_sz < IDPF_CTLQ_MAX_BUF_LEN) {
+- err = -EIO;
+- goto free_vport_params;
+- }
+
+ return 0;
+
+@@ -2602,9 +2598,6 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
+ if (reply_sz < 0)
+ return reply_sz;
+
+- if (reply_sz < IDPF_CTLQ_MAX_BUF_LEN)
+- return -EIO;
+-
+ ptypes_recvd += le16_to_cpu(ptype_info->num_ptypes);
+ if (ptypes_recvd > max_ptype)
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 1ef4cb871452a6..f1d0881687233e 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -9651,6 +9651,10 @@ static void igb_io_resume(struct pci_dev *pdev)
+ struct igb_adapter *adapter = netdev_priv(netdev);
+
+ if (netif_running(netdev)) {
++ if (!test_bit(__IGB_DOWN, &adapter->state)) {
++ dev_dbg(&pdev->dev, "Resuming from non-fatal error, do nothing.\n");
++ return;
++ }
+ if (igb_up(adapter)) {
+ dev_err(&pdev->dev, "igb_up failed after reset\n");
+ return;
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index c9e17a8208a901..f1723a6fb082b4 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -1260,7 +1260,8 @@ static int efx_poll(struct napi_struct *napi, int budget)
+
+ spent = efx_process_channel(channel, budget);
+
+- xdp_do_flush();
++ if (budget)
++ xdp_do_flush();
+
+ if (spent < budget) {
+ if (efx_channel_has_rx_queue(channel) &&
+diff --git a/drivers/net/ethernet/sfc/siena/efx_channels.c b/drivers/net/ethernet/sfc/siena/efx_channels.c
+index a7346e965bfe70..d120b3c83ac07d 100644
+--- a/drivers/net/ethernet/sfc/siena/efx_channels.c
++++ b/drivers/net/ethernet/sfc/siena/efx_channels.c
+@@ -1285,7 +1285,8 @@ static int efx_poll(struct napi_struct *napi, int budget)
+
+ spent = efx_process_channel(channel, budget);
+
+- xdp_do_flush();
++ if (budget)
++ xdp_do_flush();
+
+ if (spent < budget) {
+ if (efx_channel_has_rx_queue(channel) &&
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 95d3d1081727fa..f3a1b179aaeaca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2022,7 +2022,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
+ rx_q->queue_index = queue;
+ rx_q->priv_data = priv;
+
+- pp_params.flags = PP_FLAG_DMA_MAP | (xdp_prog ? PP_FLAG_DMA_SYNC_DEV : 0);
++ pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+ pp_params.pool_size = dma_conf->dma_rx_size;
+ num_pages = DIV_ROUND_UP(dma_conf->dma_buf_sz, PAGE_SIZE);
+ pp_params.order = ilog2(num_pages);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.c b/drivers/net/ethernet/ti/icssg/icssg_config.c
+index dae52a83a3786f..5be020d0887aca 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.c
+@@ -733,6 +733,7 @@ void icssg_vtbl_modify(struct prueth_emac *emac, u8 vid, u8 port_mask,
+ u8 fid_c1;
+
+ tbl = prueth->vlan_tbl;
++ spin_lock(&prueth->vtbl_lock);
+ fid_c1 = tbl[vid].fid_c1;
+
+ /* FID_C1: bit0..2 port membership mask,
+@@ -748,6 +749,7 @@ void icssg_vtbl_modify(struct prueth_emac *emac, u8 vid, u8 port_mask,
+ }
+
+ tbl[vid].fid_c1 = fid_c1;
++ spin_unlock(&prueth->vtbl_lock);
+ }
+ EXPORT_SYMBOL_GPL(icssg_vtbl_modify);
+
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index e3451beed32385..33cb3590a5cdec 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -1262,6 +1262,7 @@ static int prueth_probe(struct platform_device *pdev)
+ icss_iep_init_fw(prueth->iep1);
+ }
+
++ spin_lock_init(&prueth->vtbl_lock);
+ /* setup netdev interfaces */
+ if (eth0_node) {
+ ret = prueth_netdev_init(prueth, eth0_node);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+index f678d656a3ed3a..4d1c895dacdb67 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+@@ -282,6 +282,8 @@ struct prueth {
+ bool is_switchmode_supported;
+ unsigned char switch_id[MAX_PHYS_ITEM_ID_LEN];
+ int default_vlan;
++ /** @vtbl_lock: Lock for vtbl in shared memory */
++ spinlock_t vtbl_lock;
+ };
+
+ struct emac_tx_ts_response {
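The new vtbl_lock serializes the read-modify-write of tbl[vid].fid_c1 between ports. A userspace sketch of the same pattern with a pthread spinlock; the shared byte is a simplified stand-in for the firmware-shared table entry (build with -lpthread):

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t vtbl_lock;
static unsigned char fid_c1;	/* shared entry, like tbl[vid].fid_c1 */

static void *modify(void *arg)
{
	unsigned char bit = *(unsigned char *)arg;

	pthread_spin_lock(&vtbl_lock);
	fid_c1 |= bit;		/* read-modify-write, now atomic */
	pthread_spin_unlock(&vtbl_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	unsigned char ba = 0x1, bb = 0x2;

	pthread_spin_init(&vtbl_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&a, NULL, modify, &ba);
	pthread_create(&b, NULL, modify, &bb);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("fid_c1 = 0x%x\n", fid_c1);	/* always 0x3 with the lock */
	return 0;
}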
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index 9c09293b52588d..3e68f7a6e0abe3 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -1118,8 +1118,14 @@ static void send_ext_msg_udp(struct netconsole_target *nt, const char *msg,
+
+ this_chunk = min(userdata_len - sent_userdata,
+ MAX_PRINT_CHUNK - preceding_bytes);
+- if (WARN_ON_ONCE(this_chunk <= 0))
++ if (WARN_ON_ONCE(this_chunk < 0))
++		/* this_chunk could be zero if the previous
++		 * message used up all of the buffer. This is
++		 * not a problem; the userdata will be sent in
++		 * the next iteration.
++		 */
+ return;
++
+ memcpy(buf + this_header + this_offset,
+ userdata + sent_userdata,
+ this_chunk);
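The relaxed WARN_ON_ONCE() condition above reflects that the chunk arithmetic can legitimately produce zero. A tiny sketch of that case, with MAX_PRINT_CHUNK as an assumed buffer size:

#include <stdio.h>

#define MAX_PRINT_CHUNK 1000	/* assumed buffer size for illustration */

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
	int userdata_len = 50, sent_userdata = 0;
	int preceding_bytes = MAX_PRINT_CHUNK;	/* message filled the buffer */
	int this_chunk = min_int(userdata_len - sent_userdata,
				 MAX_PRINT_CHUNK - preceding_bytes);

	/* this_chunk == 0 here: nothing fits in this datagram, so the
	 * userdata goes out on the next iteration. Zero is valid; only
	 * a negative value would indicate a bug. */
	printf("this_chunk = %d\n", this_chunk);
	return 0;
}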
+diff --git a/drivers/net/phy/aquantia/aquantia_main.c b/drivers/net/phy/aquantia/aquantia_main.c
+index 4d156d406bab9a..c33a5ef34ba032 100644
+--- a/drivers/net/phy/aquantia/aquantia_main.c
++++ b/drivers/net/phy/aquantia/aquantia_main.c
+@@ -537,12 +537,6 @@ static int aqcs109_config_init(struct phy_device *phydev)
+ if (!ret)
+ aqr107_chip_info(phydev);
+
+- /* AQCS109 belongs to a chip family partially supporting 10G and 5G.
+- * PMA speed ability bits are the same for all members of the family,
+- * AQCS109 however supports speeds up to 2.5G only.
+- */
+- phy_set_max_speed(phydev, SPEED_2500);
+-
+ return aqr107_set_downshift(phydev, MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT);
+ }
+
+@@ -731,6 +725,31 @@ static int aqr113c_fill_interface_modes(struct phy_device *phydev)
+ return aqr107_fill_interface_modes(phydev);
+ }
+
++static int aqr115c_get_features(struct phy_device *phydev)
++{
++ unsigned long *supported = phydev->supported;
++
++ /* PHY supports speeds up to 2.5G with autoneg. PMA capabilities
++ * are not useful.
++ */
++ linkmode_or(supported, supported, phy_gbit_features);
++ linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, supported);
++
++ return 0;
++}
++
++static int aqr111_get_features(struct phy_device *phydev)
++{
++ /* PHY supports speeds up to 5G with autoneg. PMA capabilities
++ * are not useful.
++ */
++ aqr115c_get_features(phydev);
++ linkmode_set_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
++ phydev->supported);
++
++ return 0;
++}
++
+ static int aqr113c_config_init(struct phy_device *phydev)
+ {
+ int ret;
+@@ -767,15 +786,6 @@ static int aqr107_probe(struct phy_device *phydev)
+ return aqr_hwmon_probe(phydev);
+ }
+
+-static int aqr111_config_init(struct phy_device *phydev)
+-{
+- /* AQR111 reports supporting speed up to 10G,
+- * however only speeds up to 5G are supported.
+- */
+- phy_set_max_speed(phydev, SPEED_5000);
+-
+- return aqr107_config_init(phydev);
+-}
+
+ static struct phy_driver aqr_driver[] = {
+ {
+@@ -853,6 +863,7 @@ static struct phy_driver aqr_driver[] = {
+ .get_sset_count = aqr107_get_sset_count,
+ .get_strings = aqr107_get_strings,
+ .get_stats = aqr107_get_stats,
++ .get_features = aqr115c_get_features,
+ .link_change_notify = aqr107_link_change_notify,
+ .led_brightness_set = aqr_phy_led_brightness_set,
+ .led_hw_is_supported = aqr_phy_led_hw_is_supported,
+@@ -865,7 +876,7 @@ static struct phy_driver aqr_driver[] = {
+ .name = "Aquantia AQR111",
+ .probe = aqr107_probe,
+ .get_rate_matching = aqr107_get_rate_matching,
+- .config_init = aqr111_config_init,
++ .config_init = aqr107_config_init,
+ .config_aneg = aqr_config_aneg,
+ .config_intr = aqr_config_intr,
+ .handle_interrupt = aqr_handle_interrupt,
+@@ -877,6 +888,7 @@ static struct phy_driver aqr_driver[] = {
+ .get_sset_count = aqr107_get_sset_count,
+ .get_strings = aqr107_get_strings,
+ .get_stats = aqr107_get_stats,
++ .get_features = aqr111_get_features,
+ .link_change_notify = aqr107_link_change_notify,
+ .led_brightness_set = aqr_phy_led_brightness_set,
+ .led_hw_is_supported = aqr_phy_led_hw_is_supported,
+@@ -889,7 +901,7 @@ static struct phy_driver aqr_driver[] = {
+ .name = "Aquantia AQR111B0",
+ .probe = aqr107_probe,
+ .get_rate_matching = aqr107_get_rate_matching,
+- .config_init = aqr111_config_init,
++ .config_init = aqr107_config_init,
+ .config_aneg = aqr_config_aneg,
+ .config_intr = aqr_config_intr,
+ .handle_interrupt = aqr_handle_interrupt,
+@@ -901,6 +913,7 @@ static struct phy_driver aqr_driver[] = {
+ .get_sset_count = aqr107_get_sset_count,
+ .get_strings = aqr107_get_strings,
+ .get_stats = aqr107_get_stats,
++ .get_features = aqr111_get_features,
+ .link_change_notify = aqr107_link_change_notify,
+ .led_brightness_set = aqr_phy_led_brightness_set,
+ .led_hw_is_supported = aqr_phy_led_hw_is_supported,
+@@ -1010,7 +1023,7 @@ static struct phy_driver aqr_driver[] = {
+ .name = "Aquantia AQR114C",
+ .probe = aqr107_probe,
+ .get_rate_matching = aqr107_get_rate_matching,
+- .config_init = aqr111_config_init,
++ .config_init = aqr107_config_init,
+ .config_aneg = aqr_config_aneg,
+ .config_intr = aqr_config_intr,
+ .handle_interrupt = aqr_handle_interrupt,
+@@ -1022,6 +1035,7 @@ static struct phy_driver aqr_driver[] = {
+ .get_sset_count = aqr107_get_sset_count,
+ .get_strings = aqr107_get_strings,
+ .get_stats = aqr107_get_stats,
++ .get_features = aqr111_get_features,
+ .link_change_notify = aqr107_link_change_notify,
+ .led_brightness_set = aqr_phy_led_brightness_set,
+ .led_hw_is_supported = aqr_phy_led_hw_is_supported,
+@@ -1046,6 +1060,7 @@ static struct phy_driver aqr_driver[] = {
+ .get_sset_count = aqr107_get_sset_count,
+ .get_strings = aqr107_get_strings,
+ .get_stats = aqr107_get_stats,
++ .get_features = aqr115c_get_features,
+ .link_change_notify = aqr107_link_change_notify,
+ .led_brightness_set = aqr_phy_led_brightness_set,
+ .led_hw_is_supported = aqr_phy_led_hw_is_supported,
+diff --git a/drivers/net/phy/bcm84881.c b/drivers/net/phy/bcm84881.c
+index f1d47c2640585b..97da3aee49422c 100644
+--- a/drivers/net/phy/bcm84881.c
++++ b/drivers/net/phy/bcm84881.c
+@@ -132,7 +132,7 @@ static int bcm84881_aneg_done(struct phy_device *phydev)
+
+ bmsr = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_C22 + MII_BMSR);
+ if (bmsr < 0)
+- return val;
++ return bmsr;
+
+ return !!(val & MDIO_AN_STAT1_COMPLETE) &&
+ !!(bmsr & BMSR_ANEGCOMPLETE);
+@@ -158,7 +158,7 @@ static int bcm84881_read_status(struct phy_device *phydev)
+
+ bmsr = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_C22 + MII_BMSR);
+ if (bmsr < 0)
+- return val;
++ return bmsr;
+
+ phydev->autoneg_complete = !!(val & MDIO_AN_STAT1_COMPLETE) &&
+ !!(bmsr & BMSR_ANEGCOMPLETE);
+diff --git a/drivers/net/phy/dp83869.c b/drivers/net/phy/dp83869.c
+index d7aaefb5226b62..5f056d7db83eed 100644
+--- a/drivers/net/phy/dp83869.c
++++ b/drivers/net/phy/dp83869.c
+@@ -645,7 +645,6 @@ static int dp83869_configure_fiber(struct phy_device *phydev,
+ phydev->supported);
+
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, phydev->supported);
+- linkmode_set_bit(ADVERTISED_FIBRE, phydev->advertising);
+
+ if (dp83869->mode == DP83869_RGMII_1000_BASE) {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 6bb2793de0a94a..366cf3b2cbb0a3 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3249,10 +3249,11 @@ static __maybe_unused int phy_led_hw_is_supported(struct led_classdev *led_cdev,
+
+ static void phy_leds_unregister(struct phy_device *phydev)
+ {
+- struct phy_led *phyled;
++ struct phy_led *phyled, *tmp;
+
+- list_for_each_entry(phyled, &phydev->leds, list) {
++ list_for_each_entry_safe(phyled, tmp, &phydev->leds, list) {
+ led_classdev_unregister(&phyled->led_cdev);
++ list_del(&phyled->list);
+ }
+ }
+
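The switch to list_for_each_entry_safe() plus list_del() above is the standard delete-while-iterating pattern. A plain-C sketch of why the next pointer must be fetched before the current node is released (simplified singly-linked list, not the kernel's list_head API):

#include <stdio.h>
#include <stdlib.h>

struct node { struct node *next; };

static void free_all(struct node *head)
{
	struct node *cur = head, *tmp;

	while (cur) {
		tmp = cur->next;	/* fetch next before cur is freed */
		free(cur);
		cur = tmp;
	}
}

int main(void)
{
	struct node *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));
		n->next = head;
		head = n;
	}
	free_all(head);
	puts("list torn down without touching freed memory");
	return 0;
}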
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index c15d2f66ef0dc2..166f6a7283731e 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -1081,6 +1081,16 @@ static int rtl8221b_vn_cg_c45_match_phy_device(struct phy_device *phydev)
+ return rtlgen_is_c45_match(phydev, RTL_8221B_VN_CG, true);
+ }
+
++static int rtl8251b_c22_match_phy_device(struct phy_device *phydev)
++{
++ return rtlgen_is_c45_match(phydev, RTL_8251B, false);
++}
++
++static int rtl8251b_c45_match_phy_device(struct phy_device *phydev)
++{
++ return rtlgen_is_c45_match(phydev, RTL_8251B, true);
++}
++
+ static int rtlgen_resume(struct phy_device *phydev)
+ {
+ int ret = genphy_resume(phydev);
+@@ -1418,7 +1428,7 @@ static struct phy_driver realtek_drvs[] = {
+ .suspend = genphy_c45_pma_suspend,
+ .resume = rtlgen_c45_resume,
+ }, {
+- PHY_ID_MATCH_EXACT(0x001cc862),
++ .match_phy_device = rtl8251b_c45_match_phy_device,
+ .name = "RTL8251B 5Gbps PHY",
+ .get_features = rtl822x_get_features,
+ .config_aneg = rtl822x_config_aneg,
+@@ -1427,6 +1437,18 @@ static struct phy_driver realtek_drvs[] = {
+ .resume = rtlgen_resume,
+ .read_page = rtl821x_read_page,
+ .write_page = rtl821x_write_page,
++ }, {
++ .match_phy_device = rtl8251b_c22_match_phy_device,
++ .name = "RTL8126A-internal 5Gbps PHY",
++ .get_features = rtl822x_get_features,
++ .config_aneg = rtl822x_config_aneg,
++ .read_status = rtl822x_read_status,
++ .suspend = genphy_suspend,
++ .resume = rtlgen_resume,
++ .read_page = rtl821x_read_page,
++ .write_page = rtl821x_write_page,
++ .read_mmd = rtl822x_read_mmd,
++ .write_mmd = rtl822x_write_mmd,
+ }, {
+ PHY_ID_MATCH_EXACT(0x001ccad0),
+ .name = "RTL8224 2.5Gbps PHY",
+diff --git a/drivers/net/ppp/ppp_async.c b/drivers/net/ppp/ppp_async.c
+index c33c3db3cc0896..18115e255e52fe 100644
+--- a/drivers/net/ppp/ppp_async.c
++++ b/drivers/net/ppp/ppp_async.c
+@@ -542,7 +542,7 @@ ppp_async_encode(struct asyncppp *ap)
+ * and 7 (code-reject) must be sent as though no options
+ * had been negotiated.
+ */
+- islcp = proto == PPP_LCP && 1 <= data[2] && data[2] <= 7;
++ islcp = proto == PPP_LCP && count >= 3 && 1 <= data[2] && data[2] <= 7;
+
+ if (i == 0) {
+ if (islcp)
+diff --git a/drivers/net/pse-pd/pse_core.c b/drivers/net/pse-pd/pse_core.c
+index 4f032b16a8a0a6..f8e6854781e6ee 100644
+--- a/drivers/net/pse-pd/pse_core.c
++++ b/drivers/net/pse-pd/pse_core.c
+@@ -785,6 +785,17 @@ static int pse_ethtool_c33_set_config(struct pse_control *psec,
+ */
+ switch (config->c33_admin_control) {
+ case ETHTOOL_C33_PSE_ADMIN_STATE_ENABLED:
++	/* We could have a mismatch between admin_state_enabled and
++	 * the state reported by regulator_is_enabled. This can occur
++	 * when the PI is forcibly turned off by the controller. Call
++	 * regulator_disable in that case to fix the counters state.
++	 */
++ if (psec->pcdev->pi[psec->id].admin_state_enabled &&
++ !regulator_is_enabled(psec->ps)) {
++ err = regulator_disable(psec->ps);
++ if (err)
++ break;
++ }
+ if (!psec->pcdev->pi[psec->id].admin_state_enabled)
+ err = regulator_enable(psec->ps);
+ break;
+diff --git a/drivers/net/slip/slhc.c b/drivers/net/slip/slhc.c
+index 18df7ca6619814..c3f5759c239ac8 100644
+--- a/drivers/net/slip/slhc.c
++++ b/drivers/net/slip/slhc.c
+@@ -643,46 +643,57 @@ slhc_uncompress(struct slcompress *comp, unsigned char *icp, int isize)
+ int
+ slhc_remember(struct slcompress *comp, unsigned char *icp, int isize)
+ {
+- struct cstate *cs;
+- unsigned ihl;
+-
++ const struct tcphdr *th;
+ unsigned char index;
++ struct iphdr *iph;
++ struct cstate *cs;
++ unsigned int ihl;
+
+- if(isize < 20) {
+- /* The packet is shorter than a legal IP header */
++ /* The packet is shorter than a legal IP header.
++ * Also make sure isize is positive.
++ */
++ if (isize < (int)sizeof(struct iphdr)) {
++runt:
+ comp->sls_i_runt++;
+- return slhc_toss( comp );
++ return slhc_toss(comp);
+ }
++ iph = (struct iphdr *)icp;
+ /* Peek at the IP header's IHL field to find its length */
+- ihl = icp[0] & 0xf;
+- if(ihl < 20 / 4){
+- /* The IP header length field is too small */
+- comp->sls_i_runt++;
+- return slhc_toss( comp );
+- }
+- index = icp[9];
+- icp[9] = IPPROTO_TCP;
++ ihl = iph->ihl;
++	/* The IP header length field is too small,
++	 * or the packet is shorter than the IP header followed
++	 * by a minimal TCP header.
++	 */
++ if (ihl < 5 || isize < ihl * 4 + sizeof(struct tcphdr))
++ goto runt;
++
++ index = iph->protocol;
++ iph->protocol = IPPROTO_TCP;
+
+ if (ip_fast_csum(icp, ihl)) {
+ /* Bad IP header checksum; discard */
+ comp->sls_i_badcheck++;
+- return slhc_toss( comp );
++ return slhc_toss(comp);
+ }
+- if(index > comp->rslot_limit) {
++ if (index > comp->rslot_limit) {
+ comp->sls_i_error++;
+ return slhc_toss(comp);
+ }
+-
++ th = (struct tcphdr *)(icp + ihl * 4);
++ if (th->doff < sizeof(struct tcphdr) / 4)
++ goto runt;
++ if (isize < ihl * 4 + th->doff * 4)
++ goto runt;
+ /* Update local state */
+ cs = &comp->rstate[comp->recv_current = index];
+ comp->flags &=~ SLF_TOSS;
+- memcpy(&cs->cs_ip,icp,20);
+- memcpy(&cs->cs_tcp,icp + ihl*4,20);
++ memcpy(&cs->cs_ip, iph, sizeof(*iph));
++ memcpy(&cs->cs_tcp, th, sizeof(*th));
+ if (ihl > 5)
+- memcpy(cs->cs_ipopt, icp + sizeof(struct iphdr), (ihl - 5) * 4);
+- if (cs->cs_tcp.doff > 5)
+- memcpy(cs->cs_tcpopt, icp + ihl*4 + sizeof(struct tcphdr), (cs->cs_tcp.doff - 5) * 4);
+- cs->cs_hsize = ihl*2 + cs->cs_tcp.doff*2;
++ memcpy(cs->cs_ipopt, &iph[1], (ihl - 5) * 4);
++ if (th->doff > 5)
++ memcpy(cs->cs_tcpopt, &th[1], (th->doff - 5) * 4);
++ cs->cs_hsize = ihl*2 + th->doff*2;
+ cs->initialized = true;
+ /* Put headers back on packet
+ * Neither header checksum is recalculated
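The added checks amount to validating IHL and the TCP data offset against the actual packet length before copying headers. A standalone sketch of the same bounds logic, with the header lengths written out as plain numbers (20 bytes for minimal IP and TCP headers):

#include <stdio.h>

static int headers_fit(int isize, unsigned int ihl, unsigned int doff)
{
	if (isize < 20)		/* shorter than a minimal IP header */
		return 0;
	if (ihl < 5 || (unsigned int)isize < ihl * 4 + 20)
		return 0;	/* IP header + minimal TCP header */
	if (doff < 5 || (unsigned int)isize < ihl * 4 + doff * 4)
		return 0;	/* TCP options overrun the packet */
	return 1;
}

int main(void)
{
	printf("%d\n", headers_fit(40, 5, 5));	/* 1: exactly fits */
	printf("%d\n", headers_fit(39, 5, 5));	/* 0: one byte short */
	printf("%d\n", headers_fit(60, 5, 11));	/* 0: doff overruns */
	return 0;
}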
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index ba59e92ab941de..02919c529dc2db 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -4913,9 +4913,13 @@ static int __init vxlan_init_module(void)
+ if (rc)
+ goto out4;
+
+- vxlan_vnifilter_init();
++ rc = vxlan_vnifilter_init();
++ if (rc)
++ goto out5;
+
+ return 0;
++out5:
++ rtnl_link_unregister(&vxlan_link_ops);
+ out4:
+ unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
+ out3:
+diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
+index b35d96b7884378..76a351a997d510 100644
+--- a/drivers/net/vxlan/vxlan_private.h
++++ b/drivers/net/vxlan/vxlan_private.h
+@@ -202,7 +202,7 @@ int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
+ int vxlan_vnigroup_init(struct vxlan_dev *vxlan);
+ void vxlan_vnigroup_uninit(struct vxlan_dev *vxlan);
+
+-void vxlan_vnifilter_init(void);
++int vxlan_vnifilter_init(void);
+ void vxlan_vnifilter_uninit(void);
+ void vxlan_vnifilter_count(struct vxlan_dev *vxlan, __be32 vni,
+ struct vxlan_vni_node *vninode,
+diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
+index 9c59d0bf8c3de0..d2023e7131bd4f 100644
+--- a/drivers/net/vxlan/vxlan_vnifilter.c
++++ b/drivers/net/vxlan/vxlan_vnifilter.c
+@@ -992,19 +992,18 @@ static int vxlan_vnifilter_process(struct sk_buff *skb, struct nlmsghdr *nlh,
+ return err;
+ }
+
+-void vxlan_vnifilter_init(void)
++static const struct rtnl_msg_handler vxlan_vnifilter_rtnl_msg_handlers[] = {
++ {THIS_MODULE, PF_BRIDGE, RTM_GETTUNNEL, NULL, vxlan_vnifilter_dump, 0},
++ {THIS_MODULE, PF_BRIDGE, RTM_NEWTUNNEL, vxlan_vnifilter_process, NULL, 0},
++ {THIS_MODULE, PF_BRIDGE, RTM_DELTUNNEL, vxlan_vnifilter_process, NULL, 0},
++};
++
++int vxlan_vnifilter_init(void)
+ {
+- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETTUNNEL, NULL,
+- vxlan_vnifilter_dump, 0);
+- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_NEWTUNNEL,
+- vxlan_vnifilter_process, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_DELTUNNEL,
+- vxlan_vnifilter_process, NULL, 0);
++ return rtnl_register_many(vxlan_vnifilter_rtnl_msg_handlers);
+ }
+
+ void vxlan_vnifilter_uninit(void)
+ {
+- rtnl_unregister(PF_BRIDGE, RTM_GETTUNNEL);
+- rtnl_unregister(PF_BRIDGE, RTM_NEWTUNNEL);
+- rtnl_unregister(PF_BRIDGE, RTM_DELTUNNEL);
++ rtnl_unregister_many(vxlan_vnifilter_rtnl_msg_handlers);
+ }
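rtnl_register_many() makes the three registrations all-or-nothing, which is why the init path can now fail cleanly. A sketch of the table-driven register-with-unwind idea, with do_reg()/do_unreg() as hypothetical stand-ins for the rtnetlink internals:

#include <stdio.h>

struct msg_handler { const char *name; int fail; };

static int do_reg(const struct msg_handler *h)
{
	if (h->fail)
		return -1;
	printf("registered %s\n", h->name);
	return 0;
}

static void do_unreg(const struct msg_handler *h)
{
	printf("unregistered %s\n", h->name);
}

static int register_many(const struct msg_handler *h, int n)
{
	int i, err = 0;

	for (i = 0; i < n; i++) {
		err = do_reg(&h[i]);
		if (err)
			goto unwind;
	}
	return 0;
unwind:
	while (--i >= 0)	/* roll back what already succeeded */
		do_unreg(&h[i]);
	return err;
}

int main(void)
{
	const struct msg_handler handlers[] = {
		{ "GETTUNNEL", 0 }, { "NEWTUNNEL", 0 }, { "DELTUNNEL", 1 },
	};

	return register_many(handlers, 3) ? 1 : 0;
}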
+diff --git a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+index 31946387badf0e..ad1786be2554b3 100644
+--- a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
++++ b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+@@ -1554,6 +1554,7 @@ static void switchtec_ntb_remove(struct device *dev)
+ switchtec_ntb_deinit_db_msg_irq(sndev);
+ switchtec_ntb_deinit_shared_mw(sndev);
+ switchtec_ntb_deinit_crosslink(sndev);
++ cancel_work_sync(&sndev->check_link_status_work);
+ kfree(sndev);
+ dev_info(dev, "ntb device unregistered\n");
+ }
+diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
+index 35c8fbbba10ed3..f55d60922b87d3 100644
+--- a/drivers/nvdimm/nd_virtio.c
++++ b/drivers/nvdimm/nd_virtio.c
+@@ -44,6 +44,15 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
+ unsigned long flags;
+ int err, err1;
+
++ /*
++ * Don't bother to submit the request to the device if the device is
++ * not activated.
++ */
++ if (vdev->config->get_status(vdev) & VIRTIO_CONFIG_S_NEEDS_RESET) {
++ dev_info(&vdev->dev, "virtio pmem device needs a reset\n");
++ return -EIO;
++ }
++
+ might_sleep();
+ req_data = kmalloc(sizeof(*req_data), GFP_KERNEL);
+ if (!req_data)
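The early-out above skips queueing work to a device that has flagged VIRTIO_CONFIG_S_NEEDS_RESET (0x40 per the virtio spec). A minimal sketch of that guard, with get_status() as an illustrative stand-in for the vdev config op:

#include <stdio.h>

#define VIRTIO_CONFIG_S_NEEDS_RESET 0x40

static unsigned int get_status(void) { return VIRTIO_CONFIG_S_NEEDS_RESET; }

static int pmem_flush(void)
{
	if (get_status() & VIRTIO_CONFIG_S_NEEDS_RESET) {
		fprintf(stderr, "virtio pmem device needs a reset\n");
		return -5;	/* -EIO */
	}
	/* ... allocate a request and submit it to the virtqueue ... */
	return 0;
}

int main(void) { return pmem_flush() ? 1 : 0; }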
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 494f8860220d97..3aa18737470fa2 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -2630,8 +2630,10 @@ int dev_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config)
+
+ /* Attach genpds */
+ if (config->genpd_names) {
+- if (config->required_devs)
++ if (config->required_devs) {
++ ret = -EINVAL;
+ goto err;
++ }
+
+ ret = _opp_attach_genpd(opp_table, dev, config->genpd_names,
+ config->virt_devs);
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 1b5aba1f0c92f9..bc3a5d6b017797 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -112,6 +112,7 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
+ pci->dbi_base = devm_pci_remap_cfg_resource(pci->dev, res);
+ if (IS_ERR(pci->dbi_base))
+ return PTR_ERR(pci->dbi_base);
++ pci->dbi_phys_addr = res->start;
+ }
+
+ /* DBI2 is mainly useful for the endpoint controller */
+@@ -134,6 +135,7 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
+ pci->atu_base = devm_ioremap_resource(pci->dev, res);
+ if (IS_ERR(pci->atu_base))
+ return PTR_ERR(pci->atu_base);
++ pci->atu_phys_addr = res->start;
+ } else {
+ pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
+index 53c4c8f399c883..e518f81ea80cda 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.h
++++ b/drivers/pci/controller/dwc/pcie-designware.h
+@@ -407,8 +407,10 @@ struct dw_pcie_ops {
+ struct dw_pcie {
+ struct device *dev;
+ void __iomem *dbi_base;
++ resource_size_t dbi_phys_addr;
+ void __iomem *dbi_base2;
+ void __iomem *atu_base;
++ resource_size_t atu_phys_addr;
+ size_t atu_size;
+ u32 num_ib_windows;
+ u32 num_ob_windows;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 6f953e32d99070..0b3020c7a50a42 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -45,6 +45,7 @@
+ #define PARF_PHY_REFCLK 0x4c
+ #define PARF_CONFIG_BITS 0x50
+ #define PARF_DBI_BASE_ADDR 0x168
++#define PARF_SLV_ADDR_SPACE_SIZE 0x16c
+ #define PARF_MHI_CLOCK_RESET_CTRL 0x174
+ #define PARF_AXI_MSTR_WR_ADDR_HALT 0x178
+ #define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8
+@@ -52,8 +53,13 @@
+ #define PARF_LTSSM 0x1b0
+ #define PARF_SID_OFFSET 0x234
+ #define PARF_BDF_TRANSLATE_CFG 0x24c
+-#define PARF_SLV_ADDR_SPACE_SIZE 0x358
++#define PARF_DBI_BASE_ADDR_V2 0x350
++#define PARF_DBI_BASE_ADDR_V2_HI 0x354
++#define PARF_SLV_ADDR_SPACE_SIZE_V2 0x358
++#define PARF_SLV_ADDR_SPACE_SIZE_V2_HI 0x35c
+ #define PARF_NO_SNOOP_OVERIDE 0x3d4
++#define PARF_ATU_BASE_ADDR 0x634
++#define PARF_ATU_BASE_ADDR_HI 0x638
+ #define PARF_DEVICE_TYPE 0x1000
+ #define PARF_BDF_TO_SID_TABLE_N 0x2000
+ #define PARF_BDF_TO_SID_CFG 0x2c00
+@@ -108,7 +114,7 @@
+ #define PHY_RX0_EQ(x) FIELD_PREP(GENMASK(26, 24), x)
+
+ /* PARF_SLV_ADDR_SPACE_SIZE register value */
+-#define SLV_ADDR_SPACE_SZ 0x10000000
++#define SLV_ADDR_SPACE_SZ 0x80000000
+
+ /* PARF_MHI_CLOCK_RESET_CTRL register fields */
+ #define AHB_CLK_EN BIT(0)
+@@ -325,6 +331,50 @@ static void qcom_pcie_clear_hpc(struct dw_pcie *pci)
+ dw_pcie_dbi_ro_wr_dis(pci);
+ }
+
++static void qcom_pcie_configure_dbi_base(struct qcom_pcie *pcie)
++{
++ struct dw_pcie *pci = pcie->pci;
++
++ if (pci->dbi_phys_addr) {
++ /*
++		 * The PARF_DBI_BASE_ADDR register is in the CPU domain and
++		 * has to be programmed with the CPU physical address.
++ */
++ writel(lower_32_bits(pci->dbi_phys_addr), pcie->parf +
++ PARF_DBI_BASE_ADDR);
++ writel(SLV_ADDR_SPACE_SZ, pcie->parf +
++ PARF_SLV_ADDR_SPACE_SIZE);
++ }
++}
++
++static void qcom_pcie_configure_dbi_atu_base(struct qcom_pcie *pcie)
++{
++ struct dw_pcie *pci = pcie->pci;
++
++ if (pci->dbi_phys_addr) {
++ /*
++		 * The PARF_DBI_BASE_ADDR_V2 and PARF_ATU_BASE_ADDR registers
++		 * are in the CPU domain and have to be programmed with CPU
++		 * physical addresses.
++ */
++ writel(lower_32_bits(pci->dbi_phys_addr), pcie->parf +
++ PARF_DBI_BASE_ADDR_V2);
++ writel(upper_32_bits(pci->dbi_phys_addr), pcie->parf +
++ PARF_DBI_BASE_ADDR_V2_HI);
++
++ if (pci->atu_phys_addr) {
++ writel(lower_32_bits(pci->atu_phys_addr), pcie->parf +
++ PARF_ATU_BASE_ADDR);
++ writel(upper_32_bits(pci->atu_phys_addr), pcie->parf +
++ PARF_ATU_BASE_ADDR_HI);
++ }
++
++ writel(0x0, pcie->parf + PARF_SLV_ADDR_SPACE_SIZE_V2);
++ writel(SLV_ADDR_SPACE_SZ, pcie->parf +
++ PARF_SLV_ADDR_SPACE_SIZE_V2_HI);
++ }
++}
++
+ static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie)
+ {
+ u32 val;
+@@ -541,8 +591,7 @@ static int qcom_pcie_init_1_0_0(struct qcom_pcie *pcie)
+
+ static int qcom_pcie_post_init_1_0_0(struct qcom_pcie *pcie)
+ {
+- /* change DBI base address */
+- writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
++ qcom_pcie_configure_dbi_base(pcie);
+
+ if (IS_ENABLED(CONFIG_PCI_MSI)) {
+ u32 val = readl(pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT);
+@@ -629,8 +678,7 @@ static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie)
+ val &= ~PHY_TEST_PWR_DOWN;
+ writel(val, pcie->parf + PARF_PHY_CTRL);
+
+- /* change DBI base address */
+- writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
++ qcom_pcie_configure_dbi_base(pcie);
+
+ /* MAC PHY_POWERDOWN MUX DISABLE */
+ val = readl(pcie->parf + PARF_SYS_CTRL);
+@@ -812,13 +860,11 @@ static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie)
+ u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+ u32 val;
+
+- writel(SLV_ADDR_SPACE_SZ, pcie->parf + PARF_SLV_ADDR_SPACE_SIZE);
+-
+ val = readl(pcie->parf + PARF_PHY_CTRL);
+ val &= ~PHY_TEST_PWR_DOWN;
+ writel(val, pcie->parf + PARF_PHY_CTRL);
+
+- writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
++ qcom_pcie_configure_dbi_atu_base(pcie);
+
+ writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS
+ | SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS |
+@@ -914,8 +960,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ val &= ~PHY_TEST_PWR_DOWN;
+ writel(val, pcie->parf + PARF_PHY_CTRL);
+
+- /* change DBI base address */
+- writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
++ qcom_pcie_configure_dbi_atu_base(pcie);
+
+ /* MAC PHY_POWERDOWN MUX DISABLE */
+ val = readl(pcie->parf + PARF_SYS_CTRL);
+@@ -1124,14 +1169,11 @@ static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
+ u32 val;
+ int i;
+
+- writel(SLV_ADDR_SPACE_SZ,
+- pcie->parf + PARF_SLV_ADDR_SPACE_SIZE);
+-
+ val = readl(pcie->parf + PARF_PHY_CTRL);
+ val &= ~PHY_TEST_PWR_DOWN;
+ writel(val, pcie->parf + PARF_PHY_CTRL);
+
+- writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
++ qcom_pcie_configure_dbi_atu_base(pcie);
+
+ writel(DEVICE_TYPE_RC, pcie->parf + PARF_DEVICE_TYPE);
+ writel(BYPASS | MSTR_AXI_CLK_EN | AHB_CLK_EN,
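The new helpers split the 64-bit CPU physical address across the lo/hi PARF register pairs with lower_32_bits()/upper_32_bits(). A userspace sketch of that split (the sample address is made up):

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel helpers used in the hunks above. */
#define lower_32_bits(n)	((uint32_t)((n) & 0xffffffffu))
#define upper_32_bits(n)	((uint32_t)(((uint64_t)(n)) >> 32))

int main(void)
{
	uint64_t dbi_phys_addr = 0x1c08000ULL;	/* assumed example address */

	printf("lo=0x%08x hi=0x%08x\n",
	       lower_32_bits(dbi_phys_addr), upper_32_bits(dbi_phys_addr));
	return 0;
}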
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index 84309dfe0c6840..17f00710925508 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -838,6 +838,10 @@ void pci_epc_destroy(struct pci_epc *epc)
+ {
+ pci_ep_cfs_remove_epc_group(epc->group);
+ device_unregister(&epc->dev);
++
++#ifdef CONFIG_PCI_DOMAINS_GENERIC
++ pci_bus_release_domain_nr(&epc->dev, epc->domain_nr);
++#endif
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_destroy);
+
+@@ -900,6 +904,16 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
+ epc->dev.release = pci_epc_release;
+ epc->ops = ops;
+
++#ifdef CONFIG_PCI_DOMAINS_GENERIC
++ epc->domain_nr = pci_bus_find_domain_nr(NULL, dev);
++#else
++ /*
++ * TODO: If the architecture doesn't support generic PCI
++ * domains, then a custom implementation has to be used.
++ */
++ WARN_ONCE(1, "This architecture doesn't support generic PCI domains\n");
++#endif
++
+ ret = dev_set_name(&epc->dev, "%s", dev_name(dev));
+ if (ret)
+ goto put_dev;
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index ad2d571ccbc1f7..85ced6958d6d11 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -6814,16 +6814,16 @@ static int of_pci_bus_find_domain_nr(struct device *parent)
+ return ida_alloc(&pci_domain_nr_dynamic_ida, GFP_KERNEL);
+ }
+
+-static void of_pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent)
++static void of_pci_bus_release_domain_nr(struct device *parent, int domain_nr)
+ {
+- if (bus->domain_nr < 0)
++ if (domain_nr < 0)
+ return;
+
+ /* Release domain from IDA where it was allocated. */
+- if (of_get_pci_domain_nr(parent->of_node) == bus->domain_nr)
+- ida_free(&pci_domain_nr_static_ida, bus->domain_nr);
++ if (of_get_pci_domain_nr(parent->of_node) == domain_nr)
++ ida_free(&pci_domain_nr_static_ida, domain_nr);
+ else
+- ida_free(&pci_domain_nr_dynamic_ida, bus->domain_nr);
++ ida_free(&pci_domain_nr_dynamic_ida, domain_nr);
+ }
+
+ int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
+@@ -6832,11 +6832,11 @@ int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
+ acpi_pci_bus_find_domain_nr(bus);
+ }
+
+-void pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent)
++void pci_bus_release_domain_nr(struct device *parent, int domain_nr)
+ {
+ if (!acpi_disabled)
+ return;
+- of_pci_bus_release_domain_nr(bus, parent);
++ of_pci_bus_release_domain_nr(parent, domain_nr);
+ }
+ #endif
+
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index b14b9876c0303f..e9e56bbb3b59d1 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1061,7 +1061,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+
+ free:
+ #ifdef CONFIG_PCI_DOMAINS_GENERIC
+- pci_bus_release_domain_nr(bus, parent);
++ pci_bus_release_domain_nr(parent, bus->domain_nr);
+ #endif
+ kfree(bus);
+ return err;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 5d57ea27dbc42a..dccb60c1d9cc3d 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3621,6 +3621,8 @@ DECLARE_PCI_FIXUP_FINAL(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
+ quirk_broken_intx_masking);
+ DECLARE_PCI_FIXUP_FINAL(0x1b7c, 0x0004, /* Ceton InfiniTV4 */
+ quirk_broken_intx_masking);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_20K2,
++ quirk_broken_intx_masking);
+
+ /*
+ * Realtek RTL8169 PCI Gigabit Ethernet Controller (rev 10)
+@@ -4259,6 +4261,10 @@ static void quirk_dma_func0_alias(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_RICOH, 0xe832, quirk_dma_func0_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_RICOH, 0xe476, quirk_dma_func0_alias);
+
++/* Some Glenfly chips use function 0 as the PCIe Requester ID for DMA */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_GLENFLY, 0x3d40, quirk_dma_func0_alias);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_GLENFLY, 0x3d41, quirk_dma_func0_alias);
++
+ static void quirk_dma_func1_alias(struct pci_dev *dev)
+ {
+ if (PCI_FUNC(dev->devfn) != 1)
+@@ -5083,6 +5089,8 @@ static const struct pci_dev_acs_enabled {
+ /* QCOM QDF2xxx root ports */
+ { PCI_VENDOR_ID_QCOM, 0x0400, pci_quirk_qcom_rp_acs },
+ { PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs },
++ /* QCOM SA8775P root port */
++ { PCI_VENDOR_ID_QCOM, 0x0115, pci_quirk_qcom_rp_acs },
+ /* HXT SD4800 root ports. The ACS design is same as QCOM QDF2xxx */
+ { PCI_VENDOR_ID_HXT, 0x0401, pci_quirk_qcom_rp_acs },
+ /* Intel PCH root ports */
+diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
+index 4770cb87e3f0a1..cc0c6f5bd2afb8 100644
+--- a/drivers/pci/remove.c
++++ b/drivers/pci/remove.c
+@@ -179,7 +179,7 @@ void pci_remove_root_bus(struct pci_bus *bus)
+ #ifdef CONFIG_PCI_DOMAINS_GENERIC
+ /* Release domain_nr if it was dynamically allocated */
+ if (host_bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
+- pci_bus_release_domain_nr(bus, host_bridge->dev.parent);
++ pci_bus_release_domain_nr(host_bridge->dev.parent, bus->domain_nr);
+ #endif
+
+ pci_remove_bus(bus);
+diff --git a/drivers/powercap/intel_rapl_tpmi.c b/drivers/powercap/intel_rapl_tpmi.c
+index 947544e4d229aa..645fd1dc51a982 100644
+--- a/drivers/powercap/intel_rapl_tpmi.c
++++ b/drivers/powercap/intel_rapl_tpmi.c
+@@ -15,7 +15,8 @@
+ #include <linux/module.h>
+ #include <linux/slab.h>
+
+-#define TPMI_RAPL_VERSION 1
++#define TPMI_RAPL_MAJOR_VERSION 0
++#define TPMI_RAPL_MINOR_VERSION 1
+
+ /* 1 header + 10 registers + 5 reserved. 8 bytes for each. */
+ #define TPMI_RAPL_DOMAIN_SIZE 128
+@@ -154,11 +155,21 @@ static int parse_one_domain(struct tpmi_rapl_package *trp, u32 offset)
+ tpmi_domain_size = tpmi_domain_header >> 16 & 0xff;
+ tpmi_domain_flags = tpmi_domain_header >> 32 & 0xffff;
+
+- if (tpmi_domain_version != TPMI_RAPL_VERSION) {
+- pr_warn(FW_BUG "Unsupported version:%d\n", tpmi_domain_version);
++ if (tpmi_domain_version == TPMI_VERSION_INVALID) {
++ pr_warn(FW_BUG "Invalid version\n");
+ return -ENODEV;
+ }
+
++ if (TPMI_MAJOR_VERSION(tpmi_domain_version) != TPMI_RAPL_MAJOR_VERSION) {
++ pr_warn(FW_BUG "Unsupported major version:%ld\n",
++ TPMI_MAJOR_VERSION(tpmi_domain_version));
++ return -ENODEV;
++ }
++
++ if (TPMI_MINOR_VERSION(tpmi_domain_version) > TPMI_RAPL_MINOR_VERSION)
++ pr_info("Ignore: Unsupported minor version:%ld\n",
++ TPMI_MINOR_VERSION(tpmi_domain_version));
++
+ /* Domain size: in unit of 128 Bytes */
+ if (tpmi_domain_size != 1) {
+ pr_warn(FW_BUG "Invalid Domain size %d\n", tpmi_domain_size);
+@@ -181,7 +192,7 @@ static int parse_one_domain(struct tpmi_rapl_package *trp, u32 offset)
+ pr_warn(FW_BUG "System domain must support Domain Info register\n");
+ return -ENODEV;
+ }
+- tpmi_domain_info = readq(trp->base + offset + TPMI_RAPL_REG_DOMAIN_INFO);
++ tpmi_domain_info = readq(trp->base + offset + TPMI_RAPL_REG_DOMAIN_INFO * 8);
+ if (!(tpmi_domain_info & TPMI_RAPL_DOMAIN_ROOT))
+ return 0;
+ domain_type = RAPL_DOMAIN_PLATFORM;
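The version check now splits the TPMI version field into major and minor parts, rejecting a major mismatch but only warning on a newer minor. A sketch with an assumed 8-bit layout (minor in bits 0-4, major in bits 5-7); the real masks live in the TPMI header:

#include <stdio.h>

#define TPMI_MINOR_VERSION(v)	((v) & 0x1f)
#define TPMI_MAJOR_VERSION(v)	(((v) >> 5) & 0x7)

int main(void)
{
	unsigned int ver = 0x01;	/* major 0, minor 1 */

	printf("major=%u minor=%u\n",
	       TPMI_MAJOR_VERSION(ver), TPMI_MINOR_VERSION(ver));
	/* Unsupported major -> reject; newer minor -> warn but continue. */
	return 0;
}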
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 448b9a5438e0b8..9e99bb27c033ef 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -666,6 +666,17 @@ static struct resource_table *imx_rproc_get_loaded_rsc_table(struct rproc *rproc
+ return (struct resource_table *)priv->rsc_table;
+ }
+
++static struct resource_table *
++imx_rproc_elf_find_loaded_rsc_table(struct rproc *rproc, const struct firmware *fw)
++{
++ struct imx_rproc *priv = rproc->priv;
++
++ if (priv->rsc_table)
++ return (struct resource_table *)priv->rsc_table;
++
++ return rproc_elf_find_loaded_rsc_table(rproc, fw);
++}
++
+ static const struct rproc_ops imx_rproc_ops = {
+ .prepare = imx_rproc_prepare,
+ .attach = imx_rproc_attach,
+@@ -676,7 +687,7 @@ static const struct rproc_ops imx_rproc_ops = {
+ .da_to_va = imx_rproc_da_to_va,
+ .load = rproc_elf_load_segments,
+ .parse_fw = imx_rproc_parse_fw,
+- .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
++ .find_loaded_rsc_table = imx_rproc_elf_find_loaded_rsc_table,
+ .get_loaded_rsc_table = imx_rproc_get_loaded_rsc_table,
+ .sanity_check = rproc_elf_sanity_check,
+ .get_boot_addr = rproc_elf_get_boot_addr,
+diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
+index 29eead383eb9a4..5d833577f23fe1 100644
+--- a/drivers/scsi/fnic/fnic_main.c
++++ b/drivers/scsi/fnic/fnic_main.c
+@@ -830,7 +830,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ spin_lock_init(&fnic->vlans_lock);
+ INIT_WORK(&fnic->fip_frame_work, fnic_handle_fip_frame);
+ INIT_WORK(&fnic->event_work, fnic_handle_event);
+- INIT_WORK(&fnic->flush_work, fnic_flush_tx);
+ skb_queue_head_init(&fnic->fip_frame_queue);
+ INIT_LIST_HEAD(&fnic->evlist);
+ INIT_LIST_HEAD(&fnic->vlans);
+@@ -948,6 +947,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ INIT_WORK(&fnic->link_work, fnic_handle_link);
+ INIT_WORK(&fnic->frame_work, fnic_handle_frame);
++ INIT_WORK(&fnic->flush_work, fnic_flush_tx);
+ skb_queue_head_init(&fnic->frame_queue);
+ skb_queue_head_init(&fnic->tx_queue);
+
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 2dedd1493e5bae..134bc96dd13400 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -1572,8 +1572,8 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+ }
+ } else
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "3065 GFT_ID failed x%08x\n", ulp_status);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_DISCOVERY,
++ "3065 GFT_ID status x%08x\n", ulp_status);
+
+ out:
+ lpfc_ct_free_iocb(phba, cmdiocb);
+@@ -1647,6 +1647,18 @@ lpfc_cmpl_ct(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+
+ out:
++	/* If the caller wanted a synchronous DA_ID completion, signal the
++	 * wait object and clear the flag to reset the vport.
++	 */
++ if (ndlp->save_flags & NLP_WAIT_FOR_DA_ID) {
++ if (ndlp->da_id_waitq)
++ wake_up(ndlp->da_id_waitq);
++ }
++
++ spin_lock_irq(&ndlp->lock);
++ ndlp->save_flags &= ~NLP_WAIT_FOR_DA_ID;
++ spin_unlock_irq(&ndlp->lock);
++
+ lpfc_ct_free_iocb(phba, cmdiocb);
+ lpfc_nlp_put(ndlp);
+ return;
+@@ -2246,7 +2258,7 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+- "0229 FDMI cmd %04x failed, latt = %d "
++ "0229 FDMI cmd %04x latt = %d "
+ "ulp_status: x%x, rid x%x\n",
+ be16_to_cpu(fdmi_cmd), latt, ulp_status,
+ ulp_word4);
+@@ -2263,9 +2275,9 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ /* Check for a CT LS_RJT response */
+ cmd = be16_to_cpu(fdmi_cmd);
+ if (be16_to_cpu(fdmi_rsp) == SLI_CT_RESPONSE_FS_RJT) {
+- /* FDMI rsp failed */
++ /* Log FDMI reject */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY | LOG_ELS,
+- "0220 FDMI cmd failed FS_RJT Data: x%x", cmd);
++ "0220 FDMI cmd FS_RJT Data: x%x", cmd);
+
+ /* Should we fallback to FDMI-2 / FDMI-1 ? */
+ switch (cmd) {
+diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h
+index f82615d87c4bbb..f5ae8cc158205c 100644
+--- a/drivers/scsi/lpfc/lpfc_disc.h
++++ b/drivers/scsi/lpfc/lpfc_disc.h
+@@ -90,6 +90,8 @@ enum lpfc_nlp_save_flags {
+ NLP_IN_RECOV_POST_DEV_LOSS = 0x1,
+ /* wait for outstanding LOGO to cmpl */
+ NLP_WAIT_FOR_LOGO = 0x2,
++ /* wait for outstanding DA_ID to finish */
++ NLP_WAIT_FOR_DA_ID = 0x4
+ };
+
+ struct lpfc_nodelist {
+@@ -159,7 +161,12 @@ struct lpfc_nodelist {
+ uint32_t nvme_fb_size; /* NVME target's supported byte cnt */
+ #define NVME_FB_BIT_SHIFT 9 /* PRLI Rsp first burst in 512B units. */
+ uint32_t nlp_defer_did;
++
++ /* These wait objects are NPIV specific. These IOs must complete
++ * synchronously.
++ */
+ wait_queue_head_t *logo_waitq;
++ wait_queue_head_t *da_id_waitq;
+ };
+
+ struct lpfc_node_rrq {
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index e27f5d955edb41..14938c7c4aac5c 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -979,7 +979,7 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ phba->fcoe_cvl_eventtag_attn =
+ phba->fcoe_cvl_eventtag;
+ lpfc_printf_log(phba, KERN_WARNING, LOG_FIP | LOG_ELS,
+- "2611 FLOGI failed on FCF (x%x), "
++ "2611 FLOGI FCF (x%x), "
+ "status:x%x/x%x, tmo:x%x, perform "
+ "roundrobin FCF failover\n",
+ phba->fcf.current_rec.fcf_indx,
+@@ -997,11 +997,11 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if (!(ulp_status == IOSTAT_LOCAL_REJECT &&
+ ((ulp_word4 & IOERR_PARAM_MASK) ==
+ IOERR_LOOP_OPEN_FAILURE)))
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "2858 FLOGI failure Status:x%x/x%x TMO"
+- ":x%x Data x%lx x%x\n",
+- ulp_status, ulp_word4, tmo,
+- phba->hba_flag, phba->fcf.fcf_flag);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "2858 FLOGI Status:x%x/x%x TMO"
++ ":x%x Data x%lx x%x\n",
++ ulp_status, ulp_word4, tmo,
++ phba->hba_flag, phba->fcf.fcf_flag);
+
+ /* Check for retry */
+ if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
+@@ -1023,7 +1023,7 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ lpfc_nlp_put(ndlp);
+
+ lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS,
+- "0150 FLOGI failure Status:x%x/x%x "
++ "0150 FLOGI Status:x%x/x%x "
+ "xri x%x TMO:x%x refcnt %d\n",
+ ulp_status, ulp_word4, cmdiocb->sli4_xritag,
+ tmo, kref_read(&ndlp->kref));
+@@ -1032,11 +1032,11 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if (!(ulp_status == IOSTAT_LOCAL_REJECT &&
+ ((ulp_word4 & IOERR_PARAM_MASK) ==
+ IOERR_LOOP_OPEN_FAILURE))) {
+- /* FLOGI failure */
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "0100 FLOGI failure Status:x%x/x%x "
+- "TMO:x%x\n",
+- ulp_status, ulp_word4, tmo);
++ /* Warn FLOGI status */
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "0100 FLOGI Status:x%x/x%x "
++ "TMO:x%x\n",
++ ulp_status, ulp_word4, tmo);
+ goto flogifail;
+ }
+
+@@ -1962,16 +1962,16 @@ lpfc_cmpl_els_rrq(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+
+ if (ulp_status) {
+ /* Check for retry */
+- /* RRQ failed Don't print the vport to vport rjts */
++ /* Warn RRQ status Don't print the vport to vport rjts */
+ if (ulp_status != IOSTAT_LS_RJT ||
+ (((ulp_word4) >> 16 != LSRJT_INVALID_CMD) &&
+ ((ulp_word4) >> 16 != LSRJT_UNABLE_TPC)) ||
+ (phba)->pport->cfg_log_verbose & LOG_ELS)
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "2881 RRQ failure DID:%06X Status:"
+- "x%x/x%x\n",
+- ndlp->nlp_DID, ulp_status,
+- ulp_word4);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "2881 RRQ DID:%06X Status:"
++ "x%x/x%x\n",
++ ndlp->nlp_DID, ulp_status,
++ ulp_word4);
+ }
+
+ lpfc_clr_rrq_active(phba, rrq->xritag, rrq);
+@@ -2075,16 +2075,16 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+ goto out;
+ }
+- /* PLOGI failed Don't print the vport to vport rjts */
++ /* Warn PLOGI status Don't print the vport to vport rjts */
+ if (ulp_status != IOSTAT_LS_RJT ||
+ (((ulp_word4) >> 16 != LSRJT_INVALID_CMD) &&
+ ((ulp_word4) >> 16 != LSRJT_UNABLE_TPC)) ||
+ (phba)->pport->cfg_log_verbose & LOG_ELS)
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "2753 PLOGI failure DID:%06X "
+- "Status:x%x/x%x\n",
+- ndlp->nlp_DID, ulp_status,
+- ulp_word4);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "2753 PLOGI DID:%06X "
++ "Status:x%x/x%x\n",
++ ndlp->nlp_DID, ulp_status,
++ ulp_word4);
+
+ /* Do not call DSM for lpfc_els_abort'ed ELS cmds */
+ if (!lpfc_error_lost_link(vport, ulp_status, ulp_word4))
+@@ -2321,7 +2321,6 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_vport *vport = cmdiocb->vport;
+ struct lpfc_nodelist *ndlp;
+ char *mode;
+- u32 loglevel;
+ u32 ulp_status;
+ u32 ulp_word4;
+ bool release_node = false;
+@@ -2370,17 +2369,14 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ * could be expected.
+ */
+ if (test_bit(FC_FABRIC, &vport->fc_flag) ||
+- vport->cfg_enable_fc4_type != LPFC_ENABLE_BOTH) {
+- mode = KERN_ERR;
+- loglevel = LOG_TRACE_EVENT;
+- } else {
++ vport->cfg_enable_fc4_type != LPFC_ENABLE_BOTH)
++ mode = KERN_WARNING;
++ else
+ mode = KERN_INFO;
+- loglevel = LOG_ELS;
+- }
+
+- /* PRLI failed */
+- lpfc_printf_vlog(vport, mode, loglevel,
+- "2754 PRLI failure DID:%06X Status:x%x/x%x, "
++ /* Warn PRLI status */
++ lpfc_printf_vlog(vport, mode, LOG_ELS,
++ "2754 PRLI DID:%06X Status:x%x/x%x, "
+ "data: x%x x%x x%x\n",
+ ndlp->nlp_DID, ulp_status,
+ ulp_word4, ndlp->nlp_state,
+@@ -2852,11 +2848,11 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+ goto out;
+ }
+- /* ADISC failed */
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "2755 ADISC failure DID:%06X Status:x%x/x%x\n",
+- ndlp->nlp_DID, ulp_status,
+- ulp_word4);
++ /* Warn ADISC status */
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "2755 ADISC DID:%06X Status:x%x/x%x\n",
++ ndlp->nlp_DID, ulp_status,
++ ulp_word4);
+ lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+ NLP_EVT_CMPL_ADISC);
+
+@@ -3043,12 +3039,12 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ * discovery. The PLOGI will retry.
+ */
+ if (ulp_status) {
+- /* LOGO failed */
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "2756 LOGO failure, No Retry DID:%06X "
+- "Status:x%x/x%x\n",
+- ndlp->nlp_DID, ulp_status,
+- ulp_word4);
++ /* Warn LOGO status */
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "2756 LOGO, No Retry DID:%06X "
++ "Status:x%x/x%x\n",
++ ndlp->nlp_DID, ulp_status,
++ ulp_word4);
+
+ if (lpfc_error_lost_link(vport, ulp_status, ulp_word4))
+ skip_recovery = 1;
+@@ -4835,11 +4831,10 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+ (cmd == ELS_CMD_FDISC) &&
+ (stat.un.b.lsRjtRsnCodeExp == LSEXP_OUT_OF_RESOURCE)){
+- lpfc_printf_vlog(vport, KERN_ERR,
+- LOG_TRACE_EVENT,
+- "0125 FDISC Failed (x%x). "
+- "Fabric out of resources\n",
+- stat.un.lsRjtError);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "0125 FDISC (x%x). "
++ "Fabric out of resources\n",
++ stat.un.lsRjtError);
+ lpfc_vport_set_state(vport,
+ FC_VPORT_NO_FABRIC_RSCS);
+ }
+@@ -4875,11 +4870,10 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ LSEXP_NOTHING_MORE) {
+ vport->fc_sparam.cmn.bbRcvSizeMsb &= 0xf;
+ retry = 1;
+- lpfc_printf_vlog(vport, KERN_ERR,
+- LOG_TRACE_EVENT,
+- "0820 FLOGI Failed (x%x). "
+- "BBCredit Not Supported\n",
+- stat.un.lsRjtError);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "0820 FLOGI (x%x). "
++ "BBCredit Not Supported\n",
++ stat.un.lsRjtError);
+ }
+ break;
+
+@@ -4889,11 +4883,10 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ ((stat.un.b.lsRjtRsnCodeExp == LSEXP_INVALID_PNAME) ||
+ (stat.un.b.lsRjtRsnCodeExp == LSEXP_INVALID_NPORT_ID))
+ ) {
+- lpfc_printf_vlog(vport, KERN_ERR,
+- LOG_TRACE_EVENT,
+- "0122 FDISC Failed (x%x). "
+- "Fabric Detected Bad WWN\n",
+- stat.un.lsRjtError);
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "0122 FDISC (x%x). "
++ "Fabric Detected Bad WWN\n",
++ stat.un.lsRjtError);
+ lpfc_vport_set_state(vport,
+ FC_VPORT_FABRIC_REJ_WWN);
+ }
+@@ -5353,8 +5346,8 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ u32 ulp_status, ulp_word4, tmo, did, iotag;
+
+ if (!vport) {
+- lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+- "3177 ELS response failed\n");
++ lpfc_printf_log(phba, KERN_WARNING, LOG_ELS,
++ "3177 null vport in ELS rsp\n");
+ goto out;
+ }
+ if (cmdiocb->context_un.mbox)
+@@ -9656,11 +9649,12 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
+ if (piocb->cmd_flag & LPFC_DRIVER_ABORTED && !mbx_tmo_err)
+ continue;
+
+- /* On the ELS ring we can have ELS_REQUESTs or
+- * GEN_REQUESTs waiting for a response.
++ /* On the ELS ring we can have ELS_REQUESTs, ELS_RSPs,
++ * or GEN_REQUESTs waiting for a CQE response.
+ */
+ ulp_command = get_job_cmnd(phba, piocb);
+- if (ulp_command == CMD_ELS_REQUEST64_CR) {
++ if (ulp_command == CMD_ELS_REQUEST64_WQE ||
++ ulp_command == CMD_XMIT_ELS_RSP64_WQE) {
+ list_add_tail(&piocb->dlist, &abort_list);
+
+ /* If the link is down when flushing ELS commands
+@@ -11325,10 +11319,10 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ /* Check for retry */
+ if (lpfc_els_retry(phba, cmdiocb, rspiocb))
+ goto out;
+- /* FDISC failed */
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+- "0126 FDISC failed. (x%x/x%x)\n",
+- ulp_status, ulp_word4);
++ /* Warn FDISC status */
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
++ "0126 FDISC cmpl status: x%x/x%x\n",
++ ulp_status, ulp_word4);
+ goto fdisc_failed;
+ }
+
+diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c
+index 4439167a51882d..7a4d4d8e2ad55f 100644
+--- a/drivers/scsi/lpfc/lpfc_vport.c
++++ b/drivers/scsi/lpfc/lpfc_vport.c
+@@ -626,6 +626,7 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+ struct lpfc_hba *phba = vport->phba;
+ int rc;
++ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waitq);
+
+ if (vport->port_type == LPFC_PHYSICAL_PORT) {
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+@@ -679,21 +680,49 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
+ if (!ndlp)
+ goto skip_logo;
+
++ /* Send the DA_ID and Fabric LOGO to clean up the NPIV fabric entries. */
+ if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE &&
+ phba->link_state >= LPFC_LINK_UP &&
+ phba->fc_topology != LPFC_TOPOLOGY_LOOP) {
+ if (vport->cfg_enable_da_id) {
+- /* Send DA_ID and wait for a completion. */
++ /* Send DA_ID and wait for a completion. This is best
++ * effort. If the DA_ID fails, likely the fabric will
++ * "leak" NportIDs but at least the driver issued the
++ * command.
++ */
++ ndlp = lpfc_findnode_did(vport, NameServer_DID);
++ if (!ndlp)
++ goto issue_logo;
++
++ spin_lock_irq(&ndlp->lock);
++ ndlp->da_id_waitq = &waitq;
++ ndlp->save_flags |= NLP_WAIT_FOR_DA_ID;
++ spin_unlock_irq(&ndlp->lock);
++
+ rc = lpfc_ns_cmd(vport, SLI_CTNS_DA_ID, 0, 0);
+- if (rc) {
+- lpfc_printf_log(vport->phba, KERN_WARNING,
+- LOG_VPORT,
+- "1829 CT command failed to "
+- "delete objects on fabric, "
+- "rc %d\n", rc);
++ if (!rc) {
++ wait_event_timeout(waitq,
++ !(ndlp->save_flags & NLP_WAIT_FOR_DA_ID),
++ msecs_to_jiffies(phba->fc_ratov * 2000));
+ }
++
++ lpfc_printf_vlog(vport, KERN_INFO, LOG_VPORT | LOG_ELS,
++ "1829 DA_ID issue status %d. "
++ "SFlag x%x NState x%x, NFlag x%x "
++ "Rpi x%x\n",
++ rc, ndlp->save_flags, ndlp->nlp_state,
++ ndlp->nlp_flag, ndlp->nlp_rpi);
++
++ /* Remove the waitq and save_flags. It no
++ * longer matters if the wake happened.
++ */
++ spin_lock_irq(&ndlp->lock);
++ ndlp->da_id_waitq = NULL;
++ ndlp->save_flags &= ~NLP_WAIT_FOR_DA_ID;
++ spin_unlock_irq(&ndlp->lock);
+ }
+
++issue_logo:
+ /*
+ * If the vpi is not registered, then a valid FDISC doesn't
+ * exist and there is no need for a ELS LOGO. Just cleanup
+diff --git a/drivers/scsi/wd33c93.c b/drivers/scsi/wd33c93.c
+index a44b60c9004ab0..dd1fef9226f227 100644
+--- a/drivers/scsi/wd33c93.c
++++ b/drivers/scsi/wd33c93.c
+@@ -831,7 +831,7 @@ wd33c93_intr(struct Scsi_Host *instance)
+ /* construct an IDENTIFY message with correct disconnect bit */
+
+ hostdata->outgoing_msg[0] = IDENTIFY(0, cmd->device->lun);
+- if (scsi_pointer->phase)
++ if (WD33C93_scsi_pointer(cmd)->phase)
+ hostdata->outgoing_msg[0] |= 0x40;
+
+ if (hostdata->sync_stat[cmd->device->id] == SS_FIRST) {
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index e0683a5975d101..05652e983539b4 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -890,8 +890,14 @@ static int cdns_update_slave_status(struct sdw_cdns *cdns,
+ }
+ }
+
+- if (is_slave)
+- return sdw_handle_slave_status(&cdns->bus, status);
++ if (is_slave) {
++ int ret;
++
++ mutex_lock(&cdns->status_update_lock);
++ ret = sdw_handle_slave_status(&cdns->bus, status);
++ mutex_unlock(&cdns->status_update_lock);
++ return ret;
++ }
+
+ return 0;
+ }
+@@ -988,6 +994,31 @@ irqreturn_t sdw_cdns_irq(int irq, void *dev_id)
+ }
+ EXPORT_SYMBOL(sdw_cdns_irq);
+
++static void cdns_check_attached_status_dwork(struct work_struct *work)
++{
++ struct sdw_cdns *cdns =
++ container_of(work, struct sdw_cdns, attach_dwork.work);
++ enum sdw_slave_status status[SDW_MAX_DEVICES + 1];
++ u32 val;
++ int ret;
++ int i;
++
++ val = cdns_readl(cdns, CDNS_MCP_SLAVE_STAT);
++
++ for (i = 0; i <= SDW_MAX_DEVICES; i++) {
++ status[i] = val & 0x3;
++ if (status[i])
++ dev_dbg(cdns->dev, "Peripheral %d status: %d\n", i, status[i]);
++ val >>= 2;
++ }
++
++ mutex_lock(&cdns->status_update_lock);
++ ret = sdw_handle_slave_status(&cdns->bus, status);
++ mutex_unlock(&cdns->status_update_lock);
++ if (ret < 0)
++ dev_err(cdns->dev, "%s: sdw_handle_slave_status failed: %d\n", __func__, ret);
++}
++
+ /**
+ * cdns_update_slave_status_work - update slave status in a work since we will need to handle
+ * other interrupts eg. CDNS_MCP_INT_RX_WL during the update slave
+@@ -1740,7 +1771,11 @@ int sdw_cdns_probe(struct sdw_cdns *cdns)
+ init_completion(&cdns->tx_complete);
+ cdns->bus.port_ops = &cdns_port_ops;
+
++ mutex_init(&cdns->status_update_lock);
++
+ INIT_WORK(&cdns->work, cdns_update_slave_status_work);
++ INIT_DELAYED_WORK(&cdns->attach_dwork, cdns_check_attached_status_dwork);
++
+ return 0;
+ }
+ EXPORT_SYMBOL(sdw_cdns_probe);
+diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h
+index bc84435e420f5b..e1d7969ba48ae8 100644
+--- a/drivers/soundwire/cadence_master.h
++++ b/drivers/soundwire/cadence_master.h
+@@ -117,6 +117,8 @@ struct sdw_cdns_dai_runtime {
+ * @link_up: Link status
+ * @msg_count: Messages sent on bus
+ * @dai_runtime_array: runtime context for each allocated DAI.
++ * @status_update_lock: serialize status updates between interrupt-based and
++ * delayed-work paths
+ */
+ struct sdw_cdns {
+ struct device *dev;
+@@ -148,10 +150,13 @@ struct sdw_cdns {
+ bool interrupt_enabled;
+
+ struct work_struct work;
++ struct delayed_work attach_dwork;
+
+ struct list_head list;
+
+ struct sdw_cdns_dai_runtime **dai_runtime_array;
++
++ struct mutex status_update_lock; /* add mutual exclusion to sdw_handle_slave_status() */
+ };
+
+ #define bus_to_cdns(_bus) container_of(_bus, struct sdw_cdns, bus)
+diff --git a/drivers/soundwire/intel.h b/drivers/soundwire/intel.h
+index 68838e843b543e..f4235f5991c37d 100644
+--- a/drivers/soundwire/intel.h
++++ b/drivers/soundwire/intel.h
+@@ -103,6 +103,8 @@ static inline void intel_writew(void __iomem *base, int offset, u16 value)
+
+ #define INTEL_MASTER_RESET_ITERATIONS 10
+
++#define SDW_INTEL_DELAYED_ENUMERATION_MS 100
++
+ #define SDW_INTEL_CHECK_OPS(sdw, cb) ((sdw) && (sdw)->link_res && (sdw)->link_res->hw_ops && \
+ (sdw)->link_res->hw_ops->cb)
+ #define SDW_INTEL_OPS(sdw, cb) ((sdw)->link_res->hw_ops->cb)
+diff --git a/drivers/soundwire/intel_auxdevice.c b/drivers/soundwire/intel_auxdevice.c
+index 8807e01cbf7c7e..dff49c5ce5b30e 100644
+--- a/drivers/soundwire/intel_auxdevice.c
++++ b/drivers/soundwire/intel_auxdevice.c
+@@ -475,6 +475,7 @@ static void intel_link_remove(struct auxiliary_device *auxdev)
+ */
+ if (!bus->prop.hw_disabled) {
+ sdw_intel_debugfs_exit(sdw);
++ cancel_delayed_work_sync(&cdns->attach_dwork);
+ sdw_cdns_enable_interrupt(cdns, false);
+ }
+ sdw_bus_master_delete(bus);
+diff --git a/drivers/soundwire/intel_bus_common.c b/drivers/soundwire/intel_bus_common.c
+index df944e11b9caa5..d3ff6c65b64c33 100644
+--- a/drivers/soundwire/intel_bus_common.c
++++ b/drivers/soundwire/intel_bus_common.c
+@@ -45,21 +45,24 @@ int intel_start_bus(struct sdw_intel *sdw)
+ return ret;
+ }
+
+- ret = sdw_cdns_exit_reset(cdns);
++ ret = sdw_cdns_enable_interrupt(cdns, true);
+ if (ret < 0) {
+- dev_err(dev, "%s: unable to exit bus reset sequence: %d\n", __func__, ret);
++ dev_err(dev, "%s: cannot enable interrupts: %d\n", __func__, ret);
+ return ret;
+ }
+
+- ret = sdw_cdns_enable_interrupt(cdns, true);
++ ret = sdw_cdns_exit_reset(cdns);
+ if (ret < 0) {
+- dev_err(dev, "%s: cannot enable interrupts: %d\n", __func__, ret);
++ dev_err(dev, "%s: unable to exit bus reset sequence: %d\n", __func__, ret);
+ return ret;
+ }
+
+ sdw_cdns_check_self_clearing_bits(cdns, __func__,
+ true, INTEL_MASTER_RESET_ITERATIONS);
+
++ schedule_delayed_work(&cdns->attach_dwork,
++ msecs_to_jiffies(SDW_INTEL_DELAYED_ENUMERATION_MS));
++
+ return 0;
+ }
+
+@@ -136,21 +139,24 @@ int intel_start_bus_after_reset(struct sdw_intel *sdw)
+ return ret;
+ }
+
+- ret = sdw_cdns_exit_reset(cdns);
++ ret = sdw_cdns_enable_interrupt(cdns, true);
+ if (ret < 0) {
+- dev_err(dev, "unable to exit bus reset sequence during resume\n");
++ dev_err(dev, "cannot enable interrupts during resume\n");
+ return ret;
+ }
+
+- ret = sdw_cdns_enable_interrupt(cdns, true);
++ ret = sdw_cdns_exit_reset(cdns);
+ if (ret < 0) {
+- dev_err(dev, "cannot enable interrupts during resume\n");
++ dev_err(dev, "unable to exit bus reset sequence during resume\n");
+ return ret;
+ }
+
+ }
+ sdw_cdns_check_self_clearing_bits(cdns, __func__, true, INTEL_MASTER_RESET_ITERATIONS);
+
++ schedule_delayed_work(&cdns->attach_dwork,
++ msecs_to_jiffies(SDW_INTEL_DELAYED_ENUMERATION_MS));
++
+ return 0;
+ }
+
+@@ -184,6 +190,9 @@ int intel_start_bus_after_clock_stop(struct sdw_intel *sdw)
+
+ sdw_cdns_check_self_clearing_bits(cdns, __func__, true, INTEL_MASTER_RESET_ITERATIONS);
+
++ schedule_delayed_work(&cdns->attach_dwork,
++ msecs_to_jiffies(SDW_INTEL_DELAYED_ENUMERATION_MS));
++
+ return 0;
+ }
+
+@@ -194,6 +203,8 @@ int intel_stop_bus(struct sdw_intel *sdw, bool clock_stop)
+ bool wake_enable = false;
+ int ret;
+
++ cancel_delayed_work_sync(&cdns->attach_dwork);
++
+ if (clock_stop) {
+ ret = sdw_cdns_clock_stop(cdns, true);
+ if (ret < 0)
+diff --git a/drivers/staging/vme_user/vme_fake.c b/drivers/staging/vme_user/vme_fake.c
+index 7f84d1c86f2917..c4fb2b65154c72 100644
+--- a/drivers/staging/vme_user/vme_fake.c
++++ b/drivers/staging/vme_user/vme_fake.c
+@@ -1059,6 +1059,12 @@ static int __init fake_init(void)
+ struct vme_slave_resource *slave_image;
+ struct vme_lm_resource *lm;
+
++ if (geoid < 0 || geoid >= VME_MAX_SLOTS) {
++ pr_err("VME geographical address must be between 0 and %d (exclusive), but got %d\n",
++ VME_MAX_SLOTS, geoid);
++ return -EINVAL;
++ }
++
+ /* We need a fake parent device */
+ vme_root = root_device_register("vme");
+ if (IS_ERR(vme_root))
+diff --git a/drivers/staging/vme_user/vme_tsi148.c b/drivers/staging/vme_user/vme_tsi148.c
+index 2ec9c290440414..e40ca4870d704b 100644
+--- a/drivers/staging/vme_user/vme_tsi148.c
++++ b/drivers/staging/vme_user/vme_tsi148.c
+@@ -2253,6 +2253,12 @@ static int tsi148_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ struct vme_dma_resource *dma_ctrlr;
+ struct vme_lm_resource *lm;
+
++ if (geoid < 0 || geoid >= VME_MAX_SLOTS) {
++ dev_err(&pdev->dev, "VME geographical address must be between 0 and %d (exclusive), but got %d\n",
++ VME_MAX_SLOTS, geoid);
++ return -EINVAL;
++ }
++
+ /* If we want to support more than one of each bridge, we need to
+ * dynamically generate this so we get one per device
+ */
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device_pci.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device_pci.c
+index 00661492187028..ba5d36d36fc404 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device_pci.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device_pci.c
+@@ -416,7 +416,6 @@ static int proc_thermal_pci_probe(struct pci_dev *pdev, const struct pci_device_
+ if (!pci_info->no_legacy)
+ proc_thermal_remove(proc_priv);
+ proc_thermal_mmio_remove(pdev, proc_priv);
+- pci_disable_device(pdev);
+
+ return ret;
+ }
+@@ -438,7 +437,6 @@ static void proc_thermal_pci_remove(struct pci_dev *pdev)
+ proc_thermal_mmio_remove(pdev, pci_info->proc_priv);
+ if (!pci_info->no_legacy)
+ proc_thermal_remove(proc_priv);
+- pci_disable_device(pdev);
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 93c16aff0df209..795be67ca878b4 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -737,6 +737,7 @@ struct thermal_zone_device *thermal_zone_get_by_id(int id)
+ mutex_lock(&thermal_list_lock);
+ list_for_each_entry(tz, &thermal_tz_list, node) {
+ if (tz->id == id) {
++ get_device(&tz->device);
+ match = tz;
+ break;
+ }
+@@ -1646,14 +1647,12 @@ void thermal_zone_device_unregister(struct thermal_zone_device *tz)
+ ida_destroy(&tz->ida);
+
+ device_del(&tz->device);
+-
+- kfree(tz->tzp);
+-
+ put_device(&tz->device);
+
+ thermal_notify_tz_delete(tz);
+
+ wait_for_completion(&tz->removal);
++ kfree(tz->tzp);
+ kfree(tz);
+ }
+ EXPORT_SYMBOL_GPL(thermal_zone_device_unregister);
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 3c8e2bca87f2d9..57fff8bd57611d 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -194,6 +194,9 @@ int for_each_thermal_governor(int (*cb)(struct thermal_governor *, void *),
+
+ struct thermal_zone_device *thermal_zone_get_by_id(int id);
+
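++/*
++ * Scope-based guard (see linux/cleanup.h): CLASS(thermal_zone_get_by_id, tz)(id)
++ * declares tz, looks the zone up by id, and drops the device reference via
++ * put_device() automatically when tz goes out of scope.
++ */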
++DEFINE_CLASS(thermal_zone_get_by_id, struct thermal_zone_device *,
++ if (_T) put_device(&_T->device), thermal_zone_get_by_id(id), int id)
++
+ static inline bool cdev_is_power_actor(struct thermal_cooling_device *cdev)
+ {
+ return cdev->ops->get_requested_power && cdev->ops->state2power &&
+diff --git a/drivers/thermal/thermal_netlink.c b/drivers/thermal/thermal_netlink.c
+index 97157c45363019..f3c58c708969c2 100644
+--- a/drivers/thermal/thermal_netlink.c
++++ b/drivers/thermal/thermal_netlink.c
+@@ -443,7 +443,6 @@ static int thermal_genl_cmd_tz_get_trip(struct param *p)
+ {
+ struct sk_buff *msg = p->msg;
+ const struct thermal_trip_desc *td;
+- struct thermal_zone_device *tz;
+ struct nlattr *start_trip;
+ int id;
+
+@@ -452,7 +451,7 @@ static int thermal_genl_cmd_tz_get_trip(struct param *p)
+
+ id = nla_get_u32(p->attrs[THERMAL_GENL_ATTR_TZ_ID]);
+
+- tz = thermal_zone_get_by_id(id);
++ CLASS(thermal_zone_get_by_id, tz)(id);
+ if (!tz)
+ return -EINVAL;
+
+@@ -488,7 +487,6 @@ static int thermal_genl_cmd_tz_get_trip(struct param *p)
+ static int thermal_genl_cmd_tz_get_temp(struct param *p)
+ {
+ struct sk_buff *msg = p->msg;
+- struct thermal_zone_device *tz;
+ int temp, ret, id;
+
+ if (!p->attrs[THERMAL_GENL_ATTR_TZ_ID])
+@@ -496,7 +494,7 @@ static int thermal_genl_cmd_tz_get_temp(struct param *p)
+
+ id = nla_get_u32(p->attrs[THERMAL_GENL_ATTR_TZ_ID]);
+
+- tz = thermal_zone_get_by_id(id);
++ CLASS(thermal_zone_get_by_id, tz)(id);
+ if (!tz)
+ return -EINVAL;
+
+@@ -514,7 +512,6 @@ static int thermal_genl_cmd_tz_get_temp(struct param *p)
+ static int thermal_genl_cmd_tz_get_gov(struct param *p)
+ {
+ struct sk_buff *msg = p->msg;
+- struct thermal_zone_device *tz;
+ int id, ret = 0;
+
+ if (!p->attrs[THERMAL_GENL_ATTR_TZ_ID])
+@@ -522,7 +519,7 @@ static int thermal_genl_cmd_tz_get_gov(struct param *p)
+
+ id = nla_get_u32(p->attrs[THERMAL_GENL_ATTR_TZ_ID]);
+
+- tz = thermal_zone_get_by_id(id);
++ CLASS(thermal_zone_get_by_id, tz)(id);
+ if (!tz)
+ return -EINVAL;
+
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 20e797c9e13ab8..4187ddeeac3662 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -407,14 +407,16 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ /*
+ * Turn off DTR and RTS early.
+ */
+- if (uport && uart_console(uport) && tty) {
+- uport->cons->cflag = tty->termios.c_cflag;
+- uport->cons->ispeed = tty->termios.c_ispeed;
+- uport->cons->ospeed = tty->termios.c_ospeed;
+- }
++ if (uport) {
++ if (uart_console(uport) && tty) {
++ uport->cons->cflag = tty->termios.c_cflag;
++ uport->cons->ispeed = tty->termios.c_ispeed;
++ uport->cons->ospeed = tty->termios.c_ospeed;
++ }
+
+- if (!tty || C_HUPCL(tty))
+- uart_port_dtr_rts(uport, false);
++ if (!tty || C_HUPCL(tty))
++ uart_port_dtr_rts(uport, false);
++ }
+
+ uart_port_shutdown(port);
+ }
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index a6f818cdef0e77..ce0620e804484a 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2920,9 +2920,8 @@ static void ufshcd_init_lrb(struct ufs_hba *hba, struct ufshcd_lrb *lrb, int i)
+ struct utp_transfer_req_desc *utrdlp = hba->utrdl_base_addr;
+ dma_addr_t cmd_desc_element_addr = hba->ucdl_dma_addr +
+ i * ufshcd_get_ucd_size(hba);
+- u16 response_offset = offsetof(struct utp_transfer_cmd_desc,
+- response_upiu);
+- u16 prdt_offset = offsetof(struct utp_transfer_cmd_desc, prd_table);
++ u16 response_offset = le16_to_cpu(utrdlp[i].response_upiu_offset);
++ u16 prdt_offset = le16_to_cpu(utrdlp[i].prd_table_offset);
+
+ lrb->utr_descriptor_ptr = utrdlp + i;
+ lrb->utrd_dma_addr = hba->utrdl_dma_addr +
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 2d7f616270c17b..69ef3cd8d4f836 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -86,7 +86,7 @@ static int hw_device_state(struct ci_hdrc *ci, u32 dma)
+ hw_write(ci, OP_ENDPTLISTADDR, ~0, dma);
+ /* interrupt, error, port change, reset, sleep/suspend */
+ hw_write(ci, OP_USBINTR, ~0,
+- USBi_UI|USBi_UEI|USBi_PCI|USBi_URI|USBi_SLI);
++ USBi_UI|USBi_UEI|USBi_PCI|USBi_URI);
+ } else {
+ hw_write(ci, OP_USBINTR, ~0, 0);
+ }
+@@ -877,6 +877,7 @@ __releases(ci->lock)
+ __acquires(ci->lock)
+ {
+ int retval;
++ u32 intr;
+
+ spin_unlock(&ci->lock);
+ if (ci->gadget.speed != USB_SPEED_UNKNOWN)
+@@ -890,6 +891,11 @@ __acquires(ci->lock)
+ if (retval)
+ goto done;
+
++ /* Clear stale SLI status before re-enabling the suspend interrupt */
++ hw_write(ci, OP_USBSTS, USBi_SLI, USBi_SLI);
++ intr = hw_read(ci, OP_USBINTR, ~0);
++ hw_write(ci, OP_USBINTR, ~0, intr | USBi_SLI);
++
+ ci->status = usb_ep_alloc_request(&ci->ep0in->ep, GFP_ATOMIC);
+ if (ci->status == NULL)
+ retval = -ENOMEM;
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 7b84416dfc2b11..c1b7209b94836c 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -469,18 +469,6 @@ static int dwc2_driver_probe(struct platform_device *dev)
+
+ spin_lock_init(&hsotg->lock);
+
+- hsotg->irq = platform_get_irq(dev, 0);
+- if (hsotg->irq < 0)
+- return hsotg->irq;
+-
+- dev_dbg(hsotg->dev, "registering common handler for irq%d\n",
+- hsotg->irq);
+- retval = devm_request_irq(hsotg->dev, hsotg->irq,
+- dwc2_handle_common_intr, IRQF_SHARED,
+- dev_name(hsotg->dev), hsotg);
+- if (retval)
+- return retval;
+-
+ hsotg->vbus_supply = devm_regulator_get_optional(hsotg->dev, "vbus");
+ if (IS_ERR(hsotg->vbus_supply)) {
+ retval = PTR_ERR(hsotg->vbus_supply);
+@@ -524,6 +512,20 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ if (retval)
+ goto error;
+
++ hsotg->irq = platform_get_irq(dev, 0);
++ if (hsotg->irq < 0) {
++ retval = hsotg->irq;
++ goto error;
++ }
++
++ dev_dbg(hsotg->dev, "registering common handler for irq%d\n",
++ hsotg->irq);
++ retval = devm_request_irq(hsotg->dev, hsotg->irq,
++ dwc2_handle_common_intr, IRQF_SHARED,
++ dev_name(hsotg->dev), hsotg);
++ if (retval)
++ goto error;
++
+ /*
+ * For OTG cores, set the force mode bits to reflect the value
+ * of dr_mode. Force mode bits should not be touched at any
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 9eb085f359ce3f..21740e2b8f0781 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -544,6 +544,7 @@ static int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned int length)
+ int dwc3_event_buffers_setup(struct dwc3 *dwc)
+ {
+ struct dwc3_event_buffer *evt;
++ u32 reg;
+
+ if (!dwc->ev_buf)
+ return 0;
+@@ -556,8 +557,10 @@ int dwc3_event_buffers_setup(struct dwc3 *dwc)
+ upper_32_bits(evt->dma));
+ dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0),
+ DWC3_GEVNTSIZ_SIZE(evt->length));
+- dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), 0);
+
++ /* Clear any stale event */
++ reg = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
++ dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), reg);
+ return 0;
+ }
+
+@@ -584,7 +587,10 @@ void dwc3_event_buffers_cleanup(struct dwc3 *dwc)
+ dwc3_writel(dwc->regs, DWC3_GEVNTADRHI(0), 0);
+ dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), DWC3_GEVNTSIZ_INTMASK
+ | DWC3_GEVNTSIZ_SIZE(0));
+- dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), 0);
++
++ /* Clear any stale event */
++ reg = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
++ dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), reg);
+ }
+
+ static void dwc3_core_num_eps(struct dwc3 *dwc)
+@@ -2499,7 +2505,11 @@ static int dwc3_runtime_resume(struct device *dev)
+
+ switch (dwc->current_dr_role) {
+ case DWC3_GCTL_PRTCAP_DEVICE:
+- dwc3_gadget_process_pending_events(dwc);
++ if (dwc->pending_events) {
++ pm_runtime_put(dwc->dev);
++ dwc->pending_events = false;
++ enable_irq(dwc->irq_gadget);
++ }
+ break;
+ case DWC3_GCTL_PRTCAP_HOST:
+ default:
+@@ -2552,7 +2562,7 @@ static int dwc3_suspend(struct device *dev)
+ static int dwc3_resume(struct device *dev)
+ {
+ struct dwc3 *dwc = dev_get_drvdata(dev);
+- int ret;
++ int ret = 0;
+
+ pinctrl_pm_select_default_state(dev);
+
+@@ -2560,14 +2570,12 @@ static int dwc3_resume(struct device *dev)
+ pm_runtime_set_active(dev);
+
+ ret = dwc3_resume_common(dwc, PMSG_RESUME);
+- if (ret) {
++ if (ret)
+ pm_runtime_set_suspended(dev);
+- return ret;
+- }
+
+ pm_runtime_enable(dev);
+
+- return 0;
++ return ret;
+ }
+
+ static void dwc3_complete(struct device *dev)
+@@ -2589,6 +2597,12 @@ static void dwc3_complete(struct device *dev)
+ static const struct dev_pm_ops dwc3_dev_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(dwc3_suspend, dwc3_resume)
+ .complete = dwc3_complete,
++
++ /*
++ * Runtime suspend halts the controller on disconnection. It relies on
++ * platforms with custom connection notification to start the controller
++ * again.
++ */
+ SET_RUNTIME_PM_OPS(dwc3_runtime_suspend, dwc3_runtime_resume,
+ dwc3_runtime_idle)
+ };
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index c71240e8f7c7da..9c508e0c5cdf54 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1675,7 +1675,6 @@ static inline void dwc3_otg_host_init(struct dwc3 *dwc)
+ #if !IS_ENABLED(CONFIG_USB_DWC3_HOST)
+ int dwc3_gadget_suspend(struct dwc3 *dwc);
+ int dwc3_gadget_resume(struct dwc3 *dwc);
+-void dwc3_gadget_process_pending_events(struct dwc3 *dwc);
+ #else
+ static inline int dwc3_gadget_suspend(struct dwc3 *dwc)
+ {
+@@ -1687,9 +1686,6 @@ static inline int dwc3_gadget_resume(struct dwc3 *dwc)
+ return 0;
+ }
+
+-static inline void dwc3_gadget_process_pending_events(struct dwc3 *dwc)
+-{
+-}
+ #endif /* !IS_ENABLED(CONFIG_USB_DWC3_HOST) */
+
+ #if IS_ENABLED(CONFIG_USB_DWC3_ULPI)
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 291bc549935bb1..10178e5eda5a3f 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -4728,14 +4728,3 @@ int dwc3_gadget_resume(struct dwc3 *dwc)
+
+ return dwc3_gadget_soft_connect(dwc);
+ }
+-
+-void dwc3_gadget_process_pending_events(struct dwc3 *dwc)
+-{
+- if (dwc->pending_events) {
+- dwc3_interrupt(dwc->irq_gadget, dwc->ev_buf);
+- dwc3_thread_interrupt(dwc->irq_gadget, dwc->ev_buf);
+- pm_runtime_put(dwc->dev);
+- dwc->pending_events = false;
+- enable_irq(dwc->irq_gadget);
+- }
+-}
+diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
+index a024aecb76dc37..de1736f834e6b8 100644
+--- a/drivers/usb/gadget/function/uvc_v4l2.c
++++ b/drivers/usb/gadget/function/uvc_v4l2.c
+@@ -121,6 +121,9 @@ static struct uvcg_format *find_format_by_pix(struct uvc_device *uvc,
+ list_for_each_entry(format, &uvc->header->formats, entry) {
+ const struct uvc_format_desc *fmtdesc = to_uvc_format(format->fmt);
+
++ if (IS_ERR(fmtdesc))
++ continue;
++
+ if (fmtdesc->fcc == pixelformat) {
+ uformat = format->fmt;
+ break;
+@@ -240,6 +243,7 @@ uvc_v4l2_try_format(struct file *file, void *fh, struct v4l2_format *fmt)
+ struct uvc_video *video = &uvc->video;
+ struct uvcg_format *uformat;
+ struct uvcg_frame *uframe;
++ const struct uvc_format_desc *fmtdesc;
+ u8 *fcc;
+
+ if (fmt->type != video->queue.queue.type)
+@@ -277,7 +281,10 @@ uvc_v4l2_try_format(struct file *file, void *fh, struct v4l2_format *fmt)
+ fmt->fmt.pix.height = uframe->frame.w_height;
+ fmt->fmt.pix.bytesperline = uvc_v4l2_get_bytesperline(uformat, uframe);
+ fmt->fmt.pix.sizeimage = uvc_get_frame_size(uformat, uframe);
+- fmt->fmt.pix.pixelformat = to_uvc_format(uformat)->fcc;
++ fmtdesc = to_uvc_format(uformat);
++ if (IS_ERR(fmtdesc))
++ return PTR_ERR(fmtdesc);
++ fmt->fmt.pix.pixelformat = fmtdesc->fcc;
+ }
+ fmt->fmt.pix.field = V4L2_FIELD_NONE;
+ fmt->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
+@@ -389,6 +396,9 @@ uvc_v4l2_enum_format(struct file *file, void *fh, struct v4l2_fmtdesc *f)
+ return -EINVAL;
+
+ fmtdesc = to_uvc_format(uformat);
++ if (IS_ERR(fmtdesc))
++ return PTR_ERR(fmtdesc);
++
+ f->pixelformat = fmtdesc->fcc;
+
+ return 0;
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index cf6478f97f4a3d..a6f46364be65f0 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1696,6 +1696,7 @@ int usb_gadget_register_driver_owner(struct usb_gadget_driver *driver,
+ driver->driver.bus = &gadget_bus_type;
+ driver->driver.owner = owner;
+ driver->driver.mod_name = mod_name;
++ driver->driver.probe_type = PROBE_FORCE_SYNCHRONOUS;
+ ret = driver_register(&driver->driver);
+ if (ret) {
+ pr_warn("%s: driver registration failed: %d\n",
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 161c09953c4e08..241d7aa1fbc20f 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -173,16 +173,18 @@ static void xhci_dbc_giveback(struct dbc_request *req, int status)
+ spin_lock(&dbc->lock);
+ }
+
+-static void xhci_dbc_flush_single_request(struct dbc_request *req)
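++/* Turn a queued TRB into a No-op, preserving only its cycle bit. */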
++static void trb_to_noop(union xhci_trb *trb)
+ {
+- union xhci_trb *trb = req->trb;
+-
+ trb->generic.field[0] = 0;
+ trb->generic.field[1] = 0;
+ trb->generic.field[2] = 0;
+ trb->generic.field[3] &= cpu_to_le32(TRB_CYCLE);
+ trb->generic.field[3] |= cpu_to_le32(TRB_TYPE(TRB_TR_NOOP));
++}
+
++static void xhci_dbc_flush_single_request(struct dbc_request *req)
++{
++ trb_to_noop(req->trb);
+ xhci_dbc_giveback(req, -ESHUTDOWN);
+ }
+
+@@ -649,7 +651,6 @@ static void xhci_dbc_stop(struct xhci_dbc *dbc)
+ case DS_DISABLED:
+ return;
+ case DS_CONFIGURED:
+- case DS_STALLED:
+ if (dbc->driver->disconnect)
+ dbc->driver->disconnect(dbc);
+ break;
+@@ -669,6 +670,23 @@ static void xhci_dbc_stop(struct xhci_dbc *dbc)
+ pm_runtime_put_sync(dbc->dev); /* note, was self.controller */
+ }
+
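++/*
++ * Track endpoint halt state changes; when a halt clears, ring the doorbell
++ * so requests that queued up while the endpoint was halted can resume.
++ */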
++static void
++handle_ep_halt_changes(struct xhci_dbc *dbc, struct dbc_ep *dep, bool halted)
++{
++ if (halted) {
++ dev_info(dbc->dev, "DbC Endpoint halted\n");
++ dep->halted = 1;
++
++ } else if (dep->halted) {
++ dev_info(dbc->dev, "DbC Endpoint halt cleared\n");
++ dep->halted = 0;
++
++ if (!list_empty(&dep->list_pending))
++ writel(DBC_DOOR_BELL_TARGET(dep->direction),
++ &dbc->regs->doorbell);
++ }
++}
++
+ static void
+ dbc_handle_port_status(struct xhci_dbc *dbc, union xhci_trb *event)
+ {
+@@ -697,6 +715,7 @@ static void dbc_handle_xfer_event(struct xhci_dbc *dbc, union xhci_trb *event)
+ struct xhci_ring *ring;
+ int ep_id;
+ int status;
++ struct xhci_ep_ctx *ep_ctx;
+ u32 comp_code;
+ size_t remain_length;
+ struct dbc_request *req = NULL, *r;
+@@ -706,8 +725,30 @@ static void dbc_handle_xfer_event(struct xhci_dbc *dbc, union xhci_trb *event)
+ ep_id = TRB_TO_EP_ID(le32_to_cpu(event->generic.field[3]));
+ dep = (ep_id == EPID_OUT) ?
+ get_out_ep(dbc) : get_in_ep(dbc);
++ ep_ctx = (ep_id == EPID_OUT) ?
++ dbc_bulkout_ctx(dbc) : dbc_bulkin_ctx(dbc);
+ ring = dep->ring;
+
++ /* Match the pending request: */
++ list_for_each_entry(r, &dep->list_pending, list_pending) {
++ if (r->trb_dma == event->trans_event.buffer) {
++ req = r;
++ break;
++ }
++ if (r->status == -COMP_STALL_ERROR) {
++ dev_warn(dbc->dev, "Give back stale stalled req\n");
++ ring->num_trbs_free++;
++ xhci_dbc_giveback(r, 0);
++ }
++ }
++
++ if (!req) {
++ dev_warn(dbc->dev, "no matched request\n");
++ return;
++ }
++
++ trace_xhci_dbc_handle_transfer(ring, &req->trb->generic);
++
+ switch (comp_code) {
+ case COMP_SUCCESS:
+ remain_length = 0;
+@@ -718,31 +759,49 @@ static void dbc_handle_xfer_event(struct xhci_dbc *dbc, union xhci_trb *event)
+ case COMP_TRB_ERROR:
+ case COMP_BABBLE_DETECTED_ERROR:
+ case COMP_USB_TRANSACTION_ERROR:
+- case COMP_STALL_ERROR:
+ dev_warn(dbc->dev, "tx error %d detected\n", comp_code);
+ status = -comp_code;
+ break;
++ case COMP_STALL_ERROR:
++ dev_warn(dbc->dev, "Stall error at bulk TRB %llx, remaining %zu, ep deq %llx\n",
++ event->trans_event.buffer, remain_length, ep_ctx->deq);
++ status = 0;
++ dep->halted = 1;
++
++ /*
++ * xHC DbC may trigger a STALL bulk xfer event when host sends a
++ * ClearFeature(ENDPOINT_HALT) request even if there wasn't an
++ * active bulk transfer.
++ *
++ * Don't give back this transfer request as hardware will later
++ * start processing TRBs starting from this 'STALLED' TRB,
++ * causing TRBs and requests to be out of sync.
++ *
++ * If the STALL event shows some bytes were transferred then assume
++ * it's an actual transfer issue and give back the request.
++ * In this case mark the TRB as No-Op to prevent hw from using the
++ * TRB again.
++ */
++
++ if ((ep_ctx->deq & ~TRB_CYCLE) == event->trans_event.buffer) {
++ dev_dbg(dbc->dev, "Ep stopped on Stalled TRB\n");
++ if (remain_length == req->length) {
++ dev_dbg(dbc->dev, "Spurious stall event, keep req\n");
++ req->status = -COMP_STALL_ERROR;
++ req->actual = 0;
++ return;
++ }
++ dev_dbg(dbc->dev, "Give back stalled req, but turn TRB to No-op\n");
++ trb_to_noop(req->trb);
++ }
++ break;
++
+ default:
+ dev_err(dbc->dev, "unknown tx error %d\n", comp_code);
+ status = -comp_code;
+ break;
+ }
+
+- /* Match the pending request: */
+- list_for_each_entry(r, &dep->list_pending, list_pending) {
+- if (r->trb_dma == event->trans_event.buffer) {
+- req = r;
+- break;
+- }
+- }
+-
+- if (!req) {
+- dev_warn(dbc->dev, "no matched request\n");
+- return;
+- }
+-
+- trace_xhci_dbc_handle_transfer(ring, &req->trb->generic);
+-
+ ring->num_trbs_free++;
+ req->actual = req->length - remain_length;
+ xhci_dbc_giveback(req, status);
+@@ -762,7 +821,6 @@ static void inc_evt_deq(struct xhci_ring *ring)
+ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ {
+ dma_addr_t deq;
+- struct dbc_ep *dep;
+ union xhci_trb *evt;
+ u32 ctrl, portsc;
+ bool update_erdp = false;
+@@ -814,43 +872,17 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ return EVT_DISC;
+ }
+
+- /* Handle endpoint stall event: */
++ /* Check and handle changes in endpoint halt status */
+ ctrl = readl(&dbc->regs->control);
+- if ((ctrl & DBC_CTRL_HALT_IN_TR) ||
+- (ctrl & DBC_CTRL_HALT_OUT_TR)) {
+- dev_info(dbc->dev, "DbC Endpoint stall\n");
+- dbc->state = DS_STALLED;
+-
+- if (ctrl & DBC_CTRL_HALT_IN_TR) {
+- dep = get_in_ep(dbc);
+- xhci_dbc_flush_endpoint_requests(dep);
+- }
+-
+- if (ctrl & DBC_CTRL_HALT_OUT_TR) {
+- dep = get_out_ep(dbc);
+- xhci_dbc_flush_endpoint_requests(dep);
+- }
+-
+- return EVT_DONE;
+- }
++ handle_ep_halt_changes(dbc, get_in_ep(dbc), ctrl & DBC_CTRL_HALT_IN_TR);
++ handle_ep_halt_changes(dbc, get_out_ep(dbc), ctrl & DBC_CTRL_HALT_OUT_TR);
+
+ /* Clear DbC run change bit: */
+ if (ctrl & DBC_CTRL_DBC_RUN_CHANGE) {
+ writel(ctrl, &dbc->regs->control);
+ ctrl = readl(&dbc->regs->control);
+ }
+-
+ break;
+- case DS_STALLED:
+- ctrl = readl(&dbc->regs->control);
+- if (!(ctrl & DBC_CTRL_HALT_IN_TR) &&
+- !(ctrl & DBC_CTRL_HALT_OUT_TR) &&
+- (ctrl & DBC_CTRL_DBC_RUN)) {
+- dbc->state = DS_CONFIGURED;
+- break;
+- }
+-
+- return EVT_DONE;
+ default:
+ dev_err(dbc->dev, "Unknown DbC state %d\n", dbc->state);
+ break;
+@@ -939,7 +971,6 @@ static const char * const dbc_state_strings[DS_MAX] = {
+ [DS_ENABLED] = "enabled",
+ [DS_CONNECTED] = "connected",
+ [DS_CONFIGURED] = "configured",
+- [DS_STALLED] = "stalled",
+ };
+
+ static ssize_t dbc_show(struct device *dev,
+diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
+index 0118c6288a3cce..97c5dc290138bc 100644
+--- a/drivers/usb/host/xhci-dbgcap.h
++++ b/drivers/usb/host/xhci-dbgcap.h
+@@ -81,7 +81,6 @@ enum dbc_state {
+ DS_ENABLED,
+ DS_CONNECTED,
+ DS_CONFIGURED,
+- DS_STALLED,
+ DS_MAX
+ };
+
+@@ -90,6 +89,7 @@ struct dbc_ep {
+ struct list_head list_pending;
+ struct xhci_ring *ring;
+ unsigned int direction:1;
++ unsigned int halted:1;
+ };
+
+ #define DBC_QUEUE_SIZE 16
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 994fd8b38bd015..df0c5c4f4508a6 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -79,6 +79,7 @@
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
+ #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI 0x1242
+ #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI 0x2142
++#define PCI_DEVICE_ID_ASMEDIA_3042_XHCI 0x3042
+ #define PCI_DEVICE_ID_ASMEDIA_3242_XHCI 0x3242
+
+ #define PCI_DEVICE_ID_CADENCE 0x17CD
+@@ -457,6 +458,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+ xhci->quirks |= XHCI_ASMEDIA_MODIFY_FLOWCONTROL;
+
++ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_3042_XHCI)
++ xhci->quirks |= XHCI_RESET_ON_RESUME;
++
+ if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241)
+ xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7;
+
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 31bdfa52eeb252..ecaa75718e5926 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -259,6 +259,12 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+ if (device_property_read_bool(tmpdev, "write-64-hi-lo-quirk"))
+ xhci->quirks |= XHCI_WRITE_64_HI_LO;
+
++ if (device_property_read_bool(tmpdev, "xhci-missing-cas-quirk"))
++ xhci->quirks |= XHCI_MISSING_CAS;
++
++ if (device_property_read_bool(tmpdev, "xhci-skip-phy-init-quirk"))
++ xhci->quirks |= XHCI_SKIP_PHY_INIT;
++
+ device_property_read_u32(tmpdev, "imod-interval-ns",
+ &xhci->imod_interval);
+ }
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 4a9859e03f6b4c..7c12b937d0759c 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -34,8 +34,6 @@
+ #define YUREX_BUF_SIZE 8
+ #define YUREX_WRITE_TIMEOUT (HZ*2)
+
+-#define MAX_S64_STRLEN 20 /* {-}922337203685477580{7,8} */
+-
+ /* table of devices that work with this driver */
+ static struct usb_device_id yurex_table[] = {
+ { USB_DEVICE(YUREX_VENDOR_ID, YUREX_PRODUCT_ID) },
+@@ -403,7 +401,8 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ {
+ struct usb_yurex *dev;
+ int len = 0;
+- char in_buffer[MAX_S64_STRLEN];
++ char in_buffer[20];
++ unsigned long flags;
+
+ dev = file->private_data;
+
+@@ -413,16 +412,14 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ return -ENODEV;
+ }
+
+- if (WARN_ON_ONCE(dev->bbu > S64_MAX || dev->bbu < S64_MIN)) {
+- mutex_unlock(&dev->io_mutex);
+- return -EIO;
+- }
+-
+- spin_lock_irq(&dev->lock);
+- scnprintf(in_buffer, MAX_S64_STRLEN, "%lld\n", dev->bbu);
+- spin_unlock_irq(&dev->lock);
++ spin_lock_irqsave(&dev->lock, flags);
++ len = snprintf(in_buffer, 20, "%lld\n", dev->bbu);
++ spin_unlock_irqrestore(&dev->lock, flags);
+ mutex_unlock(&dev->io_mutex);
+
++ if (WARN_ON_ONCE(len >= sizeof(in_buffer)))
++ return -EIO;
++
+ return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+ }
+
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index fd68204374f2ce..e5ad23d86833d5 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2423,6 +2423,17 @@ UNUSUAL_DEV( 0xc251, 0x4003, 0x0100, 0x0100,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NOT_LOCKABLE),
+
++/*
++ * Reported by Icenowy Zheng <uwu@icenowy.me>
++ * This is an interface for vendor-specific cryptic commands instead
++ * of real USB storage device.
++ */
++UNUSUAL_DEV( 0xe5b7, 0x0811, 0x0100, 0x0100,
++ "ZhuHai JieLi Technology",
++ "JieLi BR21",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_DEVICE),
++
+ /* Reported by Andrew Simmons <andrew.simmons@gmail.com> */
+ UNUSUAL_DEV( 0xed06, 0x4500, 0x0001, 0x0001,
+ "DataStor",
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index dd51a25480bfb9..256b0c054e9a95 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -1465,8 +1465,9 @@ static void tps6598x_remove(struct i2c_client *client)
+
+ if (!client->irq)
+ cancel_delayed_work_sync(&tps->wq_poll);
++ else
++ devm_free_irq(tps->dev, client->irq, tps);
+
+- devm_free_irq(tps->dev, client->irq, tps);
+ tps6598x_disconnect(tps, 0);
+ typec_unregister_port(tps->port);
+ usb_role_switch_put(tps->role_sw);
+diff --git a/drivers/vdpa/octeon_ep/octep_vdpa_hw.c b/drivers/vdpa/octeon_ep/octep_vdpa_hw.c
+index 11bd76ae18cf93..1d4767b33315e2 100644
+--- a/drivers/vdpa/octeon_ep/octep_vdpa_hw.c
++++ b/drivers/vdpa/octeon_ep/octep_vdpa_hw.c
+@@ -475,11 +475,11 @@ int octep_hw_caps_read(struct octep_hw *oct_hw, struct pci_dev *pdev)
+ dev_err(dev, "Incomplete PCI capabilities");
+ return -EIO;
+ }
+- dev_info(dev, "common cfg mapped at: 0x%016llx\n", (u64)(uintptr_t)oct_hw->common_cfg);
+- dev_info(dev, "device cfg mapped at: 0x%016llx\n", (u64)(uintptr_t)oct_hw->dev_cfg);
+- dev_info(dev, "isr cfg mapped at: 0x%016llx\n", (u64)(uintptr_t)oct_hw->isr);
+- dev_info(dev, "notify base: 0x%016llx, notify off multiplier: %u\n",
+- (u64)(uintptr_t)oct_hw->notify_base, oct_hw->notify_off_multiplier);
++ dev_info(dev, "common cfg mapped at: %p\n", oct_hw->common_cfg);
++ dev_info(dev, "device cfg mapped at: %p\n", oct_hw->dev_cfg);
++ dev_info(dev, "isr cfg mapped at: %p\n", oct_hw->isr);
++ dev_info(dev, "notify base: %p, notify off multiplier: %u\n",
++ oct_hw->notify_base, oct_hw->notify_off_multiplier);
+
+ oct_hw->config_size = octep_get_config_size(oct_hw);
+ oct_hw->features = octep_hw_get_dev_features(oct_hw);
+@@ -511,7 +511,7 @@ int octep_hw_caps_read(struct octep_hw *oct_hw, struct pci_dev *pdev)
+ }
+ mbox = octep_get_mbox(oct_hw);
+ octep_mbox_init(mbox);
+- dev_info(dev, "mbox mapped at: 0x%016llx\n", (u64)(uintptr_t)mbox);
++ dev_info(dev, "mbox mapped at: %p\n", mbox);
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 3f7333dca508c8..fedd796c9a5cd2 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -847,6 +847,8 @@ static int set_con2fb_map(int unit, int newidx, int user)
+ return err;
+
+ fbcon_add_cursor_work(info);
++ } else if (vc) {
++ set_blitting_type(vc, info);
+ }
+
+ con2fb_map[unit] = newidx;
+diff --git a/drivers/video/fbdev/sis/sis_main.c b/drivers/video/fbdev/sis/sis_main.c
+index 009bf1d9264480..75033e6be15ab1 100644
+--- a/drivers/video/fbdev/sis/sis_main.c
++++ b/drivers/video/fbdev/sis/sis_main.c
+@@ -183,7 +183,7 @@ static void sisfb_search_mode(char *name, bool quiet)
+ {
+ unsigned int j = 0, xres = 0, yres = 0, depth = 0, rate = 0;
+ int i = 0;
+- char strbuf[16], strbuf1[20];
++ char strbuf[24], strbuf1[20];
+ char *nameptr = name;
+
+ /* We don't know the hardware specs yet and there is no ivideo */
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index a5966324607d49..d9f511babd89ab 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1300,13 +1300,29 @@ static int btrfs_issue_discard(struct block_device *bdev, u64 start, u64 len,
+ bytes_left = end - start;
+ }
+
+- if (bytes_left) {
++ while (bytes_left) {
++ u64 bytes_to_discard = min(BTRFS_MAX_DISCARD_CHUNK_SIZE, bytes_left);
++
+ ret = blkdev_issue_discard(bdev, start >> SECTOR_SHIFT,
+- bytes_left >> SECTOR_SHIFT,
++ bytes_to_discard >> SECTOR_SHIFT,
+ GFP_NOFS);
+- if (!ret)
+- *discarded_bytes += bytes_left;
++
++ if (ret) {
++ if (ret != -EOPNOTSUPP)
++ break;
++ continue;
++ }
++
++ start += bytes_to_discard;
++ bytes_left -= bytes_to_discard;
++ *discarded_bytes += bytes_to_discard;
++
++ if (btrfs_trim_interrupted()) {
++ ret = -ERESTARTSYS;
++ break;
++ }
+ }
++
+ return ret;
+ }
+
+@@ -6459,7 +6475,7 @@ static int btrfs_trim_free_extents(struct btrfs_device *device, u64 *trimmed)
+ start += len;
+ *trimmed += bytes;
+
+- if (fatal_signal_pending(current)) {
++ if (btrfs_trim_interrupted()) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index eaa1dbd313528c..f4bcb25306606a 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -3809,7 +3809,7 @@ static int trim_no_bitmap(struct btrfs_block_group *block_group,
+ if (async && *total_trimmed)
+ break;
+
+- if (fatal_signal_pending(current)) {
++ if (btrfs_trim_interrupted()) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+@@ -4000,7 +4000,7 @@ static int trim_bitmaps(struct btrfs_block_group *block_group,
+ }
+ block_group->discard_cursor = start;
+
+- if (fatal_signal_pending(current)) {
++ if (btrfs_trim_interrupted()) {
+ if (start != offset)
+ reset_trimming_bitmap(ctl, offset);
+ ret = -ERESTARTSYS;
+diff --git a/fs/btrfs/free-space-cache.h b/fs/btrfs/free-space-cache.h
+index 83774bfd7b3bb0..9f1dbfdee8cabf 100644
+--- a/fs/btrfs/free-space-cache.h
++++ b/fs/btrfs/free-space-cache.h
+@@ -10,6 +10,7 @@
+ #include <linux/list.h>
+ #include <linux/spinlock.h>
+ #include <linux/mutex.h>
++#include <linux/freezer.h>
+ #include "fs.h"
+
+ struct inode;
+@@ -56,6 +57,11 @@ static inline bool btrfs_free_space_trimming_bitmap(
+ return (info->trim_state == BTRFS_TRIM_STATE_TRIMMING);
+ }
+
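++/*
++ * True when a trim loop should stop early: a fatal signal is pending or the
++ * task is being frozen for suspend. Callers bail out with -ERESTARTSYS.
++ */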
++static inline bool btrfs_trim_interrupted(void)
++{
++ return fatal_signal_pending(current) || freezing(current);
++}
++
+ /*
+ * Deltas are an effective way to populate global statistics. Give macro names
+ * to make it clear what we're doing. An example is discard_extents in
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 37a09ebb34dd88..3a9bbb259cda7f 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -30,6 +30,12 @@ struct btrfs_zoned_device_info;
+
+ #define BTRFS_MAX_DATA_CHUNK_SIZE (10ULL * SZ_1G)
+
++/*
++ * Arbitrary maximum size of one discard request to limit the potentially long time
++ * spent in blkdev_issue_discard().
++ */
++#define BTRFS_MAX_DISCARD_CHUNK_SIZE (SZ_1G)
++
+ extern struct mutex uuid_mutex;
+
+ #define BTRFS_STRIPE_LEN SZ_64K
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 047e3337852e1a..ff02fd44fb7cd0 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1352,7 +1352,7 @@ static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,
+ switch (zone.cond) {
+ case BLK_ZONE_COND_OFFLINE:
+ case BLK_ZONE_COND_READONLY:
+- btrfs_err(fs_info,
++ btrfs_err_in_rcu(fs_info,
+ "zoned: offline/readonly zone %llu on device %s (devid %llu)",
+ (info->physical >> device->zone_info->zone_size_shift),
+ rcu_str_deref(device->name), device->devid);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 687d406f47a92b..cd3328b0315349 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -735,11 +735,12 @@ static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
+
+ ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ /*
+- * Make sure updated value of ->s_mount_flags will be visible before
+- * ->s_flags update
++ * EXT4_FLAGS_SHUTDOWN was set, which stops all filesystem
++ * modifications. We don't set SB_RDONLY because that requires the
++ * sb->s_umount semaphore, and setting it without a proper remount
++ * procedure confuses code such as freeze_super(), leading to
++ * deadlocks and other problems.
+ */
+- smp_wmb();
+- sb->s_flags |= SB_RDONLY;
+ }
+
+ static void update_super_work(struct work_struct *work)
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index aea9e3c405f1fb..c6b3b79fdd0d9b 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -458,7 +458,7 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
+ ext4_set_inode_state(inode, EXT4_STATE_LUSTRE_EA_INODE);
+ ext4_xattr_inode_set_ref(inode, 1);
+ } else {
+- inode_lock(inode);
++ inode_lock_nested(inode, I_MUTEX_XATTR);
+ inode->i_flags |= S_NOQUOTA;
+ inode_unlock(inode);
+ }
+@@ -1039,7 +1039,7 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ s64 ref_count;
+ int ret;
+
+- inode_lock(ea_inode);
++ inode_lock_nested(ea_inode, I_MUTEX_XATTR);
+
+ ret = ext4_reserve_inode_write(handle, ea_inode, &iloc);
+ if (ret)
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index 6df77f008d3fad..fdeb0b34a3d39b 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -375,6 +375,8 @@ static __be32 decode_rc_list(struct xdr_stream *xdr,
+
+ rc_list->rcl_nrefcalls = ntohl(*p++);
+ if (rc_list->rcl_nrefcalls) {
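++ /* Reject counts that cannot possibly fit in the received buffer. */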
++ if (unlikely(rc_list->rcl_nrefcalls > xdr->buf->len))
++ goto out;
+ p = xdr_inline_decode(xdr,
+ rc_list->rcl_nrefcalls * 2 * sizeof(uint32_t));
+ if (unlikely(p == NULL))
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 8286edd6062de6..c49d5cce5ce6ad 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -983,6 +983,7 @@ struct nfs_server *nfs_alloc_server(void)
+ INIT_LIST_HEAD(&server->layouts);
+ INIT_LIST_HEAD(&server->state_owners_lru);
+ INIT_LIST_HEAD(&server->ss_copies);
++ INIT_LIST_HEAD(&server->ss_src_copies);
+
+ atomic_set(&server->active, 0);
+
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 28704f924612c4..531c9c20ef1d1b 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -218,7 +218,7 @@ static int handle_async_copy(struct nfs42_copy_res *res,
+
+ if (dst_server != src_server) {
+ spin_lock(&src_server->nfs_client->cl_lock);
+- list_add_tail(©->src_copies, &src_server->ss_copies);
++ list_add_tail(©->src_copies, &src_server->ss_src_copies);
+ spin_unlock(&src_server->nfs_client->cl_lock);
+ }
+
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 30aba1dedaba6c..9795b3591fda73 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1596,7 +1596,7 @@ static void nfs42_complete_copies(struct nfs4_state_owner *sp, struct nfs4_state
+ complete(©->completion);
+ }
+ }
+- list_for_each_entry(copy, &sp->so_server->ss_copies, src_copies) {
++ list_for_each_entry(copy, &sp->so_server->ss_src_copies, src_copies) {
+ if ((test_bit(NFS_CLNT_SRC_SSC_COPY_STATE, &state->flags) &&
+ !nfs4_stateid_match_other(&state->stateid,
+ ©->parent_src_state->stateid)))
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index e2e248032bfd04..5583db806b0bc4 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -719,7 +719,7 @@ nfsd_file_cache_init(void)
+
+ ret = rhltable_init(&nfsd_file_rhltable, &nfsd_file_rhash_params);
+ if (ret)
+- return ret;
++ goto out;
+
+ ret = -ENOMEM;
+ nfsd_file_slab = KMEM_CACHE(nfsd_file, 0);
+@@ -771,6 +771,8 @@ nfsd_file_cache_init(void)
+
+ INIT_DELAYED_WORK(&nfsd_filecache_laundrette, nfsd_file_gc_worker);
+ out:
++ if (ret)
++ clear_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags);
+ return ret;
+ out_notifier:
+ lease_unregister_notifier(&nfsd_file_lease_notifier);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 3837f4e417247e..64cf5d7b7a4e27 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7158,6 +7158,7 @@ nfsd4_free_stateid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ switch (s->sc_type) {
+ case SC_TYPE_DELEG:
+ if (s->sc_status & SC_STATUS_REVOKED) {
++ s->sc_status |= SC_STATUS_CLOSED;
+ spin_unlock(&s->sc_lock);
+ dp = delegstateid(s);
+ list_del_init(&dp->dl_recall_lru);
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 8103c3c90cd11f..58523b4c37de03 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -449,6 +449,9 @@ static void nfsd_shutdown_net(struct net *net)
+ {
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
++ if (!nn->nfsd_net_up)
++ return;
++ nfsd_export_flush(net);
+ nfs4_state_shutdown_net(net);
+ nfsd_reply_cache_shutdown(nn);
+ nfsd_file_cache_shutdown_net(net);
+@@ -556,11 +559,8 @@ void nfsd_destroy_serv(struct net *net)
+ * other initialization has been done except the rpcb information.
+ */
+ svc_rpcb_cleanup(serv, net);
+- if (!nn->nfsd_net_up)
+- return;
+
+ nfsd_shutdown_net(net);
+- nfsd_export_flush(net);
+ svc_destroy(&serv);
+ }
+
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index ca1ddc46bd8664..d31eae611fe066 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -408,6 +408,42 @@ static int ntfs_extend(struct inode *inode, loff_t pos, size_t count,
+ err = 0;
+ }
+
++ if (file && is_sparsed(ni)) {
++ /*
++ * This code optimizes large writes to sparse files.
++ * TODO: merge this fragment with fallocate fragment.
++ */
++ struct ntfs_sb_info *sbi = ni->mi.sbi;
++ CLST vcn = pos >> sbi->cluster_bits;
++ CLST cend = bytes_to_cluster(sbi, end);
++ CLST cend_v = bytes_to_cluster(sbi, ni->i_valid);
++ CLST lcn, clen;
++ bool new;
++
++ if (cend_v > cend)
++ cend_v = cend;
++
++ /*
++ * Allocate and zero new clusters.
++ * Zeroing these clusters may take a long time.
++ */
++ for (; vcn < cend_v; vcn += clen) {
++ err = attr_data_get_block(ni, vcn, cend_v - vcn, &lcn,
++ &clen, &new, true);
++ if (err)
++ goto out;
++ }
++ /*
++ * Allocate but not zero new clusters.
++ */
++ for (; vcn < cend; vcn += clen) {
++ err = attr_data_get_block(ni, vcn, cend - vcn, &lcn,
++ &clen, &new, false);
++ if (err)
++ goto out;
++ }
++ }
++
+ inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
+ mark_inode_dirty(inode);
+
+@@ -484,7 +520,7 @@ static int ntfs_truncate(struct inode *inode, loff_t new_size)
+ }
+
+ /*
+- * ntfs_fallocate
++ * ntfs_fallocate - file_operations::ntfs_fallocate
+ *
+ * Preallocate space for a file. This implements ntfs's fallocate file
+ * operation, which gets called from sys_fallocate system call. User
+@@ -619,6 +655,8 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ ni_lock(ni);
+ err = attr_collapse_range(ni, vbo, len);
+ ni_unlock(ni);
++ if (err)
++ goto out;
+ } else if (mode & FALLOC_FL_INSERT_RANGE) {
+ /* Check new size. */
+ err = inode_newsize_ok(inode, new_size);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index a469c608a39488..60c975ac38e610 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1900,13 +1900,13 @@ enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
+
+ /*
+ * fiemap_fill_next_extent_k - a copy of fiemap_fill_next_extent
+- * but it accepts kernel address for fi_extents_start
++ * but it uses 'fe_k' instead of fieinfo->fi_extents_start
+ */
+ static int fiemap_fill_next_extent_k(struct fiemap_extent_info *fieinfo,
+- u64 logical, u64 phys, u64 len, u32 flags)
++ struct fiemap_extent *fe_k, u64 logical,
++ u64 phys, u64 len, u32 flags)
+ {
+ struct fiemap_extent extent;
+- struct fiemap_extent __user *dest = fieinfo->fi_extents_start;
+
+ /* only count the extents */
+ if (fieinfo->fi_extents_max == 0) {
+@@ -1930,8 +1930,7 @@ static int fiemap_fill_next_extent_k(struct fiemap_extent_info *fieinfo,
+ extent.fe_length = len;
+ extent.fe_flags = flags;
+
+- dest += fieinfo->fi_extents_mapped;
+- memcpy(dest, &extent, sizeof(extent));
++ memcpy(fe_k + fieinfo->fi_extents_mapped, &extent, sizeof(extent));
+
+ fieinfo->fi_extents_mapped++;
+ if (fieinfo->fi_extents_mapped == fieinfo->fi_extents_max)
+@@ -1949,7 +1948,6 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ __u64 vbo, __u64 len)
+ {
+ int err = 0;
+- struct fiemap_extent __user *fe_u = fieinfo->fi_extents_start;
+ struct fiemap_extent *fe_k = NULL;
+ struct ntfs_sb_info *sbi = ni->mi.sbi;
+ u8 cluster_bits = sbi->cluster_bits;
+@@ -2008,7 +2006,6 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ err = -ENOMEM;
+ goto out;
+ }
+- fieinfo->fi_extents_start = fe_k;
+
+ end = vbo + len;
+ alloc_size = le64_to_cpu(attr->nres.alloc_size);
+@@ -2098,8 +2095,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ if (vbo + dlen >= end)
+ flags |= FIEMAP_EXTENT_LAST;
+
+- err = fiemap_fill_next_extent_k(fieinfo, vbo, lbo, dlen,
+- flags);
++ err = fiemap_fill_next_extent_k(fieinfo, fe_k, vbo, lbo,
++ dlen, flags);
+
+ if (err < 0)
+ break;
+@@ -2120,7 +2117,7 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ if (vbo + bytes >= end)
+ flags |= FIEMAP_EXTENT_LAST;
+
+- err = fiemap_fill_next_extent_k(fieinfo, vbo, lbo, bytes,
++ err = fiemap_fill_next_extent_k(fieinfo, fe_k, vbo, lbo, bytes,
+ flags);
+ if (err < 0)
+ break;
+@@ -2137,15 +2134,13 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ /*
+ * Copy to user memory out of lock
+ */
+- if (copy_to_user(fe_u, fe_k,
++ if (copy_to_user(fieinfo->fi_extents_start, fe_k,
+ fieinfo->fi_extents_max *
+ sizeof(struct fiemap_extent))) {
+ err = -EFAULT;
+ }
+
+ out:
+- /* Restore original pointer. */
+- fieinfo->fi_extents_start = fe_u;
+ kfree(fe_k);
+ return err;
+ }
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index c64dd114ac652f..d0d530f4e2b95e 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -609,14 +609,29 @@ static inline void add_client(struct CLIENT_REC *ca, u16 index, __le16 *head)
+ *head = cpu_to_le16(index);
+ }
+
++/*
++ * Enumerate restart table.
++ *
++ * @t - table to enumerate.
++ * @c - current enumerated element.
++ *
++ * Enumeration starts with @c == NULL.
++ * Returns the next element, or NULL when the table is exhausted.
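++ *
++ * Typical use (illustrative sketch; 'e' is a placeholder element pointer):
++ * for (e = enum_rstbl(t, NULL); e; e = enum_rstbl(t, e))
++ * ; /* handle element e */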
++ */
+ static inline void *enum_rstbl(struct RESTART_TABLE *t, void *c)
+ {
+ __le32 *e;
+ u32 bprt;
+- u16 rsize = t ? le16_to_cpu(t->size) : 0;
++ u16 rsize;
++
++ if (!t)
++ return NULL;
++
++ rsize = le16_to_cpu(t->size);
+
+ if (!c) {
+- if (!t || !t->total)
++ /* start enumeration. */
++ if (!t->total)
+ return NULL;
+ e = Add2Ptr(t, sizeof(struct RESTART_TABLE));
+ } else {
+diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c
+index f16d318c4372a4..0a70c36585463b 100644
+--- a/fs/ntfs3/namei.c
++++ b/fs/ntfs3/namei.c
+@@ -395,7 +395,7 @@ static int ntfs_d_hash(const struct dentry *dentry, struct qstr *name)
+ /*
+ * Try slow way with current upcase table
+ */
+- uni = __getname();
++ uni = kmem_cache_alloc(names_cachep, GFP_NOWAIT);
+ if (!uni)
+ return -ENOMEM;
+
+@@ -417,7 +417,7 @@ static int ntfs_d_hash(const struct dentry *dentry, struct qstr *name)
+ err = 0;
+
+ out:
+- __putname(uni);
++ kmem_cache_free(names_cachep, uni);
+ return err;
+ }
+
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index a8758b85803f44..28fed4072f67c7 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -1491,11 +1491,10 @@ static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
+
+ #ifdef __BIG_ENDIAN
+ {
+- const __le16 *src = sbi->upcase;
+ u16 *dst = sbi->upcase;
+
+ for (i = 0; i < 0x10000; i++)
+- *dst++ = le16_to_cpu(*src++);
++ __swab16s(dst++);
+ }
+ #endif
+
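
The super.c hunk byte-swaps the upcase table in place with __swab16s() instead of walking the same buffer through a const __le16 source pointer and a u16 destination pointer. A small user-space equivalent of the in-place swap:

#include <stdint.h>
#include <stdio.h>

/* Swap the two bytes of a 16-bit value in place, as __swab16s() does. */
static inline void swab16s(uint16_t *p)
{
	*p = (uint16_t)((*p << 8) | (*p >> 8));
}

int main(void)
{
	uint16_t upcase[4] = { 0x0102, 0x0304, 0x0506, 0x0708 };

	for (size_t i = 0; i < 4; i++)
		swab16s(&upcase[i]);	/* one pointer, in place */

	for (size_t i = 0; i < 4; i++)
		printf("0x%04x\n", (unsigned)upcase[i]);	/* 0x0201 0x0403 ... */
	return 0;
}
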
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 8e08a9a1b7ed57..13a041ef0c4e9b 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -50,6 +50,20 @@ static struct proc_dir_entry *proc_root_kcore;
+ #define kc_offset_to_vaddr(o) ((o) + PAGE_OFFSET)
+ #endif
+
++#ifndef kc_xlate_dev_mem_ptr
++#define kc_xlate_dev_mem_ptr kc_xlate_dev_mem_ptr
++static inline void *kc_xlate_dev_mem_ptr(phys_addr_t phys)
++{
++ return __va(phys);
++}
++#endif
++#ifndef kc_unxlate_dev_mem_ptr
++#define kc_unxlate_dev_mem_ptr kc_unxlate_dev_mem_ptr
++static inline void kc_unxlate_dev_mem_ptr(phys_addr_t phys, void *virt)
++{
++}
++#endif
++
+ static LIST_HEAD(kclist_head);
+ static DECLARE_RWSEM(kclist_lock);
+ static int kcore_need_update = 1;
+@@ -471,6 +485,8 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ while (buflen) {
+ struct page *page;
+ unsigned long pfn;
++ phys_addr_t phys;
++ void *__start;
+
+ /*
+ * If this is the first iteration or the address is not within
+@@ -537,7 +553,8 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ }
+ break;
+ case KCORE_RAM:
+- pfn = __pa(start) >> PAGE_SHIFT;
++ phys = __pa(start);
++ pfn = phys >> PAGE_SHIFT;
+ page = pfn_to_online_page(pfn);
+
+ /*
+@@ -557,13 +574,28 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ fallthrough;
+ case KCORE_VMEMMAP:
+ case KCORE_TEXT:
++ if (m->type == KCORE_RAM) {
++ __start = kc_xlate_dev_mem_ptr(phys);
++ if (!__start) {
++ ret = -ENOMEM;
++ if (iov_iter_zero(tsz, iter) != tsz)
++ ret = -EFAULT;
++ goto out;
++ }
++ } else {
++ __start = (void *)start;
++ }
++
+ /*
+ * Sadly we must use a bounce buffer here to be able to
+ * make use of copy_from_kernel_nofault(), as these
+ * memory regions might not always be mapped on all
+ * architectures.
+ */
+- if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
++ ret = copy_from_kernel_nofault(buf, __start, tsz);
++ if (m->type == KCORE_RAM)
++ kc_unxlate_dev_mem_ptr(phys, __start);
++ if (ret) {
+ if (iov_iter_zero(tsz, iter) != tsz) {
+ ret = -EFAULT;
+ goto out;
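
The kc_xlate_dev_mem_ptr()/kc_unxlate_dev_mem_ptr() pair added above uses the kernel's usual override-with-default idiom: an architecture header can supply its own implementation and #define the name to itself, and the generic fallback is only compiled in when that define is absent. A stripped-down illustration of the idiom, with invented names:

#include <stdio.h>

/*
 * An "architecture header" that wants its own translation would provide:
 *
 *	#define xlate xlate
 *	static inline void *xlate(unsigned long phys) { ... }
 *
 * Because nothing here did, the generic fallback below is compiled in.
 */
#ifndef xlate
#define xlate xlate		/* mark the default as present */
static inline void *xlate(unsigned long phys)
{
	return (void *)phys;	/* generic identity translation, like __va() */
}
#endif

int main(void)
{
	printf("%p\n", xlate(0x1000UL));
	return 0;
}
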
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 2ba43327d3f4c8..e9be7b43bb6b8d 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4312,7 +4312,7 @@ smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 *key)
+ */
+ static int
+ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+- struct smb_rqst *rqst, int enc)
++ struct smb_rqst *rqst, int enc, struct crypto_aead *tfm)
+ {
+ struct smb2_transform_hdr *tr_hdr =
+ (struct smb2_transform_hdr *)rqst[0].rq_iov[0].iov_base;
+@@ -4323,8 +4323,6 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ u8 key[SMB3_ENC_DEC_KEY_SIZE];
+ struct aead_request *req;
+ u8 *iv;
+- DECLARE_CRYPTO_WAIT(wait);
+- struct crypto_aead *tfm;
+ unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
+ void *creq;
+ size_t sensitive_size;
+@@ -4336,14 +4334,6 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ return rc;
+ }
+
+- rc = smb3_crypto_aead_allocate(server);
+- if (rc) {
+- cifs_server_dbg(VFS, "%s: crypto alloc failed\n", __func__);
+- return rc;
+- }
+-
+- tfm = enc ? server->secmech.enc : server->secmech.dec;
+-
+ if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) ||
+ (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM))
+ rc = crypto_aead_setkey(tfm, key, SMB3_GCM256_CRYPTKEY_SIZE);
+@@ -4383,11 +4373,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ aead_request_set_crypt(req, sg, sg, crypt_len, iv);
+ aead_request_set_ad(req, assoc_data_len);
+
+- aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+- crypto_req_done, &wait);
+-
+- rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
+- : crypto_aead_decrypt(req), &wait);
++ rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
+
+ if (!rc && enc)
+ memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
+@@ -4493,7 +4479,7 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
+ /* fill the 1st iov with a transform header */
+ fill_transform_hdr(tr_hdr, orig_len, old_rq, server->cipher_type);
+
+- rc = crypt_message(server, num_rqst, new_rq, 1);
++ rc = crypt_message(server, num_rqst, new_rq, 1, server->secmech.enc);
+ cifs_dbg(FYI, "Encrypt message returned %d\n", rc);
+ if (rc)
+ goto err_free;
+@@ -4518,8 +4504,9 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+ unsigned int buf_data_size, struct iov_iter *iter,
+ bool is_offloaded)
+ {
+- struct kvec iov[2];
++ struct crypto_aead *tfm;
+ struct smb_rqst rqst = {NULL};
++ struct kvec iov[2];
+ size_t iter_size = 0;
+ int rc;
+
+@@ -4535,9 +4522,31 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+ iter_size = iov_iter_count(iter);
+ }
+
+- rc = crypt_message(server, 1, &rqst, 0);
++ if (is_offloaded) {
++ if ((server->cipher_type == SMB2_ENCRYPTION_AES128_GCM) ||
++ (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM))
++ tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
++ else
++ tfm = crypto_alloc_aead("ccm(aes)", 0, 0);
++ if (IS_ERR(tfm)) {
++ rc = PTR_ERR(tfm);
++ cifs_server_dbg(VFS, "%s: Failed alloc decrypt TFM, rc=%d\n", __func__, rc);
++
++ return rc;
++ }
++ } else {
++ if (unlikely(!server->secmech.dec))
++ return -EIO;
++
++ tfm = server->secmech.dec;
++ }
++
++ rc = crypt_message(server, 1, &rqst, 0, tfm);
+ cifs_dbg(FYI, "Decrypt message returned %d\n", rc);
+
++ if (is_offloaded)
++ crypto_free_aead(tfm);
++
+ if (rc)
+ return rc;
+
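
Taken together, the smb2ops.c hunks move transform ownership out of crypt_message(): the caller passes the AEAD in, the offloaded decrypt path allocates a private transform that it must free itself, and the regular path reuses the server's long-lived one (which the smb2pdu.c hunk below now allocates once the cipher is negotiated). A user-space sketch of that ownership split; struct tfm and the calls here are stand-ins, not CIFS or crypto APIs:

#include <stdio.h>
#include <stdlib.h>

struct tfm { const char *alg; };

static struct tfm shared_dec = { "gcm(aes)" };	/* server->secmech.dec analogue */

/* The worker no longer allocates: it uses whatever transform it is handed. */
static int crypt_message(struct tfm *tfm, int enc)
{
	printf("%scrypt with %s\n", enc ? "en" : "de", tfm->alg);
	return 0;
}

static int decrypt(int offloaded)
{
	struct tfm *tfm;
	int rc;

	if (offloaded) {
		tfm = malloc(sizeof(*tfm));	/* private per-call transform */
		if (!tfm)
			return -1;
		tfm->alg = "gcm(aes)";
	} else {
		tfm = &shared_dec;		/* shared, long-lived, not freed here */
	}

	rc = crypt_message(tfm, 0);

	if (offloaded)
		free(tfm);			/* only the per-call instance is freed */
	return rc;
}

int main(void)
{
	decrypt(0);
	decrypt(1);
	return 0;
}
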
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 88dc49d670371b..3d9e6e15dd900a 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1265,6 +1265,12 @@ SMB2_negotiate(const unsigned int xid,
+ else
+ cifs_server_dbg(VFS, "Missing expected negotiate contexts\n");
+ }
++
++ if (server->cipher_type && !rc) {
++ rc = smb3_crypto_aead_allocate(server);
++ if (rc)
++ cifs_server_dbg(VFS, "%s: crypto alloc failed, rc=%d\n", __func__, rc);
++ }
+ neg_exit:
+ free_rsp_buf(resp_buftype, rsp);
+ return rc;
+diff --git a/fs/unicode/mkutf8data.c b/fs/unicode/mkutf8data.c
+index 77b685db827511..b2bd08250c7a09 100644
+--- a/fs/unicode/mkutf8data.c
++++ b/fs/unicode/mkutf8data.c
+@@ -2230,75 +2230,6 @@ static void nfdicf_init(void)
+ file_fail(fold_name);
+ }
+
+-static void ignore_init(void)
+-{
+- FILE *file;
+- unsigned int unichar;
+- unsigned int first;
+- unsigned int last;
+- unsigned int *um;
+- int count;
+- int ret;
+-
+- if (verbose > 0)
+- printf("Parsing %s\n", prop_name);
+- file = fopen(prop_name, "r");
+- if (!file)
+- open_fail(prop_name, errno);
+- assert(file);
+- count = 0;
+- while (fgets(line, LINESIZE, file)) {
+- ret = sscanf(line, "%X..%X ; %s # ", &first, &last, buf0);
+- if (ret == 3) {
+- if (strcmp(buf0, "Default_Ignorable_Code_Point"))
+- continue;
+- if (!utf32valid(first) || !utf32valid(last))
+- line_fail(prop_name, line);
+- for (unichar = first; unichar <= last; unichar++) {
+- free(unicode_data[unichar].utf32nfdi);
+- um = malloc(sizeof(unsigned int));
+- *um = 0;
+- unicode_data[unichar].utf32nfdi = um;
+- free(unicode_data[unichar].utf32nfdicf);
+- um = malloc(sizeof(unsigned int));
+- *um = 0;
+- unicode_data[unichar].utf32nfdicf = um;
+- count++;
+- }
+- if (verbose > 1)
+- printf(" %X..%X Default_Ignorable_Code_Point\n",
+- first, last);
+- continue;
+- }
+- ret = sscanf(line, "%X ; %s # ", &unichar, buf0);
+- if (ret == 2) {
+- if (strcmp(buf0, "Default_Ignorable_Code_Point"))
+- continue;
+- if (!utf32valid(unichar))
+- line_fail(prop_name, line);
+- free(unicode_data[unichar].utf32nfdi);
+- um = malloc(sizeof(unsigned int));
+- *um = 0;
+- unicode_data[unichar].utf32nfdi = um;
+- free(unicode_data[unichar].utf32nfdicf);
+- um = malloc(sizeof(unsigned int));
+- *um = 0;
+- unicode_data[unichar].utf32nfdicf = um;
+- if (verbose > 1)
+- printf(" %X Default_Ignorable_Code_Point\n",
+- unichar);
+- count++;
+- continue;
+- }
+- }
+- fclose(file);
+-
+- if (verbose > 0)
+- printf("Found %d entries\n", count);
+- if (count == 0)
+- file_fail(prop_name);
+-}
+-
+ static void corrections_init(void)
+ {
+ FILE *file;
+@@ -3411,7 +3342,6 @@ int main(int argc, char *argv[])
+ ccc_init();
+ nfdi_init();
+ nfdicf_init();
+- ignore_init();
+ corrections_init();
+ hangul_decompose();
+ nfdi_decompose();
+diff --git a/fs/unicode/utf8data.c_shipped b/fs/unicode/utf8data.c_shipped
+index dafa5fed761d83..ac2da4ba2dc0f9 100644
+--- a/fs/unicode/utf8data.c_shipped
++++ b/fs/unicode/utf8data.c_shipped
+@@ -82,58 +82,58 @@ static const struct utf8data utf8nfdidata[] = {
+ { 0xc0100, 20736 }
+ };
+
+-static const unsigned char utf8data[64256] = {
++static const unsigned char utf8data[64080] = {
+ /* nfdicf_30100 */
+- 0xd7,0x07,0x66,0x84,0x0c,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x99,0x1a,0xe3,0x63,0x15,
+- 0xe2,0x4c,0x0e,0xc1,0xe0,0x4e,0x0d,0xcf,0x86,0x65,0x2d,0x0d,0x01,0x00,0xd4,0xb8,
+- 0xd3,0x27,0xe2,0x89,0xa3,0xe1,0xce,0x35,0xe0,0x2c,0x22,0xcf,0x86,0xc5,0xe4,0x15,
+- 0x6d,0xe3,0x60,0x68,0xe2,0xf6,0x65,0xe1,0x29,0x65,0xe0,0xee,0x64,0xcf,0x86,0xe5,
+- 0xb3,0x64,0x64,0x96,0x64,0x0b,0x00,0xd2,0x0e,0xe1,0xb5,0x3c,0xe0,0xba,0xa3,0xcf,
+- 0x86,0xcf,0x06,0x01,0x00,0xd1,0x0c,0xe0,0x1e,0xa9,0xcf,0x86,0xcf,0x06,0x02,0xff,
++ 0xd7,0x07,0x66,0x84,0x0c,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x96,0x1a,0xe3,0x60,0x15,
++ 0xe2,0x49,0x0e,0xc1,0xe0,0x4b,0x0d,0xcf,0x86,0x65,0x2d,0x0d,0x01,0x00,0xd4,0xb8,
++ 0xd3,0x27,0xe2,0x03,0xa3,0xe1,0xcb,0x35,0xe0,0x29,0x22,0xcf,0x86,0xc5,0xe4,0xfa,
++ 0x6c,0xe3,0x45,0x68,0xe2,0xdb,0x65,0xe1,0x0e,0x65,0xe0,0xd3,0x64,0xcf,0x86,0xe5,
++ 0x98,0x64,0x64,0x7b,0x64,0x0b,0x00,0xd2,0x0e,0xe1,0xb3,0x3c,0xe0,0x34,0xa3,0xcf,
++ 0x86,0xcf,0x06,0x01,0x00,0xd1,0x0c,0xe0,0x98,0xa8,0xcf,0x86,0xcf,0x06,0x02,0xff,
+ 0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,
+- 0x00,0xe4,0xe1,0x45,0xe3,0x3b,0x45,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x87,0xad,
+- 0xd0,0x21,0xcf,0x86,0xe5,0x81,0xaa,0xe4,0x00,0xaa,0xe3,0xbf,0xa9,0xe2,0x9e,0xa9,
+- 0xe1,0x8d,0xa9,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,
+- 0x00,0xcf,0x86,0xe5,0x63,0xac,0xd4,0x19,0xe3,0xa2,0xab,0xe2,0x81,0xab,0xe1,0x70,
+- 0xab,0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,
+- 0x09,0xac,0xe2,0xe8,0xab,0xe1,0xd7,0xab,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
+- 0x01,0xff,0xe9,0x9b,0xbb,0x00,0x83,0xe2,0x19,0xfa,0xe1,0xf2,0xf6,0xe0,0x6f,0xf5,
+- 0xcf,0x86,0xd5,0x31,0xc4,0xe3,0x54,0x4e,0xe2,0xf5,0x4c,0xe1,0xa4,0xcc,0xe0,0x9c,
+- 0x4b,0xcf,0x86,0xe5,0x8e,0x49,0xe4,0xaf,0x46,0xe3,0x11,0xbd,0xe2,0x68,0xbc,0xe1,
+- 0x43,0xbc,0xe0,0x1c,0xbc,0xcf,0x86,0xe5,0xe9,0xbb,0x94,0x07,0x63,0xd4,0xbb,0x07,
+- 0x00,0x07,0x00,0xe4,0xdb,0xf4,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,
+- 0xe1,0xea,0xe1,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xd9,0xe2,0xcf,0x86,
+- 0xe5,0x9e,0xe2,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xd9,0xe2,0xcf,0x06,
+- 0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x74,0xf4,0xe3,0x5d,0xf3,
+- 0xd2,0xa0,0xe1,0x13,0xe7,0xd0,0x21,0xcf,0x86,0xe5,0x14,0xe4,0xe4,0x90,0xe3,0xe3,
+- 0x4e,0xe3,0xe2,0x2d,0xe3,0xe1,0x1b,0xe3,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,
+- 0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x70,0xe5,0xe3,0x2f,0xe5,
+- 0xe2,0x0e,0xe5,0xe1,0xfd,0xe4,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,
+- 0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xf7,0xe5,0xe1,0xe6,0xe5,0x10,0x09,
+- 0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x17,
+- 0xe6,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,
+- 0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x5d,0xe6,0xd2,0x14,0xe1,0x2c,0xe6,
++ 0x00,0xe4,0xdf,0x45,0xe3,0x39,0x45,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x01,0xad,
++ 0xd0,0x21,0xcf,0x86,0xe5,0xfb,0xa9,0xe4,0x7a,0xa9,0xe3,0x39,0xa9,0xe2,0x18,0xa9,
++ 0xe1,0x07,0xa9,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,
++ 0x00,0xcf,0x86,0xe5,0xdd,0xab,0xd4,0x19,0xe3,0x1c,0xab,0xe2,0xfb,0xaa,0xe1,0xea,
++ 0xaa,0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,
++ 0x83,0xab,0xe2,0x62,0xab,0xe1,0x51,0xab,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
++ 0x01,0xff,0xe9,0x9b,0xbb,0x00,0x83,0xe2,0x68,0xf9,0xe1,0x52,0xf6,0xe0,0xcf,0xf4,
++ 0xcf,0x86,0xd5,0x31,0xc4,0xe3,0x51,0x4e,0xe2,0xf2,0x4c,0xe1,0x09,0xcc,0xe0,0x99,
++ 0x4b,0xcf,0x86,0xe5,0x8b,0x49,0xe4,0xac,0x46,0xe3,0x76,0xbc,0xe2,0xcd,0xbb,0xe1,
++ 0xa8,0xbb,0xe0,0x81,0xbb,0xcf,0x86,0xe5,0x4e,0xbb,0x94,0x07,0x63,0x39,0xbb,0x07,
++ 0x00,0x07,0x00,0xe4,0x3b,0xf4,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,
++ 0xe1,0x4a,0xe1,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x39,0xe2,0xcf,0x86,
++ 0xe5,0xfe,0xe1,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x39,0xe2,0xcf,0x06,
++ 0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xd4,0xf3,0xe3,0xbd,0xf2,
++ 0xd2,0xa0,0xe1,0x73,0xe6,0xd0,0x21,0xcf,0x86,0xe5,0x74,0xe3,0xe4,0xf0,0xe2,0xe3,
++ 0xae,0xe2,0xe2,0x8d,0xe2,0xe1,0x7b,0xe2,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,
++ 0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xd0,0xe4,0xe3,0x8f,0xe4,
++ 0xe2,0x6e,0xe4,0xe1,0x5d,0xe4,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,
++ 0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0x57,0xe5,0xe1,0x46,0xe5,0x10,0x09,
++ 0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x77,
++ 0xe5,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,
++ 0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0xbd,0xe5,0xd2,0x14,0xe1,0x8c,0xe5,
+ 0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,
+- 0x38,0xe6,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,
+- 0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x8d,0xeb,0xd4,0x19,0xe3,0xc6,0xea,0xe2,0xa4,
+- 0xea,0xe1,0x93,0xea,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,
+- 0xb7,0x00,0xd3,0x18,0xe2,0x10,0xeb,0xe1,0xff,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,
+- 0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x28,0xeb,0x10,
++ 0x98,0xe5,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,
++ 0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0xed,0xea,0xd4,0x19,0xe3,0x26,0xea,0xe2,0x04,
++ 0xea,0xe1,0xf3,0xe9,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,
++ 0xb7,0x00,0xd3,0x18,0xe2,0x70,0xea,0xe1,0x5f,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,
++ 0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x88,0xea,0x10,
+ 0x08,0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,
+ 0x08,0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,
+- 0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x2a,
+- 0xed,0xd4,0x1a,0xe3,0x62,0xec,0xe2,0x48,0xec,0xe1,0x35,0xec,0x10,0x08,0x05,0xff,
+- 0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0xaa,0xec,
+- 0xe1,0x98,0xec,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,
+- 0x00,0xd2,0x13,0xe1,0xc6,0xec,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,
++ 0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x8a,
++ 0xec,0xd4,0x1a,0xe3,0xc2,0xeb,0xe2,0xa8,0xeb,0xe1,0x95,0xeb,0x10,0x08,0x05,0xff,
++ 0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x0a,0xec,
++ 0xe1,0xf8,0xeb,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,
++ 0x00,0xd2,0x13,0xe1,0x26,0xec,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,
+ 0xe7,0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,
+ 0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,
+- 0xff,0xe7,0xaa,0xae,0x00,0xe0,0xdc,0xef,0xcf,0x86,0xd5,0x1d,0xe4,0x51,0xee,0xe3,
+- 0x0d,0xee,0xe2,0xeb,0xed,0xe1,0xda,0xed,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,
+- 0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xf8,0xee,0xe2,0xd4,0xee,0xe1,
+- 0xc3,0xee,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,
+- 0xd3,0x18,0xe2,0x43,0xef,0xe1,0x32,0xef,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,
+- 0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x5b,0xef,0x10,0x08,0x05,
++ 0xff,0xe7,0xaa,0xae,0x00,0xe0,0x3c,0xef,0xcf,0x86,0xd5,0x1d,0xe4,0xb1,0xed,0xe3,
++ 0x6d,0xed,0xe2,0x4b,0xed,0xe1,0x3a,0xed,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,
++ 0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x58,0xee,0xe2,0x34,0xee,0xe1,
++ 0x23,0xee,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,
++ 0xd3,0x18,0xe2,0xa3,0xee,0xe1,0x92,0xee,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,
++ 0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xbb,0xee,0x10,0x08,0x05,
+ 0xff,0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,
+ 0xff,0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,
+ 0x9e,0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+@@ -141,152 +141,152 @@ static const unsigned char utf8data[64256] = {
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdi_30100 */
+- 0x57,0x04,0x01,0x00,0xc6,0xd5,0x16,0xe4,0xc2,0x59,0xe3,0xfb,0x54,0xe2,0x74,0x4f,
+- 0xc1,0xe0,0xa0,0x4d,0xcf,0x86,0x65,0x84,0x4d,0x01,0x00,0xd4,0xb8,0xd3,0x27,0xe2,
+- 0x0c,0xa0,0xe1,0xdf,0x8d,0xe0,0x39,0x71,0xcf,0x86,0xc5,0xe4,0x98,0x69,0xe3,0xe3,
+- 0x64,0xe2,0x79,0x62,0xe1,0xac,0x61,0xe0,0x71,0x61,0xcf,0x86,0xe5,0x36,0x61,0x64,
+- 0x19,0x61,0x0b,0x00,0xd2,0x0e,0xe1,0xc2,0xa0,0xe0,0x3d,0xa0,0xcf,0x86,0xcf,0x06,
+- 0x01,0x00,0xd1,0x0c,0xe0,0xa1,0xa5,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd0,0x08,
+- 0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x9e,
+- 0xb6,0xe3,0x18,0xae,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x0a,0xaa,0xd0,0x21,0xcf,
+- 0x86,0xe5,0x04,0xa7,0xe4,0x83,0xa6,0xe3,0x42,0xa6,0xe2,0x21,0xa6,0xe1,0x10,0xa6,
+- 0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0xcf,0x86,
+- 0xe5,0xe6,0xa8,0xd4,0x19,0xe3,0x25,0xa8,0xe2,0x04,0xa8,0xe1,0xf3,0xa7,0x10,0x08,
+- 0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,0x8c,0xa8,0xe2,
+- 0x6b,0xa8,0xe1,0x5a,0xa8,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,
+- 0x9b,0xbb,0x00,0x83,0xe2,0x9c,0xf6,0xe1,0x75,0xf3,0xe0,0xf2,0xf1,0xcf,0x86,0xd5,
+- 0x31,0xc4,0xe3,0x6d,0xcc,0xe2,0x46,0xca,0xe1,0x27,0xc9,0xe0,0xb7,0xbf,0xcf,0x86,
+- 0xe5,0xaa,0xbb,0xe4,0xa3,0xba,0xe3,0x94,0xb9,0xe2,0xeb,0xb8,0xe1,0xc6,0xb8,0xe0,
+- 0x9f,0xb8,0xcf,0x86,0xe5,0x6c,0xb8,0x94,0x07,0x63,0x57,0xb8,0x07,0x00,0x07,0x00,
+- 0xe4,0x5e,0xf1,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0x6d,0xde,
+- 0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x5c,0xdf,0xcf,0x86,0xe5,0x21,0xdf,
+- 0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x5c,0xdf,0xcf,0x06,0x13,0x00,0xcf,
+- 0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xf7,0xf0,0xe3,0xe0,0xef,0xd2,0xa0,0xe1,
+- 0x96,0xe3,0xd0,0x21,0xcf,0x86,0xe5,0x97,0xe0,0xe4,0x13,0xe0,0xe3,0xd1,0xdf,0xe2,
+- 0xb0,0xdf,0xe1,0x9e,0xdf,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,
+- 0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xf3,0xe1,0xe3,0xb2,0xe1,0xe2,0x91,0xe1,
+- 0xe1,0x80,0xe1,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,
+- 0x00,0xd4,0x34,0xd3,0x18,0xe2,0x7a,0xe2,0xe1,0x69,0xe2,0x10,0x09,0x05,0xff,0xf0,
+- 0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x9a,0xe2,0x91,0x11,
+- 0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,
+- 0xff,0xe5,0xac,0xbe,0x00,0xe3,0xe0,0xe2,0xd2,0x14,0xe1,0xaf,0xe2,0x10,0x08,0x05,
+- 0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0xbb,0xe2,0x10,
+- 0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,
+- 0x6a,0xcf,0x86,0xe5,0x10,0xe8,0xd4,0x19,0xe3,0x49,0xe7,0xe2,0x27,0xe7,0xe1,0x16,
+- 0xe7,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,
+- 0x18,0xe2,0x93,0xe7,0xe1,0x82,0xe7,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,
+- 0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xab,0xe7,0x10,0x08,0x05,0xff,
+- 0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,
+- 0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,
+- 0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0xad,0xe9,0xd4,0x1a,
+- 0xe3,0xe5,0xe8,0xe2,0xcb,0xe8,0xe1,0xb8,0xe8,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,
+- 0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x2d,0xe9,0xe1,0x1b,0xe9,
+- 0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,
+- 0xe1,0x49,0xe9,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,
+- 0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,
+- 0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,
+- 0xae,0x00,0xe0,0x5f,0xec,0xcf,0x86,0xd5,0x1d,0xe4,0xd4,0xea,0xe3,0x90,0xea,0xe2,
+- 0x6e,0xea,0xe1,0x5d,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,
+- 0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x7b,0xeb,0xe2,0x57,0xeb,0xe1,0x46,0xeb,0x10,
+- 0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,
+- 0xc6,0xeb,0xe1,0xb5,0xeb,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,
+- 0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xde,0xeb,0x10,0x08,0x05,0xff,0xe8,0x9a,
+- 0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,
+- 0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,
+- 0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x57,0x04,0x01,0x00,0xc6,0xd5,0x13,0xe4,0xa8,0x59,0xe3,0xe2,0x54,0xe2,0x5b,0x4f,
++ 0xc1,0xe0,0x87,0x4d,0xcf,0x06,0x01,0x00,0xd4,0xb8,0xd3,0x27,0xe2,0x89,0x9f,0xe1,
++ 0x91,0x8d,0xe0,0x21,0x71,0xcf,0x86,0xc5,0xe4,0x80,0x69,0xe3,0xcb,0x64,0xe2,0x61,
++ 0x62,0xe1,0x94,0x61,0xe0,0x59,0x61,0xcf,0x86,0xe5,0x1e,0x61,0x64,0x01,0x61,0x0b,
++ 0x00,0xd2,0x0e,0xe1,0x3f,0xa0,0xe0,0xba,0x9f,0xcf,0x86,0xcf,0x06,0x01,0x00,0xd1,
++ 0x0c,0xe0,0x1e,0xa5,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,
++ 0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x1b,0xb6,0xe3,0x95,
++ 0xad,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x87,0xa9,0xd0,0x21,0xcf,0x86,0xe5,0x81,
++ 0xa6,0xe4,0x00,0xa6,0xe3,0xbf,0xa5,0xe2,0x9e,0xa5,0xe1,0x8d,0xa5,0x10,0x08,0x01,
++ 0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0xcf,0x86,0xe5,0x63,0xa8,
++ 0xd4,0x19,0xe3,0xa2,0xa7,0xe2,0x81,0xa7,0xe1,0x70,0xa7,0x10,0x08,0x01,0xff,0xe9,
++ 0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,0x09,0xa8,0xe2,0xe8,0xa7,0xe1,
++ 0xd7,0xa7,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,0x9b,0xbb,0x00,
++ 0x83,0xe2,0xee,0xf5,0xe1,0xd8,0xf2,0xe0,0x55,0xf1,0xcf,0x86,0xd5,0x31,0xc4,0xe3,
++ 0xd5,0xcb,0xe2,0xae,0xc9,0xe1,0x8f,0xc8,0xe0,0x1f,0xbf,0xcf,0x86,0xe5,0x12,0xbb,
++ 0xe4,0x0b,0xba,0xe3,0xfc,0xb8,0xe2,0x53,0xb8,0xe1,0x2e,0xb8,0xe0,0x07,0xb8,0xcf,
++ 0x86,0xe5,0xd4,0xb7,0x94,0x07,0x63,0xbf,0xb7,0x07,0x00,0x07,0x00,0xe4,0xc1,0xf0,
++ 0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0xd0,0xdd,0xcf,0x86,0xcf,
++ 0x06,0x05,0x00,0xd1,0x0e,0xe0,0xbf,0xde,0xcf,0x86,0xe5,0x84,0xde,0xcf,0x06,0x11,
++ 0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xbf,0xde,0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,
++ 0xcf,0x06,0x00,0x00,0xe4,0x5a,0xf0,0xe3,0x43,0xef,0xd2,0xa0,0xe1,0xf9,0xe2,0xd0,
++ 0x21,0xcf,0x86,0xe5,0xfa,0xdf,0xe4,0x76,0xdf,0xe3,0x34,0xdf,0xe2,0x13,0xdf,0xe1,
++ 0x01,0xdf,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,
++ 0xcf,0x86,0xd5,0x1c,0xe4,0x56,0xe1,0xe3,0x15,0xe1,0xe2,0xf4,0xe0,0xe1,0xe3,0xe0,
++ 0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,
++ 0xd3,0x18,0xe2,0xdd,0xe1,0xe1,0xcc,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa1,0x9a,0xa8,
++ 0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0xfd,0xe1,0x91,0x11,0x10,0x09,0x05,
++ 0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,
++ 0xbe,0x00,0xe3,0x43,0xe2,0xd2,0x14,0xe1,0x12,0xe2,0x10,0x08,0x05,0xff,0xe5,0xaf,
++ 0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x1e,0xe2,0x10,0x08,0x05,0xff,
++ 0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,
++ 0xe5,0x73,0xe7,0xd4,0x19,0xe3,0xac,0xe6,0xe2,0x8a,0xe6,0xe1,0x79,0xe6,0x10,0x08,
++ 0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0xf6,
++ 0xe6,0xe1,0xe5,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,
++ 0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x0e,0xe7,0x10,0x08,0x05,0xff,0xe7,0x81,0xbd,
++ 0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,0xe7,0x85,0x85,
++ 0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,0x86,0x9c,0x00,
++ 0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x10,0xe9,0xd4,0x1a,0xe3,0x48,0xe8,
++ 0xe2,0x2e,0xe8,0xe1,0x1b,0xe8,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,0x00,0x05,0xff,
++ 0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x90,0xe8,0xe1,0x7e,0xe8,0x10,0x08,0x05,
++ 0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,0xe1,0xac,0xe8,
++ 0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,0x00,0xd1,0x12,
++ 0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,
++ 0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,
++ 0xc2,0xeb,0xcf,0x86,0xd5,0x1d,0xe4,0x37,0xea,0xe3,0xf3,0xe9,0xe2,0xd1,0xe9,0xe1,
++ 0xc0,0xe9,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,
++ 0x00,0xd4,0x19,0xe3,0xde,0xea,0xe2,0xba,0xea,0xe1,0xa9,0xea,0x10,0x08,0x05,0xff,
++ 0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,0x29,0xeb,0xe1,
++ 0x18,0xeb,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,
++ 0x92,0x00,0xd2,0x13,0xe1,0x41,0xeb,0x10,0x08,0x05,0xff,0xe8,0x9a,0x88,0x00,0x05,
++ 0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,0xa8,0x00,0x05,
++ 0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,0x05,0xff,0xe4,
++ 0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdicf_30200 */
+- 0xd7,0x07,0x66,0x84,0x05,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x99,0x13,0xe3,0x63,0x0e,
+- 0xe2,0x4c,0x07,0xc1,0xe0,0x4e,0x06,0xcf,0x86,0x65,0x2d,0x06,0x01,0x00,0xd4,0x2a,
+- 0xe3,0xd0,0x35,0xe2,0x88,0x9c,0xe1,0xcd,0x2e,0xe0,0x2b,0x1b,0xcf,0x86,0xc5,0xe4,
+- 0x14,0x66,0xe3,0x5f,0x61,0xe2,0xf5,0x5e,0xe1,0x28,0x5e,0xe0,0xed,0x5d,0xcf,0x86,
+- 0xe5,0xb2,0x5d,0x64,0x95,0x5d,0x0b,0x00,0x83,0xe2,0xa7,0xf3,0xe1,0x80,0xf0,0xe0,
+- 0xfd,0xee,0xcf,0x86,0xd5,0x31,0xc4,0xe3,0xe2,0x47,0xe2,0x83,0x46,0xe1,0x32,0xc6,
+- 0xe0,0x2a,0x45,0xcf,0x86,0xe5,0x1c,0x43,0xe4,0x3d,0x40,0xe3,0x9f,0xb6,0xe2,0xf6,
+- 0xb5,0xe1,0xd1,0xb5,0xe0,0xaa,0xb5,0xcf,0x86,0xe5,0x77,0xb5,0x94,0x07,0x63,0x62,
+- 0xb5,0x07,0x00,0x07,0x00,0xe4,0x69,0xee,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,
+- 0xd2,0x0b,0xe1,0x78,0xdb,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x67,0xdc,
+- 0xcf,0x86,0xe5,0x2c,0xdc,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x67,0xdc,
+- 0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x02,0xee,0xe3,
+- 0xeb,0xec,0xd2,0xa0,0xe1,0xa1,0xe0,0xd0,0x21,0xcf,0x86,0xe5,0xa2,0xdd,0xe4,0x1e,
+- 0xdd,0xe3,0xdc,0xdc,0xe2,0xbb,0xdc,0xe1,0xa9,0xdc,0x10,0x08,0x05,0xff,0xe4,0xb8,
+- 0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xfe,0xde,0xe3,
+- 0xbd,0xde,0xe2,0x9c,0xde,0xe1,0x8b,0xde,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,
+- 0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0x85,0xdf,0xe1,0x74,0xdf,
++ 0xd7,0x07,0x66,0x84,0x05,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x96,0x13,0xe3,0x60,0x0e,
++ 0xe2,0x49,0x07,0xc1,0xe0,0x4b,0x06,0xcf,0x86,0x65,0x2d,0x06,0x01,0x00,0xd4,0x2a,
++ 0xe3,0xce,0x35,0xe2,0x02,0x9c,0xe1,0xca,0x2e,0xe0,0x28,0x1b,0xcf,0x86,0xc5,0xe4,
++ 0xf9,0x65,0xe3,0x44,0x61,0xe2,0xda,0x5e,0xe1,0x0d,0x5e,0xe0,0xd2,0x5d,0xcf,0x86,
++ 0xe5,0x97,0x5d,0x64,0x7a,0x5d,0x0b,0x00,0x83,0xe2,0xf6,0xf2,0xe1,0xe0,0xef,0xe0,
++ 0x5d,0xee,0xcf,0x86,0xd5,0x31,0xc4,0xe3,0xdf,0x47,0xe2,0x80,0x46,0xe1,0x97,0xc5,
++ 0xe0,0x27,0x45,0xcf,0x86,0xe5,0x19,0x43,0xe4,0x3a,0x40,0xe3,0x04,0xb6,0xe2,0x5b,
++ 0xb5,0xe1,0x36,0xb5,0xe0,0x0f,0xb5,0xcf,0x86,0xe5,0xdc,0xb4,0x94,0x07,0x63,0xc7,
++ 0xb4,0x07,0x00,0x07,0x00,0xe4,0xc9,0xed,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,
++ 0xd2,0x0b,0xe1,0xd8,0xda,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xc7,0xdb,
++ 0xcf,0x86,0xe5,0x8c,0xdb,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xc7,0xdb,
++ 0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x62,0xed,0xe3,
++ 0x4b,0xec,0xd2,0xa0,0xe1,0x01,0xe0,0xd0,0x21,0xcf,0x86,0xe5,0x02,0xdd,0xe4,0x7e,
++ 0xdc,0xe3,0x3c,0xdc,0xe2,0x1b,0xdc,0xe1,0x09,0xdc,0x10,0x08,0x05,0xff,0xe4,0xb8,
++ 0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x5e,0xde,0xe3,
++ 0x1d,0xde,0xe2,0xfc,0xdd,0xe1,0xeb,0xdd,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,
++ 0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xe5,0xde,0xe1,0xd4,0xde,
+ 0x10,0x09,0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,
+- 0xe2,0xa5,0xdf,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,
+- 0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0xeb,0xdf,0xd2,0x14,0xe1,
+- 0xba,0xdf,0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,
+- 0x00,0xe1,0xc6,0xdf,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,
+- 0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x1b,0xe5,0xd4,0x19,0xe3,0x54,0xe4,
+- 0xe2,0x32,0xe4,0xe1,0x21,0xe4,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,
+- 0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0x9e,0xe4,0xe1,0x8d,0xe4,0x10,0x09,0x05,0xff,
+- 0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xb6,
++ 0xe2,0x05,0xdf,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,
++ 0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x4b,0xdf,0xd2,0x14,0xe1,
++ 0x1a,0xdf,0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,
++ 0x00,0xe1,0x26,0xdf,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,
++ 0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x7b,0xe4,0xd4,0x19,0xe3,0xb4,0xe3,
++ 0xe2,0x92,0xe3,0xe1,0x81,0xe3,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,
++ 0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0xfe,0xe3,0xe1,0xed,0xe3,0x10,0x09,0x05,0xff,
++ 0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x16,
+ 0xe4,0x10,0x08,0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,
+ 0x11,0x10,0x08,0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,
+ 0x10,0x08,0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,
+- 0xe5,0xb8,0xe6,0xd4,0x1a,0xe3,0xf0,0xe5,0xe2,0xd6,0xe5,0xe1,0xc3,0xe5,0x10,0x08,
++ 0xe5,0x18,0xe6,0xd4,0x1a,0xe3,0x50,0xe5,0xe2,0x36,0xe5,0xe1,0x23,0xe5,0x10,0x08,
+ 0x05,0xff,0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,
+- 0x38,0xe6,0xe1,0x26,0xe6,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,
+- 0x83,0xa3,0x00,0xd2,0x13,0xe1,0x54,0xe6,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,
++ 0x98,0xe5,0xe1,0x86,0xe5,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,
++ 0x83,0xa3,0x00,0xd2,0x13,0xe1,0xb4,0xe5,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,
+ 0x05,0xff,0xe7,0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,
+ 0x00,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,
+- 0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,0x6a,0xe9,0xcf,0x86,0xd5,0x1d,0xe4,0xdf,
+- 0xe7,0xe3,0x9b,0xe7,0xe2,0x79,0xe7,0xe1,0x68,0xe7,0x10,0x09,0x05,0xff,0xf0,0xa3,
+- 0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x86,0xe8,0xe2,0x62,
+- 0xe8,0xe1,0x51,0xe8,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,
+- 0x8a,0x00,0xd3,0x18,0xe2,0xd1,0xe8,0xe1,0xc0,0xe8,0x10,0x09,0x05,0xff,0xf0,0xa6,
+- 0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xe9,0xe8,0x10,
++ 0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,0xca,0xe8,0xcf,0x86,0xd5,0x1d,0xe4,0x3f,
++ 0xe7,0xe3,0xfb,0xe6,0xe2,0xd9,0xe6,0xe1,0xc8,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa3,
++ 0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xe6,0xe7,0xe2,0xc2,
++ 0xe7,0xe1,0xb1,0xe7,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,
++ 0x8a,0x00,0xd3,0x18,0xe2,0x31,0xe8,0xe1,0x20,0xe8,0x10,0x09,0x05,0xff,0xf0,0xa6,
++ 0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x49,0xe8,0x10,
+ 0x08,0x05,0xff,0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,
+ 0x08,0x05,0xff,0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,
+ 0xff,0xe8,0x9e,0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdi_30200 */
+- 0x57,0x04,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x82,0x53,0xe3,0xbb,0x4e,0xe2,0x34,0x49,
+- 0xc1,0xe0,0x60,0x47,0xcf,0x86,0x65,0x44,0x47,0x01,0x00,0xd4,0x2a,0xe3,0x1c,0x9a,
+- 0xe2,0xcb,0x99,0xe1,0x9e,0x87,0xe0,0xf8,0x6a,0xcf,0x86,0xc5,0xe4,0x57,0x63,0xe3,
+- 0xa2,0x5e,0xe2,0x38,0x5c,0xe1,0x6b,0x5b,0xe0,0x30,0x5b,0xcf,0x86,0xe5,0xf5,0x5a,
+- 0x64,0xd8,0x5a,0x0b,0x00,0x83,0xe2,0xea,0xf0,0xe1,0xc3,0xed,0xe0,0x40,0xec,0xcf,
+- 0x86,0xd5,0x31,0xc4,0xe3,0xbb,0xc6,0xe2,0x94,0xc4,0xe1,0x75,0xc3,0xe0,0x05,0xba,
+- 0xcf,0x86,0xe5,0xf8,0xb5,0xe4,0xf1,0xb4,0xe3,0xe2,0xb3,0xe2,0x39,0xb3,0xe1,0x14,
+- 0xb3,0xe0,0xed,0xb2,0xcf,0x86,0xe5,0xba,0xb2,0x94,0x07,0x63,0xa5,0xb2,0x07,0x00,
+- 0x07,0x00,0xe4,0xac,0xeb,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,
+- 0xbb,0xd8,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xaa,0xd9,0xcf,0x86,0xe5,
+- 0x6f,0xd9,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xaa,0xd9,0xcf,0x06,0x13,
+- 0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x45,0xeb,0xe3,0x2e,0xea,0xd2,
+- 0xa0,0xe1,0xe4,0xdd,0xd0,0x21,0xcf,0x86,0xe5,0xe5,0xda,0xe4,0x61,0xda,0xe3,0x1f,
+- 0xda,0xe2,0xfe,0xd9,0xe1,0xec,0xd9,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,
+- 0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x41,0xdc,0xe3,0x00,0xdc,0xe2,
+- 0xdf,0xdb,0xe1,0xce,0xdb,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,
+- 0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xc8,0xdc,0xe1,0xb7,0xdc,0x10,0x09,0x05,
+- 0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0xe8,0xdc,
+- 0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,
+- 0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x2e,0xdd,0xd2,0x14,0xe1,0xfd,0xdc,0x10,
+- 0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x09,
+- 0xdd,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,
+- 0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x5e,0xe2,0xd4,0x19,0xe3,0x97,0xe1,0xe2,0x75,0xe1,
+- 0xe1,0x64,0xe1,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,
+- 0x00,0xd3,0x18,0xe2,0xe1,0xe1,0xe1,0xd0,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,
+- 0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xf9,0xe1,0x10,0x08,
+- 0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,
+- 0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,
+- 0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0xfb,0xe3,
+- 0xd4,0x1a,0xe3,0x33,0xe3,0xe2,0x19,0xe3,0xe1,0x06,0xe3,0x10,0x08,0x05,0xff,0xe7,
+- 0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x7b,0xe3,0xe1,
+- 0x69,0xe3,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,
+- 0xd2,0x13,0xe1,0x97,0xe3,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,
+- 0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,
+- 0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,
+- 0xe7,0xaa,0xae,0x00,0xe0,0xad,0xe6,0xcf,0x86,0xd5,0x1d,0xe4,0x22,0xe5,0xe3,0xde,
+- 0xe4,0xe2,0xbc,0xe4,0xe1,0xab,0xe4,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,
+- 0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xc9,0xe5,0xe2,0xa5,0xe5,0xe1,0x94,
+- 0xe5,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,
+- 0x18,0xe2,0x14,0xe6,0xe1,0x03,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,
+- 0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x2c,0xe6,0x10,0x08,0x05,0xff,
+- 0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,
+- 0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,
+- 0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x57,0x04,0x01,0x00,0xc6,0xd5,0x13,0xe4,0x68,0x53,0xe3,0xa2,0x4e,0xe2,0x1b,0x49,
++ 0xc1,0xe0,0x47,0x47,0xcf,0x06,0x01,0x00,0xd4,0x2a,0xe3,0x99,0x99,0xe2,0x48,0x99,
++ 0xe1,0x50,0x87,0xe0,0xe0,0x6a,0xcf,0x86,0xc5,0xe4,0x3f,0x63,0xe3,0x8a,0x5e,0xe2,
++ 0x20,0x5c,0xe1,0x53,0x5b,0xe0,0x18,0x5b,0xcf,0x86,0xe5,0xdd,0x5a,0x64,0xc0,0x5a,
++ 0x0b,0x00,0x83,0xe2,0x3c,0xf0,0xe1,0x26,0xed,0xe0,0xa3,0xeb,0xcf,0x86,0xd5,0x31,
++ 0xc4,0xe3,0x23,0xc6,0xe2,0xfc,0xc3,0xe1,0xdd,0xc2,0xe0,0x6d,0xb9,0xcf,0x86,0xe5,
++ 0x60,0xb5,0xe4,0x59,0xb4,0xe3,0x4a,0xb3,0xe2,0xa1,0xb2,0xe1,0x7c,0xb2,0xe0,0x55,
++ 0xb2,0xcf,0x86,0xe5,0x22,0xb2,0x94,0x07,0x63,0x0d,0xb2,0x07,0x00,0x07,0x00,0xe4,
++ 0x0f,0xeb,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0x1e,0xd8,0xcf,
++ 0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x0d,0xd9,0xcf,0x86,0xe5,0xd2,0xd8,0xcf,
++ 0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x0d,0xd9,0xcf,0x06,0x13,0x00,0xcf,0x86,
++ 0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xa8,0xea,0xe3,0x91,0xe9,0xd2,0xa0,0xe1,0x47,
++ 0xdd,0xd0,0x21,0xcf,0x86,0xe5,0x48,0xda,0xe4,0xc4,0xd9,0xe3,0x82,0xd9,0xe2,0x61,
++ 0xd9,0xe1,0x4f,0xd9,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,0xb8,
++ 0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xa4,0xdb,0xe3,0x63,0xdb,0xe2,0x42,0xdb,0xe1,
++ 0x31,0xdb,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,0x00,
++ 0xd4,0x34,0xd3,0x18,0xe2,0x2b,0xdc,0xe1,0x1a,0xdc,0x10,0x09,0x05,0xff,0xf0,0xa1,
++ 0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x4b,0xdc,0x91,0x11,0x10,
++ 0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,0xff,
++ 0xe5,0xac,0xbe,0x00,0xe3,0x91,0xdc,0xd2,0x14,0xe1,0x60,0xdc,0x10,0x08,0x05,0xff,
++ 0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x6c,0xdc,0x10,0x08,
++ 0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,0x6a,
++ 0xcf,0x86,0xe5,0xc1,0xe1,0xd4,0x19,0xe3,0xfa,0xe0,0xe2,0xd8,0xe0,0xe1,0xc7,0xe0,
++ 0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,0x18,
++ 0xe2,0x44,0xe1,0xe1,0x33,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,0x05,
++ 0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x5c,0xe1,0x10,0x08,0x05,0xff,0xe7,
++ 0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,0xe7,
++ 0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,0x86,
++ 0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x5e,0xe3,0xd4,0x1a,0xe3,
++ 0x96,0xe2,0xe2,0x7c,0xe2,0xe1,0x69,0xe2,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,0x00,
++ 0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0xde,0xe2,0xe1,0xcc,0xe2,0x10,
++ 0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,0xe1,
++ 0xfa,0xe2,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,0x00,
++ 0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,0xaa,
++ 0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,0xae,
++ 0x00,0xe0,0x10,0xe6,0xcf,0x86,0xd5,0x1d,0xe4,0x85,0xe4,0xe3,0x41,0xe4,0xe2,0x1f,
++ 0xe4,0xe1,0x0e,0xe4,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,0xe4,
++ 0x8f,0x95,0x00,0xd4,0x19,0xe3,0x2c,0xe5,0xe2,0x08,0xe5,0xe1,0xf7,0xe4,0x10,0x08,
++ 0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,0x77,
++ 0xe5,0xe1,0x66,0xe5,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,0xf0,
++ 0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x8f,0xe5,0x10,0x08,0x05,0xff,0xe8,0x9a,0x88,
++ 0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,0xa8,
++ 0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,0x05,
++ 0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdicf_c0100 */
+ 0xd7,0xb0,0x56,0x04,0x01,0x00,0x95,0xa8,0xd4,0x5e,0xd3,0x2e,0xd2,0x16,0xd1,0x0a,
+ 0x10,0x04,0x01,0x00,0x01,0xff,0x61,0x00,0x10,0x06,0x01,0xff,0x62,0x00,0x01,0xff,
+@@ -299,3184 +299,3174 @@ static const unsigned char utf8data[64256] = {
+ 0xd1,0x0c,0x10,0x06,0x01,0xff,0x74,0x00,0x01,0xff,0x75,0x00,0x10,0x06,0x01,0xff,
+ 0x76,0x00,0x01,0xff,0x77,0x00,0x92,0x16,0xd1,0x0c,0x10,0x06,0x01,0xff,0x78,0x00,
+ 0x01,0xff,0x79,0x00,0x10,0x06,0x01,0xff,0x7a,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xc6,0xe5,0xf9,0x14,0xe4,0x6f,0x0d,0xe3,0x39,0x08,0xe2,0x22,0x01,0xc1,0xd0,0x24,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x07,0x63,0xd8,0x43,0x01,0x00,0x93,0x13,0x52,
+- 0x04,0x01,0x00,0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xce,0xbc,0x00,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xe5,0xb3,0x44,0xd4,0x7f,0xd3,0x3f,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,
+- 0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x07,0x01,0xff,0xc3,
+- 0xa6,0x00,0x01,0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x65,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
+- 0x82,0x00,0x01,0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,
+- 0x80,0x00,0x01,0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,
+- 0x01,0xff,0x69,0xcc,0x88,0x00,0xd3,0x3b,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,
+- 0xc3,0xb0,0x00,0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,
+- 0x00,0x01,0xff,0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,
+- 0x00,0x01,0xff,0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,
+- 0x00,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb8,0x00,0x01,0xff,0x75,0xcc,
+- 0x80,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,
+- 0x10,0x07,0x01,0xff,0xc3,0xbe,0x00,0x01,0xff,0x73,0x73,0x00,0xe1,0xd4,0x03,0xe0,
+- 0xeb,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x61,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,
+- 0x61,0xcc,0x86,0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x61,0xcc,0xa8,0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,
+- 0x81,0x00,0x01,0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x63,0xcc,0x82,0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,
+- 0x87,0x00,0x01,0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,
+- 0x8c,0x00,0x01,0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x8c,0x00,
+- 0x01,0xff,0x64,0xcc,0x8c,0x00,0xd3,0x3b,0xd2,0x1b,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xc4,0x91,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x84,0x00,0x01,0xff,0x65,
+- 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x86,0x00,0x01,0xff,0x65,
+- 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa8,0x00,0x01,0xff,0x65,
+- 0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,
+- 0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,
+- 0x7b,0xd3,0x3b,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x87,0x00,0x01,
+- 0xff,0x67,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0xa7,0x00,0x01,0xff,0x67,
+- 0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0xff,0x68,
+- 0xcc,0x82,0x00,0x10,0x07,0x01,0xff,0xc4,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,
+- 0x69,0xcc,0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x37,0xd2,0x17,0xd1,0x0c,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0x87,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc4,0xb3,
+- 0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6a,0xcc,0x82,0x00,0x01,0xff,0x6a,
+- 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x6b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,
+- 0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6c,0xcc,0x81,0x00,0x10,
+- 0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,0x6c,0xcc,0xa7,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x10,0x08,0x01,
+- 0xff,0x6c,0xcc,0x8c,0x00,0x01,0xff,0xc5,0x80,0x00,0xcf,0x86,0xd5,0xed,0xd4,0x72,
+- 0xd3,0x37,0xd2,0x17,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc5,0x82,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0x6e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
+- 0xcc,0x81,0x00,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,
+- 0x00,0x01,0xff,0x6e,0xcc,0x8c,0x00,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
+- 0xcc,0x8c,0x00,0x01,0xff,0xca,0xbc,0x6e,0x00,0x10,0x07,0x01,0xff,0xc5,0x8b,0x00,
+- 0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,
+- 0x84,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,
+- 0xd3,0x3b,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0xff,
+- 0x6f,0xcc,0x8b,0x00,0x10,0x07,0x01,0xff,0xc5,0x93,0x00,0x01,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x72,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,
+- 0xff,0x72,0xcc,0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x72,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,
+- 0xff,0x73,0xcc,0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x73,0xcc,0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x73,
+- 0xcc,0xa7,0x00,0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x7b,0xd3,0x3b,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x73,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,
+- 0x08,0x01,0xff,0x74,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x74,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x10,0x07,0x01,
+- 0xff,0xc5,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,
+- 0x83,0x00,0x01,0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x84,0x00,
+- 0x01,0xff,0x75,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x86,0x00,
+- 0x01,0xff,0x75,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x8a,0x00,0x01,0xff,
+- 0x75,0xcc,0x8a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,
+- 0x8b,0x00,0x01,0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa8,0x00,
+- 0x01,0xff,0x75,0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x82,0x00,
+- 0x01,0xff,0x77,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x82,0x00,0x01,0xff,
+- 0x79,0xcc,0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,0x88,0x00,
+- 0x01,0xff,0x7a,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,
+- 0x7a,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,
+- 0x7a,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0xff,0x73,0x00,
+- 0xe0,0x65,0x01,0xcf,0x86,0xd5,0xb4,0xd4,0x5a,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xc9,0x93,0x00,0x10,0x07,0x01,0xff,0xc6,0x83,0x00,0x01,
+- 0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
+- 0xc9,0x94,0x00,0x01,0xff,0xc6,0x88,0x00,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xc9,0x96,0x00,0x10,0x07,0x01,0xff,0xc9,0x97,0x00,0x01,0xff,0xc6,0x8c,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xc7,0x9d,0x00,0x01,0xff,0xc9,0x99,
+- 0x00,0xd3,0x32,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0x9b,0x00,0x01,0xff,
+- 0xc6,0x92,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xa0,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xc9,0xa3,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0xa9,0x00,0x01,0xff,
+- 0xc9,0xa8,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0x99,0x00,0x01,0x00,
+- 0x01,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0xaf,0x00,0x01,0xff,0xc9,0xb2,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xb5,0x00,0xd4,0x5d,0xd3,0x34,0xd2,0x1b,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x10,
+- 0x07,0x01,0xff,0xc6,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0xa5,
+- 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x80,0x00,0x01,0xff,0xc6,0xa8,0x00,0xd2,
+- 0x0f,0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x83,0x00,0x01,0x00,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xc6,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x88,0x00,
+- 0x01,0xff,0x75,0xcc,0x9b,0x00,0xd3,0x33,0xd2,0x1d,0xd1,0x0f,0x10,0x08,0x01,0xff,
+- 0x75,0xcc,0x9b,0x00,0x01,0xff,0xca,0x8a,0x00,0x10,0x07,0x01,0xff,0xca,0x8b,0x00,
+- 0x01,0xff,0xc6,0xb4,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc6,0xb6,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x92,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,
+- 0xff,0xc6,0xb9,0x00,0x01,0x00,0x01,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xbd,
+- 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x44,0xd3,0x16,0x52,0x04,0x01,
+- 0x00,0x51,0x07,0x01,0xff,0xc7,0x86,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc7,0x89,
+- 0x00,0xd2,0x12,0x91,0x0b,0x10,0x07,0x01,0xff,0xc7,0x89,0x00,0x01,0x00,0x01,0xff,
+- 0xc7,0x8c,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x61,0xcc,0x8c,0x00,0x10,
+- 0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,0x69,0xcc,0x8c,0x00,0xd3,0x46,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x6f,0xcc,0x8c,
+- 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x8c,0x00,0xd1,
+- 0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,
+- 0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x88,
+- 0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,
+- 0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,
+- 0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,
+- 0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x88,
+- 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x87,0xd3,0x41,0xd2,
+- 0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,
+- 0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x01,0xff,
+- 0xc3,0xa6,0xcc,0x84,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc7,0xa5,0x00,0x01,0x00,
+- 0x10,0x08,0x01,0xff,0x67,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,
+- 0x10,0x08,0x01,0xff,0x6f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,
+- 0x8c,0x00,0xd3,0x38,0xd2,0x1a,0xd1,0x0f,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,
+- 0x01,0xff,0xc7,0xb3,0x00,0x10,0x07,0x01,0xff,0xc7,0xb3,0x00,0x01,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x67,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x10,0x07,
+- 0x04,0xff,0xc6,0x95,0x00,0x04,0xff,0xc6,0xbf,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,
+- 0x04,0xff,0x6e,0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,
+- 0x61,0xcc,0x8a,0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,
+- 0x10,0x09,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,
+- 0xe2,0x31,0x02,0xe1,0xc3,0x44,0xe0,0xc8,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x8f,0x00,0x01,0xff,0x61,
+- 0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,
+- 0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,
+- 0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,
+- 0x08,0x01,0xff,0x6f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,
+- 0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,
+- 0x08,0x01,0xff,0x75,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x04,0xff,0x73,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,
+- 0x08,0x04,0xff,0x74,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0xd1,0x0b,0x10,
+- 0x07,0x04,0xff,0xc8,0x9d,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x68,0xcc,0x8c,0x00,
+- 0x04,0xff,0x68,0xcc,0x8c,0x00,0xd4,0x79,0xd3,0x31,0xd2,0x16,0xd1,0x0b,0x10,0x07,
+- 0x06,0xff,0xc6,0x9e,0x00,0x07,0x00,0x10,0x07,0x04,0xff,0xc8,0xa3,0x00,0x04,0x00,
+- 0xd1,0x0b,0x10,0x07,0x04,0xff,0xc8,0xa5,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x61,
+- 0xcc,0x87,0x00,0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,
+- 0xff,0x65,0xcc,0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x6f,
+- 0xcc,0x88,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,
+- 0x0a,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,
+- 0x00,0x10,0x08,0x04,0xff,0x6f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0xd3,
+- 0x27,0xe2,0x21,0x43,0xd1,0x14,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,
+- 0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x79,0xcc,0x84,0x00,
+- 0x04,0xff,0x79,0xcc,0x84,0x00,0xd2,0x13,0x51,0x04,0x08,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0xa5,0x00,0x08,0xff,0xc8,0xbc,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,
+- 0xff,0xc6,0x9a,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa6,0x00,0x08,0x00,0xcf,0x86,
+- 0x95,0x5f,0x94,0x5b,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,
+- 0xc9,0x82,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xc6,0x80,0x00,0xd1,0x0e,0x10,0x07,
+- 0x09,0xff,0xca,0x89,0x00,0x09,0xff,0xca,0x8c,0x00,0x10,0x07,0x09,0xff,0xc9,0x87,
+- 0x00,0x09,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x89,0x00,0x09,0x00,
+- 0x10,0x07,0x09,0xff,0xc9,0x8b,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,
+- 0x8d,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xc9,0x8f,0x00,0x09,0x00,0x01,0x00,0x01,
+- 0x00,0xd1,0x8b,0xd0,0x0c,0xcf,0x86,0xe5,0x10,0x43,0x64,0xef,0x42,0x01,0xe6,0xcf,
+- 0x86,0xd5,0x2a,0xe4,0x99,0x43,0xe3,0x7f,0x43,0xd2,0x11,0xe1,0x5e,0x43,0x10,0x07,
+- 0x01,0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0xe1,0x65,0x43,0x10,0x09,0x01,
+- 0xff,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0x00,0xd4,0x0f,0x93,0x0b,0x92,
+- 0x07,0x61,0xab,0x43,0x01,0xea,0x06,0xe6,0x06,0xe6,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x0a,0xff,0xcd,0xb1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb3,0x00,
+- 0x0a,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x10,0x07,0x0a,
+- 0xff,0xcd,0xb7,0x00,0x0a,0x00,0xd2,0x07,0x61,0x97,0x43,0x00,0x00,0x51,0x04,0x09,
+- 0x00,0x10,0x06,0x01,0xff,0x3b,0x00,0x10,0xff,0xcf,0xb3,0x00,0xe0,0x31,0x01,0xcf,
+- 0x86,0xd5,0xd3,0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,
+- 0x00,0x01,0xff,0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x81,0x00,0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,
+- 0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,
+- 0x81,0x00,0xd3,0x3c,0xd2,0x20,0xd1,0x12,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0x00,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,
+- 0xff,0xce,0xb3,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb4,0x00,0x01,0xff,0xce,
+- 0xb5,0x00,0x10,0x07,0x01,0xff,0xce,0xb6,0x00,0x01,0xff,0xce,0xb7,0x00,0xd2,0x1c,
+- 0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb8,0x00,0x01,0xff,0xce,0xb9,0x00,0x10,0x07,
+- 0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xce,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,
+- 0xce,0xbc,0x00,0x01,0xff,0xce,0xbd,0x00,0x10,0x07,0x01,0xff,0xce,0xbe,0x00,0x01,
+- 0xff,0xce,0xbf,0x00,0xe4,0x85,0x43,0xd3,0x35,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,
+- 0xff,0xcf,0x80,0x00,0x01,0xff,0xcf,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,
+- 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcf,0x84,0x00,0x01,0xff,0xcf,0x85,0x00,
+- 0x10,0x07,0x01,0xff,0xcf,0x86,0x00,0x01,0xff,0xcf,0x87,0x00,0xe2,0x2b,0x43,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xcf,0x88,0x00,0x01,0xff,0xcf,0x89,0x00,0x10,0x09,0x01,
+- 0xff,0xce,0xb9,0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xcf,0x86,0xd5,
+- 0x94,0xd4,0x3c,0xd3,0x13,0x92,0x0f,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,
+- 0x83,0x00,0x01,0x00,0x01,0x00,0xd2,0x07,0x61,0x3a,0x43,0x01,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x0a,0xff,0xcf,0x97,0x00,0xd3,0x2c,0xd2,
+- 0x11,0xe1,0x46,0x43,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb8,0x00,
+- 0xd1,0x10,0x10,0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0xff,0xcf,0x86,0x00,
+- 0x10,0x07,0x01,0xff,0xcf,0x80,0x00,0x04,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,
+- 0xff,0xcf,0x99,0x00,0x06,0x00,0x10,0x07,0x01,0xff,0xcf,0x9b,0x00,0x04,0x00,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xcf,0x9d,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0x9f,
+- 0x00,0x04,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,
+- 0xa1,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,
+- 0x07,0x01,0xff,0xcf,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xa7,0x00,0x01,
+- 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa9,0x00,0x01,0x00,0x10,0x07,
+- 0x01,0xff,0xcf,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xad,0x00,
+- 0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xaf,0x00,0x01,0x00,0xd3,0x2b,0xd2,0x12,0x91,
+- 0x0e,0x10,0x07,0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xcf,0x81,0x00,0x01,0x00,0xd1,
+- 0x0e,0x10,0x07,0x05,0xff,0xce,0xb8,0x00,0x05,0xff,0xce,0xb5,0x00,0x10,0x04,0x06,
+- 0x00,0x07,0xff,0xcf,0xb8,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x07,0x00,0x07,0xff,
+- 0xcf,0xb2,0x00,0x10,0x07,0x07,0xff,0xcf,0xbb,0x00,0x07,0x00,0xd1,0x0b,0x10,0x04,
+- 0x08,0x00,0x08,0xff,0xcd,0xbb,0x00,0x10,0x07,0x08,0xff,0xcd,0xbc,0x00,0x08,0xff,
+- 0xcd,0xbd,0x00,0xe3,0xed,0x46,0xe2,0x3d,0x05,0xe1,0x27,0x02,0xe0,0x66,0x01,0xcf,
+- 0x86,0xd5,0xf0,0xd4,0x7e,0xd3,0x40,0xd2,0x22,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,
+- 0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x07,0x01,0xff,0xd1,
+- 0x92,0x00,0x01,0xff,0xd0,0xb3,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,
+- 0x94,0x00,0x01,0xff,0xd1,0x95,0x00,0x10,0x07,0x01,0xff,0xd1,0x96,0x00,0x01,0xff,
+- 0xd1,0x96,0xcc,0x88,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x98,0x00,
+- 0x01,0xff,0xd1,0x99,0x00,0x10,0x07,0x01,0xff,0xd1,0x9a,0x00,0x01,0xff,0xd1,0x9b,
+- 0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,
+- 0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0xff,0xd1,0x9f,
+- 0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xb0,0x00,0x01,0xff,
+- 0xd0,0xb1,0x00,0x10,0x07,0x01,0xff,0xd0,0xb2,0x00,0x01,0xff,0xd0,0xb3,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xd0,0xb4,0x00,0x01,0xff,0xd0,0xb5,0x00,0x10,0x07,0x01,
+- 0xff,0xd0,0xb6,0x00,0x01,0xff,0xd0,0xb7,0x00,0xd2,0x1e,0xd1,0x10,0x10,0x07,0x01,
+- 0xff,0xd0,0xb8,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x10,0x07,0x01,0xff,0xd0,
+- 0xba,0x00,0x01,0xff,0xd0,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xbc,0x00,
+- 0x01,0xff,0xd0,0xbd,0x00,0x10,0x07,0x01,0xff,0xd0,0xbe,0x00,0x01,0xff,0xd0,0xbf,
+- 0x00,0xe4,0x25,0x42,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x80,
+- 0x00,0x01,0xff,0xd1,0x81,0x00,0x10,0x07,0x01,0xff,0xd1,0x82,0x00,0x01,0xff,0xd1,
+- 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x84,0x00,0x01,0xff,0xd1,0x85,0x00,
+- 0x10,0x07,0x01,0xff,0xd1,0x86,0x00,0x01,0xff,0xd1,0x87,0x00,0xd2,0x1c,0xd1,0x0e,
+- 0x10,0x07,0x01,0xff,0xd1,0x88,0x00,0x01,0xff,0xd1,0x89,0x00,0x10,0x07,0x01,0xff,
+- 0xd1,0x8a,0x00,0x01,0xff,0xd1,0x8b,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x8c,
+- 0x00,0x01,0xff,0xd1,0x8d,0x00,0x10,0x07,0x01,0xff,0xd1,0x8e,0x00,0x01,0xff,0xd1,
+- 0x8f,0x00,0xcf,0x86,0xd5,0x07,0x64,0xcf,0x41,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,
+- 0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
+- 0xd1,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa5,0x00,0x01,0x00,
+- 0x10,0x07,0x01,0xff,0xd1,0xa7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,
+- 0xff,0xd1,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xab,0x00,0x01,0x00,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xd1,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xaf,
+- 0x00,0x01,0x00,0xd3,0x33,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb1,0x00,
+- 0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xb3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,
+- 0xff,0xd1,0xb5,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,
+- 0xff,0xd1,0xb5,0xcc,0x8f,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb9,
+- 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd1,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbf,0x00,0x01,0x00,
+- 0xe0,0x41,0x01,0xcf,0x86,0xd5,0x8e,0xd4,0x36,0xd3,0x11,0xe2,0x91,0x41,0xe1,0x88,
+- 0x41,0x10,0x07,0x01,0xff,0xd2,0x81,0x00,0x01,0x00,0xd2,0x0f,0x51,0x04,0x04,0x00,
+- 0x10,0x07,0x06,0xff,0xd2,0x8b,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,0xd2,
+- 0x8d,0x00,0x04,0x00,0x10,0x07,0x04,0xff,0xd2,0x8f,0x00,0x04,0x00,0xd3,0x2c,0xd2,
+- 0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x91,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
+- 0xd2,0x93,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x95,0x00,0x01,0x00,
+- 0x10,0x07,0x01,0xff,0xd2,0x97,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,
+- 0xff,0xd2,0x99,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9b,0x00,0x01,0x00,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xd2,0x9d,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9f,
+- 0x00,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,
+- 0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,
+- 0x07,0x01,0xff,0xd2,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa7,0x00,0x01,
+- 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa9,0x00,0x01,0x00,0x10,0x07,
+- 0x01,0xff,0xd2,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xad,0x00,
+- 0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xaf,0x00,0x01,0x00,0xd3,0x2c,0xd2,0x16,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xd2,0xb1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xb3,
+- 0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb5,0x00,0x01,0x00,0x10,0x07,
+- 0x01,0xff,0xd2,0xb7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,
+- 0xb9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,
+- 0x07,0x01,0xff,0xd2,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbf,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0xdc,0xd4,0x5a,0xd3,0x36,0xd2,0x20,0xd1,0x10,0x10,0x07,0x01,
+- 0xff,0xd3,0x8f,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,
+- 0xb6,0xcc,0x86,0x00,0x01,0xff,0xd3,0x84,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x06,
+- 0xff,0xd3,0x86,0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x88,0x00,0xd2,0x16,0xd1,
+- 0x0b,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8a,0x00,0x10,0x04,0x06,0x00,0x01,0xff,
+- 0xd3,0x8c,0x00,0xe1,0x69,0x40,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8e,0x00,0xd3,
+- 0x41,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x01,0xff,
+- 0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x01,0xff,
+- 0xd0,0xb0,0xcc,0x88,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x95,0x00,0x01,0x00,
+- 0x10,0x09,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,
+- 0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x99,0x00,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xd3,0x99,0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,
+- 0x09,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,
+- 0x82,0xd3,0x41,0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa1,0x00,0x01,0x00,
+- 0x10,0x09,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,
+- 0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,
+- 0x88,0x00,0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa9,0x00,0x01,0x00,0x10,
+- 0x09,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,
+- 0x12,0x10,0x09,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,
+- 0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,
+- 0x00,0xd3,0x41,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,
+- 0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,
+- 0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x87,0xcc,
+- 0x88,0x00,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,0x10,0x07,0x08,0xff,0xd3,0xb7,0x00,
+- 0x08,0x00,0xd2,0x1d,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x01,
+- 0xff,0xd1,0x8b,0xcc,0x88,0x00,0x10,0x07,0x09,0xff,0xd3,0xbb,0x00,0x09,0x00,0xd1,
+- 0x0b,0x10,0x07,0x09,0xff,0xd3,0xbd,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd3,0xbf,
+- 0x00,0x09,0x00,0xe1,0x26,0x02,0xe0,0x78,0x01,0xcf,0x86,0xd5,0xb0,0xd4,0x58,0xd3,
+- 0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x81,0x00,0x06,0x00,0x10,0x07,
+- 0x06,0xff,0xd4,0x83,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x85,0x00,
+- 0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x87,0x00,0x06,0x00,0xd2,0x16,0xd1,0x0b,0x10,
+- 0x07,0x06,0xff,0xd4,0x89,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8b,0x00,0x06,
+- 0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x8d,0x00,0x06,0x00,0x10,0x07,0x06,0xff,
+- 0xd4,0x8f,0x00,0x06,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xd4,
+- 0x91,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd4,0x93,0x00,0x09,0x00,0xd1,0x0b,0x10,
+- 0x07,0x0a,0xff,0xd4,0x95,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x97,0x00,0x0a,
+- 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x99,0x00,0x0a,0x00,0x10,0x07,
+- 0x0a,0xff,0xd4,0x9b,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x9d,0x00,
+- 0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x9f,0x00,0x0a,0x00,0xd4,0x58,0xd3,0x2c,0xd2,
+- 0x16,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0xa1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,
+- 0xd4,0xa3,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xd4,0xa5,0x00,0x0b,0x00,
+- 0x10,0x07,0x0c,0xff,0xd4,0xa7,0x00,0x0c,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x10,
+- 0xff,0xd4,0xa9,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xab,0x00,0x10,0x00,0xd1,
+- 0x0b,0x10,0x07,0x10,0xff,0xd4,0xad,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xaf,
+- 0x00,0x10,0x00,0xd3,0x35,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,
+- 0xa1,0x00,0x10,0x07,0x01,0xff,0xd5,0xa2,0x00,0x01,0xff,0xd5,0xa3,0x00,0xd1,0x0e,
+- 0x10,0x07,0x01,0xff,0xd5,0xa4,0x00,0x01,0xff,0xd5,0xa5,0x00,0x10,0x07,0x01,0xff,
+- 0xd5,0xa6,0x00,0x01,0xff,0xd5,0xa7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,
+- 0xd5,0xa8,0x00,0x01,0xff,0xd5,0xa9,0x00,0x10,0x07,0x01,0xff,0xd5,0xaa,0x00,0x01,
+- 0xff,0xd5,0xab,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xac,0x00,0x01,0xff,0xd5,
+- 0xad,0x00,0x10,0x07,0x01,0xff,0xd5,0xae,0x00,0x01,0xff,0xd5,0xaf,0x00,0xcf,0x86,
+- 0xe5,0x08,0x3f,0xd4,0x70,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,
+- 0xb0,0x00,0x01,0xff,0xd5,0xb1,0x00,0x10,0x07,0x01,0xff,0xd5,0xb2,0x00,0x01,0xff,
+- 0xd5,0xb3,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb4,0x00,0x01,0xff,0xd5,0xb5,
+- 0x00,0x10,0x07,0x01,0xff,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb7,0x00,0xd2,0x1c,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xd5,0xb8,0x00,0x01,0xff,0xd5,0xb9,0x00,0x10,0x07,0x01,
+- 0xff,0xd5,0xba,0x00,0x01,0xff,0xd5,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,
+- 0xbc,0x00,0x01,0xff,0xd5,0xbd,0x00,0x10,0x07,0x01,0xff,0xd5,0xbe,0x00,0x01,0xff,
+- 0xd5,0xbf,0x00,0xe3,0x87,0x3e,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x80,
+- 0x00,0x01,0xff,0xd6,0x81,0x00,0x10,0x07,0x01,0xff,0xd6,0x82,0x00,0x01,0xff,0xd6,
+- 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x84,0x00,0x01,0xff,0xd6,0x85,0x00,
+- 0x10,0x07,0x01,0xff,0xd6,0x86,0x00,0x00,0x00,0xe0,0x2f,0x3f,0xcf,0x86,0xe5,0xc0,
+- 0x3e,0xe4,0x97,0x3e,0xe3,0x76,0x3e,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xd5,0xa5,0xd6,0x82,0x00,0xe4,0x3e,0x25,0xe3,0xc3,0x1a,
+- 0xe2,0x7b,0x81,0xe1,0xc0,0x13,0xd0,0x1e,0xcf,0x86,0xc5,0xe4,0x08,0x4b,0xe3,0x53,
+- 0x46,0xe2,0xe9,0x43,0xe1,0x1c,0x43,0xe0,0xe1,0x42,0xcf,0x86,0xe5,0xa6,0x42,0x64,
+- 0x89,0x42,0x0b,0x00,0xcf,0x86,0xe5,0xfa,0x01,0xe4,0x03,0x56,0xe3,0x76,0x01,0xe2,
+- 0x8e,0x53,0xd1,0x0c,0xe0,0xef,0x52,0xcf,0x86,0x65,0x8d,0x52,0x04,0x00,0xe0,0x0d,
+- 0x01,0xcf,0x86,0xd5,0x0a,0xe4,0x10,0x53,0x63,0xff,0x52,0x0a,0x00,0xd4,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x80,0x00,0x01,0xff,0xe2,
+- 0xb4,0x81,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x82,0x00,0x01,0xff,0xe2,0xb4,0x83,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x84,0x00,0x01,0xff,0xe2,0xb4,0x85,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x86,0x00,0x01,0xff,0xe2,0xb4,0x87,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x88,0x00,0x01,0xff,0xe2,0xb4,0x89,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x8a,0x00,0x01,0xff,0xe2,0xb4,0x8b,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x8c,0x00,0x01,0xff,0xe2,0xb4,0x8d,0x00,0x10,
+- 0x08,0x01,0xff,0xe2,0xb4,0x8e,0x00,0x01,0xff,0xe2,0xb4,0x8f,0x00,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x90,0x00,0x01,0xff,0xe2,0xb4,0x91,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x92,0x00,0x01,0xff,0xe2,0xb4,0x93,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x94,0x00,0x01,0xff,0xe2,0xb4,0x95,0x00,0x10,
+- 0x08,0x01,0xff,0xe2,0xb4,0x96,0x00,0x01,0xff,0xe2,0xb4,0x97,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x98,0x00,0x01,0xff,0xe2,0xb4,0x99,0x00,0x10,
+- 0x08,0x01,0xff,0xe2,0xb4,0x9a,0x00,0x01,0xff,0xe2,0xb4,0x9b,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe2,0xb4,0x9c,0x00,0x01,0xff,0xe2,0xb4,0x9d,0x00,0x10,0x08,0x01,
+- 0xff,0xe2,0xb4,0x9e,0x00,0x01,0xff,0xe2,0xb4,0x9f,0x00,0xcf,0x86,0xe5,0x42,0x52,
+- 0x94,0x50,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa0,0x00,
+- 0x01,0xff,0xe2,0xb4,0xa1,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa2,0x00,0x01,0xff,
+- 0xe2,0xb4,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa4,0x00,0x01,0xff,
+- 0xe2,0xb4,0xa5,0x00,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xa7,0x00,0x52,0x04,
+- 0x00,0x00,0x91,0x0c,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xad,0x00,0x00,0x00,
+- 0x01,0x00,0xd2,0x1b,0xe1,0xfc,0x52,0xe0,0xad,0x52,0xcf,0x86,0x95,0x0f,0x94,0x0b,
+- 0x93,0x07,0x62,0x92,0x52,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd1,0x13,0xe0,
+- 0xd3,0x53,0xcf,0x86,0x95,0x0a,0xe4,0xa8,0x53,0x63,0x97,0x53,0x04,0x00,0x04,0x00,
+- 0xd0,0x0d,0xcf,0x86,0x95,0x07,0x64,0x22,0x54,0x08,0x00,0x04,0x00,0xcf,0x86,0x55,
+- 0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x07,0x62,0x2f,0x54,0x04,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0xb0,0x00,0x11,0xff,0xe1,0x8f,0xb1,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0xb2,0x00,0x11,0xff,0xe1,0x8f,0xb3,0x00,0x91,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0xb4,0x00,0x11,0xff,0xe1,0x8f,0xb5,0x00,0x00,0x00,
+- 0xd4,0x1c,0xe3,0xe0,0x56,0xe2,0x17,0x56,0xe1,0xda,0x55,0xe0,0xbb,0x55,0xcf,0x86,
+- 0x95,0x0a,0xe4,0xa4,0x55,0x63,0x88,0x55,0x04,0x00,0x04,0x00,0xe3,0xd2,0x01,0xe2,
+- 0x2b,0x5a,0xd1,0x0c,0xe0,0x4c,0x59,0xcf,0x86,0x65,0x25,0x59,0x0a,0x00,0xe0,0x9c,
+- 0x59,0xcf,0x86,0xd5,0xc5,0xd4,0x45,0xd3,0x31,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x12,
+- 0xff,0xd0,0xb2,0x00,0x12,0xff,0xd0,0xb4,0x00,0x10,0x07,0x12,0xff,0xd0,0xbe,0x00,
+- 0x12,0xff,0xd1,0x81,0x00,0x51,0x07,0x12,0xff,0xd1,0x82,0x00,0x10,0x07,0x12,0xff,
+- 0xd1,0x8a,0x00,0x12,0xff,0xd1,0xa3,0x00,0x92,0x10,0x91,0x0c,0x10,0x08,0x12,0xff,
+- 0xea,0x99,0x8b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x14,0xff,0xe1,0x83,0x90,0x00,0x14,0xff,0xe1,0x83,0x91,0x00,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0x92,0x00,0x14,0xff,0xe1,0x83,0x93,0x00,0xd1,0x10,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0x94,0x00,0x14,0xff,0xe1,0x83,0x95,0x00,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0x96,0x00,0x14,0xff,0xe1,0x83,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0x98,0x00,0x14,0xff,0xe1,0x83,0x99,0x00,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0x9a,0x00,0x14,0xff,0xe1,0x83,0x9b,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0x9c,0x00,0x14,0xff,0xe1,0x83,0x9d,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
+- 0x9e,0x00,0x14,0xff,0xe1,0x83,0x9f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x14,0xff,0xe1,0x83,0xa0,0x00,0x14,0xff,0xe1,0x83,0xa1,0x00,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0xa2,0x00,0x14,0xff,0xe1,0x83,0xa3,0x00,0xd1,0x10,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0xa4,0x00,0x14,0xff,0xe1,0x83,0xa5,0x00,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0xa6,0x00,0x14,0xff,0xe1,0x83,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0xa8,0x00,0x14,0xff,0xe1,0x83,0xa9,0x00,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0xaa,0x00,0x14,0xff,0xe1,0x83,0xab,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0xac,0x00,0x14,0xff,0xe1,0x83,0xad,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
+- 0xae,0x00,0x14,0xff,0xe1,0x83,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x14,0xff,0xe1,0x83,0xb0,0x00,0x14,0xff,0xe1,0x83,0xb1,0x00,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0xb2,0x00,0x14,0xff,0xe1,0x83,0xb3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0xb4,0x00,0x14,0xff,0xe1,0x83,0xb5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
+- 0xb6,0x00,0x14,0xff,0xe1,0x83,0xb7,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x14,0xff,
+- 0xe1,0x83,0xb8,0x00,0x14,0xff,0xe1,0x83,0xb9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
+- 0xba,0x00,0x00,0x00,0xd1,0x0c,0x10,0x04,0x00,0x00,0x14,0xff,0xe1,0x83,0xbd,0x00,
+- 0x10,0x08,0x14,0xff,0xe1,0x83,0xbe,0x00,0x14,0xff,0xe1,0x83,0xbf,0x00,0xe2,0x9d,
+- 0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,0x40,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa5,0x00,0x01,0xff,0x61,0xcc,
+- 0xa5,0x00,0x10,0x08,0x01,0xff,0x62,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x62,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,0x00,
+- 0x10,0x08,0x01,0xff,0x62,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,0x24,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,
+- 0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x87,0x00,0x01,0xff,0x64,0xcc,
+- 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa3,0x00,0x01,0xff,0x64,0xcc,
+- 0xa3,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,0x00,
+- 0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa7,0x00,0x01,0xff,
+- 0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xad,0x00,0x01,0xff,0x64,0xcc,
+- 0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,
+- 0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,
+- 0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x65,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
+- 0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,
+- 0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,
+- 0x66,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,0x00,
+- 0x10,0x08,0x01,0xff,0x68,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x68,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,0x08,
+- 0x01,0xff,0x68,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x68,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,0x08,
+- 0x01,0xff,0x68,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,0xff,
+- 0x69,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,0x40,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x81,0x00,0x01,0xff,0x6b,0xcc,
+- 0x81,0x00,0x10,0x08,0x01,0xff,0x6b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,0x00,
+- 0x10,0x08,0x01,0xff,0x6c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,0x24,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,0xcc,
+- 0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0xb1,0x00,0x01,0xff,0x6c,0xcc,
+- 0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xad,0x00,0x01,0xff,0x6c,0xcc,
+- 0xad,0x00,0x10,0x08,0x01,0xff,0x6d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,0x00,
+- 0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x6d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6d,
+- 0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
+- 0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa3,
+- 0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
+- 0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xad,
+- 0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,
+- 0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,
+- 0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,0xd2,
+- 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x6f,
+- 0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x70,0xcc,0x81,
+- 0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x70,0xcc,0x87,0x00,0x01,
+- 0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x87,
+- 0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xa3,0x00,0x01,
+- 0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,
+- 0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xb1,
+- 0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x73,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,0x01,
+- 0xff,0x73,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,
+- 0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x10,
+- 0x0a,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,
+- 0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x01,
+- 0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0x87,0x00,0x01,
+- 0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xa3,0x00,0x01,
+- 0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0xb1,0x00,0x01,0xff,0x74,
+- 0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xad,
+- 0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa4,0x00,0x01,
+- 0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xb0,0x00,0x01,
+- 0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xad,0x00,0x01,0xff,0x75,
+- 0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,
+- 0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x84,
+- 0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x76,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x76,
+- 0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x11,0x02,0xcf,0x86,0xd5,0xe2,
+- 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x80,0x00,
+- 0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x81,0x00,0x01,0xff,
+- 0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x88,0x00,0x01,0xff,
+- 0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x87,0x00,0x01,0xff,0x77,0xcc,
+- 0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0xa3,0x00,0x01,0xff,
+- 0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x78,0xcc,0x87,0x00,0x01,0xff,0x78,0xcc,
+- 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x78,0xcc,0x88,0x00,0x01,0xff,0x78,0xcc,
+- 0x88,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,0x00,
+- 0xd3,0x33,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x82,0x00,0x01,0xff,
+- 0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0xa3,0x00,0x01,0xff,0x7a,0xcc,
+- 0xa3,0x00,0xe1,0x12,0x59,0x10,0x08,0x01,0xff,0x7a,0xcc,0xb1,0x00,0x01,0xff,0x7a,
+- 0xcc,0xb1,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,
+- 0xff,0x79,0xcc,0x8a,0x00,0x10,0x08,0x01,0xff,0x61,0xca,0xbe,0x00,0x02,0xff,0x73,
+- 0xcc,0x87,0x00,0x51,0x04,0x0a,0x00,0x10,0x07,0x0a,0xff,0x73,0x73,0x00,0x0a,0x00,
+- 0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa3,0x00,
+- 0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x89,0x00,0x01,0xff,
+- 0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,
+- 0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,
+- 0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,
+- 0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,
+- 0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
+- 0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,
+- 0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,
+- 0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,
+- 0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,
+- 0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,
+- 0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x65,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
+- 0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,
+- 0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,
+- 0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,
+- 0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,
+- 0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,
+- 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,
+- 0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,
+- 0x0a,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x89,0x00,0x01,0xff,0x69,
+- 0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,
+- 0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,
+- 0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,
+- 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
+- 0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,
+- 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,
+- 0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,
+- 0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,
+- 0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,
+- 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x01,
+- 0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,
+- 0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,
+- 0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x89,
+- 0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,
+- 0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,
+- 0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,
+- 0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,
+- 0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,
+- 0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,
+- 0xff,0x79,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x79,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,
+- 0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x79,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x10,0x08,0x0a,0xff,0xe1,
+- 0xbb,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbd,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbf,0x00,0x0a,0x00,0xe1,0xbf,0x02,0xe0,0xa1,
+- 0x01,0xcf,0x86,0xd5,0xc6,0xd4,0x6c,0xd3,0x18,0xe2,0x0e,0x59,0xe1,0xf7,0x58,0x10,
+- 0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,0xd2,
+- 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,
+- 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
+- 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,
+- 0x00,0xd3,0x18,0xe2,0x4a,0x59,0xe1,0x33,0x59,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,
+- 0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xce,0xb5,0xcc,0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,
+- 0xff,0xce,0xb5,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,
+- 0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,
+- 0xce,0xb5,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd4,0x6c,0xd3,0x18,0xe2,0x74,0x59,
+- 0xe1,0x5d,0x59,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,
+- 0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,
+- 0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,
+- 0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,
+- 0xcc,0x94,0xcd,0x82,0x00,0xd3,0x18,0xe2,0xb0,0x59,0xe1,0x99,0x59,0x10,0x09,0x01,
+- 0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,
+- 0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,
+- 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,
+- 0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,
+- 0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,0xcf,
+- 0x86,0xd5,0xac,0xd4,0x5a,0xd3,0x18,0xe2,0xed,0x59,0xe1,0xd6,0x59,0x10,0x09,0x01,
+- 0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,
+- 0x00,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,
+- 0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,
+- 0x81,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x18,0xe2,
+- 0x17,0x5a,0xe1,0x00,0x5a,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,
+- 0xcf,0x85,0xcc,0x94,0x00,0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,
+- 0x85,0xcc,0x94,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x80,
+- 0x00,0xd1,0x0f,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,
+- 0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xe4,0xd3,0x5a,
+- 0xd3,0x18,0xe2,0x52,0x5a,0xe1,0x3b,0x5a,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,
+- 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,
+- 0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0x00,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xcf,
+- 0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,
+- 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xe0,0xd9,0x02,0xcf,0x86,0xe5,
+- 0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,
+- 0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,
+- 0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,
+- 0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,
+- 0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,
+- 0xb1,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,
+- 0xce,0xb1,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,
+- 0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,
+- 0xb1,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,
+- 0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,
+- 0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,
+- 0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x64,0xd2,0x30,0xd1,0x16,
+- 0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,
+- 0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,
+- 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,
+- 0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,
+- 0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,
+- 0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,
+- 0xb7,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,
+- 0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,
+- 0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,
+- 0xb7,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,
+- 0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,
+- 0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,
+- 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,
+- 0xcf,0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,
+- 0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,
+- 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,
+- 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,
+- 0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,
+- 0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,
+- 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,
+- 0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,
+- 0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,
+- 0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,
+- 0x89,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x49,0xd2,0x26,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,0x84,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xb1,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,
+- 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcd,0x82,0xce,0xb9,
+- 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,
+- 0xce,0xb1,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb1,0xcc,0x81,0x00,0xe1,0xf3,0x5a,0x10,0x09,0x01,0xff,0xce,0xb1,0xce,0xb9,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0xbd,0xd4,0x7e,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,
+- 0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xce,0xb7,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,
+- 0xb7,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,
+- 0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,
+- 0x00,0xe1,0x02,0x5b,0x10,0x09,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0x01,0xff,0xe1,
+- 0xbe,0xbf,0xcc,0x80,0x00,0xd3,0x18,0xe2,0x28,0x5b,0xe1,0x11,0x5b,0x10,0x09,0x01,
+- 0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0xe2,0x4c,0x5b,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,
+- 0x81,0x00,0xd4,0x51,0xd3,0x18,0xe2,0x6f,0x5b,0xe1,0x58,0x5b,0x10,0x09,0x01,0xff,
+- 0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,
+- 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,
+- 0xe1,0x8f,0x5b,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,
+- 0xcc,0x80,0x00,0xd3,0x3b,0xd2,0x18,0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,
+- 0x89,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0xd1,0x0f,0x10,
+- 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
+- 0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,
+- 0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,
+- 0x81,0x00,0xe1,0x99,0x5b,0x10,0x09,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0x01,0xff,
+- 0xc2,0xb4,0x00,0xe0,0x0c,0x68,0xcf,0x86,0xe5,0x23,0x02,0xe4,0x25,0x01,0xe3,0x85,
+- 0x5e,0xd2,0x2a,0xe1,0x5f,0x5c,0xe0,0xdd,0x5b,0xcf,0x86,0xe5,0xbb,0x5b,0x94,0x1b,
+- 0xe3,0xa4,0x5b,0x92,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,
+- 0xff,0xe2,0x80,0x83,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd1,0xd6,0xd0,0x46,0xcf,
+- 0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
+- 0x00,0x10,0x07,0x01,0xff,0xcf,0x89,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,0x00,
+- 0x10,0x06,0x01,0xff,0x6b,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x01,0x00,0xe3,0x25,
+- 0x5d,0x92,0x10,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0x8e,0x00,0x01,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x0a,0xe4,0x42,0x5d,0x63,0x2d,0x5d,0x06,0x00,0x94,
+- 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb0,0x00,0x01,
+- 0xff,0xe2,0x85,0xb1,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb2,0x00,0x01,0xff,0xe2,
+- 0x85,0xb3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb4,0x00,0x01,0xff,0xe2,
+- 0x85,0xb5,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb6,0x00,0x01,0xff,0xe2,0x85,0xb7,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb8,0x00,0x01,0xff,0xe2,
+- 0x85,0xb9,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xba,0x00,0x01,0xff,0xe2,0x85,0xbb,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xbc,0x00,0x01,0xff,0xe2,0x85,0xbd,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xbe,0x00,0x01,0xff,0xe2,0x85,0xbf,0x00,0x01,
+- 0x00,0xe0,0x34,0x5d,0xcf,0x86,0xe5,0x13,0x5d,0xe4,0xf2,0x5c,0xe3,0xe1,0x5c,0xe2,
+- 0xd4,0x5c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0xff,0xe2,0x86,0x84,0x00,
+- 0xe3,0x23,0x61,0xe2,0xf0,0x60,0xd1,0x0c,0xe0,0x9d,0x60,0xcf,0x86,0x65,0x7e,0x60,
+- 0x01,0x00,0xd0,0x62,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x18,
+- 0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x90,0x00,
+- 0x01,0xff,0xe2,0x93,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,
+- 0x92,0x00,0x01,0xff,0xe2,0x93,0x93,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x94,0x00,
+- 0x01,0xff,0xe2,0x93,0x95,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x96,0x00,
+- 0x01,0xff,0xe2,0x93,0x97,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x98,0x00,0x01,0xff,
+- 0xe2,0x93,0x99,0x00,0xcf,0x86,0xe5,0x57,0x60,0x94,0x80,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x9a,0x00,0x01,0xff,0xe2,0x93,0x9b,0x00,0x10,
+- 0x08,0x01,0xff,0xe2,0x93,0x9c,0x00,0x01,0xff,0xe2,0x93,0x9d,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe2,0x93,0x9e,0x00,0x01,0xff,0xe2,0x93,0x9f,0x00,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0xa0,0x00,0x01,0xff,0xe2,0x93,0xa1,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe2,0x93,0xa2,0x00,0x01,0xff,0xe2,0x93,0xa3,0x00,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0xa4,0x00,0x01,0xff,0xe2,0x93,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0xa6,0x00,0x01,0xff,0xe2,0x93,0xa7,0x00,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa8,0x00,0x01,0xff,0xe2,0x93,0xa9,0x00,0x01,0x00,0xd4,0x0c,0xe3,0x33,0x62,
+- 0xe2,0x2c,0x62,0xcf,0x06,0x04,0x00,0xe3,0x0c,0x65,0xe2,0xff,0x63,0xe1,0x2e,0x02,
+- 0xe0,0x84,0x01,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe2,0xb0,0xb0,0x00,0x08,0xff,0xe2,0xb0,0xb1,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb0,0xb2,0x00,0x08,0xff,0xe2,0xb0,0xb3,0x00,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe2,0xb0,0xb4,0x00,0x08,0xff,0xe2,0xb0,0xb5,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb6,0x00,0x08,0xff,0xe2,0xb0,0xb7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe2,0xb0,0xb8,0x00,0x08,0xff,0xe2,0xb0,0xb9,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xba,0x00,0x08,0xff,0xe2,0xb0,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xbc,0x00,0x08,0xff,0xe2,0xb0,0xbd,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xbe,0x00,0x08,0xff,0xe2,0xb0,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe2,0xb1,0x80,0x00,0x08,0xff,0xe2,0xb1,0x81,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x82,0x00,0x08,0xff,0xe2,0xb1,0x83,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x84,0x00,0x08,0xff,0xe2,0xb1,0x85,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x86,0x00,0x08,0xff,0xe2,0xb1,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x88,0x00,0x08,0xff,0xe2,0xb1,0x89,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x8a,0x00,0x08,0xff,0xe2,0xb1,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x8c,0x00,0x08,0xff,0xe2,0xb1,0x8d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8e,0x00,
+- 0x08,0xff,0xe2,0xb1,0x8f,0x00,0x94,0x7c,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe2,0xb1,0x90,0x00,0x08,0xff,0xe2,0xb1,0x91,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x92,0x00,0x08,0xff,0xe2,0xb1,0x93,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x94,0x00,0x08,0xff,0xe2,0xb1,0x95,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x96,0x00,0x08,0xff,0xe2,0xb1,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x98,0x00,0x08,0xff,0xe2,0xb1,0x99,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x9a,0x00,0x08,0xff,0xe2,0xb1,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x9c,0x00,0x08,0xff,0xe2,0xb1,0x9d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9e,0x00,
+- 0x00,0x00,0x08,0x00,0xcf,0x86,0xd5,0x07,0x64,0xef,0x61,0x08,0x00,0xd4,0x63,0xd3,
+- 0x32,0xd2,0x1b,0xd1,0x0c,0x10,0x08,0x09,0xff,0xe2,0xb1,0xa1,0x00,0x09,0x00,0x10,
+- 0x07,0x09,0xff,0xc9,0xab,0x00,0x09,0xff,0xe1,0xb5,0xbd,0x00,0xd1,0x0b,0x10,0x07,
+- 0x09,0xff,0xc9,0xbd,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xa8,
+- 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xaa,0x00,0x10,
+- 0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xac,0x00,0xd1,0x0b,0x10,0x04,0x09,0x00,0x0a,
+- 0xff,0xc9,0x91,0x00,0x10,0x07,0x0a,0xff,0xc9,0xb1,0x00,0x0a,0xff,0xc9,0x90,0x00,
+- 0xd3,0x27,0xd2,0x17,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xc9,0x92,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xe2,0xb1,0xb3,0x00,0x0a,0x00,0x91,0x0c,0x10,0x04,0x09,0x00,0x09,
+- 0xff,0xe2,0xb1,0xb6,0x00,0x09,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,
+- 0x07,0x0b,0xff,0xc8,0xbf,0x00,0x0b,0xff,0xc9,0x80,0x00,0xe0,0x83,0x01,0xcf,0x86,
+- 0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0x81,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x83,0x00,0x08,0x00,0xd1,0x0c,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0x87,0x00,0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x89,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8f,0x00,
+- 0x08,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x91,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x97,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x99,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x9f,0x00,0x08,0x00,
+- 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa1,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0xa5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa7,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa9,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0xab,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0xad,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xaf,0x00,0x08,0x00,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb1,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0xb3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0xb5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb7,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb9,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0xbb,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0xbd,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbf,0x00,0x08,0x00,0xcf,0x86,
+- 0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,
+- 0x81,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x83,0x00,0x08,0x00,0xd1,0x0c,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,
+- 0x87,0x00,0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x89,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb3,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8f,0x00,
+- 0x08,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x91,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb3,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x97,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x99,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb3,0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x9f,0x00,0x08,0x00,
+- 0xd4,0x3b,0xd3,0x1c,0x92,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa1,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa3,0x00,0x08,0x00,0x08,0x00,0xd2,0x10,
+- 0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0xff,0xe2,0xb3,0xac,0x00,0xe1,0x3b,
+- 0x5f,0x10,0x04,0x0b,0x00,0x0b,0xff,0xe2,0xb3,0xae,0x00,0xe3,0x40,0x5f,0x92,0x10,
+- 0x51,0x04,0x0b,0xe6,0x10,0x08,0x0d,0xff,0xe2,0xb3,0xb3,0x00,0x0d,0x00,0x00,0x00,
+- 0xe2,0x98,0x08,0xd1,0x0b,0xe0,0x11,0x67,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe0,0x65,
+- 0x6c,0xcf,0x86,0xe5,0xa7,0x05,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x0c,0xe2,0xf8,
+- 0x67,0xe1,0x8f,0x67,0xcf,0x06,0x04,0x00,0xe2,0xdb,0x01,0xe1,0x26,0x01,0xd0,0x09,
+- 0xcf,0x86,0x65,0xf4,0x67,0x0a,0x00,0xcf,0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,
+- 0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x99,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x85,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,
+- 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
+- 0x99,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x8d,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,
+- 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
+- 0x99,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x95,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x97,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x99,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x9b,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x9d,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x9f,0x00,0x0a,0x00,0xe4,0x5d,0x67,0xd3,0x30,0xd2,0x18,
+- 0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x99,0xa1,0x00,0x0c,0x00,0x10,0x08,0x0a,0xff,
+- 0xea,0x99,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0xa5,0x00,
+- 0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x0a,0xff,0xea,0x99,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,
+- 0xab,0x00,0x0a,0x00,0xe1,0x0c,0x67,0x10,0x08,0x0a,0xff,0xea,0x99,0xad,0x00,0x0a,
+- 0x00,0xe0,0x35,0x67,0xcf,0x86,0x95,0xab,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x0a,0xff,0xea,0x9a,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,
+- 0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x85,0x00,0x0a,0x00,
+- 0x10,0x08,0x0a,0xff,0xea,0x9a,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8b,0x00,
+- 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8d,0x00,0x0a,0x00,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x93,0x00,
+- 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x95,0x00,0x0a,0x00,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x97,0x00,0x0a,0x00,0xe2,0x92,0x66,0xd1,0x0c,0x10,0x08,0x10,
+- 0xff,0xea,0x9a,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9a,0x9b,0x00,0x10,
+- 0x00,0x0b,0x00,0xe1,0x10,0x02,0xd0,0xb9,0xcf,0x86,0xd5,0x07,0x64,0x9e,0x66,0x08,
+- 0x00,0xd4,0x58,0xd3,0x28,0xd2,0x10,0x51,0x04,0x09,0x00,0x10,0x08,0x0a,0xff,0xea,
+- 0x9c,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa5,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xab,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xad,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xaf,0x00,0x0a,0x00,0xd3,0x28,0xd2,0x10,0x51,0x04,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xb5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb7,0x00,0x0a,
+- 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb9,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9c,0xbd,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbf,0x00,0x0a,0x00,0xcf,
+- 0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9d,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x83,0x00,0x0a,0x00,0xd1,
+- 0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x85,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
+- 0x9d,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x89,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8f,
+- 0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x91,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x97,
+- 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x99,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9f,0x00,0x0a,
+- 0x00,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa1,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0xa5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa7,
+- 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa9,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xab,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xaf,0x00,0x0a,
+- 0x00,0x53,0x04,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,
+- 0x9d,0xba,0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xbc,0x00,0xd1,0x0c,0x10,
+- 0x04,0x0a,0x00,0x0a,0xff,0xe1,0xb5,0xb9,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xbf,
+- 0x00,0x0a,0x00,0xe0,0x71,0x01,0xcf,0x86,0xd5,0xa6,0xd4,0x4e,0xd3,0x30,0xd2,0x18,
+- 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
+- 0xea,0x9e,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x85,0x00,
+- 0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9e,0x87,0x00,0x0a,0x00,0xd2,0x10,0x51,0x04,
+- 0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9e,0x8c,0x00,0xe1,0x9a,0x64,0x10,
+- 0x04,0x0a,0x00,0x0c,0xff,0xc9,0xa5,0x00,0xd3,0x28,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0c,0xff,0xea,0x9e,0x91,0x00,0x0c,0x00,0x10,0x08,0x0d,0xff,0xea,0x9e,0x93,0x00,
+- 0x0d,0x00,0x51,0x04,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x97,0x00,0x10,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x99,0x00,0x10,0x00,0x10,0x08,
+- 0x10,0xff,0xea,0x9e,0x9b,0x00,0x10,0x00,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,
+- 0x9d,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x9f,0x00,0x10,0x00,0xd4,0x63,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa1,0x00,0x0c,0x00,
+- 0x10,0x08,0x0c,0xff,0xea,0x9e,0xa3,0x00,0x0c,0x00,0xd1,0x0c,0x10,0x08,0x0c,0xff,
+- 0xea,0x9e,0xa5,0x00,0x0c,0x00,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa7,0x00,0x0c,0x00,
+- 0xd2,0x1a,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa9,0x00,0x0c,0x00,0x10,0x07,
+- 0x0d,0xff,0xc9,0xa6,0x00,0x10,0xff,0xc9,0x9c,0x00,0xd1,0x0e,0x10,0x07,0x10,0xff,
+- 0xc9,0xa1,0x00,0x10,0xff,0xc9,0xac,0x00,0x10,0x07,0x12,0xff,0xc9,0xaa,0x00,0x14,
+- 0x00,0xd3,0x35,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x10,0xff,0xca,0x9e,0x00,0x10,0xff,
+- 0xca,0x87,0x00,0x10,0x07,0x11,0xff,0xca,0x9d,0x00,0x11,0xff,0xea,0xad,0x93,0x00,
+- 0xd1,0x0c,0x10,0x08,0x11,0xff,0xea,0x9e,0xb5,0x00,0x11,0x00,0x10,0x08,0x11,0xff,
+- 0xea,0x9e,0xb7,0x00,0x11,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x14,0xff,0xea,0x9e,
+- 0xb9,0x00,0x14,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbb,0x00,0x15,0x00,0xd1,0x0c,
+- 0x10,0x08,0x15,0xff,0xea,0x9e,0xbd,0x00,0x15,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,
+- 0xbf,0x00,0x15,0x00,0xcf,0x86,0xe5,0xd4,0x63,0x94,0x2f,0x93,0x2b,0xd2,0x10,0x51,
+- 0x04,0x00,0x00,0x10,0x08,0x15,0xff,0xea,0x9f,0x83,0x00,0x15,0x00,0xd1,0x0f,0x10,
+- 0x08,0x15,0xff,0xea,0x9e,0x94,0x00,0x15,0xff,0xca,0x82,0x00,0x10,0x08,0x15,0xff,
+- 0xe1,0xb6,0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe4,0xb4,0x66,0xd3,0x1d,0xe2,
+- 0x5b,0x64,0xe1,0x0a,0x64,0xe0,0xf7,0x63,0xcf,0x86,0xe5,0xd8,0x63,0x94,0x0b,0x93,
+- 0x07,0x62,0xc3,0x63,0x08,0x00,0x08,0x00,0x08,0x00,0xd2,0x0f,0xe1,0x5a,0x65,0xe0,
+- 0x27,0x65,0xcf,0x86,0x65,0x0c,0x65,0x0a,0x00,0xd1,0xab,0xd0,0x1a,0xcf,0x86,0xe5,
+- 0x17,0x66,0xe4,0xfa,0x65,0xe3,0xe1,0x65,0xe2,0xd4,0x65,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x0c,0x00,0x0c,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x0b,0x93,0x07,0x62,
+- 0x27,0x66,0x11,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8e,0xa0,0x00,0x11,0xff,0xe1,0x8e,0xa1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,
+- 0xa2,0x00,0x11,0xff,0xe1,0x8e,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
+- 0xa4,0x00,0x11,0xff,0xe1,0x8e,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa6,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
+- 0xa8,0x00,0x11,0xff,0xe1,0x8e,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xaa,0x00,
+- 0x11,0xff,0xe1,0x8e,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xac,0x00,
+- 0x11,0xff,0xe1,0x8e,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xae,0x00,0x11,0xff,
+- 0xe1,0x8e,0xaf,0x00,0xe0,0xb2,0x65,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb0,0x00,0x11,0xff,0xe1,0x8e,
+- 0xb1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb2,0x00,0x11,0xff,0xe1,0x8e,0xb3,0x00,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb4,0x00,0x11,0xff,0xe1,0x8e,0xb5,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb6,0x00,0x11,0xff,0xe1,0x8e,0xb7,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb8,0x00,0x11,0xff,0xe1,0x8e,0xb9,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xba,0x00,0x11,0xff,0xe1,0x8e,0xbb,0x00,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xbc,0x00,0x11,0xff,0xe1,0x8e,0xbd,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xbe,0x00,0x11,0xff,0xe1,0x8e,0xbf,0x00,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0x80,0x00,0x11,0xff,0xe1,0x8f,0x81,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x82,0x00,0x11,0xff,0xe1,0x8f,0x83,0x00,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x84,0x00,0x11,0xff,0xe1,0x8f,0x85,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x86,0x00,0x11,0xff,0xe1,0x8f,0x87,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x88,0x00,0x11,0xff,0xe1,0x8f,0x89,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x8a,0x00,0x11,0xff,0xe1,0x8f,0x8b,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x8c,0x00,0x11,0xff,0xe1,0x8f,0x8d,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x8e,0x00,0x11,0xff,0xe1,0x8f,0x8f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0x90,0x00,0x11,0xff,0xe1,0x8f,0x91,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x92,0x00,0x11,0xff,0xe1,0x8f,0x93,0x00,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x94,0x00,0x11,0xff,0xe1,0x8f,0x95,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x96,0x00,0x11,0xff,0xe1,0x8f,0x97,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x98,0x00,0x11,0xff,0xe1,0x8f,0x99,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x9a,0x00,0x11,0xff,0xe1,0x8f,0x9b,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x9c,0x00,0x11,0xff,0xe1,0x8f,0x9d,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x9e,0x00,0x11,0xff,0xe1,0x8f,0x9f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0xa0,0x00,0x11,0xff,0xe1,0x8f,0xa1,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0xa2,0x00,0x11,0xff,0xe1,0x8f,0xa3,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0xa4,0x00,0x11,0xff,0xe1,0x8f,0xa5,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa6,0x00,0x11,0xff,0xe1,0x8f,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0xa8,0x00,0x11,0xff,0xe1,0x8f,0xa9,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xaa,0x00,0x11,0xff,0xe1,0x8f,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xac,0x00,0x11,0xff,0xe1,0x8f,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xae,0x00,0x11,0xff,0xe1,0x8f,0xaf,0x00,0xd1,0x0c,0xe0,0xeb,0x63,0xcf,0x86,0xcf,
+- 0x06,0x02,0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,
+- 0xcf,0x06,0x01,0x00,0xd4,0xae,0xd3,0x09,0xe2,0x54,0x64,0xcf,0x06,0x01,0x00,0xd2,
+- 0x27,0xe1,0x1f,0x70,0xe0,0x26,0x6e,0xcf,0x86,0xe5,0x3f,0x6d,0xe4,0xce,0x6c,0xe3,
+- 0x99,0x6c,0xe2,0x78,0x6c,0xe1,0x67,0x6c,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,
+- 0x01,0xff,0xe5,0xba,0xa6,0x00,0xe1,0x74,0x74,0xe0,0xe8,0x73,0xcf,0x86,0xe5,0x22,
+- 0x73,0xd4,0x3b,0x93,0x37,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x01,0xff,0x66,0x66,0x00,
+- 0x01,0xff,0x66,0x69,0x00,0x10,0x07,0x01,0xff,0x66,0x6c,0x00,0x01,0xff,0x66,0x66,
+- 0x69,0x00,0xd1,0x0f,0x10,0x08,0x01,0xff,0x66,0x66,0x6c,0x00,0x01,0xff,0x73,0x74,
+- 0x00,0x10,0x07,0x01,0xff,0x73,0x74,0x00,0x00,0x00,0x00,0x00,0xe3,0xc8,0x72,0xd2,
+- 0x11,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xb6,0x00,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xd5,0xb4,0xd5,0xa5,0x00,0x01,0xff,0xd5,0xb4,0xd5,
+- 0xab,0x00,0x10,0x09,0x01,0xff,0xd5,0xbe,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb4,0xd5,
+- 0xad,0x00,0xd3,0x09,0xe2,0x40,0x74,0xcf,0x06,0x01,0x00,0xd2,0x13,0xe1,0x30,0x75,
+- 0xe0,0xc1,0x74,0xcf,0x86,0xe5,0x9e,0x74,0x64,0x8d,0x74,0x06,0xff,0x00,0xe1,0x96,
+- 0x75,0xe0,0x63,0x75,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x7c,
+- 0xd3,0x3c,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xef,0xbd,0x81,0x00,
+- 0x10,0x08,0x01,0xff,0xef,0xbd,0x82,0x00,0x01,0xff,0xef,0xbd,0x83,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xef,0xbd,0x84,0x00,0x01,0xff,0xef,0xbd,0x85,0x00,0x10,0x08,
+- 0x01,0xff,0xef,0xbd,0x86,0x00,0x01,0xff,0xef,0xbd,0x87,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xef,0xbd,0x88,0x00,0x01,0xff,0xef,0xbd,0x89,0x00,0x10,0x08,
+- 0x01,0xff,0xef,0xbd,0x8a,0x00,0x01,0xff,0xef,0xbd,0x8b,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xef,0xbd,0x8c,0x00,0x01,0xff,0xef,0xbd,0x8d,0x00,0x10,0x08,0x01,0xff,
+- 0xef,0xbd,0x8e,0x00,0x01,0xff,0xef,0xbd,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xef,0xbd,0x90,0x00,0x01,0xff,0xef,0xbd,0x91,0x00,0x10,0x08,
+- 0x01,0xff,0xef,0xbd,0x92,0x00,0x01,0xff,0xef,0xbd,0x93,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xef,0xbd,0x94,0x00,0x01,0xff,0xef,0xbd,0x95,0x00,0x10,0x08,0x01,0xff,
+- 0xef,0xbd,0x96,0x00,0x01,0xff,0xef,0xbd,0x97,0x00,0x92,0x1c,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xef,0xbd,0x98,0x00,0x01,0xff,0xef,0xbd,0x99,0x00,0x10,0x08,0x01,0xff,
+- 0xef,0xbd,0x9a,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0x87,0xb3,0xe1,0x60,0xb0,0xe0,
+- 0xdd,0xae,0xcf,0x86,0xe5,0x81,0x9b,0xc4,0xe3,0xc1,0x07,0xe2,0x62,0x06,0xe1,0x11,
+- 0x86,0xe0,0x09,0x05,0xcf,0x86,0xe5,0xfb,0x02,0xd4,0x1c,0xe3,0x7f,0x76,0xe2,0xd6,
+- 0x75,0xe1,0xb1,0x75,0xe0,0x8a,0x75,0xcf,0x86,0xe5,0x57,0x75,0x94,0x07,0x63,0x42,
+- 0x75,0x07,0x00,0x07,0x00,0xe3,0x2b,0x78,0xe2,0xf0,0x77,0xe1,0x77,0x01,0xe0,0x88,
+- 0x77,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x05,0xff,0xf0,0x90,0x90,0xa8,0x00,0x05,0xff,0xf0,0x90,0x90,0xa9,0x00,0x10,0x09,
+- 0x05,0xff,0xf0,0x90,0x90,0xaa,0x00,0x05,0xff,0xf0,0x90,0x90,0xab,0x00,0xd1,0x12,
+- 0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xac,0x00,0x05,0xff,0xf0,0x90,0x90,0xad,0x00,
+- 0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xae,0x00,0x05,0xff,0xf0,0x90,0x90,0xaf,0x00,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb0,0x00,0x05,0xff,0xf0,
+- 0x90,0x90,0xb1,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb2,0x00,0x05,0xff,0xf0,
+- 0x90,0x90,0xb3,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb4,0x00,0x05,
+- 0xff,0xf0,0x90,0x90,0xb5,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb6,0x00,0x05,
+- 0xff,0xf0,0x90,0x90,0xb7,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,
+- 0xf0,0x90,0x90,0xb8,0x00,0x05,0xff,0xf0,0x90,0x90,0xb9,0x00,0x10,0x09,0x05,0xff,
+- 0xf0,0x90,0x90,0xba,0x00,0x05,0xff,0xf0,0x90,0x90,0xbb,0x00,0xd1,0x12,0x10,0x09,
+- 0x05,0xff,0xf0,0x90,0x90,0xbc,0x00,0x05,0xff,0xf0,0x90,0x90,0xbd,0x00,0x10,0x09,
+- 0x05,0xff,0xf0,0x90,0x90,0xbe,0x00,0x05,0xff,0xf0,0x90,0x90,0xbf,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x80,0x00,0x05,0xff,0xf0,0x90,0x91,
+- 0x81,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x82,0x00,0x05,0xff,0xf0,0x90,0x91,
+- 0x83,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x84,0x00,0x05,0xff,0xf0,
+- 0x90,0x91,0x85,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x86,0x00,0x05,0xff,0xf0,
+- 0x90,0x91,0x87,0x00,0x94,0x4c,0x93,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,
+- 0xf0,0x90,0x91,0x88,0x00,0x05,0xff,0xf0,0x90,0x91,0x89,0x00,0x10,0x09,0x05,0xff,
+- 0xf0,0x90,0x91,0x8a,0x00,0x05,0xff,0xf0,0x90,0x91,0x8b,0x00,0xd1,0x12,0x10,0x09,
+- 0x05,0xff,0xf0,0x90,0x91,0x8c,0x00,0x05,0xff,0xf0,0x90,0x91,0x8d,0x00,0x10,0x09,
+- 0x07,0xff,0xf0,0x90,0x91,0x8e,0x00,0x07,0xff,0xf0,0x90,0x91,0x8f,0x00,0x05,0x00,
+- 0x05,0x00,0xd0,0xa0,0xcf,0x86,0xd5,0x07,0x64,0x30,0x76,0x07,0x00,0xd4,0x07,0x63,
+- 0x3d,0x76,0x07,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,
+- 0x93,0x98,0x00,0x12,0xff,0xf0,0x90,0x93,0x99,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,
+- 0x93,0x9a,0x00,0x12,0xff,0xf0,0x90,0x93,0x9b,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x90,0x93,0x9c,0x00,0x12,0xff,0xf0,0x90,0x93,0x9d,0x00,0x10,0x09,0x12,0xff,
+- 0xf0,0x90,0x93,0x9e,0x00,0x12,0xff,0xf0,0x90,0x93,0x9f,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa0,0x00,0x12,0xff,0xf0,0x90,0x93,0xa1,0x00,
+- 0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa2,0x00,0x12,0xff,0xf0,0x90,0x93,0xa3,0x00,
+- 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa4,0x00,0x12,0xff,0xf0,0x90,0x93,
+- 0xa5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa6,0x00,0x12,0xff,0xf0,0x90,0x93,
+- 0xa7,0x00,0xcf,0x86,0xe5,0xc6,0x75,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x12,0xff,0xf0,0x90,0x93,0xa8,0x00,0x12,0xff,0xf0,0x90,0x93,0xa9,0x00,0x10,
+- 0x09,0x12,0xff,0xf0,0x90,0x93,0xaa,0x00,0x12,0xff,0xf0,0x90,0x93,0xab,0x00,0xd1,
+- 0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xac,0x00,0x12,0xff,0xf0,0x90,0x93,0xad,
+- 0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xae,0x00,0x12,0xff,0xf0,0x90,0x93,0xaf,
+- 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb0,0x00,0x12,0xff,
+- 0xf0,0x90,0x93,0xb1,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb2,0x00,0x12,0xff,
+- 0xf0,0x90,0x93,0xb3,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb4,0x00,
+- 0x12,0xff,0xf0,0x90,0x93,0xb5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb6,0x00,
+- 0x12,0xff,0xf0,0x90,0x93,0xb7,0x00,0x93,0x28,0x92,0x24,0xd1,0x12,0x10,0x09,0x12,
+- 0xff,0xf0,0x90,0x93,0xb8,0x00,0x12,0xff,0xf0,0x90,0x93,0xb9,0x00,0x10,0x09,0x12,
+- 0xff,0xf0,0x90,0x93,0xba,0x00,0x12,0xff,0xf0,0x90,0x93,0xbb,0x00,0x00,0x00,0x12,
+- 0x00,0xd4,0x1f,0xe3,0xdf,0x76,0xe2,0x6a,0x76,0xe1,0x09,0x76,0xe0,0xea,0x75,0xcf,
+- 0x86,0xe5,0xb7,0x75,0x94,0x0a,0xe3,0xa2,0x75,0x62,0x99,0x75,0x07,0x00,0x07,0x00,
+- 0xe3,0xde,0x78,0xe2,0xaf,0x78,0xd1,0x09,0xe0,0x4c,0x78,0xcf,0x06,0x0b,0x00,0xe0,
+- 0x7f,0x78,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x11,0xff,0xf0,0x90,0xb3,0x80,0x00,0x11,0xff,0xf0,0x90,0xb3,0x81,0x00,0x10,
+- 0x09,0x11,0xff,0xf0,0x90,0xb3,0x82,0x00,0x11,0xff,0xf0,0x90,0xb3,0x83,0x00,0xd1,
+- 0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x84,0x00,0x11,0xff,0xf0,0x90,0xb3,0x85,
+- 0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x86,0x00,0x11,0xff,0xf0,0x90,0xb3,0x87,
+- 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x88,0x00,0x11,0xff,
+- 0xf0,0x90,0xb3,0x89,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8a,0x00,0x11,0xff,
+- 0xf0,0x90,0xb3,0x8b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8c,0x00,
+- 0x11,0xff,0xf0,0x90,0xb3,0x8d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8e,0x00,
+- 0x11,0xff,0xf0,0x90,0xb3,0x8f,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,
+- 0xff,0xf0,0x90,0xb3,0x90,0x00,0x11,0xff,0xf0,0x90,0xb3,0x91,0x00,0x10,0x09,0x11,
+- 0xff,0xf0,0x90,0xb3,0x92,0x00,0x11,0xff,0xf0,0x90,0xb3,0x93,0x00,0xd1,0x12,0x10,
+- 0x09,0x11,0xff,0xf0,0x90,0xb3,0x94,0x00,0x11,0xff,0xf0,0x90,0xb3,0x95,0x00,0x10,
+- 0x09,0x11,0xff,0xf0,0x90,0xb3,0x96,0x00,0x11,0xff,0xf0,0x90,0xb3,0x97,0x00,0xd2,
+- 0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x98,0x00,0x11,0xff,0xf0,0x90,
+- 0xb3,0x99,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9a,0x00,0x11,0xff,0xf0,0x90,
+- 0xb3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9c,0x00,0x11,0xff,
+- 0xf0,0x90,0xb3,0x9d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9e,0x00,0x11,0xff,
+- 0xf0,0x90,0xb3,0x9f,0x00,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,
+- 0xff,0xf0,0x90,0xb3,0xa0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa1,0x00,0x10,0x09,0x11,
+- 0xff,0xf0,0x90,0xb3,0xa2,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa3,0x00,0xd1,0x12,0x10,
+- 0x09,0x11,0xff,0xf0,0x90,0xb3,0xa4,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa5,0x00,0x10,
+- 0x09,0x11,0xff,0xf0,0x90,0xb3,0xa6,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa7,0x00,0xd2,
+- 0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xa8,0x00,0x11,0xff,0xf0,0x90,
+- 0xb3,0xa9,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xaa,0x00,0x11,0xff,0xf0,0x90,
+- 0xb3,0xab,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xac,0x00,0x11,0xff,
+- 0xf0,0x90,0xb3,0xad,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xae,0x00,0x11,0xff,
+- 0xf0,0x90,0xb3,0xaf,0x00,0x93,0x23,0x92,0x1f,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,
+- 0x90,0xb3,0xb0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xb1,0x00,0x10,0x09,0x11,0xff,0xf0,
+- 0x90,0xb3,0xb2,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x15,0xe4,0x91,
+- 0x7b,0xe3,0x9b,0x79,0xe2,0x94,0x78,0xe1,0xe4,0x77,0xe0,0x9d,0x77,0xcf,0x06,0x0c,
+- 0x00,0xe4,0xeb,0x7e,0xe3,0x44,0x7e,0xe2,0xed,0x7d,0xd1,0x0c,0xe0,0xb2,0x7d,0xcf,
+- 0x86,0x65,0x93,0x7d,0x14,0x00,0xe0,0xb6,0x7d,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,
+- 0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x80,0x00,
+- 0x10,0xff,0xf0,0x91,0xa3,0x81,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x82,0x00,
+- 0x10,0xff,0xf0,0x91,0xa3,0x83,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,
+- 0x84,0x00,0x10,0xff,0xf0,0x91,0xa3,0x85,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,
+- 0x86,0x00,0x10,0xff,0xf0,0x91,0xa3,0x87,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,
+- 0xff,0xf0,0x91,0xa3,0x88,0x00,0x10,0xff,0xf0,0x91,0xa3,0x89,0x00,0x10,0x09,0x10,
+- 0xff,0xf0,0x91,0xa3,0x8a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8b,0x00,0xd1,0x12,0x10,
+- 0x09,0x10,0xff,0xf0,0x91,0xa3,0x8c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8d,0x00,0x10,
+- 0x09,0x10,0xff,0xf0,0x91,0xa3,0x8e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8f,0x00,0xd3,
+- 0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x90,0x00,0x10,0xff,
+- 0xf0,0x91,0xa3,0x91,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x92,0x00,0x10,0xff,
+- 0xf0,0x91,0xa3,0x93,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x94,0x00,
+- 0x10,0xff,0xf0,0x91,0xa3,0x95,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x96,0x00,
+- 0x10,0xff,0xf0,0x91,0xa3,0x97,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,
+- 0x91,0xa3,0x98,0x00,0x10,0xff,0xf0,0x91,0xa3,0x99,0x00,0x10,0x09,0x10,0xff,0xf0,
+- 0x91,0xa3,0x9a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x10,
+- 0xff,0xf0,0x91,0xa3,0x9c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9d,0x00,0x10,0x09,0x10,
+- 0xff,0xf0,0x91,0xa3,0x9e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9f,0x00,0xd1,0x11,0xe0,
+- 0x12,0x81,0xcf,0x86,0xe5,0x09,0x81,0xe4,0xd2,0x80,0xcf,0x06,0x00,0x00,0xe0,0xdb,
+- 0x82,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xd4,0x09,0xe3,0x10,0x81,0xcf,0x06,
+- 0x0c,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xe2,0x3b,0x82,0xe1,0x16,0x82,0xd0,0x06,
+- 0xcf,0x06,0x00,0x00,0xcf,0x86,0xa5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,
+- 0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa1,
+- 0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa3,
+- 0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa4,0x00,0x14,0xff,0xf0,0x96,
+- 0xb9,0xa5,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa6,0x00,0x14,0xff,0xf0,0x96,
+- 0xb9,0xa7,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa8,0x00,
+- 0x14,0xff,0xf0,0x96,0xb9,0xa9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xaa,0x00,
+- 0x14,0xff,0xf0,0x96,0xb9,0xab,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,
+- 0xac,0x00,0x14,0xff,0xf0,0x96,0xb9,0xad,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,
+- 0xae,0x00,0x14,0xff,0xf0,0x96,0xb9,0xaf,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x14,0xff,0xf0,0x96,0xb9,0xb0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb1,0x00,0x10,
+- 0x09,0x14,0xff,0xf0,0x96,0xb9,0xb2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb3,0x00,0xd1,
+- 0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb5,
+- 0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb7,
+- 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb8,0x00,0x14,0xff,
+- 0xf0,0x96,0xb9,0xb9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xba,0x00,0x14,0xff,
+- 0xf0,0x96,0xb9,0xbb,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbc,0x00,
+- 0x14,0xff,0xf0,0x96,0xb9,0xbd,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbe,0x00,
+- 0x14,0xff,0xf0,0x96,0xb9,0xbf,0x00,0x14,0x00,0xd2,0x14,0xe1,0x25,0x82,0xe0,0x1c,
+- 0x82,0xcf,0x86,0xe5,0xdd,0x81,0xe4,0x9a,0x81,0xcf,0x06,0x12,0x00,0xd1,0x0b,0xe0,
+- 0x51,0x83,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0x95,0x8b,0xcf,0x86,0xd5,0x22,0xe4,
+- 0xd0,0x88,0xe3,0x93,0x88,0xe2,0x38,0x88,0xe1,0x31,0x88,0xe0,0x2a,0x88,0xcf,0x86,
+- 0xe5,0xfb,0x87,0xe4,0xe2,0x87,0x93,0x07,0x62,0xd1,0x87,0x12,0xe6,0x12,0xe6,0xe4,
+- 0x36,0x89,0xe3,0x2f,0x89,0xd2,0x09,0xe1,0xb8,0x88,0xcf,0x06,0x10,0x00,0xe1,0x1f,
+- 0x89,0xe0,0xec,0x88,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,
+- 0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa3,
+- 0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa5,
+- 0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa6,0x00,0x12,0xff,0xf0,0x9e,
+- 0xa4,0xa7,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa8,0x00,0x12,0xff,0xf0,0x9e,
+- 0xa4,0xa9,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xaa,0x00,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xab,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xac,0x00,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xad,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,
+- 0xae,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xaf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,
+- 0xb0,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb1,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb3,0x00,0x10,
+- 0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb5,0x00,0xd1,
+- 0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb7,
+- 0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb9,
+- 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xba,0x00,0x12,0xff,
+- 0xf0,0x9e,0xa4,0xbb,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbc,0x00,0x12,0xff,
+- 0xf0,0x9e,0xa4,0xbd,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbe,0x00,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xbf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa5,0x80,0x00,
+- 0x12,0xff,0xf0,0x9e,0xa5,0x81,0x00,0x94,0x1e,0x93,0x1a,0x92,0x16,0x91,0x12,0x10,
+- 0x09,0x12,0xff,0xf0,0x9e,0xa5,0x82,0x00,0x12,0xff,0xf0,0x9e,0xa5,0x83,0x00,0x12,
+- 0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- /* nfdi_c0100 */
+- 0x57,0x04,0x01,0x00,0xc6,0xe5,0xac,0x13,0xe4,0x41,0x0c,0xe3,0x7a,0x07,0xe2,0xf3,
+- 0x01,0xc1,0xd0,0x1f,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x15,0x53,0x04,0x01,0x00,
+- 0x52,0x04,0x01,0x00,0x91,0x09,0x10,0x04,0x01,0x00,0x01,0xff,0x00,0x01,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0xe4,0xd4,0x7c,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x41,0xcc,0x80,0x00,0x01,0xff,0x41,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x41,
+- 0xcc,0x82,0x00,0x01,0xff,0x41,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,
+- 0xcc,0x88,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x43,
+- 0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x80,0x00,0x01,
+- 0xff,0x45,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x82,0x00,0x01,0xff,0x45,
+- 0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x80,0x00,0x01,0xff,0x49,
+- 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x82,0x00,0x01,0xff,0x49,0xcc,0x88,
+- 0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x4e,0xcc,0x83,
+- 0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x80,0x00,0x01,0xff,0x4f,0xcc,0x81,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x82,0x00,0x01,0xff,0x4f,0xcc,0x83,0x00,0x10,
+- 0x08,0x01,0xff,0x4f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0x55,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x81,0x00,0x01,
+- 0xff,0x55,0xcc,0x82,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x88,0x00,0x01,
+- 0xff,0x59,0xcc,0x81,0x00,0x01,0x00,0xd4,0x7c,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,
+- 0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xc6,0xe5,0xf6,0x14,0xe4,0x6c,0x0d,0xe3,0x36,0x08,0xe2,0x1f,0x01,0xc1,0xd0,0x21,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x13,0x52,0x04,0x01,0x00,
++ 0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xce,0xbc,0x00,0x01,0x00,0x01,0x00,0xcf,
++ 0x86,0xe5,0x9d,0x44,0xd4,0x7f,0xd3,0x3f,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,
++ 0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,
++ 0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x07,0x01,0xff,0xc3,0xa6,0x00,0x01,
+ 0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x80,
+ 0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x82,0x00,0x01,
+ 0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x80,0x00,0x01,
+ 0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,0x01,0xff,0x69,
+- 0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6e,
+- 0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x81,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,0x00,0x01,0xff,0x6f,0xcc,0x83,
+- 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x81,
+- 0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x88,
+- 0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x79,0xcc,0x88,
+- 0x00,0xe1,0x9a,0x03,0xe0,0xd3,0x01,0xcf,0x86,0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x86,0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa8,0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,
+- 0x08,0x01,0xff,0x43,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x82,0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,
+- 0x08,0x01,0xff,0x43,0xcc,0x87,0x00,0x01,0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x43,0xcc,0x8c,0x00,0x01,0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,
+- 0xff,0x44,0xcc,0x8c,0x00,0x01,0xff,0x64,0xcc,0x8c,0x00,0xd3,0x34,0xd2,0x14,0x51,
+- 0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x84,0x00,0x01,0xff,0x65,0xcc,0x84,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0x86,
+- 0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,
+- 0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,0x00,0x10,
+- 0x08,0x01,0xff,0x47,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,0x74,0xd3,
+- 0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x87,0x00,0x01,0xff,0x67,
+- 0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,
+- 0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0x82,0x00,0x01,0xff,0x68,0xcc,0x82,
+- 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x83,0x00,0x01,
+- 0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x84,0x00,0x01,0xff,0x69,
+- 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x86,0x00,0x01,0xff,0x69,
+- 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,
+- 0x00,0xd3,0x30,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x49,0xcc,0x87,0x00,0x01,
+- 0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4a,0xcc,0x82,0x00,0x01,0xff,0x6a,
+- 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,
+- 0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x4c,0xcc,0x81,0x00,0x10,
+- 0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,0x4c,0xcc,0xa7,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,0x4c,0xcc,0x8c,0x00,0x10,0x08,0x01,
+- 0xff,0x6c,0xcc,0x8c,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x60,0xd3,0x30,0xd2,
+- 0x10,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x4e,0xcc,0x81,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,0x01,0xff,0x4e,0xcc,0xa7,0x00,0x10,
+- 0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,0x4e,0xcc,0x8c,0x00,0xd2,0x10,0x91,
+- 0x0c,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,0x01,0x00,0x01,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x4f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0x84,0x00,0x10,0x08,0x01,
+- 0xff,0x4f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,0xd3,0x34,0xd2,0x14,0x91,
+- 0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8b,0x00,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,
+- 0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,
+- 0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,
+- 0x08,0x01,0xff,0x53,0xcc,0xa7,0x00,0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x74,0xd3,
+- 0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x8c,0x00,0x01,0xff,0x73,
+- 0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,
+- 0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,
+- 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x83,0x00,0x01,
+- 0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x84,0x00,0x01,0xff,0x75,
+- 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x86,0x00,0x01,0xff,0x75,
+- 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x8b,0x00,0x01,
+- 0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa8,0x00,0x01,0xff,0x75,
+- 0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x82,0x00,0x01,0xff,0x77,
+- 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x82,0x00,0x01,0xff,0x79,0xcc,0x82,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x59,0xcc,0x88,0x00,0x01,0xff,0x5a,
+- 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,0x5a,0xcc,0x87,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,0x5a,0xcc,0x8c,
+- 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0x00,0xd0,0x4a,0xcf,0x86,0x55,
+- 0x04,0x01,0x00,0xd4,0x2c,0xd3,0x18,0x92,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0x4f,
+- 0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x55,0xcc,0x9b,0x00,0x93,
+- 0x14,0x92,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x75,0xcc,0x9b,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xb4,0xd4,0x24,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x41,0xcc,0x8c,0x00,0x10,
+- 0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,0x49,0xcc,0x8c,0x00,0xd3,0x46,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x4f,0xcc,0x8c,
+- 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x8c,0x00,0xd1,
+- 0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x84,
+- 0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x55,0xcc,0x88,
+- 0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,
+- 0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,
+- 0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,
+- 0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x88,
+- 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x80,0xd3,0x3a,0xd2,
+- 0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,
+- 0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0x86,0xcc,0x84,0x00,0x01,0xff,
+- 0xc3,0xa6,0xcc,0x84,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0x8c,
+- 0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,
+- 0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa8,
+- 0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa8,
+- 0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc6,
+- 0xb7,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0xd3,0x24,0xd2,0x10,0x91,
+- 0x0c,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,0x01,0x00,0x01,0x00,0x91,0x10,0x10,
+- 0x08,0x01,0xff,0x47,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x04,0x00,0xd2,
+- 0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x4e,0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,
+- 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x8a,0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,
+- 0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xc3,0x86,0xcc,0x81,0x00,0x01,0xff,
+- 0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xc3,0x98,0xcc,0x81,0x00,0x01,0xff,
+- 0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x07,0x02,0xe1,0xae,0x01,0xe0,0x93,0x01,0xcf,0x86,
+- 0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,
+- 0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x91,0x00,
+- 0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x8f,0x00,
+- 0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x91,0x00,0x01,0xff,
+- 0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x8f,0x00,
+- 0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x91,0x00,0x01,0xff,
+- 0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8f,0x00,0x01,0xff,
+- 0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,
+- 0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x8f,0x00,
+- 0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0x91,0x00,0x01,0xff,
+- 0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x8f,0x00,0x01,0xff,
+- 0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,
+- 0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x04,0xff,0x53,0xcc,0xa6,0x00,0x04,0xff,
+- 0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,0x54,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,
+- 0xa6,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,0xff,0x48,0xcc,0x8c,0x00,0x04,0xff,
+- 0x68,0xcc,0x8c,0x00,0xd4,0x68,0xd3,0x20,0xd2,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,
+- 0x07,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,0xff,0x41,0xcc,0x87,0x00,
+- 0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x45,0xcc,
+- 0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x88,0xcc,
++ 0xcc,0x88,0x00,0xd3,0x3b,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb0,0x00,
++ 0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,0x00,0x01,0xff,
++ 0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,0x00,0x01,0xff,
++ 0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1f,
++ 0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb8,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,0x10,
++ 0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x07,0x01,
++ 0xff,0xc3,0xbe,0x00,0x01,0xff,0x73,0x73,0x00,0xe1,0xd4,0x03,0xe0,0xeb,0x01,0xcf,
++ 0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,
++ 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x86,
++ 0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa8,
++ 0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,0x81,0x00,0x01,
++ 0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,0x82,
++ 0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,0x87,0x00,0x01,
++ 0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,0x8c,0x00,0x01,
++ 0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x8c,0x00,0x01,0xff,0x64,
++ 0xcc,0x8c,0x00,0xd3,0x3b,0xd2,0x1b,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc4,0x91,0x00,
++ 0x01,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x84,0x00,0x01,0xff,0x65,0xcc,0x84,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0x86,0x00,
++ 0x10,0x08,0x01,0xff,0x65,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,0x00,
++ 0x10,0x08,0x01,0xff,0x65,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x67,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,0x00,0x10,0x08,
++ 0x01,0xff,0x67,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,0x7b,0xd3,0x3b,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x87,0x00,0x01,0xff,0x67,0xcc,
++ 0x87,0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0xff,0x68,0xcc,0x82,0x00,
++ 0x10,0x07,0x01,0xff,0xc4,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x69,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x69,
++ 0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,
++ 0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0xa8,
++ 0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x37,0xd2,0x17,0xd1,0x0c,0x10,0x08,0x01,
++ 0xff,0x69,0xcc,0x87,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc4,0xb3,0x00,0x01,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x6a,0xcc,0x82,0x00,0x01,0xff,0x6a,0xcc,0x82,0x00,
++ 0x10,0x08,0x01,0xff,0x6b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,0x00,0xd2,0x1c,
++ 0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6c,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
++ 0x6c,0xcc,0x81,0x00,0x01,0xff,0x6c,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x6c,0xcc,0xa7,0x00,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,
++ 0x8c,0x00,0x01,0xff,0xc5,0x80,0x00,0xcf,0x86,0xd5,0xed,0xd4,0x72,0xd3,0x37,0xd2,
++ 0x17,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc5,0x82,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0x6e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,
++ 0x01,0xff,0x6e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,
++ 0x6e,0xcc,0x8c,0x00,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,
++ 0x01,0xff,0xca,0xbc,0x6e,0x00,0x10,0x07,0x01,0xff,0xc5,0x8b,0x00,0x01,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0x84,0x00,0x10,
++ 0x08,0x01,0xff,0x6f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,0xd3,0x3b,0xd2,
++ 0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0xff,0x6f,0xcc,0x8b,
++ 0x00,0x10,0x07,0x01,0xff,0xc5,0x93,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x72,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,
++ 0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x72,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,
++ 0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x73,0xcc,
++ 0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,0xa7,0x00,
++ 0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x7b,0xd3,0x3b,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x73,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,
++ 0x74,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x74,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x10,0x07,0x01,0xff,0xc5,0xa7,
++ 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x83,0x00,0x01,
++ 0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x84,0x00,0x01,0xff,0x75,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x86,0x00,0x01,0xff,0x75,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,
++ 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x8b,0x00,0x01,
++ 0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa8,0x00,0x01,0xff,0x75,
++ 0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x82,0x00,0x01,0xff,0x77,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x82,0x00,0x01,0xff,0x79,0xcc,0x82,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,0x88,0x00,0x01,0xff,0x7a,
++ 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,0x7a,0xcc,0x87,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,0x7a,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0xff,0x73,0x00,0xe0,0x65,0x01,
++ 0xcf,0x86,0xd5,0xb4,0xd4,0x5a,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xc9,0x93,0x00,0x10,0x07,0x01,0xff,0xc6,0x83,0x00,0x01,0x00,0xd1,0x0b,
++ 0x10,0x07,0x01,0xff,0xc6,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0x94,0x00,
++ 0x01,0xff,0xc6,0x88,0x00,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc9,
++ 0x96,0x00,0x10,0x07,0x01,0xff,0xc9,0x97,0x00,0x01,0xff,0xc6,0x8c,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xc7,0x9d,0x00,0x01,0xff,0xc9,0x99,0x00,0xd3,0x32,
++ 0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0x9b,0x00,0x01,0xff,0xc6,0x92,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xa0,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc9,
++ 0xa3,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0xa9,0x00,0x01,0xff,0xc9,0xa8,0x00,
++ 0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0x99,0x00,0x01,0x00,0x01,0x00,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xc9,0xaf,0x00,0x01,0xff,0xc9,0xb2,0x00,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xc9,0xb5,0x00,0xd4,0x5d,0xd3,0x34,0xd2,0x1b,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x10,0x07,0x01,0xff,
++ 0xc6,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0xa5,0x00,0x01,0x00,
++ 0x10,0x07,0x01,0xff,0xca,0x80,0x00,0x01,0xff,0xc6,0xa8,0x00,0xd2,0x0f,0x91,0x0b,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x83,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xc6,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x88,0x00,0x01,0xff,0x75,
++ 0xcc,0x9b,0x00,0xd3,0x33,0xd2,0x1d,0xd1,0x0f,0x10,0x08,0x01,0xff,0x75,0xcc,0x9b,
++ 0x00,0x01,0xff,0xca,0x8a,0x00,0x10,0x07,0x01,0xff,0xca,0x8b,0x00,0x01,0xff,0xc6,
++ 0xb4,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc6,0xb6,0x00,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xca,0x92,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xb9,
++ 0x00,0x01,0x00,0x01,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xbd,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x44,0xd3,0x16,0x52,0x04,0x01,0x00,0x51,0x07,
++ 0x01,0xff,0xc7,0x86,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc7,0x89,0x00,0xd2,0x12,
++ 0x91,0x0b,0x10,0x07,0x01,0xff,0xc7,0x89,0x00,0x01,0x00,0x01,0xff,0xc7,0x8c,0x00,
++ 0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x61,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,
++ 0x61,0xcc,0x8c,0x00,0x01,0xff,0x69,0xcc,0x8c,0x00,0xd3,0x46,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x10,0x08,
++ 0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x8c,0x00,0xd1,0x12,0x10,0x08,
++ 0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x10,0x0a,
++ 0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,
++ 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,
++ 0x75,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,
++ 0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0x75,0xcc,
++ 0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,
++ 0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x87,0xd3,0x41,0xd2,0x26,0xd1,0x14,
++ 0x10,0x0a,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x87,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x01,0xff,0xc3,0xa6,0xcc,
++ 0x84,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc7,0xa5,0x00,0x01,0x00,0x10,0x08,0x01,
++ 0xff,0x67,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x6f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,0x10,0x0a,0x01,
++ 0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x10,
++ 0x09,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0xd3,
++ 0x38,0xd2,0x1a,0xd1,0x0f,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,0x01,0xff,0xc7,
++ 0xb3,0x00,0x10,0x07,0x01,0xff,0xc7,0xb3,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x67,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x10,0x07,0x04,0xff,0xc6,
++ 0x95,0x00,0x04,0xff,0xc6,0xbf,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x6e,
++ 0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x8a,
++ 0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xc3,0xa6,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,
++ 0xff,0xc3,0xb8,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x31,0x02,
++ 0xe1,0xad,0x44,0xe0,0xc8,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,
++ 0x10,0x08,0x01,0xff,0x61,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x65,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,
++ 0x01,0xff,0x65,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x6f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,
++ 0x6f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x72,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,
++ 0x01,0xff,0x72,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x75,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,
++ 0x75,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x04,0xff,0x73,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,
++ 0x74,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,
++ 0xc8,0x9d,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x68,0xcc,0x8c,0x00,0x04,0xff,0x68,
++ 0xcc,0x8c,0x00,0xd4,0x79,0xd3,0x31,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xc6,
++ 0x9e,0x00,0x07,0x00,0x10,0x07,0x04,0xff,0xc8,0xa3,0x00,0x04,0x00,0xd1,0x0b,0x10,
++ 0x07,0x04,0xff,0xc8,0xa5,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x61,0xcc,0x87,0x00,
++ 0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x65,0xcc,
++ 0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x88,0xcc,
+ 0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,
+- 0x4f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,
+- 0x04,0xff,0x4f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0x93,0x30,0xd2,0x24,
+- 0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x87,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,
+- 0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x59,0xcc,0x84,0x00,0x04,0xff,0x79,0xcc,
+- 0x84,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0xcf,0x86,
+- 0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x08,0x00,0x09,0x00,0x09,0x00,
+- 0x09,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,
+- 0x53,0x04,0x01,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,
+- 0x11,0x04,0x04,0x00,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,0x00,
+- 0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x04,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x04,0x00,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,0x07,0x00,0xe1,0x35,0x01,0xd0,
+- 0x72,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0xe6,0xd3,0x10,0x52,0x04,0x01,0xe6,0x91,
+- 0x08,0x10,0x04,0x01,0xe6,0x01,0xe8,0x01,0xdc,0x92,0x0c,0x51,0x04,0x01,0xdc,0x10,
+- 0x04,0x01,0xe8,0x01,0xd8,0x01,0xdc,0xd4,0x2c,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,
+- 0x04,0x01,0xdc,0x01,0xca,0x10,0x04,0x01,0xca,0x01,0xdc,0x51,0x04,0x01,0xdc,0x10,
+- 0x04,0x01,0xdc,0x01,0xca,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0xca,0x01,0xdc,0x01,
+- 0xdc,0x01,0xdc,0xd3,0x08,0x12,0x04,0x01,0xdc,0x01,0x01,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x01,0x01,0x01,0xdc,0x01,0xdc,0x91,0x08,0x10,0x04,0x01,0xdc,0x01,0xe6,0x01,
+- 0xe6,0xcf,0x86,0xd5,0x7f,0xd4,0x47,0xd3,0x2e,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,
+- 0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0x10,0x04,0x01,0xe6,0x01,0xff,0xcc,
+- 0x93,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcc,0x88,0xcc,0x81,0x00,0x01,0xf0,0x10,
+- 0x04,0x04,0xe6,0x04,0xdc,0xd2,0x08,0x11,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,
+- 0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x04,0xdc,0x06,0xff,0x00,0xd3,0x18,0xd2,0x0c,
+- 0x51,0x04,0x07,0xe6,0x10,0x04,0x07,0xe6,0x07,0xdc,0x51,0x04,0x07,0xdc,0x10,0x04,
+- 0x07,0xdc,0x07,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe8,0x08,0xdc,0x10,0x04,
+- 0x08,0xdc,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xe9,0x07,0xea,0x10,0x04,0x07,0xea,
+- 0x07,0xe9,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0xea,0x10,0x04,0x04,0xe9,
+- 0x06,0xe6,0x06,0xe6,0x06,0xe6,0xd3,0x13,0x52,0x04,0x0a,0x00,0x91,0x0b,0x10,0x07,
+- 0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x01,0x00,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x06,0x01,0xff,0x3b,0x00,0x10,
+- 0x00,0xd0,0xe1,0xcf,0x86,0xd5,0x7a,0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,
+- 0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0x91,0xcc,0x81,0x00,0x01,0xff,0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,0x10,0x09,
+- 0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,
+- 0x9f,0xcc,0x81,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0xa9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,
+- 0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,
+- 0x4a,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,
+- 0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x88,0x00,
+- 0x01,0xff,0xce,0xa5,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,
+- 0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,
+- 0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,0x91,0x0f,0x10,
+- 0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x39,0x53,0x04,0x01,0x00,0xd2,0x16,0x51,0x04,
+- 0x01,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,
+- 0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,
+- 0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x0a,0x00,0xd3,
+- 0x26,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xcf,0x92,0xcc,
+- 0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0xd2,0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x01,0x00,0x04,
+- 0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd4,
+- 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x06,
+- 0x00,0x07,0x00,0x12,0x04,0x07,0x00,0x08,0x00,0xe3,0x47,0x04,0xe2,0xbe,0x02,0xe1,
+- 0x07,0x01,0xd0,0x8b,0xcf,0x86,0xd5,0x6c,0xd4,0x53,0xd3,0x30,0xd2,0x1f,0xd1,0x12,
+- 0x10,0x09,0x04,0xff,0xd0,0x95,0xcc,0x80,0x00,0x01,0xff,0xd0,0x95,0xcc,0x88,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x93,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xd0,0x86,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xd0,0x9a,0xcc,0x81,0x00,0x04,0xff,0xd0,0x98,0xcc,0x80,0x00,
+- 0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x86,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0x92,
+- 0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x98,0xcc,0x86,0x00,0x01,0x00,
+- 0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x11,0x91,0x0d,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,
+- 0x57,0x54,0x04,0x01,0x00,0xd3,0x30,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,
+- 0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x04,0x01,0x00,0x01,
+- 0xff,0xd0,0xb3,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xd1,0x96,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,
+- 0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd1,
+- 0x83,0xcc,0x86,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x1a,0x52,0x04,0x01,0x00,
+- 0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb4,0xcc,0x8f,0x00,0x01,0xff,0xd1,
+- 0xb5,0xcc,0x8f,0x00,0x01,0x00,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x94,0x24,0xd3,0x18,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0x51,0x04,0x01,0xe6,
+- 0x10,0x04,0x01,0xe6,0x0a,0xe6,0x92,0x08,0x11,0x04,0x04,0x00,0x06,0x00,0x04,0x00,
+- 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xbe,0xd4,0x4a,0xd3,0x2a,0xd2,0x1a,0xd1,0x0d,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x96,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,
+- 0xb6,0xcc,0x86,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
+- 0x06,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
+- 0x06,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,
+- 0x09,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x90,0xcc,0x86,
+- 0x00,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0x90,0xcc,0x88,
+- 0x00,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xd0,0x95,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0xd2,0x16,0x51,0x04,
+- 0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x98,0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,
+- 0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x96,0xcc,0x88,0x00,0x01,0xff,0xd0,
+- 0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x97,0xcc,0x88,0x00,0x01,0xff,0xd0,
+- 0xb7,0xcc,0x88,0x00,0xd4,0x74,0xd3,0x3a,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,
+- 0x01,0xff,0xd0,0x98,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,
+- 0x10,0x09,0x01,0xff,0xd0,0x9e,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,
+- 0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0xa8,0xcc,0x88,0x00,0x01,
+- 0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0xad,0xcc,0x88,
+- 0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x84,
+- 0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xd0,0xa3,0xcc,0x88,0x00,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x10,0x09,
+- 0x01,0xff,0xd0,0xa3,0xcc,0x8b,0x00,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0x91,0x12,
+- 0x10,0x09,0x01,0xff,0xd0,0xa7,0xcc,0x88,0x00,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,
+- 0x08,0x00,0x92,0x16,0x91,0x12,0x10,0x09,0x01,0xff,0xd0,0xab,0xcc,0x88,0x00,0x01,
+- 0xff,0xd1,0x8b,0xcc,0x88,0x00,0x09,0x00,0x09,0x00,0xd1,0x74,0xd0,0x36,0xcf,0x86,
+- 0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0xd4,0x10,0x93,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,0x0b,0x00,0x0c,0x00,0x10,0x00,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0xba,
+- 0xcf,0x86,0xd5,0x4c,0xd4,0x24,0x53,0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x14,0x00,0x01,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
+- 0x10,0x00,0x10,0x04,0x10,0x00,0x0d,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x02,0xdc,0x02,0xe6,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,0x02,0xe6,
+- 0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xde,0x02,0xdc,0x02,0xe6,0xd4,0x2c,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x08,0xdc,0x02,0xdc,0x02,0xdc,
+- 0xd2,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,0x02,0xe6,0xd1,0x08,0x10,0x04,
+- 0x02,0xe6,0x02,0xde,0x10,0x04,0x02,0xe4,0x02,0xe6,0xd3,0x20,0xd2,0x10,0xd1,0x08,
+- 0x10,0x04,0x01,0x0a,0x01,0x0b,0x10,0x04,0x01,0x0c,0x01,0x0d,0xd1,0x08,0x10,0x04,
+- 0x01,0x0e,0x01,0x0f,0x10,0x04,0x01,0x10,0x01,0x11,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x01,0x12,0x01,0x13,0x10,0x04,0x09,0x13,0x01,0x14,0xd1,0x08,0x10,0x04,0x01,0x15,
+- 0x01,0x16,0x10,0x04,0x01,0x00,0x01,0x17,0xcf,0x86,0xd5,0x28,0x94,0x24,0x93,0x20,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x18,0x10,0x04,0x01,0x19,0x01,0x00,
+- 0xd1,0x08,0x10,0x04,0x02,0xe6,0x08,0xdc,0x10,0x04,0x08,0x00,0x08,0x12,0x00,0x00,
+- 0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x93,0x10,
+- 0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xe2,0xfb,0x01,0xe1,0x2b,0x01,0xd0,0xa8,0xcf,0x86,0xd5,0x55,0xd4,0x28,0xd3,0x10,
+- 0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x0a,0x00,0xd2,0x0c,
+- 0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x08,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x07,0x00,0x07,0x00,0xd3,0x0c,0x52,0x04,0x07,0xe6,0x11,0x04,0x07,0xe6,0x0a,0xe6,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x0a,0x1e,0x0a,0x1f,0x10,0x04,0x0a,0x20,0x01,0x00,
+- 0xd1,0x09,0x10,0x05,0x0f,0xff,0x00,0x00,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0xd4,
+- 0x3d,0x93,0x39,0xd2,0x1a,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xd8,0xa7,0xd9,0x93,0x00,0x01,0xff,0xd8,0xa7,0xd9,0x94,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xd9,0x88,0xd9,0x94,0x00,0x01,0xff,0xd8,0xa7,0xd9,0x95,0x00,0x10,
+- 0x09,0x01,0xff,0xd9,0x8a,0xd9,0x94,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x86,
+- 0xd5,0x5c,0xd4,0x20,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0x1b,0xd1,0x08,0x10,0x04,0x01,0x1c,0x01,0x1d,0x10,0x04,0x01,0x1e,
+- 0x01,0x1f,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x20,0x01,0x21,0x10,0x04,
+- 0x01,0x22,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x07,0xdc,
+- 0x07,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0xe6,0x08,0xe6,0x08,0xe6,0xd1,0x08,
+- 0x10,0x04,0x08,0xdc,0x08,0xe6,0x10,0x04,0x08,0xe6,0x0c,0xdc,0xd4,0x10,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x01,0x23,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,
+- 0x11,0x04,0x04,0x00,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,
+- 0xcf,0x86,0xd5,0x5b,0xd4,0x2e,0xd3,0x1e,0x92,0x1a,0xd1,0x0d,0x10,0x09,0x01,0xff,
+- 0xdb,0x95,0xd9,0x94,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xdb,0x81,0xd9,0x94,0x00,
++ 0x6f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,
++ 0x04,0xff,0x6f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0xd3,0x27,0xe2,0x0b,
++ 0x43,0xd1,0x14,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x04,0xff,0x6f,
++ 0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x79,0xcc,0x84,0x00,0x04,0xff,0x79,
++ 0xcc,0x84,0x00,0xd2,0x13,0x51,0x04,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa5,
++ 0x00,0x08,0xff,0xc8,0xbc,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,0xc6,0x9a,
++ 0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa6,0x00,0x08,0x00,0xcf,0x86,0x95,0x5f,0x94,
++ 0x5b,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,0xc9,0x82,0x00,
++ 0x10,0x04,0x09,0x00,0x09,0xff,0xc6,0x80,0x00,0xd1,0x0e,0x10,0x07,0x09,0xff,0xca,
++ 0x89,0x00,0x09,0xff,0xca,0x8c,0x00,0x10,0x07,0x09,0xff,0xc9,0x87,0x00,0x09,0x00,
++ 0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x89,0x00,0x09,0x00,0x10,0x07,0x09,
++ 0xff,0xc9,0x8b,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x8d,0x00,0x09,
++ 0x00,0x10,0x07,0x09,0xff,0xc9,0x8f,0x00,0x09,0x00,0x01,0x00,0x01,0x00,0xd1,0x8b,
++ 0xd0,0x0c,0xcf,0x86,0xe5,0xfa,0x42,0x64,0xd9,0x42,0x01,0xe6,0xcf,0x86,0xd5,0x2a,
++ 0xe4,0x82,0x43,0xe3,0x69,0x43,0xd2,0x11,0xe1,0x48,0x43,0x10,0x07,0x01,0xff,0xcc,
++ 0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0xe1,0x4f,0x43,0x10,0x09,0x01,0xff,0xcc,0x88,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0x00,0xd4,0x0f,0x93,0x0b,0x92,0x07,0x61,0x94,
++ 0x43,0x01,0xea,0x06,0xe6,0x06,0xe6,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x0a,
++ 0xff,0xcd,0xb1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb3,0x00,0x0a,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb7,
++ 0x00,0x0a,0x00,0xd2,0x07,0x61,0x80,0x43,0x00,0x00,0x51,0x04,0x09,0x00,0x10,0x06,
++ 0x01,0xff,0x3b,0x00,0x10,0xff,0xcf,0xb3,0x00,0xe0,0x31,0x01,0xcf,0x86,0xd5,0xd3,
++ 0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,0x00,0x01,0xff,
++ 0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,
++ 0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x00,0x00,0x10,
++ 0x09,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0xd3,
++ 0x3c,0xd2,0x20,0xd1,0x12,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,
++ 0x01,0xff,0xce,0xb1,0x00,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb3,
++ 0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb4,0x00,0x01,0xff,0xce,0xb5,0x00,0x10,
++ 0x07,0x01,0xff,0xce,0xb6,0x00,0x01,0xff,0xce,0xb7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,
++ 0x07,0x01,0xff,0xce,0xb8,0x00,0x01,0xff,0xce,0xb9,0x00,0x10,0x07,0x01,0xff,0xce,
++ 0xba,0x00,0x01,0xff,0xce,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xbc,0x00,
++ 0x01,0xff,0xce,0xbd,0x00,0x10,0x07,0x01,0xff,0xce,0xbe,0x00,0x01,0xff,0xce,0xbf,
++ 0x00,0xe4,0x6e,0x43,0xd3,0x35,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcf,0x80,
++ 0x00,0x01,0xff,0xcf,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x83,0x00,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xcf,0x84,0x00,0x01,0xff,0xcf,0x85,0x00,0x10,0x07,0x01,
++ 0xff,0xcf,0x86,0x00,0x01,0xff,0xcf,0x87,0x00,0xe2,0x14,0x43,0xd1,0x0e,0x10,0x07,
++ 0x01,0xff,0xcf,0x88,0x00,0x01,0xff,0xcf,0x89,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,
++ 0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xcf,0x86,0xd5,0x94,0xd4,0x3c,
++ 0xd3,0x13,0x92,0x0f,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0x83,0x00,0x01,
++ 0x00,0x01,0x00,0xd2,0x07,0x61,0x23,0x43,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
++ 0xcf,0x89,0xcc,0x81,0x00,0x0a,0xff,0xcf,0x97,0x00,0xd3,0x2c,0xd2,0x11,0xe1,0x2f,
++ 0x43,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb8,0x00,0xd1,0x10,0x10,
++ 0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0xff,0xcf,0x86,0x00,0x10,0x07,0x01,
++ 0xff,0xcf,0x80,0x00,0x04,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xcf,0x99,
++ 0x00,0x06,0x00,0x10,0x07,0x01,0xff,0xcf,0x9b,0x00,0x04,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xcf,0x9d,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0x9f,0x00,0x04,0x00,
++ 0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa1,0x00,0x04,
++ 0x00,0x10,0x07,0x01,0xff,0xcf,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
++ 0xcf,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xa7,0x00,0x01,0x00,0xd2,0x16,
++ 0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,
++ 0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xad,0x00,0x01,0x00,0x10,
++ 0x07,0x01,0xff,0xcf,0xaf,0x00,0x01,0x00,0xd3,0x2b,0xd2,0x12,0x91,0x0e,0x10,0x07,
++ 0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xcf,0x81,0x00,0x01,0x00,0xd1,0x0e,0x10,0x07,
++ 0x05,0xff,0xce,0xb8,0x00,0x05,0xff,0xce,0xb5,0x00,0x10,0x04,0x06,0x00,0x07,0xff,
++ 0xcf,0xb8,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x07,0x00,0x07,0xff,0xcf,0xb2,0x00,
++ 0x10,0x07,0x07,0xff,0xcf,0xbb,0x00,0x07,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,
++ 0xff,0xcd,0xbb,0x00,0x10,0x07,0x08,0xff,0xcd,0xbc,0x00,0x08,0xff,0xcd,0xbd,0x00,
++ 0xe3,0xd6,0x46,0xe2,0x3d,0x05,0xe1,0x27,0x02,0xe0,0x66,0x01,0xcf,0x86,0xd5,0xf0,
++ 0xd4,0x7e,0xd3,0x40,0xd2,0x22,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0xb5,0xcc,0x80,
++ 0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x07,0x01,0xff,0xd1,0x92,0x00,0x01,
++ 0xff,0xd0,0xb3,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x94,0x00,0x01,
++ 0xff,0xd1,0x95,0x00,0x10,0x07,0x01,0xff,0xd1,0x96,0x00,0x01,0xff,0xd1,0x96,0xcc,
++ 0x88,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x98,0x00,0x01,0xff,0xd1,
++ 0x99,0x00,0x10,0x07,0x01,0xff,0xd1,0x9a,0x00,0x01,0xff,0xd1,0x9b,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,0x00,
++ 0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0xff,0xd1,0x9f,0x00,0xd3,0x38,
++ 0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xb0,0x00,0x01,0xff,0xd0,0xb1,0x00,
++ 0x10,0x07,0x01,0xff,0xd0,0xb2,0x00,0x01,0xff,0xd0,0xb3,0x00,0xd1,0x0e,0x10,0x07,
++ 0x01,0xff,0xd0,0xb4,0x00,0x01,0xff,0xd0,0xb5,0x00,0x10,0x07,0x01,0xff,0xd0,0xb6,
++ 0x00,0x01,0xff,0xd0,0xb7,0x00,0xd2,0x1e,0xd1,0x10,0x10,0x07,0x01,0xff,0xd0,0xb8,
++ 0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x10,0x07,0x01,0xff,0xd0,0xba,0x00,0x01,
++ 0xff,0xd0,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xbc,0x00,0x01,0xff,0xd0,
++ 0xbd,0x00,0x10,0x07,0x01,0xff,0xd0,0xbe,0x00,0x01,0xff,0xd0,0xbf,0x00,0xe4,0x0e,
++ 0x42,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x80,0x00,0x01,0xff,
++ 0xd1,0x81,0x00,0x10,0x07,0x01,0xff,0xd1,0x82,0x00,0x01,0xff,0xd1,0x83,0x00,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xd1,0x84,0x00,0x01,0xff,0xd1,0x85,0x00,0x10,0x07,0x01,
++ 0xff,0xd1,0x86,0x00,0x01,0xff,0xd1,0x87,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,
++ 0xff,0xd1,0x88,0x00,0x01,0xff,0xd1,0x89,0x00,0x10,0x07,0x01,0xff,0xd1,0x8a,0x00,
++ 0x01,0xff,0xd1,0x8b,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x8c,0x00,0x01,0xff,
++ 0xd1,0x8d,0x00,0x10,0x07,0x01,0xff,0xd1,0x8e,0x00,0x01,0xff,0xd1,0x8f,0x00,0xcf,
++ 0x86,0xd5,0x07,0x64,0xb8,0x41,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
++ 0x10,0x07,0x01,0xff,0xd1,0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xa3,0x00,
++ 0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,
++ 0xff,0xd1,0xa7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa9,
++ 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xd1,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xaf,0x00,0x01,0x00,
++ 0xd3,0x33,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb1,0x00,0x01,0x00,0x10,
++ 0x07,0x01,0xff,0xd1,0xb3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb5,
++ 0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,0xff,0xd1,0xb5,
++ 0xcc,0x8f,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb9,0x00,0x01,0x00,
++ 0x10,0x07,0x01,0xff,0xd1,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,
++ 0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbf,0x00,0x01,0x00,0xe0,0x41,0x01,
++ 0xcf,0x86,0xd5,0x8e,0xd4,0x36,0xd3,0x11,0xe2,0x7a,0x41,0xe1,0x71,0x41,0x10,0x07,
++ 0x01,0xff,0xd2,0x81,0x00,0x01,0x00,0xd2,0x0f,0x51,0x04,0x04,0x00,0x10,0x07,0x06,
++ 0xff,0xd2,0x8b,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,0xd2,0x8d,0x00,0x04,
++ 0x00,0x10,0x07,0x04,0xff,0xd2,0x8f,0x00,0x04,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
++ 0x10,0x07,0x01,0xff,0xd2,0x91,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x93,0x00,
++ 0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x95,0x00,0x01,0x00,0x10,0x07,0x01,
++ 0xff,0xd2,0x97,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x99,
++ 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9b,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xd2,0x9d,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9f,0x00,0x01,0x00,
++ 0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa1,0x00,0x01,
++ 0x00,0x10,0x07,0x01,0xff,0xd2,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
++ 0xd2,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa7,0x00,0x01,0x00,0xd2,0x16,
++ 0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,
++ 0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xad,0x00,0x01,0x00,0x10,
++ 0x07,0x01,0xff,0xd2,0xaf,0x00,0x01,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xd2,0xb1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xb3,0x00,0x01,0x00,
++ 0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,
++ 0xb7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb9,0x00,0x01,
++ 0x00,0x10,0x07,0x01,0xff,0xd2,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
++ 0xd2,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbf,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0xdc,0xd4,0x5a,0xd3,0x36,0xd2,0x20,0xd1,0x10,0x10,0x07,0x01,0xff,0xd3,0x8f,
++ 0x00,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb6,0xcc,0x86,
++ 0x00,0x01,0xff,0xd3,0x84,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x86,
++ 0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x88,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,
++ 0x01,0x00,0x06,0xff,0xd3,0x8a,0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x8c,0x00,
++ 0xe1,0x52,0x40,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8e,0x00,0xd3,0x41,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb0,0xcc,
++ 0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb0,0xcc,
++ 0x88,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x95,0x00,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xd0,0xb5,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0xd2,0x1d,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd3,0x99,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x99,
++ 0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xd0,0xb6,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,
++ 0xd0,0xb7,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,0x82,0xd3,0x41,
++ 0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa1,0x00,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xd0,0xb8,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x10,
++ 0x09,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0xd2,
++ 0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa9,0x00,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xd3,0xa9,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,
++ 0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,
++ 0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x41,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x01,0xff,0xd1,
++ 0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0x01,0xff,0xd1,
++ 0x83,0xcc,0x8b,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,0x01,
++ 0xff,0xd1,0x87,0xcc,0x88,0x00,0x10,0x07,0x08,0xff,0xd3,0xb7,0x00,0x08,0x00,0xd2,
++ 0x1d,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x01,0xff,0xd1,0x8b,
++ 0xcc,0x88,0x00,0x10,0x07,0x09,0xff,0xd3,0xbb,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,
++ 0x09,0xff,0xd3,0xbd,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd3,0xbf,0x00,0x09,0x00,
++ 0xe1,0x26,0x02,0xe0,0x78,0x01,0xcf,0x86,0xd5,0xb0,0xd4,0x58,0xd3,0x2c,0xd2,0x16,
++ 0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x81,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,
++ 0x83,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x85,0x00,0x06,0x00,0x10,
++ 0x07,0x06,0xff,0xd4,0x87,0x00,0x06,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,
++ 0xd4,0x89,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8b,0x00,0x06,0x00,0xd1,0x0b,
++ 0x10,0x07,0x06,0xff,0xd4,0x8d,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8f,0x00,
++ 0x06,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xd4,0x91,0x00,0x09,
++ 0x00,0x10,0x07,0x09,0xff,0xd4,0x93,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,
++ 0xd4,0x95,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x97,0x00,0x0a,0x00,0xd2,0x16,
++ 0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x99,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,
++ 0x9b,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x9d,0x00,0x0a,0x00,0x10,
++ 0x07,0x0a,0xff,0xd4,0x9f,0x00,0x0a,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
++ 0x10,0x07,0x0a,0xff,0xd4,0xa1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0xa3,0x00,
++ 0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xd4,0xa5,0x00,0x0b,0x00,0x10,0x07,0x0c,
++ 0xff,0xd4,0xa7,0x00,0x0c,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x10,0xff,0xd4,0xa9,
++ 0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xab,0x00,0x10,0x00,0xd1,0x0b,0x10,0x07,
++ 0x10,0xff,0xd4,0xad,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xaf,0x00,0x10,0x00,
++ 0xd3,0x35,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xa1,0x00,0x10,
++ 0x07,0x01,0xff,0xd5,0xa2,0x00,0x01,0xff,0xd5,0xa3,0x00,0xd1,0x0e,0x10,0x07,0x01,
++ 0xff,0xd5,0xa4,0x00,0x01,0xff,0xd5,0xa5,0x00,0x10,0x07,0x01,0xff,0xd5,0xa6,0x00,
++ 0x01,0xff,0xd5,0xa7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xa8,0x00,
++ 0x01,0xff,0xd5,0xa9,0x00,0x10,0x07,0x01,0xff,0xd5,0xaa,0x00,0x01,0xff,0xd5,0xab,
++ 0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xac,0x00,0x01,0xff,0xd5,0xad,0x00,0x10,
++ 0x07,0x01,0xff,0xd5,0xae,0x00,0x01,0xff,0xd5,0xaf,0x00,0xcf,0x86,0xe5,0xf1,0x3e,
++ 0xd4,0x70,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb0,0x00,0x01,
++ 0xff,0xd5,0xb1,0x00,0x10,0x07,0x01,0xff,0xd5,0xb2,0x00,0x01,0xff,0xd5,0xb3,0x00,
++ 0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb4,0x00,0x01,0xff,0xd5,0xb5,0x00,0x10,0x07,
++ 0x01,0xff,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,
++ 0x01,0xff,0xd5,0xb8,0x00,0x01,0xff,0xd5,0xb9,0x00,0x10,0x07,0x01,0xff,0xd5,0xba,
++ 0x00,0x01,0xff,0xd5,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xbc,0x00,0x01,
++ 0xff,0xd5,0xbd,0x00,0x10,0x07,0x01,0xff,0xd5,0xbe,0x00,0x01,0xff,0xd5,0xbf,0x00,
++ 0xe3,0x70,0x3e,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x80,0x00,0x01,0xff,
++ 0xd6,0x81,0x00,0x10,0x07,0x01,0xff,0xd6,0x82,0x00,0x01,0xff,0xd6,0x83,0x00,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xd6,0x84,0x00,0x01,0xff,0xd6,0x85,0x00,0x10,0x07,0x01,
++ 0xff,0xd6,0x86,0x00,0x00,0x00,0xe0,0x18,0x3f,0xcf,0x86,0xe5,0xa9,0x3e,0xe4,0x80,
++ 0x3e,0xe3,0x5f,0x3e,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xd5,0xa5,0xd6,0x82,0x00,0xe4,0x3e,0x25,0xe3,0xc4,0x1a,0xe2,0xf8,0x80,
++ 0xe1,0xc0,0x13,0xd0,0x1e,0xcf,0x86,0xc5,0xe4,0xf0,0x4a,0xe3,0x3b,0x46,0xe2,0xd1,
++ 0x43,0xe1,0x04,0x43,0xe0,0xc9,0x42,0xcf,0x86,0xe5,0x8e,0x42,0x64,0x71,0x42,0x0b,
++ 0x00,0xcf,0x86,0xe5,0xfa,0x01,0xe4,0xd5,0x55,0xe3,0x76,0x01,0xe2,0x76,0x53,0xd1,
++ 0x0c,0xe0,0xd7,0x52,0xcf,0x86,0x65,0x75,0x52,0x04,0x00,0xe0,0x0d,0x01,0xcf,0x86,
++ 0xd5,0x0a,0xe4,0xf8,0x52,0x63,0xe7,0x52,0x0a,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x80,0x00,0x01,0xff,0xe2,0xb4,0x81,0x00,
++ 0x10,0x08,0x01,0xff,0xe2,0xb4,0x82,0x00,0x01,0xff,0xe2,0xb4,0x83,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe2,0xb4,0x84,0x00,0x01,0xff,0xe2,0xb4,0x85,0x00,0x10,0x08,
++ 0x01,0xff,0xe2,0xb4,0x86,0x00,0x01,0xff,0xe2,0xb4,0x87,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe2,0xb4,0x88,0x00,0x01,0xff,0xe2,0xb4,0x89,0x00,0x10,0x08,
++ 0x01,0xff,0xe2,0xb4,0x8a,0x00,0x01,0xff,0xe2,0xb4,0x8b,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe2,0xb4,0x8c,0x00,0x01,0xff,0xe2,0xb4,0x8d,0x00,0x10,0x08,0x01,0xff,
++ 0xe2,0xb4,0x8e,0x00,0x01,0xff,0xe2,0xb4,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe2,0xb4,0x90,0x00,0x01,0xff,0xe2,0xb4,0x91,0x00,0x10,0x08,
++ 0x01,0xff,0xe2,0xb4,0x92,0x00,0x01,0xff,0xe2,0xb4,0x93,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe2,0xb4,0x94,0x00,0x01,0xff,0xe2,0xb4,0x95,0x00,0x10,0x08,0x01,0xff,
++ 0xe2,0xb4,0x96,0x00,0x01,0xff,0xe2,0xb4,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe2,0xb4,0x98,0x00,0x01,0xff,0xe2,0xb4,0x99,0x00,0x10,0x08,0x01,0xff,
++ 0xe2,0xb4,0x9a,0x00,0x01,0xff,0xe2,0xb4,0x9b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe2,0xb4,0x9c,0x00,0x01,0xff,0xe2,0xb4,0x9d,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,
++ 0x9e,0x00,0x01,0xff,0xe2,0xb4,0x9f,0x00,0xcf,0x86,0xe5,0x2a,0x52,0x94,0x50,0xd3,
++ 0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa0,0x00,0x01,0xff,0xe2,
++ 0xb4,0xa1,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa2,0x00,0x01,0xff,0xe2,0xb4,0xa3,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa4,0x00,0x01,0xff,0xe2,0xb4,0xa5,
++ 0x00,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xa7,0x00,0x52,0x04,0x00,0x00,0x91,
++ 0x0c,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xad,0x00,0x00,0x00,0x01,0x00,0xd2,
++ 0x1b,0xe1,0xce,0x52,0xe0,0x7f,0x52,0xcf,0x86,0x95,0x0f,0x94,0x0b,0x93,0x07,0x62,
++ 0x64,0x52,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd1,0x13,0xe0,0xa5,0x53,0xcf,
++ 0x86,0x95,0x0a,0xe4,0x7a,0x53,0x63,0x69,0x53,0x04,0x00,0x04,0x00,0xd0,0x0d,0xcf,
++ 0x86,0x95,0x07,0x64,0xf4,0x53,0x08,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,
++ 0x54,0x04,0x04,0x00,0xd3,0x07,0x62,0x01,0x54,0x04,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x11,0xff,0xe1,0x8f,0xb0,0x00,0x11,0xff,0xe1,0x8f,0xb1,0x00,0x10,0x08,0x11,
++ 0xff,0xe1,0x8f,0xb2,0x00,0x11,0xff,0xe1,0x8f,0xb3,0x00,0x91,0x10,0x10,0x08,0x11,
++ 0xff,0xe1,0x8f,0xb4,0x00,0x11,0xff,0xe1,0x8f,0xb5,0x00,0x00,0x00,0xd4,0x1c,0xe3,
++ 0x92,0x56,0xe2,0xc9,0x55,0xe1,0x8c,0x55,0xe0,0x6d,0x55,0xcf,0x86,0x95,0x0a,0xe4,
++ 0x56,0x55,0x63,0x45,0x55,0x04,0x00,0x04,0x00,0xe3,0xd2,0x01,0xe2,0xdd,0x59,0xd1,
++ 0x0c,0xe0,0xfe,0x58,0xcf,0x86,0x65,0xd7,0x58,0x0a,0x00,0xe0,0x4e,0x59,0xcf,0x86,
++ 0xd5,0xc5,0xd4,0x45,0xd3,0x31,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x12,0xff,0xd0,0xb2,
++ 0x00,0x12,0xff,0xd0,0xb4,0x00,0x10,0x07,0x12,0xff,0xd0,0xbe,0x00,0x12,0xff,0xd1,
++ 0x81,0x00,0x51,0x07,0x12,0xff,0xd1,0x82,0x00,0x10,0x07,0x12,0xff,0xd1,0x8a,0x00,
++ 0x12,0xff,0xd1,0xa3,0x00,0x92,0x10,0x91,0x0c,0x10,0x08,0x12,0xff,0xea,0x99,0x8b,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,
++ 0xff,0xe1,0x83,0x90,0x00,0x14,0xff,0xe1,0x83,0x91,0x00,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0x92,0x00,0x14,0xff,0xe1,0x83,0x93,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0x94,0x00,0x14,0xff,0xe1,0x83,0x95,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x96,
++ 0x00,0x14,0xff,0xe1,0x83,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0x98,0x00,0x14,0xff,0xe1,0x83,0x99,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x9a,
++ 0x00,0x14,0xff,0xe1,0x83,0x9b,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0x9c,
++ 0x00,0x14,0xff,0xe1,0x83,0x9d,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x9e,0x00,0x14,
++ 0xff,0xe1,0x83,0x9f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,
++ 0xff,0xe1,0x83,0xa0,0x00,0x14,0xff,0xe1,0x83,0xa1,0x00,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0xa2,0x00,0x14,0xff,0xe1,0x83,0xa3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0xa4,0x00,0x14,0xff,0xe1,0x83,0xa5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xa6,
++ 0x00,0x14,0xff,0xe1,0x83,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0xa8,0x00,0x14,0xff,0xe1,0x83,0xa9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xaa,
++ 0x00,0x14,0xff,0xe1,0x83,0xab,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xac,
++ 0x00,0x14,0xff,0xe1,0x83,0xad,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xae,0x00,0x14,
++ 0xff,0xe1,0x83,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
++ 0x83,0xb0,0x00,0x14,0xff,0xe1,0x83,0xb1,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xb2,
++ 0x00,0x14,0xff,0xe1,0x83,0xb3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xb4,
++ 0x00,0x14,0xff,0xe1,0x83,0xb5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xb6,0x00,0x14,
++ 0xff,0xe1,0x83,0xb7,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xb8,
++ 0x00,0x14,0xff,0xe1,0x83,0xb9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xba,0x00,0x00,
++ 0x00,0xd1,0x0c,0x10,0x04,0x00,0x00,0x14,0xff,0xe1,0x83,0xbd,0x00,0x10,0x08,0x14,
++ 0xff,0xe1,0x83,0xbe,0x00,0x14,0xff,0xe1,0x83,0xbf,0x00,0xe2,0x9d,0x08,0xe1,0x48,
++ 0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa5,0x00,0x01,0xff,0x61,0xcc,0xa5,0x00,0x10,
++ 0x08,0x01,0xff,0x62,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x62,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,0x00,0x10,0x08,0x01,
++ 0xff,0x62,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,0x24,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,
++ 0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x87,0x00,0x01,0xff,0x64,0xcc,0x87,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa3,0x00,0x01,0xff,0x64,0xcc,0xa3,0x00,0x10,
++ 0x08,0x01,0xff,0x64,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,0x00,0xd3,0x48,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa7,0x00,0x01,0xff,0x64,0xcc,0xa7,
++ 0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xad,0x00,0x01,0xff,0x64,0xcc,0xad,0x00,0xd1,
++ 0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x84,
++ 0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0x01,0xff,0x65,
++ 0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xad,
++ 0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0xb0,0x00,0x01,
++ 0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,
++ 0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x66,0xcc,0x87,
++ 0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x67,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,0x00,0x10,0x08,0x01,
++ 0xff,0x68,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x68,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x68,
++ 0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x68,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x68,
++ 0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,
++ 0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,0xff,0x69,0xcc,0x88,
++ 0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x81,0x00,0x01,0xff,0x6b,0xcc,0x81,0x00,0x10,
++ 0x08,0x01,0xff,0x6b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x10,0x08,0x01,
++ 0xff,0x6c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,0x24,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,
++ 0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0xb1,0x00,0x01,0xff,0x6c,0xcc,0xb1,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xad,0x00,0x01,0xff,0x6c,0xcc,0xad,0x00,0x10,
++ 0x08,0x01,0xff,0x6d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,0x00,0xcf,0x86,0xe5,
++ 0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6d,0xcc,
++ 0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6d,0xcc,0xa3,0x00,
++ 0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x87,0x00,
++ 0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa3,0x00,0x01,0xff,
++ 0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0xb1,0x00,
++ 0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xad,0x00,0x01,0xff,
++ 0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,
++ 0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,0xcc,
++ 0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,0xd2,0x28,0xd1,0x14,
++ 0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x84,0xcc,
++ 0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
++ 0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x70,0xcc,0x81,0x00,0x01,0xff,
++ 0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x70,0xcc,0x87,0x00,0x01,0xff,0x70,0xcc,
++ 0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x87,0x00,0x01,0xff,
++ 0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xa3,0x00,0x01,0xff,0x72,0xcc,
++ 0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,
++ 0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xb1,0x00,0x01,0xff,
++ 0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x73,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,
++ 0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,
++ 0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x10,0x0a,0x01,0xff,
++ 0x73,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,0x00,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,
++ 0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0x87,0x00,0x01,0xff,0x74,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xa3,0x00,0x01,0xff,0x74,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,0xb1,0x00,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xad,0x00,0x01,0xff,
++ 0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa4,0x00,0x01,0xff,0x75,0xcc,
++ 0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xb0,0x00,0x01,0xff,0x75,0xcc,
++ 0xb0,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xad,0x00,0x01,0xff,0x75,0xcc,0xad,0x00,
++ 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x01,0xff,
++ 0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,
++ 0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x76,0xcc,
++ 0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x76,0xcc,0xa3,0x00,
++ 0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x11,0x02,0xcf,0x86,0xd5,0xe2,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x80,0x00,0x01,0xff,0x77,
++ 0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x81,0x00,0x01,0xff,0x77,0xcc,0x81,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x88,0x00,0x01,0xff,0x77,0xcc,0x88,
++ 0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x87,0x00,0x01,0xff,0x77,0xcc,0x87,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0xa3,0x00,0x01,0xff,0x77,0xcc,0xa3,
++ 0x00,0x10,0x08,0x01,0xff,0x78,0xcc,0x87,0x00,0x01,0xff,0x78,0xcc,0x87,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x78,0xcc,0x88,0x00,0x01,0xff,0x78,0xcc,0x88,0x00,0x10,
++ 0x08,0x01,0xff,0x79,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,0x00,0xd3,0x33,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x82,0x00,0x01,0xff,0x7a,0xcc,0x82,
++ 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0xa3,0x00,0x01,0xff,0x7a,0xcc,0xa3,0x00,0xe1,
++ 0xc4,0x58,0x10,0x08,0x01,0xff,0x7a,0xcc,0xb1,0x00,0x01,0xff,0x7a,0xcc,0xb1,0x00,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,0x79,0xcc,
++ 0x8a,0x00,0x10,0x08,0x01,0xff,0x61,0xca,0xbe,0x00,0x02,0xff,0x73,0xcc,0x87,0x00,
++ 0x51,0x04,0x0a,0x00,0x10,0x07,0x0a,0xff,0x73,0x73,0x00,0x0a,0x00,0xd4,0x98,0xd3,
++ 0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa3,0x00,0x01,0xff,0x61,
++ 0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x89,
++ 0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x01,0xff,0x61,
++ 0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0x01,
++ 0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,
++ 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
++ 0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0xd1,
++ 0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0xa3,
++ 0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0x01,0xff,0x61,
++ 0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,
++ 0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,0x10,0x0a,0x01,
++ 0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0xd1,
++ 0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x86,
++ 0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0x01,0xff,0x61,
++ 0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa3,
++ 0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x89,0x00,0x01,
++ 0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x83,0x00,0x01,
++ 0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0x01,
++ 0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,0x90,0xd3,0x50,
++ 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x01,0xff,
++ 0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,
++ 0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,
++ 0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,
++ 0x65,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x89,0x00,0x01,0xff,0x69,0xcc,0x89,0x00,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x10,0x08,
++ 0x01,0xff,0x6f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,0x50,0xd2,0x28,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
++ 0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0x01,0xff,
++ 0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,
++ 0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,
++ 0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,0x28,0xd1,0x14,
++ 0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,0xcc,0xa3,0xcc,
++ 0x82,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
++ 0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,
++ 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,
++ 0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,0x48,0xd2,0x28,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,
++ 0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,
++ 0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xa3,0x00,
++ 0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x89,0x00,0x01,0xff,
++ 0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,
++ 0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,
++ 0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,
++ 0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,
++ 0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,
++ 0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,
++ 0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,
++ 0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x89,0x00,
++ 0x01,0xff,0x79,0xcc,0x89,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,
++ 0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbb,0x00,
++ 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbd,0x00,0x0a,0x00,0x10,0x08,
++ 0x0a,0xff,0xe1,0xbb,0xbf,0x00,0x0a,0x00,0xe1,0xbf,0x02,0xe0,0xa1,0x01,0xcf,0x86,
++ 0xd5,0xc6,0xd4,0x6c,0xd3,0x18,0xe2,0xc0,0x58,0xe1,0xa9,0x58,0x10,0x09,0x01,0xff,
++ 0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,
++ 0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x18,
++ 0xe2,0xfc,0x58,0xe1,0xe5,0x58,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x93,0x00,0x01,
++ 0xff,0xce,0xb5,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,
++ 0xcc,0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,
++ 0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,
++ 0x94,0xcc,0x81,0x00,0x00,0x00,0xd4,0x6c,0xd3,0x18,0xe2,0x26,0x59,0xe1,0x0f,0x59,
++ 0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,
++ 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,
++ 0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,
++ 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,
++ 0x82,0x00,0xd3,0x18,0xe2,0x62,0x59,0xe1,0x4b,0x59,0x10,0x09,0x01,0xff,0xce,0xb9,
++ 0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,
++ 0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x81,0x00,0x01,
++ 0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,
++ 0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,0xcf,0x86,0xd5,0xac,
++ 0xd4,0x5a,0xd3,0x18,0xe2,0x9f,0x59,0xe1,0x88,0x59,0x10,0x09,0x01,0xff,0xce,0xbf,
++ 0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,
++ 0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,0x00,0x01,
++ 0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x18,0xe2,0xc9,0x59,0xe1,
++ 0xb2,0x59,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,
++ 0x94,0x00,0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,
++ 0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,
++ 0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xe4,0x85,0x5a,0xd3,0x18,0xe2,
++ 0x04,0x5a,0xe1,0xed,0x59,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,
++ 0xcf,0x89,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,
++ 0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,
++ 0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,
++ 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
++ 0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,
++ 0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xe0,0xd9,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,
++ 0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xce,
++ 0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,
++ 0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,
++ 0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xce,
++ 0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
++ 0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
++ 0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
++ 0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,
++ 0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,
++ 0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,
++ 0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xce,0xb9,
++ 0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,
++ 0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,
++ 0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,
++ 0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,
++ 0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,
++ 0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0xce,
++ 0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd4,0xc8,0xd3,
++ 0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xce,0xb9,0x00,
++ 0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,
++ 0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0xce,0xb9,
++ 0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,
++ 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,
++ 0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,
++ 0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xce,
++ 0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,
++ 0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,
++ 0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0xce,
++ 0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,
++ 0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
++ 0xcd,0x82,0xce,0xb9,0x00,0xd3,0x49,0xd2,0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,
++ 0xb1,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,0xd1,0x0f,0x10,
++ 0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
++ 0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x81,0x00,0xe1,0xa5,0x5a,0x10,0x09,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,0x01,0x00,
++ 0xcf,0x86,0xd5,0xbd,0xd4,0x7e,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x80,0xce,
++ 0xb9,0x00,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,
++ 0xb7,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcd,0x82,
++ 0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,0x09,
++ 0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0xe1,0xb4,
++ 0x5a,0x10,0x09,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0x01,0xff,0xe1,0xbe,0xbf,0xcc,
++ 0x80,0x00,0xd3,0x18,0xe2,0xda,0x5a,0xe1,0xc3,0x5a,0x10,0x09,0x01,0xff,0xce,0xb9,
++ 0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0xe2,0xfe,0x5a,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,
++ 0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd4,
++ 0x51,0xd3,0x18,0xe2,0x21,0x5b,0xe1,0x0a,0x5b,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,
++ 0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0x10,0x09,0x01,
++ 0xff,0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0xe1,0x41,0x5b,
++ 0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,
++ 0xd3,0x3b,0xd2,0x18,0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,
++ 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
++ 0xcf,0x89,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,
++ 0x82,0x00,0x01,0xff,0xcf,0x89,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,
++ 0x09,0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0xe1,
++ 0x4b,0x5b,0x10,0x09,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0x01,0xff,0xc2,0xb4,0x00,
++ 0xe0,0xa2,0x67,0xcf,0x86,0xe5,0x24,0x02,0xe4,0x26,0x01,0xe3,0x1b,0x5e,0xd2,0x2b,
++ 0xe1,0xf5,0x5b,0xe0,0x7a,0x5b,0xcf,0x86,0xe5,0x5f,0x5b,0x94,0x1c,0x93,0x18,0x92,
++ 0x14,0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd1,0xd6,0xd0,0x46,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x07,0x01,0xff,0xcf,0x89,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,0x00,0x10,0x06,
++ 0x01,0xff,0x6b,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x01,0x00,0xe3,0xba,0x5c,0x92,
++ 0x10,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0x8e,0x00,0x01,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x0a,0xe4,0xd7,0x5c,0x63,0xc2,0x5c,0x06,0x00,0x94,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb0,0x00,0x01,0xff,0xe2,
++ 0x85,0xb1,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb2,0x00,0x01,0xff,0xe2,0x85,0xb3,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb4,0x00,0x01,0xff,0xe2,0x85,0xb5,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb6,0x00,0x01,0xff,0xe2,0x85,0xb7,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb8,0x00,0x01,0xff,0xe2,0x85,0xb9,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xba,0x00,0x01,0xff,0xe2,0x85,0xbb,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xbc,0x00,0x01,0xff,0xe2,0x85,0xbd,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0x85,0xbe,0x00,0x01,0xff,0xe2,0x85,0xbf,0x00,0x01,0x00,0xe0,
++ 0xc9,0x5c,0xcf,0x86,0xe5,0xa8,0x5c,0xe4,0x87,0x5c,0xe3,0x76,0x5c,0xe2,0x69,0x5c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0xff,0xe2,0x86,0x84,0x00,0xe3,0xb8,
++ 0x60,0xe2,0x85,0x60,0xd1,0x0c,0xe0,0x32,0x60,0xcf,0x86,0x65,0x13,0x60,0x01,0x00,
++ 0xd0,0x62,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x18,0x52,0x04,
++ 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x90,0x00,0x01,0xff,
++ 0xe2,0x93,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x92,0x00,
++ 0x01,0xff,0xe2,0x93,0x93,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x94,0x00,0x01,0xff,
++ 0xe2,0x93,0x95,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x96,0x00,0x01,0xff,
++ 0xe2,0x93,0x97,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x98,0x00,0x01,0xff,0xe2,0x93,
++ 0x99,0x00,0xcf,0x86,0xe5,0xec,0x5f,0x94,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0x9a,0x00,0x01,0xff,0xe2,0x93,0x9b,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0x9c,0x00,0x01,0xff,0xe2,0x93,0x9d,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0x9e,0x00,0x01,0xff,0xe2,0x93,0x9f,0x00,0x10,0x08,0x01,0xff,0xe2,
++ 0x93,0xa0,0x00,0x01,0xff,0xe2,0x93,0xa1,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa2,0x00,0x01,0xff,0xe2,0x93,0xa3,0x00,0x10,0x08,0x01,0xff,0xe2,
++ 0x93,0xa4,0x00,0x01,0xff,0xe2,0x93,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,
++ 0x93,0xa6,0x00,0x01,0xff,0xe2,0x93,0xa7,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0xa8,
++ 0x00,0x01,0xff,0xe2,0x93,0xa9,0x00,0x01,0x00,0xd4,0x0c,0xe3,0xc8,0x61,0xe2,0xc1,
++ 0x61,0xcf,0x06,0x04,0x00,0xe3,0xa1,0x64,0xe2,0x94,0x63,0xe1,0x2e,0x02,0xe0,0x84,
++ 0x01,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb0,0x00,0x08,0xff,0xe2,0xb0,0xb1,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xb2,0x00,0x08,0xff,0xe2,0xb0,0xb3,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xb4,0x00,0x08,0xff,0xe2,0xb0,0xb5,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
++ 0xb6,0x00,0x08,0xff,0xe2,0xb0,0xb7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xb8,0x00,0x08,0xff,0xe2,0xb0,0xb9,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
++ 0xba,0x00,0x08,0xff,0xe2,0xb0,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb0,
++ 0xbc,0x00,0x08,0xff,0xe2,0xb0,0xbd,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,0xbe,0x00,
++ 0x08,0xff,0xe2,0xb0,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x80,0x00,0x08,0xff,0xe2,0xb1,0x81,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x82,0x00,0x08,0xff,0xe2,0xb1,0x83,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x84,0x00,0x08,0xff,0xe2,0xb1,0x85,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x86,0x00,
++ 0x08,0xff,0xe2,0xb1,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x88,0x00,0x08,0xff,0xe2,0xb1,0x89,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8a,0x00,
++ 0x08,0xff,0xe2,0xb1,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8c,0x00,
++ 0x08,0xff,0xe2,0xb1,0x8d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8e,0x00,0x08,0xff,
++ 0xe2,0xb1,0x8f,0x00,0x94,0x7c,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x90,0x00,0x08,0xff,0xe2,0xb1,0x91,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x92,0x00,0x08,0xff,0xe2,0xb1,0x93,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x94,0x00,0x08,0xff,0xe2,0xb1,0x95,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x96,0x00,
++ 0x08,0xff,0xe2,0xb1,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x98,0x00,0x08,0xff,0xe2,0xb1,0x99,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9a,0x00,
++ 0x08,0xff,0xe2,0xb1,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9c,0x00,
++ 0x08,0xff,0xe2,0xb1,0x9d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9e,0x00,0x00,0x00,
++ 0x08,0x00,0xcf,0x86,0xd5,0x07,0x64,0x84,0x61,0x08,0x00,0xd4,0x63,0xd3,0x32,0xd2,
++ 0x1b,0xd1,0x0c,0x10,0x08,0x09,0xff,0xe2,0xb1,0xa1,0x00,0x09,0x00,0x10,0x07,0x09,
++ 0xff,0xc9,0xab,0x00,0x09,0xff,0xe1,0xb5,0xbd,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,
++ 0xc9,0xbd,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xa8,0x00,0xd2,
++ 0x18,0xd1,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xaa,0x00,0x10,0x04,0x09,
++ 0x00,0x09,0xff,0xe2,0xb1,0xac,0x00,0xd1,0x0b,0x10,0x04,0x09,0x00,0x0a,0xff,0xc9,
++ 0x91,0x00,0x10,0x07,0x0a,0xff,0xc9,0xb1,0x00,0x0a,0xff,0xc9,0x90,0x00,0xd3,0x27,
++ 0xd2,0x17,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xc9,0x92,0x00,0x0a,0x00,0x10,0x08,0x0a,
++ 0xff,0xe2,0xb1,0xb3,0x00,0x0a,0x00,0x91,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,
++ 0xb1,0xb6,0x00,0x09,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x07,0x0b,
++ 0xff,0xc8,0xbf,0x00,0x0b,0xff,0xc9,0x80,0x00,0xe0,0x83,0x01,0xcf,0x86,0xd5,0xc0,
++ 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x81,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x83,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x87,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x89,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8f,0x00,0x08,0x00,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x91,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x97,0x00,0x08,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x99,0x00,0x08,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x9f,0x00,0x08,0x00,0xd4,0x60,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa1,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0xa3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0xa5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa7,0x00,0x08,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa9,0x00,0x08,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0xab,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0xad,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xaf,0x00,0x08,0x00,0xd3,0x30,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb1,0x00,0x08,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0xb3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0xb5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb7,0x00,0x08,0x00,0xd2,0x18,
++ 0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb9,0x00,0x08,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0xbb,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbd,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbf,0x00,0x08,0x00,0xcf,0x86,0xd5,0xc0,
++ 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x81,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x83,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb3,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x87,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x89,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb3,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8f,0x00,0x08,0x00,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x91,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb3,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x97,0x00,0x08,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x99,0x00,0x08,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb3,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,
++ 0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x9f,0x00,0x08,0x00,0xd4,0x3b,
++ 0xd3,0x1c,0x92,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa1,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0xa3,0x00,0x08,0x00,0x08,0x00,0xd2,0x10,0x51,0x04,
++ 0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0xff,0xe2,0xb3,0xac,0x00,0xe1,0xd0,0x5e,0x10,
++ 0x04,0x0b,0x00,0x0b,0xff,0xe2,0xb3,0xae,0x00,0xe3,0xd5,0x5e,0x92,0x10,0x51,0x04,
++ 0x0b,0xe6,0x10,0x08,0x0d,0xff,0xe2,0xb3,0xb3,0x00,0x0d,0x00,0x00,0x00,0xe2,0x98,
++ 0x08,0xd1,0x0b,0xe0,0x8d,0x66,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe0,0xe1,0x6b,0xcf,
++ 0x86,0xe5,0xa7,0x05,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x0c,0xe2,0x74,0x67,0xe1,
++ 0x0b,0x67,0xcf,0x06,0x04,0x00,0xe2,0xdb,0x01,0xe1,0x26,0x01,0xd0,0x09,0xcf,0x86,
++ 0x65,0x70,0x67,0x0a,0x00,0xcf,0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x99,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x85,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x8b,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x8d,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x93,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x95,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x97,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x99,0x99,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x9b,0x00,0x0a,
++ 0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,
++ 0xff,0xea,0x99,0x9f,0x00,0x0a,0x00,0xe4,0xd9,0x66,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x0c,0xff,0xea,0x99,0xa1,0x00,0x0c,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,
++ 0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0xa5,0x00,0x0a,0x00,
++ 0x10,0x08,0x0a,0xff,0xea,0x99,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0a,0xff,0xea,0x99,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0xab,0x00,
++ 0x0a,0x00,0xe1,0x88,0x66,0x10,0x08,0x0a,0xff,0xea,0x99,0xad,0x00,0x0a,0x00,0xe0,
++ 0xb1,0x66,0xcf,0x86,0x95,0xab,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x83,0x00,
++ 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x85,0x00,0x0a,0x00,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,
++ 0xea,0x9a,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8b,0x00,0x0a,0x00,
++ 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
++ 0xea,0x9a,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,
++ 0xea,0x9a,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x93,0x00,0x0a,0x00,
++ 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
++ 0xea,0x9a,0x97,0x00,0x0a,0x00,0xe2,0x0e,0x66,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,
++ 0x9a,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9a,0x9b,0x00,0x10,0x00,0x0b,
++ 0x00,0xe1,0x10,0x02,0xd0,0xb9,0xcf,0x86,0xd5,0x07,0x64,0x1a,0x66,0x08,0x00,0xd4,
++ 0x58,0xd3,0x28,0xd2,0x10,0x51,0x04,0x09,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa3,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa5,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9c,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xab,0x00,0x0a,
++ 0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,
++ 0xff,0xea,0x9c,0xaf,0x00,0x0a,0x00,0xd3,0x28,0xd2,0x10,0x51,0x04,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xb3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9c,0xb5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb7,0x00,0x0a,0x00,0xd2,
++ 0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb9,0x00,0x0a,0x00,0x10,0x08,0x0a,
++ 0xff,0xea,0x9c,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbd,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbf,0x00,0x0a,0x00,0xcf,0x86,0xd5,
++ 0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x81,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0x85,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x87,
++ 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x89,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8f,0x00,0x0a,
++ 0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x91,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x97,0x00,0x0a,
++ 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x99,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0x9b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9d,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9f,0x00,0x0a,0x00,0xd4,
++ 0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa1,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0xa5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa7,0x00,0x0a,
++ 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa9,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0xab,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9d,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xaf,0x00,0x0a,0x00,0x53,
++ 0x04,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xba,
++ 0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xbc,0x00,0xd1,0x0c,0x10,0x04,0x0a,
++ 0x00,0x0a,0xff,0xe1,0xb5,0xb9,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xbf,0x00,0x0a,
++ 0x00,0xe0,0x71,0x01,0xcf,0x86,0xd5,0xa6,0xd4,0x4e,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x0a,0xff,0xea,0x9e,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9e,
++ 0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x85,0x00,0x0a,0x00,
++ 0x10,0x08,0x0a,0xff,0xea,0x9e,0x87,0x00,0x0a,0x00,0xd2,0x10,0x51,0x04,0x0a,0x00,
++ 0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9e,0x8c,0x00,0xe1,0x16,0x64,0x10,0x04,0x0a,
++ 0x00,0x0c,0xff,0xc9,0xa5,0x00,0xd3,0x28,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,
++ 0xea,0x9e,0x91,0x00,0x0c,0x00,0x10,0x08,0x0d,0xff,0xea,0x9e,0x93,0x00,0x0d,0x00,
++ 0x51,0x04,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x97,0x00,0x10,0x00,0xd2,0x18,
++ 0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,
++ 0xea,0x9e,0x9b,0x00,0x10,0x00,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x9d,0x00,
++ 0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x9f,0x00,0x10,0x00,0xd4,0x63,0xd3,0x30,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa1,0x00,0x0c,0x00,0x10,0x08,
++ 0x0c,0xff,0xea,0x9e,0xa3,0x00,0x0c,0x00,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,
++ 0xa5,0x00,0x0c,0x00,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa7,0x00,0x0c,0x00,0xd2,0x1a,
++ 0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa9,0x00,0x0c,0x00,0x10,0x07,0x0d,0xff,
++ 0xc9,0xa6,0x00,0x10,0xff,0xc9,0x9c,0x00,0xd1,0x0e,0x10,0x07,0x10,0xff,0xc9,0xa1,
++ 0x00,0x10,0xff,0xc9,0xac,0x00,0x10,0x07,0x12,0xff,0xc9,0xaa,0x00,0x14,0x00,0xd3,
++ 0x35,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x10,0xff,0xca,0x9e,0x00,0x10,0xff,0xca,0x87,
++ 0x00,0x10,0x07,0x11,0xff,0xca,0x9d,0x00,0x11,0xff,0xea,0xad,0x93,0x00,0xd1,0x0c,
++ 0x10,0x08,0x11,0xff,0xea,0x9e,0xb5,0x00,0x11,0x00,0x10,0x08,0x11,0xff,0xea,0x9e,
++ 0xb7,0x00,0x11,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x14,0xff,0xea,0x9e,0xb9,0x00,
++ 0x14,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbb,0x00,0x15,0x00,0xd1,0x0c,0x10,0x08,
++ 0x15,0xff,0xea,0x9e,0xbd,0x00,0x15,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbf,0x00,
++ 0x15,0x00,0xcf,0x86,0xe5,0x50,0x63,0x94,0x2f,0x93,0x2b,0xd2,0x10,0x51,0x04,0x00,
++ 0x00,0x10,0x08,0x15,0xff,0xea,0x9f,0x83,0x00,0x15,0x00,0xd1,0x0f,0x10,0x08,0x15,
++ 0xff,0xea,0x9e,0x94,0x00,0x15,0xff,0xca,0x82,0x00,0x10,0x08,0x15,0xff,0xe1,0xb6,
++ 0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe4,0x30,0x66,0xd3,0x1d,0xe2,0xd7,0x63,
++ 0xe1,0x86,0x63,0xe0,0x73,0x63,0xcf,0x86,0xe5,0x54,0x63,0x94,0x0b,0x93,0x07,0x62,
++ 0x3f,0x63,0x08,0x00,0x08,0x00,0x08,0x00,0xd2,0x0f,0xe1,0xd6,0x64,0xe0,0xa3,0x64,
++ 0xcf,0x86,0x65,0x88,0x64,0x0a,0x00,0xd1,0xab,0xd0,0x1a,0xcf,0x86,0xe5,0x93,0x65,
++ 0xe4,0x76,0x65,0xe3,0x5d,0x65,0xe2,0x50,0x65,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,
++ 0x00,0x0c,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x0b,0x93,0x07,0x62,0xa3,0x65,
++ 0x11,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa0,0x00,0x11,0xff,0xe1,0x8e,0xa1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa2,0x00,
++ 0x11,0xff,0xe1,0x8e,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa4,0x00,
++ 0x11,0xff,0xe1,0x8e,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa6,0x00,0x11,0xff,
++ 0xe1,0x8e,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa8,0x00,
++ 0x11,0xff,0xe1,0x8e,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xaa,0x00,0x11,0xff,
++ 0xe1,0x8e,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xac,0x00,0x11,0xff,
++ 0xe1,0x8e,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xae,0x00,0x11,0xff,0xe1,0x8e,
++ 0xaf,0x00,0xe0,0x2e,0x65,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb0,0x00,0x11,0xff,0xe1,0x8e,0xb1,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb2,0x00,0x11,0xff,0xe1,0x8e,0xb3,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb4,0x00,0x11,0xff,0xe1,0x8e,0xb5,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8e,0xb6,0x00,0x11,0xff,0xe1,0x8e,0xb7,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb8,0x00,0x11,0xff,0xe1,0x8e,0xb9,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8e,0xba,0x00,0x11,0xff,0xe1,0x8e,0xbb,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8e,0xbc,0x00,0x11,0xff,0xe1,0x8e,0xbd,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8e,0xbe,0x00,0x11,0xff,0xe1,0x8e,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x80,0x00,0x11,0xff,0xe1,0x8f,0x81,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x82,0x00,0x11,0xff,0xe1,0x8f,0x83,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x84,0x00,0x11,0xff,0xe1,0x8f,0x85,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x86,0x00,0x11,0xff,0xe1,0x8f,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x88,0x00,0x11,0xff,0xe1,0x8f,0x89,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x8a,0x00,0x11,0xff,0xe1,0x8f,0x8b,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x8c,0x00,0x11,0xff,0xe1,0x8f,0x8d,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0x8e,0x00,0x11,0xff,0xe1,0x8f,0x8f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x90,0x00,0x11,0xff,0xe1,0x8f,0x91,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x92,0x00,0x11,0xff,0xe1,0x8f,0x93,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x94,0x00,0x11,0xff,0xe1,0x8f,0x95,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x96,0x00,0x11,0xff,0xe1,0x8f,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x98,0x00,0x11,0xff,0xe1,0x8f,0x99,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x9a,0x00,0x11,0xff,0xe1,0x8f,0x9b,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x9c,0x00,0x11,0xff,0xe1,0x8f,0x9d,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0x9e,0x00,0x11,0xff,0xe1,0x8f,0x9f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa0,0x00,0x11,0xff,0xe1,0x8f,0xa1,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xa2,0x00,0x11,0xff,0xe1,0x8f,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xa4,0x00,0x11,0xff,0xe1,0x8f,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0xa6,0x00,0x11,0xff,0xe1,0x8f,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xa8,0x00,0x11,0xff,0xe1,0x8f,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0xaa,0x00,0x11,0xff,0xe1,0x8f,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0xac,0x00,0x11,0xff,0xe1,0x8f,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,0xae,0x00,
++ 0x11,0xff,0xe1,0x8f,0xaf,0x00,0xd1,0x0c,0xe0,0x67,0x63,0xcf,0x86,0xcf,0x06,0x02,
++ 0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,
++ 0x01,0x00,0xd4,0xae,0xd3,0x09,0xe2,0xd0,0x63,0xcf,0x06,0x01,0x00,0xd2,0x27,0xe1,
++ 0x9b,0x6f,0xe0,0xa2,0x6d,0xcf,0x86,0xe5,0xbb,0x6c,0xe4,0x4a,0x6c,0xe3,0x15,0x6c,
++ 0xe2,0xf4,0x6b,0xe1,0xe3,0x6b,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,0x01,0xff,
++ 0xe5,0xba,0xa6,0x00,0xe1,0xf0,0x73,0xe0,0x64,0x73,0xcf,0x86,0xe5,0x9e,0x72,0xd4,
++ 0x3b,0x93,0x37,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x01,0xff,0x66,0x66,0x00,0x01,0xff,
++ 0x66,0x69,0x00,0x10,0x07,0x01,0xff,0x66,0x6c,0x00,0x01,0xff,0x66,0x66,0x69,0x00,
++ 0xd1,0x0f,0x10,0x08,0x01,0xff,0x66,0x66,0x6c,0x00,0x01,0xff,0x73,0x74,0x00,0x10,
++ 0x07,0x01,0xff,0x73,0x74,0x00,0x00,0x00,0x00,0x00,0xe3,0x44,0x72,0xd2,0x11,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xb6,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xd5,0xb4,0xd5,0xa5,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xab,0x00,
++ 0x10,0x09,0x01,0xff,0xd5,0xbe,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xad,0x00,
++ 0xd3,0x09,0xe2,0xbc,0x73,0xcf,0x06,0x01,0x00,0xd2,0x12,0xe1,0xab,0x74,0xe0,0x3c,
++ 0x74,0xcf,0x86,0xe5,0x19,0x74,0x64,0x08,0x74,0x06,0x00,0xe1,0x11,0x75,0xe0,0xde,
++ 0x74,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x7c,0xd3,0x3c,0xd2,
++ 0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xef,0xbd,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0xef,0xbd,0x82,0x00,0x01,0xff,0xef,0xbd,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xef,0xbd,0x84,0x00,0x01,0xff,0xef,0xbd,0x85,0x00,0x10,0x08,0x01,0xff,0xef,
++ 0xbd,0x86,0x00,0x01,0xff,0xef,0xbd,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xef,0xbd,0x88,0x00,0x01,0xff,0xef,0xbd,0x89,0x00,0x10,0x08,0x01,0xff,0xef,
++ 0xbd,0x8a,0x00,0x01,0xff,0xef,0xbd,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
++ 0xbd,0x8c,0x00,0x01,0xff,0xef,0xbd,0x8d,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x8e,
++ 0x00,0x01,0xff,0xef,0xbd,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xef,0xbd,0x90,0x00,0x01,0xff,0xef,0xbd,0x91,0x00,0x10,0x08,0x01,0xff,0xef,
++ 0xbd,0x92,0x00,0x01,0xff,0xef,0xbd,0x93,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
++ 0xbd,0x94,0x00,0x01,0xff,0xef,0xbd,0x95,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x96,
++ 0x00,0x01,0xff,0xef,0xbd,0x97,0x00,0x92,0x1c,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
++ 0xbd,0x98,0x00,0x01,0xff,0xef,0xbd,0x99,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x9a,
++ 0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0xd9,0xb2,0xe1,0xc3,0xaf,0xe0,0x40,0xae,0xcf,
++ 0x86,0xe5,0xe4,0x9a,0xc4,0xe3,0xc1,0x07,0xe2,0x62,0x06,0xe1,0x79,0x85,0xe0,0x09,
++ 0x05,0xcf,0x86,0xe5,0xfb,0x02,0xd4,0x1c,0xe3,0xe7,0x75,0xe2,0x3e,0x75,0xe1,0x19,
++ 0x75,0xe0,0xf2,0x74,0xcf,0x86,0xe5,0xbf,0x74,0x94,0x07,0x63,0xaa,0x74,0x07,0x00,
++ 0x07,0x00,0xe3,0x93,0x77,0xe2,0x58,0x77,0xe1,0x77,0x01,0xe0,0xf0,0x76,0xcf,0x86,
++ 0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
++ 0x90,0x90,0xa8,0x00,0x05,0xff,0xf0,0x90,0x90,0xa9,0x00,0x10,0x09,0x05,0xff,0xf0,
++ 0x90,0x90,0xaa,0x00,0x05,0xff,0xf0,0x90,0x90,0xab,0x00,0xd1,0x12,0x10,0x09,0x05,
++ 0xff,0xf0,0x90,0x90,0xac,0x00,0x05,0xff,0xf0,0x90,0x90,0xad,0x00,0x10,0x09,0x05,
++ 0xff,0xf0,0x90,0x90,0xae,0x00,0x05,0xff,0xf0,0x90,0x90,0xaf,0x00,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb0,0x00,0x05,0xff,0xf0,0x90,0x90,0xb1,
++ 0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb2,0x00,0x05,0xff,0xf0,0x90,0x90,0xb3,
++ 0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb4,0x00,0x05,0xff,0xf0,0x90,
++ 0x90,0xb5,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb6,0x00,0x05,0xff,0xf0,0x90,
++ 0x90,0xb7,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,
++ 0xb8,0x00,0x05,0xff,0xf0,0x90,0x90,0xb9,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,
++ 0xba,0x00,0x05,0xff,0xf0,0x90,0x90,0xbb,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
++ 0x90,0x90,0xbc,0x00,0x05,0xff,0xf0,0x90,0x90,0xbd,0x00,0x10,0x09,0x05,0xff,0xf0,
++ 0x90,0x90,0xbe,0x00,0x05,0xff,0xf0,0x90,0x90,0xbf,0x00,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x05,0xff,0xf0,0x90,0x91,0x80,0x00,0x05,0xff,0xf0,0x90,0x91,0x81,0x00,0x10,
++ 0x09,0x05,0xff,0xf0,0x90,0x91,0x82,0x00,0x05,0xff,0xf0,0x90,0x91,0x83,0x00,0xd1,
++ 0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x84,0x00,0x05,0xff,0xf0,0x90,0x91,0x85,
++ 0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x86,0x00,0x05,0xff,0xf0,0x90,0x91,0x87,
++ 0x00,0x94,0x4c,0x93,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,
++ 0x88,0x00,0x05,0xff,0xf0,0x90,0x91,0x89,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,
++ 0x8a,0x00,0x05,0xff,0xf0,0x90,0x91,0x8b,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
++ 0x90,0x91,0x8c,0x00,0x05,0xff,0xf0,0x90,0x91,0x8d,0x00,0x10,0x09,0x07,0xff,0xf0,
++ 0x90,0x91,0x8e,0x00,0x07,0xff,0xf0,0x90,0x91,0x8f,0x00,0x05,0x00,0x05,0x00,0xd0,
++ 0xa0,0xcf,0x86,0xd5,0x07,0x64,0x98,0x75,0x07,0x00,0xd4,0x07,0x63,0xa5,0x75,0x07,
++ 0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0x98,0x00,
++ 0x12,0xff,0xf0,0x90,0x93,0x99,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0x9a,0x00,
++ 0x12,0xff,0xf0,0x90,0x93,0x9b,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,
++ 0x9c,0x00,0x12,0xff,0xf0,0x90,0x93,0x9d,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,
++ 0x9e,0x00,0x12,0xff,0xf0,0x90,0x93,0x9f,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,
++ 0xff,0xf0,0x90,0x93,0xa0,0x00,0x12,0xff,0xf0,0x90,0x93,0xa1,0x00,0x10,0x09,0x12,
++ 0xff,0xf0,0x90,0x93,0xa2,0x00,0x12,0xff,0xf0,0x90,0x93,0xa3,0x00,0xd1,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x90,0x93,0xa4,0x00,0x12,0xff,0xf0,0x90,0x93,0xa5,0x00,0x10,
++ 0x09,0x12,0xff,0xf0,0x90,0x93,0xa6,0x00,0x12,0xff,0xf0,0x90,0x93,0xa7,0x00,0xcf,
++ 0x86,0xe5,0x2e,0x75,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,
++ 0xf0,0x90,0x93,0xa8,0x00,0x12,0xff,0xf0,0x90,0x93,0xa9,0x00,0x10,0x09,0x12,0xff,
++ 0xf0,0x90,0x93,0xaa,0x00,0x12,0xff,0xf0,0x90,0x93,0xab,0x00,0xd1,0x12,0x10,0x09,
++ 0x12,0xff,0xf0,0x90,0x93,0xac,0x00,0x12,0xff,0xf0,0x90,0x93,0xad,0x00,0x10,0x09,
++ 0x12,0xff,0xf0,0x90,0x93,0xae,0x00,0x12,0xff,0xf0,0x90,0x93,0xaf,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb0,0x00,0x12,0xff,0xf0,0x90,0x93,
++ 0xb1,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb2,0x00,0x12,0xff,0xf0,0x90,0x93,
++ 0xb3,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb4,0x00,0x12,0xff,0xf0,
++ 0x90,0x93,0xb5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb6,0x00,0x12,0xff,0xf0,
++ 0x90,0x93,0xb7,0x00,0x93,0x28,0x92,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,
++ 0x93,0xb8,0x00,0x12,0xff,0xf0,0x90,0x93,0xb9,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,
++ 0x93,0xba,0x00,0x12,0xff,0xf0,0x90,0x93,0xbb,0x00,0x00,0x00,0x12,0x00,0xd4,0x1f,
++ 0xe3,0x47,0x76,0xe2,0xd2,0x75,0xe1,0x71,0x75,0xe0,0x52,0x75,0xcf,0x86,0xe5,0x1f,
++ 0x75,0x94,0x0a,0xe3,0x0a,0x75,0x62,0x01,0x75,0x07,0x00,0x07,0x00,0xe3,0x46,0x78,
++ 0xe2,0x17,0x78,0xd1,0x09,0xe0,0xb4,0x77,0xcf,0x06,0x0b,0x00,0xe0,0xe7,0x77,0xcf,
++ 0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,
++ 0xf0,0x90,0xb3,0x80,0x00,0x11,0xff,0xf0,0x90,0xb3,0x81,0x00,0x10,0x09,0x11,0xff,
++ 0xf0,0x90,0xb3,0x82,0x00,0x11,0xff,0xf0,0x90,0xb3,0x83,0x00,0xd1,0x12,0x10,0x09,
++ 0x11,0xff,0xf0,0x90,0xb3,0x84,0x00,0x11,0xff,0xf0,0x90,0xb3,0x85,0x00,0x10,0x09,
++ 0x11,0xff,0xf0,0x90,0xb3,0x86,0x00,0x11,0xff,0xf0,0x90,0xb3,0x87,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x88,0x00,0x11,0xff,0xf0,0x90,0xb3,
++ 0x89,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8a,0x00,0x11,0xff,0xf0,0x90,0xb3,
++ 0x8b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8c,0x00,0x11,0xff,0xf0,
++ 0x90,0xb3,0x8d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8e,0x00,0x11,0xff,0xf0,
++ 0x90,0xb3,0x8f,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,
++ 0xb3,0x90,0x00,0x11,0xff,0xf0,0x90,0xb3,0x91,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,
++ 0xb3,0x92,0x00,0x11,0xff,0xf0,0x90,0xb3,0x93,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,
++ 0xf0,0x90,0xb3,0x94,0x00,0x11,0xff,0xf0,0x90,0xb3,0x95,0x00,0x10,0x09,0x11,0xff,
++ 0xf0,0x90,0xb3,0x96,0x00,0x11,0xff,0xf0,0x90,0xb3,0x97,0x00,0xd2,0x24,0xd1,0x12,
++ 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x98,0x00,0x11,0xff,0xf0,0x90,0xb3,0x99,0x00,
++ 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9a,0x00,0x11,0xff,0xf0,0x90,0xb3,0x9b,0x00,
++ 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9c,0x00,0x11,0xff,0xf0,0x90,0xb3,
++ 0x9d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9e,0x00,0x11,0xff,0xf0,0x90,0xb3,
++ 0x9f,0x00,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,
++ 0xb3,0xa0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa1,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,
++ 0xb3,0xa2,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa3,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,
++ 0xf0,0x90,0xb3,0xa4,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa5,0x00,0x10,0x09,0x11,0xff,
++ 0xf0,0x90,0xb3,0xa6,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa7,0x00,0xd2,0x24,0xd1,0x12,
++ 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xa8,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa9,0x00,
++ 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xaa,0x00,0x11,0xff,0xf0,0x90,0xb3,0xab,0x00,
++ 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xac,0x00,0x11,0xff,0xf0,0x90,0xb3,
++ 0xad,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xae,0x00,0x11,0xff,0xf0,0x90,0xb3,
++ 0xaf,0x00,0x93,0x23,0x92,0x1f,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xb0,
++ 0x00,0x11,0xff,0xf0,0x90,0xb3,0xb1,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xb2,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x15,0xe4,0xf9,0x7a,0xe3,0x03,
++ 0x79,0xe2,0xfc,0x77,0xe1,0x4c,0x77,0xe0,0x05,0x77,0xcf,0x06,0x0c,0x00,0xe4,0x53,
++ 0x7e,0xe3,0xac,0x7d,0xe2,0x55,0x7d,0xd1,0x0c,0xe0,0x1a,0x7d,0xcf,0x86,0x65,0xfb,
++ 0x7c,0x14,0x00,0xe0,0x1e,0x7d,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x90,0xd3,0x48,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x80,0x00,0x10,0xff,0xf0,
++ 0x91,0xa3,0x81,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x82,0x00,0x10,0xff,0xf0,
++ 0x91,0xa3,0x83,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x84,0x00,0x10,
++ 0xff,0xf0,0x91,0xa3,0x85,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x86,0x00,0x10,
++ 0xff,0xf0,0x91,0xa3,0x87,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,
++ 0xa3,0x88,0x00,0x10,0xff,0xf0,0x91,0xa3,0x89,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,
++ 0xa3,0x8a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8b,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,
++ 0xf0,0x91,0xa3,0x8c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8d,0x00,0x10,0x09,0x10,0xff,
++ 0xf0,0x91,0xa3,0x8e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8f,0x00,0xd3,0x48,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x90,0x00,0x10,0xff,0xf0,0x91,0xa3,
++ 0x91,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x92,0x00,0x10,0xff,0xf0,0x91,0xa3,
++ 0x93,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x94,0x00,0x10,0xff,0xf0,
++ 0x91,0xa3,0x95,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x96,0x00,0x10,0xff,0xf0,
++ 0x91,0xa3,0x97,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x98,
++ 0x00,0x10,0xff,0xf0,0x91,0xa3,0x99,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x9a,
++ 0x00,0x10,0xff,0xf0,0x91,0xa3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,
++ 0xa3,0x9c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9d,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,
++ 0xa3,0x9e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9f,0x00,0xd1,0x11,0xe0,0x7a,0x80,0xcf,
++ 0x86,0xe5,0x71,0x80,0xe4,0x3a,0x80,0xcf,0x06,0x00,0x00,0xe0,0x43,0x82,0xcf,0x86,
++ 0xd5,0x06,0xcf,0x06,0x00,0x00,0xd4,0x09,0xe3,0x78,0x80,0xcf,0x06,0x0c,0x00,0xd3,
++ 0x06,0xcf,0x06,0x00,0x00,0xe2,0xa3,0x81,0xe1,0x7e,0x81,0xd0,0x06,0xcf,0x06,0x00,
++ 0x00,0xcf,0x86,0xa5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x14,0xff,0xf0,0x96,0xb9,0xa0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa1,0x00,0x10,0x09,
++ 0x14,0xff,0xf0,0x96,0xb9,0xa2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa3,0x00,0xd1,0x12,
++ 0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa5,0x00,
++ 0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa7,0x00,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa8,0x00,0x14,0xff,0xf0,
++ 0x96,0xb9,0xa9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xaa,0x00,0x14,0xff,0xf0,
++ 0x96,0xb9,0xab,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xac,0x00,0x14,
++ 0xff,0xf0,0x96,0xb9,0xad,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xae,0x00,0x14,
++ 0xff,0xf0,0x96,0xb9,0xaf,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,
++ 0xf0,0x96,0xb9,0xb0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb1,0x00,0x10,0x09,0x14,0xff,
++ 0xf0,0x96,0xb9,0xb2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb3,0x00,0xd1,0x12,0x10,0x09,
++ 0x14,0xff,0xf0,0x96,0xb9,0xb4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb5,0x00,0x10,0x09,
++ 0x14,0xff,0xf0,0x96,0xb9,0xb6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb7,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb8,0x00,0x14,0xff,0xf0,0x96,0xb9,
++ 0xb9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xba,0x00,0x14,0xff,0xf0,0x96,0xb9,
++ 0xbb,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbc,0x00,0x14,0xff,0xf0,
++ 0x96,0xb9,0xbd,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbe,0x00,0x14,0xff,0xf0,
++ 0x96,0xb9,0xbf,0x00,0x14,0x00,0xd2,0x14,0xe1,0x8d,0x81,0xe0,0x84,0x81,0xcf,0x86,
++ 0xe5,0x45,0x81,0xe4,0x02,0x81,0xcf,0x06,0x12,0x00,0xd1,0x0b,0xe0,0xb8,0x82,0xcf,
++ 0x86,0xcf,0x06,0x00,0x00,0xe0,0xf8,0x8a,0xcf,0x86,0xd5,0x22,0xe4,0x33,0x88,0xe3,
++ 0xf6,0x87,0xe2,0x9b,0x87,0xe1,0x94,0x87,0xe0,0x8d,0x87,0xcf,0x86,0xe5,0x5e,0x87,
++ 0xe4,0x45,0x87,0x93,0x07,0x62,0x34,0x87,0x12,0xe6,0x12,0xe6,0xe4,0x99,0x88,0xe3,
++ 0x92,0x88,0xd2,0x09,0xe1,0x1b,0x88,0xcf,0x06,0x10,0x00,0xe1,0x82,0x88,0xe0,0x4f,
++ 0x88,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xa2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa3,0x00,0x10,0x09,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xa4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa5,0x00,0xd1,0x12,
++ 0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa7,0x00,
++ 0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa9,0x00,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xaa,0x00,0x12,0xff,0xf0,
++ 0x9e,0xa4,0xab,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xac,0x00,0x12,0xff,0xf0,
++ 0x9e,0xa4,0xad,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xae,0x00,0x12,
++ 0xff,0xf0,0x9e,0xa4,0xaf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb0,0x00,0x12,
++ 0xff,0xf0,0x9e,0xa4,0xb1,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,
++ 0xf0,0x9e,0xa4,0xb2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb3,0x00,0x10,0x09,0x12,0xff,
++ 0xf0,0x9e,0xa4,0xb4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb5,0x00,0xd1,0x12,0x10,0x09,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xb6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb7,0x00,0x10,0x09,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xb8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb9,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xba,0x00,0x12,0xff,0xf0,0x9e,0xa4,
++ 0xbb,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbc,0x00,0x12,0xff,0xf0,0x9e,0xa4,
++ 0xbd,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbe,0x00,0x12,0xff,0xf0,
++ 0x9e,0xa4,0xbf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa5,0x80,0x00,0x12,0xff,0xf0,
++ 0x9e,0xa5,0x81,0x00,0x94,0x1e,0x93,0x1a,0x92,0x16,0x91,0x12,0x10,0x09,0x12,0xff,
++ 0xf0,0x9e,0xa5,0x82,0x00,0x12,0xff,0xf0,0x9e,0xa5,0x83,0x00,0x12,0x00,0x12,0x00,
++ 0x12,0x00,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ /* nfdi_c0100 */
++ 0x57,0x04,0x01,0x00,0xc6,0xe5,0x91,0x13,0xe4,0x27,0x0c,0xe3,0x61,0x07,0xe2,0xda,
++ 0x01,0xc1,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0xe4,0xd4,0x7c,0xd3,0x3c,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x80,0x00,0x01,0xff,0x41,0xcc,
++ 0x81,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x82,0x00,0x01,0xff,0x41,0xcc,0x83,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x88,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0x43,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x45,0xcc,0x80,0x00,0x01,0xff,0x45,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
++ 0x45,0xcc,0x82,0x00,0x01,0xff,0x45,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x49,0xcc,0x80,0x00,0x01,0xff,0x49,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,
++ 0x82,0x00,0x01,0xff,0x49,0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0x4e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x80,0x00,
++ 0x01,0xff,0x4f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x82,0x00,
++ 0x01,0xff,0x4f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x88,0x00,0x01,0x00,
++ 0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x55,0xcc,0x80,0x00,0x10,0x08,
++ 0x01,0xff,0x55,0xcc,0x81,0x00,0x01,0xff,0x55,0xcc,0x82,0x00,0x91,0x10,0x10,0x08,
++ 0x01,0xff,0x55,0xcc,0x88,0x00,0x01,0xff,0x59,0xcc,0x81,0x00,0x01,0x00,0xd4,0x7c,
++ 0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,
++ 0x61,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,
++ 0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,
++ 0x8a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x65,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,
++ 0x01,0xff,0x65,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0x80,0x00,0x01,0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
++ 0x69,0xcc,0x82,0x00,0x01,0xff,0x69,0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,
++ 0x80,0x00,0x01,0xff,0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,
++ 0x82,0x00,0x01,0xff,0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,
++ 0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,
++ 0x10,0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0x79,0xcc,0x88,0x00,0xe1,0x9a,0x03,0xe0,0xd3,0x01,0xcf,0x86,
++ 0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,
++ 0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x86,0x00,
++ 0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa8,0x00,
++ 0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x43,0xcc,0x81,0x00,0x01,0xff,
++ 0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x82,0x00,
++ 0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x43,0xcc,0x87,0x00,0x01,0xff,
++ 0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x8c,0x00,0x01,0xff,
++ 0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x8c,0x00,0x01,0xff,0x64,0xcc,
++ 0x8c,0x00,0xd3,0x34,0xd2,0x14,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,
++ 0x84,0x00,0x01,0xff,0x65,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
++ 0x86,0x00,0x01,0xff,0x65,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x87,0x00,
++ 0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
++ 0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x8c,0x00,
++ 0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x82,0x00,
++ 0x01,0xff,0x67,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0x86,0x00,0x01,0xff,
++ 0x67,0xcc,0x86,0x00,0xd4,0x74,0xd3,0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x47,0xcc,0x87,0x00,0x01,0xff,0x67,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,
++ 0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,
++ 0x82,0x00,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x49,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
++ 0x49,0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x49,0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,
++ 0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x30,0xd2,0x10,0x91,0x0c,0x10,0x08,
++ 0x01,0xff,0x49,0xcc,0x87,0x00,0x01,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x4a,0xcc,0x82,0x00,0x01,0xff,0x6a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,
++ 0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0x4c,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,
++ 0x4c,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,
++ 0x4c,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0xd4,0xd4,0x60,0xd3,0x30,0xd2,0x10,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0x4e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,
++ 0x01,0xff,0x4e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,
++ 0x4e,0xcc,0x8c,0x00,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,
++ 0x01,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x84,0x00,0x01,0xff,
++ 0x6f,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,
++ 0x86,0x00,0xd3,0x34,0xd2,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8b,0x00,
++ 0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
++ 0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa7,0x00,
++ 0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
++ 0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0x81,0x00,
++ 0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x82,0x00,
++ 0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0xa7,0x00,0x01,0xff,
++ 0x73,0xcc,0xa7,0x00,0xd4,0x74,0xd3,0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x53,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,
++ 0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,
++ 0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x55,0xcc,0x83,0x00,0x01,0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
++ 0x55,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x55,0xcc,0x86,0x00,0x01,0xff,0x75,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,
++ 0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x55,0xcc,0x8b,0x00,0x01,0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,
++ 0x55,0xcc,0xa8,0x00,0x01,0xff,0x75,0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x57,0xcc,0x82,0x00,0x01,0xff,0x77,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,
++ 0x82,0x00,0x01,0xff,0x79,0xcc,0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x59,0xcc,0x88,0x00,0x01,0xff,0x5a,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,
++ 0x81,0x00,0x01,0xff,0x5a,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,
++ 0x87,0x00,0x01,0xff,0x5a,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,
++ 0x01,0x00,0xd0,0x4a,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x2c,0xd3,0x18,0x92,0x14,
++ 0x91,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,
+ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x04,0x00,0xd3,0x19,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xdb,0x92,0xd9,0x94,0x00,0x11,0x04,0x01,0x00,0x01,0xe6,0x52,0x04,0x01,0xe6,0xd1,
+- 0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0xd4,0x38,0xd3,
+- 0x1c,0xd2,0x0c,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,0x01,0xdc,0xd1,0x08,0x10,
+- 0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0xd2,0x10,0xd1,0x08,0x10,
+- 0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0xdc,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,
+- 0xe6,0x01,0xdc,0x07,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x04,
+- 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,0xd1,0xc8,0xd0,0x76,0xcf,
+- 0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,
+- 0x00,0x04,0x24,0x04,0x00,0x04,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,0x00,0x07,0x00,0xd3,0x1c,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,
+- 0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x0c,0x51,0x04,0x04,0xdc,0x10,
+- 0x04,0x04,0xe6,0x04,0xdc,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,
+- 0xdc,0x04,0xe6,0xcf,0x86,0xd5,0x3c,0x94,0x38,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x04,
+- 0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,
+- 0x04,0x04,0xdc,0x04,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,
+- 0x04,0x04,0xe6,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x08,
+- 0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0a,
+- 0x00,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x14,0x53,0x04,0x09,0x00,0x92,0x0c,0x51,
+- 0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xe6,0x09,0xe6,0xd3,0x10,0x92,0x0c,0x51,
+- 0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,
+- 0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x14,0xdc,0x14,
+- 0x00,0xe4,0xf8,0x57,0xe3,0x45,0x3f,0xe2,0xf4,0x3e,0xe1,0xc7,0x2c,0xe0,0x21,0x10,
+- 0xcf,0x86,0xc5,0xe4,0x80,0x08,0xe3,0xcb,0x03,0xe2,0x61,0x01,0xd1,0x94,0xd0,0x5a,
+- 0xcf,0x86,0xd5,0x20,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,
+- 0x0b,0x00,0x0b,0xe6,0x92,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,0x0b,0x00,0x0b,0xe6,
+- 0x0b,0xe6,0xd4,0x24,0xd3,0x10,0x52,0x04,0x0b,0xe6,0x91,0x08,0x10,0x04,0x0b,0x00,
+- 0x0b,0xe6,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,
+- 0x11,0x04,0x0b,0xe6,0x00,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x86,0xd5,0x20,0x54,0x04,0x0c,0x00,
+- 0x53,0x04,0x0c,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x0c,0xdc,0x0c,0xdc,
+- 0x51,0x04,0x00,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x13,0x00,
+- 0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xd0,0x4a,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x20,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0d,0x00,0x10,0x00,0x0d,0x00,0x0d,0x00,0x52,0x04,0x0d,0x00,0x91,0x08,
+- 0x10,0x04,0x0d,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x10,0x00,0x11,0x00,0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x12,0x00,
+- 0x52,0x04,0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,
+- 0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x14,0xdc,
+- 0x12,0xe6,0x12,0xe6,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x12,0xe6,0x10,0x04,
+- 0x12,0x00,0x11,0xdc,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xdc,0x0d,0xe6,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xe6,0x91,0x08,0x10,0x04,0x0d,0xe6,
+- 0x0d,0xdc,0x0d,0xdc,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0x1b,0x0d,0x1c,
+- 0x10,0x04,0x0d,0x1d,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xdc,0x0d,0xe6,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x10,0x04,0x0d,0xdc,0x0d,0xe6,
+- 0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x10,0xe6,0xe1,0x3a,0x01,0xd0,0x77,0xcf,
+- 0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x01,
+- 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0xd4,0x1b,0x53,0x04,0x01,0x00,0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,
+- 0xff,0xe0,0xa4,0xa8,0xe0,0xa4,0xbc,0x00,0x01,0x00,0x01,0x00,0xd3,0x26,0xd2,0x13,
+- 0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xb0,0xe0,0xa4,0xbc,0x00,0x01,
+- 0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xb3,0xe0,0xa4,0xbc,0x00,0x01,0x00,
+- 0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x91,0x08,0x10,0x04,0x01,0x07,
+- 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x8c,0xd4,0x18,0x53,0x04,0x01,0x00,0x52,0x04,
+- 0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x10,0x04,0x0b,0x00,0x0c,0x00,
+- 0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x10,0x04,0x01,0xdc,
+- 0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x0b,0x00,0x0c,0x00,0xd2,0x2c,0xd1,0x16,
+- 0x10,0x0b,0x01,0xff,0xe0,0xa4,0x95,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0x96,
+- 0xe0,0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x97,0xe0,0xa4,0xbc,0x00,0x01,
+- 0xff,0xe0,0xa4,0x9c,0xe0,0xa4,0xbc,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa4,
+- 0xa1,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xa2,0xe0,0xa4,0xbc,0x00,0x10,0x0b,
+- 0x01,0xff,0xe0,0xa4,0xab,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xaf,0xe0,0xa4,
+- 0xbc,0x00,0x54,0x04,0x01,0x00,0xd3,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
+- 0x0a,0x00,0x10,0x04,0x0a,0x00,0x0c,0x00,0x0c,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x10,0x00,0x0b,0x00,0x10,0x04,0x0b,0x00,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,
+- 0x08,0x00,0x09,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x01,0xff,0x55,0xcc,0x9b,0x00,0x93,0x14,0x92,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,
++ 0x75,0xcc,0x9b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xb4,
++ 0xd4,0x24,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0x41,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,
++ 0x49,0xcc,0x8c,0x00,0xd3,0x46,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,
++ 0x8c,0x00,0x01,0xff,0x4f,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,
++ 0x01,0xff,0x55,0xcc,0x8c,0x00,0xd1,0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,
++ 0x01,0xff,0x55,0xcc,0x88,0xcc,0x84,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,
++ 0x84,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x8c,0x00,
++ 0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,
++ 0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,
++ 0x10,0x0a,0x01,0xff,0x41,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,
++ 0x84,0x00,0xd4,0x80,0xd3,0x3a,0xd2,0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,
++ 0x87,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,
++ 0xc3,0x86,0xcc,0x84,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x51,0x04,0x01,0x00,
++ 0x10,0x08,0x01,0xff,0x47,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,
++ 0x10,0x08,0x01,0xff,0x4f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,
++ 0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xc6,0xb7,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,
++ 0x8c,0x00,0xd3,0x24,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,
++ 0x01,0x00,0x01,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x81,0x00,0x01,0xff,
++ 0x67,0xcc,0x81,0x00,0x04,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x4e,0xcc,
++ 0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x8a,0xcc,
++ 0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xc3,0x86,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
++ 0xc3,0x98,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x07,0x02,0xe1,
++ 0xae,0x01,0xe0,0x93,0x01,0xcf,0x86,0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,0x10,
++ 0x08,0x01,0xff,0x41,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x45,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,0x01,
++ 0xff,0x45,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x49,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,0x01,
++ 0xff,0x49,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x4f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x4f,
++ 0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x52,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,0x01,
++ 0xff,0x52,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x55,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x55,
++ 0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x04,
++ 0xff,0x53,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,0x54,
++ 0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,
++ 0xff,0x48,0xcc,0x8c,0x00,0x04,0xff,0x68,0xcc,0x8c,0x00,0xd4,0x68,0xd3,0x20,0xd2,
++ 0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x07,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
++ 0x08,0x04,0xff,0x41,0xcc,0x87,0x00,0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,
++ 0x10,0x10,0x08,0x04,0xff,0x45,0xcc,0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,
++ 0x0a,0x04,0xff,0x4f,0xcc,0x88,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,
++ 0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,
++ 0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x4f,0xcc,0x87,0x00,0x04,0xff,0x6f,
++ 0xcc,0x87,0x00,0x93,0x30,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x87,
++ 0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x59,
++ 0xcc,0x84,0x00,0x04,0xff,0x79,0xcc,0x84,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,
++ 0x00,0x08,0x00,0x08,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,
++ 0x04,0x08,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,0xcf,
++ 0x86,0x55,0x04,0x01,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x07,0x00,0x01,0x00,0xcf,
++ 0x86,0xd5,0x18,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,
++ 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,
++ 0x08,0x11,0x04,0x01,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,
++ 0x00,0x07,0x00,0xe1,0x34,0x01,0xd0,0x72,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0xe6,
++ 0xd3,0x10,0x52,0x04,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0xe8,0x01,0xdc,
++ 0x92,0x0c,0x51,0x04,0x01,0xdc,0x10,0x04,0x01,0xe8,0x01,0xd8,0x01,0xdc,0xd4,0x2c,
++ 0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0xdc,0x01,0xca,0x10,0x04,0x01,0xca,
++ 0x01,0xdc,0x51,0x04,0x01,0xdc,0x10,0x04,0x01,0xdc,0x01,0xca,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x01,0xca,0x01,0xdc,0x01,0xdc,0x01,0xdc,0xd3,0x08,0x12,0x04,0x01,0xdc,
++ 0x01,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x01,0x01,0xdc,0x01,0xdc,0x91,0x08,
++ 0x10,0x04,0x01,0xdc,0x01,0xe6,0x01,0xe6,0xcf,0x86,0xd5,0x7e,0xd4,0x46,0xd3,0x2e,
++ 0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,
++ 0x10,0x04,0x01,0xe6,0x01,0xff,0xcc,0x93,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcc,
++ 0x88,0xcc,0x81,0x00,0x01,0xf0,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x08,0x11,0x04,
++ 0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x04,0xdc,
++ 0x06,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x07,0xe6,0x10,0x04,0x07,0xe6,0x07,0xdc,
++ 0x51,0x04,0x07,0xdc,0x10,0x04,0x07,0xdc,0x07,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x08,0xe8,0x08,0xdc,0x10,0x04,0x08,0xdc,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xe9,
++ 0x07,0xea,0x10,0x04,0x07,0xea,0x07,0xe9,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,
++ 0x01,0xea,0x10,0x04,0x04,0xe9,0x06,0xe6,0x06,0xe6,0x06,0xe6,0xd3,0x13,0x52,0x04,
++ 0x0a,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x0a,0x00,0xd2,
++ 0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x01,0x00,0x09,0x00,0x51,0x04,0x09,0x00,0x10,
++ 0x06,0x01,0xff,0x3b,0x00,0x10,0x00,0xd0,0xe1,0xcf,0x86,0xd5,0x7a,0xd4,0x5f,0xd3,
++ 0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcc,
++ 0x81,0x00,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0x01,0xff,0xc2,0xb7,0x00,
++ 0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x01,0xff,0xce,
++ 0x97,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0x00,0x00,0xd1,
++ 0x0d,0x10,0x09,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
++ 0xce,0xa5,0xcc,0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,
++ 0x91,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0xd4,0x4a,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xce,0x99,0xcc,0x88,0x00,0x01,0xff,0xce,0xa5,0xcc,0x88,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,
++ 0x09,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0x93,
++ 0x17,0x92,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x81,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x39,0x53,0x04,
++ 0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x88,
++ 0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,
++ 0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,
++ 0xcc,0x81,0x00,0x0a,0x00,0xd3,0x26,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xcf,0x92,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcf,0x92,
++ 0xcc,0x88,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd2,0x0c,0x51,0x04,0x06,
++ 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
++ 0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,
++ 0x04,0x05,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0x12,0x04,0x07,0x00,0x08,0x00,0xe3,
++ 0x47,0x04,0xe2,0xbe,0x02,0xe1,0x07,0x01,0xd0,0x8b,0xcf,0x86,0xd5,0x6c,0xd4,0x53,
++ 0xd3,0x30,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0x95,0xcc,0x80,0x00,0x01,
++ 0xff,0xd0,0x95,0xcc,0x88,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x93,0xcc,0x81,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x86,0xcc,0x88,0x00,
++ 0x52,0x04,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x9a,0xcc,0x81,0x00,0x04,
++ 0xff,0xd0,0x98,0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x86,0x00,0x01,
++ 0x00,0x53,0x04,0x01,0x00,0x92,0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,
++ 0x98,0xcc,0x86,0x00,0x01,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0x92,0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x01,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x57,0x54,0x04,0x01,0x00,0xd3,0x30,0xd2,0x1f,0xd1,
++ 0x12,0x10,0x09,0x04,0xff,0xd0,0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,
++ 0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0xb3,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xd1,0x96,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,
++ 0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0x00,0x54,0x04,0x01,0x00,
++ 0x93,0x1a,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb4,
++ 0xcc,0x8f,0x00,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,0x00,0xd0,0x2e,0xcf,0x86,
++ 0x95,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xe6,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,0x0a,0xe6,0x92,0x08,0x11,0x04,
++ 0x04,0x00,0x06,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xbe,0xd4,0x4a,
++ 0xd3,0x2a,0xd2,0x1a,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x96,0xcc,0x86,
++ 0x00,0x10,0x09,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,
++ 0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x06,0x00,0x10,0x04,0x06,0x00,0x09,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xd0,0x90,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,
++ 0x01,0xff,0xd0,0x90,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x09,0x01,0xff,0xd0,0x95,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,
++ 0x86,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x98,0xcc,0x88,
++ 0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x96,
++ 0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x97,
++ 0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,0x74,0xd3,0x3a,0xd2,0x16,
++ 0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x84,0x00,0x01,0xff,0xd0,
++ 0xb8,0xcc,0x84,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x88,0x00,0x01,
++ 0xff,0xd0,0xb8,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x9e,0xcc,0x88,0x00,0x01,
++ 0xff,0xd0,0xbe,0xcc,0x88,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xd3,0xa8,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,
++ 0x04,0xff,0xd0,0xad,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,
++ 0x01,0xff,0xd0,0xa3,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x3a,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x88,0x00,0x01,0xff,0xd1,
++ 0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x8b,0x00,0x01,0xff,0xd1,
++ 0x83,0xcc,0x8b,0x00,0x91,0x12,0x10,0x09,0x01,0xff,0xd0,0xa7,0xcc,0x88,0x00,0x01,
++ 0xff,0xd1,0x87,0xcc,0x88,0x00,0x08,0x00,0x92,0x16,0x91,0x12,0x10,0x09,0x01,0xff,
++ 0xd0,0xab,0xcc,0x88,0x00,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x09,0x00,0x09,0x00,
++ 0xd1,0x74,0xd0,0x36,0xcf,0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,
++ 0x09,0x00,0x0a,0x00,0x0a,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,
++ 0x0b,0x00,0x0c,0x00,0x10,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0x00,
++ 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xd0,0xba,0xcf,0x86,0xd5,0x4c,0xd4,0x24,0x53,0x04,0x01,0x00,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0xd1,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x0d,0x00,0xd3,0x18,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,0xdc,0x02,0xe6,0x51,0x04,0x02,0xe6,
++ 0x10,0x04,0x02,0xdc,0x02,0xe6,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xde,
++ 0x02,0xdc,0x02,0xe6,0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,
++ 0x08,0xdc,0x02,0xdc,0x02,0xdc,0xd2,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,
++ 0x02,0xe6,0xd1,0x08,0x10,0x04,0x02,0xe6,0x02,0xde,0x10,0x04,0x02,0xe4,0x02,0xe6,
++ 0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x0a,0x01,0x0b,0x10,0x04,0x01,0x0c,
++ 0x01,0x0d,0xd1,0x08,0x10,0x04,0x01,0x0e,0x01,0x0f,0x10,0x04,0x01,0x10,0x01,0x11,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x12,0x01,0x13,0x10,0x04,0x09,0x13,0x01,0x14,
++ 0xd1,0x08,0x10,0x04,0x01,0x15,0x01,0x16,0x10,0x04,0x01,0x00,0x01,0x17,0xcf,0x86,
++ 0xd5,0x28,0x94,0x24,0x93,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x18,
++ 0x10,0x04,0x01,0x19,0x01,0x00,0xd1,0x08,0x10,0x04,0x02,0xe6,0x08,0xdc,0x10,0x04,
++ 0x08,0x00,0x08,0x12,0x00,0x00,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x14,0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xe2,0xfa,0x01,0xe1,0x2a,0x01,0xd0,0xa7,0xcf,0x86,
++ 0xd5,0x54,0xd4,0x28,0xd3,0x10,0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,
++ 0x10,0x00,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x08,0x00,
++ 0x91,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0xd3,0x0c,0x52,0x04,0x07,0xe6,
++ 0x11,0x04,0x07,0xe6,0x0a,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0a,0x1e,0x0a,0x1f,
++ 0x10,0x04,0x0a,0x20,0x01,0x00,0xd1,0x08,0x10,0x04,0x0f,0x00,0x00,0x00,0x10,0x04,
++ 0x08,0x00,0x01,0x00,0xd4,0x3d,0x93,0x39,0xd2,0x1a,0xd1,0x08,0x10,0x04,0x0c,0x00,
++ 0x01,0x00,0x10,0x09,0x01,0xff,0xd8,0xa7,0xd9,0x93,0x00,0x01,0xff,0xd8,0xa7,0xd9,
++ 0x94,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd9,0x88,0xd9,0x94,0x00,0x01,0xff,0xd8,
++ 0xa7,0xd9,0x95,0x00,0x10,0x09,0x01,0xff,0xd9,0x8a,0xd9,0x94,0x00,0x01,0x00,0x01,
++ 0x00,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0a,
++ 0x00,0x0a,0x00,0xcf,0x86,0xd5,0x5c,0xd4,0x20,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,
++ 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0x1b,0xd1,0x08,0x10,0x04,0x01,0x1c,0x01,
++ 0x1d,0x10,0x04,0x01,0x1e,0x01,0x1f,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,
++ 0x20,0x01,0x21,0x10,0x04,0x01,0x22,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,
++ 0xdc,0x10,0x04,0x07,0xdc,0x07,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0xe6,0x08,
++ 0xe6,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xdc,0x08,0xe6,0x10,0x04,0x08,0xe6,0x0c,
++ 0xdc,0xd4,0x10,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,
++ 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x23,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x04,0x00,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0xcf,0x86,0xd5,0x5b,0xd4,0x2e,0xd3,0x1e,0x92,0x1a,0xd1,
++ 0x0d,0x10,0x09,0x01,0xff,0xdb,0x95,0xd9,0x94,0x00,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xdb,0x81,0xd9,0x94,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd3,0x19,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xdb,0x92,0xd9,0x94,0x00,0x11,0x04,0x01,0x00,0x01,0xe6,
++ 0x52,0x04,0x01,0xe6,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xe6,0xd4,0x38,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,
++ 0x01,0xdc,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0xdc,0x01,0xe6,
++ 0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0xdc,0x07,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,
++ 0x11,0x04,0x01,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,
++ 0xd1,0xc8,0xd0,0x76,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,
++ 0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x04,0x00,0x04,0x24,0x04,0x00,0x04,0x00,0x04,0x00,0xd4,0x14,
++ 0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,0x00,
++ 0x07,0x00,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x04,0xe6,
++ 0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x0c,
++ 0x51,0x04,0x04,0xdc,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd1,0x08,0x10,0x04,0x04,0xdc,
++ 0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xcf,0x86,0xd5,0x3c,0x94,0x38,0xd3,0x1c,
++ 0xd2,0x0c,0x51,0x04,0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,
++ 0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x07,0x00,0x07,0x00,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,
++ 0x11,0x04,0x08,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x04,0x00,
++ 0x54,0x04,0x04,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x14,0x53,0x04,
++ 0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xe6,0x09,0xe6,
++ 0xd3,0x10,0x92,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0x00,
++ 0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x14,0xdc,0x14,0x00,0xe4,0x78,0x57,0xe3,0xda,0x3e,0xe2,0x89,0x3e,0xe1,
++ 0x91,0x2c,0xe0,0x21,0x10,0xcf,0x86,0xc5,0xe4,0x80,0x08,0xe3,0xcb,0x03,0xe2,0x61,
++ 0x01,0xd1,0x94,0xd0,0x5a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,
++ 0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,0x92,0x0c,0x51,0x04,0x0b,0xe6,0x10,
++ 0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,0xd4,0x24,0xd3,0x10,0x52,0x04,0x0b,0xe6,0x91,
++ 0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,
++ 0x00,0x0b,0xe6,0x0b,0xe6,0x11,0x04,0x0b,0xe6,0x00,0x00,0x53,0x04,0x0b,0x00,0x52,
++ 0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x86,0xd5,
++ 0x20,0x54,0x04,0x0c,0x00,0x53,0x04,0x0c,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x0c,0xdc,0x0c,0xdc,0x51,0x04,0x00,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x94,
++ 0x14,0x53,0x04,0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xd0,0x4a,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x20,0xd3,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x0d,0x00,0x0d,0x00,0x52,
++ 0x04,0x0d,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,
++ 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x11,0x00,0x91,0x08,0x10,0x04,0x11,
++ 0x00,0x00,0x00,0x12,0x00,0x52,0x04,0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0xcf,
++ 0x86,0xd5,0x18,0x54,0x04,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x00,0x00,0x14,0xdc,0x12,0xe6,0x12,0xe6,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x51,
++ 0x04,0x12,0xe6,0x10,0x04,0x12,0x00,0x11,0xdc,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,
++ 0xdc,0x0d,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xe6,0x91,
++ 0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xdc,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,
++ 0x04,0x0d,0x1b,0x0d,0x1c,0x10,0x04,0x0d,0x1d,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,
++ 0x04,0x0d,0xdc,0x0d,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x10,
++ 0x04,0x0d,0xdc,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x10,0xe6,0xe1,
++ 0x3a,0x01,0xd0,0x77,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x1b,0x53,0x04,0x01,0x00,0x92,0x13,0x91,0x0f,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xa8,0xe0,0xa4,0xbc,0x00,0x01,0x00,0x01,
++ 0x00,0xd3,0x26,0xd2,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xb0,
++ 0xe0,0xa4,0xbc,0x00,0x01,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xb3,0xe0,
++ 0xa4,0xbc,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x91,
++ 0x08,0x10,0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x8c,0xd4,0x18,0x53,
++ 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x10,
++ 0x04,0x0b,0x00,0x0c,0x00,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,
++ 0xe6,0x10,0x04,0x01,0xdc,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x0b,0x00,0x0c,
++ 0x00,0xd2,0x2c,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x95,0xe0,0xa4,0xbc,0x00,
++ 0x01,0xff,0xe0,0xa4,0x96,0xe0,0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x97,
++ 0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0x9c,0xe0,0xa4,0xbc,0x00,0xd1,0x16,0x10,
++ 0x0b,0x01,0xff,0xe0,0xa4,0xa1,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xa2,0xe0,
++ 0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xab,0xe0,0xa4,0xbc,0x00,0x01,0xff,
++ 0xe0,0xa4,0xaf,0xe0,0xa4,0xbc,0x00,0x54,0x04,0x01,0x00,0xd3,0x14,0x92,0x10,0xd1,
++ 0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0c,0x00,0x0c,0x00,0xd2,
++ 0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x0b,0x00,0x10,0x04,0x0b,0x00,0x09,0x00,0x91,
++ 0x08,0x10,0x04,0x09,0x00,0x08,0x00,0x09,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,
++ 0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,
++ 0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x07,0x00,0x01,0x00,0xcf,
++ 0x86,0xd5,0x7b,0xd4,0x42,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,
++ 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,
++ 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa6,0xbe,0x00,
++ 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa7,0x97,0x00,0x01,0x09,0x10,
++ 0x04,0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,
++ 0xa6,0xa1,0xe0,0xa6,0xbc,0x00,0x01,0xff,0xe0,0xa6,0xa2,0xe0,0xa6,0xbc,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0xff,0xe0,0xa6,0xaf,0xe0,0xa6,0xbc,0x00,0xd4,0x10,0x93,0x0c,
++ 0x52,0x04,0x01,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x13,0x00,
++ 0x10,0x04,0x14,0xe6,0x00,0x00,0xe2,0x48,0x02,0xe1,0x4f,0x01,0xd0,0xa4,0xcf,0x86,
++ 0xd5,0x4c,0xd4,0x34,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x07,0x00,
++ 0x10,0x04,0x01,0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,
+ 0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
+ 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
+ 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x18,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,
+- 0x91,0x08,0x10,0x04,0x01,0x07,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x42,
+- 0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0x00,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa6,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe0,0xa7,0x87,0xe0,0xa7,0x97,0x00,0x01,0x09,0x10,0x04,0x08,0x00,0x00,0x00,
+- 0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa6,0xa1,0xe0,0xa6,0xbc,
+- 0x00,0x01,0xff,0xe0,0xa6,0xa2,0xe0,0xa6,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0xff,
+- 0xe0,0xa6,0xaf,0xe0,0xa6,0xbc,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x14,0xe6,0x00,
+- 0x00,0xe2,0x48,0x02,0xe1,0x4f,0x01,0xd0,0xa4,0xcf,0x86,0xd5,0x4c,0xd4,0x34,0xd3,
+- 0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x10,0x04,0x01,0x00,0x07,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x2e,0xd2,0x17,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa8,0xb2,
+- 0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,
+- 0xe0,0xa8,0xb8,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,
+- 0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x00,0x00,0x01,0x00,0xcf,0x86,0xd5,0x80,0xd4,
+- 0x34,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,
+- 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x01,
+- 0x09,0x00,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x00,
+- 0x00,0x00,0x00,0xd2,0x25,0xd1,0x0f,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xa8,0x96,
+- 0xe0,0xa8,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0x97,0xe0,0xa8,0xbc,0x00,0x01,
+- 0xff,0xe0,0xa8,0x9c,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x10,0x0b,0x01,0xff,0xe0,0xa8,0xab,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd4,0x10,0x93,
+- 0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x14,0x52,
+- 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,0x14,0x00,0x00,
+- 0x00,0x00,0x00,0xd0,0x82,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x0c,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,
+- 0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x10,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,
+- 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x07,
+- 0x00,0x07,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x0d,0x00,0x07,0x00,0x00,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x11,0x00,0x13,0x00,0x13,0x00,0xe1,0x24,0x01,0xd0,0x86,0xcf,0x86,
+- 0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,
+- 0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,
+- 0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x01,0x00,
+- 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x45,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,
+- 0x10,0x04,0x0a,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,
+- 0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x96,0x00,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0xe0,0xad,0x87,0xe0,0xac,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,
+- 0xe0,0xad,0x87,0xe0,0xad,0x97,0x00,0x01,0x09,0x00,0x00,0xd3,0x0c,0x52,0x04,0x00,
+- 0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,
+- 0xff,0xe0,0xac,0xa1,0xe0,0xac,0xbc,0x00,0x01,0xff,0xe0,0xac,0xa2,0xe0,0xac,0xbc,
+- 0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,
+- 0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x0c,0x00,0x0c,0x00,0x00,0x00,0xd0,0xb1,0xcf,
+- 0x86,0xd5,0x63,0xd4,0x28,0xd3,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd3,0x1f,0xd2,0x0c,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,
+- 0xae,0x92,0xe0,0xaf,0x97,0x00,0x01,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x01,0x00,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x08,0x00,0x01,0x00,
+- 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0x61,0xd4,0x45,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,0xae,0xbe,0x00,0x01,0xff,0xe0,
+- 0xaf,0x87,0xe0,0xae,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,
+- 0xaf,0x97,0x00,0x01,0x09,0x00,0x00,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0a,
+- 0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x00,
+- 0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x08,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x07,0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
+- 0x00,0x00,0x00,0xe3,0x1c,0x04,0xe2,0x1a,0x02,0xd1,0xf3,0xd0,0x76,0xcf,0x86,0xd5,
+- 0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,
+- 0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,
+- 0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0x53,0xd4,0x2f,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,
+- 0xb1,0x86,0xe0,0xb1,0x96,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x54,0x10,0x04,0x01,0x5b,0x00,0x00,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,
+- 0x11,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,0x00,
+- 0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x52,0x04,0x00,0x00,
+- 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0a,0x00,0xd0,0x76,0xcf,0x86,
+- 0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x10,0x00,
+- 0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,
+- 0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
++ 0xd3,0x2e,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe0,0xa8,0xb2,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0xb8,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd2,0x08,
++ 0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x00,0x00,0x01,0x00,
++ 0xcf,0x86,0xd5,0x80,0xd4,0x34,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x10,
++ 0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,
++ 0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x0a,0x00,0x00,0x00,0x00,0x00,0xd2,0x25,0xd1,0x0f,0x10,0x04,0x00,0x00,
++ 0x01,0xff,0xe0,0xa8,0x96,0xe0,0xa8,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0x97,
++ 0xe0,0xa8,0xbc,0x00,0x01,0xff,0xe0,0xa8,0x9c,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0xab,0xe0,0xa8,0xbc,0x00,
++ 0x00,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x93,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,
++ 0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0xd0,0x82,0xcf,0x86,0xd5,0x40,0xd4,0x2c,
++ 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,
++ 0x07,0x00,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
++ 0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
+ 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x07,0x07,0x07,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0x82,0xd4,0x5e,0xd3,0x2a,0xd2,0x13,0x91,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe0,0xb2,0xbf,0xe0,0xb3,0x95,0x00,0x01,0x00,0x01,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,
+- 0x95,0x00,0xd2,0x28,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x96,
+- 0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x82,0x00,0x01,0xff,
+- 0xe0,0xb3,0x86,0xe0,0xb3,0x82,0xe0,0xb3,0x95,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,0x00,
+- 0x09,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,
+- 0x10,0x04,0x00,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xe1,0x06,0x01,0xd0,0x6e,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,
+- 0x08,0x10,0x04,0x13,0x00,0x10,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,
+- 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
+- 0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x0c,0x00,0x13,0x09,0x91,0x08,0x10,0x04,0x13,0x09,0x0a,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0x65,0xd4,0x45,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,
+- 0x04,0x0a,0x00,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb4,0xbe,0x00,0x01,0xff,0xe0,0xb5,
+- 0x87,0xe0,0xb4,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb5,
+- 0x97,0x00,0x01,0x09,0x10,0x04,0x0c,0x00,0x12,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,
+- 0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x01,0x00,0x52,0x04,0x12,0x00,0x51,0x04,
+- 0x12,0x00,0x10,0x04,0x12,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,
+- 0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x0c,0x52,0x04,
+- 0x0a,0x00,0x11,0x04,0x0a,0x00,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,
+- 0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x5a,0xcf,0x86,0xd5,0x34,0xd4,0x18,0x93,0x14,
+- 0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x04,0x00,
+- 0x04,0x00,0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,
+- 0x04,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x54,0x04,
+- 0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,
+- 0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x04,0x00,0x00,0x00,
+- 0xcf,0x86,0xd5,0x77,0xd4,0x28,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,
+- 0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x04,0x09,
+- 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0xd3,0x14,0x52,0x04,
+- 0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0xd2,0x13,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8a,
+- 0x00,0x04,0x00,0xd1,0x19,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0x00,
+- 0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0xe0,0xb7,0x8a,0x00,0x10,0x0b,0x04,0xff,
+- 0xe0,0xb7,0x99,0xe0,0xb7,0x9f,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x00,
+- 0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,
+- 0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe2,
+- 0x31,0x01,0xd1,0x58,0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x67,0x10,0x04,
+- 0x01,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xcf,0x86,
+- 0x95,0x18,0xd4,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,0x6b,0x01,0x00,0x53,0x04,
+- 0x01,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd0,0x9e,0xcf,0x86,0xd5,0x54,
+- 0xd4,0x3c,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x15,0x00,
+- 0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x15,0x00,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x15,0x00,0xd3,0x08,0x12,0x04,
+- 0x15,0x00,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,
+- 0x01,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0xd2,0x08,0x11,0x04,0x15,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x76,0x10,0x04,0x15,0x09,
+- 0x01,0x00,0x11,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0x95,0x34,0xd4,0x20,0xd3,0x14,
+- 0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x52,0x04,0x01,0x7a,0x11,0x04,0x01,0x00,0x00,0x00,0x53,0x04,0x01,0x00,
+- 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x01,0x00,0x0d,0x00,0x00,0x00,
+- 0xe1,0x2b,0x01,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,0x02,0x00,0x53,0x04,0x02,
+- 0x00,0x92,0x08,0x11,0x04,0x02,0xdc,0x02,0x00,0x02,0x00,0x54,0x04,0x02,0x00,0xd3,
+- 0x14,0x52,0x04,0x02,0x00,0xd1,0x08,0x10,0x04,0x02,0x00,0x02,0xdc,0x10,0x04,0x02,
+- 0x00,0x02,0xdc,0x92,0x0c,0x91,0x08,0x10,0x04,0x02,0x00,0x02,0xd8,0x02,0x00,0x02,
+- 0x00,0xcf,0x86,0xd5,0x73,0xd4,0x36,0xd3,0x17,0x92,0x13,0x51,0x04,0x02,0x00,0x10,
+- 0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x82,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,
+- 0x02,0xff,0xe0,0xbd,0x8c,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd3,0x26,0xd2,0x13,0x51,
+- 0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x91,0xe0,0xbe,0xb7,0x00,0x02,0x00,
+- 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x96,0xe0,0xbe,0xb7,
+- 0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x9b,0xe0,0xbe,
+- 0xb7,0x00,0x02,0x00,0x02,0x00,0xd4,0x27,0x53,0x04,0x02,0x00,0xd2,0x17,0xd1,0x0f,
+- 0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x80,0xe0,0xbe,0xb5,0x00,0x10,0x04,0x04,
+- 0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0xd3,0x35,0xd2,
+- 0x17,0xd1,0x08,0x10,0x04,0x00,0x00,0x02,0x81,0x10,0x04,0x02,0x82,0x02,0xff,0xe0,
+- 0xbd,0xb1,0xe0,0xbd,0xb2,0x00,0xd1,0x0f,0x10,0x04,0x02,0x84,0x02,0xff,0xe0,0xbd,
+- 0xb1,0xe0,0xbd,0xb4,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xb2,0xe0,0xbe,0x80,0x00,
+- 0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xb3,0xe0,0xbe,0x80,
+- 0x00,0x02,0x00,0x02,0x82,0x11,0x04,0x02,0x82,0x02,0x00,0xd0,0xd3,0xcf,0x86,0xd5,
+- 0x65,0xd4,0x27,0xd3,0x1f,0xd2,0x13,0x91,0x0f,0x10,0x04,0x02,0x82,0x02,0xff,0xe0,
+- 0xbd,0xb1,0xe0,0xbe,0x80,0x00,0x02,0xe6,0x91,0x08,0x10,0x04,0x02,0x09,0x02,0x00,
+- 0x02,0xe6,0x12,0x04,0x02,0x00,0x0c,0x00,0xd3,0x1f,0xd2,0x13,0x51,0x04,0x02,0x00,
+- 0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x92,0xe0,0xbe,0xb7,0x00,0x51,0x04,0x02,
+- 0x00,0x10,0x04,0x04,0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,
+- 0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x9c,0xe0,0xbe,
+- 0xb7,0x00,0x02,0x00,0xd4,0x3d,0xd3,0x26,0xd2,0x13,0x51,0x04,0x02,0x00,0x10,0x0b,
+- 0x02,0xff,0xe0,0xbe,0xa1,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x51,0x04,0x02,0x00,0x10,
+- 0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0xa6,0xe0,0xbe,0xb7,0x00,0x52,0x04,0x02,0x00,
+- 0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xab,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x04,
+- 0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x02,0x00,0x02,0x00,0x02,
+- 0x00,0xd2,0x13,0x91,0x0f,0x10,0x04,0x04,0x00,0x02,0xff,0xe0,0xbe,0x90,0xe0,0xbe,
+- 0xb5,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,
+- 0x95,0x4c,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,
+- 0x04,0xdc,0x04,0x00,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0x10,0x04,0x0a,0x00,0x04,0x00,0xd3,0x14,0xd2,0x08,0x11,0x04,0x08,0x00,0x0a,0x00,
+- 0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,
+- 0x0b,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
+- 0xe5,0xf7,0x04,0xe4,0x79,0x03,0xe3,0x7b,0x01,0xe2,0x04,0x01,0xd1,0x7f,0xd0,0x65,
+- 0xcf,0x86,0x55,0x04,0x04,0x00,0xd4,0x33,0xd3,0x1f,0xd2,0x0c,0x51,0x04,0x04,0x00,
+- 0x10,0x04,0x0a,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe1,0x80,
+- 0xa5,0xe1,0x80,0xae,0x00,0x04,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0a,0x00,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x04,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x04,0x00,0x04,
+- 0x07,0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0x09,0x10,0x04,0x0a,0x09,0x0a,
+- 0x00,0x0a,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,
+- 0x08,0x11,0x04,0x04,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x2e,0xcf,0x86,0x95,
+- 0x28,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,
+- 0x00,0x0a,0xdc,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,0x0b,
+- 0x00,0x11,0x04,0x0b,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,
+- 0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x52,
+- 0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,0x00,0x00,0x00,0x01,0x00,0x54,
+- 0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x06,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x06,0x00,0x08,0x00,0x10,0x04,0x08,
+- 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0d,0x00,0x0d,0x00,0xd1,0x3e,0xd0,
+- 0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x1d,0x54,0x04,0x01,0x00,0x53,0x04,0x01,
+- 0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
+- 0x00,0x01,0xff,0x00,0x94,0x15,0x93,0x11,0x92,0x0d,0x91,0x09,0x10,0x05,0x01,0xff,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,
+- 0x04,0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x0b,0x00,0x0b,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,
+- 0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x0b,
+- 0x00,0xe2,0x21,0x01,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,
+- 0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,
+- 0x04,0x00,0x04,0x00,0xcf,0x86,0x95,0x48,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,
+- 0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,
+- 0xd0,0x62,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,
+- 0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,
+- 0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,
+- 0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
+- 0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,
+- 0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,
+- 0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x93,0x10,0x52,0x04,0x04,0x00,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x94,0x14,0x53,0x04,
+- 0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
+- 0x04,0x00,0xd1,0x9c,0xd0,0x3e,0xcf,0x86,0x95,0x38,0xd4,0x14,0x53,0x04,0x04,0x00,
+- 0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,0x14,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,
+- 0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
+- 0x04,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,
+- 0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0xd2,0x0c,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
+- 0x0c,0xe6,0x10,0x04,0x0c,0xe6,0x08,0xe6,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x08,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x53,0x04,0x04,0x00,
+- 0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,
+- 0xcf,0x86,0x95,0x14,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,
+- 0x08,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,
+- 0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x11,0x00,
+- 0x00,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0xd3,0x30,0xd2,0x2a,
+- 0xd1,0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0b,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,
+- 0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd2,0x6c,0xd1,0x24,
+- 0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,
+- 0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0b,0x00,
+- 0x0b,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,
+- 0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,
+- 0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x04,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x46,0xcf,0x86,0xd5,0x28,
+- 0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x00,
+- 0x00,0x00,0x06,0x00,0x93,0x10,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x09,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x06,0x00,0x93,0x14,0x52,0x04,0x06,0x00,
+- 0xd1,0x08,0x10,0x04,0x06,0x09,0x06,0x00,0x10,0x04,0x06,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x06,0x00,0x00,0x00,
+- 0x00,0x00,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x00,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,
+- 0x00,0x00,0x06,0x00,0x00,0x00,0x00,0x00,0xd0,0x1b,0xcf,0x86,0x55,0x04,0x04,0x00,
+- 0x54,0x04,0x04,0x00,0x93,0x0d,0x52,0x04,0x04,0x00,0x11,0x05,0x04,0xff,0x00,0x04,
+- 0x00,0x04,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x09,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,
+- 0x08,0x10,0x04,0x04,0x00,0x07,0xe6,0x00,0x00,0xd4,0x10,0x53,0x04,0x04,0x00,0x92,
+- 0x08,0x11,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x08,0x11,
+- 0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xe4,0xb7,0x03,0xe3,0x58,0x01,0xd2,0x8f,0xd1,
+- 0x53,0xd0,0x35,0xcf,0x86,0x95,0x2f,0xd4,0x1f,0x53,0x04,0x04,0x00,0xd2,0x0d,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x04,0xff,0x00,0x51,0x05,0x04,0xff,0x00,0x10,
+- 0x05,0x04,0xff,0x00,0x00,0x00,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,
+- 0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,
+- 0x53,0x04,0x04,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x04,0x00,0x94,0x18,0x53,0x04,0x04,0x00,
+- 0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0xe4,0x10,0x04,0x0a,0x00,0x00,0x00,
+- 0x00,0x00,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,
+- 0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x42,
+- 0xcf,0x86,0xd5,0x1c,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,
+- 0xd1,0x08,0x10,0x04,0x07,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd4,0x0c,
+- 0x53,0x04,0x07,0x00,0x12,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x10,
+- 0xd1,0x08,0x10,0x04,0x07,0x00,0x07,0xde,0x10,0x04,0x07,0xe6,0x07,0xdc,0x00,0x00,
+- 0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,
+- 0x00,0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd4,0x10,0x53,0x04,0x07,0x00,
+- 0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x07,0x00,
+- 0x91,0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,0x86,
+- 0x55,0x04,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,
+- 0x0b,0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x95,0x28,0xd4,0x10,0x53,0x04,0x08,0x00,
+- 0x92,0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0xd2,0x0c,
+- 0x51,0x04,0x08,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x08,0x00,
+- 0x07,0x00,0xd2,0xe4,0xd1,0x80,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x54,0x04,0x08,0x00,
+- 0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x08,0xe6,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x08,0xdc,0x08,0x00,0x08,0x00,0x11,0x04,0x00,0x00,
+- 0x08,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,
+- 0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xd4,0x14,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,
+- 0x0b,0x00,0xd3,0x10,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,
+- 0x0b,0xe6,0x52,0x04,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xe6,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x0b,0xdc,0xd0,0x5e,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0b,0x00,
+- 0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,
+- 0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd4,0x10,0x53,0x04,0x0b,0x00,0x52,0x04,
+- 0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x10,0xe6,0x91,0x08,
+- 0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0xdc,0xd2,0x0c,0x51,0x04,0x10,0xdc,0x10,0x04,
+- 0x10,0xdc,0x10,0xe6,0xd1,0x08,0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0x04,0x10,0x00,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xe1,0x1e,0x01,0xd0,0xaa,0xcf,0x86,0xd5,0x6e,0xd4,
+- 0x53,0xd3,0x17,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,
+- 0xac,0x85,0xe1,0xac,0xb5,0x00,0x09,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x09,0xff,
+- 0xe1,0xac,0x87,0xe1,0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x89,
+- 0xe1,0xac,0xb5,0x00,0x09,0x00,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8b,0xe1,
+- 0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8d,0xe1,0xac,0xb5,0x00,
+- 0x09,0x00,0x93,0x17,0x92,0x13,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,
+- 0x91,0xe1,0xac,0xb5,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x54,0x04,0x09,0x00,0xd3,
+- 0x10,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x07,0x09,0x00,0x09,0x00,0xd2,
+- 0x13,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xba,0xe1,0xac,
+- 0xb5,0x00,0x91,0x0f,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xbc,0xe1,0xac,0xb5,
+- 0x00,0x09,0x00,0xcf,0x86,0xd5,0x3d,0x94,0x39,0xd3,0x31,0xd2,0x25,0xd1,0x16,0x10,
+- 0x0b,0x09,0xff,0xe1,0xac,0xbe,0xe1,0xac,0xb5,0x00,0x09,0xff,0xe1,0xac,0xbf,0xe1,
+- 0xac,0xb5,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xad,0x82,0xe1,0xac,0xb5,0x00,
+- 0x91,0x08,0x10,0x04,0x09,0x09,0x09,0x00,0x09,0x00,0x12,0x04,0x09,0x00,0x00,0x00,
+- 0x09,0x00,0xd4,0x1c,0x53,0x04,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,
+- 0x09,0x00,0x09,0xe6,0x91,0x08,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0xe6,0xd3,0x08,
+- 0x12,0x04,0x09,0xe6,0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,
+- 0x00,0x00,0x00,0x00,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x18,0x53,0x04,
+- 0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x09,0x0d,0x09,0x11,0x04,
+- 0x0d,0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x0d,0x00,
+- 0x0d,0x00,0xcf,0x86,0x55,0x04,0x0c,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x0c,0x00,
+- 0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x07,0x0c,0x00,0x0c,0x00,0xd3,0x0c,0x92,0x08,
+- 0x11,0x04,0x0c,0x00,0x0c,0x09,0x00,0x00,0x12,0x04,0x00,0x00,0x0c,0x00,0xe3,0xb2,
+- 0x01,0xe2,0x09,0x01,0xd1,0x4c,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,
+- 0x0a,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,
+- 0x0a,0x07,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,
+- 0xcf,0x86,0x95,0x1c,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,
+- 0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,
+- 0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x54,0x04,0x14,0x00,
+- 0x53,0x04,0x14,0x00,0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x14,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x08,
+- 0x13,0x04,0x0d,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,
+- 0x0b,0xe6,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x01,0x0b,0xdc,0x0b,0xdc,0x92,0x08,
+- 0x11,0x04,0x0b,0xdc,0x0b,0xe6,0x0b,0xdc,0xd4,0x28,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x01,0x0b,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x0b,0x01,0x0b,0x00,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xdc,0x0b,0x00,
+- 0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0d,0x00,0xd1,0x08,
+- 0x10,0x04,0x0d,0xe6,0x0d,0x00,0x10,0x04,0x0d,0x00,0x13,0x00,0x92,0x0c,0x51,0x04,
+- 0x10,0xe6,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,
+- 0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x94,0x0c,0x53,0x04,0x07,0x00,0x12,0x04,
+- 0x07,0x00,0x08,0x00,0x08,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0xd5,0x40,
+- 0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0xe6,0x10,0x04,0x08,0xdc,0x08,0xe6,
+- 0x09,0xe6,0xd2,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x0a,0xe6,0xd1,0x08,
+- 0x10,0x04,0x0a,0xe6,0x0a,0xea,0x10,0x04,0x0a,0xd6,0x0a,0xdc,0x93,0x10,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x0a,0xca,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0xd4,0x14,
+- 0x93,0x10,0x52,0x04,0x0a,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xe6,
+- 0x10,0xe6,0xd3,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x13,0xe8,
+- 0x13,0xe4,0xd2,0x10,0xd1,0x08,0x10,0x04,0x13,0xe4,0x13,0xdc,0x10,0x04,0x00,0x00,
+- 0x12,0xe6,0xd1,0x08,0x10,0x04,0x0c,0xe9,0x0b,0xdc,0x10,0x04,0x09,0xe6,0x09,0xdc,
+- 0xe2,0x80,0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa5,0x00,0x01,0xff,
+- 0x61,0xcc,0xa5,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,
+- 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x42,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,
+- 0xa3,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,
+- 0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x43,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,
+- 0x63,0xcc,0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x87,0x00,0x01,0xff,
+- 0x64,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa3,0x00,0x01,0xff,
+- 0x64,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,
+- 0xb1,0x00,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa7,0x00,
+- 0x01,0xff,0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xad,0x00,0x01,0xff,
+- 0x64,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x80,0x00,
+- 0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,
+- 0x81,0x00,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x45,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,
+- 0x45,0xcc,0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,
+- 0x45,0xcc,0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,
+- 0x01,0xff,0x46,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,
+- 0x84,0x00,0x10,0x08,0x01,0xff,0x48,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,
+- 0x10,0x08,0x01,0xff,0x48,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,
+- 0x10,0x08,0x01,0xff,0x48,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x49,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,
+- 0x01,0xff,0x49,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x81,0x00,0x01,0xff,
+- 0x6b,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,
+- 0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,
+- 0xb1,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,
+- 0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,
+- 0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xb1,0x00,0x01,0xff,
+- 0x6c,0xcc,0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4c,0xcc,0xad,0x00,0x01,0xff,
+- 0x6c,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x4d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,
+- 0x81,0x00,0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x4d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,
+- 0xff,0x4d,0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x4e,0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x4e,
+- 0xcc,0xa3,0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x4e,0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x4e,
+- 0xcc,0xad,0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,
+- 0xcc,0x83,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,
+- 0xff,0x4f,0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,
+- 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x80,0x00,0x01,
+- 0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x81,
+- 0x00,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x50,
+- 0xcc,0x81,0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x50,0xcc,0x87,
+- 0x00,0x01,0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,
+- 0xcc,0x87,0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa3,
+- 0x00,0x01,0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x52,0xcc,0xa3,
+- 0xcc,0x84,0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x52,
+- 0xcc,0xb1,0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,
+- 0x08,0x01,0xff,0x53,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x53,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,
+- 0x00,0x10,0x0a,0x01,0xff,0x53,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,
+- 0xcc,0x87,0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x53,0xcc,0xa3,0xcc,0x87,
+- 0x00,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0x87,
+- 0x00,0x01,0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0xa3,
+- 0x00,0x01,0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xb1,0x00,0x01,
+- 0xff,0x74,0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,
+- 0xcc,0xad,0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa4,
+- 0x00,0x01,0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0xb0,
+- 0x00,0x01,0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xad,0x00,0x01,
+- 0xff,0x75,0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x83,
+- 0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x55,
+- 0xcc,0x84,0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x56,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,
+- 0xff,0x56,0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x10,0x02,0xcf,0x86,
+- 0xd5,0xe1,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,
+- 0x80,0x00,0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x81,0x00,
+- 0x01,0xff,0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x88,0x00,
+- 0x01,0xff,0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x87,0x00,0x01,0xff,
+- 0x77,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0xa3,0x00,
+- 0x01,0xff,0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x58,0xcc,0x87,0x00,0x01,0xff,
+- 0x78,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x58,0xcc,0x88,0x00,0x01,0xff,
+- 0x78,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,
+- 0x87,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0x82,0x00,
+- 0x01,0xff,0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x5a,0xcc,0xa3,0x00,0x01,0xff,
+- 0x7a,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0xb1,0x00,0x01,0xff,
+- 0x7a,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x68,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,
+- 0x88,0x00,0x92,0x1d,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,
+- 0x79,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x02,0xff,0xc5,0xbf,0xcc,0x87,0x00,0x0a,
+- 0x00,0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa3,
+- 0x00,0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x89,0x00,0x01,
+- 0xff,0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x81,
+- 0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,
+- 0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,
+- 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,
+- 0xcc,0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x82,0x00,0x01,
+- 0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x81,
+- 0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,
+- 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,
+- 0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x83,0x00,0x01,
+- 0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x86,
+- 0x00,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x45,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x45,
+- 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,
+- 0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,
+- 0xcc,0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,
+- 0xd4,0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,
+- 0x80,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,
+- 0x82,0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x45,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,
+- 0x10,0x0a,0x01,0xff,0x45,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,
+- 0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x89,0x00,0x01,0xff,
+- 0x69,0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,
+- 0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,
+- 0xa3,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,
+- 0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x81,0x00,
+- 0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,
+- 0x80,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,
+- 0x4f,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,
+- 0x01,0xff,0x4f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
+- 0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x81,0x00,
+- 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,
+- 0x9b,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,
+- 0x4f,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,
+- 0xd3,0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x83,0x00,
+- 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,
+- 0xa3,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,
+- 0x89,0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,
+- 0x55,0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,
+- 0x01,0xff,0x55,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,
+- 0x9b,0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,
+- 0x75,0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,
+- 0x55,0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,
+- 0x01,0xff,0x59,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x59,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,
+- 0x59,0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0x92,0x14,0x91,0x10,0x10,0x08,
+- 0x01,0xff,0x59,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x0a,0x00,0x0a,0x00,
+- 0xe1,0xc0,0x04,0xe0,0x80,0x02,0xcf,0x86,0xe5,0x2d,0x01,0xd4,0xa8,0xd3,0x54,0xd2,
+- 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,
+- 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
+- 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,
+- 0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x93,0x00,0x01,0xff,
+- 0xce,0x91,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0x00,
+- 0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,
+- 0x91,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x81,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,
+- 0xcd,0x82,0x00,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,
+- 0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,
+- 0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,
+- 0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,
+- 0x93,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x95,0xcc,
+- 0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0x95,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,
+- 0xcc,0x81,0x00,0x00,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,
+- 0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,
+- 0xce,0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,
+- 0x82,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0x97,0xcc,0x93,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,
+- 0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,
+- 0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x54,0xd2,
+- 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,
+- 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,
+- 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
+- 0xff,0xce,0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,
+- 0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x93,0x00,0x01,0xff,
+- 0xce,0x99,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcc,0x80,0x00,
+- 0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,
+- 0x99,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x81,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,
+- 0xcd,0x82,0x00,0xcf,0x86,0xe5,0x13,0x01,0xd4,0x84,0xd3,0x42,0xd2,0x28,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,
+- 0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,
+- 0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,
+- 0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xce,0x9f,0xcc,0x93,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0x00,
+- 0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,
+- 0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x81,
+- 0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x54,0xd2,0x28,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,
+- 0x94,0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,
+- 0x85,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,
+- 0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
+- 0xcf,0x85,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,
+- 0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,0x10,0x04,
+- 0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,0x00,0x01,
+- 0xff,0xce,0xa5,0xcc,0x94,0xcd,0x82,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,
+- 0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,
+- 0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,
+- 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,
+- 0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xa9,0xcc,0x93,0x00,0x01,0xff,0xce,0xa9,0xcc,
+- 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
+- 0xa9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xa9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0x00,
+- 0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0xb1,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0xb5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,
+- 0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,
+- 0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
+- 0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x91,0x12,0x10,0x09,
+- 0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x00,0x00,
+- 0xe0,0xe1,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,
+- 0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xcd,0x85,
+- 0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,
+- 0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,
+- 0x91,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,
+- 0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,
+- 0x91,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,
+- 0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,
+- 0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x85,
+- 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,
+- 0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xcd,
+- 0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xcd,0x85,
+- 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,
+- 0xce,0xb7,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,
+- 0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,
+- 0xce,0x97,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,
+- 0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,
+- 0x01,0xff,0xce,0x97,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,
+- 0x94,0xcd,0x82,0xcd,0x85,0x00,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,
+- 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,
+- 0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,
+- 0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,
+- 0xcf,0x89,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,
+- 0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xcd,0x85,
+- 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,
+- 0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,
+- 0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0xcd,0x85,
+- 0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,
+- 0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,
+- 0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x82,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,0x49,
+- 0xd2,0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,
+- 0xb1,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x80,0xcd,0x85,0x00,0x01,
+- 0xff,0xce,0xb1,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,
+- 0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,
+- 0xce,0xb1,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0x91,0xcc,0x86,0x00,0x01,0xff,0xce,0x91,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,
+- 0x91,0xcc,0x80,0x00,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,
+- 0xff,0xce,0x91,0xcd,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xb9,0x00,0x01,
+- 0x00,0xcf,0x86,0xe5,0x16,0x01,0xd4,0x8f,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,
+- 0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,
+- 0xff,0xce,0xb7,0xcc,0x81,0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,
+- 0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,
+- 0x10,0x09,0x01,0xff,0xce,0x97,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,
+- 0xd1,0x13,0x10,0x09,0x01,0xff,0xce,0x97,0xcd,0x85,0x00,0x01,0xff,0xe1,0xbe,0xbf,
+- 0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbe,0xbf,0xcc,0x81,0x00,0x01,0xff,0xe1,
+- 0xbe,0xbf,0xcd,0x82,0x00,0xd3,0x40,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,
+- 0xb9,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x51,
+- 0x04,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,
+- 0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,
+- 0x86,0x00,0x01,0xff,0xce,0x99,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,
+- 0x80,0x00,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x04,0x00,0x00,0x01,
+- 0xff,0xe1,0xbf,0xbe,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbf,0xbe,0xcc,0x81,
+- 0x00,0x01,0xff,0xe1,0xbf,0xbe,0xcd,0x82,0x00,0xd4,0x93,0xd3,0x4e,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,
+- 0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,
+- 0xcc,0x88,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x93,0x00,
+- 0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcd,0x82,0x00,
+- 0x01,0xff,0xcf,0x85,0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xce,0xa5,0xcc,0x86,0x00,0x01,0xff,0xce,0xa5,0xcc,0x84,0x00,0x10,0x09,0x01,
+- 0xff,0xce,0xa5,0xcc,0x80,0x00,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xa1,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,0x10,
+- 0x09,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x01,0xff,0x60,0x00,0xd3,0x3b,0xd2,0x18,
+- 0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,0xcd,0x85,0x00,0x01,
+- 0xff,0xcf,0x89,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,
+- 0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,
+- 0xcf,0x89,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0x9f,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,
+- 0xa9,0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0xd1,0x10,0x10,0x09,0x01,
+- 0xff,0xce,0xa9,0xcd,0x85,0x00,0x01,0xff,0xc2,0xb4,0x00,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0xe0,0x7e,0x0c,0xcf,0x86,0xe5,0xbb,0x08,0xe4,0x14,0x06,0xe3,0xf7,0x02,0xe2,
+- 0xbd,0x01,0xd1,0xd0,0xd0,0x4f,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0xd3,0x18,0x92,0x14,
+- 0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,0x00,
+- 0x01,0x00,0x01,0x00,0x92,0x0d,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0x00,0x01,0xff,0x00,0x01,0x00,0x94,0x1b,0x53,0x04,0x01,0x00,0xd2,0x09,0x11,0x04,
+- 0x01,0x00,0x01,0xff,0x00,0x51,0x05,0x01,0xff,0x00,0x10,0x05,0x01,0xff,0x00,0x04,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x48,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,
+- 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x52,0x04,0x04,0x00,0x11,0x04,0x04,
+- 0x00,0x06,0x00,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x06,0x00,0x07,
+- 0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0x52,
+- 0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0xd4,0x23,0xd3,
+- 0x14,0x52,0x05,0x06,0xff,0x00,0x91,0x0a,0x10,0x05,0x0a,0xff,0x00,0x00,0xff,0x00,
+- 0x0f,0xff,0x00,0x92,0x0a,0x11,0x05,0x0f,0xff,0x00,0x01,0xff,0x00,0x01,0xff,0x00,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0xd0,0x7e,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x53,0x04,0x01,0x00,0x52,0x04,
+- 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,
+- 0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0c,0x00,0x0c,0x00,0x52,0x04,0x0c,0x00,
+- 0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x02,0x00,0x91,0x08,0x10,0x04,
+- 0x03,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0xd2,0x08,0x11,0x04,0x06,0x00,0x08,0x00,
+- 0x11,0x04,0x08,0x00,0x0b,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,
+- 0x10,0x04,0x0e,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,0x00,0x13,0x00,
+- 0xcf,0x86,0xd5,0x28,0x54,0x04,0x00,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x01,0xe6,
+- 0x01,0x01,0x01,0xe6,0xd2,0x0c,0x51,0x04,0x01,0x01,0x10,0x04,0x01,0x01,0x01,0xe6,
+- 0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x04,0x00,0xd1,0x08,0x10,0x04,0x06,0x00,
+- 0x06,0x01,0x10,0x04,0x06,0x01,0x06,0xe6,0x92,0x10,0xd1,0x08,0x10,0x04,0x06,0xdc,
+- 0x06,0xe6,0x10,0x04,0x06,0x01,0x08,0x01,0x09,0xdc,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0a,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x81,0xd0,0x4f,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xa9,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,
+- 0x00,0x10,0x06,0x01,0xff,0x4b,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,0x01,0x00,0x53,
+- 0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x04,
+- 0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0x95,
+- 0x2c,0xd4,0x18,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0xd1,0x08,0x10,0x04,0x08,
+- 0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,
+- 0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x68,0xcf,
+- 0x86,0xd5,0x48,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x11,0x00,0x00,0x00,0x53,0x04,0x01,0x00,0x92,
+- 0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x90,0xcc,0xb8,0x00,0x01,
+- 0xff,0xe2,0x86,0x92,0xcc,0xb8,0x00,0x01,0x00,0x94,0x1a,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x94,0xcc,0xb8,
+- 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x87,0x90,0xcc,0xb8,
+- 0x00,0x10,0x0a,0x01,0xff,0xe2,0x87,0x94,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x87,0x92,
+- 0xcc,0xb8,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x06,
+- 0x00,0x06,0x00,0xe2,0x38,0x02,0xe1,0x3f,0x01,0xd0,0x68,0xcf,0x86,0xd5,0x3e,0x94,
+- 0x3a,0xd3,0x16,0x52,0x04,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0x83,
+- 0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd2,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,0x01,
+- 0xff,0xe2,0x88,0x88,0xcc,0xb8,0x00,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,0xe2,
+- 0x88,0x8b,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x24,0x93,0x20,0x52,
+- 0x04,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa3,0xcc,0xb8,0x00,0x01,
+- 0x00,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa5,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0x48,0x94,0x44,0xd3,0x2e,0xd2,0x12,0x91,0x0e,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xe2,0x88,0xbc,0xcc,0xb8,0x00,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,
+- 0xff,0xe2,0x89,0x83,0xcc,0xb8,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,
+- 0x89,0x85,0xcc,0xb8,0x00,0x92,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,
+- 0x89,0x88,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x40,0xd3,0x1e,0x92,
+- 0x1a,0xd1,0x0c,0x10,0x08,0x01,0xff,0x3d,0xcc,0xb8,0x00,0x01,0x00,0x10,0x0a,0x01,
+- 0xff,0xe2,0x89,0xa1,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,
+- 0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x89,0x8d,0xcc,0xb8,0x00,0x10,0x08,0x01,
+- 0xff,0x3c,0xcc,0xb8,0x00,0x01,0xff,0x3e,0xcc,0xb8,0x00,0xd3,0x30,0xd2,0x18,0x91,
+- 0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xa4,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xa5,
+- 0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xb2,0xcc,0xb8,
+- 0x00,0x01,0xff,0xe2,0x89,0xb3,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,0x91,0x14,0x10,
+- 0x0a,0x01,0xff,0xe2,0x89,0xb6,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xb7,0xcc,0xb8,
+- 0x00,0x01,0x00,0x01,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x50,0x94,0x4c,0xd3,0x30,0xd2,
+- 0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xba,0xcc,0xb8,0x00,0x01,0xff,0xe2,
+- 0x89,0xbb,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x82,
+- 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x83,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,0x91,
+- 0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x86,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x87,
+- 0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x30,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa2,0xcc,0xb8,0x00,0x01,
+- 0xff,0xe2,0x8a,0xa8,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa9,0xcc,0xb8,
+- 0x00,0x01,0xff,0xe2,0x8a,0xab,0xcc,0xb8,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,
+- 0x00,0xd4,0x5c,0xd3,0x2c,0x92,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xbc,
+- 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xbd,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,
+- 0x8a,0x91,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x92,0xcc,0xb8,0x00,0x01,0x00,0xd2,
+- 0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb2,0xcc,0xb8,0x00,0x01,
+- 0xff,0xe2,0x8a,0xb3,0xcc,0xb8,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb4,
+- 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb5,0xcc,0xb8,0x00,0x01,0x00,0x93,0x0c,0x92,
+- 0x08,0x11,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xd1,0x64,0xd0,0x3e,0xcf,
+- 0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x20,0x53,0x04,0x01,0x00,0x92,
+- 0x18,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x80,0x88,0x00,0x10,0x08,0x01,
+- 0xff,0xe3,0x80,0x89,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,
+- 0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,0x04,0x00,0xd0,
+- 0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0xd5,
+- 0x2c,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,0x10,
+- 0x04,0x06,0x00,0x07,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x08,
+- 0x00,0x08,0x00,0x08,0x00,0x12,0x04,0x08,0x00,0x09,0x00,0xd4,0x14,0x53,0x04,0x09,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0xd3,
+- 0x08,0x12,0x04,0x0c,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,
+- 0x00,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x13,0x00,0xd3,0xa6,0xd2,
+- 0x74,0xd1,0x40,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,0x93,0x14,0x52,
+- 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x92,
+- 0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x01,
+- 0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x14,0x53,
+- 0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x06,
+- 0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x06,
+- 0x00,0x07,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,
+- 0x04,0x01,0x00,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,0x00,0x06,
+- 0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x13,0x04,0x04,
+- 0x00,0x06,0x00,0xd2,0xdc,0xd1,0x48,0xd0,0x26,0xcf,0x86,0x95,0x20,0x54,0x04,0x01,
+- 0x00,0xd3,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x07,0x00,0x06,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x08,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,
+- 0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x04,0x00,0x06,
+- 0x00,0x06,0x00,0x52,0x04,0x06,0x00,0x11,0x04,0x06,0x00,0x08,0x00,0xd0,0x5e,0xcf,
+- 0x86,0xd5,0x2c,0xd4,0x10,0x53,0x04,0x06,0x00,0x92,0x08,0x11,0x04,0x06,0x00,0x07,
+- 0x00,0x07,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x52,
+- 0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0a,0x00,0x0b,0x00,0xd4,0x10,0x93,
+- 0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd3,0x10,0x92,
+- 0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x52,0x04,0x0a,
+- 0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x1c,0x94,
+- 0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,
+- 0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0b,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,
+- 0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0c,0x00,0x0b,0x00,0x0b,0x00,0xd1,
+- 0xa8,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x01,
+- 0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x53,
+- 0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x18,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x04,0x0c,0x00,0x01,0x00,0xd3,
+- 0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0x51,0x04,0x0c,
+- 0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x0c,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x06,0x00,0x93,0x0c,0x52,0x04,0x06,0x00,0x11,
+- 0x04,0x06,0x00,0x01,0x00,0x01,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,
+- 0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x0c,
+- 0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,0x52,0x04,0x08,
+- 0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,
+- 0x00,0x10,0x04,0x09,0x00,0x0d,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0d,0x00,0x0c,
+- 0x00,0x06,0x00,0x94,0x0c,0x53,0x04,0x06,0x00,0x12,0x04,0x06,0x00,0x0a,0x00,0x06,
+- 0x00,0xe4,0x39,0x01,0xd3,0x0c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xcf,0x06,0x06,0x00,
+- 0xd2,0x30,0xd1,0x06,0xcf,0x06,0x06,0x00,0xd0,0x06,0xcf,0x06,0x06,0x00,0xcf,0x86,
+- 0x95,0x1e,0x54,0x04,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x0e,
+- 0x10,0x0a,0x06,0xff,0xe2,0xab,0x9d,0xcc,0xb8,0x00,0x06,0x00,0x06,0x00,0x06,0x00,
+- 0xd1,0x80,0xd0,0x3a,0xcf,0x86,0xd5,0x28,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,
+- 0x07,0x00,0x11,0x04,0x07,0x00,0x08,0x00,0xd3,0x08,0x12,0x04,0x08,0x00,0x09,0x00,
+- 0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x94,0x0c,
+- 0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x30,
+- 0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
+- 0x10,0x00,0x10,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
+- 0x0b,0x00,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x10,0x00,0x10,0x00,0x54,0x04,
+- 0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
+- 0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,
+- 0x11,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
+- 0xd2,0x08,0x11,0x04,0x10,0x00,0x14,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x10,0x00,
+- 0x10,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x10,0x00,0x15,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
+- 0x10,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd4,0x0c,0x53,0x04,
+- 0x14,0x00,0x12,0x04,0x14,0x00,0x11,0x00,0x53,0x04,0x14,0x00,0x52,0x04,0x14,0x00,
+- 0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0xe3,0xb9,0x01,0xd2,0xac,0xd1,
+- 0x68,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x14,0x53,0x04,0x08,0x00,0x52,
+- 0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x08,0x00,0xcf,
+- 0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,0x51,
+- 0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xd4,0x14,0x53,0x04,0x09,0x00,0x52,
+- 0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0xd3,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0a,0x00,0x0a,0x00,0x09,0x00,0x52,0x04,0x0a,
+- 0x00,0x11,0x04,0x0a,0x00,0x0b,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0x55,
+- 0x04,0x08,0x00,0xd4,0x1c,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,
+- 0x04,0x08,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd3,
+- 0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0d,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd1,0x6c,0xd0,0x2a,0xcf,0x86,0x55,
+- 0x04,0x08,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,
+- 0x04,0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,
+- 0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,0xd3,0x0c,0x52,
+- 0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,
+- 0x00,0x10,0x04,0x00,0x00,0x08,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x0c,0x09,0xd0,0x5a,0xcf,0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x93,
+- 0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x00,
+- 0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
+- 0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
+- 0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
+- 0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xcf,
+- 0x86,0x95,0x40,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,
+- 0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
+- 0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
+- 0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
+- 0x00,0x0a,0xe6,0xd2,0x9c,0xd1,0x68,0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x08,
+- 0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,0x08,0x00,0x0a,0x00,0x54,
+- 0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0d,
+- 0x00,0x0d,0x00,0x12,0x04,0x0d,0x00,0x10,0x00,0xcf,0x86,0x95,0x30,0x94,0x2c,0xd3,
+- 0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x12,0x00,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x13,0x00,0x13,0x00,0xd2,0x08,0x11,0x04,0x13,0x00,0x14,0x00,0x51,
+- 0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,
+- 0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,
+- 0x00,0x54,0x04,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd1,
+- 0x06,0xcf,0x06,0x04,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0xd5,0x14,0x54,
+- 0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x00,
+- 0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x04,0x00,0x12,0x04,0x04,0x00,0x00,0x00,0xcf,
+- 0x86,0xe5,0xa6,0x05,0xe4,0x9f,0x05,0xe3,0x96,0x04,0xe2,0xe4,0x03,0xe1,0xc0,0x01,
+- 0xd0,0x3e,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0xda,0x01,0xe4,0x91,0x08,0x10,0x04,0x01,0xe8,
+- 0x01,0xde,0x01,0xe0,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,
+- 0x04,0x00,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x04,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xaa,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0x8b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x8d,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0x8f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x91,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x93,0xe3,0x82,0x99,
+- 0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x95,0xe3,0x82,0x99,0x00,0x01,0x00,
+- 0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x97,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x99,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,
+- 0x10,0x0b,0x01,0xff,0xe3,0x81,0x9b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,
+- 0xff,0xe3,0x81,0x9d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd4,0x53,0xd3,0x3c,0xd2,0x1e,
+- 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,
+- 0x0b,0x01,0xff,0xe3,0x81,0xa1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x81,0xa4,0xe3,0x82,0x99,0x00,0x10,0x04,0x01,0x00,0x01,
+- 0xff,0xe3,0x81,0xa6,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe3,0x81,0xa8,0xe3,0x82,0x99,0x00,0x01,0x00,0x01,0x00,0xd3,0x4a,0xd2,
+- 0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x99,0x00,0x01,0xff,
+- 0xe3,0x81,0xaf,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xb2,
+- 0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb2,0xe3,0x82,0x9a,
+- 0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x99,0x00,0x01,0xff,
+- 0xe3,0x81,0xb5,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,
+- 0xff,0xe3,0x81,0xb8,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb8,0xe3,
+- 0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,
+- 0x99,0x00,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,0x9a,0x00,0x01,0x00,0xd0,0xee,0xcf,
+- 0x86,0xd5,0x42,0x54,0x04,0x01,0x00,0xd3,0x1b,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,
+- 0x0b,0x01,0xff,0xe3,0x81,0x86,0xe3,0x82,0x99,0x00,0x06,0x00,0x10,0x04,0x06,0x00,
+- 0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x08,0x10,0x04,0x01,0x08,
+- 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0x9d,0xe3,0x82,0x99,
+- 0x00,0x06,0x00,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
+- 0x82,0xab,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xad,0xe3,
+- 0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
+- 0x82,0xaf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb1,0xe3,
+- 0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb3,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb5,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb7,0xe3,0x82,0x99,0x00,
+- 0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb9,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,
+- 0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbb,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,
+- 0x01,0xff,0xe3,0x82,0xbd,0xe3,0x82,0x99,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd5,0xd4,
+- 0x53,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbf,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x81,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x84,0xe3,0x82,0x99,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x86,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,
+- 0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x88,0xe3,0x82,0x99,0x00,0x01,0x00,
+- 0x01,0x00,0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x83,0x8f,0xe3,
+- 0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x8f,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
+- 0x83,0x92,0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x95,0xe3,
+- 0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x95,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,
+- 0xff,0xe3,0x83,0x98,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,
+- 0xe3,0x83,0x9b,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x9b,0xe3,0x82,0x9a,0x00,
+- 0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x22,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x82,0xa6,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x01,
+- 0xff,0xe3,0x83,0xaf,0xe3,0x82,0x99,0x00,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,
+- 0xe3,0x83,0xb0,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0xb1,0xe3,0x82,0x99,0x00,
+- 0x10,0x0b,0x01,0xff,0xe3,0x83,0xb2,0xe3,0x82,0x99,0x00,0x01,0x00,0x51,0x04,0x01,
+- 0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xbd,0xe3,0x82,0x99,0x00,0x06,0x00,0xd1,0x65,
+- 0xd0,0x46,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x18,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,
+- 0x13,0x00,0x14,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x15,0x93,0x11,
+- 0x52,0x04,0x01,0x00,0x91,0x09,0x10,0x05,0x01,0xff,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x54,
+- 0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,
+- 0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x08,0x00,0x0a,0x00,0x94,
+- 0x0c,0x93,0x08,0x12,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x06,0x00,0xd2,0xa4,0xd1,
+- 0x5c,0xd0,0x22,0xcf,0x86,0x95,0x1c,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x20,0xd4,0x0c,0x93,0x08,0x12,0x04,0x01,0x00,0x0b,
+- 0x00,0x0b,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x06,0x00,0x06,
+- 0x00,0x06,0x00,0x06,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,
+- 0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
+- 0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0xd5,0x10,0x94,0x0c,0x53,
+- 0x04,0x01,0x00,0x12,0x04,0x01,0x00,0x07,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,
+- 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x16,
+- 0x00,0xd1,0x30,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,
+- 0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x01,0x00,0x01,
+- 0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x01,0x00,0x53,
+- 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x07,0x00,0x54,0x04,0x01,
+- 0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x07,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd1,0x48,0xd0,0x40,0xcf,
+- 0x86,0xd5,0x06,0xcf,0x06,0x04,0x00,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x2c,0xd2,
+- 0x06,0xcf,0x06,0x04,0x00,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x1a,0xcf,0x86,0x55,
+- 0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,
+- 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x07,0x00,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,
+- 0x06,0x01,0x00,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe2,0x71,0x05,0xd1,0x8c,0xd0,0x08,
+- 0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xd4,0x06,
+- 0xcf,0x06,0x01,0x00,0xd3,0x06,0xcf,0x06,0x01,0x00,0xd2,0x06,0xcf,0x06,0x01,0x00,
+- 0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x10,
+- 0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x08,0x00,0x08,0x00,0x53,0x04,
+- 0x08,0x00,0x12,0x04,0x08,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0xd3,0x08,
+- 0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,
+- 0x11,0x00,0x11,0x00,0x93,0x0c,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x13,0x00,
+- 0x13,0x00,0x94,0x14,0x53,0x04,0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,
+- 0x13,0x00,0x14,0x00,0x14,0x00,0x00,0x00,0xe0,0xdb,0x04,0xcf,0x86,0xe5,0xdf,0x01,
+- 0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x74,0xd2,0x6e,0xd1,0x06,0xcf,0x06,0x04,0x00,
+- 0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,
+- 0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,
+- 0x92,0x08,0x11,0x04,0x04,0x00,0x06,0x00,0x04,0x00,0x04,0x00,0x93,0x10,0x52,0x04,
+- 0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,
+- 0x95,0x24,0x94,0x20,0x93,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,
+- 0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0x00,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x06,0x0a,0x00,0xd2,0x84,0xd1,0x4c,0xd0,0x16,
+- 0xcf,0x86,0x55,0x04,0x0a,0x00,0x94,0x0c,0x53,0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,
+- 0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x1c,0xd3,0x0c,0x92,0x08,
+- 0x11,0x04,0x0c,0x00,0x0a,0x00,0x0a,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,
+- 0x10,0x04,0x0a,0x00,0x0a,0xe6,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0d,0xe6,0x52,0x04,
+- 0x0d,0xe6,0x11,0x04,0x0a,0xe6,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
+- 0x0a,0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
+- 0x11,0xe6,0x0d,0xe6,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,
+- 0x93,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x00,0x00,0xd1,0x40,
+- 0xd0,0x3a,0xcf,0x86,0xd5,0x24,0x54,0x04,0x08,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,
+- 0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,
+- 0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,
+- 0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x06,0x0a,0x00,0xd0,0x5e,
+- 0xcf,0x86,0xd5,0x28,0xd4,0x18,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0xd1,0x08,
+- 0x10,0x04,0x0a,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x11,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x0c,0x00,0x0d,0x00,0x10,0x00,0x10,0x00,0xd4,0x1c,0x53,0x04,0x0c,0x00,
+- 0xd2,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0d,0x00,0x10,0x00,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x12,0x00,0x14,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,
+- 0x11,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x1c,
+- 0x94,0x18,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x51,0x04,0x15,0x00,
+- 0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0xd3,0x10,
+- 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x92,0x0c,
+- 0x51,0x04,0x0d,0x00,0x10,0x04,0x0c,0x00,0x0a,0x00,0x0a,0x00,0xe4,0xf2,0x02,0xe3,
+- 0x65,0x01,0xd2,0x98,0xd1,0x48,0xd0,0x36,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,
+- 0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x09,0x08,0x00,0x08,0x00,
+- 0x08,0x00,0xd4,0x0c,0x53,0x04,0x08,0x00,0x12,0x04,0x08,0x00,0x00,0x00,0x53,0x04,
+- 0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,
+- 0x09,0x00,0x54,0x04,0x09,0x00,0x13,0x04,0x09,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,
+- 0x0a,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,
+- 0x10,0x04,0x0a,0x09,0x12,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,
+- 0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,
+- 0x54,0x04,0x0b,0xe6,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,
+- 0x52,0x04,0x0b,0x00,0x11,0x04,0x11,0x00,0x14,0x00,0xd1,0x60,0xd0,0x22,0xcf,0x86,
+- 0x55,0x04,0x0a,0x00,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,
+- 0x10,0x04,0x0a,0x00,0x0a,0xdc,0x11,0x04,0x0a,0xdc,0x0a,0x00,0x0a,0x00,0xcf,0x86,
+- 0xd5,0x24,0x54,0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,
+- 0x0a,0x00,0x0a,0x09,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,
+- 0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,
+- 0x0b,0x00,0x0b,0x07,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x20,0xd3,0x10,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x52,0x04,
+- 0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,
+- 0xd2,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x0b,0x00,0x54,0x04,
+- 0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0xd2,0xd0,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0a,0x00,
+- 0x54,0x04,0x0a,0x00,0x93,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,
+- 0x0a,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0a,0x00,
+- 0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,
+- 0x11,0x04,0x0a,0x00,0x00,0x00,0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,
+- 0x12,0x04,0x0b,0x00,0x10,0x00,0xd0,0x3a,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,
+- 0x0b,0x00,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0xe6,
+- 0xd1,0x08,0x10,0x04,0x0b,0xdc,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,
+- 0xcf,0x86,0xd5,0x2c,0xd4,0x18,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,
+- 0x0b,0xe6,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x00,0x00,
+- 0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,
+- 0x0d,0x00,0x93,0x10,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,
+- 0x00,0x00,0x00,0x00,0xd1,0x8c,0xd0,0x72,0xcf,0x86,0xd5,0x4c,0xd4,0x30,0xd3,0x18,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,
+- 0x10,0x04,0x0c,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,
+- 0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,
+- 0x0c,0x00,0x00,0x00,0x00,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,
+- 0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,
+- 0x10,0x04,0x0c,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x10,
+- 0x93,0x0c,0x52,0x04,0x11,0x00,0x11,0x04,0x10,0x00,0x15,0x00,0x00,0x00,0x11,0x00,
+- 0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,
+- 0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x00,0x00,
+- 0x53,0x04,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,
+- 0x02,0xff,0xff,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd1,0x76,0xd0,0x09,0xcf,0x86,
+- 0xcf,0x06,0x02,0xff,0xff,0xcf,0x86,0x85,0xd4,0x07,0xcf,0x06,0x02,0xff,0xff,0xd3,
+- 0x07,0xcf,0x06,0x02,0xff,0xff,0xd2,0x07,0xcf,0x06,0x02,0xff,0xff,0xd1,0x07,0xcf,
+- 0x06,0x02,0xff,0xff,0xd0,0x18,0xcf,0x86,0x55,0x05,0x02,0xff,0xff,0x94,0x0d,0x93,
+- 0x09,0x12,0x05,0x02,0xff,0xff,0x00,0x00,0x00,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x24,
+- 0x94,0x20,0xd3,0x10,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,
+- 0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,
+- 0x0b,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x00,0x00,
+- 0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,
+- 0xe4,0x9c,0x10,0xe3,0x16,0x08,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x08,0x04,0xe0,
+- 0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0x10,0x08,0x01,
+- 0xff,0xe8,0xbb,0x8a,0x00,0x01,0xff,0xe8,0xb3,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe6,0xbb,0x91,0x00,0x01,0xff,0xe4,0xb8,0xb2,0x00,0x10,0x08,0x01,0xff,0xe5,
+- 0x8f,0xa5,0x00,0x01,0xff,0xe9,0xbe,0x9c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe9,0xbe,0x9c,0x00,0x01,0xff,0xe5,0xa5,0x91,0x00,0x10,0x08,0x01,0xff,0xe9,
+- 0x87,0x91,0x00,0x01,0xff,0xe5,0x96,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
+- 0xa5,0x88,0x00,0x01,0xff,0xe6,0x87,0xb6,0x00,0x10,0x08,0x01,0xff,0xe7,0x99,0xa9,
+- 0x00,0x01,0xff,0xe7,0xbe,0x85,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe8,0x98,0xbf,0x00,0x01,0xff,0xe8,0x9e,0xba,0x00,0x10,0x08,0x01,0xff,0xe8,
+- 0xa3,0xb8,0x00,0x01,0xff,0xe9,0x82,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,
+- 0xa8,0x82,0x00,0x01,0xff,0xe6,0xb4,0x9b,0x00,0x10,0x08,0x01,0xff,0xe7,0x83,0x99,
+- 0x00,0x01,0xff,0xe7,0x8f,0x9e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
+- 0x90,0xbd,0x00,0x01,0xff,0xe9,0x85,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0xa7,0xb1,
+- 0x00,0x01,0xff,0xe4,0xba,0x82,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x8d,0xb5,
+- 0x00,0x01,0xff,0xe6,0xac,0x84,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x9b,0x00,0x01,
+- 0xff,0xe8,0x98,0xad,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe9,0xb8,0x9e,0x00,0x01,0xff,0xe5,0xb5,0x90,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0xbf,0xab,0x00,0x01,0xff,0xe8,0x97,0x8d,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
+- 0xa5,0xa4,0x00,0x01,0xff,0xe6,0x8b,0x89,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0x98,
+- 0x00,0x01,0xff,0xe8,0xa0,0x9f,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
+- 0xbb,0x8a,0x00,0x01,0xff,0xe6,0x9c,0x97,0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0xaa,
+- 0x00,0x01,0xff,0xe7,0x8b,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x83,0x8e,
+- 0x00,0x01,0xff,0xe4,0xbe,0x86,0x00,0x10,0x08,0x01,0xff,0xe5,0x86,0xb7,0x00,0x01,
+- 0xff,0xe5,0x8b,0x9e,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,
+- 0x93,0x84,0x00,0x01,0xff,0xe6,0xab,0x93,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x90,
+- 0x00,0x01,0xff,0xe7,0x9b,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x80,0x81,
+- 0x00,0x01,0xff,0xe8,0x98,0x86,0x00,0x10,0x08,0x01,0xff,0xe8,0x99,0x9c,0x00,0x01,
+- 0xff,0xe8,0xb7,0xaf,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9c,0xb2,
+- 0x00,0x01,0xff,0xe9,0xad,0xaf,0x00,0x10,0x08,0x01,0xff,0xe9,0xb7,0xba,0x00,0x01,
+- 0xff,0xe7,0xa2,0x8c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa5,0xbf,0x00,0x01,
+- 0xff,0xe7,0xb6,0xa0,0x00,0x10,0x08,0x01,0xff,0xe8,0x8f,0x89,0x00,0x01,0xff,0xe9,
+- 0x8c,0x84,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0x10,0x08,
+- 0x01,0xff,0xe5,0xa3,0x9f,0x00,0x01,0xff,0xe5,0xbc,0x84,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe7,0xb1,0xa0,0x00,0x01,0xff,0xe8,0x81,0xbe,0x00,0x10,0x08,0x01,0xff,
+- 0xe7,0x89,0xa2,0x00,0x01,0xff,0xe7,0xa3,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe8,0xb3,0x82,0x00,0x01,0xff,0xe9,0x9b,0xb7,0x00,0x10,0x08,0x01,0xff,
+- 0xe5,0xa3,0x98,0x00,0x01,0xff,0xe5,0xb1,0xa2,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0xa8,0x93,0x00,0x01,0xff,0xe6,0xb7,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,
+- 0x8f,0x00,0x01,0xff,0xe7,0xb4,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,0x99,0x8b,0x00,0x10,0x08,0x01,0xff,
+- 0xe5,0x8b,0x92,0x00,0x01,0xff,0xe8,0x82,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe5,0x87,0x9c,0x00,0x01,0xff,0xe5,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe7,0xa8,
+- 0x9c,0x00,0x01,0xff,0xe7,0xb6,0xbe,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe8,0x8f,0xb1,0x00,0x01,0xff,0xe9,0x99,0xb5,0x00,0x10,0x08,0x01,0xff,0xe8,0xae,
+- 0x80,0x00,0x01,0xff,0xe6,0x8b,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,
+- 0x82,0x00,0x01,0xff,0xe8,0xab,0xbe,0x00,0x10,0x08,0x01,0xff,0xe4,0xb8,0xb9,0x00,
+- 0x01,0xff,0xe5,0xaf,0xa7,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe6,0x80,0x92,0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0x10,0x08,0x01,0xff,
+- 0xe7,0x95,0xb0,0x00,0x01,0xff,0xe5,0x8c,0x97,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe7,0xa3,0xbb,0x00,0x01,0xff,0xe4,0xbe,0xbf,0x00,0x10,0x08,0x01,0xff,0xe5,0xbe,
+- 0xa9,0x00,0x01,0xff,0xe4,0xb8,0x8d,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0xb3,0x8c,0x00,0x01,0xff,0xe6,0x95,0xb8,0x00,0x10,0x08,0x01,0xff,0xe7,0xb4,
+- 0xa2,0x00,0x01,0xff,0xe5,0x8f,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa1,
+- 0x9e,0x00,0x01,0xff,0xe7,0x9c,0x81,0x00,0x10,0x08,0x01,0xff,0xe8,0x91,0x89,0x00,
+- 0x01,0xff,0xe8,0xaa,0xaa,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0xae,0xba,0x00,0x01,0xff,0xe8,0xbe,0xb0,0x00,0x10,0x08,0x01,0xff,0xe6,0xb2,
+- 0x88,0x00,0x01,0xff,0xe6,0x8b,0xbe,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8b,
+- 0xa5,0x00,0x01,0xff,0xe6,0x8e,0xa0,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xa5,0x00,
+- 0x01,0xff,0xe4,0xba,0xae,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,
+- 0xa9,0x00,0x01,0xff,0xe5,0x87,0x89,0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0x81,0x00,
+- 0x01,0xff,0xe7,0xb3,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x89,0xaf,0x00,
+- 0x01,0xff,0xe8,0xab,0x92,0x00,0x10,0x08,0x01,0xff,0xe9,0x87,0x8f,0x00,0x01,0xff,
+- 0xe5,0x8b,0xb5,0x00,0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x91,0x82,0x00,0x01,0xff,0xe5,0xa5,
+- 0xb3,0x00,0x10,0x08,0x01,0xff,0xe5,0xbb,0xac,0x00,0x01,0xff,0xe6,0x97,0x85,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xbf,0xbe,0x00,0x01,0xff,0xe7,0xa4,0xaa,0x00,
+- 0x10,0x08,0x01,0xff,0xe9,0x96,0xad,0x00,0x01,0xff,0xe9,0xa9,0xaa,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xba,0x97,0x00,0x01,0xff,0xe9,0xbb,0x8e,0x00,
+- 0x10,0x08,0x01,0xff,0xe5,0x8a,0x9b,0x00,0x01,0xff,0xe6,0x9b,0x86,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe6,0xad,0xb7,0x00,0x01,0xff,0xe8,0xbd,0xa2,0x00,0x10,0x08,
+- 0x01,0xff,0xe5,0xb9,0xb4,0x00,0x01,0xff,0xe6,0x86,0x90,0x00,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x88,0x80,0x00,0x01,0xff,0xe6,0x92,0x9a,0x00,
+- 0x10,0x08,0x01,0xff,0xe6,0xbc,0xa3,0x00,0x01,0xff,0xe7,0x85,0x89,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe7,0x92,0x89,0x00,0x01,0xff,0xe7,0xa7,0x8a,0x00,0x10,0x08,
+- 0x01,0xff,0xe7,0xb7,0xb4,0x00,0x01,0xff,0xe8,0x81,0xaf,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe8,0xbc,0xa6,0x00,0x01,0xff,0xe8,0x93,0xae,0x00,0x10,0x08,
+- 0x01,0xff,0xe9,0x80,0xa3,0x00,0x01,0xff,0xe9,0x8d,0x8a,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe5,0x88,0x97,0x00,0x01,0xff,0xe5,0x8a,0xa3,0x00,0x10,0x08,0x01,0xff,
+- 0xe5,0x92,0xbd,0x00,0x01,0xff,0xe7,0x83,0x88,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa3,0x82,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,
+- 0x10,0x08,0x01,0xff,0xe5,0xbb,0x89,0x00,0x01,0xff,0xe5,0xbf,0xb5,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe6,0x8d,0xbb,0x00,0x01,0xff,0xe6,0xae,0xae,0x00,0x10,0x08,
+- 0x01,0xff,0xe7,0xb0,0xbe,0x00,0x01,0xff,0xe7,0x8d,0xb5,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe4,0xbb,0xa4,0x00,0x01,0xff,0xe5,0x9b,0xb9,0x00,0x10,0x08,
+- 0x01,0xff,0xe5,0xaf,0xa7,0x00,0x01,0xff,0xe5,0xb6,0xba,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe6,0x80,0x9c,0x00,0x01,0xff,0xe7,0x8e,0xb2,0x00,0x10,0x08,0x01,0xff,
+- 0xe7,0x91,0xa9,0x00,0x01,0xff,0xe7,0xbe,0x9a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe8,0x81,0x86,0x00,0x01,0xff,0xe9,0x88,0xb4,0x00,0x10,0x08,
+- 0x01,0xff,0xe9,0x9b,0xb6,0x00,0x01,0xff,0xe9,0x9d,0x88,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe9,0xa0,0x98,0x00,0x01,0xff,0xe4,0xbe,0x8b,0x00,0x10,0x08,0x01,0xff,
+- 0xe7,0xa6,0xae,0x00,0x01,0xff,0xe9,0x86,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe9,0x9a,0xb8,0x00,0x01,0xff,0xe6,0x83,0xa1,0x00,0x10,0x08,0x01,0xff,
+- 0xe4,0xba,0x86,0x00,0x01,0xff,0xe5,0x83,0x9a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe5,0xaf,0xae,0x00,0x01,0xff,0xe5,0xb0,0xbf,0x00,0x10,0x08,0x01,0xff,0xe6,0x96,
+- 0x99,0x00,0x01,0xff,0xe6,0xa8,0x82,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0x87,0x8e,0x00,0x01,0xff,0xe7,
+- 0x99,0x82,0x00,0x10,0x08,0x01,0xff,0xe8,0x93,0xbc,0x00,0x01,0xff,0xe9,0x81,0xbc,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xbe,0x8d,0x00,0x01,0xff,0xe6,0x9a,0x88,
+- 0x00,0x10,0x08,0x01,0xff,0xe9,0x98,0xae,0x00,0x01,0xff,0xe5,0x8a,0x89,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x9d,0xbb,0x00,0x01,0xff,0xe6,0x9f,0xb3,
+- 0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0x81,0x00,0x01,0xff,0xe6,0xba,0x9c,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe7,0x90,0x89,0x00,0x01,0xff,0xe7,0x95,0x99,0x00,0x10,
+- 0x08,0x01,0xff,0xe7,0xa1,0xab,0x00,0x01,0xff,0xe7,0xb4,0x90,0x00,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa1,0x9e,0x00,0x01,0xff,0xe5,0x85,0xad,
+- 0x00,0x10,0x08,0x01,0xff,0xe6,0x88,0xae,0x00,0x01,0xff,0xe9,0x99,0xb8,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe5,0x80,0xab,0x00,0x01,0xff,0xe5,0xb4,0x99,0x00,0x10,
+- 0x08,0x01,0xff,0xe6,0xb7,0xaa,0x00,0x01,0xff,0xe8,0xbc,0xaa,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe5,0xbe,0x8b,0x00,0x01,0xff,0xe6,0x85,0x84,0x00,0x10,
+- 0x08,0x01,0xff,0xe6,0xa0,0x97,0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe9,0x9a,0x86,0x00,0x01,0xff,0xe5,0x88,0xa9,0x00,0x10,0x08,0x01,
+- 0xff,0xe5,0x90,0x8f,0x00,0x01,0xff,0xe5,0xb1,0xa5,0x00,0xd4,0x80,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x98,0x93,0x00,0x01,0xff,0xe6,0x9d,0x8e,
+- 0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0xa8,0x00,0x01,0xff,0xe6,0xb3,0xa5,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe7,0x90,0x86,0x00,0x01,0xff,0xe7,0x97,0xa2,0x00,0x10,
+- 0x08,0x01,0xff,0xe7,0xbd,0xb9,0x00,0x01,0xff,0xe8,0xa3,0x8f,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe8,0xa3,0xa1,0x00,0x01,0xff,0xe9,0x87,0x8c,0x00,0x10,
+- 0x08,0x01,0xff,0xe9,0x9b,0xa2,0x00,0x01,0xff,0xe5,0x8c,0xbf,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe6,0xba,0xba,0x00,0x01,0xff,0xe5,0x90,0x9d,0x00,0x10,0x08,0x01,
+- 0xff,0xe7,0x87,0x90,0x00,0x01,0xff,0xe7,0x92,0x98,0x00,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe8,0x97,0xba,0x00,0x01,0xff,0xe9,0x9a,0xa3,0x00,0x10,
+- 0x08,0x01,0xff,0xe9,0xb1,0x97,0x00,0x01,0xff,0xe9,0xba,0x9f,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe6,0x9e,0x97,0x00,0x01,0xff,0xe6,0xb7,0x8b,0x00,0x10,0x08,0x01,
+- 0xff,0xe8,0x87,0xa8,0x00,0x01,0xff,0xe7,0xab,0x8b,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe7,0xac,0xa0,0x00,0x01,0xff,0xe7,0xb2,0x92,0x00,0x10,0x08,0x01,
+- 0xff,0xe7,0x8b,0x80,0x00,0x01,0xff,0xe7,0x82,0x99,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe8,0xad,0x98,0x00,0x01,0xff,0xe4,0xbb,0x80,0x00,0x10,0x08,0x01,0xff,0xe8,
+- 0x8c,0xb6,0x00,0x01,0xff,0xe5,0x88,0xba,0x00,0xe2,0xad,0x06,0xe1,0xc4,0x03,0xe0,
+- 0xcb,0x01,0xcf,0x86,0xd5,0xe4,0xd4,0x74,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe5,0x88,0x87,0x00,0x01,0xff,0xe5,0xba,0xa6,0x00,0x10,0x08,0x01,0xff,
+- 0xe6,0x8b,0x93,0x00,0x01,0xff,0xe7,0xb3,0x96,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe5,0xae,0x85,0x00,0x01,0xff,0xe6,0xb4,0x9e,0x00,0x10,0x08,0x01,0xff,0xe6,0x9a,
+- 0xb4,0x00,0x01,0xff,0xe8,0xbc,0xbb,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe8,0xa1,0x8c,0x00,0x01,0xff,0xe9,0x99,0x8d,0x00,0x10,0x08,0x01,0xff,0xe8,0xa6,
+- 0x8b,0x00,0x01,0xff,0xe5,0xbb,0x93,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,
+- 0x80,0x00,0x01,0xff,0xe5,0x97,0x80,0x00,0x01,0x00,0xd3,0x34,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x01,0xff,0xe5,0xa1,0x9a,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe6,0x99,
+- 0xb4,0x00,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe5,0x87,0x9e,0x00,
+- 0x10,0x08,0x01,0xff,0xe7,0x8c,0xaa,0x00,0x01,0xff,0xe7,0x9b,0x8a,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa4,0xbc,0x00,0x01,0xff,0xe7,0xa5,0x9e,0x00,
+- 0x10,0x08,0x01,0xff,0xe7,0xa5,0xa5,0x00,0x01,0xff,0xe7,0xa6,0x8f,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe9,0x9d,0x96,0x00,0x01,0xff,0xe7,0xb2,0xbe,0x00,0x10,0x08,
+- 0x01,0xff,0xe7,0xbe,0xbd,0x00,0x01,0x00,0xd4,0x64,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x01,0xff,0xe8,0x98,0x92,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe8,0xab,
+- 0xb8,0x00,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe9,0x80,0xb8,0x00,
+- 0x10,0x08,0x01,0xff,0xe9,0x83,0xbd,0x00,0x01,0x00,0xd2,0x14,0x51,0x04,0x01,0x00,
+- 0x10,0x08,0x01,0xff,0xe9,0xa3,0xaf,0x00,0x01,0xff,0xe9,0xa3,0xbc,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe9,0xa4,0xa8,0x00,0x01,0xff,0xe9,0xb6,0xb4,0x00,0x10,0x08,
+- 0x0d,0xff,0xe9,0x83,0x9e,0x00,0x0d,0xff,0xe9,0x9a,0xb7,0x00,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x06,0xff,0xe4,0xbe,0xae,0x00,0x06,0xff,0xe5,0x83,0xa7,0x00,
+- 0x10,0x08,0x06,0xff,0xe5,0x85,0x8d,0x00,0x06,0xff,0xe5,0x8b,0x89,0x00,0xd1,0x10,
+- 0x10,0x08,0x06,0xff,0xe5,0x8b,0xa4,0x00,0x06,0xff,0xe5,0x8d,0x91,0x00,0x10,0x08,
+- 0x06,0xff,0xe5,0x96,0x9d,0x00,0x06,0xff,0xe5,0x98,0x86,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x06,0xff,0xe5,0x99,0xa8,0x00,0x06,0xff,0xe5,0xa1,0x80,0x00,0x10,0x08,
+- 0x06,0xff,0xe5,0xa2,0xa8,0x00,0x06,0xff,0xe5,0xb1,0xa4,0x00,0xd1,0x10,0x10,0x08,
+- 0x06,0xff,0xe5,0xb1,0xae,0x00,0x06,0xff,0xe6,0x82,0x94,0x00,0x10,0x08,0x06,0xff,
+- 0xe6,0x85,0xa8,0x00,0x06,0xff,0xe6,0x86,0x8e,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,
+- 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe6,0x87,0xb2,0x00,0x06,
+- 0xff,0xe6,0x95,0x8f,0x00,0x10,0x08,0x06,0xff,0xe6,0x97,0xa2,0x00,0x06,0xff,0xe6,
+- 0x9a,0x91,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe6,0xa2,0x85,0x00,0x06,0xff,0xe6,
+- 0xb5,0xb7,0x00,0x10,0x08,0x06,0xff,0xe6,0xb8,0x9a,0x00,0x06,0xff,0xe6,0xbc,0xa2,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0x85,0xae,0x00,0x06,0xff,0xe7,
+- 0x88,0xab,0x00,0x10,0x08,0x06,0xff,0xe7,0x90,0xa2,0x00,0x06,0xff,0xe7,0xa2,0x91,
+- 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa4,0xbe,0x00,0x06,0xff,0xe7,0xa5,0x89,
+- 0x00,0x10,0x08,0x06,0xff,0xe7,0xa5,0x88,0x00,0x06,0xff,0xe7,0xa5,0x90,0x00,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa5,0x96,0x00,0x06,0xff,0xe7,
+- 0xa5,0x9d,0x00,0x10,0x08,0x06,0xff,0xe7,0xa6,0x8d,0x00,0x06,0xff,0xe7,0xa6,0x8e,
+- 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa9,0x80,0x00,0x06,0xff,0xe7,0xaa,0x81,
+- 0x00,0x10,0x08,0x06,0xff,0xe7,0xaf,0x80,0x00,0x06,0xff,0xe7,0xb7,0xb4,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xb8,0x89,0x00,0x06,0xff,0xe7,0xb9,0x81,
+- 0x00,0x10,0x08,0x06,0xff,0xe7,0xbd,0xb2,0x00,0x06,0xff,0xe8,0x80,0x85,0x00,0xd1,
+- 0x10,0x10,0x08,0x06,0xff,0xe8,0x87,0xad,0x00,0x06,0xff,0xe8,0x89,0xb9,0x00,0x10,
+- 0x08,0x06,0xff,0xe8,0x89,0xb9,0x00,0x06,0xff,0xe8,0x91,0x97,0x00,0xd4,0x75,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,0xa4,0x90,0x00,0x06,0xff,0xe8,
+- 0xa6,0x96,0x00,0x10,0x08,0x06,0xff,0xe8,0xac,0x81,0x00,0x06,0xff,0xe8,0xac,0xb9,
+- 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,0xb3,0x93,0x00,0x06,0xff,0xe8,0xb4,0x88,
+- 0x00,0x10,0x08,0x06,0xff,0xe8,0xbe,0xb6,0x00,0x06,0xff,0xe9,0x80,0xb8,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe9,0x9b,0xa3,0x00,0x06,0xff,0xe9,0x9f,0xbf,
+- 0x00,0x10,0x08,0x06,0xff,0xe9,0xa0,0xbb,0x00,0x0b,0xff,0xe6,0x81,0xb5,0x00,0x91,
+- 0x11,0x10,0x09,0x0b,0xff,0xf0,0xa4,0x8b,0xae,0x00,0x0b,0xff,0xe8,0x88,0x98,0x00,
+- 0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe4,0xb8,0xa6,0x00,
+- 0x08,0xff,0xe5,0x86,0xb5,0x00,0x10,0x08,0x08,0xff,0xe5,0x85,0xa8,0x00,0x08,0xff,
+- 0xe4,0xbe,0x80,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0x85,0x85,0x00,0x08,0xff,
+- 0xe5,0x86,0x80,0x00,0x10,0x08,0x08,0xff,0xe5,0x8b,0x87,0x00,0x08,0xff,0xe5,0x8b,
+- 0xba,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0x96,0x9d,0x00,0x08,0xff,
+- 0xe5,0x95,0x95,0x00,0x10,0x08,0x08,0xff,0xe5,0x96,0x99,0x00,0x08,0xff,0xe5,0x97,
+- 0xa2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xa1,0x9a,0x00,0x08,0xff,0xe5,0xa2,
+- 0xb3,0x00,0x10,0x08,0x08,0xff,0xe5,0xa5,0x84,0x00,0x08,0xff,0xe5,0xa5,0x94,0x00,
+- 0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe5,0xa9,0xa2,0x00,0x08,0xff,0xe5,0xac,0xa8,0x00,0x10,0x08,
+- 0x08,0xff,0xe5,0xbb,0x92,0x00,0x08,0xff,0xe5,0xbb,0x99,0x00,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe5,0xbd,0xa9,0x00,0x08,0xff,0xe5,0xbe,0xad,0x00,0x10,0x08,0x08,0xff,
+- 0xe6,0x83,0x98,0x00,0x08,0xff,0xe6,0x85,0x8e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe6,0x84,0x88,0x00,0x08,0xff,0xe6,0x86,0x8e,0x00,0x10,0x08,0x08,0xff,
+- 0xe6,0x85,0xa0,0x00,0x08,0xff,0xe6,0x87,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe6,0x88,0xb4,0x00,0x08,0xff,0xe6,0x8f,0x84,0x00,0x10,0x08,0x08,0xff,0xe6,0x90,
+- 0x9c,0x00,0x08,0xff,0xe6,0x91,0x92,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe6,0x95,0x96,0x00,0x08,0xff,0xe6,0x99,0xb4,0x00,0x10,0x08,0x08,0xff,
+- 0xe6,0x9c,0x97,0x00,0x08,0xff,0xe6,0x9c,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe6,0x9d,0x96,0x00,0x08,0xff,0xe6,0xad,0xb9,0x00,0x10,0x08,0x08,0xff,0xe6,0xae,
+- 0xba,0x00,0x08,0xff,0xe6,0xb5,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe6,0xbb,0x9b,0x00,0x08,0xff,0xe6,0xbb,0x8b,0x00,0x10,0x08,0x08,0xff,0xe6,0xbc,
+- 0xa2,0x00,0x08,0xff,0xe7,0x80,0x9e,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x85,
+- 0xae,0x00,0x08,0xff,0xe7,0x9e,0xa7,0x00,0x10,0x08,0x08,0xff,0xe7,0x88,0xb5,0x00,
+- 0x08,0xff,0xe7,0x8a,0xaf,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe7,0x8c,0xaa,0x00,0x08,0xff,0xe7,0x91,0xb1,0x00,0x10,0x08,0x08,0xff,
+- 0xe7,0x94,0x86,0x00,0x08,0xff,0xe7,0x94,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe7,0x98,0x9d,0x00,0x08,0xff,0xe7,0x98,0x9f,0x00,0x10,0x08,0x08,0xff,0xe7,0x9b,
+- 0x8a,0x00,0x08,0xff,0xe7,0x9b,0x9b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe7,0x9b,0xb4,0x00,0x08,0xff,0xe7,0x9d,0x8a,0x00,0x10,0x08,0x08,0xff,0xe7,0x9d,
+- 0x80,0x00,0x08,0xff,0xe7,0xa3,0x8c,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xaa,
+- 0xb1,0x00,0x08,0xff,0xe7,0xaf,0x80,0x00,0x10,0x08,0x08,0xff,0xe7,0xb1,0xbb,0x00,
+- 0x08,0xff,0xe7,0xb5,0x9b,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe7,0xb7,0xb4,0x00,0x08,0xff,0xe7,0xbc,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0x80,
+- 0x85,0x00,0x08,0xff,0xe8,0x8d,0x92,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0x8f,
+- 0xaf,0x00,0x08,0xff,0xe8,0x9d,0xb9,0x00,0x10,0x08,0x08,0xff,0xe8,0xa5,0x81,0x00,
+- 0x08,0xff,0xe8,0xa6,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xa6,
+- 0x96,0x00,0x08,0xff,0xe8,0xaa,0xbf,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xb8,0x00,
+- 0x08,0xff,0xe8,0xab,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xac,0x81,0x00,
+- 0x08,0xff,0xe8,0xab,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xad,0x00,0x08,0xff,
+- 0xe8,0xac,0xb9,0x00,0xcf,0x86,0x95,0xde,0xd4,0x81,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe8,0xae,0x8a,0x00,0x08,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,
+- 0x08,0xff,0xe8,0xbc,0xb8,0x00,0x08,0xff,0xe9,0x81,0xb2,0x00,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe9,0x86,0x99,0x00,0x08,0xff,0xe9,0x89,0xb6,0x00,0x10,0x08,0x08,0xff,
+- 0xe9,0x99,0xbc,0x00,0x08,0xff,0xe9,0x9b,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe9,0x9d,0x96,0x00,0x08,0xff,0xe9,0x9f,0x9b,0x00,0x10,0x08,0x08,0xff,
+- 0xe9,0x9f,0xbf,0x00,0x08,0xff,0xe9,0xa0,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe9,0xa0,0xbb,0x00,0x08,0xff,0xe9,0xac,0x92,0x00,0x10,0x08,0x08,0xff,0xe9,0xbe,
+- 0x9c,0x00,0x08,0xff,0xf0,0xa2,0xa1,0x8a,0x00,0xd3,0x45,0xd2,0x22,0xd1,0x12,0x10,
+- 0x09,0x08,0xff,0xf0,0xa2,0xa1,0x84,0x00,0x08,0xff,0xf0,0xa3,0x8f,0x95,0x00,0x10,
+- 0x08,0x08,0xff,0xe3,0xae,0x9d,0x00,0x08,0xff,0xe4,0x80,0x98,0x00,0xd1,0x11,0x10,
+- 0x08,0x08,0xff,0xe4,0x80,0xb9,0x00,0x08,0xff,0xf0,0xa5,0x89,0x89,0x00,0x10,0x09,
+- 0x08,0xff,0xf0,0xa5,0xb3,0x90,0x00,0x08,0xff,0xf0,0xa7,0xbb,0x93,0x00,0x92,0x14,
+- 0x91,0x10,0x10,0x08,0x08,0xff,0xe9,0xbd,0x83,0x00,0x08,0xff,0xe9,0xbe,0x8e,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xe1,0x94,0x01,0xe0,0x08,0x01,0xcf,0x86,0xd5,0x42,
+- 0xd4,0x14,0x93,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x00,0x00,0x04,0xff,
+- 0xd7,0x99,0xd6,0xb4,0x00,0x10,0x04,0x01,0x1a,0x01,0xff,0xd7,0xb2,0xd6,0xb7,0x00,
+- 0xd4,0x42,0x53,0x04,0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xd7,0xa9,0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd7,0x82,0x00,0xd1,0x16,0x10,0x0b,
+- 0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,
+- 0x82,0x00,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xb7,0x00,0x01,0xff,0xd7,0x90,0xd6,
+- 0xb8,0x00,0xd3,0x43,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xbc,
+- 0x00,0x01,0xff,0xd7,0x91,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x92,0xd6,0xbc,
+- 0x00,0x01,0xff,0xd7,0x93,0xd6,0xbc,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x94,
+- 0xd6,0xbc,0x00,0x01,0xff,0xd7,0x95,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x96,
+- 0xd6,0xbc,0x00,0x00,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x98,0xd6,
+- 0xbc,0x00,0x01,0xff,0xd7,0x99,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x9a,0xd6,
+- 0xbc,0x00,0x01,0xff,0xd7,0x9b,0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,
+- 0x9c,0xd6,0xbc,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xd7,0x9e,0xd6,0xbc,0x00,0x00,
+- 0x00,0xcf,0x86,0x95,0x85,0x94,0x81,0xd3,0x3e,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xd7,0xa0,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa1,0xd6,0xbc,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0xd7,0xa3,0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0xa4,
+- 0xd6,0xbc,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xd7,0xa6,0xd6,0xbc,0x00,0x01,0xff,
+- 0xd7,0xa7,0xd6,0xbc,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa8,0xd6,
+- 0xbc,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0xaa,0xd6,
+- 0xbc,0x00,0x01,0xff,0xd7,0x95,0xd6,0xb9,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,
+- 0x91,0xd6,0xbf,0x00,0x01,0xff,0xd7,0x9b,0xd6,0xbf,0x00,0x10,0x09,0x01,0xff,0xd7,
+- 0xa4,0xd6,0xbf,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,
+- 0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,
+- 0x0c,0x00,0x0c,0x00,0xcf,0x86,0x95,0x24,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,
+- 0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x5a,0xd2,0x06,
+- 0xcf,0x06,0x01,0x00,0xd1,0x14,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x08,
+- 0x14,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,0x04,
+- 0x01,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0x0c,0x94,0x08,0x13,0x04,0x01,0x00,0x00,0x00,0x05,0x00,
+- 0x54,0x04,0x05,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x07,0x00,0x00,0x00,0xd2,0xce,0xd1,0xa5,0xd0,0x37,0xcf,0x86,0xd5,0x15,
+- 0x54,0x05,0x06,0xff,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x00,
+- 0x00,0x00,0x00,0x94,0x1c,0xd3,0x10,0x52,0x04,0x01,0xe6,0x51,0x04,0x0a,0xe6,0x10,
+- 0x04,0x0a,0xe6,0x10,0xdc,0x52,0x04,0x10,0xdc,0x11,0x04,0x10,0xdc,0x11,0xe6,0x01,
+- 0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x07,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd4,0x18,0xd3,0x10,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x12,0x04,0x01,
+- 0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,
+- 0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,
+- 0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,
++ 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,
++ 0x91,0x08,0x10,0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x3c,0xd4,0x28,
++ 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x01,0x09,0x00,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x07,0x00,0x00,0x00,0x00,0x00,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x11,0x00,0x13,0x00,0x13,0x00,0xe1,0x24,
++ 0x01,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,
+ 0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0x00,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x94,0x14,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xd0,0x2f,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x15,0x93,0x11,
+- 0x92,0x0d,0x91,0x09,0x10,0x05,0x01,0xff,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x18,0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,
+- 0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x00,
+- 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,
+- 0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x53,0x05,0x00,
+- 0xff,0x00,0xd2,0x0d,0x91,0x09,0x10,0x05,0x00,0xff,0x00,0x04,0x00,0x04,0x00,0x91,
+- 0x08,0x10,0x04,0x03,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0x46,0x3e,0xe1,0x1f,0x3b,
+- 0xe0,0x9c,0x39,0xcf,0x86,0xe5,0x40,0x26,0xc4,0xe3,0x16,0x14,0xe2,0xef,0x11,0xe1,
+- 0xd0,0x10,0xe0,0x60,0x07,0xcf,0x86,0xe5,0x53,0x03,0xe4,0x4c,0x02,0xe3,0x3d,0x01,
+- 0xd2,0x94,0xd1,0x70,0xd0,0x4a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x07,0x00,
+- 0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,
+- 0xd4,0x14,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,
+- 0x00,0x00,0x07,0x00,0x53,0x04,0x07,0x00,0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,
+- 0x07,0x00,0x00,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,
+- 0x95,0x20,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,
+- 0x00,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,
+- 0x00,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x54,0x04,
+- 0x07,0x00,0x53,0x04,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,
+- 0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,0x00,0x93,0x10,
+- 0x52,0x04,0x07,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,
+- 0xcf,0x06,0x08,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,0x20,0x53,0x04,0x08,0x00,
+- 0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x10,0x00,0xd1,0x08,0x10,0x04,
+- 0x10,0x00,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x12,0x04,
+- 0x0a,0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,
+- 0x00,0x00,0x0a,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,
+- 0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x00,0x00,0xd2,0x5e,0xd1,0x06,0xcf,0x06,
+- 0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,
+- 0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x0a,0x00,
+- 0xcf,0x86,0xd5,0x18,0x54,0x04,0x0a,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x0a,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x10,0xdc,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,
+- 0x10,0x00,0x12,0x04,0x10,0x00,0x00,0x00,0xd1,0x70,0xd0,0x36,0xcf,0x86,0xd5,0x18,
+- 0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x51,0x04,0x05,0x00,
+- 0x10,0x04,0x05,0x00,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x05,0x00,0x00,0x00,
+- 0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x05,0x00,
+- 0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x05,0x00,0x92,0x0c,0x51,0x04,0x05,0x00,
+- 0x10,0x04,0x05,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x0c,
+- 0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x10,0xe6,0x92,0x0c,0x51,0x04,0x10,0xe6,
+- 0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
+- 0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,
+- 0x00,0x00,0x07,0x00,0x08,0x00,0xcf,0x86,0x95,0x1c,0xd4,0x0c,0x93,0x08,0x12,0x04,
+- 0x08,0x00,0x00,0x00,0x08,0x00,0x93,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0xba,0xd2,0x80,0xd1,0x34,0xd0,0x1a,0xcf,0x86,
+- 0x55,0x04,0x05,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,
+- 0x07,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x53,0x04,0x05,0x00,
+- 0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd0,0x2a,
+- 0xcf,0x86,0xd5,0x14,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,
+- 0x11,0x04,0x07,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x07,0x00,0x92,0x08,0x11,0x04,
+- 0x07,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xcf,0x86,0xd5,0x10,0x54,0x04,0x12,0x00,
+- 0x93,0x08,0x12,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x54,0x04,0x12,0x00,0x53,0x04,
+- 0x12,0x00,0x12,0x04,0x12,0x00,0x00,0x00,0xd1,0x34,0xd0,0x12,0xcf,0x86,0x55,0x04,
+- 0x10,0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,
+- 0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x10,0x00,0x00,0x00,0x52,0x04,0x00,0x00,
+- 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,
+- 0xd2,0x06,0xcf,0x06,0x10,0x00,0xd1,0x40,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,
+- 0x54,0x04,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,
+- 0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x08,0x13,0x04,
+- 0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,0xce,0x02,0xe3,0x45,0x01,
+- 0xd2,0xd0,0xd1,0x70,0xd0,0x52,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,0x0c,0x52,0x04,
+- 0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,
+- 0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,0x00,0xd3,0x10,0x52,0x04,
+- 0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,0x95,0x18,0x54,0x04,0x0b,0x00,0x93,0x10,
+- 0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,
+- 0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
+- 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,
+- 0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x11,0x00,0xd3,0x14,
+- 0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x04,0x11,0x00,
+- 0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x11,0x00,
+- 0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x1c,0x54,0x04,0x09,0x00,0x53,0x04,0x09,0x00,
+- 0xd2,0x08,0x11,0x04,0x09,0x00,0x0b,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,
+- 0x09,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,
+- 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0xcf,0x06,0x00,0x00,
+- 0xd0,0x1a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0x53,0x04,0x0d,0x00,
+- 0x52,0x04,0x00,0x00,0x11,0x04,0x11,0x00,0x0d,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,
+- 0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x11,0x00,0x11,0x00,0x11,0x00,
+- 0x11,0x00,0xd2,0xec,0xd1,0xa4,0xd0,0x76,0xcf,0x86,0xd5,0x48,0xd4,0x28,0xd3,0x14,
+- 0x52,0x04,0x08,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x10,0x04,0x08,0x00,
+- 0x00,0x00,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x08,0x00,0x08,0xdc,0x10,0x04,
+- 0x08,0x00,0x08,0xe6,0xd3,0x10,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x08,0x00,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,
+- 0x08,0x00,0x54,0x04,0x08,0x00,0xd3,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x14,0x00,
+- 0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe6,0x08,0x01,0x10,0x04,0x08,0xdc,
+- 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,0x09,0xcf,0x86,0x95,0x28,
+- 0xd4,0x14,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x08,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x10,0x00,
+- 0x00,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x24,0xd3,0x14,0x52,0x04,0x10,0x00,
+- 0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0xe6,0x10,0x04,0x10,0xdc,0x00,0x00,0x92,0x0c,
+- 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x52,0x04,
+- 0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd1,0x54,
+- 0xd0,0x26,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,0x04,
+- 0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x0b,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x0b,0x00,0x93,0x0c,
+- 0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x0b,0x00,0x54,0x04,0x0b,0x00,
+- 0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,
+- 0x0b,0x00,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x54,0x04,0x10,0x00,0xd3,0x0c,0x92,0x08,
+- 0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x10,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,
+- 0x53,0x04,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
+- 0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x96,0xd2,0x68,0xd1,0x24,0xd0,0x06,
+- 0xcf,0x06,0x0b,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x0b,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xd0,0x1e,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
+- 0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x11,0x00,
+- 0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x11,0x00,
+- 0x11,0x00,0xd1,0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,0x00,0xd4,0x0c,0x93,0x08,
+- 0x12,0x04,0x14,0x00,0x14,0xe6,0x00,0x00,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,
+- 0x14,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,
+- 0xd1,0x24,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,
+- 0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,
+- 0x0b,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x58,0xd0,0x12,0xcf,0x86,0x55,0x04,
+- 0x14,0x00,0x94,0x08,0x13,0x04,0x14,0x00,0x00,0x00,0x14,0x00,0xcf,0x86,0x95,0x40,
+- 0xd4,0x24,0xd3,0x0c,0x52,0x04,0x14,0x00,0x11,0x04,0x14,0x00,0x14,0xdc,0xd2,0x0c,
+- 0x51,0x04,0x14,0xe6,0x10,0x04,0x14,0xe6,0x14,0xdc,0x91,0x08,0x10,0x04,0x14,0xe6,
+- 0x14,0xdc,0x14,0xdc,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0xdc,0x14,0x00,
+- 0x14,0x00,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x15,0x00,
+- 0x93,0x10,0x52,0x04,0x15,0x00,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x86,0xe5,0x0f,0x06,0xe4,0xf8,0x03,0xe3,0x02,0x02,0xd2,0xfb,0xd1,
+- 0x4c,0xd0,0x06,0xcf,0x06,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x1c,0xd3,0x10,0x52,
+- 0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x09,0x0c,0x00,0x52,0x04,0x0c,
+- 0x00,0x11,0x04,0x0c,0x00,0x00,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x0c,
+- 0x00,0x0c,0x00,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,0x00,0x00,0x52,0x04,0x00,
+- 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x09,0xd0,0x69,0xcf,0x86,0xd5,
+- 0x32,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x15,0x51,0x04,0x0b,0x00,0x10,
+- 0x0d,0x0b,0xff,0xf0,0x91,0x82,0x99,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x91,0x11,
+- 0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x9b,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x0b,
+- 0x00,0xd4,0x1d,0x53,0x04,0x0b,0x00,0x92,0x15,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
+- 0x00,0x0b,0xff,0xf0,0x91,0x82,0xa5,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x53,0x04,
+- 0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x10,0x04,0x0b,0x07,
+- 0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,0x0c,0x92,0x08,0x11,0x04,
+- 0x0b,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x14,0x00,0x00,0x00,0x0d,0x00,0xd4,0x14,0x53,0x04,0x0d,0x00,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x08,
+- 0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0xd1,0x96,0xd0,0x5c,0xcf,0x86,0xd5,0x18,
+- 0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x0d,0x00,
+- 0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd4,0x26,0x53,0x04,0x0d,0x00,0x52,0x04,0x0d,0x00,
+- 0x51,0x04,0x0d,0x00,0x10,0x0d,0x0d,0xff,0xf0,0x91,0x84,0xb1,0xf0,0x91,0x84,0xa7,
+- 0x00,0x0d,0xff,0xf0,0x91,0x84,0xb2,0xf0,0x91,0x84,0xa7,0x00,0x93,0x18,0xd2,0x0c,
+- 0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x0d,0x09,0x91,0x08,0x10,0x04,0x0d,0x09,
+- 0x00,0x00,0x0d,0x00,0x0d,0x00,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,
+- 0x0d,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x10,0x00,
+- 0x54,0x04,0x10,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,
+- 0x10,0x07,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,
+- 0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0d,0x09,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x0d,0x00,0x11,0x00,0x10,0x04,0x11,0x07,0x11,0x00,0x91,0x08,0x10,0x04,0x11,0x00,
+- 0x10,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,
+- 0x10,0x00,0x11,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,
+- 0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xc8,0xd1,0x48,
+- 0xd0,0x42,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
+- 0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x54,0x04,0x10,0x00,
+- 0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0x09,0x10,0x04,
+- 0x10,0x07,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x12,0x00,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x10,
+- 0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,
+- 0x00,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,
+- 0x10,0x04,0x00,0x00,0x11,0x00,0x94,0x10,0x53,0x04,0x11,0x00,0x92,0x08,0x11,0x04,
+- 0x11,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x18,
+- 0x53,0x04,0x10,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0x07,0x10,0x04,
+- 0x10,0x09,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,
+- 0x00,0x00,0x00,0x00,0xe1,0x27,0x01,0xd0,0x8a,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,
+- 0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x10,0x00,0x10,0x00,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,
+- 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,
+- 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0xd4,
+- 0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,
+- 0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,
+- 0x00,0x10,0x04,0x00,0x00,0x14,0x07,0x91,0x08,0x10,0x04,0x10,0x07,0x10,0x00,0x10,
+- 0x00,0xcf,0x86,0xd5,0x6a,0xd4,0x42,0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,
+- 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0xd2,0x19,0xd1,0x08,0x10,
+- 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0xff,0xf0,0x91,0x8d,0x87,0xf0,
+- 0x91,0x8c,0xbe,0x00,0x91,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x8d,0x87,0xf0,0x91,
+- 0x8d,0x97,0x00,0x10,0x09,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,
+- 0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x52,
+- 0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd4,0x1c,0xd3,
+- 0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0xe6,0x52,0x04,0x10,0xe6,0x91,
+- 0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x10,0xe6,0x91,
+- 0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe3,
+- 0x30,0x01,0xd2,0xb7,0xd1,0x48,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x95,0x3c,
+- 0xd4,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x09,0x12,0x00,
+- 0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x07,0x12,0x00,0x12,0x00,0x53,0x04,0x12,0x00,
+- 0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x00,0x00,0x12,0x00,0xd1,0x08,0x10,0x04,
+- 0x00,0x00,0x12,0x00,0x10,0x04,0x14,0xe6,0x15,0x00,0x00,0x00,0xd0,0x45,0xcf,0x86,
+- 0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0xd2,0x15,0x51,0x04,
+- 0x10,0x00,0x10,0x04,0x10,0x00,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xba,
+- 0x00,0xd1,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xb0,0x00,
+- 0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xbd,0x00,0x10,
+- 0x00,0xcf,0x86,0x95,0x24,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,
+- 0x04,0x10,0x09,0x10,0x07,0x10,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,
+- 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,
+- 0x40,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x0c,0x52,0x04,0x10,
+- 0x00,0x11,0x04,0x10,0x00,0x00,0x00,0xd2,0x1e,0x51,0x04,0x10,0x00,0x10,0x0d,0x10,
+- 0xff,0xf0,0x91,0x96,0xb8,0xf0,0x91,0x96,0xaf,0x00,0x10,0xff,0xf0,0x91,0x96,0xb9,
+- 0xf0,0x91,0x96,0xaf,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,0x09,0xcf,
+- 0x86,0x95,0x2c,0xd4,0x1c,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x07,0x10,
+- 0x00,0x10,0x00,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0x53,
+- 0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0xd2,
+- 0xa0,0xd1,0x5c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,
+- 0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,
+- 0x09,0xcf,0x86,0xd5,0x24,0xd4,0x14,0x93,0x10,0x52,0x04,0x10,0x00,0x91,0x08,0x10,
+- 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,
+- 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,
+- 0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x2a,0xcf,
+- 0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0xd3,0x10,0x52,0x04,0x0d,0x00,0x51,
+- 0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x0d,0x07,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x53,0x04,0x0d,
+- 0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
+- 0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x11,0x00,0x53,0x04,0x11,0x00,0xd2,
+- 0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x11,0x00,0x11,0x00,0x94,0x14,0x53,0x04,0x11,0x00,0x92,0x0c,0x51,0x04,0x11,
+- 0x00,0x10,0x04,0x11,0x00,0x11,0x09,0x00,0x00,0x11,0x00,0xcf,0x06,0x00,0x00,0xcf,
+- 0x06,0x00,0x00,0xe4,0x59,0x01,0xd3,0xb2,0xd2,0x5c,0xd1,0x28,0xd0,0x22,0xcf,0x86,
+- 0x55,0x04,0x14,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,0x00,0x92,0x10,0xd1,0x08,
+- 0x10,0x04,0x14,0x00,0x14,0x09,0x10,0x04,0x14,0x07,0x14,0x00,0x00,0x00,0xcf,0x06,
+- 0x00,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,
+- 0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x10,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,
+- 0x00,0x00,0x94,0x10,0x53,0x04,0x15,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,
+- 0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x15,0x00,0x53,0x04,0x15,0x00,
+- 0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x94,0x1c,0x93,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x15,0x09,0x15,0x00,0x15,0x00,0x91,0x08,0x10,0x04,0x15,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,0x3c,0xd0,0x1e,0xcf,0x86,
+- 0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x93,0x10,0x52,0x04,0x13,0x00,0x91,0x08,
+- 0x10,0x04,0x13,0x09,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,
+- 0x93,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x13,0x09,
+- 0x00,0x00,0x13,0x00,0x13,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,0x10,0x93,0x0c,
+- 0x52,0x04,0x13,0x00,0x11,0x04,0x15,0x00,0x13,0x00,0x13,0x00,0x53,0x04,0x13,0x00,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x13,0x09,0x13,0x00,0x91,0x08,0x10,0x04,
+- 0x13,0x00,0x14,0x00,0x13,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x13,0x00,
+- 0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,
+- 0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe3,0xa9,0x01,0xd2,
+- 0xb0,0xd1,0x6c,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x54,
+- 0x04,0x12,0x00,0xd3,0x10,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,
+- 0x00,0x00,0x00,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x12,
+- 0x09,0xcf,0x86,0xd5,0x14,0x94,0x10,0x93,0x0c,0x52,0x04,0x12,0x00,0x11,0x04,0x12,
+- 0x00,0x00,0x00,0x00,0x00,0x12,0x00,0x94,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,
+- 0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xd0,0x3e,0xcf,
+- 0x86,0xd5,0x14,0x54,0x04,0x12,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x12,
+- 0x00,0x12,0x00,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x93,0x10,0x52,0x04,0x12,0x00,0x51,
+- 0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,
+- 0xa0,0xd0,0x52,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,0x13,0x00,0x51,
+- 0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,
+- 0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x54,0x04,0x13,0x00,0xd3,0x10,0x52,
+- 0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0xd2,0x0c,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x00,
+- 0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x51,0x04,0x13,
+- 0x00,0x10,0x04,0x13,0x07,0x13,0x00,0x11,0x04,0x13,0x09,0x13,0x00,0x00,0x00,0x53,
+- 0x04,0x13,0x00,0x92,0x08,0x11,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x94,0x20,0xd3,
+- 0x10,0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd0,
+- 0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x14,0x53,0x04,0x14,0x00,0x52,0x04,0x14,0x00,0x51,
+- 0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x14,
+- 0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x14,
+- 0x09,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x94,
+- 0x10,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,
+- 0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
+- 0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x15,
+- 0x00,0x54,0x04,0x15,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x15,0x00,0x00,0x00,0x00,
+- 0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0xd0,
+- 0xca,0xcf,0x86,0xd5,0xc2,0xd4,0x54,0xd3,0x06,0xcf,0x06,0x09,0x00,0xd2,0x06,0xcf,
+- 0x06,0x09,0x00,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,
+- 0x00,0x94,0x14,0x53,0x04,0x09,0x00,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,
+- 0x04,0x09,0x00,0x10,0x00,0x10,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x10,
+- 0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x11,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x68,0xd2,0x46,0xd1,0x40,0xd0,
+- 0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x20,0xd3,0x10,0x92,
+- 0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,
+- 0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x09,
+- 0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x11,
+- 0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x95,0x10,0x94,0x0c,0x93,
+- 0x08,0x12,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x4c,0xd4,0x06,0xcf,
+- 0x06,0x0b,0x00,0xd3,0x40,0xd2,0x3a,0xd1,0x34,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0b,
+- 0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,
+- 0x04,0x0b,0x00,0x00,0x00,0x53,0x04,0x15,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
++ 0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,
++ 0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x07,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,
++ 0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x45,0xd3,0x14,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x96,0x00,
++ 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xad,0x87,0xe0,0xac,0xbe,0x00,0x91,
++ 0x0f,0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x97,0x00,0x01,0x09,0x00,0x00,
++ 0xd3,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xac,0xa1,0xe0,0xac,0xbc,0x00,0x01,0xff,0xe0,
++ 0xac,0xa2,0xe0,0xac,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd4,0x14,0x93,0x10,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x0c,0x00,0x0c,0x00,
++ 0x00,0x00,0xd0,0xb1,0xcf,0x86,0xd5,0x63,0xd4,0x28,0xd3,0x14,0xd2,0x08,0x11,0x04,
++ 0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
++ 0xd3,0x1f,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x0f,
++ 0x10,0x0b,0x01,0xff,0xe0,0xae,0x92,0xe0,0xaf,0x97,0x00,0x01,0x00,0x00,0x00,0xd2,
++ 0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,
++ 0x04,0x00,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x08,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x61,0xd4,0x45,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,0xae,
++ 0xbe,0x00,0x01,0xff,0xe0,0xaf,0x87,0xe0,0xae,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,
++ 0xff,0xe0,0xaf,0x86,0xe0,0xaf,0x97,0x00,0x01,0x09,0x00,0x00,0x93,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,
++ 0x00,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,
++ 0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xe3,0x1c,0x04,0xe2,0x1a,0x02,0xd1,0xf3,
++ 0xd0,0x76,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,
++ 0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x10,0x00,
++ 0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x53,0xd4,0x2f,0xd3,0x10,0x52,0x04,
++ 0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd2,0x13,0x91,0x0f,
++ 0x10,0x0b,0x01,0xff,0xe0,0xb1,0x86,0xe0,0xb1,0x96,0x00,0x00,0x00,0x01,0x00,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x54,0x10,0x04,0x01,0x5b,0x00,0x00,0x92,0x0c,0x51,
++ 0x04,0x0a,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,
++ 0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,
++ 0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0a,
++ 0x00,0xd0,0x76,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x12,0x00,0x10,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,
++ 0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,
++ 0x04,0x07,0x07,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x82,0xd4,0x5e,0xd3,0x2a,0xd2,
++ 0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb2,0xbf,0xe0,0xb3,0x95,0x00,0x01,0x00,
++ 0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xe0,0xb3,0x86,0xe0,0xb3,0x95,0x00,0xd2,0x28,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,
++ 0xb3,0x86,0xe0,0xb3,0x96,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,
++ 0xb3,0x82,0x00,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x82,0xe0,0xb3,0x95,0x00,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x00,
++ 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,
++ 0x08,0x11,0x04,0x01,0x00,0x09,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,
++ 0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xe1,0x06,0x01,0xd0,0x6e,0xcf,0x86,0xd5,0x3c,0xd4,0x28,
++ 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x10,0x00,0x01,0x00,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x0c,0x00,0x13,0x09,0x91,0x08,0x10,0x04,
++ 0x13,0x09,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x65,0xd4,0x45,0xd3,0x10,0x52,0x04,
++ 0x01,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb4,0xbe,
++ 0x00,0x01,0xff,0xe0,0xb5,0x87,0xe0,0xb4,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
++ 0xe0,0xb5,0x86,0xe0,0xb5,0x97,0x00,0x01,0x09,0x10,0x04,0x0c,0x00,0x12,0x00,0xd3,
++ 0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x01,0x00,0x52,
++ 0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x11,0x00,0xd4,0x14,0x93,
++ 0x10,0xd2,0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,
++ 0x00,0xd3,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x12,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x12,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x5a,0xcf,0x86,0xd5,
++ 0x34,0xd4,0x18,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x04,
++ 0x00,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,
++ 0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x04,0x00,0x00,0x00,0xcf,0x86,0xd5,0x77,0xd4,0x28,0xd3,0x10,0x52,0x04,0x04,
++ 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,
++ 0x00,0x10,0x04,0x04,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x04,
++ 0x00,0xd3,0x14,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x10,
++ 0x04,0x04,0x00,0x00,0x00,0xd2,0x13,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe0,
++ 0xb7,0x99,0xe0,0xb7,0x8a,0x00,0x04,0x00,0xd1,0x19,0x10,0x0b,0x04,0xff,0xe0,0xb7,
++ 0x99,0xe0,0xb7,0x8f,0x00,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0xe0,0xb7,0x8a,
++ 0x00,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x9f,0x00,0x04,0x00,0xd4,0x10,
++ 0x93,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x14,
++ 0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0xe2,0x31,0x01,0xd1,0x58,0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,
++ 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,
++ 0x04,0x01,0x67,0x10,0x04,0x01,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0xcf,0x86,0x95,0x18,0xd4,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,
++ 0x6b,0x01,0x00,0x53,0x04,0x01,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd0,
++ 0x9e,0xcf,0x86,0xd5,0x54,0xd4,0x3c,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x10,0x04,0x15,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x15,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x15,
++ 0x00,0xd3,0x08,0x12,0x04,0x15,0x00,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x15,0x00,0x01,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x15,0x00,0x01,0x00,0x91,0x08,0x10,
++ 0x04,0x15,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x76,0x10,0x04,0x15,0x09,0x01,0x00,0x11,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0x95,
++ 0x34,0xd4,0x20,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x01,0x7a,0x11,0x04,0x01,0x00,0x00,
++ 0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x01,
++ 0x00,0x0d,0x00,0x00,0x00,0xe1,0x2b,0x01,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,
++ 0x02,0x00,0x53,0x04,0x02,0x00,0x92,0x08,0x11,0x04,0x02,0xdc,0x02,0x00,0x02,0x00,
++ 0x54,0x04,0x02,0x00,0xd3,0x14,0x52,0x04,0x02,0x00,0xd1,0x08,0x10,0x04,0x02,0x00,
++ 0x02,0xdc,0x10,0x04,0x02,0x00,0x02,0xdc,0x92,0x0c,0x91,0x08,0x10,0x04,0x02,0x00,
++ 0x02,0xd8,0x02,0x00,0x02,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x36,0xd3,0x17,0x92,0x13,
++ 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x82,0xe0,0xbe,0xb7,
++ 0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,
++ 0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x8c,0xe0,0xbe,0xb7,0x00,0x02,0x00,
++ 0xd3,0x26,0xd2,0x13,0x51,0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x91,0xe0,
++ 0xbe,0xb7,0x00,0x02,0x00,0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,
++ 0xbd,0x96,0xe0,0xbe,0xb7,0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,
++ 0xe0,0xbd,0x9b,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x02,0x00,0xd4,0x27,0x53,0x04,0x02,
++ 0x00,0xd2,0x17,0xd1,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x80,0xe0,0xbe,
++ 0xb5,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,
++ 0x00,0x00,0xd3,0x35,0xd2,0x17,0xd1,0x08,0x10,0x04,0x00,0x00,0x02,0x81,0x10,0x04,
++ 0x02,0x82,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbd,0xb2,0x00,0xd1,0x0f,0x10,0x04,0x02,
++ 0x84,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbd,0xb4,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,
++ 0xb2,0xe0,0xbe,0x80,0x00,0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,
++ 0xbe,0xb3,0xe0,0xbe,0x80,0x00,0x02,0x00,0x02,0x82,0x11,0x04,0x02,0x82,0x02,0x00,
++ 0xd0,0xd3,0xcf,0x86,0xd5,0x65,0xd4,0x27,0xd3,0x1f,0xd2,0x13,0x91,0x0f,0x10,0x04,
++ 0x02,0x82,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbe,0x80,0x00,0x02,0xe6,0x91,0x08,0x10,
++ 0x04,0x02,0x09,0x02,0x00,0x02,0xe6,0x12,0x04,0x02,0x00,0x0c,0x00,0xd3,0x1f,0xd2,
++ 0x13,0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x92,0xe0,0xbe,
++ 0xb7,0x00,0x51,0x04,0x02,0x00,0x10,0x04,0x04,0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,
++ 0xe0,0xbe,0x9c,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd4,0x3d,0xd3,0x26,0xd2,0x13,0x51,
++ 0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xa1,0xe0,0xbe,0xb7,0x00,0x02,0x00,
++ 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0xa6,0xe0,0xbe,0xb7,
++ 0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xab,0xe0,0xbe,
++ 0xb7,0x00,0x02,0x00,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,
++ 0x02,0x00,0x02,0x00,0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x04,0x04,0x00,0x02,0xff,
++ 0xe0,0xbe,0x90,0xe0,0xbe,0xb5,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,
++ 0x00,0x04,0x00,0xcf,0x86,0x95,0x4c,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0xdc,0x04,0x00,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,
++ 0x04,0x04,0x00,0x00,0x00,0x10,0x04,0x0a,0x00,0x04,0x00,0xd3,0x14,0xd2,0x08,0x11,
++ 0x04,0x08,0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,
++ 0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0xcf,0x86,0xe5,0xcc,0x04,0xe4,0x63,0x03,0xe3,0x65,0x01,0xe2,0x04,
++ 0x01,0xd1,0x7f,0xd0,0x65,0xcf,0x86,0x55,0x04,0x04,0x00,0xd4,0x33,0xd3,0x1f,0xd2,
++ 0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x0a,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
++ 0x0b,0x04,0xff,0xe1,0x80,0xa5,0xe1,0x80,0xae,0x00,0x04,0x00,0x92,0x10,0xd1,0x08,
++ 0x10,0x04,0x0a,0x00,0x04,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x04,0x00,0xd3,0x18,
++ 0xd2,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x51,0x04,0x0a,0x00,
++ 0x10,0x04,0x04,0x00,0x04,0x07,0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0x09,
++ 0x10,0x04,0x0a,0x09,0x0a,0x00,0x0a,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,0x04,0x00,
++ 0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,
++ 0xd0,0x2e,0xcf,0x86,0x95,0x28,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,
++ 0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,
++ 0x11,0x04,0x0a,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,0x00,
++ 0x00,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x06,0x00,
++ 0x08,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0d,0x00,
++ 0x0d,0x00,0xd1,0x28,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x1c,0x54,0x04,
++ 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,
++ 0x0b,0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,
++ 0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x0b,0x00,0x0b,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,
++ 0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x0b,0x00,
++ 0xe2,0x21,0x01,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x52,
++ 0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x04,
++ 0x00,0x04,0x00,0xcf,0x86,0x95,0x48,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,
++ 0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x04,
++ 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd0,
++ 0x62,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,
++ 0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,0x04,
++ 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,
++ 0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,
++ 0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,
++ 0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,
++ 0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x93,0x10,0x52,0x04,0x04,0x00,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x94,0x14,0x53,0x04,0x04,
++ 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,
++ 0x00,0xd1,0x9c,0xd0,0x3e,0xcf,0x86,0x95,0x38,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,
++ 0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,0x14,0xd2,
++ 0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,
++ 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,
++ 0x00,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0xd2,0x0c,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x0c,
++ 0xe6,0x10,0x04,0x0c,0xe6,0x08,0xe6,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x08,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0x52,
++ 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,
++ 0x86,0x95,0x14,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,
++ 0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,
++ 0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x11,0x00,0x00,
++ 0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0xd3,0x30,0xd2,0x2a,0xd1,
++ 0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x0b,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,
++ 0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd2,0x6c,0xd1,0x24,0xd0,
++ 0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,
++ 0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0b,0x00,0x0b,
++ 0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x52,
++ 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,
++ 0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x04,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x46,0xcf,0x86,0xd5,0x28,0xd4,
++ 0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x00,
++ 0x00,0x06,0x00,0x93,0x10,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x09,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x06,0x00,0x93,0x14,0x52,0x04,0x06,0x00,0xd1,
++ 0x08,0x10,0x04,0x06,0x09,0x06,0x00,0x10,0x04,0x06,0x00,0x00,0x00,0x00,0x00,0xcf,
++ 0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x06,0x00,0x00,0x00,0x00,
++ 0x00,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,
++ 0x00,0x00,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x00,
++ 0x00,0x06,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0xd5,
++ 0x24,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,
++ 0x09,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,
++ 0xe6,0x00,0x00,0xd4,0x10,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x00,
++ 0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x08,0x11,0x04,0x07,0x00,0x00,0x00,0x00,
++ 0x00,0xe4,0xac,0x03,0xe3,0x4d,0x01,0xd2,0x84,0xd1,0x48,0xd0,0x2a,0xcf,0x86,0x95,
++ 0x24,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
++ 0x04,0x04,0x00,0x00,0x00,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x00,
++ 0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x53,
++ 0x04,0x04,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x04,0x00,0x94,0x18,0x53,0x04,0x04,0x00,0x92,
++ 0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0xe4,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,
++ 0x00,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,0x52,
++ 0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x42,0xcf,
++ 0x86,0xd5,0x1c,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0xd1,
++ 0x08,0x10,0x04,0x07,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd4,0x0c,0x53,
++ 0x04,0x07,0x00,0x12,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x10,0xd1,
++ 0x08,0x10,0x04,0x07,0x00,0x07,0xde,0x10,0x04,0x07,0xe6,0x07,0xdc,0x00,0x00,0xcf,
++ 0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x00,
++ 0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,
++ 0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x07,0x00,0x91,
++ 0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,
++ 0x04,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x0b,
++ 0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x95,0x28,0xd4,0x10,0x53,0x04,0x08,0x00,0x92,
++ 0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,
++ 0x04,0x08,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x08,0x00,0x07,
++ 0x00,0xd2,0xe4,0xd1,0x80,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x54,0x04,0x08,0x00,0xd3,
++ 0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x08,0xe6,0xd2,
++ 0x0c,0x91,0x08,0x10,0x04,0x08,0xdc,0x08,0x00,0x08,0x00,0x11,0x04,0x00,0x00,0x08,
++ 0x00,0x0b,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,
++ 0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xd4,0x14,0x93,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x0b,
++ 0x00,0xd3,0x10,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,
++ 0xe6,0x52,0x04,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xe6,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x0b,0xdc,0xd0,0x5e,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0b,0x00,0x92,
++ 0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,0x11,
++ 0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd4,0x10,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,
++ 0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x10,0xe6,0x91,0x08,0x10,
++ 0x04,0x10,0xe6,0x10,0xdc,0x10,0xdc,0xd2,0x0c,0x51,0x04,0x10,0xdc,0x10,0x04,0x10,
++ 0xdc,0x10,0xe6,0xd1,0x08,0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0x04,0x10,0x00,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xe1,0x1e,0x01,0xd0,0xaa,0xcf,0x86,0xd5,0x6e,0xd4,0x53,
++ 0xd3,0x17,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,
++ 0x85,0xe1,0xac,0xb5,0x00,0x09,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,
++ 0xac,0x87,0xe1,0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x89,0xe1,
++ 0xac,0xb5,0x00,0x09,0x00,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8b,0xe1,0xac,
++ 0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8d,0xe1,0xac,0xb5,0x00,0x09,
++ 0x00,0x93,0x17,0x92,0x13,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x91,
++ 0xe1,0xac,0xb5,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x54,0x04,0x09,0x00,0xd3,0x10,
++ 0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x07,0x09,0x00,0x09,0x00,0xd2,0x13,
++ 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xba,0xe1,0xac,0xb5,
++ 0x00,0x91,0x0f,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xbc,0xe1,0xac,0xb5,0x00,
++ 0x09,0x00,0xcf,0x86,0xd5,0x3d,0x94,0x39,0xd3,0x31,0xd2,0x25,0xd1,0x16,0x10,0x0b,
++ 0x09,0xff,0xe1,0xac,0xbe,0xe1,0xac,0xb5,0x00,0x09,0xff,0xe1,0xac,0xbf,0xe1,0xac,
++ 0xb5,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xad,0x82,0xe1,0xac,0xb5,0x00,0x91,
++ 0x08,0x10,0x04,0x09,0x09,0x09,0x00,0x09,0x00,0x12,0x04,0x09,0x00,0x00,0x00,0x09,
++ 0x00,0xd4,0x1c,0x53,0x04,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,
++ 0x00,0x09,0xe6,0x91,0x08,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0xe6,0xd3,0x08,0x12,
++ 0x04,0x09,0xe6,0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x00,
++ 0x00,0x00,0x00,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x18,0x53,0x04,0x0a,
++ 0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x09,0x0d,0x09,0x11,0x04,0x0d,
++ 0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x0d,0x00,0x0d,
++ 0x00,0xcf,0x86,0x55,0x04,0x0c,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x0c,0x00,0x51,
++ 0x04,0x0c,0x00,0x10,0x04,0x0c,0x07,0x0c,0x00,0x0c,0x00,0xd3,0x0c,0x92,0x08,0x11,
++ 0x04,0x0c,0x00,0x0c,0x09,0x00,0x00,0x12,0x04,0x00,0x00,0x0c,0x00,0xe3,0xb2,0x01,
++ 0xe2,0x09,0x01,0xd1,0x4c,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,0x0a,
++ 0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,
++ 0x07,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0xcf,
++ 0x86,0x95,0x1c,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,0x00,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,
++ 0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x54,0x04,0x14,0x00,0x53,
++ 0x04,0x14,0x00,0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x14,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x08,0x13,
++ 0x04,0x0d,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,0x0b,
++ 0xe6,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x01,0x0b,0xdc,0x0b,0xdc,0x92,0x08,0x11,
++ 0x04,0x0b,0xdc,0x0b,0xe6,0x0b,0xdc,0xd4,0x28,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x01,0x0b,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,
++ 0x01,0x0b,0x00,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xdc,0x0b,0x00,0xd3,
++ 0x1c,0xd2,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0d,0x00,0xd1,0x08,0x10,
++ 0x04,0x0d,0xe6,0x0d,0x00,0x10,0x04,0x0d,0x00,0x13,0x00,0x92,0x0c,0x51,0x04,0x10,
++ 0xe6,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x07,
++ 0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x94,0x0c,0x53,0x04,0x07,0x00,0x12,0x04,0x07,
++ 0x00,0x08,0x00,0x08,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0xd5,0x40,0xd4,
++ 0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0xe6,0x10,0x04,0x08,0xdc,0x08,0xe6,0x09,
++ 0xe6,0xd2,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x0a,0xe6,0xd1,0x08,0x10,
++ 0x04,0x0a,0xe6,0x0a,0xea,0x10,0x04,0x0a,0xd6,0x0a,0xdc,0x93,0x10,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x0a,0xca,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0xd4,0x14,0x93,
++ 0x10,0x52,0x04,0x0a,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xe6,0x10,
++ 0xe6,0xd3,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x13,0xe8,0x13,
++ 0xe4,0xd2,0x10,0xd1,0x08,0x10,0x04,0x13,0xe4,0x13,0xdc,0x10,0x04,0x00,0x00,0x12,
++ 0xe6,0xd1,0x08,0x10,0x04,0x0c,0xe9,0x0b,0xdc,0x10,0x04,0x09,0xe6,0x09,0xdc,0xe2,
++ 0x80,0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa5,0x00,0x01,0xff,0x61,
++ 0xcc,0xa5,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x42,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,
++ 0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,
++ 0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x43,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,
++ 0xcc,0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x87,0x00,0x01,0xff,0x64,
++ 0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa3,0x00,0x01,0xff,0x64,
++ 0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,
++ 0x00,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa7,0x00,0x01,
++ 0xff,0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xad,0x00,0x01,0xff,0x64,
++ 0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x80,0x00,0x01,
++ 0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x81,
++ 0x00,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x45,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x45,
++ 0xcc,0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,
++ 0xcc,0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,
++ 0xff,0x46,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,
++ 0x00,0x10,0x08,0x01,0xff,0x48,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,
++ 0x08,0x01,0xff,0x48,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,
++ 0x08,0x01,0xff,0x48,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x49,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,
++ 0xff,0x49,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x81,0x00,0x01,0xff,0x6b,
++ 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,
++ 0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,
++ 0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,
++ 0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xb1,0x00,0x01,0xff,0x6c,
++ 0xcc,0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4c,0xcc,0xad,0x00,0x01,0xff,0x6c,
++ 0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x4d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,
++ 0x00,0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x4d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,
++ 0x4d,0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x4e,0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x4e,0xcc,
++ 0xa3,0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x4e,0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x4e,0xcc,
++ 0xad,0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,
++ 0x83,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,
++ 0x4f,0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,
++ 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,
++ 0x6f,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x81,0x00,
++ 0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x50,0xcc,
++ 0x81,0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x50,0xcc,0x87,0x00,
++ 0x01,0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
++ 0x87,0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa3,0x00,
++ 0x01,0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x52,0xcc,0xa3,0xcc,
++ 0x84,0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,
++ 0xb1,0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x53,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,
++ 0x01,0xff,0x53,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x53,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,
++ 0x10,0x0a,0x01,0xff,0x53,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,
++ 0x87,0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x53,0xcc,0xa3,0xcc,0x87,0x00,
++ 0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0x87,0x00,
++ 0x01,0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0xa3,0x00,
++ 0x01,0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xb1,0x00,0x01,0xff,
++ 0x74,0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,
++ 0xad,0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa4,0x00,
++ 0x01,0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0xb0,0x00,
++ 0x01,0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xad,0x00,0x01,0xff,
++ 0x75,0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x83,0xcc,
++ 0x81,0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,
++ 0x84,0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x56,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
++ 0x56,0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x10,0x02,0xcf,0x86,0xd5,
++ 0xe1,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x80,
++ 0x00,0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x81,0x00,0x01,
++ 0xff,0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x88,0x00,0x01,
++ 0xff,0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x87,0x00,0x01,0xff,0x77,
++ 0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0xa3,0x00,0x01,
++ 0xff,0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x58,0xcc,0x87,0x00,0x01,0xff,0x78,
++ 0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x58,0xcc,0x88,0x00,0x01,0xff,0x78,
++ 0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,
++ 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0x82,0x00,0x01,
++ 0xff,0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x5a,0xcc,0xa3,0x00,0x01,0xff,0x7a,
++ 0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0xb1,0x00,0x01,0xff,0x7a,
++ 0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x68,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,0x88,
++ 0x00,0x92,0x1d,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,0x79,
++ 0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x02,0xff,0xc5,0xbf,0xcc,0x87,0x00,0x0a,0x00,
++ 0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa3,0x00,
++ 0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x89,0x00,0x01,0xff,
++ 0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x81,0x00,
++ 0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,
++ 0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x41,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,
++ 0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,
++ 0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
++ 0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x81,0x00,
++ 0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x41,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,
++ 0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,
++ 0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,
++ 0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x86,0x00,
++ 0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x45,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,
++ 0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
++ 0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,
++ 0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,
++ 0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,0x80,
++ 0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,
++ 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,
++ 0xff,0x45,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,
++ 0x0a,0x01,0xff,0x45,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x89,0x00,0x01,0xff,0x69,
++ 0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,
++ 0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,
++ 0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x80,
++ 0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,
++ 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
++ 0xff,0x4f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,
++ 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,
++ 0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,
++ 0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,
++ 0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,
++ 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x83,0x00,0x01,
++ 0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0xa3,
++ 0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,
++ 0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x89,
++ 0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,
++ 0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,
++ 0xff,0x55,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,
++ 0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,
++ 0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,
++ 0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,
++ 0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,
++ 0xff,0x59,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x59,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x59,
++ 0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0x92,0x14,0x91,0x10,0x10,0x08,0x01,
++ 0xff,0x59,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x0a,0x00,0x0a,0x00,0xe1,
++ 0xc0,0x04,0xe0,0x80,0x02,0xcf,0x86,0xe5,0x2d,0x01,0xd4,0xa8,0xd3,0x54,0xd2,0x28,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
++ 0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0x00,
++ 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x93,0x00,0x01,0xff,0xce,
++ 0x91,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x91,
++ 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,
++ 0x82,0x00,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x93,
++ 0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,
++ 0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,
++ 0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,
++ 0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x93,
++ 0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x95,0xcc,0x93,
++ 0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,
++ 0x01,0xff,0xce,0x95,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,
++ 0x81,0x00,0x00,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,
++ 0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,
++ 0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0x97,0xcc,0x93,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,
++ 0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0x00,0x01,
++ 0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,
++ 0xcd,0x82,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x54,0xd2,0x28,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,
++ 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
++ 0xb9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
++ 0xce,0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,
++ 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x93,0x00,0x01,0xff,0xce,
++ 0x99,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0x99,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x99,
++ 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0x99,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcd,
++ 0x82,0x00,0xcf,0x86,0xe5,0x13,0x01,0xd4,0x84,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,
++ 0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,0x00,
++ 0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0x9f,0xcc,0x93,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,
++ 0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x81,0x00,
++ 0x01,0xff,0xce,0x9f,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x54,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,
++ 0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,
++ 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,
++ 0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,
++ 0x85,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xd2,
++ 0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,
++ 0xce,0xa5,0xcc,0x94,0xcd,0x82,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,
++ 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
++ 0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,
++ 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,
++ 0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xa9,0xcc,0x93,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,
++ 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,
++ 0xa9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0x00,0xd3,
++ 0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,
++ 0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,
++ 0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x91,0x12,0x10,0x09,0x01,
++ 0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x00,0x00,0xe0,
++ 0xe1,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
++ 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,
++ 0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
++ 0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xcd,
++ 0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,
++ 0x16,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,
++ 0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0xcd,
++ 0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,
++ 0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,
++ 0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,
++ 0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,
++ 0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x85,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,
++ 0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xcd,0x85,
++ 0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,
++ 0xb7,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,
++ 0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcd,
++ 0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,
++ 0x97,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x80,
++ 0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0xcd,
++ 0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,
++ 0xff,0xce,0x97,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,
++ 0xcd,0x82,0xcd,0x85,0x00,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,
++ 0xff,0xcf,0x89,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x85,
++ 0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
++ 0xcf,0x89,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,
++ 0x89,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,
++ 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,
++ 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
++ 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,
++ 0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,
++ 0xff,0xce,0xa9,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
++ 0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x82,0xcd,
++ 0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,0x49,0xd2,
++ 0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,
++ 0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
++ 0xce,0xb1,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xcd,
++ 0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,
++ 0xb1,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,
++ 0xcc,0x86,0x00,0x01,0xff,0xce,0x91,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x91,
++ 0xcc,0x80,0x00,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,
++ 0xce,0x91,0xcd,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xb9,0x00,0x01,0x00,
++ 0xcf,0x86,0xe5,0x16,0x01,0xd4,0x8f,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x80,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
++ 0xce,0xb7,0xcc,0x81,0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcd,
++ 0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0x95,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x10,
++ 0x09,0x01,0xff,0xce,0x97,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,0xd1,
++ 0x13,0x10,0x09,0x01,0xff,0xce,0x97,0xcd,0x85,0x00,0x01,0xff,0xe1,0xbe,0xbf,0xcc,
++ 0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbe,0xbf,0xcc,0x81,0x00,0x01,0xff,0xe1,0xbe,
++ 0xbf,0xcd,0x82,0x00,0xd3,0x40,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,
++ 0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,
++ 0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x51,0x04,
++ 0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,
++ 0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x86,
++ 0x00,0x01,0xff,0xce,0x99,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x80,
++ 0x00,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x04,0x00,0x00,0x01,0xff,
++ 0xe1,0xbf,0xbe,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbf,0xbe,0xcc,0x81,0x00,
++ 0x01,0xff,0xe1,0xbf,0xbe,0xcd,0x82,0x00,0xd4,0x93,0xd3,0x4e,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,
++ 0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,
++ 0x88,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x93,0x00,0x01,
++ 0xff,0xcf,0x81,0xcc,0x94,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcd,0x82,0x00,0x01,
++ 0xff,0xcf,0x85,0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xce,0xa5,0xcc,0x86,0x00,0x01,0xff,0xce,0xa5,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,
++ 0xce,0xa5,0xcc,0x80,0x00,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0xa1,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,0x10,0x09,
++ 0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x01,0xff,0x60,0x00,0xd3,0x3b,0xd2,0x18,0x51,
++ 0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
++ 0xcf,0x89,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,0xcd,
++ 0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,0xcf,
++ 0x89,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x9f,
++ 0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xa9,
++ 0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0xd1,0x10,0x10,0x09,0x01,0xff,
++ 0xce,0xa9,0xcd,0x85,0x00,0x01,0xff,0xc2,0xb4,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0xe0,0x62,0x0c,0xcf,0x86,0xe5,0x9f,0x08,0xe4,0xf8,0x05,0xe3,0xdb,0x02,0xe2,0xa1,
++ 0x01,0xd1,0xb4,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0x92,0x14,0x91,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x01,0x00,0xcf,0x86,0xd5,
++ 0x48,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x06,0x00,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x06,0x00,0xd3,0x1c,0xd2,
++ 0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0xd1,0x08,0x10,0x04,0x07,
++ 0x00,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,
++ 0x00,0x10,0x04,0x08,0x00,0x06,0x00,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x06,0x00,0x91,
++ 0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x0f,0x00,0x92,0x08,0x11,0x04,0x0f,0x00,0x01,
++ 0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0xd0,0x7e,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x53,0x04,0x01,
++ 0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd3,
++ 0x10,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0c,0x00,0x0c,0x00,0x52,
++ 0x04,0x0c,0x00,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xd4,0x1c,0x53,
++ 0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x02,0x00,0x91,
++ 0x08,0x10,0x04,0x03,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0xd2,0x08,0x11,0x04,0x06,
++ 0x00,0x08,0x00,0x11,0x04,0x08,0x00,0x0b,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0b,
++ 0x00,0x0c,0x00,0x10,0x04,0x0e,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,
++ 0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0x54,0x04,0x00,0x00,0xd3,0x0c,0x92,0x08,0x11,
++ 0x04,0x01,0xe6,0x01,0x01,0x01,0xe6,0xd2,0x0c,0x51,0x04,0x01,0x01,0x10,0x04,0x01,
++ 0x01,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,
++ 0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x04,0x00,0xd1,0x08,0x10,
++ 0x04,0x06,0x00,0x06,0x01,0x10,0x04,0x06,0x01,0x06,0xe6,0x92,0x10,0xd1,0x08,0x10,
++ 0x04,0x06,0xdc,0x06,0xe6,0x10,0x04,0x06,0x01,0x08,0x01,0x09,0xdc,0x93,0x10,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x0a,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x81,0xd0,0x4f,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xa9,0x00,0x01,0x00,0x92,0x12,
++ 0x51,0x04,0x01,0x00,0x10,0x06,0x01,0xff,0x4b,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,
++ 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,
++ 0x10,0x04,0x04,0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x06,0x00,0x06,0x00,
++ 0xcf,0x86,0x95,0x2c,0xd4,0x18,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0xd1,0x08,
++ 0x10,0x04,0x08,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x93,0x10,0x92,0x0c,
++ 0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0xd0,0x68,0xcf,0x86,0xd5,0x48,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,
++ 0x10,0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x11,0x00,0x00,0x00,0x53,0x04,
++ 0x01,0x00,0x92,0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x90,0xcc,
++ 0xb8,0x00,0x01,0xff,0xe2,0x86,0x92,0xcc,0xb8,0x00,0x01,0x00,0x94,0x1a,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,
++ 0x94,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x87,
++ 0x90,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x87,0x94,0xcc,0xb8,0x00,0x01,0xff,
++ 0xe2,0x87,0x92,0xcc,0xb8,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x93,0x08,0x12,0x04,
++ 0x04,0x00,0x06,0x00,0x06,0x00,0xe2,0x38,0x02,0xe1,0x3f,0x01,0xd0,0x68,0xcf,0x86,
++ 0xd5,0x3e,0x94,0x3a,0xd3,0x16,0x52,0x04,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,
++ 0xe2,0x88,0x83,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd2,0x12,0x91,0x0e,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xe2,0x88,0x88,0xcc,0xb8,0x00,0x01,0x00,0x91,0x0e,0x10,0x0a,
++ 0x01,0xff,0xe2,0x88,0x8b,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x24,
++ 0x93,0x20,0x52,0x04,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa3,0xcc,
++ 0xb8,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa5,0xcc,0xb8,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x48,0x94,0x44,0xd3,0x2e,0xd2,0x12,0x91,0x0e,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x88,0xbc,0xcc,0xb8,0x00,0x01,0x00,0xd1,0x0e,
++ 0x10,0x0a,0x01,0xff,0xe2,0x89,0x83,0xcc,0xb8,0x00,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe2,0x89,0x85,0xcc,0xb8,0x00,0x92,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe2,0x89,0x88,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x40,
++ 0xd3,0x1e,0x92,0x1a,0xd1,0x0c,0x10,0x08,0x01,0xff,0x3d,0xcc,0xb8,0x00,0x01,0x00,
++ 0x10,0x0a,0x01,0xff,0xe2,0x89,0xa1,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x52,0x04,
++ 0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x89,0x8d,0xcc,0xb8,0x00,
++ 0x10,0x08,0x01,0xff,0x3c,0xcc,0xb8,0x00,0x01,0xff,0x3e,0xcc,0xb8,0x00,0xd3,0x30,
++ 0xd2,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xa4,0xcc,0xb8,0x00,0x01,0xff,
++ 0xe2,0x89,0xa5,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,
++ 0xb2,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xb3,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,
++ 0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xb6,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,
++ 0xb7,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x50,0x94,0x4c,
++ 0xd3,0x30,0xd2,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xba,0xcc,0xb8,0x00,
++ 0x01,0xff,0xe2,0x89,0xbb,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,
++ 0xe2,0x8a,0x82,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x83,0xcc,0xb8,0x00,0x01,0x00,
++ 0x92,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x86,0xcc,0xb8,0x00,0x01,0xff,
++ 0xe2,0x8a,0x87,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x30,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa2,0xcc,
++ 0xb8,0x00,0x01,0xff,0xe2,0x8a,0xa8,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,
++ 0xa9,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xab,0xcc,0xb8,0x00,0x01,0x00,0xcf,0x86,
++ 0x55,0x04,0x01,0x00,0xd4,0x5c,0xd3,0x2c,0x92,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0xe2,0x89,0xbc,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xbd,0xcc,0xb8,0x00,0x10,0x0a,
++ 0x01,0xff,0xe2,0x8a,0x91,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x92,0xcc,0xb8,0x00,
++ 0x01,0x00,0xd2,0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb2,0xcc,
++ 0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb3,0xcc,0xb8,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,
++ 0xe2,0x8a,0xb4,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb5,0xcc,0xb8,0x00,0x01,0x00,
++ 0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xd1,0x64,
++ 0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x20,0x53,0x04,
++ 0x01,0x00,0x92,0x18,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x80,0x88,0x00,
++ 0x10,0x08,0x01,0xff,0xe3,0x80,0x89,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,
++ 0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,
++ 0x04,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,
++ 0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x06,0x00,0x06,0x00,
++ 0xcf,0x86,0xd5,0x2c,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,
++ 0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0x12,0x04,0x08,0x00,0x09,0x00,0xd4,0x14,
++ 0x53,0x04,0x09,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x0c,0x00,
++ 0x0c,0x00,0xd3,0x08,0x12,0x04,0x0c,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x10,0x00,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x13,0x00,
++ 0xd3,0xa6,0xd2,0x74,0xd1,0x40,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,
++ 0x93,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,
++ 0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,
++ 0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x06,0x00,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,
++ 0x10,0x04,0x06,0x00,0x07,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x1a,0xcf,0x86,
++ 0x95,0x14,0x54,0x04,0x01,0x00,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,
++ 0x06,0x00,0x06,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,
++ 0x13,0x04,0x04,0x00,0x06,0x00,0xd2,0xdc,0xd1,0x48,0xd0,0x26,0xcf,0x86,0x95,0x20,
++ 0x54,0x04,0x01,0x00,0xd3,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x07,0x00,0x06,0x00,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x08,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,
++ 0x04,0x00,0x06,0x00,0x06,0x00,0x52,0x04,0x06,0x00,0x11,0x04,0x06,0x00,0x08,0x00,
++ 0xd0,0x5e,0xcf,0x86,0xd5,0x2c,0xd4,0x10,0x53,0x04,0x06,0x00,0x92,0x08,0x11,0x04,
++ 0x06,0x00,0x07,0x00,0x07,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,
++ 0x08,0x00,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0a,0x00,0x0b,0x00,
++ 0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,
++ 0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
++ 0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,
++ 0xd5,0x1c,0x94,0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,
++ 0x51,0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0b,0x00,0x94,0x14,0x93,0x10,
++ 0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0c,0x00,0x0b,0x00,
++ 0x0b,0x00,0xd1,0xa8,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x0c,0x00,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,
++ 0x94,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x18,0x53,0x04,0x01,0x00,
++ 0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x04,0x0c,0x00,
++ 0x01,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,
++ 0x51,0x04,0x0c,0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x52,0x04,0x01,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x06,0x00,0x93,0x0c,0x52,0x04,
++ 0x06,0x00,0x11,0x04,0x06,0x00,0x01,0x00,0x01,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,
++ 0x54,0x04,0x01,0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x0c,0x00,0x0c,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,
++ 0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0xd2,0x0c,
++ 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0d,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
++ 0x0d,0x00,0x0c,0x00,0x06,0x00,0x94,0x0c,0x53,0x04,0x06,0x00,0x12,0x04,0x06,0x00,
++ 0x0a,0x00,0x06,0x00,0xe4,0x39,0x01,0xd3,0x0c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xcf,
++ 0x06,0x06,0x00,0xd2,0x30,0xd1,0x06,0xcf,0x06,0x06,0x00,0xd0,0x06,0xcf,0x06,0x06,
++ 0x00,0xcf,0x86,0x95,0x1e,0x54,0x04,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,
++ 0x00,0x91,0x0e,0x10,0x0a,0x06,0xff,0xe2,0xab,0x9d,0xcc,0xb8,0x00,0x06,0x00,0x06,
++ 0x00,0x06,0x00,0xd1,0x80,0xd0,0x3a,0xcf,0x86,0xd5,0x28,0xd4,0x10,0x53,0x04,0x07,
++ 0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x08,0x00,0xd3,0x08,0x12,0x04,0x08,
++ 0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,
++ 0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,
++ 0x86,0xd5,0x30,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,
++ 0x04,0x0a,0x00,0x10,0x00,0x10,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,
++ 0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x10,0x00,0x10,
++ 0x00,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,
++ 0x00,0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,
++ 0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x53,
++ 0x04,0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x14,0x00,0x91,0x08,0x10,0x04,0x14,
++ 0x00,0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x15,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x92,
++ 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd4,
++ 0x0c,0x53,0x04,0x14,0x00,0x12,0x04,0x14,0x00,0x11,0x00,0x53,0x04,0x14,0x00,0x52,
++ 0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0xe3,0xb9,0x01,
++ 0xd2,0xac,0xd1,0x68,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x14,0x53,0x04,
++ 0x08,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,
++ 0x08,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,
++ 0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xd4,0x14,0x53,0x04,
++ 0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
++ 0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0a,0x00,0x0a,0x00,0x09,0x00,
++ 0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x0b,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,
++ 0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,
++ 0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,
++ 0x0b,0xe6,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0d,0x00,0x00,0x00,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd1,0x6c,0xd0,0x2a,
++ 0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
++ 0x08,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x0d,0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,
++ 0xd3,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,
++ 0x00,0x00,0x10,0x04,0x00,0x00,0x0c,0x09,0xd0,0x5a,0xcf,0x86,0xd5,0x18,0x54,0x04,
++ 0x08,0x00,0x93,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,
++ 0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
++ 0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
++ 0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
++ 0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,
++ 0x00,0x00,0xcf,0x86,0x95,0x40,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
++ 0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
++ 0x10,0x04,0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
++ 0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
++ 0x08,0x00,0x00,0x00,0x0a,0xe6,0xd2,0x9c,0xd1,0x68,0xd0,0x32,0xcf,0x86,0xd5,0x14,
++ 0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,0x08,0x00,
++ 0x0a,0x00,0x54,0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,
++ 0x0b,0x00,0x0d,0x00,0x0d,0x00,0x12,0x04,0x0d,0x00,0x10,0x00,0xcf,0x86,0x95,0x30,
++ 0x94,0x2c,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x12,0x00,
++ 0x91,0x08,0x10,0x04,0x12,0x00,0x13,0x00,0x13,0x00,0xd2,0x08,0x11,0x04,0x13,0x00,
++ 0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,
++ 0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x00,0x00,
++ 0x00,0x00,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,
++ 0xd5,0x14,0x54,0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,
++ 0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x04,0x00,0x12,0x04,0x04,0x00,
++ 0x00,0x00,0xcf,0x86,0xe5,0x8d,0x05,0xe4,0x86,0x05,0xe3,0x7d,0x04,0xe2,0xe4,0x03,
++ 0xe1,0xc0,0x01,0xd0,0x3e,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,
++ 0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0xda,0x01,0xe4,0x91,0x08,0x10,
++ 0x04,0x01,0xe8,0x01,0xde,0x01,0xe0,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x04,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0xaa,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,
++ 0xff,0xe3,0x81,0x8b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0x8d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,
++ 0xff,0xe3,0x81,0x8f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0x91,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x93,
++ 0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x95,0xe3,0x82,0x99,
++ 0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x97,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x99,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9b,0xe3,0x82,0x99,0x00,0x01,0x00,
++ 0x10,0x0b,0x01,0xff,0xe3,0x81,0x9d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd4,0x53,0xd3,
++ 0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9f,0xe3,0x82,0x99,0x00,
++ 0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xa1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,
++ 0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xa4,0xe3,0x82,0x99,0x00,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xe3,0x81,0xa6,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,0x0f,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xa8,0xe3,0x82,0x99,0x00,0x01,0x00,0x01,0x00,
++ 0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x99,
++ 0x00,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xe3,0x81,0xb2,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb2,
++ 0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x99,
++ 0x00,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xe3,0x81,0xb8,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,
++ 0x81,0xb8,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0xbb,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,0x9a,0x00,0x01,0x00,
++ 0xd0,0xee,0xcf,0x86,0xd5,0x42,0x54,0x04,0x01,0x00,0xd3,0x1b,0x52,0x04,0x01,0x00,
++ 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x86,0xe3,0x82,0x99,0x00,0x06,0x00,0x10,
++ 0x04,0x06,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x08,0x10,
++ 0x04,0x01,0x08,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0x9d,
++ 0xe3,0x82,0x99,0x00,0x06,0x00,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe3,0x82,0xab,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
++ 0x82,0xad,0xe3,0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe3,0x82,0xaf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
++ 0x82,0xb1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,
++ 0xb3,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb5,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb7,0xe3,
++ 0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb9,0xe3,0x82,0x99,0x00,
++ 0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbb,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbd,0xe3,0x82,0x99,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0xd5,0xd4,0x53,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,
++ 0xbf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x81,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x84,0xe3,
++ 0x82,0x99,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x86,0xe3,0x82,0x99,0x00,
++ 0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x88,0xe3,0x82,0x99,
++ 0x00,0x01,0x00,0x01,0x00,0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,
++ 0x83,0x8f,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x8f,0xe3,0x82,0x9a,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
++ 0x83,0x95,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x95,0xe3,0x82,0x9a,0x00,0xd2,
++ 0x1e,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x99,0x00,
++ 0x10,0x0b,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,
++ 0x0b,0x01,0xff,0xe3,0x83,0x9b,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x9b,0xe3,
++ 0x82,0x9a,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x22,0x52,0x04,0x01,0x00,0xd1,
++ 0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xa6,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xe3,0x83,0xaf,0xe3,0x82,0x99,0x00,0xd2,0x25,0xd1,0x16,0x10,
++ 0x0b,0x01,0xff,0xe3,0x83,0xb0,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0xb1,0xe3,
++ 0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xb2,0xe3,0x82,0x99,0x00,0x01,0x00,
++ 0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xbd,0xe3,0x82,0x99,0x00,0x06,
++ 0x00,0xd1,0x4c,0xd0,0x46,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x00,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,
++ 0x18,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,
++ 0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x06,0x01,0x00,0xd0,0x32,0xcf,
++ 0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x54,0x04,0x04,0x00,0x53,0x04,0x04,
++ 0x00,0x92,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xcf,
++ 0x86,0xd5,0x08,0x14,0x04,0x08,0x00,0x0a,0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x0a,
++ 0x00,0x00,0x00,0x00,0x00,0x06,0x00,0xd2,0xa4,0xd1,0x5c,0xd0,0x22,0xcf,0x86,0x95,
++ 0x1c,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x01,0x00,0xcf,0x86,0xd5,
++ 0x20,0xd4,0x0c,0x93,0x08,0x12,0x04,0x01,0x00,0x0b,0x00,0x0b,0x00,0x93,0x10,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x54,
++ 0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x07,0x00,0x10,
++ 0x04,0x08,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,
++ 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,
++ 0x00,0x06,0x00,0xcf,0x86,0xd5,0x10,0x94,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,
++ 0x00,0x07,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x16,0x00,0xd1,0x30,0xd0,0x06,0xcf,
++ 0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0x92,0x0c,0x51,
++ 0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,
++ 0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x11,0x04,0x01,0x00,0x07,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0xcf,0x06,0x04,
++ 0x00,0xcf,0x06,0x04,0x00,0xd1,0x48,0xd0,0x40,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x04,
++ 0x00,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x2c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xd1,
++ 0x06,0xcf,0x06,0x04,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,
++ 0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xcf,
++ 0x06,0x07,0x00,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,
++ 0x06,0x01,0x00,0xe2,0x71,0x05,0xd1,0x8c,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,
++ 0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xd4,0x06,0xcf,0x06,0x01,0x00,0xd3,0x06,
++ 0xcf,0x06,0x01,0x00,0xd2,0x06,0xcf,0x06,0x01,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,
++ 0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x01,0x00,
++ 0x11,0x04,0x01,0x00,0x08,0x00,0x08,0x00,0x53,0x04,0x08,0x00,0x12,0x04,0x08,0x00,
++ 0x0a,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,
++ 0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x11,0x00,0x11,0x00,0x93,0x0c,
++ 0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x13,0x00,0x13,0x00,0x94,0x14,0x53,0x04,
++ 0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,
++ 0x00,0x00,0xe0,0xdb,0x04,0xcf,0x86,0xe5,0xdf,0x01,0xd4,0x06,0xcf,0x06,0x04,0x00,
++ 0xd3,0x74,0xd2,0x6e,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,
++ 0x94,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,
++ 0x00,0x00,0x00,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x04,0x00,
++ 0x06,0x00,0x04,0x00,0x04,0x00,0x93,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,0x95,0x24,0x94,0x20,0x93,0x1c,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x04,0x00,0xd1,0x08,0x10,0x04,
++ 0x04,0x00,0x06,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0x0b,0x00,
++ 0xcf,0x06,0x0a,0x00,0xd2,0x84,0xd1,0x4c,0xd0,0x16,0xcf,0x86,0x55,0x04,0x0a,0x00,
++ 0x94,0x0c,0x53,0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
++ 0x55,0x04,0x0a,0x00,0xd4,0x1c,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x0a,0x00,
++ 0x0a,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xe6,
++ 0xd3,0x08,0x12,0x04,0x0a,0x00,0x0d,0xe6,0x52,0x04,0x0d,0xe6,0x11,0x04,0x0a,0xe6,
++ 0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,
++ 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,0xe6,0x0d,0xe6,0x0b,0x00,
++ 0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,
++ 0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x24,
++ 0x54,0x04,0x08,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
++ 0x08,0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,
++ 0x0a,0x00,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
++ 0x0a,0x00,0x0a,0x00,0xcf,0x06,0x0a,0x00,0xd0,0x5e,0xcf,0x86,0xd5,0x28,0xd4,0x18,
++ 0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0xd1,0x08,0x10,0x04,0x0a,0x00,0x0c,0x00,
++ 0x10,0x04,0x0c,0x00,0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x0d,0x00,
++ 0x10,0x00,0x10,0x00,0xd4,0x1c,0x53,0x04,0x0c,0x00,0xd2,0x0c,0x51,0x04,0x0c,0x00,
++ 0x10,0x04,0x0d,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x12,0x00,0x14,0x00,
++ 0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0x92,0x08,0x11,0x04,
++ 0x14,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x1c,0x94,0x18,0x93,0x14,0xd2,0x08,
++ 0x11,0x04,0x00,0x00,0x15,0x00,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,
++ 0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x92,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,
++ 0x0c,0x00,0x0a,0x00,0x0a,0x00,0xe4,0xf2,0x02,0xe3,0x65,0x01,0xd2,0x98,0xd1,0x48,
++ 0xd0,0x36,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
++ 0x08,0x00,0x10,0x04,0x08,0x09,0x08,0x00,0x08,0x00,0x08,0x00,0xd4,0x0c,0x53,0x04,
++ 0x08,0x00,0x12,0x04,0x08,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,0x11,0x04,
++ 0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0x54,0x04,0x09,0x00,
++ 0x13,0x04,0x09,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x0a,0x00,0xcf,0x86,0xd5,0x2c,
++ 0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x09,0x12,0x00,
++ 0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,
++ 0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x0b,0xe6,0xd3,0x0c,
++ 0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x11,0x04,
++ 0x11,0x00,0x14,0x00,0xd1,0x60,0xd0,0x22,0xcf,0x86,0x55,0x04,0x0a,0x00,0x94,0x18,
++ 0x53,0x04,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xdc,
++ 0x11,0x04,0x0a,0xdc,0x0a,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x0a,0x00,
++ 0xd3,0x10,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0x09,0x00,0x00,
++ 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x54,0x04,
++ 0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,
++ 0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,
++ 0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0x07,0x0b,0x00,
++ 0x0b,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x20,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,
++ 0x10,0x04,0x00,0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x08,0x11,0x04,0x0b,0x00,
++ 0x00,0x00,0x11,0x04,0x00,0x00,0x0b,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
++ 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd2,0xd0,
++ 0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,0x0a,0x00,0x93,0x10,
++ 0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,
++ 0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,
++ 0x0a,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,
++ 0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x10,0x00,
++ 0xd0,0x3a,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0xd3,0x1c,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xdc,
++ 0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0xe6,
++ 0x0b,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,0xcf,0x86,0xd5,0x2c,0xd4,0x18,
++ 0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x10,0x04,0x0b,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,0x0d,0x00,0x93,0x10,0x52,0x04,
++ 0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x00,0x00,0x00,0x00,0xd1,0x8c,
++ 0xd0,0x72,0xcf,0x86,0xd5,0x4c,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,
++ 0x10,0x04,0x0c,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
++ 0x94,0x20,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,
++ 0x00,0x00,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,
++ 0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x11,0x00,
++ 0x11,0x04,0x10,0x00,0x15,0x00,0x00,0x00,0x11,0x00,0xd0,0x06,0xcf,0x06,0x11,0x00,
++ 0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
++ 0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,
++ 0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x02,0xff,0xff,0xcf,0x86,0xcf,
++ 0x06,0x02,0xff,0xff,0xd1,0x76,0xd0,0x09,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xcf,
++ 0x86,0x85,0xd4,0x07,0xcf,0x06,0x02,0xff,0xff,0xd3,0x07,0xcf,0x06,0x02,0xff,0xff,
++ 0xd2,0x07,0xcf,0x06,0x02,0xff,0xff,0xd1,0x07,0xcf,0x06,0x02,0xff,0xff,0xd0,0x18,
++ 0xcf,0x86,0x55,0x05,0x02,0xff,0xff,0x94,0x0d,0x93,0x09,0x12,0x05,0x02,0xff,0xff,
++ 0x00,0x00,0x00,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,
++ 0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,
++ 0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,0x0b,0x00,
++ 0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,
++ 0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x9c,0x10,0xe3,0x16,0x08,
++ 0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x08,0x04,0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,
++ 0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,
++ 0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0x10,0x08,0x01,0xff,0xe8,0xbb,0x8a,0x00,0x01,
++ 0xff,0xe8,0xb3,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xbb,0x91,0x00,0x01,
++ 0xff,0xe4,0xb8,0xb2,0x00,0x10,0x08,0x01,0xff,0xe5,0x8f,0xa5,0x00,0x01,0xff,0xe9,
++ 0xbe,0x9c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xbe,0x9c,0x00,0x01,
++ 0xff,0xe5,0xa5,0x91,0x00,0x10,0x08,0x01,0xff,0xe9,0x87,0x91,0x00,0x01,0xff,0xe5,
++ 0x96,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa5,0x88,0x00,0x01,0xff,0xe6,
++ 0x87,0xb6,0x00,0x10,0x08,0x01,0xff,0xe7,0x99,0xa9,0x00,0x01,0xff,0xe7,0xbe,0x85,
++ 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x98,0xbf,0x00,0x01,
++ 0xff,0xe8,0x9e,0xba,0x00,0x10,0x08,0x01,0xff,0xe8,0xa3,0xb8,0x00,0x01,0xff,0xe9,
++ 0x82,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x82,0x00,0x01,0xff,0xe6,
++ 0xb4,0x9b,0x00,0x10,0x08,0x01,0xff,0xe7,0x83,0x99,0x00,0x01,0xff,0xe7,0x8f,0x9e,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x90,0xbd,0x00,0x01,0xff,0xe9,
++ 0x85,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0xa7,0xb1,0x00,0x01,0xff,0xe4,0xba,0x82,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x8d,0xb5,0x00,0x01,0xff,0xe6,0xac,0x84,
++ 0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x9b,0x00,0x01,0xff,0xe8,0x98,0xad,0x00,0xd4,
++ 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xb8,0x9e,0x00,0x01,
++ 0xff,0xe5,0xb5,0x90,0x00,0x10,0x08,0x01,0xff,0xe6,0xbf,0xab,0x00,0x01,0xff,0xe8,
++ 0x97,0x8d,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa5,0xa4,0x00,0x01,0xff,0xe6,
++ 0x8b,0x89,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0x98,0x00,0x01,0xff,0xe8,0xa0,0x9f,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xbb,0x8a,0x00,0x01,0xff,0xe6,
++ 0x9c,0x97,0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0xaa,0x00,0x01,0xff,0xe7,0x8b,0xbc,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x83,0x8e,0x00,0x01,0xff,0xe4,0xbe,0x86,
++ 0x00,0x10,0x08,0x01,0xff,0xe5,0x86,0xb7,0x00,0x01,0xff,0xe5,0x8b,0x9e,0x00,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x93,0x84,0x00,0x01,0xff,0xe6,
++ 0xab,0x93,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x90,0x00,0x01,0xff,0xe7,0x9b,0xa7,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x80,0x81,0x00,0x01,0xff,0xe8,0x98,0x86,
++ 0x00,0x10,0x08,0x01,0xff,0xe8,0x99,0x9c,0x00,0x01,0xff,0xe8,0xb7,0xaf,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9c,0xb2,0x00,0x01,0xff,0xe9,0xad,0xaf,
++ 0x00,0x10,0x08,0x01,0xff,0xe9,0xb7,0xba,0x00,0x01,0xff,0xe7,0xa2,0x8c,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe7,0xa5,0xbf,0x00,0x01,0xff,0xe7,0xb6,0xa0,0x00,0x10,
++ 0x08,0x01,0xff,0xe8,0x8f,0x89,0x00,0x01,0xff,0xe9,0x8c,0x84,0x00,0xcf,0x86,0xe5,
++ 0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xb9,
++ 0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0x10,0x08,0x01,0xff,0xe5,0xa3,0x9f,0x00,
++ 0x01,0xff,0xe5,0xbc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xb1,0xa0,0x00,
++ 0x01,0xff,0xe8,0x81,0xbe,0x00,0x10,0x08,0x01,0xff,0xe7,0x89,0xa2,0x00,0x01,0xff,
++ 0xe7,0xa3,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xb3,0x82,0x00,
++ 0x01,0xff,0xe9,0x9b,0xb7,0x00,0x10,0x08,0x01,0xff,0xe5,0xa3,0x98,0x00,0x01,0xff,
++ 0xe5,0xb1,0xa2,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x93,0x00,0x01,0xff,
++ 0xe6,0xb7,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,0x8f,0x00,0x01,0xff,0xe7,0xb4,
++ 0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
++ 0x01,0xff,0xe9,0x99,0x8b,0x00,0x10,0x08,0x01,0xff,0xe5,0x8b,0x92,0x00,0x01,0xff,
++ 0xe8,0x82,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x87,0x9c,0x00,0x01,0xff,
++ 0xe5,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe7,0xa8,0x9c,0x00,0x01,0xff,0xe7,0xb6,
++ 0xbe,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8f,0xb1,0x00,0x01,0xff,
++ 0xe9,0x99,0xb5,0x00,0x10,0x08,0x01,0xff,0xe8,0xae,0x80,0x00,0x01,0xff,0xe6,0x8b,
++ 0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x82,0x00,0x01,0xff,0xe8,0xab,
++ 0xbe,0x00,0x10,0x08,0x01,0xff,0xe4,0xb8,0xb9,0x00,0x01,0xff,0xe5,0xaf,0xa7,0x00,
++ 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x80,0x92,0x00,
++ 0x01,0xff,0xe7,0x8e,0x87,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xb0,0x00,0x01,0xff,
++ 0xe5,0x8c,0x97,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa3,0xbb,0x00,0x01,0xff,
++ 0xe4,0xbe,0xbf,0x00,0x10,0x08,0x01,0xff,0xe5,0xbe,0xa9,0x00,0x01,0xff,0xe4,0xb8,
++ 0x8d,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xb3,0x8c,0x00,0x01,0xff,
++ 0xe6,0x95,0xb8,0x00,0x10,0x08,0x01,0xff,0xe7,0xb4,0xa2,0x00,0x01,0xff,0xe5,0x8f,
++ 0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa1,0x9e,0x00,0x01,0xff,0xe7,0x9c,
++ 0x81,0x00,0x10,0x08,0x01,0xff,0xe8,0x91,0x89,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xae,0xba,0x00,0x01,0xff,
++ 0xe8,0xbe,0xb0,0x00,0x10,0x08,0x01,0xff,0xe6,0xb2,0x88,0x00,0x01,0xff,0xe6,0x8b,
++ 0xbe,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8b,0xa5,0x00,0x01,0xff,0xe6,0x8e,
++ 0xa0,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xa5,0x00,0x01,0xff,0xe4,0xba,0xae,0x00,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,0xa9,0x00,0x01,0xff,0xe5,0x87,
++ 0x89,0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0x81,0x00,0x01,0xff,0xe7,0xb3,0xa7,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x89,0xaf,0x00,0x01,0xff,0xe8,0xab,0x92,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0x87,0x8f,0x00,0x01,0xff,0xe5,0x8b,0xb5,0x00,0xe0,0x04,
++ 0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe5,0x91,0x82,0x00,0x01,0xff,0xe5,0xa5,0xb3,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0xbb,0xac,0x00,0x01,0xff,0xe6,0x97,0x85,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xbf,0xbe,0x00,0x01,0xff,0xe7,0xa4,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0x96,
++ 0xad,0x00,0x01,0xff,0xe9,0xa9,0xaa,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe9,0xba,0x97,0x00,0x01,0xff,0xe9,0xbb,0x8e,0x00,0x10,0x08,0x01,0xff,0xe5,0x8a,
++ 0x9b,0x00,0x01,0xff,0xe6,0x9b,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xad,
++ 0xb7,0x00,0x01,0xff,0xe8,0xbd,0xa2,0x00,0x10,0x08,0x01,0xff,0xe5,0xb9,0xb4,0x00,
++ 0x01,0xff,0xe6,0x86,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0x88,0x80,0x00,0x01,0xff,0xe6,0x92,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,
++ 0xa3,0x00,0x01,0xff,0xe7,0x85,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0x92,
++ 0x89,0x00,0x01,0xff,0xe7,0xa7,0x8a,0x00,0x10,0x08,0x01,0xff,0xe7,0xb7,0xb4,0x00,
++ 0x01,0xff,0xe8,0x81,0xaf,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xbc,
++ 0xa6,0x00,0x01,0xff,0xe8,0x93,0xae,0x00,0x10,0x08,0x01,0xff,0xe9,0x80,0xa3,0x00,
++ 0x01,0xff,0xe9,0x8d,0x8a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x88,0x97,0x00,
++ 0x01,0xff,0xe5,0x8a,0xa3,0x00,0x10,0x08,0x01,0xff,0xe5,0x92,0xbd,0x00,0x01,0xff,
++ 0xe7,0x83,0x88,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe8,0xa3,0x82,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,0x10,0x08,0x01,0xff,0xe5,0xbb,
++ 0x89,0x00,0x01,0xff,0xe5,0xbf,0xb5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x8d,
++ 0xbb,0x00,0x01,0xff,0xe6,0xae,0xae,0x00,0x10,0x08,0x01,0xff,0xe7,0xb0,0xbe,0x00,
++ 0x01,0xff,0xe7,0x8d,0xb5,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe4,0xbb,
++ 0xa4,0x00,0x01,0xff,0xe5,0x9b,0xb9,0x00,0x10,0x08,0x01,0xff,0xe5,0xaf,0xa7,0x00,
++ 0x01,0xff,0xe5,0xb6,0xba,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x80,0x9c,0x00,
++ 0x01,0xff,0xe7,0x8e,0xb2,0x00,0x10,0x08,0x01,0xff,0xe7,0x91,0xa9,0x00,0x01,0xff,
++ 0xe7,0xbe,0x9a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x81,
++ 0x86,0x00,0x01,0xff,0xe9,0x88,0xb4,0x00,0x10,0x08,0x01,0xff,0xe9,0x9b,0xb6,0x00,
++ 0x01,0xff,0xe9,0x9d,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa0,0x98,0x00,
++ 0x01,0xff,0xe4,0xbe,0x8b,0x00,0x10,0x08,0x01,0xff,0xe7,0xa6,0xae,0x00,0x01,0xff,
++ 0xe9,0x86,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9a,0xb8,0x00,
++ 0x01,0xff,0xe6,0x83,0xa1,0x00,0x10,0x08,0x01,0xff,0xe4,0xba,0x86,0x00,0x01,0xff,
++ 0xe5,0x83,0x9a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xaf,0xae,0x00,0x01,0xff,
++ 0xe5,0xb0,0xbf,0x00,0x10,0x08,0x01,0xff,0xe6,0x96,0x99,0x00,0x01,0xff,0xe6,0xa8,
++ 0x82,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe7,0x87,0x8e,0x00,0x01,0xff,0xe7,0x99,0x82,0x00,0x10,0x08,0x01,
++ 0xff,0xe8,0x93,0xbc,0x00,0x01,0xff,0xe9,0x81,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe9,0xbe,0x8d,0x00,0x01,0xff,0xe6,0x9a,0x88,0x00,0x10,0x08,0x01,0xff,0xe9,
++ 0x98,0xae,0x00,0x01,0xff,0xe5,0x8a,0x89,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe6,0x9d,0xbb,0x00,0x01,0xff,0xe6,0x9f,0xb3,0x00,0x10,0x08,0x01,0xff,0xe6,
++ 0xb5,0x81,0x00,0x01,0xff,0xe6,0xba,0x9c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,
++ 0x90,0x89,0x00,0x01,0xff,0xe7,0x95,0x99,0x00,0x10,0x08,0x01,0xff,0xe7,0xa1,0xab,
++ 0x00,0x01,0xff,0xe7,0xb4,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe9,0xa1,0x9e,0x00,0x01,0xff,0xe5,0x85,0xad,0x00,0x10,0x08,0x01,0xff,0xe6,
++ 0x88,0xae,0x00,0x01,0xff,0xe9,0x99,0xb8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
++ 0x80,0xab,0x00,0x01,0xff,0xe5,0xb4,0x99,0x00,0x10,0x08,0x01,0xff,0xe6,0xb7,0xaa,
++ 0x00,0x01,0xff,0xe8,0xbc,0xaa,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
++ 0xbe,0x8b,0x00,0x01,0xff,0xe6,0x85,0x84,0x00,0x10,0x08,0x01,0xff,0xe6,0xa0,0x97,
++ 0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9a,0x86,
++ 0x00,0x01,0xff,0xe5,0x88,0xa9,0x00,0x10,0x08,0x01,0xff,0xe5,0x90,0x8f,0x00,0x01,
++ 0xff,0xe5,0xb1,0xa5,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe6,0x98,0x93,0x00,0x01,0xff,0xe6,0x9d,0x8e,0x00,0x10,0x08,0x01,0xff,0xe6,
++ 0xa2,0xa8,0x00,0x01,0xff,0xe6,0xb3,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,
++ 0x90,0x86,0x00,0x01,0xff,0xe7,0x97,0xa2,0x00,0x10,0x08,0x01,0xff,0xe7,0xbd,0xb9,
++ 0x00,0x01,0xff,0xe8,0xa3,0x8f,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
++ 0xa3,0xa1,0x00,0x01,0xff,0xe9,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe9,0x9b,0xa2,
++ 0x00,0x01,0xff,0xe5,0x8c,0xbf,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xba,0xba,
++ 0x00,0x01,0xff,0xe5,0x90,0x9d,0x00,0x10,0x08,0x01,0xff,0xe7,0x87,0x90,0x00,0x01,
++ 0xff,0xe7,0x92,0x98,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
++ 0x97,0xba,0x00,0x01,0xff,0xe9,0x9a,0xa3,0x00,0x10,0x08,0x01,0xff,0xe9,0xb1,0x97,
++ 0x00,0x01,0xff,0xe9,0xba,0x9f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x9e,0x97,
++ 0x00,0x01,0xff,0xe6,0xb7,0x8b,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0xa8,0x00,0x01,
++ 0xff,0xe7,0xab,0x8b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xac,0xa0,
++ 0x00,0x01,0xff,0xe7,0xb2,0x92,0x00,0x10,0x08,0x01,0xff,0xe7,0x8b,0x80,0x00,0x01,
++ 0xff,0xe7,0x82,0x99,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xad,0x98,0x00,0x01,
++ 0xff,0xe4,0xbb,0x80,0x00,0x10,0x08,0x01,0xff,0xe8,0x8c,0xb6,0x00,0x01,0xff,0xe5,
++ 0x88,0xba,0x00,0xe2,0xad,0x06,0xe1,0xc4,0x03,0xe0,0xcb,0x01,0xcf,0x86,0xd5,0xe4,
++ 0xd4,0x74,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,
++ 0x01,0xff,0xe5,0xba,0xa6,0x00,0x10,0x08,0x01,0xff,0xe6,0x8b,0x93,0x00,0x01,0xff,
++ 0xe7,0xb3,0x96,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xae,0x85,0x00,0x01,0xff,
++ 0xe6,0xb4,0x9e,0x00,0x10,0x08,0x01,0xff,0xe6,0x9a,0xb4,0x00,0x01,0xff,0xe8,0xbc,
++ 0xbb,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa1,0x8c,0x00,0x01,0xff,
++ 0xe9,0x99,0x8d,0x00,0x10,0x08,0x01,0xff,0xe8,0xa6,0x8b,0x00,0x01,0xff,0xe5,0xbb,
++ 0x93,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,0x80,0x00,0x01,0xff,0xe5,0x97,
++ 0x80,0x00,0x01,0x00,0xd3,0x34,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x01,0xff,0xe5,0xa1,
++ 0x9a,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe6,0x99,0xb4,0x00,0x01,0x00,0xd1,0x0c,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe5,0x87,0x9e,0x00,0x10,0x08,0x01,0xff,0xe7,0x8c,
++ 0xaa,0x00,0x01,0xff,0xe7,0x9b,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe7,0xa4,0xbc,0x00,0x01,0xff,0xe7,0xa5,0x9e,0x00,0x10,0x08,0x01,0xff,0xe7,0xa5,
++ 0xa5,0x00,0x01,0xff,0xe7,0xa6,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9d,
++ 0x96,0x00,0x01,0xff,0xe7,0xb2,0xbe,0x00,0x10,0x08,0x01,0xff,0xe7,0xbe,0xbd,0x00,
++ 0x01,0x00,0xd4,0x64,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x01,0xff,0xe8,0x98,
++ 0x92,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe8,0xab,0xb8,0x00,0x01,0x00,0xd1,0x0c,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe9,0x80,0xb8,0x00,0x10,0x08,0x01,0xff,0xe9,0x83,
++ 0xbd,0x00,0x01,0x00,0xd2,0x14,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe9,0xa3,
++ 0xaf,0x00,0x01,0xff,0xe9,0xa3,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa4,
++ 0xa8,0x00,0x01,0xff,0xe9,0xb6,0xb4,0x00,0x10,0x08,0x0d,0xff,0xe9,0x83,0x9e,0x00,
++ 0x0d,0xff,0xe9,0x9a,0xb7,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,
++ 0xe4,0xbe,0xae,0x00,0x06,0xff,0xe5,0x83,0xa7,0x00,0x10,0x08,0x06,0xff,0xe5,0x85,
++ 0x8d,0x00,0x06,0xff,0xe5,0x8b,0x89,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0x8b,
++ 0xa4,0x00,0x06,0xff,0xe5,0x8d,0x91,0x00,0x10,0x08,0x06,0xff,0xe5,0x96,0x9d,0x00,
++ 0x06,0xff,0xe5,0x98,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0x99,
++ 0xa8,0x00,0x06,0xff,0xe5,0xa1,0x80,0x00,0x10,0x08,0x06,0xff,0xe5,0xa2,0xa8,0x00,
++ 0x06,0xff,0xe5,0xb1,0xa4,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0xb1,0xae,0x00,
++ 0x06,0xff,0xe6,0x82,0x94,0x00,0x10,0x08,0x06,0xff,0xe6,0x85,0xa8,0x00,0x06,0xff,
++ 0xe6,0x86,0x8e,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x06,0xff,0xe6,0x87,0xb2,0x00,0x06,0xff,0xe6,0x95,0x8f,0x00,0x10,
++ 0x08,0x06,0xff,0xe6,0x97,0xa2,0x00,0x06,0xff,0xe6,0x9a,0x91,0x00,0xd1,0x10,0x10,
++ 0x08,0x06,0xff,0xe6,0xa2,0x85,0x00,0x06,0xff,0xe6,0xb5,0xb7,0x00,0x10,0x08,0x06,
++ 0xff,0xe6,0xb8,0x9a,0x00,0x06,0xff,0xe6,0xbc,0xa2,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x06,0xff,0xe7,0x85,0xae,0x00,0x06,0xff,0xe7,0x88,0xab,0x00,0x10,0x08,0x06,
++ 0xff,0xe7,0x90,0xa2,0x00,0x06,0xff,0xe7,0xa2,0x91,0x00,0xd1,0x10,0x10,0x08,0x06,
++ 0xff,0xe7,0xa4,0xbe,0x00,0x06,0xff,0xe7,0xa5,0x89,0x00,0x10,0x08,0x06,0xff,0xe7,
++ 0xa5,0x88,0x00,0x06,0xff,0xe7,0xa5,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x06,0xff,0xe7,0xa5,0x96,0x00,0x06,0xff,0xe7,0xa5,0x9d,0x00,0x10,0x08,0x06,
++ 0xff,0xe7,0xa6,0x8d,0x00,0x06,0xff,0xe7,0xa6,0x8e,0x00,0xd1,0x10,0x10,0x08,0x06,
++ 0xff,0xe7,0xa9,0x80,0x00,0x06,0xff,0xe7,0xaa,0x81,0x00,0x10,0x08,0x06,0xff,0xe7,
++ 0xaf,0x80,0x00,0x06,0xff,0xe7,0xb7,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,
++ 0xff,0xe7,0xb8,0x89,0x00,0x06,0xff,0xe7,0xb9,0x81,0x00,0x10,0x08,0x06,0xff,0xe7,
++ 0xbd,0xb2,0x00,0x06,0xff,0xe8,0x80,0x85,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,
++ 0x87,0xad,0x00,0x06,0xff,0xe8,0x89,0xb9,0x00,0x10,0x08,0x06,0xff,0xe8,0x89,0xb9,
++ 0x00,0x06,0xff,0xe8,0x91,0x97,0x00,0xd4,0x75,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x06,0xff,0xe8,0xa4,0x90,0x00,0x06,0xff,0xe8,0xa6,0x96,0x00,0x10,0x08,0x06,
++ 0xff,0xe8,0xac,0x81,0x00,0x06,0xff,0xe8,0xac,0xb9,0x00,0xd1,0x10,0x10,0x08,0x06,
++ 0xff,0xe8,0xb3,0x93,0x00,0x06,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,0x06,0xff,0xe8,
++ 0xbe,0xb6,0x00,0x06,0xff,0xe9,0x80,0xb8,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,
++ 0xff,0xe9,0x9b,0xa3,0x00,0x06,0xff,0xe9,0x9f,0xbf,0x00,0x10,0x08,0x06,0xff,0xe9,
++ 0xa0,0xbb,0x00,0x0b,0xff,0xe6,0x81,0xb5,0x00,0x91,0x11,0x10,0x09,0x0b,0xff,0xf0,
++ 0xa4,0x8b,0xae,0x00,0x0b,0xff,0xe8,0x88,0x98,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x08,0xff,0xe4,0xb8,0xa6,0x00,0x08,0xff,0xe5,0x86,0xb5,0x00,
++ 0x10,0x08,0x08,0xff,0xe5,0x85,0xa8,0x00,0x08,0xff,0xe4,0xbe,0x80,0x00,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe5,0x85,0x85,0x00,0x08,0xff,0xe5,0x86,0x80,0x00,0x10,0x08,
++ 0x08,0xff,0xe5,0x8b,0x87,0x00,0x08,0xff,0xe5,0x8b,0xba,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe5,0x96,0x9d,0x00,0x08,0xff,0xe5,0x95,0x95,0x00,0x10,0x08,
++ 0x08,0xff,0xe5,0x96,0x99,0x00,0x08,0xff,0xe5,0x97,0xa2,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe5,0xa1,0x9a,0x00,0x08,0xff,0xe5,0xa2,0xb3,0x00,0x10,0x08,0x08,0xff,
++ 0xe5,0xa5,0x84,0x00,0x08,0xff,0xe5,0xa5,0x94,0x00,0xe0,0x04,0x02,0xcf,0x86,0xe5,
++ 0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xa9,
++ 0xa2,0x00,0x08,0xff,0xe5,0xac,0xa8,0x00,0x10,0x08,0x08,0xff,0xe5,0xbb,0x92,0x00,
++ 0x08,0xff,0xe5,0xbb,0x99,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xbd,0xa9,0x00,
++ 0x08,0xff,0xe5,0xbe,0xad,0x00,0x10,0x08,0x08,0xff,0xe6,0x83,0x98,0x00,0x08,0xff,
++ 0xe6,0x85,0x8e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x84,0x88,0x00,
++ 0x08,0xff,0xe6,0x86,0x8e,0x00,0x10,0x08,0x08,0xff,0xe6,0x85,0xa0,0x00,0x08,0xff,
++ 0xe6,0x87,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x88,0xb4,0x00,0x08,0xff,
++ 0xe6,0x8f,0x84,0x00,0x10,0x08,0x08,0xff,0xe6,0x90,0x9c,0x00,0x08,0xff,0xe6,0x91,
++ 0x92,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x95,0x96,0x00,
++ 0x08,0xff,0xe6,0x99,0xb4,0x00,0x10,0x08,0x08,0xff,0xe6,0x9c,0x97,0x00,0x08,0xff,
++ 0xe6,0x9c,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x9d,0x96,0x00,0x08,0xff,
++ 0xe6,0xad,0xb9,0x00,0x10,0x08,0x08,0xff,0xe6,0xae,0xba,0x00,0x08,0xff,0xe6,0xb5,
++ 0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0xbb,0x9b,0x00,0x08,0xff,
++ 0xe6,0xbb,0x8b,0x00,0x10,0x08,0x08,0xff,0xe6,0xbc,0xa2,0x00,0x08,0xff,0xe7,0x80,
++ 0x9e,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x85,0xae,0x00,0x08,0xff,0xe7,0x9e,
++ 0xa7,0x00,0x10,0x08,0x08,0xff,0xe7,0x88,0xb5,0x00,0x08,0xff,0xe7,0x8a,0xaf,0x00,
++ 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x8c,0xaa,0x00,
++ 0x08,0xff,0xe7,0x91,0xb1,0x00,0x10,0x08,0x08,0xff,0xe7,0x94,0x86,0x00,0x08,0xff,
++ 0xe7,0x94,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x98,0x9d,0x00,0x08,0xff,
++ 0xe7,0x98,0x9f,0x00,0x10,0x08,0x08,0xff,0xe7,0x9b,0x8a,0x00,0x08,0xff,0xe7,0x9b,
++ 0x9b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x9b,0xb4,0x00,0x08,0xff,
++ 0xe7,0x9d,0x8a,0x00,0x10,0x08,0x08,0xff,0xe7,0x9d,0x80,0x00,0x08,0xff,0xe7,0xa3,
++ 0x8c,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xaa,0xb1,0x00,0x08,0xff,0xe7,0xaf,
++ 0x80,0x00,0x10,0x08,0x08,0xff,0xe7,0xb1,0xbb,0x00,0x08,0xff,0xe7,0xb5,0x9b,0x00,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xb7,0xb4,0x00,0x08,0xff,
++ 0xe7,0xbc,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0x80,0x85,0x00,0x08,0xff,0xe8,0x8d,
++ 0x92,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0x8f,0xaf,0x00,0x08,0xff,0xe8,0x9d,
++ 0xb9,0x00,0x10,0x08,0x08,0xff,0xe8,0xa5,0x81,0x00,0x08,0xff,0xe8,0xa6,0x86,0x00,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xa6,0x96,0x00,0x08,0xff,0xe8,0xaa,
++ 0xbf,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xb8,0x00,0x08,0xff,0xe8,0xab,0x8b,0x00,
++ 0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xac,0x81,0x00,0x08,0xff,0xe8,0xab,0xbe,0x00,
++ 0x10,0x08,0x08,0xff,0xe8,0xab,0xad,0x00,0x08,0xff,0xe8,0xac,0xb9,0x00,0xcf,0x86,
++ 0x95,0xde,0xd4,0x81,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xae,
++ 0x8a,0x00,0x08,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,0x08,0xff,0xe8,0xbc,0xb8,0x00,
++ 0x08,0xff,0xe9,0x81,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0x86,0x99,0x00,
++ 0x08,0xff,0xe9,0x89,0xb6,0x00,0x10,0x08,0x08,0xff,0xe9,0x99,0xbc,0x00,0x08,0xff,
++ 0xe9,0x9b,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0x9d,0x96,0x00,
++ 0x08,0xff,0xe9,0x9f,0x9b,0x00,0x10,0x08,0x08,0xff,0xe9,0x9f,0xbf,0x00,0x08,0xff,
++ 0xe9,0xa0,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0xa0,0xbb,0x00,0x08,0xff,
++ 0xe9,0xac,0x92,0x00,0x10,0x08,0x08,0xff,0xe9,0xbe,0x9c,0x00,0x08,0xff,0xf0,0xa2,
++ 0xa1,0x8a,0x00,0xd3,0x45,0xd2,0x22,0xd1,0x12,0x10,0x09,0x08,0xff,0xf0,0xa2,0xa1,
++ 0x84,0x00,0x08,0xff,0xf0,0xa3,0x8f,0x95,0x00,0x10,0x08,0x08,0xff,0xe3,0xae,0x9d,
++ 0x00,0x08,0xff,0xe4,0x80,0x98,0x00,0xd1,0x11,0x10,0x08,0x08,0xff,0xe4,0x80,0xb9,
++ 0x00,0x08,0xff,0xf0,0xa5,0x89,0x89,0x00,0x10,0x09,0x08,0xff,0xf0,0xa5,0xb3,0x90,
++ 0x00,0x08,0xff,0xf0,0xa7,0xbb,0x93,0x00,0x92,0x14,0x91,0x10,0x10,0x08,0x08,0xff,
++ 0xe9,0xbd,0x83,0x00,0x08,0xff,0xe9,0xbe,0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xe1,0x94,0x01,0xe0,0x08,0x01,0xcf,0x86,0xd5,0x42,0xd4,0x14,0x93,0x10,0x52,0x04,
++ 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd3,0x10,
++ 0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,
++ 0x00,0x00,0xd1,0x0d,0x10,0x04,0x00,0x00,0x04,0xff,0xd7,0x99,0xd6,0xb4,0x00,0x10,
++ 0x04,0x01,0x1a,0x01,0xff,0xd7,0xb2,0xd6,0xb7,0x00,0xd4,0x42,0x53,0x04,0x01,0x00,
++ 0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd7,0xa9,0xd7,0x81,0x00,0x01,
++ 0xff,0xd7,0xa9,0xd7,0x82,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xd7,0xa9,0xd6,0xbc,
++ 0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,0x82,0x00,0x10,0x09,0x01,0xff,
++ 0xd7,0x90,0xd6,0xb7,0x00,0x01,0xff,0xd7,0x90,0xd6,0xb8,0x00,0xd3,0x43,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x91,0xd6,
++ 0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x92,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x93,0xd6,
++ 0xbc,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x94,0xd6,0xbc,0x00,0x01,0xff,0xd7,
++ 0x95,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x96,0xd6,0xbc,0x00,0x00,0x00,0xd2,
++ 0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x98,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x99,
++ 0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x9a,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x9b,
++ 0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0x9c,0xd6,0xbc,0x00,0x00,0x00,
++ 0x10,0x09,0x01,0xff,0xd7,0x9e,0xd6,0xbc,0x00,0x00,0x00,0xcf,0x86,0x95,0x85,0x94,
++ 0x81,0xd3,0x3e,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa0,0xd6,0xbc,0x00,
++ 0x01,0xff,0xd7,0xa1,0xd6,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd7,0xa3,0xd6,
++ 0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0xa4,0xd6,0xbc,0x00,0x00,0x00,0x10,
++ 0x09,0x01,0xff,0xd7,0xa6,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa7,0xd6,0xbc,0x00,0xd2,
++ 0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa8,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa9,
++ 0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0xaa,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x95,
++ 0xd6,0xb9,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x91,0xd6,0xbf,0x00,0x01,0xff,
++ 0xd7,0x9b,0xd6,0xbf,0x00,0x10,0x09,0x01,0xff,0xd7,0xa4,0xd6,0xbf,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,
++ 0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0xcf,0x86,
++ 0x95,0x24,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x5a,0xd2,0x06,0xcf,0x06,0x01,0x00,0xd1,0x14,
++ 0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x08,0x14,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,0x04,0x01,0x00,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x0c,
++ 0x94,0x08,0x13,0x04,0x01,0x00,0x00,0x00,0x05,0x00,0x54,0x04,0x05,0x00,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x07,0x00,0x00,0x00,
++ 0xd2,0xcc,0xd1,0xa4,0xd0,0x36,0xcf,0x86,0xd5,0x14,0x54,0x04,0x06,0x00,0x53,0x04,
++ 0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x94,0x1c,0xd3,0x10,
++ 0x52,0x04,0x01,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xdc,0x52,0x04,
++ 0x10,0xdc,0x11,0x04,0x10,0xdc,0x11,0xe6,0x01,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,
++ 0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
++ 0x06,0x00,0x07,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0xd4,0x18,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd1,0x50,0xd0,0x1e,
++ 0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,
++ 0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
++ 0x10,0x04,0x01,0x00,0x06,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,
++ 0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x18,
++ 0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x92,0x08,0x11,0x04,
++ 0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0xd2,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x53,0x04,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x04,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x03,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,
++ 0x30,0x3e,0xe1,0x1a,0x3b,0xe0,0x97,0x39,0xcf,0x86,0xe5,0x3b,0x26,0xc4,0xe3,0x16,
++ 0x14,0xe2,0xef,0x11,0xe1,0xd0,0x10,0xe0,0x60,0x07,0xcf,0x86,0xe5,0x53,0x03,0xe4,
++ 0x4c,0x02,0xe3,0x3d,0x01,0xd2,0x94,0xd1,0x70,0xd0,0x4a,0xcf,0x86,0xd5,0x18,0x94,
++ 0x14,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,
++ 0x00,0x07,0x00,0x07,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,
++ 0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x07,0x00,0x53,0x04,0x07,0x00,0xd2,0x0c,0x51,
++ 0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,
++ 0x00,0x07,0x00,0xcf,0x86,0x95,0x20,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,0x07,
++ 0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,
++ 0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,
++ 0x04,0x07,0x00,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,
++ 0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,
++ 0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
++ 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,
++ 0x04,0x07,0x00,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x07,0x00,0x07,0x00,0xcf,0x06,0x08,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,
++ 0x20,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x10,
++ 0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x53,
++ 0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,
++ 0x86,0xd5,0x08,0x14,0x04,0x00,0x00,0x0a,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,
++ 0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x00,0x00,0xd2,
++ 0x5e,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,
++ 0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,
++ 0x00,0x00,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0a,0x00,0x93,0x10,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,
++ 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0xdc,0x10,0x00,0x10,0x00,0x10,
++ 0x00,0x10,0x00,0x53,0x04,0x10,0x00,0x12,0x04,0x10,0x00,0x00,0x00,0xd1,0x70,0xd0,
++ 0x36,0xcf,0x86,0xd5,0x18,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,
++ 0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,
++ 0x04,0x05,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x13,
++ 0x00,0x13,0x00,0x05,0x00,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x05,0x00,0x92,
++ 0x0c,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0x54,
++ 0x04,0x10,0x00,0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x10,0xe6,0x92,
++ 0x0c,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,
++ 0x86,0x95,0x18,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x51,
++ 0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x08,0x00,0xcf,0x86,0x95,0x1c,0xd4,
++ 0x0c,0x93,0x08,0x12,0x04,0x08,0x00,0x00,0x00,0x08,0x00,0x93,0x0c,0x52,0x04,0x08,
++ 0x00,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0xba,0xd2,0x80,0xd1,
++ 0x34,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x05,
++ 0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0x95,0x14,0x94,
++ 0x10,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x07,
++ 0x00,0x07,0x00,0xd0,0x2a,0xcf,0x86,0xd5,0x14,0x54,0x04,0x07,0x00,0x53,0x04,0x07,
++ 0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x07,
++ 0x00,0x92,0x08,0x11,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xcf,0x86,0xd5,
++ 0x10,0x54,0x04,0x12,0x00,0x93,0x08,0x12,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x54,
++ 0x04,0x12,0x00,0x53,0x04,0x12,0x00,0x12,0x04,0x12,0x00,0x00,0x00,0xd1,0x34,0xd0,
++ 0x12,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x10,
++ 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x10,0x00,0x00,
++ 0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xd2,0x06,0xcf,0x06,0x10,0x00,0xd1,0x40,0xd0,0x1e,0xcf,
++ 0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,
++ 0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x14,0x54,
++ 0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x00,
++ 0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,
++ 0xce,0x02,0xe3,0x45,0x01,0xd2,0xd0,0xd1,0x70,0xd0,0x52,0xcf,0x86,0xd5,0x20,0x94,
++ 0x1c,0xd3,0x0c,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,
++ 0x00,0xd3,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,
++ 0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x08,0x10,
++ 0x04,0x07,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,0x95,0x18,0x54,
++ 0x04,0x0b,0x00,0x93,0x10,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,
++ 0x00,0x0b,0x00,0x0b,0x00,0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,
++ 0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,
++ 0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,
++ 0x04,0x11,0x00,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,
++ 0x00,0x11,0x04,0x11,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x11,0x00,0x11,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x1c,0x54,0x04,0x09,
++ 0x00,0x53,0x04,0x09,0x00,0xd2,0x08,0x11,0x04,0x09,0x00,0x0b,0x00,0x51,0x04,0x00,
++ 0x00,0x10,0x04,0x00,0x00,0x09,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,
++ 0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,
++ 0x00,0xcf,0x06,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,
++ 0x00,0x53,0x04,0x0d,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x11,0x00,0x0d,0x00,0xcf,
++ 0x86,0x95,0x14,0x54,0x04,0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x11,
++ 0x00,0x11,0x00,0x11,0x00,0x11,0x00,0xd2,0xec,0xd1,0xa4,0xd0,0x76,0xcf,0x86,0xd5,
++ 0x48,0xd4,0x28,0xd3,0x14,0x52,0x04,0x08,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x08,
++ 0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x08,
++ 0x00,0x08,0xdc,0x10,0x04,0x08,0x00,0x08,0xe6,0xd3,0x10,0x52,0x04,0x08,0x00,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x08,0x00,0x08,0x00,0x08,0x00,0x54,0x04,0x08,0x00,0xd3,0x0c,0x52,0x04,0x08,
++ 0x00,0x11,0x04,0x14,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe6,0x08,
++ 0x01,0x10,0x04,0x08,0xdc,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,
++ 0x09,0xcf,0x86,0x95,0x28,0xd4,0x14,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x0a,0xcf,
++ 0x86,0x15,0x04,0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x24,0xd3,
++ 0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0xe6,0x10,0x04,0x10,
++ 0xdc,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,
++ 0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,
++ 0x00,0x00,0x00,0xd1,0x54,0xd0,0x26,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,
++ 0x00,0xd3,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x14,0x54,
++ 0x04,0x0b,0x00,0x93,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x0b,
++ 0x00,0x54,0x04,0x0b,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
++ 0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x54,0x04,0x10,
++ 0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,
++ 0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x96,0xd2,
++ 0x68,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x0b,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,
++ 0x04,0x0b,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,
++ 0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0xd3,0x10,0x92,
++ 0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x92,0x08,0x11,
++ 0x04,0x00,0x00,0x11,0x00,0x11,0x00,0xd1,0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,
++ 0x00,0xd4,0x0c,0x93,0x08,0x12,0x04,0x14,0x00,0x14,0xe6,0x00,0x00,0x53,0x04,0x14,
++ 0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,
++ 0x06,0x00,0x00,0xd2,0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,
++ 0x04,0x00,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,
++ 0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x58,0xd0,
++ 0x12,0xcf,0x86,0x55,0x04,0x14,0x00,0x94,0x08,0x13,0x04,0x14,0x00,0x00,0x00,0x14,
++ 0x00,0xcf,0x86,0x95,0x40,0xd4,0x24,0xd3,0x0c,0x52,0x04,0x14,0x00,0x11,0x04,0x14,
++ 0x00,0x14,0xdc,0xd2,0x0c,0x51,0x04,0x14,0xe6,0x10,0x04,0x14,0xe6,0x14,0xdc,0x91,
++ 0x08,0x10,0x04,0x14,0xe6,0x14,0xdc,0x14,0xdc,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x14,0xdc,0x14,0x00,0x14,0x00,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,
++ 0x00,0x54,0x04,0x15,0x00,0x93,0x10,0x52,0x04,0x15,0x00,0x51,0x04,0x15,0x00,0x10,
++ 0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xe5,0x0f,0x06,0xe4,0xf8,0x03,0xe3,
++ 0x02,0x02,0xd2,0xfb,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0c,0x00,0xcf,0x86,0xd5,0x2c,
++ 0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x09,
++ 0x0c,0x00,0x52,0x04,0x0c,0x00,0x11,0x04,0x0c,0x00,0x00,0x00,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,
++ 0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x09,
++ 0xd0,0x69,0xcf,0x86,0xd5,0x32,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x15,
++ 0x51,0x04,0x0b,0x00,0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x99,0xf0,0x91,0x82,0xba,
++ 0x00,0x0b,0x00,0x91,0x11,0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x9b,0xf0,0x91,0x82,
++ 0xba,0x00,0x0b,0x00,0x0b,0x00,0xd4,0x1d,0x53,0x04,0x0b,0x00,0x92,0x15,0x51,0x04,
++ 0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xff,0xf0,0x91,0x82,0xa5,0xf0,0x91,0x82,0xba,
++ 0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,
++ 0x09,0x10,0x04,0x0b,0x07,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,
++ 0x0c,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x00,0x00,0x0d,0x00,0xd4,0x14,0x53,0x04,0x0d,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,
++ 0x04,0x0d,0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0xd1,0x96,0xd0,
++ 0x5c,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x0d,0xe6,0x10,
++ 0x04,0x0d,0xe6,0x0d,0x00,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd4,0x26,0x53,0x04,0x0d,
++ 0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x0d,0x0d,0xff,0xf0,0x91,0x84,
++ 0xb1,0xf0,0x91,0x84,0xa7,0x00,0x0d,0xff,0xf0,0x91,0x84,0xb2,0xf0,0x91,0x84,0xa7,
++ 0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x0d,0x09,0x91,
++ 0x08,0x10,0x04,0x0d,0x09,0x00,0x00,0x0d,0x00,0x0d,0x00,0xcf,0x86,0xd5,0x18,0x94,
++ 0x14,0x93,0x10,0x52,0x04,0x0d,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,
++ 0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x10,
++ 0x00,0x10,0x04,0x10,0x00,0x10,0x07,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,
++ 0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x09,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd2,
++ 0x10,0xd1,0x08,0x10,0x04,0x0d,0x00,0x11,0x00,0x10,0x04,0x11,0x07,0x11,0x00,0x91,
++ 0x08,0x10,0x04,0x11,0x00,0x10,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x0c,0x51,
++ 0x04,0x0d,0x00,0x10,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x93,
++ 0x10,0x52,0x04,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xd2,0xc8,0xd1,0x48,0xd0,0x42,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x93,
++ 0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,
++ 0x00,0x54,0x04,0x10,0x00,0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,
++ 0x00,0x10,0x09,0x10,0x04,0x10,0x07,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,
++ 0x00,0x10,0x04,0x12,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x52,0xcf,0x86,0xd5,
++ 0x3c,0xd4,0x28,0xd3,0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,
++ 0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x00,0x51,
++ 0x04,0x11,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,
++ 0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x94,0x10,0x53,0x04,0x11,
++ 0x00,0x92,0x08,0x11,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,
++ 0x04,0x10,0x00,0xd4,0x18,0x53,0x04,0x10,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x10,
++ 0x00,0x10,0x07,0x10,0x04,0x10,0x09,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,
++ 0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xe1,0x27,0x01,0xd0,0x8a,0xcf,0x86,
++ 0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x10,0x00,
++ 0x10,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,0x00,
++ 0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x93,0x14,
++ 0x92,0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
++ 0x10,0x00,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
++ 0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x14,0x07,0x91,0x08,0x10,0x04,
++ 0x10,0x07,0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x6a,0xd4,0x42,0xd3,0x14,0x52,0x04,
++ 0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
++ 0xd2,0x19,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0xff,
++ 0xf0,0x91,0x8d,0x87,0xf0,0x91,0x8c,0xbe,0x00,0x91,0x11,0x10,0x0d,0x10,0xff,0xf0,
++ 0x91,0x8d,0x87,0xf0,0x91,0x8d,0x97,0x00,0x10,0x09,0x00,0x00,0xd3,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x10,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,
++ 0x10,0x00,0xd4,0x1c,0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0xe6,
++ 0x52,0x04,0x10,0xe6,0x91,0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x93,0x10,
++ 0x52,0x04,0x10,0xe6,0x91,0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xcf,0x06,0x00,0x00,0xe3,0x30,0x01,0xd2,0xb7,0xd1,0x48,0xd0,0x06,0xcf,0x06,0x12,
++ 0x00,0xcf,0x86,0x95,0x3c,0xd4,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,
++ 0x04,0x12,0x09,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x07,0x12,0x00,0x12,
++ 0x00,0x53,0x04,0x12,0x00,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x00,0x00,0x12,
++ 0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x12,0x00,0x10,0x04,0x14,0xe6,0x15,0x00,0x00,
++ 0x00,0xd0,0x45,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,
++ 0x00,0xd2,0x15,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,0xff,0xf0,0x91,0x92,
++ 0xb9,0xf0,0x91,0x92,0xba,0x00,0xd1,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,
++ 0xf0,0x91,0x92,0xb0,0x00,0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,
++ 0x91,0x92,0xbd,0x00,0x10,0x00,0xcf,0x86,0x95,0x24,0xd4,0x14,0x93,0x10,0x92,0x0c,
++ 0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x09,0x10,0x07,0x10,0x00,0x00,0x00,0x53,0x04,
++ 0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x06,
++ 0xcf,0x06,0x00,0x00,0xd0,0x40,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,
++ 0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0xd2,0x1e,0x51,0x04,
++ 0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x96,0xb8,0xf0,0x91,0x96,0xaf,0x00,0x10,
++ 0xff,0xf0,0x91,0x96,0xb9,0xf0,0x91,0x96,0xaf,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x10,0x09,0xcf,0x86,0x95,0x2c,0xd4,0x1c,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x10,0x07,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,
++ 0x11,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,
++ 0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,0x5c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,
++ 0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x10,0x00,0x10,0x09,0xcf,0x86,0xd5,0x24,0xd4,0x14,0x93,0x10,0x52,0x04,
++ 0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,
++ 0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,0x53,0x04,
++ 0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0xd3,0x10,
++ 0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x0d,0x07,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x14,
++ 0x94,0x10,0x53,0x04,0x0d,0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x11,0x00,
++ 0x53,0x04,0x11,0x00,0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x11,0x00,0x11,0x00,0x94,0x14,0x53,0x04,0x11,0x00,
++ 0x92,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x11,0x09,0x00,0x00,0x11,0x00,
++ 0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,0x59,0x01,0xd3,0xb2,0xd2,0x5c,0xd1,
++ 0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,
++ 0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x14,0x00,0x14,0x09,0x10,0x04,0x14,0x07,0x14,
++ 0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x00,0x00,0x10,
++ 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x10,0x92,0x0c,0x51,
++ 0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,
++ 0x1a,0xcf,0x86,0x55,0x04,0x00,0x00,0x94,0x10,0x53,0x04,0x15,0x00,0x92,0x08,0x11,
++ 0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x15,
++ 0x00,0x53,0x04,0x15,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x94,
++ 0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x15,0x09,0x15,0x00,0x15,0x00,0x91,
++ 0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,
++ 0x3c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x93,0x10,0x52,
++ 0x04,0x13,0x00,0x91,0x08,0x10,0x04,0x13,0x09,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,
++ 0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,
++ 0x04,0x13,0x00,0x13,0x09,0x00,0x00,0x13,0x00,0x13,0x00,0xd0,0x46,0xcf,0x86,0xd5,
++ 0x2c,0xd4,0x10,0x93,0x0c,0x52,0x04,0x13,0x00,0x11,0x04,0x15,0x00,0x13,0x00,0x13,
++ 0x00,0x53,0x04,0x13,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x13,0x09,0x13,
++ 0x00,0x91,0x08,0x10,0x04,0x13,0x00,0x14,0x00,0x13,0x00,0x94,0x14,0x93,0x10,0x92,
++ 0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xe3,0xa9,0x01,0xd2,0xb0,0xd1,0x6c,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,
++ 0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x12,0x00,
++ 0x12,0x00,0x12,0x00,0x54,0x04,0x12,0x00,0xd3,0x10,0x52,0x04,0x12,0x00,0x51,0x04,
++ 0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,
++ 0x10,0x04,0x12,0x00,0x12,0x09,0xcf,0x86,0xd5,0x14,0x94,0x10,0x93,0x0c,0x52,0x04,
++ 0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0x94,0x14,0x53,0x04,
++ 0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
++ 0x12,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,0x12,0x00,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x93,0x10,
++ 0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
++ 0xcf,0x06,0x00,0x00,0xd1,0xa0,0xd0,0x52,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,
++ 0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x92,0x0c,
++ 0x51,0x04,0x13,0x00,0x10,0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x54,0x04,
++ 0x13,0x00,0xd3,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,
++ 0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x51,0x04,
++ 0x13,0x00,0x10,0x04,0x00,0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0x93,0x14,
++ 0xd2,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x07,0x13,0x00,0x11,0x04,0x13,0x09,
++ 0x13,0x00,0x00,0x00,0x53,0x04,0x13,0x00,0x92,0x08,0x11,0x04,0x13,0x00,0x00,0x00,
++ 0x00,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,
++ 0x00,0x00,0x14,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x14,0x00,
++ 0x14,0x00,0x14,0x00,0xd0,0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x14,0x53,0x04,0x14,0x00,
++ 0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0xd3,0x18,
++ 0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x51,0x04,0x14,0x00,
++ 0x10,0x04,0x14,0x00,0x14,0x09,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,
++ 0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,
++ 0x14,0x00,0x53,0x04,0x14,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,
++ 0xcf,0x86,0x55,0x04,0x15,0x00,0x54,0x04,0x15,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,
++ 0x15,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x15,0x00,0xd0,0xca,0xcf,0x86,0xd5,0xc2,0xd4,0x54,0xd3,0x06,0xcf,0x06,
++ 0x09,0x00,0xd2,0x06,0xcf,0x06,0x09,0x00,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x09,0x00,
++ 0xcf,0x86,0x55,0x04,0x09,0x00,0x94,0x14,0x53,0x04,0x09,0x00,0x52,0x04,0x09,0x00,
++ 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,0x10,0x00,0xd0,0x1e,0xcf,0x86,
++ 0x95,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x68,
++ 0xd2,0x46,0xd1,0x40,0xd0,0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,
++ 0xd4,0x20,0xd3,0x10,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,
++ 0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,
++ 0x93,0x10,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x06,0x11,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,
++ 0x95,0x10,0x94,0x0c,0x93,0x08,0x12,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,
++ 0xd5,0x4c,0xd4,0x06,0xcf,0x06,0x0b,0x00,0xd3,0x40,0xd2,0x3a,0xd1,0x34,0xd0,0x2e,
++ 0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
++ 0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x53,0x04,0x15,0x00,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,
++ 0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,
++ 0xd1,0x4c,0xd0,0x44,0xcf,0x86,0xd5,0x3c,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,
++ 0xcf,0x06,0x11,0x00,0xd2,0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,
++ 0x95,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,
++ 0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0xd2,0x01,0xcf,
++ 0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x0b,0x01,0xd3,0x06,0xcf,0x06,0x0c,0x00,
++ 0xd2,0x84,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0c,0x00,0x54,0x04,0x0c,0x00,
++ 0x53,0x04,0x0c,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,
++ 0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,0x53,0x04,
++ 0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,
++ 0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x00,0x00,
++ 0x10,0x00,0xd4,0x10,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,
++ 0x00,0x00,0x93,0x10,0x52,0x04,0x10,0x01,0x91,0x08,0x10,0x04,0x10,0x01,0x10,0x00,
++ 0x00,0x00,0x00,0x00,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,
++ 0x10,0x00,0x93,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,
++ 0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x24,0xd4,0x10,0x93,0x0c,0x52,0x04,0x10,0x00,
++ 0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,
++ 0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,
++ 0x10,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
++ 0xd0,0x0e,0xcf,0x86,0x95,0x08,0x14,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,
++ 0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,0x30,0xd1,0x0c,0xd0,0x06,0xcf,0x06,
++ 0x00,0x00,0xcf,0x06,0x14,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x14,0x00,
++ 0x53,0x04,0x14,0x00,0x92,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0d,0x00,
++ 0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,0x52,0x04,0x0d,0x00,0x91,0x08,0x10,0x04,
++ 0x0d,0x00,0x15,0x00,0x15,0x00,0xd2,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0d,0x00,0x54,0x04,
++ 0x0d,0x00,0x53,0x04,0x0d,0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,
++ 0x0d,0x00,0x15,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x15,0x00,
++ 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x0d,0x00,
++ 0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x12,0x00,0x13,0x00,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
++ 0xcf,0x06,0x12,0x00,0xe2,0xc5,0x01,0xd1,0x8e,0xd0,0x86,0xcf,0x86,0xd5,0x48,0xd4,
++ 0x06,0xcf,0x06,0x12,0x00,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x06,0xcf,0x06,0x12,
++ 0x00,0xd1,0x06,0xcf,0x06,0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,
++ 0x04,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,
++ 0x04,0x12,0x00,0x14,0x00,0x14,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x14,0x00,0x15,
++ 0x00,0x15,0x00,0x00,0x00,0xd4,0x36,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x2a,0xd1,
++ 0x06,0xcf,0x06,0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,
++ 0x00,0x54,0x04,0x12,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,
+- 0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x4c,0xd0,0x44,0xcf,
+- 0x86,0xd5,0x3c,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x11,0x00,0xd2,
+- 0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,
+- 0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0xd2,0x01,0xcf,0x86,0xd5,0x06,0xcf,0x06,
+- 0x00,0x00,0xe4,0x0b,0x01,0xd3,0x06,0xcf,0x06,0x0c,0x00,0xd2,0x84,0xd1,0x50,0xd0,
+- 0x1e,0xcf,0x86,0x55,0x04,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,0x0c,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,
+- 0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,
+- 0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,
+- 0x04,0x10,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x00,0x00,0xd0,0x06,0xcf,
+- 0x06,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x00,0x00,0x10,0x00,0xd4,0x10,0x53,
+- 0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x93,0x10,0x52,
+- 0x04,0x10,0x01,0x91,0x08,0x10,0x04,0x10,0x01,0x10,0x00,0x00,0x00,0x00,0x00,0xd1,
+- 0x6c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x10,0x52,
+- 0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,0x10,0x00,0x10,0x00,0xcf,
+- 0x86,0xd5,0x24,0xd4,0x10,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,
+- 0x00,0x10,0x00,0x10,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,
+- 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x00,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd0,0x0e,0xcf,0x86,0x95,
+- 0x08,0x14,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,
+- 0x06,0x00,0x00,0xd2,0x30,0xd1,0x0c,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x06,0x14,
+- 0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x14,0x00,0x53,0x04,0x14,0x00,0x92,
+- 0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x06,0x00,0x00,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x2c,0x94,
+- 0x28,0xd3,0x10,0x52,0x04,0x0d,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x15,0x00,0x15,
+- 0x00,0xd2,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,0x51,0x04,0x00,
+- 0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0d,0x00,0x54,0x04,0x0d,0x00,0x53,0x04,0x0d,
+- 0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x15,0x00,0xd0,
+- 0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x15,0x00,0x52,0x04,0x00,0x00,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x0d,0x00,0x00,0x00,0xcf,0x86,0x55,
+- 0x04,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x13,
+- 0x00,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xcf,0x06,0x12,0x00,0xe2,
+- 0xc6,0x01,0xd1,0x8e,0xd0,0x86,0xcf,0x86,0xd5,0x48,0xd4,0x06,0xcf,0x06,0x12,0x00,
+- 0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x06,0xcf,0x06,0x12,0x00,0xd1,0x06,0xcf,0x06,
+- 0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,0x00,0xd4,0x14,
+- 0x53,0x04,0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x14,0x00,
+- 0x14,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x14,0x00,0x15,0x00,0x15,0x00,0x00,0x00,
+- 0xd4,0x36,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,0x12,0x00,
+- 0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,0x00,0x54,0x04,0x12,0x00,
+- 0x93,0x10,0x92,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,
+- 0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0xa2,0xd4,0x9c,0xd3,0x74,
+- 0xd2,0x26,0xd1,0x20,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x0c,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,0x06,
+- 0x13,0x00,0xcf,0x06,0x13,0x00,0xd1,0x48,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
+- 0x13,0x00,0x53,0x04,0x13,0x00,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,
+- 0x13,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x00,0x00,0x93,0x10,
+- 0x92,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x94,0x0c,0x93,0x08,0x12,0x04,0x00,0x00,0x15,0x00,0x00,0x00,0x13,0x00,0xcf,0x06,
+- 0x13,0x00,0xd2,0x22,0xd1,0x06,0xcf,0x06,0x13,0x00,0xd0,0x06,0xcf,0x06,0x13,0x00,
+- 0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x53,0x04,0x13,0x00,0x12,0x04,
+- 0x13,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,
+- 0x00,0x00,0xd3,0x7f,0xd2,0x79,0xd1,0x34,0xd0,0x06,0xcf,0x06,0x10,0x00,0xcf,0x86,
+- 0x55,0x04,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,
+- 0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,0x3f,0xcf,0x86,0xd5,0x2c,
+- 0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x00,0x00,
+- 0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x01,0x10,0x00,0x94,0x0d,0x93,0x09,0x12,0x05,
+- 0x10,0xff,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xe1,0x96,0x04,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,
+- 0xcf,0x86,0xe5,0x33,0x04,0xe4,0x83,0x02,0xe3,0xf8,0x01,0xd2,0x26,0xd1,0x06,0xcf,
+- 0x06,0x05,0x00,0xd0,0x06,0xcf,0x06,0x05,0x00,0xcf,0x86,0x55,0x04,0x05,0x00,0x54,
+- 0x04,0x05,0x00,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x00,0x00,0x00,
+- 0x00,0xd1,0xef,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,0x20,0xd3,0x10,0x52,
+- 0x04,0x05,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x05,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0xd5,
+- 0x2a,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x51,0x04,0x05,
+- 0x00,0x10,0x0d,0x05,0xff,0xf0,0x9d,0x85,0x97,0xf0,0x9d,0x85,0xa5,0x00,0x05,0xff,
+- 0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0x00,0xd4,0x75,0xd3,0x61,0xd2,0x44,0xd1,
+- 0x22,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,
+- 0xae,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xaf,
+- 0x00,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,
+- 0xb0,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xb1,
+- 0x00,0xd1,0x15,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,
+- 0x9d,0x85,0xb2,0x00,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x01,0xd2,0x08,0x11,0x04,
+- 0x05,0x01,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe2,0x05,0xd8,0xd3,0x12,
+- 0x92,0x0d,0x51,0x04,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0xff,0x00,0x05,0xff,0x00,
+- 0x92,0x0e,0x51,0x05,0x05,0xff,0x00,0x10,0x05,0x05,0xff,0x00,0x05,0xdc,0x05,0xdc,
++ 0x86,0xcf,0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,
++ 0xa2,0xd4,0x9c,0xd3,0x74,0xd2,0x26,0xd1,0x20,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x94,
++ 0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,
++ 0x00,0x13,0x00,0xcf,0x06,0x13,0x00,0xcf,0x06,0x13,0x00,0xd1,0x48,0xd0,0x1e,0xcf,
++ 0x86,0x95,0x18,0x54,0x04,0x13,0x00,0x53,0x04,0x13,0x00,0x52,0x04,0x13,0x00,0x51,
++ 0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,
++ 0x04,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x00,0x00,0x15,0x00,0x00,
++ 0x00,0x13,0x00,0xcf,0x06,0x13,0x00,0xd2,0x22,0xd1,0x06,0xcf,0x06,0x13,0x00,0xd0,
++ 0x06,0xcf,0x06,0x13,0x00,0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x53,
++ 0x04,0x13,0x00,0x12,0x04,0x13,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x7e,0xd2,0x78,0xd1,0x34,0xd0,0x06,0xcf,
++ 0x06,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,
++ 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,
++ 0x00,0x52,0x04,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,
++ 0x3e,0xcf,0x86,0xd5,0x2c,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x01,0x10,0x00,0x94,
++ 0x0c,0x93,0x08,0x12,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xe1,0x92,0x04,0xd0,0x08,0xcf,0x86,
++ 0xcf,0x06,0x00,0x00,0xcf,0x86,0xe5,0x2f,0x04,0xe4,0x7f,0x02,0xe3,0xf4,0x01,0xd2,
++ 0x26,0xd1,0x06,0xcf,0x06,0x05,0x00,0xd0,0x06,0xcf,0x06,0x05,0x00,0xcf,0x86,0x55,
++ 0x04,0x05,0x00,0x54,0x04,0x05,0x00,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,
++ 0x00,0x00,0x00,0x00,0x00,0xd1,0xeb,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,
++ 0x20,0xd3,0x10,0x52,0x04,0x05,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x05,0x00,0x05,0x00,0x05,
++ 0x00,0xcf,0x86,0xd5,0x2a,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,
++ 0x00,0x51,0x04,0x05,0x00,0x10,0x0d,0x05,0xff,0xf0,0x9d,0x85,0x97,0xf0,0x9d,0x85,
++ 0xa5,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0x00,0xd4,0x75,0xd3,
++ 0x61,0xd2,0x44,0xd1,0x22,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,
++ 0xa5,0xf0,0x9d,0x85,0xae,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,
++ 0xf0,0x9d,0x85,0xaf,0x00,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,
++ 0xa5,0xf0,0x9d,0x85,0xb0,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,
++ 0xf0,0x9d,0x85,0xb1,0x00,0xd1,0x15,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,
++ 0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xb2,0x00,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x01,
++ 0xd2,0x08,0x11,0x04,0x05,0x01,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe2,
++ 0x05,0xd8,0xd3,0x10,0x92,0x0c,0x51,0x04,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x00,
++ 0x05,0x00,0x92,0x0c,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x05,0xdc,0x05,0xdc,
+ 0xd0,0x97,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x05,0xdc,
+ 0x10,0x04,0x05,0xdc,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe6,0x05,0xe6,
+ 0x92,0x08,0x11,0x04,0x05,0xe6,0x05,0xdc,0x05,0x00,0x05,0x00,0xd4,0x14,0x53,0x04,
+@@ -4090,21 +4080,20 @@ static const unsigned char utf8data[64256] = {
+ 0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,
+ 0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,
+ 0x04,0x00,0x00,0x53,0x04,0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,
+- 0x00,0xd4,0xd9,0xd3,0x81,0xd2,0x79,0xd1,0x71,0xd0,0x69,0xcf,0x86,0xd5,0x60,0xd4,
+- 0x59,0xd3,0x52,0xd2,0x33,0xd1,0x2c,0xd0,0x25,0xcf,0x86,0x95,0x1e,0x94,0x19,0x93,
+- 0x14,0x92,0x0f,0x91,0x0a,0x10,0x05,0x00,0xff,0x00,0x05,0xff,0x00,0x00,0xff,0x00,
+- 0x00,0xff,0x00,0x00,0xff,0x00,0x00,0xff,0x00,0x05,0xff,0x00,0xcf,0x06,0x05,0xff,
+- 0x00,0xcf,0x06,0x00,0xff,0x00,0xd1,0x07,0xcf,0x06,0x07,0xff,0x00,0xd0,0x07,0xcf,
+- 0x06,0x07,0xff,0x00,0xcf,0x86,0x55,0x05,0x07,0xff,0x00,0x14,0x05,0x07,0xff,0x00,
+- 0x00,0xff,0x00,0xcf,0x06,0x00,0xff,0x00,0xcf,0x06,0x00,0xff,0x00,0xcf,0x06,0x00,
+- 0xff,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,
+- 0xcf,0x06,0x00,0x00,0xd2,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xd1,0x08,0xcf,0x86,
+- 0xcf,0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x06,
+- 0xcf,0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,
+- 0xd2,0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,
+- 0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x00,0x00,
+- 0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,0x00,0xcf,0x86,0xcf,0x06,0x02,0x00,
+- 0x81,0x80,0xcf,0x86,0x85,0x84,0xcf,0x86,0xcf,0x06,0x02,0x00,0x00,0x00,0x00,0x00
++ 0x00,0xd4,0xc8,0xd3,0x70,0xd2,0x68,0xd1,0x60,0xd0,0x58,0xcf,0x86,0xd5,0x50,0xd4,
++ 0x4a,0xd3,0x44,0xd2,0x2a,0xd1,0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x05,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x05,0x00,0xcf,0x06,0x05,0x00,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,
++ 0x06,0x07,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x14,
++ 0x04,0x07,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,
++ 0x06,0x00,0x00,0xd2,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xd1,0x08,0xcf,0x86,0xcf,
++ 0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x06,0xcf,
++ 0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,
++ 0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,
++ 0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x00,0x00,0x52,
++ 0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,0x00,0xcf,0x86,0xcf,0x06,0x02,0x00,0x81,
++ 0x80,0xcf,0x86,0x85,0x84,0xcf,0x86,0xcf,0x06,0x02,0x00,0x00,0x00,0x00,0x00,0x00
+ };
+
+ struct utf8data_table utf8_data_table = {
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 70fa4ffc3879f5..f3e5ce397b8ef7 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -294,6 +294,7 @@ struct bpf_map {
+ * same prog type, JITed flag and xdp_has_frags flag.
+ */
+ struct {
++ const struct btf_type *attach_func_proto;
+ spinlock_t lock;
+ enum bpf_prog_type type;
+ bool jited;
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index 1df86ab98c775e..793a4a610db2c7 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -240,6 +240,7 @@ struct nfs_server {
+ struct list_head layouts;
+ struct list_head delegations;
+ struct list_head ss_copies;
++ struct list_head ss_src_copies;
+
+ unsigned long delegation_gen;
+ unsigned long mig_gen;
+diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
+index 85bdf2adb76073..8e3dcac55dcd5b 100644
+--- a/include/linux/pci-epc.h
++++ b/include/linux/pci-epc.h
+@@ -128,6 +128,7 @@ struct pci_epc_mem {
+ * @group: configfs group representing the PCI EPC device
+ * @lock: mutex to protect pci_epc ops
+ * @function_num_map: bitmap to manage physical function number
++ * @domain_nr: PCI domain number of the endpoint controller
+ * @init_complete: flag to indicate whether the EPC initialization is complete
+ * or not
+ */
+@@ -145,6 +146,7 @@ struct pci_epc {
+ /* mutex to protect against concurrent access of EP controller */
+ struct mutex lock;
+ unsigned long function_num_map;
++ int domain_nr;
+ bool init_complete;
+ };
+
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 4cf89a4b4cbcf0..37d97bef060f9e 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1884,7 +1884,7 @@ static inline int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
+ { return 0; }
+ #endif
+ int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent);
+-void pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent);
++void pci_bus_release_domain_nr(struct device *parent, int domain_nr);
+ #endif
+
+ /* Some architectures require additional setup to direct VGA traffic */
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index e388c8b1cbc276..e4bddb92779564 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -580,6 +580,7 @@
+ #define PCI_DEVICE_ID_AMD_19H_M78H_DF_F3 0x12fb
+ #define PCI_DEVICE_ID_AMD_1AH_M00H_DF_F3 0x12c3
+ #define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F3 0x16fb
++#define PCI_DEVICE_ID_AMD_1AH_M60H_DF_F3 0x124b
+ #define PCI_DEVICE_ID_AMD_1AH_M70H_DF_F3 0x12bb
+ #define PCI_DEVICE_ID_AMD_MI200_DF_F3 0x14d3
+ #define PCI_DEVICE_ID_AMD_MI300_DF_F3 0x152b
+@@ -2661,6 +2662,8 @@
+ #define PCI_DEVICE_ID_DCI_PCCOM8 0x0002
+ #define PCI_DEVICE_ID_DCI_PCCOM2 0x0004
+
++#define PCI_VENDOR_ID_GLENFLY 0x6766
++
+ #define PCI_VENDOR_ID_INTEL 0x8086
+ #define PCI_DEVICE_ID_INTEL_EESSC 0x0008
+ #define PCI_DEVICE_ID_INTEL_HDA_CML_LP 0x02c8
+diff --git a/include/net/mctp.h b/include/net/mctp.h
+index 7b17c52e8ce2a4..28d59ae94ca3b4 100644
+--- a/include/net/mctp.h
++++ b/include/net/mctp.h
+@@ -295,7 +295,7 @@ void mctp_neigh_remove_dev(struct mctp_dev *mdev);
+ int mctp_routes_init(void);
+ void mctp_routes_exit(void);
+
+-void mctp_device_init(void);
++int mctp_device_init(void);
+ void mctp_device_exit(void);
+
+ #endif /* __NET_MCTP_H */
+diff --git a/include/net/rtnetlink.h b/include/net/rtnetlink.h
+index b45d57b5968af4..2d3eb7cb4dfff0 100644
+--- a/include/net/rtnetlink.h
++++ b/include/net/rtnetlink.h
+@@ -29,6 +29,15 @@ static inline enum rtnl_kinds rtnl_msgtype_kind(int msgtype)
+ return msgtype & RTNL_KIND_MASK;
+ }
+
++struct rtnl_msg_handler {
++ struct module *owner;
++ int protocol;
++ int msgtype;
++ rtnl_doit_func doit;
++ rtnl_dumpit_func dumpit;
++ int flags;
++};
++
+ void rtnl_register(int protocol, int msgtype,
+ rtnl_doit_func, rtnl_dumpit_func, unsigned int flags);
+ int rtnl_register_module(struct module *owner, int protocol, int msgtype,
+@@ -36,6 +45,14 @@ int rtnl_register_module(struct module *owner, int protocol, int msgtype,
+ int rtnl_unregister(int protocol, int msgtype);
+ void rtnl_unregister_all(int protocol);
+
++int __rtnl_register_many(const struct rtnl_msg_handler *handlers, int n);
++void __rtnl_unregister_many(const struct rtnl_msg_handler *handlers, int n);
++
++#define rtnl_register_many(handlers) \
++ __rtnl_register_many(handlers, ARRAY_SIZE(handlers))
++#define rtnl_unregister_many(handlers) \
++ __rtnl_unregister_many(handlers, ARRAY_SIZE(handlers))
++
+ static inline int rtnl_msg_family(const struct nlmsghdr *nlh)
+ {
+ if (nlmsg_len(nlh) >= sizeof(struct rtgenmsg))
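For reference, a minimal sketch of how a subsystem adopts the new batch
registration API (foo_doit/foo_dumpit are hypothetical callbacks; the
br_vlan, MCTP and MPLS conversions further down follow exactly this
pattern):

    static const struct rtnl_msg_handler foo_rtnl_msg_handlers[] = {
            /* owner, protocol, msgtype, doit, dumpit, flags */
            {THIS_MODULE, PF_BRIDGE, RTM_NEWVLAN, foo_doit, NULL, 0},
            {THIS_MODULE, PF_BRIDGE, RTM_GETVLAN, NULL, foo_dumpit, 0},
    };

    static int __init foo_init(void)
    {
            /* registers every entry; on failure, __rtnl_register_many()
             * unwinds the handlers that were already registered */
            return rtnl_register_many(foo_rtnl_msg_handlers);
    }

    static void __exit foo_exit(void)
    {
            /* unregisters in reverse order of registration */
            rtnl_unregister_many(foo_rtnl_msg_handlers);
    }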
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 79edd5b5e3c913..5d74fa7e694cc8 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -848,7 +848,6 @@ static inline void qdisc_calculate_pkt_len(struct sk_buff *skb,
+ static inline int qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ struct sk_buff **to_free)
+ {
+- qdisc_calculate_pkt_len(skb, sch);
+ return sch->enqueue(skb, sch, to_free);
+ }
+
+diff --git a/include/net/sock.h b/include/net/sock.h
+index cce23ac4d51489..2d4149075091b1 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -892,6 +892,8 @@ static inline void sk_add_bind_node(struct sock *sk,
+ hlist_for_each_entry_safe(__sk, tmp, list, sk_node)
+ #define sk_for_each_bound(__sk, list) \
+ hlist_for_each_entry(__sk, list, sk_bind_node)
++#define sk_for_each_bound_safe(__sk, tmp, list) \
++ hlist_for_each_entry_safe(__sk, tmp, list, sk_bind_node)
+
+ /**
+ * sk_for_each_entry_offset_rcu - iterate over a list at a given struct offset
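The new _safe variant matters when the loop body can unlink a socket
from the bind list, mirroring hlist_for_each_entry_safe(); a minimal
sketch (bind_list and should_unbind() are hypothetical):

    struct sock *sk;
    struct hlist_node *tmp;

    /* tmp caches the next node, so deleting sk mid-walk is safe */
    sk_for_each_bound_safe(sk, tmp, &bind_list) {
            if (should_unbind(sk))
                    __sk_del_bind_node(sk);
    }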
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 7a166120a45c3d..7057d942fb2b04 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -627,6 +627,21 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
+ }
+ list_del(&ocqe->list);
+ kfree(ocqe);
++
++ /*
++ * For silly syzbot cases that deliberately overflow by huge
++ * amounts, check if we need to resched and, if so, drop and
++ * reacquire the locks. Nothing real would ever hit this.
++ * Ideally we'd have a non-posting unlock for this, but it's
++ * hard to care about a non-real case.
++ */
++ if (need_resched()) {
++ io_cq_unlock_post(ctx);
++ mutex_unlock(&ctx->uring_lock);
++ cond_resched();
++ mutex_lock(&ctx->uring_lock);
++ io_cq_lock(ctx);
++ }
+ }
+
+ if (list_empty(&ctx->cq_overflow_list)) {
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index d85e2d41a992b8..6b3bc0876f7fe0 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -973,17 +973,21 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ if (issue_flags & IO_URING_F_MULTISHOT)
+ return IOU_ISSUE_SKIP_COMPLETE;
+ return -EAGAIN;
+- }
+-
+- /*
+- * Any successful return value will keep the multishot read armed.
+- */
+- if (ret > 0 && req->flags & REQ_F_APOLL_MULTISHOT) {
++ } else if (ret <= 0) {
++ io_kbuf_recycle(req, issue_flags);
++ if (ret < 0)
++ req_set_fail(req);
++ } else {
+ /*
+- * Put our buffer and post a CQE. If we fail to post a CQE, then
++ * Any successful return value will keep the multishot read
++ * armed, if it's still set. Put our buffer and post a CQE. If
++ * we fail to post a CQE, or multishot is no longer set, then
+ * jump to the termination path. This request is then done.
+ */
+ cflags = io_put_kbuf(req, issue_flags);
++ if (!(req->flags & REQ_F_APOLL_MULTISHOT))
++ goto done;
++
+ rw->len = 0; /* similarly to above, reset len to 0 */
+
+ if (io_req_post_cqe(req, ret, cflags | IORING_CQE_F_MORE)) {
+@@ -1004,6 +1008,7 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ * Either an error, or we've hit overflow posting the CQE. For any
+ * multishot request, hitting overflow will terminate it.
+ */
++done:
+ io_req_set_res(req, ret, cflags);
+ io_req_rw_cleanup(req, issue_flags);
+ if (issue_flags & IO_URING_F_MULTISHOT)
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index feabc019385258..a5c6f8aa490156 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -73,6 +73,9 @@ int array_map_alloc_check(union bpf_attr *attr)
+ /* avoid overflow on round_up(map->value_size) */
+ if (attr->value_size > INT_MAX)
+ return -E2BIG;
++ /* percpu map value size is bound by PCPU_MIN_UNIT_SIZE */
++ if (percpu && round_up(attr->value_size, 8) > PCPU_MIN_UNIT_SIZE)
++ return -E2BIG;
+
+ return 0;
+ }
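The effect is visible at map-creation time from userspace; a hedged
probe, assuming libbpf's bpf_map_create() and a PCPU_MIN_UNIT_SIZE of
32 KiB (the usual value, but config-dependent):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <bpf/bpf.h>	/* bpf_map_create() */

    int main(void)
    {
            /* a 64 KiB per-cpu value exceeds PCPU_MIN_UNIT_SIZE, so
             * creation now fails up front with E2BIG instead of
             * erroring later in the percpu allocator */
            int fd = bpf_map_create(BPF_MAP_TYPE_PERCPU_ARRAY, NULL,
                                    4, 64 * 1024, 1, NULL);
            if (fd < 0)
                    printf("create failed: %s (E2BIG expected)\n",
                           strerror(errno));
            return 0;
    }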
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 7ee62e38faf0e9..4e07cc057d6f2e 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2302,6 +2302,7 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ {
+ enum bpf_prog_type prog_type = resolve_prog_type(fp);
+ bool ret;
++ struct bpf_prog_aux *aux = fp->aux;
+
+ if (fp->kprobe_override)
+ return false;
+@@ -2311,7 +2312,7 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ * in the case of devmap and cpumap). Until device checks
+ * are implemented, prohibit adding dev-bound programs to program maps.
+ */
+- if (bpf_prog_is_dev_bound(fp->aux))
++ if (bpf_prog_is_dev_bound(aux))
+ return false;
+
+ spin_lock(&map->owner.lock);
+@@ -2321,12 +2322,26 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ */
+ map->owner.type = prog_type;
+ map->owner.jited = fp->jited;
+- map->owner.xdp_has_frags = fp->aux->xdp_has_frags;
++ map->owner.xdp_has_frags = aux->xdp_has_frags;
++ map->owner.attach_func_proto = aux->attach_func_proto;
+ ret = true;
+ } else {
+ ret = map->owner.type == prog_type &&
+ map->owner.jited == fp->jited &&
+- map->owner.xdp_has_frags == fp->aux->xdp_has_frags;
++ map->owner.xdp_has_frags == aux->xdp_has_frags;
++ if (ret &&
++ map->owner.attach_func_proto != aux->attach_func_proto) {
++ switch (prog_type) {
++ case BPF_PROG_TYPE_TRACING:
++ case BPF_PROG_TYPE_LSM:
++ case BPF_PROG_TYPE_EXT:
++ case BPF_PROG_TYPE_STRUCT_OPS:
++ ret = false;
++ break;
++ default:
++ break;
++ }
++ }
+ }
+ spin_unlock(&map->owner.lock);
+
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 06115f8728e89e..bcb74ac880cb53 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -462,6 +462,9 @@ static int htab_map_alloc_check(union bpf_attr *attr)
+ * kmalloc-able later in htab_map_update_elem()
+ */
+ return -E2BIG;
++ /* percpu map value size is bound by PCPU_MIN_UNIT_SIZE */
++ if (percpu && round_up(attr->value_size, 8) > PCPU_MIN_UNIT_SIZE)
++ return -E2BIG;
+
+ return 0;
+ }
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index d9cae8e259699c..21fb9c4d498fb0 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -733,15 +733,11 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
+ }
+ }
+
+-/* called from workqueue */
+-static void bpf_map_free_deferred(struct work_struct *work)
++static void bpf_map_free(struct bpf_map *map)
+ {
+- struct bpf_map *map = container_of(work, struct bpf_map, work);
+ struct btf_record *rec = map->record;
+ struct btf *btf = map->btf;
+
+- security_bpf_map_free(map);
+- bpf_map_release_memcg(map);
+ /* implementation dependent freeing */
+ map->ops->map_free(map);
+ /* Delay freeing of btf_record for maps, as map_free
+@@ -760,6 +756,16 @@ static void bpf_map_free_deferred(struct work_struct *work)
+ btf_put(btf);
+ }
+
++/* called from workqueue */
++static void bpf_map_free_deferred(struct work_struct *work)
++{
++ struct bpf_map *map = container_of(work, struct bpf_map, work);
++
++ security_bpf_map_free(map);
++ bpf_map_release_memcg(map);
++ bpf_map_free(map);
++}
++
+ static void bpf_map_put_uref(struct bpf_map *map)
+ {
+ if (atomic64_dec_and_test(&map->usercnt)) {
+@@ -1411,8 +1417,7 @@ static int map_create(union bpf_attr *attr)
+ free_map_sec:
+ security_bpf_map_free(map);
+ free_map:
+- btf_put(map->btf);
+- map->ops->map_free(map);
++ bpf_map_free(map);
+ put_token:
+ bpf_token_put(token);
+ return err;
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index db4ceb0f503cca..9bb36897b6c62d 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -623,6 +623,8 @@ void kthread_unpark(struct task_struct *k)
+ {
+ struct kthread *kthread = to_kthread(k);
+
++ if (!test_bit(KTHREAD_SHOULD_PARK, &kthread->flags))
++ return;
+ /*
+ * Newly created kthread was parked when the CPU was offline.
+ * The binding was lost and we need to set it again.
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 2686ba122fa08d..3630f712358e46 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -569,13 +569,19 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
+ rcu_nocb_unlock(rdp);
+ wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_LAZY,
+ TPS("WakeLazy"));
+- } else if (!irqs_disabled_flags(flags)) {
++ } else if (!irqs_disabled_flags(flags) && cpu_online(rdp->cpu)) {
+ /* ... if queue was empty ... */
+ rcu_nocb_unlock(rdp);
+ wake_nocb_gp(rdp, false);
+ trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
+ TPS("WakeEmpty"));
+ } else {
++ /*
++ * Don't do the wake-up upfront on fragile paths.
++ * Also offline CPUs can't call swake_up_one_online() from
++ * (soft-)IRQs. Rely on the final deferred wake-up from
++ * rcutree_report_cpu_dead().
++ */
+ rcu_nocb_unlock(rdp);
+ wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE,
+ TPS("WakeEmptyIsDeferred"));
+diff --git a/mm/secretmem.c b/mm/secretmem.c
+index 3afb5ad701e14a..399552814fd0ff 100644
+--- a/mm/secretmem.c
++++ b/mm/secretmem.c
+@@ -238,7 +238,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned int, flags)
+ /* make sure local flags do not conflict with global fcntl.h */
+ BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+- if (!secretmem_enable)
++ if (!secretmem_enable || !can_set_direct_map())
+ return -ENOSYS;
+
+ if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+@@ -280,7 +280,7 @@ static struct file_system_type secretmem_fs = {
+
+ static int __init secretmem_init(void)
+ {
+- if (!secretmem_enable)
++ if (!secretmem_enable || !can_set_direct_map())
+ return 0;
+
+ secretmem_mnt = kern_mount(&secretmem_fs);
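From userspace the combined check surfaces as ENOSYS; a minimal
availability probe (syscall number taken from the libc headers):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
    #ifdef __NR_memfd_secret
            long fd = syscall(__NR_memfd_secret, 0);

            if (fd < 0 && errno == ENOSYS)
                    puts("memfd_secret: disabled or direct map immutable");
            else if (fd >= 0)
                    close(fd);
    #endif
            return 0;
    }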
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 58b528df1a86ec..a7ffce48512354 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -289,6 +289,9 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data)
+
+ kfree(conn_handle);
+
++ if (!hci_conn_valid(hdev, conn))
++ return -ECANCELED;
++
+ bt_dev_dbg(hdev, "hcon %p", conn);
+
+ configure_datapath_sync(hdev, &conn->codec);
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 37d63d768afb8c..f48250e3f2e103 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -865,9 +865,7 @@ static int rfcomm_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned lon
+
+ if (err == -ENOIOCTLCMD) {
+ #ifdef CONFIG_BT_RFCOMM_TTY
+- lock_sock(sk);
+ err = rfcomm_dev_ioctl(sk, cmd, (void __user *) arg);
+- release_sock(sk);
+ #else
+ err = -EOPNOTSUPP;
+ #endif
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 8f9c19d992ac5c..d5aada7bad571c 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -33,6 +33,7 @@
+ #include <net/ip.h>
+ #include <net/ipv6.h>
+ #include <net/addrconf.h>
++#include <net/dst_metadata.h>
+ #include <net/route.h>
+ #include <net/netfilter/br_netfilter.h>
+ #include <net/netns/generic.h>
+@@ -878,6 +879,10 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
+ return br_dev_queue_push_xmit(net, sk, skb);
+ }
+
++ /* Fragmentation on metadata/template dst is not supported */
++ if (unlikely(!skb_valid_dst(skb)))
++ goto drop;
++
+ /* This is wrong! We should preserve the original fragment
+ * boundaries by preserving frag_list rather than refragmenting.
+ */
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index f17dbac7d82843..6b97ae47f85524 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -1920,7 +1920,10 @@ int __init br_netlink_init(void)
+ {
+ int err;
+
+- br_vlan_rtnl_init();
++ err = br_vlan_rtnl_init();
++ if (err)
++ goto out;
++
+ rtnl_af_register(&br_af_ops);
+
+ err = rtnl_link_register(&br_link_ops);
+@@ -1931,6 +1934,7 @@ int __init br_netlink_init(void)
+
+ out_af:
+ rtnl_af_unregister(&br_af_ops);
++out:
+ return err;
+ }
+
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index d4bedc87b1d8f1..041f6e571a2097 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -1571,7 +1571,7 @@ void br_vlan_get_stats(const struct net_bridge_vlan *v,
+ void br_vlan_port_event(struct net_bridge_port *p, unsigned long event);
+ int br_vlan_bridge_event(struct net_device *dev, unsigned long event,
+ void *ptr);
+-void br_vlan_rtnl_init(void);
++int br_vlan_rtnl_init(void);
+ void br_vlan_rtnl_uninit(void);
+ void br_vlan_notify(const struct net_bridge *br,
+ const struct net_bridge_port *p,
+@@ -1802,8 +1802,9 @@ static inline int br_vlan_bridge_event(struct net_device *dev,
+ return 0;
+ }
+
+-static inline void br_vlan_rtnl_init(void)
++static inline int br_vlan_rtnl_init(void)
+ {
++ return 0;
+ }
+
+ static inline void br_vlan_rtnl_uninit(void)
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index 9c2fffb827ab19..89f51ea4cabece 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -2296,19 +2296,18 @@ static int br_vlan_rtm_process(struct sk_buff *skb, struct nlmsghdr *nlh,
+ return err;
+ }
+
+-void br_vlan_rtnl_init(void)
++static const struct rtnl_msg_handler br_vlan_rtnl_msg_handlers[] = {
++ {THIS_MODULE, PF_BRIDGE, RTM_NEWVLAN, br_vlan_rtm_process, NULL, 0},
++ {THIS_MODULE, PF_BRIDGE, RTM_DELVLAN, br_vlan_rtm_process, NULL, 0},
++ {THIS_MODULE, PF_BRIDGE, RTM_GETVLAN, NULL, br_vlan_rtm_dump, 0},
++};
++
++int br_vlan_rtnl_init(void)
+ {
+- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETVLAN, NULL,
+- br_vlan_rtm_dump, 0);
+- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_NEWVLAN,
+- br_vlan_rtm_process, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_DELVLAN,
+- br_vlan_rtm_process, NULL, 0);
++ return rtnl_register_many(br_vlan_rtnl_msg_handlers);
+ }
+
+ void br_vlan_rtnl_uninit(void)
+ {
+- rtnl_unregister(PF_BRIDGE, RTM_GETVLAN);
+- rtnl_unregister(PF_BRIDGE, RTM_NEWVLAN);
+- rtnl_unregister(PF_BRIDGE, RTM_DELVLAN);
++ rtnl_unregister_many(br_vlan_rtnl_msg_handlers);
+ }
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 95f533844f17f1..9552a90d4772dc 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -109,9 +109,6 @@ static void dst_destroy(struct dst_entry *dst)
+ child = xdst->child;
+ }
+ #endif
+- if (!(dst->flags & DST_NOCOUNT))
+- dst_entries_add(dst->ops, -1);
+-
+ if (dst->ops->destroy)
+ dst->ops->destroy(dst);
+ netdev_put(dst->dev, &dst->dev_tracker);
+@@ -159,17 +156,27 @@ void dst_dev_put(struct dst_entry *dst)
+ }
+ EXPORT_SYMBOL(dst_dev_put);
+
++static void dst_count_dec(struct dst_entry *dst)
++{
++ if (!(dst->flags & DST_NOCOUNT))
++ dst_entries_add(dst->ops, -1);
++}
++
+ void dst_release(struct dst_entry *dst)
+ {
+- if (dst && rcuref_put(&dst->__rcuref))
++ if (dst && rcuref_put(&dst->__rcuref)) {
++ dst_count_dec(dst);
+ call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
++ }
+ }
+ EXPORT_SYMBOL(dst_release);
+
+ void dst_release_immediate(struct dst_entry *dst)
+ {
+- if (dst && rcuref_put(&dst->__rcuref))
++ if (dst && rcuref_put(&dst->__rcuref)) {
++ dst_count_dec(dst);
+ dst_destroy(dst);
++ }
+ }
+ EXPORT_SYMBOL(dst_release_immediate);
+
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 73fd7f543fd095..97a38a7e1b2cc3 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -384,6 +384,35 @@ void rtnl_unregister_all(int protocol)
+ }
+ EXPORT_SYMBOL_GPL(rtnl_unregister_all);
+
++int __rtnl_register_many(const struct rtnl_msg_handler *handlers, int n)
++{
++ const struct rtnl_msg_handler *handler;
++ int i, err;
++
++ for (i = 0, handler = handlers; i < n; i++, handler++) {
++ err = rtnl_register_internal(handler->owner, handler->protocol,
++ handler->msgtype, handler->doit,
++ handler->dumpit, handler->flags);
++ if (err) {
++ __rtnl_unregister_many(handlers, i);
++ break;
++ }
++ }
++
++ return err;
++}
++EXPORT_SYMBOL_GPL(__rtnl_register_many);
++
++void __rtnl_unregister_many(const struct rtnl_msg_handler *handlers, int n)
++{
++ const struct rtnl_msg_handler *handler;
++ int i;
++
++ for (i = n - 1, handler = handlers + n - 1; i >= 0; i--, handler--)
++ rtnl_unregister(handler->protocol, handler->msgtype);
++}
++EXPORT_SYMBOL_GPL(__rtnl_unregister_many);
++
+ static LIST_HEAD(link_ops);
+
+ static const struct rtnl_link_ops *rtnl_link_ops_get(const char *kind)
+diff --git a/net/dsa/user.c b/net/dsa/user.c
+index f5adfa1d978a28..ac34d5d1deb097 100644
+--- a/net/dsa/user.c
++++ b/net/dsa/user.c
+@@ -1392,6 +1392,14 @@ dsa_user_add_cls_matchall_mirred(struct net_device *dev,
+ if (!dsa_user_dev_check(act->dev))
+ return -EOPNOTSUPP;
+
++ to_dp = dsa_user_to_port(act->dev);
++
++ if (dp->ds != to_dp->ds) {
++ NL_SET_ERR_MSG_MOD(extack,
++ "Cross-chip mirroring not implemented");
++ return -EOPNOTSUPP;
++ }
++
+ mall_tc_entry = kzalloc(sizeof(*mall_tc_entry), GFP_KERNEL);
+ if (!mall_tc_entry)
+ return -ENOMEM;
+@@ -1399,9 +1407,6 @@ dsa_user_add_cls_matchall_mirred(struct net_device *dev,
+ mall_tc_entry->cookie = cls->cookie;
+ mall_tc_entry->type = DSA_PORT_MALL_MIRROR;
+ mirror = &mall_tc_entry->mirror;
+-
+- to_dp = dsa_user_to_port(act->dev);
+-
+ mirror->to_local_port = to_dp->index;
+ mirror->ingress = ingress;
+
+diff --git a/net/ipv4/netfilter/nf_reject_ipv4.c b/net/ipv4/netfilter/nf_reject_ipv4.c
+index 04504b2b51df56..87fd945a0d27a5 100644
+--- a/net/ipv4/netfilter/nf_reject_ipv4.c
++++ b/net/ipv4/netfilter/nf_reject_ipv4.c
+@@ -239,9 +239,8 @@ static int nf_reject_fill_skb_dst(struct sk_buff *skb_in)
+ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ int hook)
+ {
+- struct sk_buff *nskb;
+- struct iphdr *niph;
+ const struct tcphdr *oth;
++ struct sk_buff *nskb;
+ struct tcphdr _oth;
+
+ oth = nf_reject_ip_tcphdr_get(oldskb, &_oth, hook);
+@@ -266,14 +265,12 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ nskb->mark = IP4_REPLY_MARK(net, oldskb->mark);
+
+ skb_reserve(nskb, LL_MAX_HEADER);
+- niph = nf_reject_iphdr_put(nskb, oldskb, IPPROTO_TCP,
+- ip4_dst_hoplimit(skb_dst(nskb)));
++ nf_reject_iphdr_put(nskb, oldskb, IPPROTO_TCP,
++ ip4_dst_hoplimit(skb_dst(nskb)));
+ nf_reject_ip_tcphdr_put(nskb, oldskb, oth);
+ if (ip_route_me_harder(net, sk, nskb, RTN_UNSPEC))
+ goto free_nskb;
+
+- niph = ip_hdr(nskb);
+-
+ /* "Never happens" */
+ if (nskb->len > dst_mtu(skb_dst(nskb)))
+ goto free_nskb;
+@@ -290,6 +287,7 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ */
+ if (nf_bridge_info_exists(oldskb)) {
+ struct ethhdr *oeth = eth_hdr(oldskb);
++ struct iphdr *niph = ip_hdr(nskb);
+ struct net_device *br_indev;
+
+ br_indev = nf_bridge_get_physindev(oldskb, net);
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 9eee535c64dd48..ba233fdd81886b 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -66,6 +66,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ .flowi4_scope = RT_SCOPE_UNIVERSE,
+ .flowi4_iif = LOOPBACK_IFINDEX,
+ .flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
++ .flowi4_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
+ };
+ const struct net_device *oif;
+ const struct net_device *found;
+@@ -84,9 +85,6 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ else
+ oif = NULL;
+
+- if (priv->flags & NFTA_FIB_F_IIF)
+- fl4.flowi4_l3mdev = l3mdev_master_ifindex_rcu(oif);
+-
+ if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+ nft_fib_store_result(dest, priv, nft_in(pkt));
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index e37488d3453f03..889db23bfc05de 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2473,8 +2473,22 @@ static bool tcp_skb_spurious_retrans(const struct tcp_sock *tp,
+ */
+ static inline bool tcp_packet_delayed(const struct tcp_sock *tp)
+ {
+- return tp->retrans_stamp &&
+- tcp_tsopt_ecr_before(tp, tp->retrans_stamp);
++ const struct sock *sk = (const struct sock *)tp;
++
++ if (tp->retrans_stamp &&
++ tcp_tsopt_ecr_before(tp, tp->retrans_stamp))
++ return true; /* got echoed TS before first retransmission */
++
++ /* Check if nothing was retransmitted (retrans_stamp==0), which may
++ * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp
++ * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear
++ * retrans_stamp even if we had retransmitted the SYN.
++ */
++ if (!tp->retrans_stamp && /* no record of a retransmit/SYN? */
++ sk->sk_state != TCP_SYN_SENT) /* not the FLAG_SYN_ACKED case? */
++ return true; /* nothing was retransmitted */
++
++ return false;
+ }
+
+ /* Undo procedures. */
+@@ -2508,6 +2522,16 @@ static bool tcp_any_retrans_done(const struct sock *sk)
+ return false;
+ }
+
++/* If loss recovery is finished and there are no retransmits out in the
++ * network, then we clear retrans_stamp so that upon the next loss recovery
++ * retransmits_timed_out() and timestamp-undo are using the correct value.
++ */
++static void tcp_retrans_stamp_cleanup(struct sock *sk)
++{
++ if (!tcp_any_retrans_done(sk))
++ tcp_sk(sk)->retrans_stamp = 0;
++}
++
+ static void DBGUNDO(struct sock *sk, const char *msg)
+ {
+ #if FASTRETRANS_DEBUG > 1
+@@ -2875,6 +2899,9 @@ void tcp_enter_recovery(struct sock *sk, bool ece_ack)
+ struct tcp_sock *tp = tcp_sk(sk);
+ int mib_idx;
+
++ /* Start the clock with our fast retransmit, for undo and ETIMEDOUT. */
++ tcp_retrans_stamp_cleanup(sk);
++
+ if (tcp_is_reno(tp))
+ mib_idx = LINUX_MIB_TCPRENORECOVERY;
+ else
+@@ -6650,10 +6677,17 @@ static void tcp_rcv_synrecv_state_fastopen(struct sock *sk)
+ if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss && !tp->packets_out)
+ tcp_try_undo_recovery(sk);
+
+- /* Reset rtx states to prevent spurious retransmits_timed_out() */
+ tcp_update_rto_time(tp);
+- tp->retrans_stamp = 0;
+ inet_csk(sk)->icsk_retransmits = 0;
++ /* In tcp_fastopen_synack_timer() on the first SYNACK RTO we set
++ * retrans_stamp but don't enter CA_Loss, so in case that happened we
++ * need to zero retrans_stamp here to prevent spurious
++ * retransmits_timed_out(). However, if the ACK of our SYNACK caused us
++ * to enter CA_Recovery then we need to leave retrans_stamp as it was
++ * set entering CA_Recovery, for correct retransmits_timed_out() and
++ * undo behavior.
++ */
++ tcp_retrans_stamp_cleanup(sk);
+
+ /* Once we leave TCP_SYN_RECV or TCP_FIN_WAIT_1,
+ * we no longer need req so release it.
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index b9457473c176df..7db0437140bf22 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -273,7 +273,6 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ const struct tcphdr *otcph;
+ unsigned int otcplen, hh_len;
+ const struct ipv6hdr *oip6h = ipv6_hdr(oldskb);
+- struct ipv6hdr *ip6h;
+ struct dst_entry *dst = NULL;
+ struct flowi6 fl6;
+
+@@ -329,8 +328,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ nskb->mark = fl6.flowi6_mark;
+
+ skb_reserve(nskb, hh_len + dst->header_len);
+- ip6h = nf_reject_ip6hdr_put(nskb, oldskb, IPPROTO_TCP,
+- ip6_dst_hoplimit(dst));
++ nf_reject_ip6hdr_put(nskb, oldskb, IPPROTO_TCP, ip6_dst_hoplimit(dst));
+ nf_reject_ip6_tcphdr_put(nskb, oldskb, otcph, otcplen);
+
+ nf_ct_attach(nskb, oldskb);
+@@ -345,6 +343,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ */
+ if (nf_bridge_info_exists(oldskb)) {
+ struct ethhdr *oeth = eth_hdr(oldskb);
++ struct ipv6hdr *ip6h = ipv6_hdr(nskb);
+ struct net_device *br_indev;
+
+ br_indev = nf_bridge_get_physindev(oldskb, net);
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 36dc14b34388c8..c9f1634b3838ae 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -41,8 +41,6 @@ static int nft_fib6_flowi_init(struct flowi6 *fl6, const struct nft_fib *priv,
+ if (ipv6_addr_type(&fl6->daddr) & IPV6_ADDR_LINKLOCAL) {
+ lookup_flags |= RT6_LOOKUP_F_IFACE;
+ fl6->flowi6_oif = get_ifindex(dev ? dev : pkt->skb->dev);
+- } else if (priv->flags & NFTA_FIB_F_IIF) {
+- fl6->flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev);
+ }
+
+ if (ipv6_addr_type(&fl6->saddr) & IPV6_ADDR_UNICAST)
+@@ -75,6 +73,8 @@ static u32 __nft_fib6_eval_type(const struct nft_fib *priv,
+ else if (priv->flags & NFTA_FIB_F_OIF)
+ dev = nft_out(pkt);
+
++ fl6.flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev);
++
+ nft_fib6_flowi_init(&fl6, priv, pkt, dev, iph);
+
+ if (dev && nf_ipv6_chk_addr(nft_net(pkt), &fl6.daddr, dev, true))
+@@ -165,6 +165,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ .flowi6_iif = LOOPBACK_IFINDEX,
+ .flowi6_proto = pkt->tprot,
+ .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
++ .flowi6_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
+ };
+ struct rt6_info *rt;
+ int lookup_flags;
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index de52a9191da0ed..75f4790d996233 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -753,10 +753,14 @@ static __init int mctp_init(void)
+ if (rc)
+ goto err_unreg_routes;
+
+- mctp_device_init();
++ rc = mctp_device_init();
++ if (rc)
++ goto err_unreg_neigh;
+
+ return 0;
+
++err_unreg_neigh:
++ mctp_neigh_exit();
+ err_unreg_routes:
+ mctp_routes_exit();
+ err_unreg_proto:
+diff --git a/net/mctp/device.c b/net/mctp/device.c
+index acb97b25742896..85cc5f31f1e7c0 100644
+--- a/net/mctp/device.c
++++ b/net/mctp/device.c
+@@ -524,25 +524,31 @@ static struct notifier_block mctp_dev_nb = {
+ .priority = ADDRCONF_NOTIFY_PRIORITY,
+ };
+
+-void __init mctp_device_init(void)
++static const struct rtnl_msg_handler mctp_device_rtnl_msg_handlers[] = {
++ {THIS_MODULE, PF_MCTP, RTM_NEWADDR, mctp_rtm_newaddr, NULL, 0},
++ {THIS_MODULE, PF_MCTP, RTM_DELADDR, mctp_rtm_deladdr, NULL, 0},
++ {THIS_MODULE, PF_MCTP, RTM_GETADDR, NULL, mctp_dump_addrinfo, 0},
++};
++
++int __init mctp_device_init(void)
+ {
+- register_netdevice_notifier(&mctp_dev_nb);
++ int err;
+
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_GETADDR,
+- NULL, mctp_dump_addrinfo, 0);
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_NEWADDR,
+- mctp_rtm_newaddr, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_DELADDR,
+- mctp_rtm_deladdr, NULL, 0);
++ register_netdevice_notifier(&mctp_dev_nb);
+ rtnl_af_register(&mctp_af_ops);
++
++ err = rtnl_register_many(mctp_device_rtnl_msg_handlers);
++ if (err) {
++ rtnl_af_unregister(&mctp_af_ops);
++ unregister_netdevice_notifier(&mctp_dev_nb);
++ }
++
++ return err;
+ }
+
+ void __exit mctp_device_exit(void)
+ {
++ rtnl_unregister_many(mctp_device_rtnl_msg_handlers);
+ rtnl_af_unregister(&mctp_af_ops);
+- rtnl_unregister(PF_MCTP, RTM_DELADDR);
+- rtnl_unregister(PF_MCTP, RTM_NEWADDR);
+- rtnl_unregister(PF_MCTP, RTM_GETADDR);
+-
+ unregister_netdevice_notifier(&mctp_dev_nb);
+ }
+diff --git a/net/mctp/neigh.c b/net/mctp/neigh.c
+index ffa0f9e0983fba..590f642413e4ef 100644
+--- a/net/mctp/neigh.c
++++ b/net/mctp/neigh.c
+@@ -322,22 +322,29 @@ static struct pernet_operations mctp_net_ops = {
+ .exit = mctp_neigh_net_exit,
+ };
+
++static const struct rtnl_msg_handler mctp_neigh_rtnl_msg_handlers[] = {
++ {THIS_MODULE, PF_MCTP, RTM_NEWNEIGH, mctp_rtm_newneigh, NULL, 0},
++ {THIS_MODULE, PF_MCTP, RTM_DELNEIGH, mctp_rtm_delneigh, NULL, 0},
++ {THIS_MODULE, PF_MCTP, RTM_GETNEIGH, NULL, mctp_rtm_getneigh, 0},
++};
++
+ int __init mctp_neigh_init(void)
+ {
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_NEWNEIGH,
+- mctp_rtm_newneigh, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_DELNEIGH,
+- mctp_rtm_delneigh, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_GETNEIGH,
+- NULL, mctp_rtm_getneigh, 0);
+-
+- return register_pernet_subsys(&mctp_net_ops);
++ int err;
++
++ err = register_pernet_subsys(&mctp_net_ops);
++ if (err)
++ return err;
++
++ err = rtnl_register_many(mctp_neigh_rtnl_msg_handlers);
++ if (err)
++ unregister_pernet_subsys(&mctp_net_ops);
++
++ return err;
+ }
+
+-void __exit mctp_neigh_exit(void)
++void mctp_neigh_exit(void)
+ {
++ rtnl_unregister_many(mctp_neigh_rtnl_msg_handlers);
+ unregister_pernet_subsys(&mctp_net_ops);
+- rtnl_unregister(PF_MCTP, RTM_GETNEIGH);
+- rtnl_unregister(PF_MCTP, RTM_DELNEIGH);
+- rtnl_unregister(PF_MCTP, RTM_NEWNEIGH);
+ }
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index eefd7834d9a009..597e9cf5aa6444 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -1474,26 +1474,39 @@ static struct pernet_operations mctp_net_ops = {
+ .exit = mctp_routes_net_exit,
+ };
+
++static const struct rtnl_msg_handler mctp_route_rtnl_msg_handlers[] = {
++ {THIS_MODULE, PF_MCTP, RTM_NEWROUTE, mctp_newroute, NULL, 0},
++ {THIS_MODULE, PF_MCTP, RTM_DELROUTE, mctp_delroute, NULL, 0},
++ {THIS_MODULE, PF_MCTP, RTM_GETROUTE, NULL, mctp_dump_rtinfo, 0},
++};
++
+ int __init mctp_routes_init(void)
+ {
++ int err;
++
+ dev_add_pack(&mctp_packet_type);
+
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_GETROUTE,
+- NULL, mctp_dump_rtinfo, 0);
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_NEWROUTE,
+- mctp_newroute, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_DELROUTE,
+- mctp_delroute, NULL, 0);
++ err = register_pernet_subsys(&mctp_net_ops);
++ if (err)
++ goto err_pernet;
++
++ err = rtnl_register_many(mctp_route_rtnl_msg_handlers);
++ if (err)
++ goto err_rtnl;
+
+- return register_pernet_subsys(&mctp_net_ops);
++ return 0;
++
++err_rtnl:
++ unregister_pernet_subsys(&mctp_net_ops);
++err_pernet:
++ dev_remove_pack(&mctp_packet_type);
++ return err;
+ }
+
+ void mctp_routes_exit(void)
+ {
++ rtnl_unregister_many(mctp_route_rtnl_msg_handlers);
+ unregister_pernet_subsys(&mctp_net_ops);
+- rtnl_unregister(PF_MCTP, RTM_DELROUTE);
+- rtnl_unregister(PF_MCTP, RTM_NEWROUTE);
+- rtnl_unregister(PF_MCTP, RTM_GETROUTE);
+ dev_remove_pack(&mctp_packet_type);
+ }
+
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 0e6c94a8c2bc63..4f6836677e40b4 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -2730,6 +2730,15 @@ static struct rtnl_af_ops mpls_af_ops __read_mostly = {
+ .get_stats_af_size = mpls_get_stats_af_size,
+ };
+
++static const struct rtnl_msg_handler mpls_rtnl_msg_handlers[] __initdata_or_module = {
++ {THIS_MODULE, PF_MPLS, RTM_NEWROUTE, mpls_rtm_newroute, NULL, 0},
++ {THIS_MODULE, PF_MPLS, RTM_DELROUTE, mpls_rtm_delroute, NULL, 0},
++ {THIS_MODULE, PF_MPLS, RTM_GETROUTE, mpls_getroute, mpls_dump_routes, 0},
++ {THIS_MODULE, PF_MPLS, RTM_GETNETCONF,
++ mpls_netconf_get_devconf, mpls_netconf_dump_devconf,
++ RTNL_FLAG_DUMP_UNLOCKED},
++};
++
+ static int __init mpls_init(void)
+ {
+ int err;
+@@ -2748,24 +2757,25 @@ static int __init mpls_init(void)
+
+ rtnl_af_register(&mpls_af_ops);
+
+- rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_NEWROUTE,
+- mpls_rtm_newroute, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_DELROUTE,
+- mpls_rtm_delroute, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_GETROUTE,
+- mpls_getroute, mpls_dump_routes, 0);
+- rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_GETNETCONF,
+- mpls_netconf_get_devconf,
+- mpls_netconf_dump_devconf,
+- RTNL_FLAG_DUMP_UNLOCKED);
+- err = ipgre_tunnel_encap_add_mpls_ops();
++ err = rtnl_register_many(mpls_rtnl_msg_handlers);
+ if (err)
++ goto out_unregister_rtnl_af;
++
++ err = ipgre_tunnel_encap_add_mpls_ops();
++ if (err) {
+ pr_err("Can't add mpls over gre tunnel ops\n");
++ goto out_unregister_rtnl;
++ }
+
+ err = 0;
+ out:
+ return err;
+
++out_unregister_rtnl:
++ rtnl_unregister_many(mpls_rtnl_msg_handlers);
++out_unregister_rtnl_af:
++ rtnl_af_unregister(&mpls_af_ops);
++ dev_remove_pack(&mpls_packet_type);
+ out_unregister_pernet:
+ unregister_pernet_subsys(&mpls_net_ops);
+ goto out;
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index 7884217f33eb2f..79f2278434b950 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -26,6 +26,8 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ SNMP_MIB_ITEM("MPJoinAckRx", MPTCP_MIB_JOINACKRX),
+ SNMP_MIB_ITEM("MPJoinAckHMacFailure", MPTCP_MIB_JOINACKMAC),
+ SNMP_MIB_ITEM("DSSNotMatching", MPTCP_MIB_DSSNOMATCH),
++ SNMP_MIB_ITEM("DSSCorruptionFallback", MPTCP_MIB_DSSCORRUPTIONFALLBACK),
++ SNMP_MIB_ITEM("DSSCorruptionReset", MPTCP_MIB_DSSCORRUPTIONRESET),
+ SNMP_MIB_ITEM("InfiniteMapTx", MPTCP_MIB_INFINITEMAPTX),
+ SNMP_MIB_ITEM("InfiniteMapRx", MPTCP_MIB_INFINITEMAPRX),
+ SNMP_MIB_ITEM("DSSNoMatchTCP", MPTCP_MIB_DSSTCPMISMATCH),
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index 66aa67f49d032c..3f4ca290be6904 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -21,6 +21,8 @@ enum linux_mptcp_mib_field {
+ MPTCP_MIB_JOINACKRX, /* Received an ACK + MP_JOIN */
+ MPTCP_MIB_JOINACKMAC, /* HMAC was wrong on ACK + MP_JOIN */
+ MPTCP_MIB_DSSNOMATCH, /* Received a new mapping that did not match the previous one */
++ MPTCP_MIB_DSSCORRUPTIONFALLBACK,/* DSS corruption detected, fallback */
++ MPTCP_MIB_DSSCORRUPTIONRESET, /* DSS corruption detected, MPJ subflow reset */
+ MPTCP_MIB_INFINITEMAPTX, /* Sent an infinite mapping */
+ MPTCP_MIB_INFINITEMAPRX, /* Received an infinite mapping */
+ MPTCP_MIB_DSSTCPMISMATCH, /* DSS-mapping did not map with TCP's sequence numbers */
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index ad935d34c973a9..6c586b2b377bbe 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -856,7 +856,8 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ int how = RCV_SHUTDOWN | SEND_SHUTDOWN;
+ u8 id = subflow_get_local_id(subflow);
+
+- if (inet_sk_state_load(ssk) == TCP_CLOSE)
++ if ((1 << inet_sk_state_load(ssk)) &
++ (TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | TCPF_CLOSING | TCPF_CLOSE))
+ continue;
+ if (rm_type == MPTCP_MIB_RMADDR && remote_id != rm_id)
+ continue;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 37ebcb7640ebb3..d4b3bc46cdaaf5 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -620,6 +620,18 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ return ret;
+ }
+
++static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
++{
++ if (READ_ONCE(msk->allow_infinite_fallback)) {
++ MPTCP_INC_STATS(sock_net(ssk),
++ MPTCP_MIB_DSSCORRUPTIONFALLBACK);
++ mptcp_do_fallback(ssk);
++ } else {
++ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DSSCORRUPTIONRESET);
++ mptcp_subflow_reset(ssk);
++ }
++}
++
+ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
+ struct sock *ssk,
+ unsigned int *bytes)
+@@ -692,10 +704,16 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
+ moved += len;
+ seq += len;
+
+- if (WARN_ON_ONCE(map_remaining < len))
+- break;
++ if (unlikely(map_remaining < len)) {
++ DEBUG_NET_WARN_ON_ONCE(1);
++ mptcp_dss_corruption(msk, ssk);
++ }
+ } else {
+- WARN_ON_ONCE(!fin);
++ if (unlikely(!fin)) {
++ DEBUG_NET_WARN_ON_ONCE(1);
++ mptcp_dss_corruption(msk, ssk);
++ }
++
+ sk_eat_skb(ssk, skb);
+ done = true;
+ }
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 064ab32358934d..7d14da95a28305 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -971,8 +971,10 @@ static bool skb_is_fully_mapped(struct sock *ssk, struct sk_buff *skb)
+ unsigned int skb_consumed;
+
+ skb_consumed = tcp_sk(ssk)->copied_seq - TCP_SKB_CB(skb)->seq;
+- if (WARN_ON_ONCE(skb_consumed >= skb->len))
++ if (unlikely(skb_consumed >= skb->len)) {
++ DEBUG_NET_WARN_ON_ONCE(1);
+ return true;
++ }
+
+ return skb->len - skb_consumed <= subflow->map_data_len -
+ mptcp_subflow_get_map_offset(subflow);
+@@ -1276,7 +1278,7 @@ static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
+ else if (READ_ONCE(msk->csum_enabled))
+ return !subflow->valid_csum_seen;
+ else
+- return !subflow->fully_established;
++ return READ_ONCE(msk->allow_infinite_fallback);
+ }
+
+ static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index 016c816d91cbc4..c212b1b1372222 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -183,7 +183,35 @@ hash_by_src(const struct net *net,
+ return reciprocal_scale(hash, nf_nat_htable_size);
+ }
+
+-/* Is this tuple already taken? (not by us) */
++/**
++ * nf_nat_used_tuple - check if proposed nat tuple clashes with existing entry
++ * @tuple: proposed NAT binding
++ * @ignored_conntrack: our (unconfirmed) conntrack entry
++ *
++ * A conntrack entry can be inserted to the connection tracking table
++ * if there is no existing entry with an identical tuple in either direction.
++ *
++ * Example:
++ * INITIATOR -> NAT/PAT -> RESPONDER
++ *
++ * INITIATOR passes through NAT/PAT ("us") and SNAT is done (saddr rewrite).
++ * Then, later, NAT/PAT itself also connects to RESPONDER.
++ *
++ * This will not work if the SNAT done earlier has the same IP:PORT source pair.
++ *
++ * Conntrack table has:
++ * ORIGINAL: $IP_INITIATOR:$SPORT -> $IP_RESPONDER:$DPORT
++ * REPLY: $IP_RESPONDER:$DPORT -> $IP_NAT:$SPORT
++ *
++ * and a new locally originating connection wants:
++ * ORIGINAL: $IP_NAT:$SPORT -> $IP_RESPONDER:$DPORT
++ * REPLY: $IP_RESPONDER:$DPORT -> $IP_NAT:$SPORT
++ *
++ * ... which would mean incoming packets cannot be distinguished between
++ * the existing and the newly added entry (identical IP_CT_DIR_REPLY tuple).
++ *
++ * Return: true if the proposed NAT mapping collides with an existing entry.
++ */
+ static int
+ nf_nat_used_tuple(const struct nf_conntrack_tuple *tuple,
+ const struct nf_conn *ignored_conntrack)
+@@ -200,6 +228,94 @@ nf_nat_used_tuple(const struct nf_conntrack_tuple *tuple,
+ return nf_conntrack_tuple_taken(&reply, ignored_conntrack);
+ }
+
++static bool nf_nat_allow_clash(const struct nf_conn *ct)
++{
++ return nf_ct_l4proto_find(nf_ct_protonum(ct))->allow_clash;
++}
++
++/**
++ * nf_nat_used_tuple_new - check if to-be-inserted conntrack collides with existing entry
++ * @tuple: proposed NAT binding
++ * @ignored_ct: our (unconfirmed) conntrack entry
++ *
++ * Same as nf_nat_used_tuple(), but also check for a rare clash in the
++ * reverse direction. Should be called only when @tuple has not been
++ * altered, i.e. @ignored_ct will not be subject to NAT.
++ *
++ * Return: true if the proposed NAT mapping collides with an existing entry.
++ */
++static noinline bool
++nf_nat_used_tuple_new(const struct nf_conntrack_tuple *tuple,
++ const struct nf_conn *ignored_ct)
++{
++ static const unsigned long uses_nat = IPS_NAT_MASK | IPS_SEQ_ADJUST_BIT;
++ const struct nf_conntrack_tuple_hash *thash;
++ const struct nf_conntrack_zone *zone;
++ struct nf_conn *ct;
++ bool taken = true;
++ struct net *net;
++
++ if (!nf_nat_used_tuple(tuple, ignored_ct))
++ return false;
++
++ if (!nf_nat_allow_clash(ignored_ct))
++ return true;
++
++ /* Initial choice clashes with existing conntrack.
++ * Check for (rare) reverse collision.
++ *
++ * This can happen when new packets are received in both directions
++ * at the exact same time on different CPUs.
++ *
++ * Without SMP, the first packet creates a new conntrack entry and the
++ * second packet is resolved as an established reply packet.
++ *
++ * With parallel processing, both packets could be picked up as new and
++ * both get their own ct entry allocated.
++ *
++ * If @ignored_ct and the colliding ct are not subject to NAT, then
++ * pretend the tuple is available and let later clash resolution handle
++ * this at insertion time.
++ *
++ * Without this, the 'reply' packet would have its source port rewritten
++ * by the NAT engine.
++ */
++ if (READ_ONCE(ignored_ct->status) & uses_nat)
++ return true;
++
++ net = nf_ct_net(ignored_ct);
++ zone = nf_ct_zone(ignored_ct);
++
++ thash = nf_conntrack_find_get(net, zone, tuple);
++ if (unlikely(!thash)) /* clashing entry went away */
++ return false;
++
++ ct = nf_ct_tuplehash_to_ctrack(thash);
++
++ /* NB: IP_CT_DIR_ORIGINAL should be impossible because
++ * nf_nat_used_tuple() handles origin collisions.
++ *
++ * Handle the remote chance that another CPU confirmed its ct right after.
++ */
++ if (thash->tuple.dst.dir != IP_CT_DIR_REPLY)
++ goto out;
++
++ /* clashing connection subject to NAT? Retry with new tuple. */
++ if (READ_ONCE(ct->status) & uses_nat)
++ goto out;
++
++ if (nf_ct_tuple_equal(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
++ &ignored_ct->tuplehash[IP_CT_DIR_REPLY].tuple) &&
++ nf_ct_tuple_equal(&ct->tuplehash[IP_CT_DIR_REPLY].tuple,
++ &ignored_ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple)) {
++ taken = false;
++ goto out;
++ }
++out:
++ nf_ct_put(ct);
++ return taken;
++}
++
+ static bool nf_nat_may_kill(struct nf_conn *ct, unsigned long flags)
+ {
+ static const unsigned long flags_refuse = IPS_FIXED_TIMEOUT |
+@@ -611,7 +727,7 @@ get_unique_tuple(struct nf_conntrack_tuple *tuple,
+ !(range->flags & NF_NAT_RANGE_PROTO_RANDOM_ALL)) {
+ /* try the original tuple first */
+ if (nf_in_range(orig_tuple, range)) {
+- if (!nf_nat_used_tuple(orig_tuple, ct)) {
++ if (!nf_nat_used_tuple_new(orig_tuple, ct)) {
+ *tuple = *orig_tuple;
+ return;
+ }
+diff --git a/net/netfilter/xt_CHECKSUM.c b/net/netfilter/xt_CHECKSUM.c
+index c8a639f5616841..9d99f5a3d1764b 100644
+--- a/net/netfilter/xt_CHECKSUM.c
++++ b/net/netfilter/xt_CHECKSUM.c
+@@ -63,24 +63,37 @@ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ return 0;
+ }
+
+-static struct xt_target checksum_tg_reg __read_mostly = {
+- .name = "CHECKSUM",
+- .family = NFPROTO_UNSPEC,
+- .target = checksum_tg,
+- .targetsize = sizeof(struct xt_CHECKSUM_info),
+- .table = "mangle",
+- .checkentry = checksum_tg_check,
+- .me = THIS_MODULE,
++static struct xt_target checksum_tg_reg[] __read_mostly = {
++ {
++ .name = "CHECKSUM",
++ .family = NFPROTO_IPV4,
++ .target = checksum_tg,
++ .targetsize = sizeof(struct xt_CHECKSUM_info),
++ .table = "mangle",
++ .checkentry = checksum_tg_check,
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "CHECKSUM",
++ .family = NFPROTO_IPV6,
++ .target = checksum_tg,
++ .targetsize = sizeof(struct xt_CHECKSUM_info),
++ .table = "mangle",
++ .checkentry = checksum_tg_check,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init checksum_tg_init(void)
+ {
+- return xt_register_target(&checksum_tg_reg);
++ return xt_register_targets(checksum_tg_reg, ARRAY_SIZE(checksum_tg_reg));
+ }
+
+ static void __exit checksum_tg_exit(void)
+ {
+- xt_unregister_target(&checksum_tg_reg);
++ xt_unregister_targets(checksum_tg_reg, ARRAY_SIZE(checksum_tg_reg));
+ }
+
+ module_init(checksum_tg_init);
+diff --git a/net/netfilter/xt_CLASSIFY.c b/net/netfilter/xt_CLASSIFY.c
+index 0accac98dea784..0ae8d8a1216e19 100644
+--- a/net/netfilter/xt_CLASSIFY.c
++++ b/net/netfilter/xt_CLASSIFY.c
+@@ -38,9 +38,9 @@ static struct xt_target classify_tg_reg[] __read_mostly = {
+ {
+ .name = "CLASSIFY",
+ .revision = 0,
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .hooks = (1 << NF_INET_LOCAL_OUT) | (1 << NF_INET_FORWARD) |
+- (1 << NF_INET_POST_ROUTING),
++ (1 << NF_INET_POST_ROUTING),
+ .target = classify_tg,
+ .targetsize = sizeof(struct xt_classify_target_info),
+ .me = THIS_MODULE,
+@@ -54,6 +54,18 @@ static struct xt_target classify_tg_reg[] __read_mostly = {
+ .targetsize = sizeof(struct xt_classify_target_info),
+ .me = THIS_MODULE,
+ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "CLASSIFY",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .hooks = (1 << NF_INET_LOCAL_OUT) | (1 << NF_INET_FORWARD) |
++ (1 << NF_INET_POST_ROUTING),
++ .target = classify_tg,
++ .targetsize = sizeof(struct xt_classify_target_info),
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init classify_tg_init(void)
+diff --git a/net/netfilter/xt_CONNSECMARK.c b/net/netfilter/xt_CONNSECMARK.c
+index 76acecf3e757a0..1494b3ee30e11e 100644
+--- a/net/netfilter/xt_CONNSECMARK.c
++++ b/net/netfilter/xt_CONNSECMARK.c
+@@ -114,25 +114,39 @@ static void connsecmark_tg_destroy(const struct xt_tgdtor_param *par)
+ nf_ct_netns_put(par->net, par->family);
+ }
+
+-static struct xt_target connsecmark_tg_reg __read_mostly = {
+- .name = "CONNSECMARK",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .checkentry = connsecmark_tg_check,
+- .destroy = connsecmark_tg_destroy,
+- .target = connsecmark_tg,
+- .targetsize = sizeof(struct xt_connsecmark_target_info),
+- .me = THIS_MODULE,
++static struct xt_target connsecmark_tg_reg[] __read_mostly = {
++ {
++ .name = "CONNSECMARK",
++ .revision = 0,
++ .family = NFPROTO_IPV4,
++ .checkentry = connsecmark_tg_check,
++ .destroy = connsecmark_tg_destroy,
++ .target = connsecmark_tg,
++ .targetsize = sizeof(struct xt_connsecmark_target_info),
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "CONNSECMARK",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .checkentry = connsecmark_tg_check,
++ .destroy = connsecmark_tg_destroy,
++ .target = connsecmark_tg,
++ .targetsize = sizeof(struct xt_connsecmark_target_info),
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init connsecmark_tg_init(void)
+ {
+- return xt_register_target(&connsecmark_tg_reg);
++ return xt_register_targets(connsecmark_tg_reg, ARRAY_SIZE(connsecmark_tg_reg));
+ }
+
+ static void __exit connsecmark_tg_exit(void)
+ {
+- xt_unregister_target(&connsecmark_tg_reg);
++ xt_unregister_targets(connsecmark_tg_reg, ARRAY_SIZE(connsecmark_tg_reg));
+ }
+
+ module_init(connsecmark_tg_init);
+diff --git a/net/netfilter/xt_CT.c b/net/netfilter/xt_CT.c
+index 2be2f7a7b60f4e..3ba94c34297cf5 100644
+--- a/net/netfilter/xt_CT.c
++++ b/net/netfilter/xt_CT.c
+@@ -313,10 +313,30 @@ static void xt_ct_tg_destroy_v1(const struct xt_tgdtor_param *par)
+ xt_ct_tg_destroy(par, par->targinfo);
+ }
+
++static unsigned int
++notrack_tg(struct sk_buff *skb, const struct xt_action_param *par)
++{
++ /* Previously seen (loopback)? Ignore. */
++ if (skb->_nfct != 0)
++ return XT_CONTINUE;
++
++ nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
++
++ return XT_CONTINUE;
++}
++
+ static struct xt_target xt_ct_tg_reg[] __read_mostly = {
++ {
++ .name = "NOTRACK",
++ .revision = 0,
++ .family = NFPROTO_IPV4,
++ .target = notrack_tg,
++ .table = "raw",
++ .me = THIS_MODULE,
++ },
+ {
+ .name = "CT",
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .targetsize = sizeof(struct xt_ct_target_info),
+ .usersize = offsetof(struct xt_ct_target_info, ct),
+ .checkentry = xt_ct_tg_check_v0,
+@@ -327,7 +347,7 @@ static struct xt_target xt_ct_tg_reg[] __read_mostly = {
+ },
+ {
+ .name = "CT",
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .revision = 1,
+ .targetsize = sizeof(struct xt_ct_target_info_v1),
+ .usersize = offsetof(struct xt_ct_target_info, ct),
+@@ -339,7 +359,7 @@ static struct xt_target xt_ct_tg_reg[] __read_mostly = {
+ },
+ {
+ .name = "CT",
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .revision = 2,
+ .targetsize = sizeof(struct xt_ct_target_info_v1),
+ .usersize = offsetof(struct xt_ct_target_info, ct),
+@@ -349,49 +369,61 @@ static struct xt_target xt_ct_tg_reg[] __read_mostly = {
+ .table = "raw",
+ .me = THIS_MODULE,
+ },
+-};
+-
+-static unsigned int
+-notrack_tg(struct sk_buff *skb, const struct xt_action_param *par)
+-{
+- /* Previously seen (loopback)? Ignore. */
+- if (skb->_nfct != 0)
+- return XT_CONTINUE;
+-
+- nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+-
+- return XT_CONTINUE;
+-}
+-
+-static struct xt_target notrack_tg_reg __read_mostly = {
+- .name = "NOTRACK",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .target = notrack_tg,
+- .table = "raw",
+- .me = THIS_MODULE,
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "NOTRACK",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .target = notrack_tg,
++ .table = "raw",
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "CT",
++ .family = NFPROTO_IPV6,
++ .targetsize = sizeof(struct xt_ct_target_info),
++ .usersize = offsetof(struct xt_ct_target_info, ct),
++ .checkentry = xt_ct_tg_check_v0,
++ .destroy = xt_ct_tg_destroy_v0,
++ .target = xt_ct_target_v0,
++ .table = "raw",
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "CT",
++ .family = NFPROTO_IPV6,
++ .revision = 1,
++ .targetsize = sizeof(struct xt_ct_target_info_v1),
++ .usersize = offsetof(struct xt_ct_target_info, ct),
++ .checkentry = xt_ct_tg_check_v1,
++ .destroy = xt_ct_tg_destroy_v1,
++ .target = xt_ct_target_v1,
++ .table = "raw",
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "CT",
++ .family = NFPROTO_IPV6,
++ .revision = 2,
++ .targetsize = sizeof(struct xt_ct_target_info_v1),
++ .usersize = offsetof(struct xt_ct_target_info, ct),
++ .checkentry = xt_ct_tg_check_v2,
++ .destroy = xt_ct_tg_destroy_v1,
++ .target = xt_ct_target_v1,
++ .table = "raw",
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init xt_ct_tg_init(void)
+ {
+- int ret;
+-
+- ret = xt_register_target(¬rack_tg_reg);
+- if (ret < 0)
+- return ret;
+-
+- ret = xt_register_targets(xt_ct_tg_reg, ARRAY_SIZE(xt_ct_tg_reg));
+- if (ret < 0) {
+- xt_unregister_target(¬rack_tg_reg);
+- return ret;
+- }
+- return 0;
++ return xt_register_targets(xt_ct_tg_reg, ARRAY_SIZE(xt_ct_tg_reg));
+ }
+
+ static void __exit xt_ct_tg_exit(void)
+ {
+ xt_unregister_targets(xt_ct_tg_reg, ARRAY_SIZE(xt_ct_tg_reg));
+- xt_unregister_target(¬rack_tg_reg);
+ }
+
+ module_init(xt_ct_tg_init);
+diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
+index db720efa811d58..f8b25b6f5da736 100644
+--- a/net/netfilter/xt_IDLETIMER.c
++++ b/net/netfilter/xt_IDLETIMER.c
+@@ -458,28 +458,49 @@ static void idletimer_tg_destroy_v1(const struct xt_tgdtor_param *par)
+
+ static struct xt_target idletimer_tg[] __read_mostly = {
+ {
+- .name = "IDLETIMER",
+- .family = NFPROTO_UNSPEC,
+- .target = idletimer_tg_target,
+- .targetsize = sizeof(struct idletimer_tg_info),
+- .usersize = offsetof(struct idletimer_tg_info, timer),
+- .checkentry = idletimer_tg_checkentry,
+- .destroy = idletimer_tg_destroy,
+- .me = THIS_MODULE,
++ .name = "IDLETIMER",
++ .family = NFPROTO_IPV4,
++ .target = idletimer_tg_target,
++ .targetsize = sizeof(struct idletimer_tg_info),
++ .usersize = offsetof(struct idletimer_tg_info, timer),
++ .checkentry = idletimer_tg_checkentry,
++ .destroy = idletimer_tg_destroy,
++ .me = THIS_MODULE,
+ },
+ {
+- .name = "IDLETIMER",
+- .family = NFPROTO_UNSPEC,
+- .revision = 1,
+- .target = idletimer_tg_target_v1,
+- .targetsize = sizeof(struct idletimer_tg_info_v1),
+- .usersize = offsetof(struct idletimer_tg_info_v1, timer),
+- .checkentry = idletimer_tg_checkentry_v1,
+- .destroy = idletimer_tg_destroy_v1,
+- .me = THIS_MODULE,
++ .name = "IDLETIMER",
++ .family = NFPROTO_IPV4,
++ .revision = 1,
++ .target = idletimer_tg_target_v1,
++ .targetsize = sizeof(struct idletimer_tg_info_v1),
++ .usersize = offsetof(struct idletimer_tg_info_v1, timer),
++ .checkentry = idletimer_tg_checkentry_v1,
++ .destroy = idletimer_tg_destroy_v1,
++ .me = THIS_MODULE,
+ },
+-
+-
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "IDLETIMER",
++ .family = NFPROTO_IPV6,
++ .target = idletimer_tg_target,
++ .targetsize = sizeof(struct idletimer_tg_info),
++ .usersize = offsetof(struct idletimer_tg_info, timer),
++ .checkentry = idletimer_tg_checkentry,
++ .destroy = idletimer_tg_destroy,
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "IDLETIMER",
++ .family = NFPROTO_IPV6,
++ .revision = 1,
++ .target = idletimer_tg_target_v1,
++ .targetsize = sizeof(struct idletimer_tg_info_v1),
++ .usersize = offsetof(struct idletimer_tg_info_v1, timer),
++ .checkentry = idletimer_tg_checkentry_v1,
++ .destroy = idletimer_tg_destroy_v1,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static struct class *idletimer_tg_class;
+diff --git a/net/netfilter/xt_LED.c b/net/netfilter/xt_LED.c
+index 36c9720ad8d6d4..f7b0286d106ac1 100644
+--- a/net/netfilter/xt_LED.c
++++ b/net/netfilter/xt_LED.c
+@@ -175,26 +175,41 @@ static void led_tg_destroy(const struct xt_tgdtor_param *par)
+ kfree(ledinternal);
+ }
+
+-static struct xt_target led_tg_reg __read_mostly = {
+- .name = "LED",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .target = led_tg,
+- .targetsize = sizeof(struct xt_led_info),
+- .usersize = offsetof(struct xt_led_info, internal_data),
+- .checkentry = led_tg_check,
+- .destroy = led_tg_destroy,
+- .me = THIS_MODULE,
++static struct xt_target led_tg_reg[] __read_mostly = {
++ {
++ .name = "LED",
++ .revision = 0,
++ .family = NFPROTO_IPV4,
++ .target = led_tg,
++ .targetsize = sizeof(struct xt_led_info),
++ .usersize = offsetof(struct xt_led_info, internal_data),
++ .checkentry = led_tg_check,
++ .destroy = led_tg_destroy,
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "LED",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .target = led_tg,
++ .targetsize = sizeof(struct xt_led_info),
++ .usersize = offsetof(struct xt_led_info, internal_data),
++ .checkentry = led_tg_check,
++ .destroy = led_tg_destroy,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init led_tg_init(void)
+ {
+- return xt_register_target(&led_tg_reg);
++ return xt_register_targets(led_tg_reg, ARRAY_SIZE(led_tg_reg));
+ }
+
+ static void __exit led_tg_exit(void)
+ {
+- xt_unregister_target(&led_tg_reg);
++ xt_unregister_targets(led_tg_reg, ARRAY_SIZE(led_tg_reg));
+ }
+
+ module_init(led_tg_init);
+diff --git a/net/netfilter/xt_NFLOG.c b/net/netfilter/xt_NFLOG.c
+index e660c3710a1096..d80abd6ccaf8f7 100644
+--- a/net/netfilter/xt_NFLOG.c
++++ b/net/netfilter/xt_NFLOG.c
+@@ -64,25 +64,39 @@ static void nflog_tg_destroy(const struct xt_tgdtor_param *par)
+ nf_logger_put(par->family, NF_LOG_TYPE_ULOG);
+ }
+
+-static struct xt_target nflog_tg_reg __read_mostly = {
+- .name = "NFLOG",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .checkentry = nflog_tg_check,
+- .destroy = nflog_tg_destroy,
+- .target = nflog_tg,
+- .targetsize = sizeof(struct xt_nflog_info),
+- .me = THIS_MODULE,
++static struct xt_target nflog_tg_reg[] __read_mostly = {
++ {
++ .name = "NFLOG",
++ .revision = 0,
++ .family = NFPROTO_IPV4,
++ .checkentry = nflog_tg_check,
++ .destroy = nflog_tg_destroy,
++ .target = nflog_tg,
++ .targetsize = sizeof(struct xt_nflog_info),
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "NFLOG",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .checkentry = nflog_tg_check,
++ .destroy = nflog_tg_destroy,
++ .target = nflog_tg,
++ .targetsize = sizeof(struct xt_nflog_info),
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init nflog_tg_init(void)
+ {
+- return xt_register_target(&nflog_tg_reg);
++ return xt_register_targets(nflog_tg_reg, ARRAY_SIZE(nflog_tg_reg));
+ }
+
+ static void __exit nflog_tg_exit(void)
+ {
+- xt_unregister_target(&nflog_tg_reg);
++ xt_unregister_targets(nflog_tg_reg, ARRAY_SIZE(nflog_tg_reg));
+ }
+
+ module_init(nflog_tg_init);
+diff --git a/net/netfilter/xt_RATEEST.c b/net/netfilter/xt_RATEEST.c
+index 80f6624e23554b..4f49cfc2783120 100644
+--- a/net/netfilter/xt_RATEEST.c
++++ b/net/netfilter/xt_RATEEST.c
+@@ -179,16 +179,31 @@ static void xt_rateest_tg_destroy(const struct xt_tgdtor_param *par)
+ xt_rateest_put(par->net, info->est);
+ }
+
+-static struct xt_target xt_rateest_tg_reg __read_mostly = {
+- .name = "RATEEST",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .target = xt_rateest_tg,
+- .checkentry = xt_rateest_tg_checkentry,
+- .destroy = xt_rateest_tg_destroy,
+- .targetsize = sizeof(struct xt_rateest_target_info),
+- .usersize = offsetof(struct xt_rateest_target_info, est),
+- .me = THIS_MODULE,
++static struct xt_target xt_rateest_tg_reg[] __read_mostly = {
++ {
++ .name = "RATEEST",
++ .revision = 0,
++ .family = NFPROTO_IPV4,
++ .target = xt_rateest_tg,
++ .checkentry = xt_rateest_tg_checkentry,
++ .destroy = xt_rateest_tg_destroy,
++ .targetsize = sizeof(struct xt_rateest_target_info),
++ .usersize = offsetof(struct xt_rateest_target_info, est),
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "RATEEST",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .target = xt_rateest_tg,
++ .checkentry = xt_rateest_tg_checkentry,
++ .destroy = xt_rateest_tg_destroy,
++ .targetsize = sizeof(struct xt_rateest_target_info),
++ .usersize = offsetof(struct xt_rateest_target_info, est),
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static __net_init int xt_rateest_net_init(struct net *net)
+@@ -214,12 +229,12 @@ static int __init xt_rateest_tg_init(void)
+
+ if (err)
+ return err;
+- return xt_register_target(&xt_rateest_tg_reg);
++ return xt_register_targets(xt_rateest_tg_reg, ARRAY_SIZE(xt_rateest_tg_reg));
+ }
+
+ static void __exit xt_rateest_tg_fini(void)
+ {
+- xt_unregister_target(&xt_rateest_tg_reg);
++ xt_unregister_targets(xt_rateest_tg_reg, ARRAY_SIZE(xt_rateest_tg_reg));
+ unregister_pernet_subsys(&xt_rateest_net_ops);
+ }
+
+diff --git a/net/netfilter/xt_SECMARK.c b/net/netfilter/xt_SECMARK.c
+index 498a0bf6f0444a..5bc5ea505eb9e0 100644
+--- a/net/netfilter/xt_SECMARK.c
++++ b/net/netfilter/xt_SECMARK.c
+@@ -157,7 +157,7 @@ static struct xt_target secmark_tg_reg[] __read_mostly = {
+ {
+ .name = "SECMARK",
+ .revision = 0,
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .checkentry = secmark_tg_check_v0,
+ .destroy = secmark_tg_destroy,
+ .target = secmark_tg_v0,
+@@ -167,7 +167,7 @@ static struct xt_target secmark_tg_reg[] __read_mostly = {
+ {
+ .name = "SECMARK",
+ .revision = 1,
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .checkentry = secmark_tg_check_v1,
+ .destroy = secmark_tg_destroy,
+ .target = secmark_tg_v1,
+@@ -175,6 +175,29 @@ static struct xt_target secmark_tg_reg[] __read_mostly = {
+ .usersize = offsetof(struct xt_secmark_target_info_v1, secid),
+ .me = THIS_MODULE,
+ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "SECMARK",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .checkentry = secmark_tg_check_v0,
++ .destroy = secmark_tg_destroy,
++ .target = secmark_tg_v0,
++ .targetsize = sizeof(struct xt_secmark_target_info),
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "SECMARK",
++ .revision = 1,
++ .family = NFPROTO_IPV6,
++ .checkentry = secmark_tg_check_v1,
++ .destroy = secmark_tg_destroy,
++ .target = secmark_tg_v1,
++ .targetsize = sizeof(struct xt_secmark_target_info_v1),
++ .usersize = offsetof(struct xt_secmark_target_info_v1, secid),
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init secmark_tg_init(void)
+diff --git a/net/netfilter/xt_TRACE.c b/net/netfilter/xt_TRACE.c
+index 5582dce98cae7d..f3fa4f11348cd8 100644
+--- a/net/netfilter/xt_TRACE.c
++++ b/net/netfilter/xt_TRACE.c
+@@ -29,25 +29,39 @@ trace_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ return XT_CONTINUE;
+ }
+
+-static struct xt_target trace_tg_reg __read_mostly = {
+- .name = "TRACE",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .table = "raw",
+- .target = trace_tg,
+- .checkentry = trace_tg_check,
+- .destroy = trace_tg_destroy,
+- .me = THIS_MODULE,
++static struct xt_target trace_tg_reg[] __read_mostly = {
++ {
++ .name = "TRACE",
++ .revision = 0,
++ .family = NFPROTO_IPV4,
++ .table = "raw",
++ .target = trace_tg,
++ .checkentry = trace_tg_check,
++ .destroy = trace_tg_destroy,
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "TRACE",
++ .revision = 0,
++ .family = NFPROTO_IPV6,
++ .table = "raw",
++ .target = trace_tg,
++ .checkentry = trace_tg_check,
++ .destroy = trace_tg_destroy,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init trace_tg_init(void)
+ {
+- return xt_register_target(&trace_tg_reg);
++ return xt_register_targets(trace_tg_reg, ARRAY_SIZE(trace_tg_reg));
+ }
+
+ static void __exit trace_tg_exit(void)
+ {
+- xt_unregister_target(&trace_tg_reg);
++ xt_unregister_targets(trace_tg_reg, ARRAY_SIZE(trace_tg_reg));
+ }
+
+ module_init(trace_tg_init);
+diff --git a/net/netfilter/xt_addrtype.c b/net/netfilter/xt_addrtype.c
+index e9b2181e8c425f..a7708894310716 100644
+--- a/net/netfilter/xt_addrtype.c
++++ b/net/netfilter/xt_addrtype.c
+@@ -208,13 +208,24 @@ static struct xt_match addrtype_mt_reg[] __read_mostly = {
+ },
+ {
+ .name = "addrtype",
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .revision = 1,
+ .match = addrtype_mt_v1,
+ .checkentry = addrtype_mt_checkentry_v1,
+ .matchsize = sizeof(struct xt_addrtype_info_v1),
+ .me = THIS_MODULE
+- }
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "addrtype",
++ .family = NFPROTO_IPV6,
++ .revision = 1,
++ .match = addrtype_mt_v1,
++ .checkentry = addrtype_mt_checkentry_v1,
++ .matchsize = sizeof(struct xt_addrtype_info_v1),
++ .me = THIS_MODULE
++ },
++#endif
+ };
+
+ static int __init addrtype_mt_init(void)
+diff --git a/net/netfilter/xt_cluster.c b/net/netfilter/xt_cluster.c
+index a047a545371e18..908fd5f2c3c848 100644
+--- a/net/netfilter/xt_cluster.c
++++ b/net/netfilter/xt_cluster.c
+@@ -146,24 +146,37 @@ static void xt_cluster_mt_destroy(const struct xt_mtdtor_param *par)
+ nf_ct_netns_put(par->net, par->family);
+ }
+
+-static struct xt_match xt_cluster_match __read_mostly = {
+- .name = "cluster",
+- .family = NFPROTO_UNSPEC,
+- .match = xt_cluster_mt,
+- .checkentry = xt_cluster_mt_checkentry,
+- .matchsize = sizeof(struct xt_cluster_match_info),
+- .destroy = xt_cluster_mt_destroy,
+- .me = THIS_MODULE,
++static struct xt_match xt_cluster_match[] __read_mostly = {
++ {
++ .name = "cluster",
++ .family = NFPROTO_IPV4,
++ .match = xt_cluster_mt,
++ .checkentry = xt_cluster_mt_checkentry,
++ .matchsize = sizeof(struct xt_cluster_match_info),
++ .destroy = xt_cluster_mt_destroy,
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "cluster",
++ .family = NFPROTO_IPV6,
++ .match = xt_cluster_mt,
++ .checkentry = xt_cluster_mt_checkentry,
++ .matchsize = sizeof(struct xt_cluster_match_info),
++ .destroy = xt_cluster_mt_destroy,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init xt_cluster_mt_init(void)
+ {
+- return xt_register_match(&xt_cluster_match);
++ return xt_register_matches(xt_cluster_match, ARRAY_SIZE(xt_cluster_match));
+ }
+
+ static void __exit xt_cluster_mt_fini(void)
+ {
+- xt_unregister_match(&xt_cluster_match);
++ xt_unregister_matches(xt_cluster_match, ARRAY_SIZE(xt_cluster_match));
+ }
+
+ MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>");
+diff --git a/net/netfilter/xt_connbytes.c b/net/netfilter/xt_connbytes.c
+index 93cb018c3055f8..2aabdcea870723 100644
+--- a/net/netfilter/xt_connbytes.c
++++ b/net/netfilter/xt_connbytes.c
+@@ -111,9 +111,11 @@ static int connbytes_mt_check(const struct xt_mtchk_param *par)
+ return -EINVAL;
+
+ ret = nf_ct_netns_get(par->net, par->family);
+- if (ret < 0)
++ if (ret < 0) {
+ pr_info_ratelimited("cannot load conntrack support for proto=%u\n",
+ par->family);
++ return ret;
++ }
+
+ /*
+ * This filter cannot function correctly unless connection tracking
+diff --git a/net/netfilter/xt_connlimit.c b/net/netfilter/xt_connlimit.c
+index 5d04ef80a61dcf..d1d0fa6c8061e9 100644
+--- a/net/netfilter/xt_connlimit.c
++++ b/net/netfilter/xt_connlimit.c
+@@ -106,26 +106,41 @@ static void connlimit_mt_destroy(const struct xt_mtdtor_param *par)
+ nf_conncount_destroy(par->net, par->family, info->data);
+ }
+
+-static struct xt_match connlimit_mt_reg __read_mostly = {
+- .name = "connlimit",
+- .revision = 1,
+- .family = NFPROTO_UNSPEC,
+- .checkentry = connlimit_mt_check,
+- .match = connlimit_mt,
+- .matchsize = sizeof(struct xt_connlimit_info),
+- .usersize = offsetof(struct xt_connlimit_info, data),
+- .destroy = connlimit_mt_destroy,
+- .me = THIS_MODULE,
++static struct xt_match connlimit_mt_reg[] __read_mostly = {
++ {
++ .name = "connlimit",
++ .revision = 1,
++ .family = NFPROTO_IPV4,
++ .checkentry = connlimit_mt_check,
++ .match = connlimit_mt,
++ .matchsize = sizeof(struct xt_connlimit_info),
++ .usersize = offsetof(struct xt_connlimit_info, data),
++ .destroy = connlimit_mt_destroy,
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "connlimit",
++ .revision = 1,
++ .family = NFPROTO_IPV6,
++ .checkentry = connlimit_mt_check,
++ .match = connlimit_mt,
++ .matchsize = sizeof(struct xt_connlimit_info),
++ .usersize = offsetof(struct xt_connlimit_info, data),
++ .destroy = connlimit_mt_destroy,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static int __init connlimit_mt_init(void)
+ {
+- return xt_register_match(&connlimit_mt_reg);
++ return xt_register_matches(connlimit_mt_reg, ARRAY_SIZE(connlimit_mt_reg));
+ }
+
+ static void __exit connlimit_mt_exit(void)
+ {
+- xt_unregister_match(&connlimit_mt_reg);
++ xt_unregister_matches(connlimit_mt_reg, ARRAY_SIZE(connlimit_mt_reg));
+ }
+
+ module_init(connlimit_mt_init);
+diff --git a/net/netfilter/xt_connmark.c b/net/netfilter/xt_connmark.c
+index ad3c033db64e70..4277084de2e70c 100644
+--- a/net/netfilter/xt_connmark.c
++++ b/net/netfilter/xt_connmark.c
+@@ -151,7 +151,7 @@ static struct xt_target connmark_tg_reg[] __read_mostly = {
+ {
+ .name = "CONNMARK",
+ .revision = 1,
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .checkentry = connmark_tg_check,
+ .target = connmark_tg,
+ .targetsize = sizeof(struct xt_connmark_tginfo1),
+@@ -161,13 +161,35 @@ static struct xt_target connmark_tg_reg[] __read_mostly = {
+ {
+ .name = "CONNMARK",
+ .revision = 2,
+- .family = NFPROTO_UNSPEC,
++ .family = NFPROTO_IPV4,
+ .checkentry = connmark_tg_check,
+ .target = connmark_tg_v2,
+ .targetsize = sizeof(struct xt_connmark_tginfo2),
+ .destroy = connmark_tg_destroy,
+ .me = THIS_MODULE,
+- }
++ },
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "CONNMARK",
++ .revision = 1,
++ .family = NFPROTO_IPV6,
++ .checkentry = connmark_tg_check,
++ .target = connmark_tg,
++ .targetsize = sizeof(struct xt_connmark_tginfo1),
++ .destroy = connmark_tg_destroy,
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "CONNMARK",
++ .revision = 2,
++ .family = NFPROTO_IPV6,
++ .checkentry = connmark_tg_check,
++ .target = connmark_tg_v2,
++ .targetsize = sizeof(struct xt_connmark_tginfo2),
++ .destroy = connmark_tg_destroy,
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static struct xt_match connmark_mt_reg __read_mostly = {
+diff --git a/net/netfilter/xt_mark.c b/net/netfilter/xt_mark.c
+index 1ad74b5920b533..f76fe04fc9a4e1 100644
+--- a/net/netfilter/xt_mark.c
++++ b/net/netfilter/xt_mark.c
+@@ -39,13 +39,35 @@ mark_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ return ((skb->mark & info->mask) == info->mark) ^ info->invert;
+ }
+
+-static struct xt_target mark_tg_reg __read_mostly = {
+- .name = "MARK",
+- .revision = 2,
+- .family = NFPROTO_UNSPEC,
+- .target = mark_tg,
+- .targetsize = sizeof(struct xt_mark_tginfo2),
+- .me = THIS_MODULE,
++static struct xt_target mark_tg_reg[] __read_mostly = {
++ {
++ .name = "MARK",
++ .revision = 2,
++ .family = NFPROTO_IPV4,
++ .target = mark_tg,
++ .targetsize = sizeof(struct xt_mark_tginfo2),
++ .me = THIS_MODULE,
++ },
++#if IS_ENABLED(CONFIG_IP_NF_ARPTABLES)
++ {
++ .name = "MARK",
++ .revision = 2,
++ .family = NFPROTO_ARP,
++ .target = mark_tg,
++ .targetsize = sizeof(struct xt_mark_tginfo2),
++ .me = THIS_MODULE,
++ },
++#endif
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
++ {
++ .name = "MARK",
++ .revision = 2,
++ .family = NFPROTO_IPV6,
++ .target = mark_tg,
++ .targetsize = sizeof(struct xt_mark_tginfo2),
++ .me = THIS_MODULE,
++ },
++#endif
+ };
+
+ static struct xt_match mark_mt_reg __read_mostly = {
+@@ -61,12 +83,12 @@ static int __init mark_mt_init(void)
+ {
+ int ret;
+
+- ret = xt_register_target(&mark_tg_reg);
++ ret = xt_register_targets(mark_tg_reg, ARRAY_SIZE(mark_tg_reg));
+ if (ret < 0)
+ return ret;
+ ret = xt_register_match(&mark_mt_reg);
+ if (ret < 0) {
+- xt_unregister_target(&mark_tg_reg);
++ xt_unregister_targets(mark_tg_reg, ARRAY_SIZE(mark_tg_reg));
+ return ret;
+ }
+ return 0;
+@@ -75,7 +97,7 @@ static int __init mark_mt_init(void)
+ static void __exit mark_mt_exit(void)
+ {
+ xt_unregister_match(&mark_mt_reg);
+- xt_unregister_target(&mark_tg_reg);
++ xt_unregister_targets(mark_tg_reg, ARRAY_SIZE(mark_tg_reg));
+ }
+
+ module_init(mark_mt_init);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 0b7a89db3ab74e..0a9287fadb47a2 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2136,8 +2136,9 @@ void __netlink_clear_multicast_users(struct sock *ksk, unsigned int group)
+ {
+ struct sock *sk;
+ struct netlink_table *tbl = &nl_table[ksk->sk_protocol];
++ struct hlist_node *tmp;
+
+- sk_for_each_bound(sk, &tbl->mc_list)
++ sk_for_each_bound_safe(sk, tmp, &tbl->mc_list)
+ netlink_update_socket_mc(nlk_sk(sk), group, 0);
+ }
+
+diff --git a/net/phonet/pn_netlink.c b/net/phonet/pn_netlink.c
+index 7008d402499d5b..894e5c72d6bfff 100644
+--- a/net/phonet/pn_netlink.c
++++ b/net/phonet/pn_netlink.c
+@@ -285,23 +285,17 @@ static int route_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+ return err;
+ }
+
++static const struct rtnl_msg_handler phonet_rtnl_msg_handlers[] __initdata_or_module = {
++ {THIS_MODULE, PF_PHONET, RTM_NEWADDR, addr_doit, NULL, 0},
++ {THIS_MODULE, PF_PHONET, RTM_DELADDR, addr_doit, NULL, 0},
++ {THIS_MODULE, PF_PHONET, RTM_GETADDR, NULL, getaddr_dumpit, 0},
++ {THIS_MODULE, PF_PHONET, RTM_NEWROUTE, route_doit, NULL, 0},
++ {THIS_MODULE, PF_PHONET, RTM_DELROUTE, route_doit, NULL, 0},
++ {THIS_MODULE, PF_PHONET, RTM_GETROUTE, NULL, route_dumpit,
++ RTNL_FLAG_DUMP_UNLOCKED},
++};
++
+ int __init phonet_netlink_register(void)
+ {
+- int err = rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_NEWADDR,
+- addr_doit, NULL, 0);
+- if (err)
+- return err;
+-
+- /* Further rtnl_register_module() cannot fail */
+- rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_DELADDR,
+- addr_doit, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_GETADDR,
+- NULL, getaddr_dumpit, 0);
+- rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_NEWROUTE,
+- route_doit, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_DELROUTE,
+- route_doit, NULL, 0);
+- rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_GETROUTE,
+- NULL, route_dumpit, RTNL_FLAG_DUMP_UNLOCKED);
+- return 0;
++ return rtnl_register_many(phonet_rtnl_msg_handlers);
+ }
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 894b8fa68e5e95..23d18fe5de9f0d 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -303,6 +303,11 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+
+ reload:
++ txb = call->tx_pending;
++ call->tx_pending = NULL;
++ if (txb)
++ rxrpc_see_txbuf(txb, rxrpc_txbuf_see_send_more);
++
+ ret = -EPIPE;
+ if (sk->sk_shutdown & SEND_SHUTDOWN)
+ goto maybe_error;
+@@ -329,11 +334,6 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ goto maybe_error;
+ }
+
+- txb = call->tx_pending;
+- call->tx_pending = NULL;
+- if (txb)
+- rxrpc_see_txbuf(txb, rxrpc_txbuf_see_send_more);
+-
+ do {
+ if (!txb) {
+ size_t remain;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 74afc210527d23..2eefa478387997 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -593,7 +593,6 @@ void __qdisc_calculate_pkt_len(struct sk_buff *skb,
+ pkt_len = 1;
+ qdisc_skb_cb(skb)->pkt_len = pkt_len;
+ }
+-EXPORT_SYMBOL(__qdisc_calculate_pkt_len);
+
+ void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc)
+ {
+@@ -1201,6 +1200,12 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ return -EINVAL;
+ }
+
++ if (new &&
++ !(parent->flags & TCQ_F_MQROOT) &&
++ rcu_access_pointer(new->stab)) {
++ NL_SET_ERR_MSG(extack, "STAB not supported on a non root");
++ return -EINVAL;
++ }
+ err = cops->graft(parent, cl, new, &old, extack);
+ if (err)
+ return err;
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 078bcb3858c79c..36ee34f483d703 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8531,6 +8531,7 @@ static int sctp_listen_start(struct sock *sk, int backlog)
+ struct sctp_endpoint *ep = sp->ep;
+ struct crypto_shash *tfm = NULL;
+ char alg[32];
++ int err;
+
+ /* Allocate HMAC for generating cookie. */
+ if (!sp->hmac && sp->sctp_hmac_alg) {
+@@ -8558,18 +8559,25 @@ static int sctp_listen_start(struct sock *sk, int backlog)
+ inet_sk_set_state(sk, SCTP_SS_LISTENING);
+ if (!ep->base.bind_addr.port) {
+ if (sctp_autobind(sk)) {
+- inet_sk_set_state(sk, SCTP_SS_CLOSED);
+- return -EAGAIN;
++ err = -EAGAIN;
++ goto err;
+ }
+ } else {
+ if (sctp_get_port(sk, inet_sk(sk)->inet_num)) {
+- inet_sk_set_state(sk, SCTP_SS_CLOSED);
+- return -EADDRINUSE;
++ err = -EADDRINUSE;
++ goto err;
+ }
+ }
+
+ WRITE_ONCE(sk->sk_max_ack_backlog, backlog);
+- return sctp_hash_endpoint(ep);
++ err = sctp_hash_endpoint(ep);
++ if (err)
++ goto err;
++
++ return 0;
++err:
++ inet_sk_set_state(sk, SCTP_SS_CLOSED);
++ return err;
+ }
+
+ /*
+diff --git a/net/smc/smc_inet.c b/net/smc/smc_inet.c
+index a5b2041600f959..a944e7dcb8b967 100644
+--- a/net/smc/smc_inet.c
++++ b/net/smc/smc_inet.c
+@@ -108,12 +108,23 @@ static struct inet_protosw smc_inet6_protosw = {
+ };
+ #endif /* CONFIG_IPV6 */
+
++static unsigned int smc_sync_mss(struct sock *sk, u32 pmtu)
++{
++ /* No need pass it through to clcsock, mss can always be set by
++ * sock_create_kern or smc_setsockopt.
++ */
++ return 0;
++}
++
+ static int smc_inet_init_sock(struct sock *sk)
+ {
+ struct net *net = sock_net(sk);
+
+ /* init common smc sock */
+ smc_sk_init(net, sk, IPPROTO_SMC);
++
++ inet_csk(sk)->icsk_sync_mss = smc_sync_mss;
++
+ /* create clcsock */
+ return smc_create_clcsk(net, sk, sk->sk_family);
+ }
+diff --git a/net/socket.c b/net/socket.c
+index 0a2bd22ec105c8..87fc3c6047bc8b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1569,8 +1569,13 @@ int __sock_create(struct net *net, int family, int type, int protocol,
+ rcu_read_unlock();
+
+ err = pf->create(net, sock, protocol, kern);
+- if (err < 0)
++ if (err < 0) {
++ /* ->create should release the allocated sock->sk object on error
++ * but it may leave the dangling pointer
++ */
++ sock->sk = NULL;
+ goto out_module_put;
++ }
+
+ /*
+ * Now to bump the refcnt of the [loadable] module that owns this
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b33602e64d174c..e8958a4646476f 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2671,7 +2671,7 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ AZX_DCAPS_PM_RUNTIME },
+ /* GLENFLY */
+- { PCI_DEVICE(0x6766, PCI_ANY_ID),
++ { PCI_DEVICE(PCI_VENDOR_ID_GLENFLY, PCI_ANY_ID),
+ .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
+ .class_mask = 0xffffff,
+ .driver_data = AZX_DRIVER_GFHDMI | AZX_DCAPS_POSFIX_LPIB |
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 12796808f07a8c..ead476b373f693 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -172,7 +172,7 @@ DWARFLIBS := -ldw
+ ifeq ($(findstring -static,${LDFLAGS}),-static)
+ DWARFLIBS += -lelf -lz -llzma -lbz2 -lzstd
+
+- LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw)
++ LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw).0.0
+ LIBDW_VERSION_1 := $(word 1, $(subst ., ,$(LIBDW_VERSION)))
+ LIBDW_VERSION_2 := $(word 2, $(subst ., ,$(LIBDW_VERSION)))
+
+@@ -181,6 +181,9 @@ ifeq ($(findstring -static,${LDFLAGS}),-static)
+ ifeq ($(shell test $(LIBDW_VERSION_2) -lt 177; echo $$?),0)
+ DWARFLIBS += -lebl
+ endif
++
++ # Must put -ldl after -lebl for dependency
++ DWARFLIBS += -ldl
+ endif
+
+ $(OUTPUT)test-dwarf.bin:
+diff --git a/tools/iio/iio_generic_buffer.c b/tools/iio/iio_generic_buffer.c
+index 0d0a7a19d6f952..9ef5ee087eda36 100644
+--- a/tools/iio/iio_generic_buffer.c
++++ b/tools/iio/iio_generic_buffer.c
+@@ -498,6 +498,10 @@ int main(int argc, char **argv)
+ return -ENOMEM;
+ }
+ trigger_name = malloc(IIO_MAX_NAME_LENGTH);
++ if (!trigger_name) {
++ ret = -ENOMEM;
++ goto error;
++ }
+ ret = read_sysfs_string("name", trig_dev_name, trigger_name);
+ free(trig_dev_name);
+ if (ret < 0) {
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index fa679db61f6226..9fccdff682af70 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -152,9 +152,9 @@ ifdef LIBDW_DIR
+ endif
+ DWARFLIBS := -ldw
+ ifeq ($(findstring -static,${LDFLAGS}),-static)
+- DWARFLIBS += -lelf -ldl -lz -llzma -lbz2 -lzstd
++ DWARFLIBS += -lelf -lz -llzma -lbz2 -lzstd
+
+- LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw)
++ LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw).0.0
+ LIBDW_VERSION_1 := $(word 1, $(subst ., ,$(LIBDW_VERSION)))
+ LIBDW_VERSION_2 := $(word 2, $(subst ., ,$(LIBDW_VERSION)))
+
+@@ -163,6 +163,9 @@ ifeq ($(findstring -static,${LDFLAGS}),-static)
+ ifeq ($(shell test $(LIBDW_VERSION_2) -lt 177; echo $$?),0)
+ DWARFLIBS += -lebl
+ endif
++
++ # Must put -ldl after -lebl for dependency
++ DWARFLIBS += -ldl
+ endif
+ FEATURE_CHECK_CFLAGS-libdw-dwarf-unwind := $(LIBDW_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libdw-dwarf-unwind := $(LIBDW_LDFLAGS) $(DWARFLIBS)
+diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
+index 1b6f8f6db7aa47..c12f5d8c4bf696 100644
+--- a/tools/perf/util/vdso.c
++++ b/tools/perf/util/vdso.c
+@@ -308,8 +308,10 @@ static struct dso *machine__find_vdso(struct machine *machine,
+ if (!dso) {
+ dso = dsos__find(&machine->dsos, DSO__NAME_VDSO,
+ true);
+- if (dso && dso_type != dso__type(dso, machine))
++ if (dso && dso_type != dso__type(dso, machine)) {
++ dso__put(dso);
+ dso = NULL;
++ }
+ }
+ break;
+ case DSO__TYPE_X32BIT:
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index eb31cd9c977bfb..e24cd825e70a6b 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -2047,7 +2047,7 @@ sub get_grub_index {
+ } elsif ($reboot_type eq "grub2") {
+ $command = "cat $grub_file";
+ $target = '^\s*menuentry.*' . $grub_menu_qt;
+- $skip = '^\s*menuentry';
++ $skip = '^\s*menuentry\s';
+ $submenu = '^\s*submenu\s';
+ } elsif ($reboot_type eq "grub2bls") {
+ $command = $grub_bls_get;
+diff --git a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
+index 9fc3fae5cd833b..87206803c02557 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
++++ b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
+@@ -8,7 +8,6 @@
+ SEC("socket")
+ __description("ARG_PTR_TO_LONG uninitialized")
+ __success
+-__failure_unpriv __msg_unpriv("invalid indirect read from stack R4 off -16+0 size 8")
+ __naked void arg_ptr_to_long_uninitialized(void)
+ {
+ asm volatile (" \
+@@ -36,9 +35,7 @@ __naked void arg_ptr_to_long_uninitialized(void)
+
+ SEC("socket")
+ __description("ARG_PTR_TO_LONG half-uninitialized")
+-/* in privileged mode reads from uninitialized stack locations are permitted */
+-__success __failure_unpriv
+-__msg_unpriv("invalid indirect read from stack R4 off -16+4 size 8")
++__success
+ __retval(0)
+ __naked void ptr_to_long_half_uninitialized(void)
+ {
+diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
+index d2cfc9b494a0ee..141bf63cbe05ec 100644
+--- a/tools/testing/selftests/mm/hmm-tests.c
++++ b/tools/testing/selftests/mm/hmm-tests.c
+@@ -1657,7 +1657,7 @@ TEST_F(hmm2, double_map)
+
+ buffer->fd = -1;
+ buffer->size = size;
+- buffer->mirror = malloc(npages);
++ buffer->mirror = malloc(size);
+ ASSERT_NE(buffer->mirror, NULL);
+
+ /* Reserve a range of addresses. */
+diff --git a/tools/testing/selftests/net/forwarding/no_forwarding.sh b/tools/testing/selftests/net/forwarding/no_forwarding.sh
+index 9e677aa64a06a6..694ece9ba3a742 100755
+--- a/tools/testing/selftests/net/forwarding/no_forwarding.sh
++++ b/tools/testing/selftests/net/forwarding/no_forwarding.sh
+@@ -202,7 +202,7 @@ one_bridge_two_pvids()
+ ip link set $swp2 master br0
+
+ bridge vlan add dev $swp1 vid 1 pvid untagged
+- bridge vlan add dev $swp1 vid 2 pvid untagged
++ bridge vlan add dev $swp2 vid 2 pvid untagged
+
+ run_test "Switch ports in VLAN-aware bridge with different PVIDs"
+
+diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
+index 96e812bdf8a45c..5b9772cdf2651b 100644
+--- a/tools/testing/selftests/rseq/rseq.c
++++ b/tools/testing/selftests/rseq/rseq.c
+@@ -60,12 +60,6 @@ unsigned int rseq_size = -1U;
+ /* Flags used during rseq registration. */
+ unsigned int rseq_flags;
+
+-/*
+- * rseq feature size supported by the kernel. 0 if the registration was
+- * unsuccessful.
+- */
+-unsigned int rseq_feature_size = -1U;
+-
+ static int rseq_ownership;
+ static int rseq_reg_success; /* At least one rseq registration has succeded. */
+
+@@ -111,6 +105,43 @@ int rseq_available(void)
+ }
+ }
+
++/* The rseq areas need to be at least 32 bytes. */
++static
++unsigned int get_rseq_min_alloc_size(void)
++{
++ unsigned int alloc_size = rseq_size;
++
++ if (alloc_size < ORIG_RSEQ_ALLOC_SIZE)
++ alloc_size = ORIG_RSEQ_ALLOC_SIZE;
++ return alloc_size;
++}
++
++/*
++ * Return the feature size supported by the kernel.
++ *
++ * Depending on the value returned by getauxval(AT_RSEQ_FEATURE_SIZE):
++ *
++ * 0: Return ORIG_RSEQ_FEATURE_SIZE (20)
++ * > 0: Return the value from getauxval(AT_RSEQ_FEATURE_SIZE).
++ *
++ * It should never return a value below ORIG_RSEQ_FEATURE_SIZE.
++ */
++static
++unsigned int get_rseq_kernel_feature_size(void)
++{
++ unsigned long auxv_rseq_feature_size, auxv_rseq_align;
++
++ auxv_rseq_align = getauxval(AT_RSEQ_ALIGN);
++ assert(!auxv_rseq_align || auxv_rseq_align <= RSEQ_THREAD_AREA_ALLOC_SIZE);
++
++ auxv_rseq_feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);
++ assert(!auxv_rseq_feature_size || auxv_rseq_feature_size <= RSEQ_THREAD_AREA_ALLOC_SIZE);
++ if (auxv_rseq_feature_size)
++ return auxv_rseq_feature_size;
++ else
++ return ORIG_RSEQ_FEATURE_SIZE;
++}
++
+ int rseq_register_current_thread(void)
+ {
+ int rc;
+@@ -119,7 +150,7 @@ int rseq_register_current_thread(void)
+ /* Treat libc's ownership as a successful registration. */
+ return 0;
+ }
+- rc = sys_rseq(&__rseq_abi, rseq_size, 0, RSEQ_SIG);
++ rc = sys_rseq(&__rseq_abi, get_rseq_min_alloc_size(), 0, RSEQ_SIG);
+ if (rc) {
+ if (RSEQ_READ_ONCE(rseq_reg_success)) {
+ /* Incoherent success/failure within process. */
+@@ -140,28 +171,12 @@ int rseq_unregister_current_thread(void)
+ /* Treat libc's ownership as a successful unregistration. */
+ return 0;
+ }
+- rc = sys_rseq(&__rseq_abi, rseq_size, RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG);
++ rc = sys_rseq(&__rseq_abi, get_rseq_min_alloc_size(), RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG);
+ if (rc)
+ return -1;
+ return 0;
+ }
+
+-static
+-unsigned int get_rseq_feature_size(void)
+-{
+- unsigned long auxv_rseq_feature_size, auxv_rseq_align;
+-
+- auxv_rseq_align = getauxval(AT_RSEQ_ALIGN);
+- assert(!auxv_rseq_align || auxv_rseq_align <= RSEQ_THREAD_AREA_ALLOC_SIZE);
+-
+- auxv_rseq_feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);
+- assert(!auxv_rseq_feature_size || auxv_rseq_feature_size <= RSEQ_THREAD_AREA_ALLOC_SIZE);
+- if (auxv_rseq_feature_size)
+- return auxv_rseq_feature_size;
+- else
+- return ORIG_RSEQ_FEATURE_SIZE;
+-}
+-
+ static __attribute__((constructor))
+ void rseq_init(void)
+ {
+@@ -178,28 +193,54 @@ void rseq_init(void)
+ }
+ if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p &&
+ *libc_rseq_size_p != 0) {
++ unsigned int libc_rseq_size;
++
+ /* rseq registration owned by glibc */
+ rseq_offset = *libc_rseq_offset_p;
+- rseq_size = *libc_rseq_size_p;
++ libc_rseq_size = *libc_rseq_size_p;
+ rseq_flags = *libc_rseq_flags_p;
+- rseq_feature_size = get_rseq_feature_size();
+- if (rseq_feature_size > rseq_size)
+- rseq_feature_size = rseq_size;
++
++ /*
++ * Previous versions of glibc expose the value
++ * 32 even though the kernel only supported 20
++ * bytes initially. Therefore treat 32 as a
++ * special-case. glibc 2.40 exposes a 20 bytes
++ * __rseq_size without using getauxval(3) to
++ * query the supported size, while still allocating a 32
++ * bytes area. Also treat 20 as a special-case.
++ *
++ * Special-cases are handled by using the following
++ * value as active feature set size:
++ *
++ * rseq_size = min(32, get_rseq_kernel_feature_size())
++ */
++ switch (libc_rseq_size) {
++ case ORIG_RSEQ_FEATURE_SIZE:
++ fallthrough;
++ case ORIG_RSEQ_ALLOC_SIZE:
++ {
++ unsigned int rseq_kernel_feature_size = get_rseq_kernel_feature_size();
++
++ if (rseq_kernel_feature_size < ORIG_RSEQ_ALLOC_SIZE)
++ rseq_size = rseq_kernel_feature_size;
++ else
++ rseq_size = ORIG_RSEQ_ALLOC_SIZE;
++ break;
++ }
++ default:
++ /* Otherwise just use the __rseq_size from libc as rseq_size. */
++ rseq_size = libc_rseq_size;
++ break;
++ }
+ return;
+ }
+ rseq_ownership = 1;
+ if (!rseq_available()) {
+ rseq_size = 0;
+- rseq_feature_size = 0;
+ return;
+ }
+ rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer();
+ rseq_flags = 0;
+- rseq_feature_size = get_rseq_feature_size();
+- if (rseq_feature_size == ORIG_RSEQ_FEATURE_SIZE)
+- rseq_size = ORIG_RSEQ_ALLOC_SIZE;
+- else
+- rseq_size = RSEQ_THREAD_AREA_ALLOC_SIZE;
+ }
+
+ static __attribute__((destructor))
+@@ -209,7 +250,6 @@ void rseq_exit(void)
+ return;
+ rseq_offset = 0;
+ rseq_size = -1U;
+- rseq_feature_size = -1U;
+ rseq_ownership = 0;
+ }
+
+diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
+index d7364ea4d201d2..4e217b620e0c7a 100644
+--- a/tools/testing/selftests/rseq/rseq.h
++++ b/tools/testing/selftests/rseq/rseq.h
+@@ -68,12 +68,6 @@ extern unsigned int rseq_size;
+ /* Flags used during rseq registration. */
+ extern unsigned int rseq_flags;
+
+-/*
+- * rseq feature size supported by the kernel. 0 if the registration was
+- * unsuccessful.
+- */
+-extern unsigned int rseq_feature_size;
+-
+ enum rseq_mo {
+ RSEQ_MO_RELAXED = 0,
+ RSEQ_MO_CONSUME = 1, /* Unused */
+@@ -193,7 +187,7 @@ static inline uint32_t rseq_current_cpu(void)
+
+ static inline bool rseq_node_id_available(void)
+ {
+- return (int) rseq_feature_size >= rseq_offsetofend(struct rseq_abi, node_id);
++ return (int) rseq_size >= rseq_offsetofend(struct rseq_abi, node_id);
+ }
+
+ /*
+@@ -207,7 +201,7 @@ static inline uint32_t rseq_current_node_id(void)
+
+ static inline bool rseq_mm_cid_available(void)
+ {
+- return (int) rseq_feature_size >= rseq_offsetofend(struct rseq_abi, mm_cid);
++ return (int) rseq_size >= rseq_offsetofend(struct rseq_abi, mm_cid);
+ }
+
+ static inline uint32_t rseq_current_mm_cid(void)
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-17 14:31 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-17 14:31 UTC (permalink / raw
To: gentoo-commits
commit: 9db34e305c11fa0748185b0dce072d084cae24e8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 17 14:31:00 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 17 14:31:00 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9db34e30
Remove redundant patch
Removed:
2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ---
...3-Fix-build-issue-by-selecting-CONFIG_REG.patch | 30 ----------------------
2 files changed, 34 deletions(-)
diff --git a/0000_README b/0000_README
index 9e83ebe0..df7729f5 100644
--- a/0000_README
+++ b/0000_README
@@ -75,10 +75,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
-From: https://bugs.gentoo.org/710790
-Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
-
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
deleted file mode 100644
index 43356857..00000000
--- a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
-From: Mike Pagano <mpagano@gentoo.org>
-Date: Mon, 23 Mar 2020 08:20:06 -0400
-Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build. Select it by
- default in Kconfig. Reported at gentoo bugzilla:
- https://bugs.gentoo.org/710790
-Cc: mpagano@gentoo.org
-
-Reported-by: Phil Stracchino <phils@caerllewys.net>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/hwmon/Kconfig | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
-index 47ac20aee06f..530b4f29ba85 100644
---- a/drivers/hwmon/Kconfig
-+++ b/drivers/hwmon/Kconfig
-@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
- config SENSORS_TMP513
- tristate "Texas Instruments TMP513 and compatibles"
- depends on I2C
-+ select REGMAP_I2C
- help
- If you say yes here you get support for Texas Instruments TMP512,
- and TMP513 temperature and power supply sensor chips.
---
-2.24.1
-
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-21 13:35 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-21 13:35 UTC (permalink / raw
To: gentoo-commits
commit: 02df93448e5da7575a47ca81df06650e07d2e154
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 21 13:34:48 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 21 13:34:48 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=02df9344
Update CPU Optimization Patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
5010_enable-cpu-optimizations-universal.patch | 421 ++++++++++++--------------
1 file changed, 188 insertions(+), 233 deletions(-)
diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index b5382da3..0758b0ba 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,26 +1,37 @@
-rom 86977b5357d9212d57841bc325e80f43081bb333 Mon Sep 17 00:00:00 2001
-From: graysky <therealgraysky@proton.me>
-Date: Wed, 21 Feb 2024 08:38:13 -0500
+From d66d4da9e6fbd22780826ec7d55d65c3ecaf1e66 Mon Sep 17 00:00:00 2001
+From: graysky <therealgraysky AT proton DOT me>
+Date: Mon, 16 Sep 2024 05:55:58 -0400
FEATURES
-This patch adds additional CPU options to the Linux kernel accessible under:
- Processor type and features --->
- Processor family --->
+This patch adds additional tunings via new x86-64 ISA levels and
+more micro-architecture options to the Linux kernel in three classes.
-With the release of gcc 11.1 and clang 12.0, several generic 64-bit levels are
-offered which are good for supported Intel or AMD CPUs:
-• x86-64-v2
-• x86-64-v3
-• x86-64-v4
+1. New generic x86-64 ISA levels
+
+These are selectable under:
+ Processor type and features ---> x86-64 compiler ISA level
+
+• x86-64 A value of (1) is the default
+• x86-64-v2 A value of (2) brings support for vector
+ instructions up to Streaming SIMD Extensions 4.2 (SSE4.2)
+ and Supplemental Streaming SIMD Extensions 3 (SSSE3), the
+ POPCNT instruction, and CMPXCHG16B.
+• x86-64-v3 A value of (3) adds vector instructions up to AVX2, MOVBE,
+ and additional bit-manipulation instructions.
+
+There is also x86-64-v4 but including this makes little sense as
+the kernel does not use any of the AVX512 instructions anyway.
Users of glibc 2.33 and above can see which level is supported by running:
- /lib/ld-linux-x86-64.so.2 --help | grep supported
+ /lib/ld-linux-x86-64.so.2 --help | grep supported
Or
- /lib64/ld-linux-x86-64.so.2 --help | grep supported
+ /lib64/ld-linux-x86-64.so.2 --help | grep supported
+
+2. New micro-architectures
-Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
+These are selectable under:
+ Processor type and features ---> Processor family
-CPU-specific microarchitectures include:
• AMD Improved K8-family
• AMD K10-family
• AMD Family 10h (Barcelona)
@@ -63,14 +74,15 @@ Notes: If not otherwise noted, gcc >=9.1 is required for support.
**Required gcc >=10.3 or clang >=12.0
†Required gcc >=11.1 or clang >=12.0
‡Required gcc >=13.0 or clang >=15.0.5
- §Required gcc >=14.1 or clang >=19.0?
+ §Required gcc >14.0 or clang >=19.0?
+3. Auto-detected micro-architecture levels
-It also offers to compile passing the 'native' option which, "selects the CPU
+Compile by passing the '-march=native' option which, "selects the CPU
to generate code for at compilation time by determining the processor type of
the compiling machine. Using -march=native enables all instruction subsets
supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[2]
+machine under the constraints of the selected instruction set."[1]
Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
CPUs should select the 'AMD-Native' option.
@@ -78,9 +90,9 @@ CPUs should select the 'AMD-Native' option.
MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
This patch also changes -march=atom to -march=bonnell in accordance with the
gcc v4.9 changes. Upstream is using the deprecated -match=atom flags when I
-believe it should use the newer -march=bonnell flag for atom processors.[3]
+believe it should use the newer -march=bonnell flag for atom processors.[2]
-It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+It is not recommended to compile on Atom-CPUs with the 'native' option.[3] The
recommendation is to use the 'atom' option instead.
BENEFITS
@@ -88,52 +100,54 @@ Small but real speed increases are measurable using a make endpoint comparing
a generic kernel to one built with one of the respective microarchs.
See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_gcc_patch
+https://github.com/graysky2/kernel_compiler_patch?tab=readme-ov-file#benchmarks
REQUIREMENTS
-linux version 5.17+
+linux version 6.1.79+
gcc version >=9.0 or clang version >=9.0
ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[5]
+This patch builds on the seminal work by Jeroen.[4]
REFERENCES
-1. https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
-2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
-3. https://bugzilla.kernel.org/show_bug.cgi?id=77461
-4. https://github.com/graysky2/kernel_gcc_patch/issues/15
-5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+1. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://github.com/graysky2/kernel_gcc_patch/issues/15
+4. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
---
- arch/x86/Kconfig.cpu | 432 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile | 45 +++-
- arch/x86/include/asm/vermagic.h | 76 ++++++
- 3 files changed, 537 insertions(+), 16 deletions(-)
+ arch/x86/Kconfig.cpu | 359 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile | 87 +++++++-
+ arch/x86/include/asm/vermagic.h | 70 +++++++
+ 3 files changed, 499 insertions(+), 17 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 2a7279d80460a..55941c31ade35 100644
+index 2a7279d80460..abfadddd1b23 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
-@@ -157,7 +157,7 @@ config MPENTIUM4
-
-
+@@ -155,9 +155,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
config MK6
- bool "K6/K6-II/K6-III"
+ bool "AMD K6/K6-II/K6-III"
depends on X86_32
help
Select this for an AMD K6-family processor. Enables use of
-@@ -165,7 +165,7 @@ config MK6
+@@ -165,7 +164,7 @@ config MK6
flags to GCC.
-
+
config MK7
- bool "Athlon/Duron/K7"
+ bool "AMD Athlon/Duron/K7"
depends on X86_32
help
Select this for an AMD Athlon K7-family processor. Enables use of
-@@ -173,12 +173,114 @@ config MK7
+@@ -173,12 +172,114 @@ config MK7
flags to GCC.
-
+
config MK8
- bool "Opteron/Athlon64/Hammer/K8"
+ bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -141,7 +155,7 @@ index 2a7279d80460a..55941c31ade35 100644
Select this for an AMD Opteron or Athlon64 Hammer-family processor.
Enables use of some extended instructions, and passes appropriate
optimization flags to GCC.
-
+
+config MK8SSE3
+ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
+ help
@@ -238,7 +252,7 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MZEN5
+ bool "AMD Zen 5"
-+ depends on (CC_IS_GCC && GCC_VERSION >= 141000) || (CC_IS_CLANG && CLANG_VERSION >= 180000)
++ depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 191000)
+ help
+ Select this for AMD Family 19h Zen 5 processors.
+
@@ -247,40 +261,47 @@ index 2a7279d80460a..55941c31ade35 100644
config MCRUSOE
bool "Crusoe"
depends on X86_32
-@@ -270,7 +372,7 @@ config MPSC
+@@ -269,8 +370,17 @@ config MPSC
+ using the cpu family field
in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-
+
++config MATOM
++ bool "Intel Atom"
++ help
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
config MCORE2
- bool "Core 2/newer Xeon"
+ bool "Intel Core 2"
help
-
+
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,6 +380,8 @@ config MCORE2
+@@ -278,14 +388,191 @@ config MCORE2
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
-
+
+-config MATOM
+- bool "Intel Atom"
+ Enables -march=core2
+
- config MATOM
- bool "Intel Atom"
- help
-@@ -287,6 +391,212 @@ config MATOM
- accordingly optimized code. Use a recent GCC with specific Atom
- support in order to fully benefit from selecting this option.
-
+config MNEHALEM
+ bool "Intel Nehalem"
-+ select X86_P6_NOP
-+ help
-+
+ help
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
+ Select this for 1st Gen Core processors in the Nehalem family.
+
+ Enables -march=nehalem
+
+config MWESTMERE
+ bool "Intel Westmere"
-+ select X86_P6_NOP
+ help
+
+ Select this for the Intel Westmere formerly Nehalem-C family.
@@ -289,7 +310,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MSILVERMONT
+ bool "Intel Silvermont"
-+ select X86_P6_NOP
+ help
+
+ Select this for the Intel Silvermont platform.
@@ -298,7 +318,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MGOLDMONT
+ bool "Intel Goldmont"
-+ select X86_P6_NOP
+ help
+
+ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
@@ -307,7 +326,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MGOLDMONTPLUS
+ bool "Intel Goldmont Plus"
-+ select X86_P6_NOP
+ help
+
+ Select this for the Intel Goldmont Plus platform including Gemini Lake.
@@ -316,7 +334,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MSANDYBRIDGE
+ bool "Intel Sandy Bridge"
-+ select X86_P6_NOP
+ help
+
+ Select this for 2nd Gen Core processors in the Sandy Bridge family.
@@ -325,7 +342,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MIVYBRIDGE
+ bool "Intel Ivy Bridge"
-+ select X86_P6_NOP
+ help
+
+ Select this for 3rd Gen Core processors in the Ivy Bridge family.
@@ -334,7 +350,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MHASWELL
+ bool "Intel Haswell"
-+ select X86_P6_NOP
+ help
+
+ Select this for 4th Gen Core processors in the Haswell family.
@@ -343,7 +358,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MBROADWELL
+ bool "Intel Broadwell"
-+ select X86_P6_NOP
+ help
+
+ Select this for 5th Gen Core processors in the Broadwell family.
@@ -352,7 +366,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MSKYLAKE
+ bool "Intel Skylake"
-+ select X86_P6_NOP
+ help
+
+ Select this for 6th Gen Core processors in the Skylake family.
@@ -361,7 +374,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MSKYLAKEX
+ bool "Intel Skylake X"
-+ select X86_P6_NOP
+ help
+
+ Select this for 6th Gen Core processors in the Skylake X family.
@@ -370,7 +382,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MCANNONLAKE
+ bool "Intel Cannon Lake"
-+ select X86_P6_NOP
+ help
+
+ Select this for 8th Gen Core processors
@@ -379,7 +390,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MICELAKE
+ bool "Intel Ice Lake"
-+ select X86_P6_NOP
+ help
+
+ Select this for 10th Gen Core processors in the Ice Lake family.
@@ -388,7 +398,6 @@ index 2a7279d80460a..55941c31ade35 100644
+
+config MCASCADELAKE
+ bool "Intel Cascade Lake"
-+ select X86_P6_NOP
+ help
+
+ Select this for Xeon processors in the Cascade Lake family.
@@ -398,7 +407,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MCOOPERLAKE
+ bool "Intel Cooper Lake"
+ depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+ select X86_P6_NOP
+ help
+
+ Select this for Xeon processors in the Cooper Lake family.
@@ -408,7 +416,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MTIGERLAKE
+ bool "Intel Tiger Lake"
+ depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+ select X86_P6_NOP
+ help
+
+ Select this for third-generation 10 nm process processors in the Tiger Lake family.
@@ -418,7 +425,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MSAPPHIRERAPIDS
+ bool "Intel Sapphire Rapids"
+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+ select X86_P6_NOP
+ help
+
+ Select this for fourth-generation 10 nm process processors in the Sapphire Rapids family.
@@ -428,7 +434,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MROCKETLAKE
+ bool "Intel Rocket Lake"
+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+ select X86_P6_NOP
+ help
+
+ Select this for eleventh-generation processors in the Rocket Lake family.
@@ -438,7 +443,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MALDERLAKE
+ bool "Intel Alder Lake"
+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+ select X86_P6_NOP
+ help
+
+ Select this for twelfth-generation processors in the Alder Lake family.
@@ -448,7 +452,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MRAPTORLAKE
+ bool "Intel Raptor Lake"
+ depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
-+ select X86_P6_NOP
+ help
+
+ Select this for thirteenth-generation processors in the Raptor Lake family.
@@ -458,7 +461,6 @@ index 2a7279d80460a..55941c31ade35 100644
+config MMETEORLAKE
+ bool "Intel Meteor Lake"
+ depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
-+ select X86_P6_NOP
+ help
+
+ Select this for fourteenth-generation processors in the Meteor Lake family.
@@ -468,44 +470,18 @@ index 2a7279d80460a..55941c31ade35 100644
+config MEMERALDRAPIDS
+ bool "Intel Emerald Rapids"
+ depends on (CC_IS_GCC && GCC_VERSION > 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
-+ select X86_P6_NOP
+ help
+
+ Select this for fifth-generation 10 nm process processors in the Emerald Rapids family.
+
+ Enables -march=emeraldrapids
-+
+
config GENERIC_CPU
bool "Generic-x86-64"
- depends on X86_64
-@@ -294,6 +604,50 @@ config GENERIC_CPU
+@@ -294,6 +581,26 @@ config GENERIC_CPU
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
-
-+config GENERIC_CPU2
-+ bool "Generic-x86-64-v2"
-+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+ depends on X86_64
-+ help
-+ Generic x86-64 CPU.
-+ Run equally well on all x86-64 CPUs with min support of x86-64-v2.
-+
-+config GENERIC_CPU3
-+ bool "Generic-x86-64-v3"
-+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+ depends on X86_64
-+ help
-+ Generic x86-64-v3 CPU with v3 instructions.
-+ Run equally well on all x86-64 CPUs with min support of x86-64-v3.
-+
-+config GENERIC_CPU4
-+ bool "Generic-x86-64-v4"
-+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+ depends on X86_64
-+ help
-+ Generic x86-64 CPU with v4 instructions.
-+ Run equally well on all x86-64 CPUs with min support of x86-64-v4.
-+
+
+config MNATIVE_INTEL
+ bool "Intel-Native optimizations autodetected by the compiler"
+ help
@@ -527,135 +503,80 @@ index 2a7279d80460a..55941c31ade35 100644
+ Enables -march=native
+
endchoice
-
+
config X86_GENERIC
-@@ -318,9 +672,17 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -308,6 +615,30 @@ config X86_GENERIC
+ This is really intended for distributors who need more
+ generic optimizations.
+
++config X86_64_VERSION
++ int "x86-64 compiler ISA level"
++ range 1 3
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ depends on X86_64 && GENERIC_CPU
++ help
++ Specify a specific x86-64 compiler ISA level.
++
++ There are three x86-64 ISA levels that work on top of
++ the x86-64 baseline, namely: x86-64-v2, x86-64-v3, and x86-64-v4.
++
++ x86-64-v2 brings support for vector instructions up to Streaming SIMD
++ Extensions 4.2 (SSE4.2) and Supplemental Streaming SIMD Extensions 3
++ (SSSE3), the POPCNT instruction, and CMPXCHG16B.
++
++ x86-64-v3 adds vector instructions up to AVX2, MOVBE, and additional
++ bit-manipulation instructions.
++
++	  x86-64-v4 is not included since the kernel does not use AVX512 instructions.
++
++ You can find the best version for your CPU by running one of the following:
++ /lib/ld-linux-x86-64.so.2 --help | grep supported
++ /lib64/ld-linux-x86-64.so.2 --help | grep supported
++
+ #
+ # Define implied options from the CPU selection here
+ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +649,7 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || MPSC
- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 \
-+ || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER \
-+ || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT \
-+ || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL \
-+ || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
-+ || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE \
-+ || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 \
-+ || GENERIC_CPU3 || GENERIC_CPU4
++ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
default "4" if MELAN || M486SX || M486 || MGEODEGX1
-- default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII \
-+ || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-
- config X86_F00F_BUG
- def_bool y
-@@ -332,15 +694,27 @@ config X86_INVD_BUG
-
- config X86_ALIGNMENT_16
- def_bool y
-- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC || M586 || M486SX || M486 || MVIAC3_2 || MGEODEGX1
-+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC \
-+ || M586 || M486SX || M486 || MVIAC3_2 || MGEODEGX1
-
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,11 +667,11 @@ config X86_ALIGNMENT_16
+
config X86_INTEL_USERCOPY
def_bool y
- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC \
-+ || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT \
-+ || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX \
-+ || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS \
-+ || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
-
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
+
config X86_USE_PPRO_CHECKSUM
def_bool y
- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM \
-+ || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX \
-+ || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER \
-+ || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM \
-+ || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE \
-+ || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE \
-+ || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE \
-+ || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
-
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
+
#
# P6_NOPs are a relatively minor optimization that require a family >=
-@@ -356,11 +730,22 @@ config X86_USE_PPRO_CHECKSUM
- config X86_P6_NOP
- def_bool y
- depends on X86_64
-- depends on (MCORE2 || MPENTIUM4 || MPSC)
-+ depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT \
-+ || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE \
-+ || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE \
-+ || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS \
-+ || MNATIVE_INTEL)
-
- config X86_TSC
- def_bool y
-- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM \
-+ || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 \
-+ || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER \
-+ || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM \
-+ || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL \
-+ || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
-+ || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS \
-+ || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
-
- config X86_HAVE_PAE
- def_bool y
-@@ -368,18 +753,37 @@ config X86_HAVE_PAE
-
- config X86_CMPXCHG64
- def_bool y
-- depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7
-+ depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
-+ || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 \
-+ || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN \
-+ || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS \
-+ || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE \
-+ || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE \
-+ || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
-
- # this should be set for all -march=.. options where the compiler
- # generates cmov.
- config X86_CMOV
- def_bool y
-- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+ depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
-+ || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 \
-+ || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR \
-+ || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT \
-+ || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX \
-+ || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS \
-+ || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD)
-
- config X86_MINIMUM_CPU_FAMILY
- int
- default "64" if X86_64
-- default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
-+ default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
-+ || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8 || MK8SSE3 \
-+ || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER \
-+ || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT \
-+ || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL \
-+ || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
-+ || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MRAPTORLAKE \
-+ || MNATIVE_INTEL || MNATIVE_AMD)
- default "5" if X86_32 && X86_CMPXCHG64
- default "4"
-
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 5ab93fcdd691d..ac203b599befd 100644
+index cd75e78a06c1..396d1db12bca 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
-@@ -156,8 +156,49 @@ else
- # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
+@@ -181,15 +181,96 @@ else
cflags-$(CONFIG_MK8) += -march=k8
cflags-$(CONFIG_MPSC) += -march=nocona
-- cflags-$(CONFIG_MCORE2) += -march=core2
+ cflags-$(CONFIG_MCORE2) += -march=core2
- cflags-$(CONFIG_MATOM) += -march=atom
+- cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
++ cflags-$(CONFIG_MATOM) += -march=bonnell
++ ifeq ($(CONFIG_X86_64_VERSION),1)
++ cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
++ rustflags-$(CONFIG_GENERIC_CPU) += -Ztune-cpu=generic
++ else
++ cflags-$(CONFIG_GENERIC_CPU) += -march=x86-64-v$(CONFIG_X86_64_VERSION)
++ rustflags-$(CONFIG_GENERIC_CPU) += -Ctarget-cpu=x86-64-v$(CONFIG_X86_64_VERSION)
++ endif
+ cflags-$(CONFIG_MK8SSE3) += -march=k8-sse3
+ cflags-$(CONFIG_MK10) += -march=amdfam10
+ cflags-$(CONFIG_MBARCELONA) += -march=barcelona
@@ -671,9 +592,7 @@ index 5ab93fcdd691d..ac203b599befd 100644
+ cflags-$(CONFIG_MZEN4) += -march=znver4
+ cflags-$(CONFIG_MZEN5) += -march=znver5
+ cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
-+ cflags-$(CONFIG_MNATIVE_AMD) += -march=native
-+ cflags-$(CONFIG_MATOM) += -march=bonnell
-+ cflags-$(CONFIG_MCORE2) += -march=core2
++ cflags-$(CONFIG_MNATIVE_AMD) += -march=native -mno-tbm
+ cflags-$(CONFIG_MNEHALEM) += -march=nehalem
+ cflags-$(CONFIG_MWESTMERE) += -march=westmere
+ cflags-$(CONFIG_MSILVERMONT) += -march=silvermont
@@ -696,14 +615,56 @@ index 5ab93fcdd691d..ac203b599befd 100644
+ cflags-$(CONFIG_MRAPTORLAKE) += -march=raptorlake
+ cflags-$(CONFIG_MMETEORLAKE) += -march=meteorlake
+ cflags-$(CONFIG_MEMERALDRAPIDS) += -march=emeraldrapids
-+ cflags-$(CONFIG_GENERIC_CPU2) += -march=x86-64-v2
-+ cflags-$(CONFIG_GENERIC_CPU3) += -march=x86-64-v3
-+ cflags-$(CONFIG_GENERIC_CPU4) += -march=x86-64-v4
- cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
KBUILD_CFLAGS += $(cflags-y)
-
+
+ rustflags-$(CONFIG_MK8) += -Ctarget-cpu=k8
+ rustflags-$(CONFIG_MPSC) += -Ctarget-cpu=nocona
+ rustflags-$(CONFIG_MCORE2) += -Ctarget-cpu=core2
+ rustflags-$(CONFIG_MATOM) += -Ctarget-cpu=atom
+- rustflags-$(CONFIG_GENERIC_CPU) += -Ztune-cpu=generic
++ rustflags-$(CONFIG_MK8SSE3) += -Ctarget-cpu=k8-sse3
++ rustflags-$(CONFIG_MK10) += -Ctarget-cpu=amdfam10
++ rustflags-$(CONFIG_MBARCELONA) += -Ctarget-cpu=barcelona
++ rustflags-$(CONFIG_MBOBCAT) += -Ctarget-cpu=btver1
++ rustflags-$(CONFIG_MJAGUAR) += -Ctarget-cpu=btver2
++ rustflags-$(CONFIG_MBULLDOZER) += -Ctarget-cpu=bdver1
++ rustflags-$(CONFIG_MPILEDRIVER) += -Ctarget-cpu=bdver2
++ rustflags-$(CONFIG_MSTEAMROLLER) += -Ctarget-cpu=bdver3
++ rustflags-$(CONFIG_MEXCAVATOR) += -Ctarget-cpu=bdver4
++ rustflags-$(CONFIG_MZEN) += -Ctarget-cpu=znver1
++ rustflags-$(CONFIG_MZEN2) += -Ctarget-cpu=znver2
++ rustflags-$(CONFIG_MZEN3) += -Ctarget-cpu=znver3
++ rustflags-$(CONFIG_MZEN4) += -Ctarget-cpu=znver4
++ rustflags-$(CONFIG_MZEN5) += -Ctarget-cpu=znver5
++ rustflags-$(CONFIG_MNATIVE_INTEL) += -Ctarget-cpu=native
++ rustflags-$(CONFIG_MNATIVE_AMD) += -Ctarget-cpu=native
++ rustflags-$(CONFIG_MNEHALEM) += -Ctarget-cpu=nehalem
++ rustflags-$(CONFIG_MWESTMERE) += -Ctarget-cpu=westmere
++ rustflags-$(CONFIG_MSILVERMONT) += -Ctarget-cpu=silvermont
++ rustflags-$(CONFIG_MGOLDMONT) += -Ctarget-cpu=goldmont
++ rustflags-$(CONFIG_MGOLDMONTPLUS) += -Ctarget-cpu=goldmont-plus
++ rustflags-$(CONFIG_MSANDYBRIDGE) += -Ctarget-cpu=sandybridge
++ rustflags-$(CONFIG_MIVYBRIDGE) += -Ctarget-cpu=ivybridge
++ rustflags-$(CONFIG_MHASWELL) += -Ctarget-cpu=haswell
++ rustflags-$(CONFIG_MBROADWELL) += -Ctarget-cpu=broadwell
++ rustflags-$(CONFIG_MSKYLAKE) += -Ctarget-cpu=skylake
++ rustflags-$(CONFIG_MSKYLAKEX) += -Ctarget-cpu=skylake-avx512
++ rustflags-$(CONFIG_MCANNONLAKE) += -Ctarget-cpu=cannonlake
++ rustflags-$(CONFIG_MICELAKE) += -Ctarget-cpu=icelake-client
++ rustflags-$(CONFIG_MCASCADELAKE) += -Ctarget-cpu=cascadelake
++ rustflags-$(CONFIG_MCOOPERLAKE) += -Ctarget-cpu=cooperlake
++ rustflags-$(CONFIG_MTIGERLAKE) += -Ctarget-cpu=tigerlake
++ rustflags-$(CONFIG_MSAPPHIRERAPIDS) += -Ctarget-cpu=sapphirerapids
++ rustflags-$(CONFIG_MROCKETLAKE) += -Ctarget-cpu=rocketlake
++ rustflags-$(CONFIG_MALDERLAKE) += -Ctarget-cpu=alderlake
++ rustflags-$(CONFIG_MRAPTORLAKE) += -Ctarget-cpu=raptorlake
++ rustflags-$(CONFIG_MMETEORLAKE) += -Ctarget-cpu=meteorlake
++ rustflags-$(CONFIG_MEMERALDRAPIDS) += -Ctarget-cpu=emeraldrapids
+ KBUILD_RUSTFLAGS += $(rustflags-y)
+
+ KBUILD_CFLAGS += -mno-red-zone
diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec37..7acca9b5a9d56 100644
+index 75884d2cdec3..f4e29563473d 100644
--- a/arch/x86/include/asm/vermagic.h
+++ b/arch/x86/include/asm/vermagic.h
@@ -17,6 +17,54 @@
@@ -761,7 +722,7 @@ index 75884d2cdec37..7acca9b5a9d56 100644
#elif defined CONFIG_MATOM
#define MODULE_PROC_FAMILY "ATOM "
#elif defined CONFIG_M686
-@@ -35,6 +83,34 @@
+@@ -35,6 +83,28 @@
#define MODULE_PROC_FAMILY "K7 "
#elif defined CONFIG_MK8
#define MODULE_PROC_FAMILY "K8 "
@@ -787,15 +748,9 @@ index 75884d2cdec37..7acca9b5a9d56 100644
+#define MODULE_PROC_FAMILY "ZEN "
+#elif defined CONFIG_MZEN2
+#define MODULE_PROC_FAMILY "ZEN2 "
-+#elif defined CONFIG_MZEN3
-+#define MODULE_PROC_FAMILY "ZEN3 "
-+#elif defined CONFIG_MZEN4
-+#define MODULE_PROC_FAMILY "ZEN4 "
-+#elif defined CONFIG_MZEN5
-+#define MODULE_PROC_FAMILY "ZEN5 "
#elif defined CONFIG_MELAN
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
---
-2.45.0
+--
+2.46.2
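The X86_64_VERSION help text in the patch above suggests asking the glibc dynamic loader which ISA levels the running CPU supports. The same probe can be done from C; the sketch below is illustrative only and not part of the patch, and it assumes GCC 12 or newer (a recent Clang also works), since those are the compilers that accept the x86-64-v2/v3/v4 level names in __builtin_cpu_supports(). The file name and build command are likewise assumptions.

/* isa_level.c - print the highest x86-64 ISA level the CPU supports.
 * Minimal sketch; assumes GCC 12+ (or a recent Clang), which accept
 * the x86-64-v2/v3/v4 level names in __builtin_cpu_supports().
 * Build: gcc -O2 -o isa_level isa_level.c
 */
#include <stdio.h>

int main(void)
{
	/* Initialize the CPU feature data used by __builtin_cpu_supports(). */
	__builtin_cpu_init();

	if (__builtin_cpu_supports("x86-64-v4"))
		puts("x86-64-v4");
	else if (__builtin_cpu_supports("x86-64-v3"))
		puts("x86-64-v3");
	else if (__builtin_cpu_supports("x86-64-v2"))
		puts("x86-64-v2");
	else
		puts("x86-64 (baseline)");
	return 0;
}

On a typical glibc system the loader commands quoted in the help text, e.g. /lib64/ld-linux-x86-64.so.2 --help | grep supported, report the same levels without needing a compiler at all.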
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-22 16:56 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-22 16:56 UTC (permalink / raw
To: gentoo-commits
commit: 688ece981c67234af4e9b2ee34676eb3fec43278
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 22 16:56:26 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 22 16:56:26 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=688ece98
Linux patch 6.11.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-6.11.5.patch | 3901 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3905 insertions(+)
diff --git a/0000_README b/0000_README
index df7729f5..70f8b56f 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-6.11.4.patch
From: https://www.kernel.org
Desc: Linux 6.11.4
+Patch: 1004_linux-6.11.5.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.5
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1004_linux-6.11.5.patch b/1004_linux-6.11.5.patch
new file mode 100644
index 00000000..d7a8488f
--- /dev/null
+++ b/1004_linux-6.11.5.patch
@@ -0,0 +1,3901 @@
+diff --git a/Makefile b/Makefile
+index 50c615983e4405..687ce7aee67a73 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-sr-som.dtsi b/arch/arm64/boot/dts/marvell/cn9130-sr-som.dtsi
+index 4676e3488f54d5..cb8d54895a7775 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-sr-som.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130-sr-som.dtsi
+@@ -136,7 +136,7 @@ cp0_i2c0_pins: cp0-i2c0-pins {
+ };
+
+ cp0_mdio_pins: cp0-mdio-pins {
+- marvell,pins = "mpp40", "mpp41";
++ marvell,pins = "mpp0", "mpp1";
+ marvell,function = "ge";
+ };
+
+diff --git a/arch/arm64/include/asm/uprobes.h b/arch/arm64/include/asm/uprobes.h
+index 2b09495499c618..014b02897f8e22 100644
+--- a/arch/arm64/include/asm/uprobes.h
++++ b/arch/arm64/include/asm/uprobes.h
+@@ -10,11 +10,9 @@
+ #include <asm/insn.h>
+ #include <asm/probes.h>
+
+-#define MAX_UINSN_BYTES AARCH64_INSN_SIZE
+-
+ #define UPROBE_SWBP_INSN cpu_to_le32(BRK64_OPCODE_UPROBES)
+ #define UPROBE_SWBP_INSN_SIZE AARCH64_INSN_SIZE
+-#define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES
++#define UPROBE_XOL_SLOT_BYTES AARCH64_INSN_SIZE
+
+ typedef __le32 uprobe_opcode_t;
+
+@@ -23,8 +21,8 @@ struct arch_uprobe_task {
+
+ struct arch_uprobe {
+ union {
+- u8 insn[MAX_UINSN_BYTES];
+- u8 ixol[MAX_UINSN_BYTES];
++ __le32 insn;
++ __le32 ixol;
+ };
+ struct arch_probe_insn api;
+ bool simulate;
+diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
+index 968d5fffe23302..3496d6169e59b2 100644
+--- a/arch/arm64/kernel/probes/decode-insn.c
++++ b/arch/arm64/kernel/probes/decode-insn.c
+@@ -99,10 +99,6 @@ arm_probe_decode_insn(probe_opcode_t insn, struct arch_probe_insn *api)
+ aarch64_insn_is_blr(insn) ||
+ aarch64_insn_is_ret(insn)) {
+ api->handler = simulate_br_blr_ret;
+- } else if (aarch64_insn_is_ldr_lit(insn)) {
+- api->handler = simulate_ldr_literal;
+- } else if (aarch64_insn_is_ldrsw_lit(insn)) {
+- api->handler = simulate_ldrsw_literal;
+ } else {
+ /*
+ * Instruction cannot be stepped out-of-line and we don't
+@@ -140,6 +136,17 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
+ probe_opcode_t insn = le32_to_cpu(*addr);
+ probe_opcode_t *scan_end = NULL;
+ unsigned long size = 0, offset = 0;
++ struct arch_probe_insn *api = &asi->api;
++
++ if (aarch64_insn_is_ldr_lit(insn)) {
++ api->handler = simulate_ldr_literal;
++ decoded = INSN_GOOD_NO_SLOT;
++ } else if (aarch64_insn_is_ldrsw_lit(insn)) {
++ api->handler = simulate_ldrsw_literal;
++ decoded = INSN_GOOD_NO_SLOT;
++ } else {
++ decoded = arm_probe_decode_insn(insn, &asi->api);
++ }
+
+ /*
+ * If there's a symbol defined in front of and near enough to
+@@ -157,7 +164,6 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
+ else
+ scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
+ }
+- decoded = arm_probe_decode_insn(insn, &asi->api);
+
+ if (decoded != INSN_REJECTED && scan_end)
+ if (is_probed_address_atomic(addr - 1, scan_end))
+diff --git a/arch/arm64/kernel/probes/simulate-insn.c b/arch/arm64/kernel/probes/simulate-insn.c
+index 22d0b32524763e..b65334ab79d2b0 100644
+--- a/arch/arm64/kernel/probes/simulate-insn.c
++++ b/arch/arm64/kernel/probes/simulate-insn.c
+@@ -171,17 +171,15 @@ simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs)
+ void __kprobes
+ simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
+ {
+- u64 *load_addr;
++ unsigned long load_addr;
+ int xn = opcode & 0x1f;
+- int disp;
+
+- disp = ldr_displacement(opcode);
+- load_addr = (u64 *) (addr + disp);
++ load_addr = addr + ldr_displacement(opcode);
+
+ if (opcode & (1 << 30)) /* x0-x30 */
+- set_x_reg(regs, xn, *load_addr);
++ set_x_reg(regs, xn, READ_ONCE(*(u64 *)load_addr));
+ else /* w0-w30 */
+- set_w_reg(regs, xn, *load_addr);
++ set_w_reg(regs, xn, READ_ONCE(*(u32 *)load_addr));
+
+ instruction_pointer_set(regs, instruction_pointer(regs) + 4);
+ }
+@@ -189,14 +187,12 @@ simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
+ void __kprobes
+ simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
+ {
+- s32 *load_addr;
++ unsigned long load_addr;
+ int xn = opcode & 0x1f;
+- int disp;
+
+- disp = ldr_displacement(opcode);
+- load_addr = (s32 *) (addr + disp);
++ load_addr = addr + ldr_displacement(opcode);
+
+- set_x_reg(regs, xn, *load_addr);
++ set_x_reg(regs, xn, READ_ONCE(*(s32 *)load_addr));
+
+ instruction_pointer_set(regs, instruction_pointer(regs) + 4);
+ }
+diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
+index d49aef2657cdf7..a2f137a595fc1c 100644
+--- a/arch/arm64/kernel/probes/uprobes.c
++++ b/arch/arm64/kernel/probes/uprobes.c
+@@ -42,7 +42,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+ else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
+ return -EINVAL;
+
+- insn = *(probe_opcode_t *)(&auprobe->insn[0]);
++ insn = le32_to_cpu(auprobe->insn);
+
+ switch (arm_probe_decode_insn(insn, &auprobe->api)) {
+ case INSN_REJECTED:
+@@ -108,7 +108,7 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ if (!auprobe->simulate)
+ return false;
+
+- insn = *(probe_opcode_t *)(&auprobe->insn[0]);
++ insn = le32_to_cpu(auprobe->insn);
+ addr = instruction_pointer(regs);
+
+ if (auprobe->api.handler)
+diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
+index 2a32438e09ceba..74f73141f9b96b 100644
+--- a/arch/s390/kvm/diag.c
++++ b/arch/s390/kvm/diag.c
+@@ -77,7 +77,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
+ vcpu->stat.instruction_diagnose_258++;
+ if (vcpu->run->s.regs.gprs[rx] & 7)
+ return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+- rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(parm));
++ rc = read_guest_real(vcpu, vcpu->run->s.regs.gprs[rx], &parm, sizeof(parm));
+ if (rc)
+ return kvm_s390_inject_prog_cond(vcpu, rc);
+ if (parm.parm_version != 2 || parm.parm_len < 5 || parm.code != 0x258)
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index e65f597e3044a7..a688351f4ab521 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -828,6 +828,8 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
+ const gfn_t gfn = gpa_to_gfn(gpa);
+ int rc;
+
++ if (!gfn_to_memslot(kvm, gfn))
++ return PGM_ADDRESSING;
+ if (mode == GACC_STORE)
+ rc = kvm_write_guest_page(kvm, gfn, data, offset, len);
+ else
+@@ -985,6 +987,8 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
+ gra += fragment_len;
+ data += fragment_len;
+ }
++ if (rc > 0)
++ vcpu->arch.pgm.code = rc;
+ return rc;
+ }
+
+diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
+index b320d12aa04934..3fde45a151f22e 100644
+--- a/arch/s390/kvm/gaccess.h
++++ b/arch/s390/kvm/gaccess.h
+@@ -405,11 +405,12 @@ int read_guest_abs(struct kvm_vcpu *vcpu, unsigned long gpa, void *data,
+ * @len: number of bytes to copy
+ *
+ * Copy @len bytes from @data (kernel space) to @gra (guest real address).
+- * It is up to the caller to ensure that the entire guest memory range is
+- * valid memory before calling this function.
+ * Guest low address and key protection are not checked.
+ *
+- * Returns zero on success or -EFAULT on error.
++ * Returns zero on success, -EFAULT when copying from @data failed, or
++ * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
++ * is also stored to allow injecting into the guest (if applicable) using
++ * kvm_s390_inject_prog_cond().
+ *
+ * If an error occurs data may have been copied partially to guest memory.
+ */
+@@ -428,11 +429,12 @@ int write_guest_real(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
+ * @len: number of bytes to copy
+ *
+ * Copy @len bytes from @gra (guest real address) to @data (kernel space).
+- * It is up to the caller to ensure that the entire guest memory range is
+- * valid memory before calling this function.
+ * Guest key protection is not checked.
+ *
+- * Returns zero on success or -EFAULT on error.
++ * Returns zero on success, -EFAULT when copying to @data failed, or
++ * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
++ * is also stored to allow injecting into the guest (if applicable) using
++ * kvm_s390_inject_prog_cond().
+ *
+ * If an error occurs data may have been copied partially to kernel space.
+ */
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index d9feadffa972da..324686bca36813 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -9,6 +9,8 @@
+ #include <asm/unwind_hints.h>
+ #include <asm/segment.h>
+ #include <asm/cache.h>
++#include <asm/cpufeatures.h>
++#include <asm/nospec-branch.h>
+
+ #include "calling.h"
+
+@@ -19,6 +21,9 @@ SYM_FUNC_START(entry_ibpb)
+ movl $PRED_CMD_IBPB, %eax
+ xorl %edx, %edx
+ wrmsr
++
++	/* Make sure IBPB clears return stack predictions too. */
++ FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET
+ RET
+ SYM_FUNC_END(entry_ibpb)
+ /* For KVM */
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index d3a814efbff663..20be5758c2d2e2 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -871,6 +871,8 @@ SYM_FUNC_START(entry_SYSENTER_32)
+
+ /* Now ready to switch the cr3 */
+ SWITCH_TO_USER_CR3 scratch_reg=%eax
++ /* Clobbers ZF */
++ CLEAR_CPU_BUFFERS
+
+ /*
+ * Restore all flags except IF. (We restore IF separately because
+@@ -881,7 +883,6 @@ SYM_FUNC_START(entry_SYSENTER_32)
+ BUG_IF_WRONG_CR3 no_user_check=1
+ popfl
+ popl %eax
+- CLEAR_CPU_BUFFERS
+
+ /*
+ * Return back to the vDSO, which will pop ecx and edx.
+@@ -1144,7 +1145,6 @@ SYM_CODE_START(asm_exc_nmi)
+
+ /* Not on SYSENTER stack. */
+ call exc_nmi
+- CLEAR_CPU_BUFFERS
+ jmp .Lnmi_return
+
+ .Lnmi_from_sysenter_stack:
+@@ -1165,6 +1165,7 @@ SYM_CODE_START(asm_exc_nmi)
+
+ CHECK_AND_APPLY_ESPFIX
+ RESTORE_ALL_NMI cr3_reg=%edi pop=4
++ CLEAR_CPU_BUFFERS
+ jmp .Lirq_return
+
+ #ifdef CONFIG_X86_ESPFIX32
+@@ -1206,6 +1207,7 @@ SYM_CODE_START(asm_exc_nmi)
+ * 1 - orig_ax
+ */
+ lss (1+5+6)*4(%esp), %esp # back to espfix stack
++ CLEAR_CPU_BUFFERS
+ jmp .Lirq_return
+ #endif
+ SYM_CODE_END(asm_exc_nmi)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index dd4682857c1208..913fd3a7bac650 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -215,7 +215,7 @@
+ #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
+ #define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
+ #define X86_FEATURE_IBRS ( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */
+-#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier */
++#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier without a guaranteed RSB flush */
+ #define X86_FEATURE_STIBP ( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN ( 7*32+28) /* Generic flag for all Zen and newer */
+ #define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* L1TF workaround PTE inversion */
+@@ -348,6 +348,7 @@
+ #define X86_FEATURE_CPPC (13*32+27) /* "cppc" Collaborative Processor Performance Control */
+ #define X86_FEATURE_AMD_PSFD (13*32+28) /* Predictive Store Forwarding Disable */
+ #define X86_FEATURE_BTC_NO (13*32+29) /* Not vulnerable to Branch Type Confusion */
++#define X86_FEATURE_AMD_IBPB_RET (13*32+30) /* IBPB clears return address predictor */
+ #define X86_FEATURE_BRS (13*32+31) /* "brs" Branch Sampling available */
+
+ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
+@@ -523,4 +524,5 @@
+ #define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* "div0" AMD DIV0 speculation bug */
+ #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
+ #define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
++#define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index ff5f1ecc7d1e65..96b410b1d4e841 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -323,7 +323,16 @@
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+ .macro CLEAR_CPU_BUFFERS
+- ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
++#ifdef CONFIG_X86_64
++ ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
++#else
++ /*
++ * In 32bit mode, the memory operand must be a %cs reference. The data
++ * segments may not be usable (vm86 mode), and the stack segment may not
++ * be flat (ESPFIX32).
++ */
++ ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
++#endif
+ .endm
+
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 373638691cd480..3244ab43fff998 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -440,7 +440,19 @@ static int lapic_timer_shutdown(struct clock_event_device *evt)
+ v = apic_read(APIC_LVTT);
+ v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
+ apic_write(APIC_LVTT, v);
+- apic_write(APIC_TMICT, 0);
++
++ /*
++ * Setting APIC_LVT_MASKED (above) should be enough to tell
++ * the hardware that this timer will never fire. But AMD
++ * erratum 411 and some Intel CPU behavior circa 2024 say
++ * otherwise. Time for belt and suspenders programming: mask
++ * the timer _and_ zero the counter registers:
++ */
++ if (v & APIC_LVT_TIMER_TSCDEADLINE)
++ wrmsrl(MSR_IA32_TSC_DEADLINE, 0);
++ else
++ apic_write(APIC_TMICT, 0);
++
+ return 0;
+ }
+
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 1e0fe5f8ab84e4..f01b72052f7908 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1218,5 +1218,6 @@ void amd_check_microcode(void)
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+ return;
+
+- on_each_cpu(zenbleed_check_cpu, NULL, 1);
++ if (cpu_feature_enabled(X86_FEATURE_ZEN2))
++ on_each_cpu(zenbleed_check_cpu, NULL, 1);
+ }
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 468449f73a9575..2ef649ec32ce5e 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1113,8 +1113,25 @@ static void __init retbleed_select_mitigation(void)
+
+ case RETBLEED_MITIGATION_IBPB:
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++
++ /*
++ * IBPB on entry already obviates the need for
++ * software-based untraining so clear those in case some
++ * other mitigation like SRSO has selected them.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_UNRET);
++ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
++
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ mitigate_smt = true;
++
++ /*
++ * There is no need for RSB filling: entry_ibpb() ensures
++ * all predictions, including the RSB, are invalidated,
++ * regardless of IBPB implementation.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
++
+ break;
+
+ case RETBLEED_MITIGATION_STUFF:
+@@ -2621,6 +2638,14 @@ static void __init srso_select_mitigation(void)
+ if (has_microcode) {
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+ srso_mitigation = SRSO_MITIGATION_IBPB;
++
++ /*
++ * IBPB on entry already obviates the need for
++ * software-based untraining so clear those in case some
++ * other mitigation like Retbleed has selected them.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_UNRET);
++ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+ }
+ } else {
+ pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+@@ -2632,6 +2657,13 @@ static void __init srso_select_mitigation(void)
+ if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
++
++ /*
++ * There is no need for RSB filling: entry_ibpb() ensures
++ * all predictions, including the RSB, are invalidated,
++ * regardless of IBPB implementation.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ }
+ } else {
+ pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index be307c9ef263d8..ab0e2da7c9ef50 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1443,6 +1443,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ boot_cpu_has(X86_FEATURE_HYPERVISOR)))
+ setup_force_cpu_bug(X86_BUG_BHI);
+
++ if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET))
++ setup_force_cpu_bug(X86_BUG_IBPB_NO_RET);
++
+ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
+diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
+index 8591d53c144bb1..b681c2e07dbf84 100644
+--- a/arch/x86/kernel/cpu/resctrl/core.c
++++ b/arch/x86/kernel/cpu/resctrl/core.c
+@@ -207,7 +207,7 @@ static inline bool rdt_get_mb_table(struct rdt_resource *r)
+ return false;
+ }
+
+-static bool __get_mem_config_intel(struct rdt_resource *r)
++static __init bool __get_mem_config_intel(struct rdt_resource *r)
+ {
+ struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
+ union cpuid_0x10_3_eax eax;
+@@ -241,7 +241,7 @@ static bool __get_mem_config_intel(struct rdt_resource *r)
+ return true;
+ }
+
+-static bool __rdt_get_mem_config_amd(struct rdt_resource *r)
++static __init bool __rdt_get_mem_config_amd(struct rdt_resource *r)
+ {
+ struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
+ u32 eax, ebx, ecx, edx, subleaf;
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index e3c3c0c21b5536..b56a1c0dd13878 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4307,6 +4307,12 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ /* mark the queue as mq asap */
+ q->mq_ops = set->ops;
+
++ /*
++	 * ->tag_set has to be set up before initializing hctx, because the
++	 * cpuhp handler needs it for checking queue mapping
++ */
++ q->tag_set = set;
++
+ if (blk_mq_alloc_ctxs(q))
+ goto err_exit;
+
+@@ -4325,8 +4331,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
+ blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
+
+- q->tag_set = set;
+-
+ q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
+
+ INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
+diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
+index dd7310c94713c9..dc510f493ba572 100644
+--- a/block/blk-rq-qos.c
++++ b/block/blk-rq-qos.c
+@@ -219,8 +219,8 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
+
+ data->got_token = true;
+ smp_wmb();
+- list_del_init(&curr->entry);
+ wake_up_process(data->task);
++ list_del_init_careful(&curr->entry);
+ return 1;
+ }
+
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index bca06bfb4bc32f..2633f7356fac72 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -2381,10 +2381,19 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
+ * TODO: provide forward progress for RECOVERY handler, so that
+ * unprivileged device can benefit from it
+ */
+- if (info.flags & UBLK_F_UNPRIVILEGED_DEV)
++ if (info.flags & UBLK_F_UNPRIVILEGED_DEV) {
+ info.flags &= ~(UBLK_F_USER_RECOVERY_REISSUE |
+ UBLK_F_USER_RECOVERY);
+
++ /*
++		 * For USER_COPY, we depend on userspace to fill the request
++		 * buffer by pwrite() to the ublk char device, which can't be
++		 * used for an unprivileged device
++ */
++ if (info.flags & UBLK_F_USER_COPY)
++ return -EINVAL;
++ }
++
+ /* the created device is always owned by current user */
+ ublk_store_owner_uid_gid(&info.owner_uid, &info.owner_gid);
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a1e9b052bc8476..2408e50743ca64 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1399,10 +1399,15 @@ static int btusb_submit_intr_urb(struct hci_dev *hdev, gfp_t mem_flags)
+ if (!urb)
+ return -ENOMEM;
+
+- /* Use maximum HCI Event size so the USB stack handles
+- * ZPL/short-transfer automatically.
+- */
+- size = HCI_MAX_EVENT_SIZE;
++ if (le16_to_cpu(data->udev->descriptor.idVendor) == 0x0a12 &&
++ le16_to_cpu(data->udev->descriptor.idProduct) == 0x0001)
++		/* Fake CSR devices don't seem to support short-transfer */
++ size = le16_to_cpu(data->intr_ep->wMaxPacketSize);
++ else
++ /* Use maximum HCI Event size so the USB stack handles
++ * ZPL/short-transfer automatically.
++ */
++ size = HCI_MAX_EVENT_SIZE;
+
+ buf = kmalloc(size, mem_flags);
+ if (!buf) {
+@@ -4092,7 +4097,6 @@ static void btusb_disconnect(struct usb_interface *intf)
+ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ struct btusb_data *data = usb_get_intfdata(intf);
+- int err;
+
+ BT_DBG("intf %p", intf);
+
+@@ -4105,16 +4109,6 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
+ if (data->suspend_count++)
+ return 0;
+
+- /* Notify Host stack to suspend; this has to be done before stopping
+- * the traffic since the hci_suspend_dev itself may generate some
+- * traffic.
+- */
+- err = hci_suspend_dev(data->hdev);
+- if (err) {
+- data->suspend_count--;
+- return err;
+- }
+-
+ spin_lock_irq(&data->txlock);
+ if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) {
+ set_bit(BTUSB_SUSPENDING, &data->flags);
+@@ -4122,7 +4116,6 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
+ } else {
+ spin_unlock_irq(&data->txlock);
+ data->suspend_count--;
+- hci_resume_dev(data->hdev);
+ return -EBUSY;
+ }
+
+@@ -4243,8 +4236,6 @@ static int btusb_resume(struct usb_interface *intf)
+ spin_unlock_irq(&data->txlock);
+ schedule_work(&data->work);
+
+- hci_resume_dev(data->hdev);
+-
+ return 0;
+
+ failed:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 78b3c067fea7e2..a95811da242b55 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -265,7 +265,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
+
+ /* Only a single BO list is allowed to simplify handling. */
+ if (p->bo_list)
+- ret = -EINVAL;
++ goto free_partial_kdata;
+
+ ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
+ if (ret)
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index 48b3c4e4b1cad8..62d792ed0323aa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -595,7 +595,7 @@ static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe)
+
+ if (amdgpu_mes_log_enable) {
+ mes_set_hw_res_pkt.enable_mes_event_int_logging = 1;
+- mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr;
++ mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr + pipe * AMDGPU_MES_LOG_BUFFER_SIZE;
+ }
+
+ return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe,
+@@ -1270,7 +1270,7 @@ static int mes_v12_0_sw_init(void *handle)
+ adev->mes.kiq_hw_fini = &mes_v12_0_kiq_hw_fini;
+ adev->mes.enable_legacy_queue_map = true;
+
+- adev->mes.event_log_size = AMDGPU_MES_LOG_BUFFER_SIZE;
++ adev->mes.event_log_size = adev->enable_uni_mes ? (AMDGPU_MAX_MES_PIPES * AMDGPU_MES_LOG_BUFFER_SIZE) : AMDGPU_MES_LOG_BUFFER_SIZE;
+
+ r = amdgpu_mes_init(adev);
+ if (r)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 2cf95118456182..87672ca714de5b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2226,7 +2226,7 @@ static int smu_bump_power_profile_mode(struct smu_context *smu,
+ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ enum amd_dpm_forced_level level,
+ bool skip_display_settings,
+- bool force_update)
++ bool init)
+ {
+ int ret = 0;
+ int index = 0;
+@@ -2255,7 +2255,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ }
+ }
+
+- if (force_update || smu_dpm_ctx->dpm_level != level) {
++ if (smu_dpm_ctx->dpm_level != level) {
+ ret = smu_asic_set_performance_level(smu, level);
+ if (ret) {
+ dev_err(smu->adev->dev, "Failed to set performance level!");
+@@ -2272,7 +2272,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ workload[0] = smu->workload_setting[index];
+
+- if (force_update || smu->power_profile_mode != workload[0])
++ if (init || smu->power_profile_mode != workload[0])
+ smu_bump_power_profile_mode(smu, workload, 0);
+ }
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 1d024b122b0c02..cb923e33fd6fc7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2555,18 +2555,16 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ workload_mask = 1 << workload_type;
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE) {
+- if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+- ((smu->adev->pm.fw_version == 0x004e6601) ||
+- (smu->adev->pm.fw_version >= 0x004e7300))) ||
+- (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
+- smu->adev->pm.fw_version >= 0x00504500)) {
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- PP_SMC_POWER_PROFILE_POWERSAVING);
+- if (workload_type >= 0)
+- workload_mask |= 1 << workload_type;
+- }
++ if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
++ ((smu->adev->pm.fw_version == 0x004e6601) ||
++ (smu->adev->pm.fw_version >= 0x004e7300))) ||
++ (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
++ smu->adev->pm.fw_version >= 0x00504500)) {
++ workload_type = smu_cmn_to_asic_specific_index(smu,
++ CMN2ASIC_MAPPING_WORKLOAD,
++ PP_SMC_POWER_PROFILE_POWERSAVING);
++ if (workload_type >= 0)
++ workload_mask |= 1 << workload_type;
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 17978a1f9ab0a0..baaa331fbcfa06 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -88,25 +88,19 @@ static int intel_dp_mst_max_dpt_bpp(const struct intel_crtc_state *crtc_state,
+
+ static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state,
+ const struct intel_connector *connector,
+- bool ssc, bool dsc, int bpp_x16)
++ bool ssc, int dsc_slice_count, int bpp_x16)
+ {
+ const struct drm_display_mode *adjusted_mode =
+ &crtc_state->hw.adjusted_mode;
+ unsigned long flags = DRM_DP_BW_OVERHEAD_MST;
+- int dsc_slice_count = 0;
+ int overhead;
+
+ flags |= intel_dp_is_uhbr(crtc_state) ? DRM_DP_BW_OVERHEAD_UHBR : 0;
+ flags |= ssc ? DRM_DP_BW_OVERHEAD_SSC_REF_CLK : 0;
+ flags |= crtc_state->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
+
+- if (dsc) {
++ if (dsc_slice_count)
+ flags |= DRM_DP_BW_OVERHEAD_DSC;
+- dsc_slice_count = intel_dp_dsc_get_slice_count(connector,
+- adjusted_mode->clock,
+- adjusted_mode->hdisplay,
+- crtc_state->joiner_pipes);
+- }
+
+ overhead = drm_dp_bw_overhead(crtc_state->lane_count,
+ adjusted_mode->hdisplay,
+@@ -152,6 +146,19 @@ static int intel_dp_mst_calc_pbn(int pixel_clock, int bpp_x16, int bw_overhead)
+ return DIV_ROUND_UP(effective_data_rate * 64, 54 * 1000);
+ }
+
++static int intel_dp_mst_dsc_get_slice_count(const struct intel_connector *connector,
++ const struct intel_crtc_state *crtc_state)
++{
++ const struct drm_display_mode *adjusted_mode =
++ &crtc_state->hw.adjusted_mode;
++ int num_joined_pipes = crtc_state->joiner_pipes;
++
++ return intel_dp_dsc_get_slice_count(connector,
++ adjusted_mode->clock,
++ adjusted_mode->hdisplay,
++ num_joined_pipes);
++}
++
+ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
+ struct intel_crtc_state *crtc_state,
+ int max_bpp,
+@@ -171,6 +178,7 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
+ const struct drm_display_mode *adjusted_mode =
+ &crtc_state->hw.adjusted_mode;
+ int bpp, slots = -EINVAL;
++ int dsc_slice_count = 0;
+ int max_dpt_bpp;
+ int ret = 0;
+
+@@ -202,6 +210,15 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
+ drm_dbg_kms(&i915->drm, "Looking for slots in range min bpp %d max bpp %d\n",
+ min_bpp, max_bpp);
+
++ if (dsc) {
++ dsc_slice_count = intel_dp_mst_dsc_get_slice_count(connector, crtc_state);
++ if (!dsc_slice_count) {
++ drm_dbg_kms(&i915->drm, "Can't get valid DSC slice count\n");
++
++ return -ENOSPC;
++ }
++ }
++
+ for (bpp = max_bpp; bpp >= min_bpp; bpp -= step) {
+ int local_bw_overhead;
+ int remote_bw_overhead;
+@@ -215,9 +232,9 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
+ intel_dp_output_bpp(crtc_state->output_format, bpp));
+
+ local_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector,
+- false, dsc, link_bpp_x16);
++ false, dsc_slice_count, link_bpp_x16);
+ remote_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector,
+- true, dsc, link_bpp_x16);
++ true, dsc_slice_count, link_bpp_x16);
+
+ intel_dp_mst_compute_m_n(crtc_state, connector,
+ local_bw_overhead,
+@@ -448,6 +465,9 @@ hblank_expansion_quirk_needs_dsc(const struct intel_connector *connector,
+ if (mode_hblank_period_ns(adjusted_mode) > hblank_limit)
+ return false;
+
++ if (!intel_dp_mst_dsc_get_slice_count(connector, crtc_state))
++ return false;
++
+ return true;
+ }
+
+diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c b/drivers/gpu/drm/radeon/radeon_encoders.c
+index 0f723292409e5a..fafed331e0a03e 100644
+--- a/drivers/gpu/drm/radeon/radeon_encoders.c
++++ b/drivers/gpu/drm/radeon/radeon_encoders.c
+@@ -43,7 +43,7 @@ static uint32_t radeon_encoder_clones(struct drm_encoder *encoder)
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+ struct drm_encoder *clone_encoder;
+- uint32_t index_mask = 0;
++ uint32_t index_mask = drm_encoder_mask(encoder);
+ int count;
+
+ /* DIG routing gets problematic */
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 288ed0bb75cb98..aec624196d6ea7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -1283,7 +1283,6 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+ {
+ struct drm_device *dev = &dev_priv->drm;
+ struct vmw_framebuffer_surface *vfbs;
+- enum SVGA3dSurfaceFormat format;
+ struct vmw_surface *surface;
+ int ret;
+
+@@ -1320,34 +1319,6 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+ return -EINVAL;
+ }
+
+- switch (mode_cmd->pixel_format) {
+- case DRM_FORMAT_ARGB8888:
+- format = SVGA3D_A8R8G8B8;
+- break;
+- case DRM_FORMAT_XRGB8888:
+- format = SVGA3D_X8R8G8B8;
+- break;
+- case DRM_FORMAT_RGB565:
+- format = SVGA3D_R5G6B5;
+- break;
+- case DRM_FORMAT_XRGB1555:
+- format = SVGA3D_A1R5G5B5;
+- break;
+- default:
+- DRM_ERROR("Invalid pixel format: %p4cc\n",
+- &mode_cmd->pixel_format);
+- return -EINVAL;
+- }
+-
+- /*
+- * For DX, surface format validation is done when surface->scanout
+- * is set.
+- */
+- if (!has_sm4_context(dev_priv) && format != surface->metadata.format) {
+- DRM_ERROR("Invalid surface format for requested mode.\n");
+- return -EINVAL;
+- }
+-
+ vfbs = kzalloc(sizeof(*vfbs), GFP_KERNEL);
+ if (!vfbs) {
+ ret = -ENOMEM;
+@@ -1539,6 +1510,7 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
+ DRM_ERROR("Surface size cannot exceed %dx%d\n",
+ dev_priv->texture_max_width,
+ dev_priv->texture_max_height);
++ ret = -EINVAL;
+ goto err_out;
+ }
+
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 1625b30d997004..5721c74da3e0b9 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -2276,9 +2276,12 @@ int vmw_dumb_create(struct drm_file *file_priv,
+ const struct SVGA3dSurfaceDesc *desc = vmw_surface_get_desc(format);
+ SVGA3dSurfaceAllFlags flags = SVGA3D_SURFACE_HINT_TEXTURE |
+ SVGA3D_SURFACE_HINT_RENDERTARGET |
+- SVGA3D_SURFACE_SCREENTARGET |
+- SVGA3D_SURFACE_BIND_SHADER_RESOURCE |
+- SVGA3D_SURFACE_BIND_RENDER_TARGET;
++ SVGA3D_SURFACE_SCREENTARGET;
++
++ if (vmw_surface_is_dx_screen_target_format(format)) {
++ flags |= SVGA3D_SURFACE_BIND_SHADER_RESOURCE |
++ SVGA3D_SURFACE_BIND_RENDER_TARGET;
++ }
+
+ /*
+ * Without mob support we're just going to use raw memory buffer
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index 80499681bd583d..de80c8b7c8913c 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -58,7 +58,7 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
+ if (!access_ok(ptr, sizeof(*ptr)))
+ return ERR_PTR(-EFAULT);
+
+- ufence = kmalloc(sizeof(*ufence), GFP_KERNEL);
++ ufence = kzalloc(sizeof(*ufence), GFP_KERNEL);
+ if (!ufence)
+ return ERR_PTR(-ENOMEM);
+
+diff --git a/drivers/gpu/drm/xe/xe_wait_user_fence.c b/drivers/gpu/drm/xe/xe_wait_user_fence.c
+index f69721339201d6..92f65b9c528015 100644
+--- a/drivers/gpu/drm/xe/xe_wait_user_fence.c
++++ b/drivers/gpu/drm/xe/xe_wait_user_fence.c
+@@ -169,9 +169,6 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
+ args->timeout = 0;
+ }
+
+- if (!timeout && !(err < 0))
+- err = -ETIME;
+-
+ if (q)
+ xe_exec_queue_put(q);
+
+diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
+index 80b57d3ee3a726..c5b1db001d6f33 100644
+--- a/drivers/iio/accel/Kconfig
++++ b/drivers/iio/accel/Kconfig
+@@ -420,6 +420,8 @@ config IIO_ST_ACCEL_SPI_3AXIS
+
+ config IIO_KX022A
+ tristate
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+
+ config IIO_KX022A_SPI
+ tristate "Kionix KX022A tri-axis digital accelerometer SPI interface"
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index f60fe85a30d529..cceac30e2bb9f9 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -52,6 +52,8 @@ config AD7091R8
+ depends on SPI
+ select AD7091R
+ select REGMAP_SPI
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for Analog Devices AD7091R-2, AD7091R-4,
+ and AD7091R-8 ADC.
+@@ -305,6 +307,8 @@ config AD7923
+ config AD7944
+ tristate "Analog Devices AD7944 and similar ADCs driver"
+ depends on SPI
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for Analog Devices
+ AD7944, AD7985, AD7986 ADCs.
+@@ -1433,6 +1437,8 @@ config TI_ADS8344
+ config TI_ADS8688
+ tristate "Texas Instruments ADS8688"
+ depends on SPI
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ If you say yes here you get support for Texas Instruments ADS8684 and
+ and ADS8688 ADC chips
+@@ -1443,6 +1449,8 @@ config TI_ADS8688
+ config TI_ADS124S08
+ tristate "Texas Instruments ADS124S08"
+ depends on SPI
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ If you say yes here you get support for Texas Instruments ADS124S08
+ and ADS124S06 ADC chips
+@@ -1477,6 +1485,7 @@ config TI_AM335X_ADC
+ config TI_LMP92064
+ tristate "Texas Instruments LMP92064 ADC driver"
+ depends on SPI
++ select REGMAP_SPI
+ help
+ Say yes here to build support for the LMP92064 Precision Current and Voltage
+ sensor.
+diff --git a/drivers/iio/amplifiers/Kconfig b/drivers/iio/amplifiers/Kconfig
+index b54fe01734b0d7..55eb16b32f6c9a 100644
+--- a/drivers/iio/amplifiers/Kconfig
++++ b/drivers/iio/amplifiers/Kconfig
+@@ -27,6 +27,7 @@ config AD8366
+ config ADA4250
+ tristate "Analog Devices ADA4250 Instrumentation Amplifier"
+ depends on SPI
++ select REGMAP_SPI
+ help
+ Say yes here to build support for Analog Devices ADA4250
+ SPI Amplifier's support. The driver provides direct access via
+diff --git a/drivers/iio/chemical/Kconfig b/drivers/iio/chemical/Kconfig
+index 678a6adb9a7583..6c87223f58d903 100644
+--- a/drivers/iio/chemical/Kconfig
++++ b/drivers/iio/chemical/Kconfig
+@@ -80,6 +80,8 @@ config ENS160
+ tristate "ScioSense ENS160 sensor driver"
+ depends on (I2C || SPI)
+ select REGMAP
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ select ENS160_I2C if I2C
+ select ENS160_SPI if SPI
+ help
+diff --git a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+index ad8910e6ad59df..abb09fefc792c5 100644
+--- a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
++++ b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+@@ -32,7 +32,7 @@ static ssize_t _hid_sensor_set_report_latency(struct device *dev,
+ latency = integer * 1000 + fract / 1000;
+ ret = hid_sensor_set_report_latency(attrb, latency);
+ if (ret < 0)
+- return len;
++ return ret;
+
+ attrb->latency_ms = hid_sensor_get_report_latency(attrb);
+
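
The hid-sensor-trigger fix above stops the sysfs store from returning len on failure, which made every write look successful to userspace. The bug shape, reduced to standalone C (set_latency() and the error value are illustrative stand-ins):

#include <stdio.h>

/* illustrative stand-in for hid_sensor_set_report_latency() */
static int set_latency(int value)
{
	return value < 0 ? -22 : 0;	/* -EINVAL on bad input */
}

/* buggy: on failure the full length is still reported as success */
static long store_buggy(int value, long len)
{
	int ret = set_latency(value);

	if (ret < 0)
		return len;	/* error swallowed; caller sees success */
	return len;
}

/* fixed: propagate the error code so userspace sees the failure */
static long store_fixed(int value, long len)
{
	int ret = set_latency(value);

	if (ret < 0)
		return ret;
	return len;
}

int main(void)
{
	printf("buggy store: %ld, fixed store: %ld\n",
	       store_buggy(-1, 8), store_fixed(-1, 8));
	return 0;
}
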
+diff --git a/drivers/iio/dac/Kconfig b/drivers/iio/dac/Kconfig
+index a2596c2d3de316..d2012c91dea8b2 100644
+--- a/drivers/iio/dac/Kconfig
++++ b/drivers/iio/dac/Kconfig
+@@ -9,6 +9,8 @@ menu "Digital to analog converters"
+ config AD3552R
+ tristate "Analog Devices AD3552R DAC driver"
+ depends on SPI_MASTER
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for Analog Devices AD3552R
+ Digital to Analog Converter.
+@@ -252,6 +254,8 @@ config AD5764
+ config AD5766
+ tristate "Analog Devices AD5766/AD5767 DAC driver"
+ depends on SPI_MASTER
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for Analog Devices AD5766, AD5767
+ Digital to Analog Converter.
+@@ -262,6 +266,7 @@ config AD5766
+ config AD5770R
+ tristate "Analog Devices AD5770R IDAC driver"
+ depends on SPI_MASTER
++ select REGMAP_SPI
+ help
+ Say yes here to build support for Analog Devices AD5770R Digital to
+ Analog Converter.
+@@ -353,6 +358,7 @@ config LPC18XX_DAC
+ config LTC1660
+ tristate "Linear Technology LTC1660/LTC1665 DAC SPI driver"
+ depends on SPI
++ select REGMAP_SPI
+ help
+ Say yes here to build support for Linear Technology
+ LTC1660 and LTC1665 Digital to Analog Converters.
+@@ -472,6 +478,7 @@ config STM32_DAC
+
+ config STM32_DAC_CORE
+ tristate
++ select REGMAP_MMIO
+
+ config TI_DAC082S085
+ tristate "Texas Instruments 8/10/12-bit 2/4-channel DAC driver"
+diff --git a/drivers/iio/frequency/Kconfig b/drivers/iio/frequency/Kconfig
+index c455be7d4a1c88..89ae09db5ca5fc 100644
+--- a/drivers/iio/frequency/Kconfig
++++ b/drivers/iio/frequency/Kconfig
+@@ -53,6 +53,7 @@ config ADF4371
+ config ADF4377
+ tristate "Analog Devices ADF4377 Microwave Wideband Synthesizer"
+ depends on SPI && COMMON_CLK
++ select REGMAP_SPI
+ help
+ Say yes here to build support for Analog Devices ADF4377 Microwave
+ Wideband Synthesizer.
+diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
+index b68dcc1fbaca4c..c63fe9228ddba8 100644
+--- a/drivers/iio/light/Kconfig
++++ b/drivers/iio/light/Kconfig
+@@ -322,6 +322,8 @@ config ROHM_BU27008
+ depends on I2C
+ select REGMAP_I2C
+ select IIO_GTS_HELPER
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Enable support for the ROHM BU27008 color sensor.
+ The ROHM BU27008 is a sensor with 5 photodiodes (red, green,
+diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c
+index 887c4b776a8696..176e54bb48c33b 100644
+--- a/drivers/iio/light/opt3001.c
++++ b/drivers/iio/light/opt3001.c
+@@ -138,6 +138,10 @@ static const struct opt3001_scale opt3001_scales[] = {
+ .val = 20966,
+ .val2 = 400000,
+ },
++ {
++ .val = 41932,
++ .val2 = 800000,
++ },
+ {
+ .val = 83865,
+ .val2 = 600000,
+diff --git a/drivers/iio/light/veml6030.c b/drivers/iio/light/veml6030.c
+index 2e86d310952ede..9630de1c578ecb 100644
+--- a/drivers/iio/light/veml6030.c
++++ b/drivers/iio/light/veml6030.c
+@@ -99,9 +99,8 @@ static const char * const period_values[] = {
+ static ssize_t in_illuminance_period_available_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
++ struct veml6030_data *data = iio_priv(dev_to_iio_dev(dev));
+ int ret, reg, x;
+- struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+- struct veml6030_data *data = iio_priv(indio_dev);
+
+ ret = regmap_read(data->regmap, VEML6030_REG_ALS_CONF, &reg);
+ if (ret) {
+@@ -780,7 +779,7 @@ static int veml6030_hw_init(struct iio_dev *indio_dev)
+
+ /* Cache currently active measurement parameters */
+ data->cur_gain = 3;
+- data->cur_resolution = 4608;
++ data->cur_resolution = 5376;
+ data->cur_integration_time = 3;
+
+ return ret;
+diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig
+index cd2917d719047b..8d076fdd5f5db4 100644
+--- a/drivers/iio/magnetometer/Kconfig
++++ b/drivers/iio/magnetometer/Kconfig
+@@ -11,6 +11,8 @@ config AF8133J
+ depends on I2C
+ depends on OF
+ select REGMAP_I2C
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for Voltafield AF8133J I2C-based
+ 3-axis magnetometer chip.
+diff --git a/drivers/iio/pressure/Kconfig b/drivers/iio/pressure/Kconfig
+index 3ad38506028ef0..346dace9d651de 100644
+--- a/drivers/iio/pressure/Kconfig
++++ b/drivers/iio/pressure/Kconfig
+@@ -19,6 +19,9 @@ config ABP060MG
+ config ROHM_BM1390
+ tristate "ROHM BM1390GLV-Z pressure sensor driver"
+ depends on I2C
++ select REGMAP_I2C
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Support for the ROHM BM1390 pressure sensor. The BM1390GLV-Z
+ can measure pressures ranging from 300 hPa to 1300 hPa with
+diff --git a/drivers/iio/proximity/Kconfig b/drivers/iio/proximity/Kconfig
+index 2ca3b0bc5eba10..931eaea046b328 100644
+--- a/drivers/iio/proximity/Kconfig
++++ b/drivers/iio/proximity/Kconfig
+@@ -72,6 +72,8 @@ config LIDAR_LITE_V2
+ config MB1232
+ tristate "MaxSonar I2CXL family ultrasonic sensors"
+ depends on I2C
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say Y to build a driver for the ultrasonic sensors I2CXL of
+ MaxBotix which have an i2c interface. It can be used to measure
+diff --git a/drivers/iio/resolver/Kconfig b/drivers/iio/resolver/Kconfig
+index 424529d36080e8..de2dee3832a1a3 100644
+--- a/drivers/iio/resolver/Kconfig
++++ b/drivers/iio/resolver/Kconfig
+@@ -31,6 +31,9 @@ config AD2S1210
+ depends on SPI
+ depends on COMMON_CLK
+ depends on GPIOLIB || COMPILE_TEST
++ select REGMAP
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for Analog Devices spi resolver
+ to digital converters, ad2s1210, which provides direct access via sysfs.
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 4eda18f4f46e39..22ea58bf76cb5c 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -218,6 +218,7 @@ static const struct xpad_device {
+ { 0x0c12, 0x8810, "Zeroplus Xbox Controller", 0, XTYPE_XBOX },
+ { 0x0c12, 0x9902, "HAMA VibraX - *FAULTY HARDWARE*", 0, XTYPE_XBOX },
+ { 0x0d2f, 0x0002, "Andamiro Pump It Up pad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
++ { 0x0db0, 0x1901, "Micro Star International Xbox360 Controller for Windows", 0, XTYPE_XBOX360 },
+ { 0x0e4c, 0x1097, "Radica Gamester Controller", 0, XTYPE_XBOX },
+ { 0x0e4c, 0x1103, "Radica Gamester Reflex", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX },
+ { 0x0e4c, 0x2390, "Radica Games Jtech Controller", 0, XTYPE_XBOX },
+@@ -373,6 +374,7 @@ static const struct xpad_device {
+ { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE },
+ { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller for Xbox", 0, XTYPE_XBOXONE },
+ { 0x2dc8, 0x3106, "8BitDo Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 },
+ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
+@@ -492,6 +494,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x07ff), /* Mad Catz Gamepad */
+ XPAD_XBOXONE_VENDOR(0x0b05), /* ASUS controllers */
+ XPAD_XBOX360_VENDOR(0x0c12), /* Zeroplus X-Box 360 controllers */
++ XPAD_XBOX360_VENDOR(0x0db0), /* Micro Star International X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x0e6f), /* 0x0e6f Xbox 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x0e6f), /* 0x0e6f Xbox One controllers */
+ XPAD_XBOX360_VENDOR(0x0f0d), /* Hori controllers */
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index e3e513cabc86ac..dda6dea7cce099 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3520,8 +3520,10 @@ static int domain_context_clear_one_cb(struct pci_dev *pdev, u16 alias, void *op
+ */
+ static void domain_context_clear(struct device_domain_info *info)
+ {
+- if (!dev_is_pci(info->dev))
++ if (!dev_is_pci(info->dev)) {
+ domain_context_clear_one(info, info->bus, info->devfn);
++ return;
++ }
+
+ pci_for_each_dma_alias(to_pci_dev(info->dev),
+ &domain_context_clear_one_cb, info);
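
The Intel IOMMU hunk adds braces and an early return so the non-PCI case no longer falls through into the PCI alias walk. Reduced to standalone C, the control-flow bug looks like this (function names are illustrative):

#include <stdbool.h>
#include <stdio.h>

static void clear_simple(void) { puts("simple device cleared"); }
static void clear_pci_aliases(void) { puts("PCI alias walk"); }

/* buggy: the simple case falls through into the PCI-only walk */
static void clear_buggy(bool is_pci)
{
	if (!is_pci)
		clear_simple();
	clear_pci_aliases();
}

/* fixed: handle the special case completely, then return */
static void clear_fixed(bool is_pci)
{
	if (!is_pci) {
		clear_simple();
		return;
	}
	clear_pci_aliases();
}

int main(void)
{
	puts("buggy, non-PCI device:");
	clear_buggy(false);	/* runs both paths */
	puts("fixed, non-PCI device:");
	clear_fixed(false);	/* runs only the simple path */
	return 0;
}
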
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index fdec478ba5e70a..ab597e74ba08ef 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -797,8 +797,8 @@ static struct its_vpe *its_build_vmapp_cmd(struct its_node *its,
+ its_encode_valid(cmd, desc->its_vmapp_cmd.valid);
+
+ if (!desc->its_vmapp_cmd.valid) {
++ alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);
+ if (is_v4_1(its)) {
+- alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);
+ its_encode_alloc(cmd, alloc);
+ /*
+ * Unmapping a VPE is self-synchronizing on GICv4.1,
+@@ -817,13 +817,13 @@ static struct its_vpe *its_build_vmapp_cmd(struct its_node *its,
+ its_encode_vpt_addr(cmd, vpt_addr);
+ its_encode_vpt_size(cmd, LPI_NRBITS - 1);
+
++ alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count);
++
+ if (!is_v4_1(its))
+ goto out;
+
+ vconf_addr = virt_to_phys(page_address(desc->its_vmapp_cmd.vpe->its_vm->vprop_page));
+
+- alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count);
+-
+ its_encode_alloc(cmd, alloc);
+
+ /*
+@@ -3806,6 +3806,13 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ struct cpumask *table_mask;
+ unsigned long flags;
+
++ /*
++ * Check if we're racing against a VPE being destroyed, for
++ * which we don't want to allow a VMOVP.
++ */
++ if (!atomic_read(&vpe->vmapp_count))
++ return -EINVAL;
++
+ /*
+ * Changing affinity is mega expensive, so let's be as lazy as
+ * we can and only do it if we really have to. Also, if mapped
+@@ -4463,9 +4470,8 @@ static int its_vpe_init(struct its_vpe *vpe)
+ raw_spin_lock_init(&vpe->vpe_lock);
+ vpe->vpe_id = vpe_id;
+ vpe->vpt_page = vpt_page;
+- if (gic_rdists->has_rvpeid)
+- atomic_set(&vpe->vmapp_count, 0);
+- else
++ atomic_set(&vpe->vmapp_count, 0);
++ if (!gic_rdists->has_rvpeid)
+ vpe->vpe_proxy_event = -1;
+
+ return 0;
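
The GICv3 ITS changes above hoist the vmapp_count updates out of the v4.1-only branches so the counter stays balanced on every ITS revision, and the affinity path can then refuse a VMOVP for a VPE whose count has already dropped to zero. A rough sketch of that refcount gating with C11 atomics (a simplified stand-in, not the GIC code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int map_count;	/* how many ITSes have the VPE mapped */

static void map_vpe(void)
{
	/* first mapping sees 0 from fetch_add: backing state is needed */
	if (atomic_fetch_add(&map_count, 1) == 0)
		puts("first map: allocate backing state");
}

static void unmap_vpe(void)
{
	/* last unmap drives the count to 0: backing state can go */
	if (atomic_fetch_sub(&map_count, 1) == 1)
		puts("last unmap: free backing state");
}

static int set_affinity(void)
{
	/* refuse to move a VPE that is already being torn down */
	if (atomic_load(&map_count) == 0)
		return -1;	/* -EINVAL in the kernel */
	return 0;
}

int main(void)
{
	map_vpe();
	printf("mapped:   set_affinity -> %d\n", set_affinity());
	unmap_vpe();
	printf("unmapped: set_affinity -> %d\n", set_affinity());
	return 0;
}
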
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index 4d9ea718086d30..70867bea560f78 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -123,16 +123,6 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
+ }
+ }
+
+-static void plic_irq_enable(struct irq_data *d)
+-{
+- plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 1);
+-}
+-
+-static void plic_irq_disable(struct irq_data *d)
+-{
+- plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 0);
+-}
+-
+ static void plic_irq_unmask(struct irq_data *d)
+ {
+ struct plic_priv *priv = irq_data_get_irq_chip_data(d);
+@@ -147,6 +137,17 @@ static void plic_irq_mask(struct irq_data *d)
+ writel(0, priv->regs + PRIORITY_BASE + d->hwirq * PRIORITY_PER_ID);
+ }
+
++static void plic_irq_enable(struct irq_data *d)
++{
++ plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 1);
++ plic_irq_unmask(d);
++}
++
++static void plic_irq_disable(struct irq_data *d)
++{
++ plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 0);
++}
++
+ static void plic_irq_eoi(struct irq_data *d)
+ {
+ struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
+@@ -577,8 +578,10 @@ static int plic_probe(struct fwnode_handle *fwnode)
+
+ handler->enable_save = kcalloc(DIV_ROUND_UP(nr_irqs, 32),
+ sizeof(*handler->enable_save), GFP_KERNEL);
+- if (!handler->enable_save)
++ if (!handler->enable_save) {
++ error = -ENOMEM;
+ goto fail_cleanup_contexts;
++ }
+ done:
+ for (hwirq = 1; hwirq <= nr_irqs; hwirq++) {
+ plic_toggle(handler, hwirq, 0);
+@@ -590,8 +593,10 @@ static int plic_probe(struct fwnode_handle *fwnode)
+
+ priv->irqdomain = irq_domain_add_linear(to_of_node(fwnode), nr_irqs + 1,
+ &plic_irqdomain_ops, priv);
+- if (WARN_ON(!priv->irqdomain))
++ if (WARN_ON(!priv->irqdomain)) {
++ error = -ENOMEM;
+ goto fail_cleanup_contexts;
++ }
+
+ /*
+ * We can have multiple PLIC instances so setup global state
+diff --git a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_otpe2p.c b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_otpe2p.c
+index 7c3d8bedf90ba2..a2ed477e0370bc 100644
+--- a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_otpe2p.c
++++ b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_otpe2p.c
+@@ -364,6 +364,7 @@ static int pci1xxxx_otp_eeprom_probe(struct auxiliary_device *aux_dev,
+ if (is_eeprom_responsive(priv)) {
+ priv->nvmem_config_eeprom.type = NVMEM_TYPE_EEPROM;
+ priv->nvmem_config_eeprom.name = EEPROM_NAME;
++ priv->nvmem_config_eeprom.id = NVMEM_DEVID_AUTO;
+ priv->nvmem_config_eeprom.dev = &aux_dev->dev;
+ priv->nvmem_config_eeprom.owner = THIS_MODULE;
+ priv->nvmem_config_eeprom.reg_read = pci1xxxx_eeprom_read;
+@@ -383,6 +384,7 @@ static int pci1xxxx_otp_eeprom_probe(struct auxiliary_device *aux_dev,
+
+ priv->nvmem_config_otp.type = NVMEM_TYPE_OTP;
+ priv->nvmem_config_otp.name = OTP_NAME;
++ priv->nvmem_config_otp.id = NVMEM_DEVID_AUTO;
+ priv->nvmem_config_otp.dev = &aux_dev->dev;
+ priv->nvmem_config_otp.owner = THIS_MODULE;
+ priv->nvmem_config_otp.reg_read = pci1xxxx_otp_read;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index dcd3f54ed0cf00..c1dcd93f6b1c3a 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -930,9 +930,6 @@ static int macb_mdiobus_register(struct macb *bp)
+ return ret;
+ }
+
+- if (of_phy_is_fixed_link(np))
+- return mdiobus_register(bp->mii_bus);
+-
+ /* Only create the PHY from the device tree if at least one PHY is
+ * described. Otherwise scan the entire MDIO bus. We do this to support
+ * old device tree that did not follow the best practices and did not
+@@ -953,8 +950,19 @@ static int macb_mdiobus_register(struct macb *bp)
+
+ static int macb_mii_init(struct macb *bp)
+ {
++ struct device_node *child, *np = bp->pdev->dev.of_node;
+ int err = -ENXIO;
+
++ /* With fixed-link, we don't need to register the MDIO bus,
++ * except if we have a child named "mdio" in the device tree.
++ * In that case, some devices may be attached to the MACB's MDIO bus.
++ */
++ child = of_get_child_by_name(np, "mdio");
++ if (child)
++ of_node_put(child);
++ else if (of_phy_is_fixed_link(np))
++ return macb_mii_probe(bp->dev);
++
+ /* Enable management port */
+ macb_writel(bp, NCR, MACB_BIT(MPE));
+
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index f04f42ea60c0f7..0b6b92b04b91d5 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -902,6 +902,7 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
+
+ if (unlikely(tx_frm_cnt && netif_carrier_ok(ndev) &&
+ __netif_subqueue_stopped(ndev, tx_ring->index) &&
++ !test_bit(ENETC_TX_DOWN, &priv->flags) &&
+ (enetc_bd_unused(tx_ring) >= ENETC_TXBDS_MAX_NEEDED))) {
+ netif_wake_subqueue(ndev, tx_ring->index);
+ }
+@@ -1380,6 +1381,9 @@ int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
+ int xdp_tx_bd_cnt, i, k;
+ int xdp_tx_frm_cnt = 0;
+
++ if (unlikely(test_bit(ENETC_TX_DOWN, &priv->flags)))
++ return -ENETDOWN;
++
+ enetc_lock_mdio();
+
+ tx_ring = priv->xdp_tx_ring[smp_processor_id()];
+@@ -1524,7 +1528,6 @@ static void enetc_xdp_drop(struct enetc_bdr *rx_ring, int rx_ring_first,
+ &rx_ring->rx_swbd[rx_ring_first]);
+ enetc_bdr_idx_inc(rx_ring, &rx_ring_first);
+ }
+- rx_ring->stats.xdp_drops++;
+ }
+
+ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+@@ -1589,6 +1592,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ fallthrough;
+ case XDP_DROP:
+ enetc_xdp_drop(rx_ring, orig_i, i);
++ rx_ring->stats.xdp_drops++;
+ break;
+ case XDP_PASS:
+ rxbd = orig_rxbd;
+@@ -1605,6 +1609,12 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ break;
+ case XDP_TX:
+ tx_ring = priv->xdp_tx_ring[rx_ring->index];
++ if (unlikely(test_bit(ENETC_TX_DOWN, &priv->flags))) {
++ enetc_xdp_drop(rx_ring, orig_i, i);
++ tx_ring->stats.xdp_tx_drops++;
++ break;
++ }
++
+ xdp_tx_bd_cnt = enetc_rx_swbd_to_xdp_tx_swbd(xdp_tx_arr,
+ rx_ring,
+ orig_i, i);
+@@ -2226,18 +2236,24 @@ static void enetc_enable_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+ enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
+ }
+
+-static void enetc_enable_bdrs(struct enetc_ndev_priv *priv)
++static void enetc_enable_rx_bdrs(struct enetc_ndev_priv *priv)
+ {
+ struct enetc_hw *hw = &priv->si->hw;
+ int i;
+
+- for (i = 0; i < priv->num_tx_rings; i++)
+- enetc_enable_txbdr(hw, priv->tx_ring[i]);
+-
+ for (i = 0; i < priv->num_rx_rings; i++)
+ enetc_enable_rxbdr(hw, priv->rx_ring[i]);
+ }
+
++static void enetc_enable_tx_bdrs(struct enetc_ndev_priv *priv)
++{
++ struct enetc_hw *hw = &priv->si->hw;
++ int i;
++
++ for (i = 0; i < priv->num_tx_rings; i++)
++ enetc_enable_txbdr(hw, priv->tx_ring[i]);
++}
++
+ static void enetc_disable_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+ {
+ int idx = rx_ring->index;
+@@ -2254,18 +2270,24 @@ static void enetc_disable_txbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
+ enetc_txbdr_wr(hw, idx, ENETC_TBMR, 0);
+ }
+
+-static void enetc_disable_bdrs(struct enetc_ndev_priv *priv)
++static void enetc_disable_rx_bdrs(struct enetc_ndev_priv *priv)
+ {
+ struct enetc_hw *hw = &priv->si->hw;
+ int i;
+
+- for (i = 0; i < priv->num_tx_rings; i++)
+- enetc_disable_txbdr(hw, priv->tx_ring[i]);
+-
+ for (i = 0; i < priv->num_rx_rings; i++)
+ enetc_disable_rxbdr(hw, priv->rx_ring[i]);
+ }
+
++static void enetc_disable_tx_bdrs(struct enetc_ndev_priv *priv)
++{
++ struct enetc_hw *hw = &priv->si->hw;
++ int i;
++
++ for (i = 0; i < priv->num_tx_rings; i++)
++ enetc_disable_txbdr(hw, priv->tx_ring[i]);
++}
++
+ static void enetc_wait_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+ {
+ int delay = 8, timeout = 100;
+@@ -2463,9 +2485,13 @@ void enetc_start(struct net_device *ndev)
+ enable_irq(irq);
+ }
+
+- enetc_enable_bdrs(priv);
++ enetc_enable_tx_bdrs(priv);
++
++ enetc_enable_rx_bdrs(priv);
+
+ netif_tx_start_all_queues(ndev);
++
++ clear_bit(ENETC_TX_DOWN, &priv->flags);
+ }
+ EXPORT_SYMBOL_GPL(enetc_start);
+
+@@ -2523,9 +2549,15 @@ void enetc_stop(struct net_device *ndev)
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ int i;
+
++ set_bit(ENETC_TX_DOWN, &priv->flags);
++
+ netif_tx_stop_all_queues(ndev);
+
+- enetc_disable_bdrs(priv);
++ enetc_disable_rx_bdrs(priv);
++
++ enetc_wait_bdrs(priv);
++
++ enetc_disable_tx_bdrs(priv);
+
+ for (i = 0; i < priv->bdr_int_num; i++) {
+ int irq = pci_irq_vector(priv->si->pdev,
+@@ -2536,8 +2568,6 @@ void enetc_stop(struct net_device *ndev)
+ napi_disable(&priv->int_vector[i]->napi);
+ }
+
+- enetc_wait_bdrs(priv);
+-
+ enetc_clear_interrupts(priv);
+ }
+ EXPORT_SYMBOL_GPL(enetc_stop);
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
+index a9c2ff22431c57..4f3b314aeead98 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc.h
+@@ -328,6 +328,7 @@ enum enetc_active_offloads {
+
+ enum enetc_flags_bit {
+ ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS = 0,
++ ENETC_TX_DOWN,
+ };
+
+ /* interrupt coalescing modes */
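
The enetc changes introduce an ENETC_TX_DOWN flag that is set before the TX path is torn down and checked by the wake, xmit and XDP_TX paths, and they reorder enetc_stop() to disable RX, wait for the TX rings to drain, and only then disable TX. A minimal sketch of that fence-off-then-drain idea with a C11 atomic flag (the print statements stand in for the real ring operations):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool tx_down;	/* ENETC_TX_DOWN stand-in */

static int xmit(void)
{
	/* new transmit attempts bail out once teardown has begun */
	if (atomic_load(&tx_down))
		return -1;	/* -ENETDOWN in the driver */
	puts("frame queued");
	return 0;
}

static void stop(void)
{
	atomic_store(&tx_down, true);	/* 1. fence off new TX work        */
	puts("stack queues stopped");	/* 2. netif_tx_stop_all_queues()   */
	puts("RX rings disabled");	/* 3. quiesce RX (feeds XDP_TX)    */
	puts("TX rings drained");	/* 4. wait for TX, then disable it */
}

int main(void)
{
	xmit();		/* succeeds while the interface is up */
	stop();
	printf("xmit after stop -> %d\n", xmit());
	return 0;
}
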
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 5e8fac50f945d4..a4eb6edb850add 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -90,6 +90,30 @@
+ #define FEC_PTP_MAX_NSEC_PERIOD 4000000000ULL
+ #define FEC_PTP_MAX_NSEC_COUNTER 0x80000000ULL
+
++/**
++ * fec_ptp_read - read raw cycle counter (to be used by time counter)
++ * @cc: the cyclecounter structure
++ *
++ * this function reads the cyclecounter registers and is called by the
++ * cyclecounter structure used to construct a ns counter from the
++ * arbitrary fixed point registers
++ */
++static u64 fec_ptp_read(const struct cyclecounter *cc)
++{
++ struct fec_enet_private *fep =
++ container_of(cc, struct fec_enet_private, cc);
++ u32 tempval;
++
++ tempval = readl(fep->hwp + FEC_ATIME_CTRL);
++ tempval |= FEC_T_CTRL_CAPTURE;
++ writel(tempval, fep->hwp + FEC_ATIME_CTRL);
++
++ if (fep->quirks & FEC_QUIRK_BUG_CAPTURE)
++ udelay(1);
++
++ return readl(fep->hwp + FEC_ATIME);
++}
++
+ /**
+ * fec_ptp_enable_pps
+ * @fep: the fec_enet_private structure handle
+@@ -136,7 +160,7 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
+ * NSEC_PER_SEC - ts.tv_nsec. Add the remaining nanoseconds
+ * to current timer would be next second.
+ */
+- tempval = fep->cc.read(&fep->cc);
++ tempval = fec_ptp_read(&fep->cc);
+ /* Convert the ptp local counter to 1588 timestamp */
+ ns = timecounter_cyc2time(&fep->tc, tempval);
+ ts = ns_to_timespec64(ns);
+@@ -211,13 +235,7 @@ static int fec_ptp_pps_perout(struct fec_enet_private *fep)
+ timecounter_read(&fep->tc);
+
+ /* Get the current ptp hardware time counter */
+- temp_val = readl(fep->hwp + FEC_ATIME_CTRL);
+- temp_val |= FEC_T_CTRL_CAPTURE;
+- writel(temp_val, fep->hwp + FEC_ATIME_CTRL);
+- if (fep->quirks & FEC_QUIRK_BUG_CAPTURE)
+- udelay(1);
+-
+- ptp_hc = readl(fep->hwp + FEC_ATIME);
++ ptp_hc = fec_ptp_read(&fep->cc);
+
+ /* Convert the ptp local counter to 1588 timestamp */
+ curr_time = timecounter_cyc2time(&fep->tc, ptp_hc);
+@@ -271,30 +289,6 @@ static enum hrtimer_restart fec_ptp_pps_perout_handler(struct hrtimer *timer)
+ return HRTIMER_NORESTART;
+ }
+
+-/**
+- * fec_ptp_read - read raw cycle counter (to be used by time counter)
+- * @cc: the cyclecounter structure
+- *
+- * this function reads the cyclecounter registers and is called by the
+- * cyclecounter structure used to construct a ns counter from the
+- * arbitrary fixed point registers
+- */
+-static u64 fec_ptp_read(const struct cyclecounter *cc)
+-{
+- struct fec_enet_private *fep =
+- container_of(cc, struct fec_enet_private, cc);
+- u32 tempval;
+-
+- tempval = readl(fep->hwp + FEC_ATIME_CTRL);
+- tempval |= FEC_T_CTRL_CAPTURE;
+- writel(tempval, fep->hwp + FEC_ATIME_CTRL);
+-
+- if (fep->quirks & FEC_QUIRK_BUG_CAPTURE)
+- udelay(1);
+-
+- return readl(fep->hwp + FEC_ATIME);
+-}
+-
+ /**
+ * fec_ptp_start_cyclecounter - create the cycle counter from hw
+ * @ndev: network device
+diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+index f2a5a36fdacd43..7251121ab196e3 100644
+--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
++++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+@@ -1444,6 +1444,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
+
+ ret = vcap_del_rule(&test_vctrl, &test_netdev, id);
+ KUNIT_EXPECT_EQ(test, 0, ret);
++
++ vcap_free_rule(rule);
+ }
+
+ static void vcap_api_set_rule_counter_test(struct kunit *test)
+diff --git a/drivers/parport/procfs.c b/drivers/parport/procfs.c
+index 3ef486cd3d6d57..3880460e67f25a 100644
+--- a/drivers/parport/procfs.c
++++ b/drivers/parport/procfs.c
+@@ -51,12 +51,12 @@ static int do_active_device(const struct ctl_table *table, int write,
+
+ for (dev = port->devices; dev ; dev = dev->next) {
+ if(dev == port->cad) {
+- len += snprintf(buffer, sizeof(buffer), "%s\n", dev->name);
++ len += scnprintf(buffer, sizeof(buffer), "%s\n", dev->name);
+ }
+ }
+
+ if(!len) {
+- len += snprintf(buffer, sizeof(buffer), "%s\n", "none");
++ len += scnprintf(buffer, sizeof(buffer), "%s\n", "none");
+ }
+
+ if (len > *lenp)
+@@ -87,19 +87,19 @@ static int do_autoprobe(const struct ctl_table *table, int write,
+ }
+
+ if ((str = info->class_name) != NULL)
+- len += snprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str);
++ len += scnprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str);
+
+ if ((str = info->model) != NULL)
+- len += snprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str);
++ len += scnprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str);
+
+ if ((str = info->mfr) != NULL)
+- len += snprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str);
++ len += scnprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str);
+
+ if ((str = info->description) != NULL)
+- len += snprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str);
++ len += scnprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str);
+
+ if ((str = info->cmdset) != NULL)
+- len += snprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str);
++ len += scnprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str);
+
+ if (len > *lenp)
+ len = *lenp;
+@@ -128,7 +128,7 @@ static int do_hardware_base_addr(const struct ctl_table *table, int write,
+ if (write) /* permissions prevent this anyway */
+ return -EACCES;
+
+- len += snprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi);
++ len += scnprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi);
+
+ if (len > *lenp)
+ len = *lenp;
+@@ -155,7 +155,7 @@ static int do_hardware_irq(const struct ctl_table *table, int write,
+ if (write) /* permissions prevent this anyway */
+ return -EACCES;
+
+- len += snprintf (buffer, sizeof(buffer), "%d\n", port->irq);
++ len += scnprintf (buffer, sizeof(buffer), "%d\n", port->irq);
+
+ if (len > *lenp)
+ len = *lenp;
+@@ -182,7 +182,7 @@ static int do_hardware_dma(const struct ctl_table *table, int write,
+ if (write) /* permissions prevent this anyway */
+ return -EACCES;
+
+- len += snprintf (buffer, sizeof(buffer), "%d\n", port->dma);
++ len += scnprintf (buffer, sizeof(buffer), "%d\n", port->dma);
+
+ if (len > *lenp)
+ len = *lenp;
+@@ -213,7 +213,7 @@ static int do_hardware_modes(const struct ctl_table *table, int write,
+ #define printmode(x) \
+ do { \
+ if (port->modes & PARPORT_MODE_##x) \
+- len += snprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \
++ len += scnprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \
+ } while (0)
+ int f = 0;
+ printmode(PCSPP);
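
The parport conversions above matter because snprintf() returns the length the output would have needed, so accumulating its return value can push len past sizeof(buffer) and make the later sizeof(buffer) - len arithmetic wrap; the kernel's scnprintf() returns the bytes actually written. A userspace sketch with a hand-rolled scnprintf()-style helper (my_scnprintf() is an illustrative stand-in, not a libc function):

#include <stdarg.h>
#include <stdio.h>

/* minimal stand-in for the kernel's scnprintf(): returns bytes written */
static size_t my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list ap;
	int ret;

	if (size == 0)
		return 0;
	va_start(ap, fmt);
	ret = vsnprintf(buf, size, fmt, ap);
	va_end(ap);
	if (ret < 0)
		return 0;
	return (size_t)ret >= size ? size - 1 : (size_t)ret;
}

int main(void)
{
	char buf[16];
	size_t len = 0;

	/* snprintf() reports 26 here even though only 15 bytes fit,
	 * so "len += snprintf(...)" overshoots the buffer size. */
	printf("snprintf reports: %d\n",
	       snprintf(buf, sizeof(buf), "%s", "abcdefghijklmnopqrstuvwxyz"));

	/* the scnprintf()-style helper never reports more than fits */
	len += my_scnprintf(buf + len, sizeof(buf) - len, "%s", "abcdefghij");
	len += my_scnprintf(buf + len, sizeof(buf) - len, "%s", "klmnopqrstuvwxyz");
	printf("accumulated len = %zu (never exceeds %zu)\n",
	       len, sizeof(buf) - 1);
	return 0;
}
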
+diff --git a/drivers/pinctrl/intel/pinctrl-intel-platform.c b/drivers/pinctrl/intel/pinctrl-intel-platform.c
+index 4a19ab3b4ba743..2d5ba8278fb9bc 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel-platform.c
++++ b/drivers/pinctrl/intel/pinctrl-intel-platform.c
+@@ -90,7 +90,6 @@ static int intel_platform_pinctrl_prepare_community(struct device *dev,
+ struct intel_community *community,
+ struct intel_platform_pins *pins)
+ {
+- struct fwnode_handle *child;
+ struct intel_padgroup *gpps;
+ unsigned int group;
+ size_t ngpps;
+@@ -131,7 +130,7 @@ static int intel_platform_pinctrl_prepare_community(struct device *dev,
+ return -ENOMEM;
+
+ group = 0;
+- device_for_each_child_node(dev, child) {
++ device_for_each_child_node_scoped(dev, child) {
+ struct intel_padgroup *gpp = &gpps[group];
+
+ gpp->reg_num = group;
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-ma35.c b/drivers/pinctrl/nuvoton/pinctrl-ma35.c
+index 1fa00a23534a9d..59c4e7c6cddea1 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-ma35.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-ma35.c
+@@ -218,7 +218,7 @@ static int ma35_pinctrl_dt_node_to_map_func(struct pinctrl_dev *pctldev,
+ }
+
+ map_num += grp->npins;
+- new_map = devm_kcalloc(pctldev->dev, map_num, sizeof(*new_map), GFP_KERNEL);
++ new_map = kcalloc(map_num, sizeof(*new_map), GFP_KERNEL);
+ if (!new_map)
+ return -ENOMEM;
+
+diff --git a/drivers/pinctrl/pinctrl-apple-gpio.c b/drivers/pinctrl/pinctrl-apple-gpio.c
+index 3751c7de37aa9f..f861e63f411521 100644
+--- a/drivers/pinctrl/pinctrl-apple-gpio.c
++++ b/drivers/pinctrl/pinctrl-apple-gpio.c
+@@ -474,6 +474,9 @@ static int apple_gpio_pinctrl_probe(struct platform_device *pdev)
+ for (i = 0; i < npins; i++) {
+ pins[i].number = i;
+ pins[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "PIN%u", i);
++ if (!pins[i].name)
++ return -ENOMEM;
++
+ pins[i].drv_data = pctl;
+ pin_names[i] = pins[i].name;
+ pin_nums[i] = i;
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index be9b8c01016708..d1ab8450ea93eb 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -1955,21 +1955,21 @@ static void ocelot_irq_handler(struct irq_desc *desc)
+ unsigned int reg = 0, irq, i;
+ unsigned long irqs;
+
++ chained_irq_enter(parent_chip, desc);
++
+ for (i = 0; i < info->stride; i++) {
+ regmap_read(info->map, id_reg + 4 * i, &reg);
+ if (!reg)
+ continue;
+
+- chained_irq_enter(parent_chip, desc);
+-
+ irqs = reg;
+
+ for_each_set_bit(irq, &irqs,
+ min(32U, info->desc->npins - 32 * i))
+ generic_handle_domain_irq(chip->irq.domain, irq + 32 * i);
+-
+- chained_irq_exit(parent_chip, desc);
+ }
++
++ chained_irq_exit(parent_chip, desc);
+ }
+
+ static int ocelot_gpiochip_register(struct platform_device *pdev,
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index a8673739871d81..5b7fa77c118436 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1374,10 +1374,15 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+
+ for (i = 0; i < npins; i++) {
+ stm32_pin = stm32_pctrl_get_desc_pin_from_gpio(pctl, bank, i);
+- if (stm32_pin && stm32_pin->pin.name)
++ if (stm32_pin && stm32_pin->pin.name) {
+ names[i] = devm_kasprintf(dev, GFP_KERNEL, "%s", stm32_pin->pin.name);
+- else
++ if (!names[i]) {
++ err = -ENOMEM;
++ goto err_clk;
++ }
++ } else {
+ names[i] = NULL;
++ }
+ }
+
+ bank->gpio_chip.names = (const char * const *)names;
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index f3621adbd5debc..fbffd451031fdb 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -1195,7 +1195,8 @@ sclp_reboot_event(struct notifier_block *this, unsigned long event, void *ptr)
+ }
+
+ static struct notifier_block sclp_reboot_notifier = {
+- .notifier_call = sclp_reboot_event
++ .notifier_call = sclp_reboot_event,
++ .priority = INT_MIN,
+ };
+
+ static ssize_t con_pages_show(struct device_driver *dev, char *buf)
+diff --git a/drivers/s390/char/sclp_vt220.c b/drivers/s390/char/sclp_vt220.c
+index 218ae604f737ff..33b9c968dbcba6 100644
+--- a/drivers/s390/char/sclp_vt220.c
++++ b/drivers/s390/char/sclp_vt220.c
+@@ -319,7 +319,7 @@ sclp_vt220_add_msg(struct sclp_vt220_request *request,
+ buffer = (void *) ((addr_t) sccb + sccb->header.length);
+
+ if (convertlf) {
+- /* Perform Linefeed conversion (0x0a -> 0x0a 0x0d)*/
++ /* Perform Linefeed conversion (0x0a -> 0x0d 0x0a)*/
+ for (from=0, to=0;
+ (from < count) && (to < sclp_vt220_space_left(request));
+ from++) {
+@@ -328,8 +328,8 @@ sclp_vt220_add_msg(struct sclp_vt220_request *request,
+ /* Perform conversion */
+ if (c == 0x0a) {
+ if (to + 1 < sclp_vt220_space_left(request)) {
+- ((unsigned char *) buffer)[to++] = c;
+ ((unsigned char *) buffer)[to++] = 0x0d;
++ ((unsigned char *) buffer)[to++] = c;
+ } else
+ break;
+
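
The sclp_vt220 fix swaps the emitted pair so a linefeed expands to CR then LF (0x0d 0x0a), the order terminals expect. The conversion loop, reduced to plain C (the buffer handling is illustrative, not the SCLP request code):

#include <stdio.h>

/* expand each '\n' to "\r\n", stopping when the output is full */
static size_t lf_to_crlf(const char *src, char *dst, size_t dst_size)
{
	size_t from, to = 0;

	for (from = 0; src[from] && to < dst_size; from++) {
		if (src[from] == '\n') {
			if (to + 1 >= dst_size)
				break;		/* no room for both bytes */
			dst[to++] = '\r';	/* CR first... */
			dst[to++] = '\n';	/* ...then LF */
		} else {
			dst[to++] = src[from];
		}
	}
	return to;
}

int main(void)
{
	char out[32];
	size_t n = lf_to_crlf("one\ntwo\n", out, sizeof(out));

	for (size_t i = 0; i < n; i++)
		printf("%02x ", (unsigned char)out[i]);
	putchar('\n');
	return 0;
}
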
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index dc2cdd5f031114..3822efe349e13f 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -541,8 +541,8 @@ struct mpi3mr_hba_port {
+ * @port_list: List of ports belonging to a SAS node
+ * @num_phys: Number of phys associated with port
+ * @marked_responding: used while refreshing the sas ports
+- * @lowest_phy: lowest phy ID of current sas port
+- * @phy_mask: phy_mask of current sas port
++ * @lowest_phy: lowest phy ID of current sas port, valid for controller port
++ * @phy_mask: phy_mask of current sas port, valid for controller port
+ * @hba_port: HBA port entry
+ * @remote_identify: Attached device identification
+ * @rphy: SAS transport layer rphy object
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_transport.c b/drivers/scsi/mpi3mr/mpi3mr_transport.c
+index ccd23def2e0cfa..0ba9e6a6a13c6d 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_transport.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_transport.c
+@@ -590,12 +590,13 @@ static enum sas_linkrate mpi3mr_convert_phy_link_rate(u8 link_rate)
+ * @mrioc: Adapter instance reference
+ * @mr_sas_port: Internal Port object
+ * @mr_sas_phy: Internal Phy object
++ * @host_node: Flag to indicate this is a host_node
+ *
+ * Return: None.
+ */
+ static void mpi3mr_delete_sas_phy(struct mpi3mr_ioc *mrioc,
+ struct mpi3mr_sas_port *mr_sas_port,
+- struct mpi3mr_sas_phy *mr_sas_phy)
++ struct mpi3mr_sas_phy *mr_sas_phy, u8 host_node)
+ {
+ u64 sas_address = mr_sas_port->remote_identify.sas_address;
+
+@@ -605,9 +606,13 @@ static void mpi3mr_delete_sas_phy(struct mpi3mr_ioc *mrioc,
+
+ list_del(&mr_sas_phy->port_siblings);
+ mr_sas_port->num_phys--;
+- mr_sas_port->phy_mask &= ~(1 << mr_sas_phy->phy_id);
+- if (mr_sas_port->lowest_phy == mr_sas_phy->phy_id)
+- mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;
++
++ if (host_node) {
++ mr_sas_port->phy_mask &= ~(1 << mr_sas_phy->phy_id);
++
++ if (mr_sas_port->lowest_phy == mr_sas_phy->phy_id)
++ mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;
++ }
+ sas_port_delete_phy(mr_sas_port->port, mr_sas_phy->phy);
+ mr_sas_phy->phy_belongs_to_port = 0;
+ }
+@@ -617,12 +622,13 @@ static void mpi3mr_delete_sas_phy(struct mpi3mr_ioc *mrioc,
+ * @mrioc: Adapter instance reference
+ * @mr_sas_port: Internal Port object
+ * @mr_sas_phy: Internal Phy object
++ * @host_node: Flag to indicate this is a host_node
+ *
+ * Return: None.
+ */
+ static void mpi3mr_add_sas_phy(struct mpi3mr_ioc *mrioc,
+ struct mpi3mr_sas_port *mr_sas_port,
+- struct mpi3mr_sas_phy *mr_sas_phy)
++ struct mpi3mr_sas_phy *mr_sas_phy, u8 host_node)
+ {
+ u64 sas_address = mr_sas_port->remote_identify.sas_address;
+
+@@ -632,9 +638,12 @@ static void mpi3mr_add_sas_phy(struct mpi3mr_ioc *mrioc,
+
+ list_add_tail(&mr_sas_phy->port_siblings, &mr_sas_port->phy_list);
+ mr_sas_port->num_phys++;
+- mr_sas_port->phy_mask |= (1 << mr_sas_phy->phy_id);
+- if (mr_sas_phy->phy_id < mr_sas_port->lowest_phy)
+- mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;
++ if (host_node) {
++ mr_sas_port->phy_mask |= (1 << mr_sas_phy->phy_id);
++
++ if (mr_sas_phy->phy_id < mr_sas_port->lowest_phy)
++ mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;
++ }
+ sas_port_add_phy(mr_sas_port->port, mr_sas_phy->phy);
+ mr_sas_phy->phy_belongs_to_port = 1;
+ }
+@@ -675,7 +684,7 @@ static void mpi3mr_add_phy_to_an_existing_port(struct mpi3mr_ioc *mrioc,
+ if (srch_phy == mr_sas_phy)
+ return;
+ }
+- mpi3mr_add_sas_phy(mrioc, mr_sas_port, mr_sas_phy);
++ mpi3mr_add_sas_phy(mrioc, mr_sas_port, mr_sas_phy, mr_sas_node->host_node);
+ return;
+ }
+ }
+@@ -736,7 +745,7 @@ static void mpi3mr_del_phy_from_an_existing_port(struct mpi3mr_ioc *mrioc,
+ mpi3mr_delete_sas_port(mrioc, mr_sas_port);
+ else
+ mpi3mr_delete_sas_phy(mrioc, mr_sas_port,
+- mr_sas_phy);
++ mr_sas_phy, mr_sas_node->host_node);
+ return;
+ }
+ }
+@@ -1028,7 +1037,7 @@ mpi3mr_alloc_hba_port(struct mpi3mr_ioc *mrioc, u16 port_id)
+ /**
+ * mpi3mr_get_hba_port_by_id - find hba port by id
+ * @mrioc: Adapter instance reference
+- * @port_id - Port ID to search
++ * @port_id: Port ID to search
+ *
+ * Return: mpi3mr_hba_port reference for the matched port
+ */
+@@ -1367,7 +1376,8 @@ static struct mpi3mr_sas_port *mpi3mr_sas_port_add(struct mpi3mr_ioc *mrioc,
+ mpi3mr_sas_port_sanity_check(mrioc, mr_sas_node,
+ mr_sas_port->remote_identify.sas_address, hba_port);
+
+- if (mr_sas_node->num_phys >= sizeof(mr_sas_port->phy_mask) * 8)
++ if (mr_sas_node->host_node && mr_sas_node->num_phys >=
++ sizeof(mr_sas_port->phy_mask) * 8)
+ ioc_info(mrioc, "max port count %u could be too high\n",
+ mr_sas_node->num_phys);
+
+@@ -1377,7 +1387,7 @@ static struct mpi3mr_sas_port *mpi3mr_sas_port_add(struct mpi3mr_ioc *mrioc,
+ (mr_sas_node->phy[i].hba_port != hba_port))
+ continue;
+
+- if (i >= sizeof(mr_sas_port->phy_mask) * 8) {
++ if (mr_sas_node->host_node && (i >= sizeof(mr_sas_port->phy_mask) * 8)) {
+ ioc_warn(mrioc, "skipping port %u, max allowed value is %zu\n",
+ i, sizeof(mr_sas_port->phy_mask) * 8);
+ goto out_fail;
+@@ -1385,7 +1395,8 @@ static struct mpi3mr_sas_port *mpi3mr_sas_port_add(struct mpi3mr_ioc *mrioc,
+ list_add_tail(&mr_sas_node->phy[i].port_siblings,
+ &mr_sas_port->phy_list);
+ mr_sas_port->num_phys++;
+- mr_sas_port->phy_mask |= (1 << i);
++ if (mr_sas_node->host_node)
++ mr_sas_port->phy_mask |= (1 << i);
+ }
+
+ if (!mr_sas_port->num_phys) {
+@@ -1394,7 +1405,8 @@ static struct mpi3mr_sas_port *mpi3mr_sas_port_add(struct mpi3mr_ioc *mrioc,
+ goto out_fail;
+ }
+
+- mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;
++ if (mr_sas_node->host_node)
++ mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;
+
+ if (mr_sas_port->remote_identify.device_type == SAS_END_DEVICE) {
+ tgtdev = mpi3mr_get_tgtdev_by_addr(mrioc,
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 5d37a09849163f..252849910588f6 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -3157,6 +3157,8 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ mutex_unlock(&gsm->mutex);
+ /* Now wipe the queues */
+ tty_ldisc_flush(gsm->tty);
++
++ guard(spinlock_irqsave)(&gsm->tx_lock);
+ list_for_each_entry_safe(txq, ntxq, &gsm->tx_ctrl_list, list)
+ kfree(txq);
+ INIT_LIST_HEAD(&gsm->tx_ctrl_list);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 67d4a72eda770b..90974d338f3c0b 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -762,6 +762,21 @@ static irqreturn_t __imx_uart_rtsint(int irq, void *dev_id)
+
+ imx_uart_writel(sport, USR1_RTSD, USR1);
+ usr1 = imx_uart_readl(sport, USR1) & USR1_RTSS;
++ /*
++ * Update sport->old_status here, so any follow-up calls to
++ * imx_uart_mctrl_check() will be able to recognize that RTS
++ * state changed since last imx_uart_mctrl_check() call.
++ *
++ * In case RTS has been detected as asserted here and later on
++ * deasserted by the time imx_uart_mctrl_check() was called,
++ * imx_uart_mctrl_check() can detect the RTS state change and
++ * trigger uart_handle_cts_change() to unblock the port for
++ * further TX transfers.
++ */
++ if (usr1 & USR1_RTSS)
++ sport->old_status |= TIOCM_CTS;
++ else
++ sport->old_status &= ~TIOCM_CTS;
+ uart_handle_cts_change(&sport->port, usr1);
+ wake_up_interruptible(&sport->port.state->port.delta_msr_wait);
+
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index f8f6e9466b400d..3acba0887fca40 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -146,6 +146,7 @@ static struct uart_driver qcom_geni_console_driver;
+ static struct uart_driver qcom_geni_uart_driver;
+
+ static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport);
++static int qcom_geni_serial_port_setup(struct uart_port *uport);
+
+ static inline struct qcom_geni_serial_port *to_dev_port(struct uart_port *uport)
+ {
+@@ -393,6 +394,23 @@ static void qcom_geni_serial_poll_put_char(struct uart_port *uport,
+ writel(M_TX_FIFO_WATERMARK_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ qcom_geni_serial_poll_tx_done(uport);
+ }
++
++static int qcom_geni_serial_poll_init(struct uart_port *uport)
++{
++ struct qcom_geni_serial_port *port = to_dev_port(uport);
++ int ret;
++
++ if (!port->setup) {
++ ret = qcom_geni_serial_port_setup(uport);
++ if (ret)
++ return ret;
++ }
++
++ if (!qcom_geni_serial_secondary_active(uport))
++ geni_se_setup_s_cmd(&port->se, UART_START_READ, 0);
++
++ return 0;
++}
+ #endif
+
+ #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
+@@ -769,17 +787,27 @@ static void qcom_geni_serial_start_rx_fifo(struct uart_port *uport)
+ static void qcom_geni_serial_stop_rx_dma(struct uart_port *uport)
+ {
+ struct qcom_geni_serial_port *port = to_dev_port(uport);
++ bool done;
+
+ if (!qcom_geni_serial_secondary_active(uport))
+ return;
+
+ geni_se_cancel_s_cmd(&port->se);
+- qcom_geni_serial_poll_bit(uport, SE_GENI_S_IRQ_STATUS,
+- S_CMD_CANCEL_EN, true);
+-
+- if (qcom_geni_serial_secondary_active(uport))
++ done = qcom_geni_serial_poll_bit(uport, SE_DMA_RX_IRQ_STAT,
++ RX_EOT, true);
++ if (done) {
++ writel(RX_EOT | RX_DMA_DONE,
++ uport->membase + SE_DMA_RX_IRQ_CLR);
++ } else {
+ qcom_geni_serial_abort_rx(uport);
+
++ writel(1, uport->membase + SE_DMA_RX_FSM_RST);
++ qcom_geni_serial_poll_bit(uport, SE_DMA_RX_IRQ_STAT,
++ RX_RESET_DONE, true);
++ writel(RX_RESET_DONE | RX_DMA_DONE,
++ uport->membase + SE_DMA_RX_IRQ_CLR);
++ }
++
+ if (port->rx_dma_addr) {
+ geni_se_rx_dma_unprep(&port->se, port->rx_dma_addr,
+ DMA_RX_BUF_SIZE);
+@@ -1078,10 +1106,12 @@ static void qcom_geni_serial_shutdown(struct uart_port *uport)
+ {
+ disable_irq(uport->irq);
+
++ uart_port_lock_irq(uport);
+ qcom_geni_serial_stop_tx(uport);
+ qcom_geni_serial_stop_rx(uport);
+
+ qcom_geni_serial_cancel_tx_cmd(uport);
++ uart_port_unlock_irq(uport);
+ }
+
+ static void qcom_geni_serial_flush_buffer(struct uart_port *uport)
+@@ -1134,7 +1164,6 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
+ false, true, true);
+ geni_se_init(&port->se, UART_RX_WM, port->rx_fifo_depth - 2);
+ geni_se_select_mode(&port->se, port->dev_data->mode);
+- qcom_geni_serial_start_rx(uport);
+ port->setup = true;
+
+ return 0;
+@@ -1150,6 +1179,11 @@ static int qcom_geni_serial_startup(struct uart_port *uport)
+ if (ret)
+ return ret;
+ }
++
++ uart_port_lock_irq(uport);
++ qcom_geni_serial_start_rx(uport);
++ uart_port_unlock_irq(uport);
++
+ enable_irq(uport->irq);
+
+ return 0;
+@@ -1235,7 +1269,6 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ unsigned int avg_bw_core;
+ unsigned long timeout;
+
+- qcom_geni_serial_stop_rx(uport);
+ /* baud rate */
+ baud = uart_get_baud_rate(uport, termios, old, 300, 4000000);
+
+@@ -1251,7 +1284,7 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ dev_err(port->se.dev,
+ "Couldn't find suitable clock rate for %u\n",
+ baud * sampling_rate);
+- goto out_restart_rx;
++ return;
+ }
+
+ dev_dbg(port->se.dev, "desired_rate = %u, clk_rate = %lu, clk_div = %u\n",
+@@ -1342,8 +1375,6 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN);
+ writel(ser_clk_cfg, uport->membase + GENI_SER_M_CLK_CFG);
+ writel(ser_clk_cfg, uport->membase + GENI_SER_S_CLK_CFG);
+-out_restart_rx:
+- qcom_geni_serial_start_rx(uport);
+ }
+
+ #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
+@@ -1564,7 +1595,7 @@ static const struct uart_ops qcom_geni_console_pops = {
+ #ifdef CONFIG_CONSOLE_POLL
+ .poll_get_char = qcom_geni_serial_get_char,
+ .poll_put_char = qcom_geni_serial_poll_put_char,
+- .poll_init = qcom_geni_serial_port_setup,
++ .poll_init = qcom_geni_serial_poll_init,
+ #endif
+ .pm = qcom_geni_serial_pm,
+ };
+@@ -1763,38 +1794,6 @@ static int qcom_geni_serial_sys_resume(struct device *dev)
+ return ret;
+ }
+
+-static int qcom_geni_serial_sys_hib_resume(struct device *dev)
+-{
+- int ret = 0;
+- struct uart_port *uport;
+- struct qcom_geni_private_data *private_data;
+- struct qcom_geni_serial_port *port = dev_get_drvdata(dev);
+-
+- uport = &port->uport;
+- private_data = uport->private_data;
+-
+- if (uart_console(uport)) {
+- geni_icc_set_tag(&port->se, QCOM_ICC_TAG_ALWAYS);
+- geni_icc_set_bw(&port->se);
+- ret = uart_resume_port(private_data->drv, uport);
+- /*
+- * For hibernation usecase clients for
+- * console UART won't call port setup during restore,
+- * hence call port setup for console uart.
+- */
+- qcom_geni_serial_port_setup(uport);
+- } else {
+- /*
+- * Peripheral register settings are lost during hibernation.
+- * Update setup flag such that port setup happens again
+- * during next session. Clients of HS-UART will close and
+- * open the port during hibernation.
+- */
+- port->setup = false;
+- }
+- return ret;
+-}
+-
+ static const struct qcom_geni_device_data qcom_geni_console_data = {
+ .console = true,
+ .mode = GENI_SE_FIFO,
+@@ -1806,12 +1805,8 @@ static const struct qcom_geni_device_data qcom_geni_uart_data = {
+ };
+
+ static const struct dev_pm_ops qcom_geni_serial_pm_ops = {
+- .suspend = pm_sleep_ptr(qcom_geni_serial_sys_suspend),
+- .resume = pm_sleep_ptr(qcom_geni_serial_sys_resume),
+- .freeze = pm_sleep_ptr(qcom_geni_serial_sys_suspend),
+- .poweroff = pm_sleep_ptr(qcom_geni_serial_sys_suspend),
+- .restore = pm_sleep_ptr(qcom_geni_serial_sys_hib_resume),
+- .thaw = pm_sleep_ptr(qcom_geni_serial_sys_hib_resume),
++ SYSTEM_SLEEP_PM_OPS(qcom_geni_serial_sys_suspend,
++ qcom_geni_serial_sys_resume)
+ };
+
+ static const struct of_device_id qcom_geni_serial_match_table[] = {
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index cd87e3d1291edc..96842ce817af47 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4726,7 +4726,7 @@ static int con_font_get(struct vc_data *vc, struct console_font_op *op)
+ return -EINVAL;
+
+ if (op->data) {
+- font.data = kvmalloc(max_font_size, GFP_KERNEL);
++ font.data = kvzalloc(max_font_size, GFP_KERNEL);
+ if (!font.data)
+ return -ENOMEM;
+ } else
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 5891cdacd0b3c5..3903947dbed1ca 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -539,7 +539,7 @@ int ufshcd_mcq_sq_cleanup(struct ufs_hba *hba, int task_tag)
+ struct scsi_cmnd *cmd = lrbp->cmd;
+ struct ufs_hw_queue *hwq;
+ void __iomem *reg, *opr_sqd_base;
+- u32 nexus, id, val;
++ u32 nexus, id, val, rtc;
+ int err;
+
+ if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_RTC)
+@@ -569,17 +569,18 @@ int ufshcd_mcq_sq_cleanup(struct ufs_hba *hba, int task_tag)
+ opr_sqd_base = mcq_opr_base(hba, OPR_SQD, id);
+ writel(nexus, opr_sqd_base + REG_SQCTI);
+
+- /* SQRTCy.ICU = 1 */
+- writel(SQ_ICU, opr_sqd_base + REG_SQRTC);
++ /* Initiate Cleanup */
++ writel(readl(opr_sqd_base + REG_SQRTC) | SQ_ICU,
++ opr_sqd_base + REG_SQRTC);
+
+ /* Poll SQRTSy.CUS = 1. Return result from SQRTSy.RTC */
+ reg = opr_sqd_base + REG_SQRTS;
+ err = read_poll_timeout(readl, val, val & SQ_CUS, 20,
+ MCQ_POLL_US, false, reg);
+- if (err)
+- dev_err(hba->dev, "%s: failed. hwq=%d, tag=%d err=%ld\n",
+- __func__, id, task_tag,
+- FIELD_GET(SQ_ICU_ERR_CODE_MASK, readl(reg)));
++ rtc = FIELD_GET(SQ_ICU_ERR_CODE_MASK, readl(reg));
++ if (err || rtc)
++ dev_err(hba->dev, "%s: failed. hwq=%d, tag=%d err=%d RTC=%d\n",
++ __func__, id, task_tag, err, rtc);
+
+ if (ufshcd_mcq_sq_start(hba, hwq))
+ err = -ETIMEDOUT;
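
The ufs-mcq hunk now ORs SQ_ICU into SQRTC via a read-modify-write instead of overwriting the whole register, so any other control bits already set there survive the cleanup request. The idiom, sketched with a plain variable standing in for the MMIO word (the bit layout below is invented for illustration, not the UFSHCI one):

#include <stdint.h>
#include <stdio.h>

#define SQ_EN	(1u << 0)	/* illustrative bit layout */
#define SQ_ICU	(1u << 4)

static uint32_t sqrtc = SQ_EN;	/* stand-in for readl()/writel() on the register */

int main(void)
{
	uint32_t plain = SQ_ICU;		/* writel(SQ_ICU, reg): SQ_EN is lost */
	uint32_t rmw   = sqrtc | SQ_ICU;	/* writel(readl(reg) | SQ_ICU, reg)   */

	printf("plain write: %#x (SQ_EN %s)\n", plain,
	       (plain & SQ_EN) ? "kept" : "lost");
	printf("read-modify-write: %#x (SQ_EN %s)\n", rmw,
	       (rmw & SQ_EN) ? "kept" : "lost");
	return 0;
}
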
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index ce0620e804484a..09408642a6efba 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -5403,10 +5403,12 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
+ }
+ break;
+ case OCS_ABORTED:
+- result |= DID_ABORT << 16;
+- break;
+ case OCS_INVALID_COMMAND_STATUS:
+ result |= DID_REQUEUE << 16;
++ dev_warn(hba->dev,
++ "OCS %s from controller for tag %d\n",
++ (ocs == OCS_ABORTED ? "aborted" : "invalid"),
++ lrbp->task_tag);
+ break;
+ case OCS_INVALID_CMD_TABLE_ATTR:
+ case OCS_INVALID_PRDT_ATTR:
+@@ -6470,26 +6472,12 @@ static bool ufshcd_abort_one(struct request *rq, void *priv)
+ struct scsi_device *sdev = cmd->device;
+ struct Scsi_Host *shost = sdev->host;
+ struct ufs_hba *hba = shost_priv(shost);
+- struct ufshcd_lrb *lrbp = &hba->lrb[tag];
+- struct ufs_hw_queue *hwq;
+- unsigned long flags;
+
+ *ret = ufshcd_try_to_abort_task(hba, tag);
+ dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag,
+ hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1,
+ *ret ? "failed" : "succeeded");
+
+- /* Release cmd in MCQ mode if abort succeeds */
+- if (hba->mcq_enabled && (*ret == 0)) {
+- hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(lrbp->cmd));
+- if (!hwq)
+- return 0;
+- spin_lock_irqsave(&hwq->cq_lock, flags);
+- if (ufshcd_cmd_inflight(lrbp->cmd))
+- ufshcd_release_scsi_cmd(hba, lrbp);
+- spin_unlock_irqrestore(&hwq->cq_lock, flags);
+- }
+-
+ return *ret == 0;
+ }
+
+@@ -10214,7 +10202,9 @@ static void ufshcd_wl_shutdown(struct device *dev)
+ shost_for_each_device(sdev, hba->host) {
+ if (sdev == hba->ufs_device_wlun)
+ continue;
+- scsi_device_quiesce(sdev);
++ mutex_lock(&sdev->state_mutex);
++ scsi_device_set_state(sdev, SDEV_OFFLINE);
++ mutex_unlock(&sdev->state_mutex);
+ }
+ __ufshcd_wl_suspend(hba, UFS_SHUTDOWN_PM);
+
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 21740e2b8f0781..427e5660f87c24 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -2342,6 +2342,11 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ u32 reg;
+ int i;
+
++ dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) &
++ DWC3_GUSB2PHYCFG_SUSPHY) ||
++ (dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)) &
++ DWC3_GUSB3PIPECTL_SUSPHY);
++
+ switch (dwc->current_dr_role) {
+ case DWC3_GCTL_PRTCAP_DEVICE:
+ if (pm_runtime_suspended(dwc->dev))
+@@ -2393,6 +2398,15 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ break;
+ }
+
++ if (!PMSG_IS_AUTO(msg)) {
++ /*
++ * TI AM62 platform requires SUSPHY to be
++ * enabled for system suspend to work.
++ */
++ if (!dwc->susphy_state)
++ dwc3_enable_susphy(dwc, true);
++ }
++
+ return 0;
+ }
+
+@@ -2460,6 +2474,11 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ break;
+ }
+
++ if (!PMSG_IS_AUTO(msg)) {
++ /* restore SUSPHY state to that before system suspend. */
++ dwc3_enable_susphy(dwc, dwc->susphy_state);
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 9c508e0c5cdf54..eab81dfdcc3502 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1150,6 +1150,8 @@ struct dwc3_scratchpad_array {
+ * @sys_wakeup: set if the device may do system wakeup.
+ * @wakeup_configured: set if the device is configured for remote wakeup.
+ * @suspended: set to track suspend event due to U3/L2.
++ * @susphy_state: state of DWC3_GUSB2PHYCFG_SUSPHY + DWC3_GUSB3PIPECTL_SUSPHY
++ * before PM suspend.
+ * @imod_interval: set the interrupt moderation interval in 250ns
+ * increments or 0 to disable.
+ * @max_cfg_eps: current max number of IN eps used across all USB configs.
+@@ -1382,6 +1384,7 @@ struct dwc3 {
+ unsigned sys_wakeup:1;
+ unsigned wakeup_configured:1;
+ unsigned suspended:1;
++ unsigned susphy_state:1;
+
+ u16 imod_interval;
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 10178e5eda5a3f..4959c26d3b71b8 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -438,6 +438,10 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
+ dwc3_gadget_ep_get_transfer_index(dep);
+ }
+
++ if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_ENDTRANSFER &&
++ !(cmd & DWC3_DEPCMD_CMDIOC))
++ mdelay(1);
++
+ if (saved_config) {
+ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+ reg |= saved_config;
+@@ -1715,12 +1719,10 @@ static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool int
+ WARN_ON_ONCE(ret);
+ dep->resource_index = 0;
+
+- if (!interrupt) {
+- mdelay(1);
++ if (!interrupt)
+ dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+- } else if (!ret) {
++ else if (!ret)
+ dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
+- }
+
+ dep->flags &= ~DWC3_EP_DELAY_STOP;
+ return ret;
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 2d6d3286ffde2c..080dc512741889 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -2055,7 +2055,7 @@ static ssize_t f_uac2_opts_##name##_store(struct config_item *item, \
+ const char *page, size_t len) \
+ { \
+ struct f_uac2_opts *opts = to_f_uac2_opts(item); \
+- int ret = 0; \
++ int ret = len; \
+ \
+ mutex_lock(&opts->lock); \
+ if (opts->refcnt) { \
+@@ -2066,8 +2066,8 @@ static ssize_t f_uac2_opts_##name##_store(struct config_item *item, \
+ if (len && page[len - 1] == '\n') \
+ len--; \
+ \
+- ret = scnprintf(opts->name, min(sizeof(opts->name), len + 1), \
+- "%s", page); \
++ scnprintf(opts->name, min(sizeof(opts->name), len + 1), \
++ "%s", page); \
+ \
+ end: \
+ mutex_unlock(&opts->lock); \
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index ff7bee78bcc492..d5d89fadde433f 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -254,6 +254,7 @@ struct dummy_hcd {
+ u32 stream_en_ep;
+ u8 num_stream[30 / 2];
+
++ unsigned timer_pending:1;
+ unsigned active:1;
+ unsigned old_active:1;
+ unsigned resuming:1;
+@@ -1303,9 +1304,11 @@ static int dummy_urb_enqueue(
+ urb->error_count = 1; /* mark as a new urb */
+
+ /* kick the scheduler, it'll do the rest */
+- if (!hrtimer_active(&dum_hcd->timer))
++ if (!dum_hcd->timer_pending) {
++ dum_hcd->timer_pending = 1;
+ hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
+ HRTIMER_MODE_REL_SOFT);
++ }
+
+ done:
+ spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);
+@@ -1324,9 +1327,10 @@ static int dummy_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ spin_lock_irqsave(&dum_hcd->dum->lock, flags);
+
+ rc = usb_hcd_check_unlink_urb(hcd, urb, status);
+- if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING &&
+- !list_empty(&dum_hcd->urbp_list))
++ if (rc == 0 && !dum_hcd->timer_pending) {
++ dum_hcd->timer_pending = 1;
+ hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
++ }
+
+ spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);
+ return rc;
+@@ -1813,6 +1817,7 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t)
+
+ /* look at each urb queued by the host side driver */
+ spin_lock_irqsave(&dum->lock, flags);
++ dum_hcd->timer_pending = 0;
+
+ if (!dum_hcd->udev) {
+ dev_err(dummy_dev(dum_hcd),
+@@ -1994,8 +1999,10 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t)
+ if (list_empty(&dum_hcd->urbp_list)) {
+ usb_put_dev(dum_hcd->udev);
+ dum_hcd->udev = NULL;
+- } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) {
++ } else if (!dum_hcd->timer_pending &&
++ dum_hcd->rh_state == DUMMY_RH_RUNNING) {
+ /* want a 1 msec delay here */
++ dum_hcd->timer_pending = 1;
+ hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
+ HRTIMER_MODE_REL_SOFT);
+ }
+@@ -2390,8 +2397,10 @@ static int dummy_bus_resume(struct usb_hcd *hcd)
+ } else {
+ dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ set_link_state(dum_hcd);
+- if (!list_empty(&dum_hcd->urbp_list))
++ if (!list_empty(&dum_hcd->urbp_list)) {
++ dum_hcd->timer_pending = 1;
+ hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
++ }
+ hcd->state = HC_STATE_RUNNING;
+ }
+ spin_unlock_irq(&dum_hcd->dum->lock);
+@@ -2522,6 +2531,7 @@ static void dummy_stop(struct usb_hcd *hcd)
+ struct dummy_hcd *dum_hcd = hcd_to_dummy_hcd(hcd);
+
+ hrtimer_cancel(&dum_hcd->timer);
++ dum_hcd->timer_pending = 0;
+ device_remove_file(dummy_dev(dum_hcd), &dev_attr_urbs);
+ dev_info(dummy_dev(dum_hcd), "stopped\n");
+ }
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 90726899bc5bcc..785183f0b5f9de 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1023,7 +1023,7 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ td_to_noop(xhci, ring, cached_td, false);
+ cached_td->cancel_status = TD_CLEARED;
+ }
+-
++ td_to_noop(xhci, ring, td, false);
+ td->cancel_status = TD_CLEARING_CACHE;
+ cached_td = td;
+ break;
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 6246d5ad146848..76f228e7443cb6 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -2183,7 +2183,7 @@ static int tegra_xusb_enter_elpg(struct tegra_xusb *tegra, bool runtime)
+ goto out;
+ }
+
+- for (i = 0; i < tegra->num_usb_phys; i++) {
++ for (i = 0; i < xhci->usb2_rhub.num_ports; i++) {
+ if (!xhci->usb2_rhub.ports[i])
+ continue;
+ portsc = readl(xhci->usb2_rhub.ports[i]->addr);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 0ea95ad4cb9022..856f16e64dcf05 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1001,7 +1001,7 @@ enum xhci_setup_dev {
+ /* Set TR Dequeue Pointer command TRB fields, 6.4.3.9 */
+ #define TRB_TO_STREAM_ID(p) ((((p) & (0xffff << 16)) >> 16))
+ #define STREAM_ID_FOR_TRB(p) ((((p)) & 0xffff) << 16)
+-#define SCT_FOR_TRB(p) (((p) << 1) & 0x7)
++#define SCT_FOR_TRB(p) (((p) & 0x7) << 1)
+
+ /* Link TRB specific fields */
+ #define TRB_TC (1<<1)
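The xhci.h one-liner above fixes an order-of-operations bug: masking after the shift, ((p) << 1) & 0x7, silently drops the top bit of a 3-bit stream context type, while the fixed form masks first and then shifts the value into bits 3:1. A small standalone check, with the two macro variants copied from the hunk (build and run with any C compiler):

#include <stdio.h>

#define SCT_FOR_TRB_OLD(p)    (((p) << 1) & 0x7)    /* buggy */
#define SCT_FOR_TRB_NEW(p)    (((p) & 0x7) << 1)    /* fixed */

int main(void)
{
    for (unsigned int p = 0; p < 8; p++)
        printf("sct=%u old=0x%x new=0x%x%s\n", p,
               SCT_FOR_TRB_OLD(p), SCT_FOR_TRB_NEW(p),
               SCT_FOR_TRB_OLD(p) != SCT_FOR_TRB_NEW(p) ?
               "  <-- differs" : "");
    return 0;
}

For SCT values 0-3 both forms agree, which is why the bug could go unnoticed; from 4 upward the old macro corrupts the encoded field.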
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 176f38750ad589..55886b64cadd83 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -279,6 +279,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EG912Y 0x6001
+ #define QUECTEL_PRODUCT_EC200S_CN 0x6002
+ #define QUECTEL_PRODUCT_EC200A 0x6005
++#define QUECTEL_PRODUCT_EG916Q 0x6007
+ #define QUECTEL_PRODUCT_EM061K_LWW 0x6008
+ #define QUECTEL_PRODUCT_EM061K_LCN 0x6009
+ #define QUECTEL_PRODUCT_EC200T 0x6026
+@@ -1270,6 +1271,7 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG912Y, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG916Q, 0xff, 0x00, 0x00) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+@@ -1380,10 +1382,16 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a2, 0xff), /* Telit FN920C04 (MBIM) */
++ .driver_info = NCTRL(4) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a4, 0xff), /* Telit FN20C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a7, 0xff), /* Telit FN920C04 (MBIM) */
++ .driver_info = NCTRL(4) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a9, 0xff), /* Telit FN20C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */
++ .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c
+index a747baa2978498..c37dede62e12cd 100644
+--- a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c
++++ b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c
+@@ -432,7 +432,6 @@ static int qcom_pmic_typec_port_get_cc(struct tcpc_dev *tcpc,
+ val = TYPEC_CC_RP_DEF;
+ break;
+ }
+- val = TYPEC_CC_RP_DEF;
+ }
+
+ if (misc & CC_ORIENTATION)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f0cf8ce26f010f..a5098973bcef6e 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1374,7 +1374,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ struct inode *inode = NULL;
+ unsigned long ref_ptr;
+ unsigned long ref_end;
+- struct fscrypt_str name;
++ struct fscrypt_str name = { 0 };
+ int ret;
+ int log_ref_ver = 0;
+ u64 parent_objectid;
+@@ -1845,7 +1845,7 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ struct btrfs_dir_item *di,
+ struct btrfs_key *key)
+ {
+- struct fscrypt_str name;
++ struct fscrypt_str name = { 0 };
+ struct btrfs_dir_item *dir_dst_di;
+ struct btrfs_dir_item *index_dst_di;
+ bool dir_dst_matches = false;
+@@ -2125,7 +2125,7 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans,
+ struct extent_buffer *eb;
+ int slot;
+ struct btrfs_dir_item *di;
+- struct fscrypt_str name;
++ struct fscrypt_str name = { 0 };
+ struct inode *inode = NULL;
+ struct btrfs_key location;
+
+diff --git a/fs/fat/namei_vfat.c b/fs/fat/namei_vfat.c
+index 6423e1dedf1471..15bf32c21ac0db 100644
+--- a/fs/fat/namei_vfat.c
++++ b/fs/fat/namei_vfat.c
+@@ -1037,7 +1037,7 @@ static int vfat_rename(struct inode *old_dir, struct dentry *old_dentry,
+ if (corrupt < 0) {
+ fat_fs_error(new_dir->i_sb,
+ "%s: Filesystem corrupted (i_pos %lld)",
+- __func__, sinfo.i_pos);
++ __func__, new_i_pos);
+ }
+ goto out;
+ }
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index 4a29b0138d75f5..87f59b748b0ba6 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -323,7 +323,7 @@ static int nilfs_readdir(struct file *file, struct dir_context *ctx)
+ * The folio is mapped and unlocked. When the caller is finished with
+ * the entry, it should call folio_release_kmap().
+ *
+- * On failure, returns NULL and the caller should ignore foliop.
++ * On failure, returns an error pointer and the caller should ignore foliop.
+ */
+ struct nilfs_dir_entry *nilfs_find_entry(struct inode *dir,
+ const struct qstr *qstr, struct folio **foliop)
+@@ -346,22 +346,24 @@ struct nilfs_dir_entry *nilfs_find_entry(struct inode *dir,
+ do {
+ char *kaddr = nilfs_get_folio(dir, n, foliop);
+
+- if (!IS_ERR(kaddr)) {
+- de = (struct nilfs_dir_entry *)kaddr;
+- kaddr += nilfs_last_byte(dir, n) - reclen;
+- while ((char *) de <= kaddr) {
+- if (de->rec_len == 0) {
+- nilfs_error(dir->i_sb,
+- "zero-length directory entry");
+- folio_release_kmap(*foliop, kaddr);
+- goto out;
+- }
+- if (nilfs_match(namelen, name, de))
+- goto found;
+- de = nilfs_next_entry(de);
++ if (IS_ERR(kaddr))
++ return ERR_CAST(kaddr);
++
++ de = (struct nilfs_dir_entry *)kaddr;
++ kaddr += nilfs_last_byte(dir, n) - reclen;
++ while ((char *)de <= kaddr) {
++ if (de->rec_len == 0) {
++ nilfs_error(dir->i_sb,
++ "zero-length directory entry");
++ folio_release_kmap(*foliop, kaddr);
++ goto out;
+ }
+- folio_release_kmap(*foliop, kaddr);
++ if (nilfs_match(namelen, name, de))
++ goto found;
++ de = nilfs_next_entry(de);
+ }
++ folio_release_kmap(*foliop, kaddr);
++
+ if (++n >= npages)
+ n = 0;
+ /* next folio is past the blocks we've got */
+@@ -374,7 +376,7 @@ struct nilfs_dir_entry *nilfs_find_entry(struct inode *dir,
+ }
+ } while (n != start);
+ out:
+- return NULL;
++ return ERR_PTR(-ENOENT);
+
+ found:
+ ei->i_dir_start_lookup = n;
+@@ -418,18 +420,18 @@ struct nilfs_dir_entry *nilfs_dotdot(struct inode *dir, struct folio **foliop)
+ return NULL;
+ }
+
+-ino_t nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr)
++int nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr, ino_t *ino)
+ {
+- ino_t res = 0;
+ struct nilfs_dir_entry *de;
+ struct folio *folio;
+
+ de = nilfs_find_entry(dir, qstr, &folio);
+- if (de) {
+- res = le64_to_cpu(de->inode);
+- folio_release_kmap(folio, de);
+- }
+- return res;
++ if (IS_ERR(de))
++ return PTR_ERR(de);
++
++ *ino = le64_to_cpu(de->inode);
++ folio_release_kmap(folio, de);
++ return 0;
+ }
+
+ void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index c950139db6ef0d..4905063790c578 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -55,12 +55,20 @@ nilfs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
+ {
+ struct inode *inode;
+ ino_t ino;
++ int res;
+
+ if (dentry->d_name.len > NILFS_NAME_LEN)
+ return ERR_PTR(-ENAMETOOLONG);
+
+- ino = nilfs_inode_by_name(dir, &dentry->d_name);
+- inode = ino ? nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino) : NULL;
++ res = nilfs_inode_by_name(dir, &dentry->d_name, &ino);
++ if (res) {
++ if (res != -ENOENT)
++ return ERR_PTR(res);
++ inode = NULL;
++ } else {
++ inode = nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino);
++ }
++
+ return d_splice_alias(inode, dentry);
+ }
+
+@@ -263,10 +271,11 @@ static int nilfs_do_unlink(struct inode *dir, struct dentry *dentry)
+ struct folio *folio;
+ int err;
+
+- err = -ENOENT;
+ de = nilfs_find_entry(dir, &dentry->d_name, &folio);
+- if (!de)
++ if (IS_ERR(de)) {
++ err = PTR_ERR(de);
+ goto out;
++ }
+
+ inode = d_inode(dentry);
+ err = -EIO;
+@@ -362,10 +371,11 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ if (unlikely(err))
+ return err;
+
+- err = -ENOENT;
+ old_de = nilfs_find_entry(old_dir, &old_dentry->d_name, &old_folio);
+- if (!old_de)
++ if (IS_ERR(old_de)) {
++ err = PTR_ERR(old_de);
+ goto out;
++ }
+
+ if (S_ISDIR(old_inode->i_mode)) {
+ err = -EIO;
+@@ -382,10 +392,12 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ if (dir_de && !nilfs_empty_dir(new_inode))
+ goto out_dir;
+
+- err = -ENOENT;
+- new_de = nilfs_find_entry(new_dir, &new_dentry->d_name, &new_folio);
+- if (!new_de)
++ new_de = nilfs_find_entry(new_dir, &new_dentry->d_name,
++ &new_folio);
++ if (IS_ERR(new_de)) {
++ err = PTR_ERR(new_de);
+ goto out_dir;
++ }
+ nilfs_set_link(new_dir, new_de, new_folio, old_inode);
+ folio_release_kmap(new_folio, new_de);
+ nilfs_mark_inode_dirty(new_dir);
+@@ -440,12 +452,13 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ */
+ static struct dentry *nilfs_get_parent(struct dentry *child)
+ {
+- unsigned long ino;
++ ino_t ino;
++ int res;
+ struct nilfs_root *root;
+
+- ino = nilfs_inode_by_name(d_inode(child), &dotdot_name);
+- if (!ino)
+- return ERR_PTR(-ENOENT);
++ res = nilfs_inode_by_name(d_inode(child), &dotdot_name, &ino);
++ if (res)
++ return ERR_PTR(res);
+
+ root = NILFS_I(d_inode(child))->i_root;
+
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index 4017f78564405a..0a80dc39a3aa46 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -233,7 +233,7 @@ static inline __u32 nilfs_mask_flags(umode_t mode, __u32 flags)
+
+ /* dir.c */
+ int nilfs_add_link(struct dentry *, struct inode *);
+-ino_t nilfs_inode_by_name(struct inode *, const struct qstr *);
++int nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr, ino_t *ino);
+ int nilfs_make_empty(struct inode *, struct inode *);
+ struct nilfs_dir_entry *nilfs_find_entry(struct inode *, const struct qstr *,
+ struct folio **);
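The nilfs2 series above converts nilfs_find_entry() from returning NULL on any failure to returning ERR_PTR() codes, and nilfs_inode_by_name() to a status code plus an out-parameter, so callers can distinguish "not found" from a real I/O error. A self-contained sketch of that error-pointer idiom; the three helpers are simplified restatements of the kernel macros, and the lookup logic is invented for the example:

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct entry { long ino; };
static struct entry table[] = { { 42 } };

/* Like the reworked nilfs_find_entry(): never returns NULL. */
static struct entry *find_entry(int key)
{
    if (key < 0)
        return ERR_PTR(-EIO);    /* e.g. a failed folio read */
    if (key != 0)
        return ERR_PTR(-ENOENT);
    return &table[0];
}

/* Like the reworked nilfs_inode_by_name(): status + out param. */
static int inode_by_key(int key, long *ino)
{
    struct entry *de = find_entry(key);

    if (IS_ERR(de))
        return (int)PTR_ERR(de);
    *ino = de->ino;
    return 0;
}

int main(void)
{
    long ino;
    int err;

    err = inode_by_key(0, &ino);
    printf("key 0: err=%d ino=%ld\n", err, err ? 0 : ino);
    err = inode_by_key(7, &ino);
    printf("key 7: err=%d (-ENOENT=%d)\n", err, -ENOENT);
    return 0;
}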
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index 99416ce9f50183..1e4624e9d434ab 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -177,9 +177,10 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn)
+
+ down_write(&conn->session_lock);
+ xa_for_each(&conn->sessions, id, sess) {
+- if (sess->state != SMB2_SESSION_VALID ||
+- time_after(jiffies,
+- sess->last_active + SMB2_SESSION_TIMEOUT)) {
++ if (atomic_read(&sess->refcnt) == 0 &&
++ (sess->state != SMB2_SESSION_VALID ||
++ time_after(jiffies,
++ sess->last_active + SMB2_SESSION_TIMEOUT))) {
+ xa_erase(&conn->sessions, sess->id);
+ hash_del(&sess->hlist);
+ ksmbd_session_destroy(sess);
+@@ -269,8 +270,6 @@ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
+
+ down_read(&sessions_table_lock);
+ sess = __session_lookup(id);
+- if (sess)
+- sess->last_active = jiffies;
+ up_read(&sessions_table_lock);
+
+ return sess;
+@@ -289,6 +288,22 @@ struct ksmbd_session *ksmbd_session_lookup_all(struct ksmbd_conn *conn,
+ return sess;
+ }
+
++void ksmbd_user_session_get(struct ksmbd_session *sess)
++{
++ atomic_inc(&sess->refcnt);
++}
++
++void ksmbd_user_session_put(struct ksmbd_session *sess)
++{
++ if (!sess)
++ return;
++
++ if (atomic_read(&sess->refcnt) <= 0)
++ WARN_ON(1);
++ else
++ atomic_dec(&sess->refcnt);
++}
++
+ struct preauth_session *ksmbd_preauth_session_alloc(struct ksmbd_conn *conn,
+ u64 sess_id)
+ {
+@@ -393,6 +408,7 @@ static struct ksmbd_session *__session_create(int protocol)
+ xa_init(&sess->rpc_handle_list);
+ sess->sequence_number = 1;
+ rwlock_init(&sess->tree_conns_lock);
++ atomic_set(&sess->refcnt, 1);
+
+ ret = __init_smb2_session(sess);
+ if (ret)
+diff --git a/fs/smb/server/mgmt/user_session.h b/fs/smb/server/mgmt/user_session.h
+index dc9fded2cd4379..c1c4b20bd5c6cf 100644
+--- a/fs/smb/server/mgmt/user_session.h
++++ b/fs/smb/server/mgmt/user_session.h
+@@ -61,6 +61,8 @@ struct ksmbd_session {
+ struct ksmbd_file_table file_table;
+ unsigned long last_active;
+ rwlock_t tree_conns_lock;
++
++ atomic_t refcnt;
+ };
+
+ static inline int test_session_flag(struct ksmbd_session *sess, int bit)
+@@ -104,4 +106,6 @@ void ksmbd_release_tree_conn_id(struct ksmbd_session *sess, int id);
+ int ksmbd_session_rpc_open(struct ksmbd_session *sess, char *rpc_name);
+ void ksmbd_session_rpc_close(struct ksmbd_session *sess, int id);
+ int ksmbd_session_rpc_method(struct ksmbd_session *sess, int id);
++void ksmbd_user_session_get(struct ksmbd_session *sess);
++void ksmbd_user_session_put(struct ksmbd_session *sess);
+ #endif /* __USER_SESSION_MANAGEMENT_H__ */
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index 4d24cc105ef6b5..bb3e7b09201a88 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -238,6 +238,8 @@ static void __handle_ksmbd_work(struct ksmbd_work *work,
+ } while (is_chained == true);
+
+ send:
++ if (work->sess)
++ ksmbd_user_session_put(work->sess);
+ if (work->tcon)
+ ksmbd_tree_connect_put(work->tcon);
+ smb3_preauth_hash_rsp(work);
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 065adfb985fe2a..72e0880617ebc6 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -605,8 +605,10 @@ int smb2_check_user_session(struct ksmbd_work *work)
+
+ /* Check for validity of user session */
+ work->sess = ksmbd_session_lookup_all(conn, sess_id);
+- if (work->sess)
++ if (work->sess) {
++ ksmbd_user_session_get(work->sess);
+ return 1;
++ }
+ ksmbd_debug(SMB, "Invalid user session, Uid %llu\n", sess_id);
+ return -ENOENT;
+ }
+@@ -1746,6 +1748,7 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ }
+
+ conn->binding = true;
++ ksmbd_user_session_get(sess);
+ } else if ((conn->dialect < SMB30_PROT_ID ||
+ server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) &&
+ (req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+@@ -1772,6 +1775,7 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ }
+
+ conn->binding = false;
++ ksmbd_user_session_get(sess);
+ }
+ work->sess = sess;
+
+@@ -2232,7 +2236,9 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ }
+
+ ksmbd_destroy_file_table(&sess->file_table);
++ down_write(&conn->session_lock);
+ sess->state = SMB2_SESSION_EXPIRED;
++ up_write(&conn->session_lock);
+
+ ksmbd_free_user(sess->user);
+ sess->user = NULL;
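The ksmbd changes above add a per-session reference count: lookups take a reference via ksmbd_user_session_get(), the work-completion path drops it, and the expiry scan only destroys sessions whose count allows it, closing a use-after-free window between lookup and teardown. A condensed userspace sketch of the same get/put discipline using C11 atomics; the session layout, the destroy step, and the zero-means-idle convention here are placeholders, not the ksmbd structures (the kernel version also holds an initial creation reference):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct session {
    atomic_int refcnt;
    int id;
};

static void session_get(struct session *s)
{
    atomic_fetch_add(&s->refcnt, 1);
}

static void session_put(struct session *s)
{
    if (!s)
        return;
    if (atomic_load(&s->refcnt) <= 0)
        fprintf(stderr, "WARN: put on dead session\n");
    else
        atomic_fetch_sub(&s->refcnt, 1);
}

/* Expiry may only destroy a session nobody is still using. */
static int try_expire(struct session *s)
{
    if (atomic_load(&s->refcnt) != 0)
        return 0;    /* still referenced: skip, retry later */
    printf("session %d destroyed\n", s->id);
    free(s);
    return 1;
}

int main(void)
{
    struct session *s = calloc(1, sizeof(*s));

    atomic_init(&s->refcnt, 0);
    s->id = 1;
    session_get(s);            /* lookup takes a reference */
    printf("expire while in use: %d\n", try_expire(s));
    session_put(s);            /* request finished: drop it */
    printf("expire when idle: %d\n", try_expire(s));
    return 0;
}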
+diff --git a/include/linux/fsl/enetc_mdio.h b/include/linux/fsl/enetc_mdio.h
+index df25fffdc0ae71..623ccfcbf39c35 100644
+--- a/include/linux/fsl/enetc_mdio.h
++++ b/include/linux/fsl/enetc_mdio.h
+@@ -59,7 +59,8 @@ static inline int enetc_mdio_read_c45(struct mii_bus *bus, int phy_id,
+ static inline int enetc_mdio_write_c45(struct mii_bus *bus, int phy_id,
+ int devad, int regnum, u16 value)
+ { return -EINVAL; }
+-struct enetc_hw *enetc_hw_alloc(struct device *dev, void __iomem *port_regs)
++static inline struct enetc_hw *enetc_hw_alloc(struct device *dev,
++ void __iomem *port_regs)
+ { return ERR_PTR(-EINVAL); }
+
+ #endif
+diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
+index ecabed6d330752..7f1f11a5e4e44b 100644
+--- a/include/linux/irqchip/arm-gic-v4.h
++++ b/include/linux/irqchip/arm-gic-v4.h
+@@ -66,10 +66,12 @@ struct its_vpe {
+ bool enabled;
+ bool group;
+ } sgi_config[16];
+- atomic_t vmapp_count;
+ };
+ };
+
++ /* Track the VPE being mapped */
++ atomic_t vmapp_count;
++
+ /*
+ * Ensures mutual exclusion between affinity setting of the
+ * vPE and vLPI operations using vpe->col_idx.
+diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
+index b5f5369b63009f..9d5c00b0285c34 100644
+--- a/include/trace/events/huge_memory.h
++++ b/include/trace/events/huge_memory.h
+@@ -208,7 +208,7 @@ TRACE_EVENT(mm_khugepaged_scan_file,
+
+ TRACE_EVENT(mm_khugepaged_collapse_file,
+ TP_PROTO(struct mm_struct *mm, struct folio *new_folio, pgoff_t index,
+- bool is_shmem, unsigned long addr, struct file *file,
++ unsigned long addr, bool is_shmem, struct file *file,
+ int nr, int result),
+ TP_ARGS(mm, new_folio, index, addr, is_shmem, file, nr, result),
+ TP_STRUCT__entry(
+@@ -233,7 +233,7 @@ TRACE_EVENT(mm_khugepaged_collapse_file,
+ __entry->result = result;
+ ),
+
+- TP_printk("mm=%p, hpage_pfn=0x%lx, index=%ld, addr=%ld, is_shmem=%d, filename=%s, nr=%d, result=%s",
++ TP_printk("mm=%p, hpage_pfn=0x%lx, index=%ld, addr=%lx, is_shmem=%d, filename=%s, nr=%d, result=%s",
+ __entry->mm,
+ __entry->hpfn,
+ __entry->index,
+diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
+index c8dc5f8ea69962..12873639ea9644 100644
+--- a/include/uapi/linux/ublk_cmd.h
++++ b/include/uapi/linux/ublk_cmd.h
+@@ -175,7 +175,13 @@
+ /* use ioctl encoding for uring command */
+ #define UBLK_F_CMD_IOCTL_ENCODE (1UL << 6)
+
+-/* Copy between request and user buffer by pread()/pwrite() */
++/*
++ * Copy between request and user buffer by pread()/pwrite()
++ *
++ * Not available for UBLK_F_UNPRIVILEGED_DEV, otherwise userspace may
++ * deceive us by not filling request buffer, then kernel uninitialized
++ * data may be leaked.
++ */
+ #define UBLK_F_USER_COPY (1UL << 7)
+
+ /*
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index c2acf6180845db..a8ee7287ac9f2d 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -279,7 +279,14 @@ static inline bool io_sqring_full(struct io_ring_ctx *ctx)
+ {
+ struct io_rings *r = ctx->rings;
+
+- return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries;
++ /*
++ * SQPOLL must use the actual sqring head, as using the cached_sq_head
++ * is race prone if the SQPOLL thread has grabbed entries but not yet
++ * committed them to the ring. For !SQPOLL, this doesn't matter, but
++ * since this helper is just used for SQPOLL sqring waits (or POLLOUT),
++ * just read the actual sqring head unconditionally.
++ */
++ return READ_ONCE(r->sq.tail) - READ_ONCE(r->sq.head) == ctx->sq_entries;
+ }
+
+ static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
+@@ -315,6 +322,7 @@ static inline int io_run_task_work(void)
+ if (current->io_uring) {
+ unsigned int count = 0;
+
++ __set_current_state(TASK_RUNNING);
+ tctx_task_work_run(current->io_uring, UINT_MAX, &count);
+ if (count)
+ ret = true;
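The io_sqring_full() change above stops trusting ctx->cached_sq_head and reads the shared ring head afresh: with SQPOLL the kernel-side thread may have consumed entries that the cached copy does not yet reflect, and the unsigned tail - head arithmetic stays correct across counter wraparound either way. A toy model of the stale-snapshot problem; the ring names are illustrative, and READ_ONCE is modeled with a relaxed atomic load:

#include <stdatomic.h>
#include <stdio.h>

#define SQ_ENTRIES 8u

static atomic_uint sq_head;    /* advanced by the consumer (SQPOLL) */
static unsigned int sq_tail;   /* advanced by the producer */

static int sqring_full(unsigned int cached_head)
{
    /* Stale snapshot: may claim "full" long after space opened up. */
    return sq_tail - cached_head == SQ_ENTRIES;
}

static int sqring_full_fresh(void)
{
    /* Equivalent of READ_ONCE(r->sq.head) in the fixed helper. */
    unsigned int head =
        atomic_load_explicit(&sq_head, memory_order_relaxed);

    return sq_tail - head == SQ_ENTRIES;
}

int main(void)
{
    unsigned int cached_head = 0;

    atomic_init(&sq_head, 0);
    sq_tail = SQ_ENTRIES;          /* producer filled the ring */
    atomic_store(&sq_head, 3);     /* consumer drained 3 entries */

    printf("cached view full? %d\n", sqring_full(cached_head)); /* 1 */
    printf("fresh view full?  %d\n", sqring_full_fresh());      /* 0 */
    return 0;
}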
+diff --git a/kernel/time/posix-clock.c b/kernel/time/posix-clock.c
+index 4782edcbe7b9b4..6b5d5c0021fae6 100644
+--- a/kernel/time/posix-clock.c
++++ b/kernel/time/posix-clock.c
+@@ -319,6 +319,9 @@ static int pc_clock_settime(clockid_t id, const struct timespec64 *ts)
+ goto out;
+ }
+
++ if (!timespec64_valid_strict(ts))
++ return -EINVAL;
++
+ if (cd.clk->ops.clock_settime)
+ err = cd.clk->ops.clock_settime(cd.clk, ts);
+ else
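The posix-clock hunk above rejects out-of-range timestamps before handing them to a driver's clock_settime op. Roughly what timespec64_valid_strict() enforces, restated as a standalone check; treat the exact bound used for KTIME_SEC_MAX as illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L
#define KTIME_SEC_MAX (INT64_MAX / NSEC_PER_SEC)

static bool timespec_valid_strict(const struct timespec *ts)
{
    if (ts->tv_sec < 0)
        return false;        /* dates before 1970 */
    if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC)
        return false;        /* nanoseconds out of range */
    if ((uint64_t)ts->tv_sec >= KTIME_SEC_MAX)
        return false;        /* would overflow a 64-bit ktime_t */
    return true;
}

int main(void)
{
    struct timespec good = { .tv_sec = 1, .tv_nsec = 0 };
    struct timespec bad = { .tv_sec = 1, .tv_nsec = NSEC_PER_SEC };

    printf("good valid=%d, bad valid=%d\n",
           timespec_valid_strict(&good), timespec_valid_strict(&bad));
    return 0;
}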
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index d7d4fb403f6f0f..43f4e3f57438b4 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1160,19 +1160,13 @@ void fgraph_update_pid_func(void)
+ static int start_graph_tracing(void)
+ {
+ unsigned long **ret_stack_list;
+- int ret, cpu;
++ int ret;
+
+ ret_stack_list = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
+
+ if (!ret_stack_list)
+ return -ENOMEM;
+
+- /* The cpu_boot init_task->ret_stack will never be freed */
+- for_each_online_cpu(cpu) {
+- if (!idle_task(cpu)->ret_stack)
+- ftrace_graph_init_idle_task(idle_task(cpu), cpu);
+- }
+-
+ do {
+ ret = alloc_retstack_tasklist(ret_stack_list);
+ } while (ret == -EAGAIN);
+@@ -1242,14 +1236,34 @@ static void ftrace_graph_disable_direct(bool disable_branch)
+ fgraph_direct_gops = &fgraph_stub;
+ }
+
++/* The cpu_boot init_task->ret_stack will never be freed */
++static int fgraph_cpu_init(unsigned int cpu)
++{
++ if (!idle_task(cpu)->ret_stack)
++ ftrace_graph_init_idle_task(idle_task(cpu), cpu);
++ return 0;
++}
++
+ int register_ftrace_graph(struct fgraph_ops *gops)
+ {
++ static bool fgraph_initialized;
+ int command = 0;
+ int ret = 0;
+ int i = -1;
+
+ mutex_lock(&ftrace_lock);
+
++ if (!fgraph_initialized) {
++ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph_idle_init",
++ fgraph_cpu_init, NULL);
++ if (ret < 0) {
++ pr_warn("fgraph: Error to init cpu hotplug support\n");
++ return ret;
++ }
++ fgraph_initialized = true;
++ ret = 0;
++ }
++
+ if (!fgraph_array[0]) {
+ /* The array must always have real data on it */
+ for (i = 0; i < FGRAPH_ARRAY_SIZE; i++)
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 6df3a8b95808ab..20f6f7ae937272 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -2196,6 +2196,8 @@ static inline void mas_node_or_none(struct ma_state *mas,
+
+ /*
+ * mas_wr_node_walk() - Find the correct offset for the index in the @mas.
++ * If @mas->index cannot be found within the containing
++ * node, we traverse to the last entry in the node.
+ * @wr_mas: The maple write state
+ *
+ * Uses mas_slot_locked() and does not need to worry about dead nodes.
+@@ -3609,7 +3611,7 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
+ return true;
+ }
+
+-static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
++static void mas_wr_walk_index(struct ma_wr_state *wr_mas)
+ {
+ struct ma_state *mas = wr_mas->mas;
+
+@@ -3618,11 +3620,9 @@ static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
+ wr_mas->content = mas_slot_locked(mas, wr_mas->slots,
+ mas->offset);
+ if (ma_is_leaf(wr_mas->type))
+- return true;
++ return;
+ mas_wr_walk_traverse(wr_mas);
+-
+ }
+- return true;
+ }
+ /*
+ * mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs.
+@@ -3853,8 +3853,8 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
+ memset(&b_node, 0, sizeof(struct maple_big_node));
+ /* Copy l_mas and store the value in b_node. */
+ mas_store_b_node(&l_wr_mas, &b_node, l_mas.end);
+- /* Copy r_mas into b_node. */
+- if (r_mas.offset <= r_mas.end)
++ /* Copy r_mas into b_node if there is anything to copy. */
++ if (r_mas.max > r_mas.last)
+ mas_mab_cp(&r_mas, r_mas.offset, r_mas.end,
+ &b_node, b_node.b_end + 1);
+ else
+diff --git a/mm/damon/sysfs-test.h b/mm/damon/sysfs-test.h
+index 1c9b596057a7cc..7b5c7b307da99c 100644
+--- a/mm/damon/sysfs-test.h
++++ b/mm/damon/sysfs-test.h
+@@ -67,6 +67,7 @@ static void damon_sysfs_test_add_targets(struct kunit *test)
+ damon_destroy_ctx(ctx);
+ kfree(sysfs_targets->targets_arr);
+ kfree(sysfs_targets);
++ kfree(sysfs_target->regions);
+ kfree(sysfs_target);
+ }
+
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index cdd1d8655a76bb..4cba91ecf74b89 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -2219,7 +2219,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
+ folio_put(new_folio);
+ out:
+ VM_BUG_ON(!list_empty(&pagelist));
+- trace_mm_khugepaged_collapse_file(mm, new_folio, index, is_shmem, addr, file, HPAGE_PMD_NR, result);
++ trace_mm_khugepaged_collapse_file(mm, new_folio, index, addr, is_shmem, file, HPAGE_PMD_NR, result);
+ return result;
+ }
+
+diff --git a/mm/mremap.c b/mm/mremap.c
+index e7ae140fc6409b..3ca167d84c5655 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -238,6 +238,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ {
+ spinlock_t *old_ptl, *new_ptl;
+ struct mm_struct *mm = vma->vm_mm;
++ bool res = false;
+ pmd_t pmd;
+
+ if (!arch_supports_page_table_move())
+@@ -277,19 +278,25 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ if (new_ptl != old_ptl)
+ spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+- /* Clear the pmd */
+ pmd = *old_pmd;
++
++ /* Racing with collapse? */
++ if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd)))
++ goto out_unlock;
++ /* Clear the pmd */
+ pmd_clear(old_pmd);
++ res = true;
+
+ VM_BUG_ON(!pmd_none(*new_pmd));
+
+ pmd_populate(mm, new_pmd, pmd_pgtable(pmd));
+ flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
++out_unlock:
+ if (new_ptl != old_ptl)
+ spin_unlock(new_ptl);
+ spin_unlock(old_ptl);
+
+- return true;
++ return res;
+ }
+ #else
+ static inline bool move_normal_pmd(struct vm_area_struct *vma,
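The move_normal_pmd() fix above is a classic "re-validate after locking" change: the PMD is re-read and re-checked with pmd_present()/pmd_leaf() once the page-table locks are held, because a racing collapse may have changed it since the optimistic checks, and the function now reports whether it actually moved anything so the caller can fall back. The shape of that pattern in a trivial standalone form, with names invented for the example:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;
static bool entry_present = true;

static bool move_entry(void)
{
    bool res = false;

    pthread_mutex_lock(&ptl);
    /* Re-check under the lock: any pre-lock observation of
     * entry_present may already be stale, just as *old_pmd can
     * change before the ptl is taken. */
    if (entry_present) {
        entry_present = false;    /* "clear the pmd" */
        /* ... repopulate at the new location here ... */
        res = true;
    }
    pthread_mutex_unlock(&ptl);
    return res;        /* caller falls back to PTE copies if false */
}

int main(void)
{
    printf("first move: %d\n", move_entry());
    printf("racing second move: %d\n", move_entry());
    return 0;
}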
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 38bdc439651acf..478ba2f7c2eefd 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2106,7 +2106,7 @@ static int unuse_mm(struct mm_struct *mm, unsigned int type)
+
+ mmap_read_lock(mm);
+ for_each_vma(vmi, vma) {
+- if (vma->anon_vma) {
++ if (vma->anon_vma && !is_vm_hugetlb_page(vma)) {
+ ret = unuse_vma(vma, type);
+ if (ret)
+ break;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index bd489c1af22893..128f307da6eeac 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4300,7 +4300,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
+ }
+
+ /* ineligible */
+- if (zone > sc->reclaim_idx) {
++ if (!folio_test_lru(folio) || zone > sc->reclaim_idx) {
+ gen = folio_inc_gen(lruvec, folio, false);
+ list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+ return true;
+@@ -4940,8 +4940,8 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
+
+ blk_finish_plug(&plug);
+ done:
+- /* kswapd should never fail */
+- pgdat->kswapd_failures = 0;
++ if (sc->nr_reclaimed > reclaimed)
++ pgdat->kswapd_failures = 0;
+ }
+
+ /******************************************************************************
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 67604ccec2f427..e39fba5565c5d4 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -825,11 +825,14 @@ static int __init bt_init(void)
+ bt_sysfs_cleanup();
+ cleanup_led:
+ bt_leds_cleanup();
++ debugfs_remove_recursive(bt_debugfs);
+ return err;
+ }
+
+ static void __exit bt_exit(void)
+ {
++ iso_exit();
++
+ mgmt_exit();
+
+ sco_exit();
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index d5e00d0dd1a04b..c9eefb43bf47e3 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -2301,13 +2301,9 @@ int iso_init(void)
+
+ hci_register_cb(&iso_cb);
+
+- if (IS_ERR_OR_NULL(bt_debugfs))
+- return 0;
+-
+- if (!iso_debugfs) {
++ if (!IS_ERR_OR_NULL(bt_debugfs))
+ iso_debugfs = debugfs_create_file("iso", 0444, bt_debugfs,
+ NULL, &iso_debugfs_fops);
+- }
+
+ iso_inited = true;
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 16c48df8df4cc8..8f67eea34779e1 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2342,9 +2342,7 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
+ if (len <= skb->len)
+ break;
+
+- if (unlikely(TCP_SKB_CB(skb)->eor) ||
+- tcp_has_tx_tstamp(skb) ||
+- !skb_pure_zcopy_same(skb, next))
++ if (tcp_has_tx_tstamp(skb) || !tcp_skb_can_collapse(skb, next))
+ return false;
+
+ len -= skb->len;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 49c622e743e87f..2a82ed7f7d9d2e 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -950,8 +950,10 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
+ skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
+ skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(datalen,
+ cork->gso_size);
++
++ /* Don't checksum the payload, skb will get segmented */
++ goto csum_partial;
+ }
+- goto csum_partial;
+ }
+
+ if (is_udplite) /* UDP-Lite */
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 6602a2e9cdb532..a10181ac8e9806 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1266,8 +1266,10 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
+ skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
+ skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(datalen,
+ cork->gso_size);
++
++ /* Don't checksum the payload, skb will get segmented */
++ goto csum_partial;
+ }
+- goto csum_partial;
+ }
+
+ if (is_udplite)
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index 79f2278434b950..3dc49c3169f20e 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -15,6 +15,7 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ SNMP_MIB_ITEM("MPCapableACKRX", MPTCP_MIB_MPCAPABLEPASSIVEACK),
+ SNMP_MIB_ITEM("MPCapableFallbackACK", MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK),
+ SNMP_MIB_ITEM("MPCapableFallbackSYNACK", MPTCP_MIB_MPCAPABLEACTIVEFALLBACK),
++ SNMP_MIB_ITEM("MPCapableEndpAttempt", MPTCP_MIB_MPCAPABLEENDPATTEMPT),
+ SNMP_MIB_ITEM("MPFallbackTokenInit", MPTCP_MIB_TOKENFALLBACKINIT),
+ SNMP_MIB_ITEM("MPTCPRetrans", MPTCP_MIB_RETRANSSEGS),
+ SNMP_MIB_ITEM("MPJoinNoTokenFound", MPTCP_MIB_JOINNOTOKEN),
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index 3f4ca290be6904..ac574a790465a6 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -10,6 +10,7 @@ enum linux_mptcp_mib_field {
+ MPTCP_MIB_MPCAPABLEPASSIVEACK, /* Received third ACK with MP_CAPABLE */
+ MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK,/* Server-side fallback during 3-way handshake */
+ MPTCP_MIB_MPCAPABLEACTIVEFALLBACK, /* Client-side fallback during 3-way handshake */
++ MPTCP_MIB_MPCAPABLEENDPATTEMPT, /* Prohibited MPC to port-based endp */
+ MPTCP_MIB_TOKENFALLBACKINIT, /* Could not init/allocate token */
+ MPTCP_MIB_RETRANSSEGS, /* Segments retransmitted at the MPTCP-level */
+ MPTCP_MIB_JOINNOTOKEN, /* Received MP_JOIN but the token was not found */
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 6c586b2b377bbe..ba99484c501ed2 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -869,12 +869,12 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ i, rm_id, id, remote_id, msk->mpc_endpoint_id);
+ spin_unlock_bh(&msk->pm.lock);
+ mptcp_subflow_shutdown(sk, ssk, how);
++ removed |= subflow->request_join;
+
+ /* the following takes care of updating the subflows counter */
+ mptcp_close_ssk(sk, ssk, subflow);
+ spin_lock_bh(&msk->pm.lock);
+
+- removed |= subflow->request_join;
+ if (rm_type == MPTCP_MIB_RMSUBFLOW)
+ __MPTCP_INC_STATS(sock_net(sk), rm_type);
+ }
+@@ -1117,6 +1117,7 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
+ */
+ inet_sk_state_store(newsk, TCP_LISTEN);
+ lock_sock(ssk);
++ WRITE_ONCE(mptcp_subflow_ctx(ssk)->pm_listener, true);
+ err = __inet_listen_sk(ssk, backlog);
+ if (!err)
+ mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CREATED);
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 3b22313d1b86f6..049af90589ae62 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -528,6 +528,7 @@ struct mptcp_subflow_context {
+ __unused : 9;
+ bool data_avail;
+ bool scheduled;
++ bool pm_listener; /* a listener managed by the kernel PM? */
+ u32 remote_nonce;
+ u64 thmac;
+ u32 local_nonce;
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 7d14da95a28305..d4d1fddea44f3b 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -132,6 +132,13 @@ static void subflow_add_reset_reason(struct sk_buff *skb, u8 reason)
+ }
+ }
+
++static int subflow_reset_req_endp(struct request_sock *req, struct sk_buff *skb)
++{
++ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEENDPATTEMPT);
++ subflow_add_reset_reason(skb, MPTCP_RST_EPROHIBIT);
++ return -EPERM;
++}
++
+ /* Init mptcp request socket.
+ *
+ * Returns an error code if a JOIN has failed and a TCP reset
+@@ -165,6 +172,8 @@ static int subflow_check_req(struct request_sock *req,
+ if (opt_mp_capable) {
+ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVE);
+
++ if (unlikely(listener->pm_listener))
++ return subflow_reset_req_endp(req, skb);
+ if (opt_mp_join)
+ return 0;
+ } else if (opt_mp_join) {
+@@ -172,6 +181,8 @@ static int subflow_check_req(struct request_sock *req,
+
+ if (mp_opt.backup)
+ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNBACKUPRX);
++ } else if (unlikely(listener->pm_listener)) {
++ return subflow_reset_req_endp(req, skb);
+ }
+
+ if (opt_mp_capable && listener->request_mptcp) {
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 4a2c8274c3df7e..843cc1ed75c3e5 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -303,6 +303,7 @@ enum {
+ CXT_FIXUP_HP_SPECTRE,
+ CXT_FIXUP_HP_GATE_MIC,
+ CXT_FIXUP_MUTE_LED_GPIO,
++ CXT_FIXUP_HP_ELITEONE_OUT_DIS,
+ CXT_FIXUP_HP_ZBOOK_MUTE_LED,
+ CXT_FIXUP_HEADSET_MIC,
+ CXT_FIXUP_HP_MIC_NO_PRESENCE,
+@@ -320,6 +321,19 @@ static void cxt_fixup_stereo_dmic(struct hda_codec *codec,
+ spec->gen.inv_dmic_split = 1;
+ }
+
++/* fix widget control pin settings */
++static void cxt_fixup_update_pinctl(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_PROBE) {
++ /* Unset OUT_EN for this Node pin, leaving only HP_EN.
++ * This is the value stored in the codec register after
++ * the correct initialization of the previous windows boot.
++ */
++ snd_hda_set_pin_ctl_cache(codec, 0x1d, AC_PINCTL_HP_EN);
++ }
++}
++
+ static void cxt5066_increase_mic_boost(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -971,6 +985,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cxt_fixup_mute_led_gpio,
+ },
++ [CXT_FIXUP_HP_ELITEONE_OUT_DIS] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cxt_fixup_update_pinctl,
++ },
+ [CXT_FIXUP_HP_ZBOOK_MUTE_LED] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cxt_fixup_hp_zbook_mute_led,
+@@ -1061,6 +1079,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK),
++ SND_PCI_QUIRK(0x103c, 0x83e5, "HP EliteOne 1000 G2", CXT_FIXUP_HP_ELITEONE_OUT_DIS),
+ SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+diff --git a/sound/usb/mixer_scarlett2.c b/sound/usb/mixer_scarlett2.c
+index 1150cf104985ce..4cddf84db631c6 100644
+--- a/sound/usb/mixer_scarlett2.c
++++ b/sound/usb/mixer_scarlett2.c
+@@ -5613,6 +5613,8 @@ static int scarlett2_update_filter_values(struct usb_mixer_interface *mixer)
+ info->peq_flt_total_count *
+ SCARLETT2_BIQUAD_COEFFS,
+ peq_flt_values);
++ if (err < 0)
++ return err;
+
+ for (i = 0, dst_idx = 0; i < info->dsp_input_count; i++) {
+ src_idx = i *
+diff --git a/tools/testing/selftests/hid/Makefile b/tools/testing/selftests/hid/Makefile
+index 346328e2295c30..748e0c79a27dad 100644
+--- a/tools/testing/selftests/hid/Makefile
++++ b/tools/testing/selftests/hid/Makefile
+@@ -18,6 +18,7 @@ TEST_PROGS += hid-usb_crash.sh
+ TEST_PROGS += hid-wacom.sh
+
+ TEST_FILES := run-hid-tools-tests.sh
++TEST_FILES += tests
+
+ CXX ?= $(CROSS_COMPILE)g++
+
+diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
+index 717539eddf9875..852e7281026ee4 100644
+--- a/tools/testing/selftests/mm/uffd-common.c
++++ b/tools/testing/selftests/mm/uffd-common.c
+@@ -18,7 +18,7 @@ bool test_uffdio_wp = true;
+ unsigned long long *count_verify;
+ uffd_test_ops_t *uffd_test_ops;
+ uffd_test_case_ops_t *uffd_test_case_ops;
+-atomic_bool ready_for_fork;
++pthread_barrier_t ready_for_fork;
+
+ static int uffd_mem_fd_create(off_t mem_size, bool hugetlb)
+ {
+@@ -519,7 +519,8 @@ void *uffd_poll_thread(void *arg)
+ pollfd[1].fd = pipefd[cpu*2];
+ pollfd[1].events = POLLIN;
+
+- ready_for_fork = true;
++ /* Ready for parent thread to fork */
++ pthread_barrier_wait(&ready_for_fork);
+
+ for (;;) {
+ ret = poll(pollfd, 2, -1);
+diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
+index a70ae10b5f6206..3e6228d8e0dcc7 100644
+--- a/tools/testing/selftests/mm/uffd-common.h
++++ b/tools/testing/selftests/mm/uffd-common.h
+@@ -33,7 +33,6 @@
+ #include <inttypes.h>
+ #include <stdint.h>
+ #include <sys/random.h>
+-#include <stdatomic.h>
+
+ #include "../kselftest.h"
+ #include "vm_util.h"
+@@ -105,7 +104,7 @@ extern bool map_shared;
+ extern bool test_uffdio_wp;
+ extern unsigned long long *count_verify;
+ extern volatile bool test_uffdio_copy_eexist;
+-extern atomic_bool ready_for_fork;
++extern pthread_barrier_t ready_for_fork;
+
+ extern uffd_test_ops_t anon_uffd_test_ops;
+ extern uffd_test_ops_t shmem_uffd_test_ops;
+diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
+index b3d21eed203dc2..c8a3b1c7edffbd 100644
+--- a/tools/testing/selftests/mm/uffd-unit-tests.c
++++ b/tools/testing/selftests/mm/uffd-unit-tests.c
+@@ -241,6 +241,9 @@ static void *fork_event_consumer(void *data)
+ fork_event_args *args = data;
+ struct uffd_msg msg = { 0 };
+
++ /* Ready for parent thread to fork */
++ pthread_barrier_wait(&ready_for_fork);
++
+ /* Read until a full msg received */
+ while (uffd_read_msg(args->parent_uffd, &msg));
+
+@@ -308,8 +311,12 @@ static int pagemap_test_fork(int uffd, bool with_event, bool test_pin)
+
+ /* Prepare a thread to resolve EVENT_FORK */
+ if (with_event) {
++ pthread_barrier_init(&ready_for_fork, NULL, 2);
+ if (pthread_create(&thread, NULL, fork_event_consumer, &args))
+ err("pthread_create()");
++ /* Wait for child thread to start before forking */
++ pthread_barrier_wait(&ready_for_fork);
++ pthread_barrier_destroy(&ready_for_fork);
+ }
+
+ child = fork();
+@@ -774,7 +781,7 @@ static void uffd_sigbus_test_common(bool wp)
+ char c;
+ struct uffd_args args = { 0 };
+
+- ready_for_fork = false;
++ pthread_barrier_init(&ready_for_fork, NULL, 2);
+
+ fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
+
+@@ -791,8 +798,9 @@ static void uffd_sigbus_test_common(bool wp)
+ if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ err("uffd_poll_thread create");
+
+- while (!ready_for_fork)
+- ; /* Wait for the poll_thread to start executing before forking */
++ /* Wait for child thread to start before forking */
++ pthread_barrier_wait(&ready_for_fork);
++ pthread_barrier_destroy(&ready_for_fork);
+
+ pid = fork();
+ if (pid < 0)
+@@ -833,7 +841,7 @@ static void uffd_events_test_common(bool wp)
+ char c;
+ struct uffd_args args = { 0 };
+
+- ready_for_fork = false;
++ pthread_barrier_init(&ready_for_fork, NULL, 2);
+
+ fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
+ if (uffd_register(uffd, area_dst, nr_pages * page_size,
+@@ -844,8 +852,9 @@ static void uffd_events_test_common(bool wp)
+ if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ err("uffd_poll_thread create");
+
+- while (!ready_for_fork)
+- ; /* Wait for the poll_thread to start executing before forking */
++ /* Wait for child thread to start before forking */
++ pthread_barrier_wait(&ready_for_fork);
++ pthread_barrier_destroy(&ready_for_fork);
+
+ pid = fork();
+ if (pid < 0)
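The uffd selftest hunks above swap a busy-wait on an atomic_bool for a pthread_barrier_t, so the parent provably blocks until the helper thread is up before it forks. The core of that handshake, reduced to a compilable example (build with -pthread; the thread body here is a stub):

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static pthread_barrier_t ready_for_fork;

static void *poll_thread(void *arg)
{
    (void)arg;
    /* ... set up uffd/pipe file descriptors here ... */
    pthread_barrier_wait(&ready_for_fork);  /* signal readiness */
    /* ... poll loop would run here ... */
    return NULL;
}

int main(void)
{
    pthread_t thr;
    pid_t pid;

    pthread_barrier_init(&ready_for_fork, NULL, 2);
    if (pthread_create(&thr, NULL, poll_thread, NULL))
        return 1;

    /*
     * Blocks until poll_thread has reached the barrier. The spin on
     * a plain flag being removed gives no such guarantee and burns
     * CPU while it waits.
     */
    pthread_barrier_wait(&ready_for_fork);
    pthread_barrier_destroy(&ready_for_fork);

    pid = fork();
    if (pid == 0)
        _exit(0);
    waitpid(pid, NULL, 0);
    pthread_join(thr, NULL);
    puts("forked only after the helper was ready");
    return 0;
}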
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index cde041c93906df..05859bb387ffc5 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -23,6 +23,7 @@ tmpfile=""
+ cout=""
+ err=""
+ capout=""
++cappid=""
+ ns1=""
+ ns2=""
+ iptables="iptables"
+@@ -861,40 +862,62 @@ check_cestab()
+ fi
+ }
+
+-do_transfer()
++cond_start_capture()
+ {
+- local listener_ns="$1"
+- local connector_ns="$2"
+- local cl_proto="$3"
+- local srv_proto="$4"
+- local connect_addr="$5"
+-
+- local port=$((10000 + MPTCP_LIB_TEST_COUNTER - 1))
+- local cappid
+- local FAILING_LINKS=${FAILING_LINKS:-""}
+- local fastclose=${fastclose:-""}
+- local speed=${speed:-"fast"}
++ local ns="$1"
+
+- :> "$cout"
+- :> "$sout"
+ :> "$capout"
+
+ if $capture; then
+- local capuser
+- if [ -z $SUDO_USER ] ; then
++ local capuser capfile
++ if [ -z $SUDO_USER ]; then
+ capuser=""
+ else
+ capuser="-Z $SUDO_USER"
+ fi
+
+- capfile=$(printf "mp_join-%02u-%s.pcap" "$MPTCP_LIB_TEST_COUNTER" "${listener_ns}")
++ capfile=$(printf "mp_join-%02u-%s.pcap" "$MPTCP_LIB_TEST_COUNTER" "$ns")
+
+ echo "Capturing traffic for test $MPTCP_LIB_TEST_COUNTER into $capfile"
+- ip netns exec ${listener_ns} tcpdump -i any -s 65535 -B 32768 $capuser -w $capfile > "$capout" 2>&1 &
++ ip netns exec "$ns" tcpdump -i any -s 65535 -B 32768 $capuser -w "$capfile" > "$capout" 2>&1 &
+ cappid=$!
+
+ sleep 1
+ fi
++}
++
++cond_stop_capture()
++{
++ if $capture; then
++ sleep 1
++ kill $cappid
++ cat "$capout"
++ fi
++}
++
++get_port()
++{
++ echo "$((10000 + MPTCP_LIB_TEST_COUNTER - 1))"
++}
++
++do_transfer()
++{
++ local listener_ns="$1"
++ local connector_ns="$2"
++ local cl_proto="$3"
++ local srv_proto="$4"
++ local connect_addr="$5"
++ local port
++
++ local FAILING_LINKS=${FAILING_LINKS:-""}
++ local fastclose=${fastclose:-""}
++ local speed=${speed:-"fast"}
++ port=$(get_port)
++
++ :> "$cout"
++ :> "$sout"
++
++ cond_start_capture ${listener_ns}
+
+ NSTAT_HISTORY=/tmp/${listener_ns}.nstat ip netns exec ${listener_ns} \
+ nstat -n
+@@ -981,10 +1004,7 @@ do_transfer()
+ wait $spid
+ local rets=$?
+
+- if $capture; then
+- sleep 1
+- kill $cappid
+- fi
++ cond_stop_capture
+
+ NSTAT_HISTORY=/tmp/${listener_ns}.nstat ip netns exec ${listener_ns} \
+ nstat | grep Tcp > /tmp/${listener_ns}.out
+@@ -1000,7 +1020,6 @@ do_transfer()
+ ip netns exec ${connector_ns} ss -Menita 1>&2 -o "dport = :$port"
+ cat /tmp/${connector_ns}.out
+
+- cat "$capout"
+ return 1
+ fi
+
+@@ -1017,13 +1036,7 @@ do_transfer()
+ fi
+ rets=$?
+
+- if [ $retc -eq 0 ] && [ $rets -eq 0 ];then
+- cat "$capout"
+- return 0
+- fi
+-
+- cat "$capout"
+- return 1
++ [ $retc -eq 0 ] && [ $rets -eq 0 ]
+ }
+
+ make_file()
+@@ -2786,6 +2799,32 @@ verify_listener_events()
+ fail_test
+ }
+
++chk_mpc_endp_attempt()
++{
++ local retl=$1
++ local attempts=$2
++
++ print_check "Connect"
++
++ if [ ${retl} = 124 ]; then
++ fail_test "timeout on connect"
++ elif [ ${retl} = 0 ]; then
++ fail_test "unexpected successful connect"
++ else
++ print_ok
++
++ print_check "Attempts"
++ count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPCapableEndpAttempt")
++ if [ -z "$count" ]; then
++ print_skip
++ elif [ "$count" != "$attempts" ]; then
++ fail_test "got ${count} MPC attempt[s] on port-based endpoint, expected ${attempts}"
++ else
++ print_ok
++ fi
++ fi
++}
++
+ add_addr_ports_tests()
+ {
+ # signal address with port
+@@ -2876,6 +2915,22 @@ add_addr_ports_tests()
+ chk_join_nr 2 2 2
+ chk_add_nr 2 2 2
+ fi
++
++ if reset "port-based signal endpoint must not accept mpc"; then
++ local port retl count
++ port=$(get_port)
++
++ cond_start_capture ${ns1}
++ pm_nl_add_endpoint ${ns1} 10.0.2.1 flags signal port ${port}
++ mptcp_lib_wait_local_port_listen ${ns1} ${port}
++
++ timeout 1 ip netns exec ${ns2} \
++ ./mptcp_connect -t ${timeout_poll} -p $port -s MPTCP 10.0.2.1 >/dev/null 2>&1
++ retl=$?
++ cond_stop_capture
++
++ chk_mpc_endp_attempt ${retl} 1
++ fi
+ }
+
+ syncookies_tests()
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-25 11:42 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-25 11:42 UTC (permalink / raw
To: gentoo-commits
commit: ccb08661d505822adbf5fe5b353c1da8d93e95a9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 25 11:42:29 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 25 11:42:29 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ccb08661
netfilter: xtables: fix typo causing some targets not to load on IPv6
Bug: https://bugs.gentoo.org/941988
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2005_netfilter-xtables-fix-typo.patch | 71 +++++++++++++++++++++++++++++++++++
2 files changed, 75 insertions(+)
diff --git a/0000_README b/0000_README
index 70f8b56f..f88c09ee 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2005_netfilter-xtables-fix-typo.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/net/netfilter?id=306ed1728e8438caed30332e1ab46b28c25fe3d8
+Desc: netfilter: xtables: fix typo causing some targets not to load on IPv6
+
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2005_netfilter-xtables-fix-typo.patch b/2005_netfilter-xtables-fix-typo.patch
new file mode 100644
index 00000000..6a7dfc7c
--- /dev/null
+++ b/2005_netfilter-xtables-fix-typo.patch
@@ -0,0 +1,71 @@
+From 306ed1728e8438caed30332e1ab46b28c25fe3d8 Mon Sep 17 00:00:00 2001
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+Date: Sun, 20 Oct 2024 14:49:51 +0200
+Subject: netfilter: xtables: fix typo causing some targets not to load on IPv6
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+- There is no NFPROTO_IPV6 family for mark and NFLOG.
+- TRACE is also missing module autoload with NFPROTO_IPV6.
+
+This results in ip6tables failing to restore a ruleset. This issue has been
+reported by several users providing incomplete patches.
+
+Very similar to Ilya Katsnelson's patch including a missing chunk in the
+TRACE extension.
+
+Fixes: 0bfcb7b71e73 ("netfilter: xtables: avoid NFPROTO_UNSPEC where needed")
+Reported-by: Ignat Korchagin <ignat@cloudflare.com>
+Reported-by: Ilya Katsnelson <me@0upti.me>
+Reported-by: Krzysztof Olędzki <ole@ans.pl>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+---
+ net/netfilter/xt_NFLOG.c | 2 +-
+ net/netfilter/xt_TRACE.c | 1 +
+ net/netfilter/xt_mark.c | 2 +-
+ 3 files changed, 3 insertions(+), 2 deletions(-)
+
+(limited to 'net/netfilter')
+
+diff --git a/net/netfilter/xt_NFLOG.c b/net/netfilter/xt_NFLOG.c
+index d80abd6ccaf8f7..6dcf4bc7e30b2a 100644
+--- a/net/netfilter/xt_NFLOG.c
++++ b/net/netfilter/xt_NFLOG.c
+@@ -79,7 +79,7 @@ static struct xt_target nflog_tg_reg[] __read_mostly = {
+ {
+ .name = "NFLOG",
+ .revision = 0,
+- .family = NFPROTO_IPV4,
++ .family = NFPROTO_IPV6,
+ .checkentry = nflog_tg_check,
+ .destroy = nflog_tg_destroy,
+ .target = nflog_tg,
+diff --git a/net/netfilter/xt_TRACE.c b/net/netfilter/xt_TRACE.c
+index f3fa4f11348cd8..a642ff09fc8e8c 100644
+--- a/net/netfilter/xt_TRACE.c
++++ b/net/netfilter/xt_TRACE.c
+@@ -49,6 +49,7 @@ static struct xt_target trace_tg_reg[] __read_mostly = {
+ .target = trace_tg,
+ .checkentry = trace_tg_check,
+ .destroy = trace_tg_destroy,
++ .me = THIS_MODULE,
+ },
+ #endif
+ };
+diff --git a/net/netfilter/xt_mark.c b/net/netfilter/xt_mark.c
+index f76fe04fc9a4e1..65b965ca40ea7e 100644
+--- a/net/netfilter/xt_mark.c
++++ b/net/netfilter/xt_mark.c
+@@ -62,7 +62,7 @@ static struct xt_target mark_tg_reg[] __read_mostly = {
+ {
+ .name = "MARK",
+ .revision = 2,
+- .family = NFPROTO_IPV4,
++ .family = NFPROTO_IPV6,
+ .target = mark_tg,
+ .targetsize = sizeof(struct xt_mark_tginfo2),
+ .me = THIS_MODULE,
+--
+cgit 1.2.3-korg
+
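The fix above is a one-word change per registration array: the second slot, meant for IPv6, was registered with .family = NFPROTO_IPV4, so the IPv6 lookup for that target revision found nothing and ip6tables rule restores failed. The failure mode, modeled with a plain lookup table; the NFPROTO numeric values match the kernel's, everything else is illustrative:

#include <stdio.h>
#include <string.h>

enum { NFPROTO_IPV4 = 2, NFPROTO_IPV6 = 10 };

struct target {
    const char *name;
    int family;
};

/* Both slots say NFPROTO_IPV4 -- the copy-paste typo being fixed. */
static const struct target targets[] = {
    { "MARK", NFPROTO_IPV4 },
    { "MARK", NFPROTO_IPV4 },    /* should be NFPROTO_IPV6 */
};

static const struct target *find_target(const char *name, int family)
{
    for (size_t i = 0; i < sizeof(targets) / sizeof(targets[0]); i++)
        if (!strcmp(targets[i].name, name) &&
            targets[i].family == family)
            return &targets[i];
    return NULL;
}

int main(void)
{
    printf("MARK/IPv4: %s\n",
           find_target("MARK", NFPROTO_IPV4) ? "found" : "missing");
    printf("MARK/IPv6: %s\n",
           find_target("MARK", NFPROTO_IPV6) ? "found" : "missing");
    return 0;
}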
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-10-27 13:40 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-10-27 13:40 UTC (permalink / raw
To: gentoo-commits
commit: 9a580cd7cc60c63211ba84f7b285b899b34cf280
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 27 13:39:24 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct 27 13:39:24 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9a580cd7
BPF tools compilation fixes
libbpf: Prevent compiler warnings/errors
resolve_btfids: Fix compiler warnings
Bug: https://bugs.gentoo.org/942303
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +++
2951_libbpf-Prevent-compiler-warnings-errors.patch | 69 ++++++++++++++++++++
2952_resolve-btfids-Fix-compiler-warnings.patch | 73 ++++++++++++++++++++++
3 files changed, 150 insertions(+)
diff --git a/0000_README b/0000_README
index f88c09ee..1c7c3235 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,14 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
+Patch: 2951_libbpf-Prevent-compiler-warnings-errors.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=7f4ec77f3fee41dd6a41f03a40703889e6e8f7b2
+Desc: libbpf: Prevent compiler warnings/errors
+
+Patch: 2952_resolve-btfids-Fix-compiler-warnings.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=2c3d022abe6c3165109393b75a127b06c2c70063
+Desc: resolve_btfids: Fix compiler warnings
+
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2951_libbpf-Prevent-compiler-warnings-errors.patch b/2951_libbpf-Prevent-compiler-warnings-errors.patch
new file mode 100644
index 00000000..67e150ca
--- /dev/null
+++ b/2951_libbpf-Prevent-compiler-warnings-errors.patch
@@ -0,0 +1,69 @@
+From 7f4ec77f3fee41dd6a41f03a40703889e6e8f7b2 Mon Sep 17 00:00:00 2001
+From: Eder Zulian <ezulian@redhat.com>
+Date: Tue, 22 Oct 2024 19:23:28 +0200
+Subject: libbpf: Prevent compiler warnings/errors
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Initialize 'new_off' and 'pad_bits' to 0 and 'pad_type' to NULL in
+btf_dump_emit_bit_padding to prevent compiler warnings/errors which are
+observed when compiling with 'EXTRA_CFLAGS=-g -Og' options, but do not
+happen when compiling with current default options.
+
+For example, when compiling libbpf with
+
+ $ make "EXTRA_CFLAGS=-g -Og" -C tools/lib/bpf/ clean all
+
+Clang version 17.0.6 and GCC 13.3.1 fail to compile btf_dump.c due to
+following errors:
+
+ btf_dump.c: In function ‘btf_dump_emit_bit_padding’:
+ btf_dump.c:903:42: error: ‘new_off’ may be used uninitialized [-Werror=maybe-uninitialized]
+ 903 | if (new_off > cur_off && new_off <= next_off) {
+ | ~~~~~~~~^~~~~~~~~~~
+ btf_dump.c:870:13: note: ‘new_off’ was declared here
+ 870 | int new_off, pad_bits, bits, i;
+ | ^~~~~~~
+ btf_dump.c:917:25: error: ‘pad_type’ may be used uninitialized [-Werror=maybe-uninitialized]
+ 917 | btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type,
+ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ 918 | in_bitfield ? new_off - cur_off : 0);
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ btf_dump.c:871:21: note: ‘pad_type’ was declared here
+ 871 | const char *pad_type;
+ | ^~~~~~~~
+ btf_dump.c:930:20: error: ‘pad_bits’ may be used uninitialized [-Werror=maybe-uninitialized]
+ 930 | if (bits == pad_bits) {
+ | ^
+ btf_dump.c:870:22: note: ‘pad_bits’ was declared here
+ 870 | int new_off, pad_bits, bits, i;
+ | ^~~~~~~~
+ cc1: all warnings being treated as errors
+
+Signed-off-by: Eder Zulian <ezulian@redhat.com>
+Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
+Acked-by: Jiri Olsa <jolsa@kernel.org>
+Link: https://lore.kernel.org/bpf/20241022172329.3871958-3-ezulian@redhat.com
+---
+ tools/lib/bpf/btf_dump.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 8440c2c5ad3ede..468392f9882d2f 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -867,8 +867,8 @@ static void btf_dump_emit_bit_padding(const struct btf_dump *d,
+ } pads[] = {
+ {"long", d->ptr_sz * 8}, {"int", 32}, {"short", 16}, {"char", 8}
+ };
+- int new_off, pad_bits, bits, i;
+- const char *pad_type;
++ int new_off = 0, pad_bits = 0, bits, i;
++ const char *pad_type = NULL;
+
+ if (cur_off >= next_off)
+ return; /* no gap */
+--
+cgit 1.2.3-korg
+
diff --git a/2952_resolve-btfids-Fix-compiler-warnings.patch b/2952_resolve-btfids-Fix-compiler-warnings.patch
new file mode 100644
index 00000000..eaf9dd23
--- /dev/null
+++ b/2952_resolve-btfids-Fix-compiler-warnings.patch
@@ -0,0 +1,73 @@
+From 2c3d022abe6c3165109393b75a127b06c2c70063 Mon Sep 17 00:00:00 2001
+From: Eder Zulian <ezulian@redhat.com>
+Date: Tue, 22 Oct 2024 19:23:27 +0200
+Subject: resolve_btfids: Fix compiler warnings
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Initialize 'set' and 'set8' pointers to NULL in sets_patch to prevent
+possible compiler warnings which are issued for various optimization
+levels, but do not happen when compiling with current default
+compilation options.
+
+For example, when compiling resolve_btfids with
+
+ $ make "HOSTCFLAGS=-O2 -Wall" -C tools/bpf/resolve_btfids/ clean all
+
+Clang version 17.0.6 and GCC 13.3.1 issue following
+-Wmaybe-uninitialized warnings for variables 'set8' and 'set':
+
+ In function ‘sets_patch’,
+ inlined from ‘symbols_patch’ at main.c:748:6,
+ inlined from ‘main’ at main.c:823:6:
+ main.c:163:9: warning: ‘set8’ may be used uninitialized [-Wmaybe-uninitialized]
+ 163 | eprintf(1, verbose, pr_fmt(fmt), ##__VA_ARGS__)
+ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ main.c:729:17: note: in expansion of macro ‘pr_debug’
+ 729 | pr_debug("sorting addr %5lu: cnt %6d [%s]\n",
+ | ^~~~~~~~
+ main.c: In function ‘main’:
+ main.c:682:37: note: ‘set8’ was declared here
+ 682 | struct btf_id_set8 *set8;
+ | ^~~~
+ In function ‘sets_patch’,
+ inlined from ‘symbols_patch’ at main.c:748:6,
+ inlined from ‘main’ at main.c:823:6:
+ main.c:163:9: warning: ‘set’ may be used uninitialized [-Wmaybe-uninitialized]
+ 163 | eprintf(1, verbose, pr_fmt(fmt), ##__VA_ARGS__)
+ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ main.c:729:17: note: in expansion of macro ‘pr_debug’
+ 729 | pr_debug("sorting addr %5lu: cnt %6d [%s]\n",
+ | ^~~~~~~~
+ main.c: In function ‘main’:
+ main.c:683:36: note: ‘set’ was declared here
+ 683 | struct btf_id_set *set;
+ | ^~~
+
+Signed-off-by: Eder Zulian <ezulian@redhat.com>
+Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
+Acked-by: Jiri Olsa <jolsa@kernel.org>
+Link: https://lore.kernel.org/bpf/20241022172329.3871958-2-ezulian@redhat.com
+---
+ tools/bpf/resolve_btfids/main.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index d54aaa0619df95..bd9f960bce3d5b 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -679,8 +679,8 @@ static int sets_patch(struct object *obj)
+
+ next = rb_first(&obj->sets);
+ while (next) {
+- struct btf_id_set8 *set8;
+- struct btf_id_set *set;
++ struct btf_id_set8 *set8 = NULL;
++ struct btf_id_set *set = NULL;
+ unsigned long addr, off;
+ struct btf_id *id;
+
+--
+cgit 1.2.3-korg
+
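Both tool patches above apply the same remedy: give locals such as set, set8, new_off and pad_type an explicit initial value so that, at -Og or with extra warning flags, GCC and Clang cannot flag a path on which they look read-before-written, even though the real control flow makes that impossible. The kind of correlated-condition code that trips the warning, in miniature; the helper and all names are invented:

#include <stdio.h>
#include <stdlib.h>

/* Invented helper: writes *out only when it returns 0. */
static int lookup(int key, int *out)
{
    if (key < 0)
        return -1;
    *out = key * 2;
    return 0;
}

int main(int argc, char **argv)
{
    int key = argc > 1 ? atoi(argv[1]) : 3;
    int value = 0;    /* the patches' fix: initialize up front */
    int err = lookup(key, &value);

    /*
     * Without "= 0", -Wmaybe-uninitialized can fire on the read
     * below: the compiler does not always prove that err == 0
     * implies the write to value happened, even though it did.
     */
    if (!err)
        printf("value=%d\n", value);
    return err ? 1 : 0;
}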
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-01 11:26 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-01 11:26 UTC (permalink / raw
To: gentoo-commits
commit: b2b4c88cdfdeb5c2665e513f4d722f2003e0af17
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 1 11:26:33 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 1 11:26:33 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2b4c88c
Linux patch 6.11.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-6.11.6.patch | 10498 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 10502 insertions(+)
diff --git a/0000_README b/0000_README
index 1c7c3235..cc196549 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-6.11.5.patch
From: https://www.kernel.org
Desc: Linux 6.11.5
+Patch: 1005_linux-6.11.6.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.6
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1005_linux-6.11.6.patch b/1005_linux-6.11.6.patch
new file mode 100644
index 00000000..b652bb88
--- /dev/null
+++ b/1005_linux-6.11.6.patch
@@ -0,0 +1,10498 @@
+diff --git a/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml b/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
+index 7735e08d35ba14..beef193aaaeba0 100644
+--- a/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
++++ b/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
+@@ -102,21 +102,21 @@ properties:
+ default: 2
+
+ interrupts:
+- anyOf:
+- - minItems: 1
+- items:
+- - description: TX interrupt
+- - description: RX interrupt
+- - items:
+- - description: common/combined interrupt
++ minItems: 1
++ maxItems: 2
+
+ interrupt-names:
+ oneOf:
+- - minItems: 1
++ - description: TX interrupt
++ const: tx
++ - description: RX interrupt
++ const: rx
++ - description: TX and RX interrupts
+ items:
+ - const: tx
+ - const: rx
+- - const: common
++ - description: Common/combined interrupt
++ const: common
+
+ fck_parent:
+ $ref: /schemas/types.yaml#/definitions/string
+diff --git a/Makefile b/Makefile
+index 687ce7aee67a73..318a5d60088e0d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/broadcom/bcm2837-rpi-cm3-io3.dts b/arch/arm/boot/dts/broadcom/bcm2837-rpi-cm3-io3.dts
+index 72d26d130efaa4..85f54fa595aa8f 100644
+--- a/arch/arm/boot/dts/broadcom/bcm2837-rpi-cm3-io3.dts
++++ b/arch/arm/boot/dts/broadcom/bcm2837-rpi-cm3-io3.dts
+@@ -77,7 +77,7 @@ &gpio {
+ };
+
+ &hdmi {
+- hpd-gpios = <&expgpio 1 GPIO_ACTIVE_LOW>;
++ hpd-gpios = <&expgpio 0 GPIO_ACTIVE_LOW>;
+ power-domains = <&power RPI_POWER_DOMAIN_HDMI>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index f6bc3da1ef110e..c8e237b20ef29f 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -10,7 +10,7 @@
+ #
+ # Copyright (C) 1995-2001 by Russell King
+
+-LDFLAGS_vmlinux :=--no-undefined -X
++LDFLAGS_vmlinux :=--no-undefined -X --pic-veneer
+
+ ifeq ($(CONFIG_RELOCATABLE), y)
+ # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 9bef7638342ef7..ebc6782e3cd073 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -996,6 +996,9 @@ static int kvm_vcpu_suspend(struct kvm_vcpu *vcpu)
+ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
+ {
+ if (kvm_request_pending(vcpu)) {
++ if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
++ return -EIO;
++
+ if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
+ kvm_vcpu_sleep(vcpu);
+
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 31e49da867ffc3..2b1e2cec4c45d2 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1977,7 +1977,7 @@ static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+ * one cache line.
+ */
+ if (kvm_has_mte(vcpu->kvm))
+- clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);
++ clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc);
+
+ __vcpu_sys_reg(vcpu, r->reg) = clidr;
+
+diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
+index e7c53e8af3d165..d88fdeaf6144b7 100644
+--- a/arch/arm64/kvm/vgic/vgic-init.c
++++ b/arch/arm64/kvm/vgic/vgic-init.c
+@@ -417,8 +417,28 @@ static void __kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
+ kfree(vgic_cpu->private_irqs);
+ vgic_cpu->private_irqs = NULL;
+
+- if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3)
++ if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
++ /*
++ * If this vCPU is being destroyed because of a failed creation
++ * then unregister the redistributor to avoid leaving behind a
++ * dangling pointer to the vCPU struct.
++ *
++ * vCPUs that have been successfully created (i.e. added to
++ * kvm->vcpu_array) get unregistered in kvm_vgic_destroy(), as
++ * this function gets called while holding kvm->arch.config_lock
++ * in the VM teardown path and would otherwise introduce a lock
++ * inversion w.r.t. kvm->srcu.
++ *
++ * vCPUs that failed creation are torn down outside of the
++ * kvm->arch.config_lock and do not get unregistered in
++ * kvm_vgic_destroy(), meaning it is both safe and necessary to
++ * do so here.
++ */
++ if (kvm_get_vcpu_by_id(vcpu->kvm, vcpu->vcpu_id) != vcpu)
++ vgic_unregister_redist_iodev(vcpu);
++
+ vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
++ }
+ }
+
+ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
+@@ -536,10 +556,10 @@ int kvm_vgic_map_resources(struct kvm *kvm)
+ out:
+ mutex_unlock(&kvm->arch.config_lock);
+ out_slots:
+- mutex_unlock(&kvm->slots_lock);
+-
+ if (ret)
+- kvm_vgic_destroy(kvm);
++ kvm_vm_dead(kvm);
++
++ mutex_unlock(&kvm->slots_lock);
+
+ return ret;
+ }
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 59e05a7aea56a5..6e5a934ee2f55e 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -2172,7 +2172,11 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ emit(A64_STR64I(A64_R(20), A64_SP, regs_off + 8), ctx);
+
+ if (flags & BPF_TRAMP_F_CALL_ORIG) {
+- emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
++ /* for the first pass, assume the worst case */
++ if (!ctx->image)
++ ctx->idx += 4;
++ else
++ emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+ emit_call((const u64)__bpf_tramp_enter, ctx);
+ }
+
+@@ -2216,7 +2220,11 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+
+ if (flags & BPF_TRAMP_F_CALL_ORIG) {
+ im->ip_epilogue = ctx->ro_image + ctx->idx;
+- emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
++ /* for the first pass, assume the worst case */
++ if (!ctx->image)
++ ctx->idx += 4;
++ else
++ emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+ emit_call((const u64)__bpf_tramp_exit, ctx);
+ }
+
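The arm64 JIT hunks above rely on the standard two-pass emitter trick: on the first pass the image buffer is NULL and the instruction index is simply advanced by the worst-case count, because the trampoline image address is not known yet. A toy model of that pattern (invented names and a fake encoding; not the kernel JIT):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Pass 1 (buf == NULL) only counts instructions, reserving the worst
 * case for variable-length sequences; pass 2 writes into a buffer of
 * exactly that size.
 */
struct ctx {
    uint32_t *buf;  /* NULL on the sizing pass */
    size_t idx;
};

static void emit(struct ctx *c, uint32_t insn)
{
    if (c->buf)
        c->buf[c->idx] = insn;
    c->idx++;
}

static void emit_mov_imm64(struct ctx *c, uint64_t imm)
{
    if (!c->buf) {
        c->idx += 4;    /* assume the worst case: four 16-bit chunks */
        return;
    }
    for (int shift = 0; shift < 64; shift += 16)
        emit(c, 0xd2800000u | ((imm >> shift) & 0xffff)); /* fake encoding */
}

int main(void)
{
    struct ctx c = { 0 };

    emit_mov_imm64(&c, 0x1234);             /* pass 1: sizing */
    size_t worst = c.idx;
    c.buf = calloc(worst, sizeof(*c.buf));
    c.idx = 0;
    emit_mov_imm64(&c, 0x1234);             /* pass 2: emit for real */
    printf("reserved %zu, emitted %zu\n", worst, c.idx);
    free(c.buf);
    return 0;
}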
+diff --git a/arch/loongarch/include/asm/bootinfo.h b/arch/loongarch/include/asm/bootinfo.h
+index 6d5846dd075cbd..7657e016233fb1 100644
+--- a/arch/loongarch/include/asm/bootinfo.h
++++ b/arch/loongarch/include/asm/bootinfo.h
+@@ -26,6 +26,10 @@ struct loongson_board_info {
+
+ #define NR_WORDS DIV_ROUND_UP(NR_CPUS, BITS_PER_LONG)
+
++/*
++ * The "core" of cores_per_node and cores_per_package stands for a
++ * logical core, which means in an SMT system it stands for a thread.
++ */
+ struct loongson_system_configuration {
+ int nr_cpus;
+ int nr_nodes;
+diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
+index cd6084f4e153fe..c6bce5fbff57b0 100644
+--- a/arch/loongarch/include/asm/kasan.h
++++ b/arch/loongarch/include/asm/kasan.h
+@@ -16,7 +16,7 @@
+ #define XRANGE_SHIFT (48)
+
+ /* Valid address length */
+-#define XRANGE_SHADOW_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3)
++#define XRANGE_SHADOW_SHIFT min(cpu_vabits, VA_BITS)
+ /* Used for taking out the valid address */
+ #define XRANGE_SHADOW_MASK GENMASK_ULL(XRANGE_SHADOW_SHIFT - 1, 0)
+ /* One segment whole address space size */
+diff --git a/arch/loongarch/kernel/process.c b/arch/loongarch/kernel/process.c
+index f2ff8b5d591e4f..6e58f65455c7ca 100644
+--- a/arch/loongarch/kernel/process.c
++++ b/arch/loongarch/kernel/process.c
+@@ -293,13 +293,15 @@ unsigned long stack_top(void)
+ {
+ unsigned long top = TASK_SIZE & PAGE_MASK;
+
+- /* Space for the VDSO & data page */
+- top -= PAGE_ALIGN(current->thread.vdso->size);
+- top -= VVAR_SIZE;
+-
+- /* Space to randomize the VDSO base */
+- if (current->flags & PF_RANDOMIZE)
+- top -= VDSO_RANDOMIZE_SIZE;
++ if (current->thread.vdso) {
++ /* Space for the VDSO & data page */
++ top -= PAGE_ALIGN(current->thread.vdso->size);
++ top -= VVAR_SIZE;
++
++ /* Space to randomize the VDSO base */
++ if (current->flags & PF_RANDOMIZE)
++ top -= VDSO_RANDOMIZE_SIZE;
++ }
+
+ return top;
+ }
+diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
+index 0f0740f0be274a..d2b18ed23b8a4b 100644
+--- a/arch/loongarch/kernel/setup.c
++++ b/arch/loongarch/kernel/setup.c
+@@ -55,6 +55,7 @@
+ #define SMBIOS_FREQHIGH_OFFSET 0x17
+ #define SMBIOS_FREQLOW_MASK 0xFF
+ #define SMBIOS_CORE_PACKAGE_OFFSET 0x23
++#define SMBIOS_THREAD_PACKAGE_OFFSET 0x25
+ #define LOONGSON_EFI_ENABLE (1 << 3)
+
+ unsigned long fw_arg0, fw_arg1, fw_arg2;
+@@ -125,7 +126,7 @@ static void __init parse_cpu_table(const struct dmi_header *dm)
+ cpu_clock_freq = freq_temp * 1000000;
+
+ loongson_sysconf.cpuname = (void *)dmi_string_parse(dm, dmi_data[16]);
+- loongson_sysconf.cores_per_package = *(dmi_data + SMBIOS_CORE_PACKAGE_OFFSET);
++ loongson_sysconf.cores_per_package = *(dmi_data + SMBIOS_THREAD_PACKAGE_OFFSET);
+
+ pr_info("CpuClock = %llu\n", cpu_clock_freq);
+ }
+diff --git a/arch/loongarch/kernel/traps.c b/arch/loongarch/kernel/traps.c
+index f9f4eb00c92ef5..c57b4134f3e84b 100644
+--- a/arch/loongarch/kernel/traps.c
++++ b/arch/loongarch/kernel/traps.c
+@@ -555,6 +555,9 @@ asmlinkage void noinstr do_ale(struct pt_regs *regs)
+ #else
+ unsigned int *pc;
+
++ if (regs->csr_prmd & CSR_PRMD_PIE)
++ local_irq_enable();
++
+ perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr);
+
+ /*
+@@ -579,6 +582,8 @@ asmlinkage void noinstr do_ale(struct pt_regs *regs)
+ die_if_kernel("Kernel ale access", regs);
+ force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
+ out:
++ if (regs->csr_prmd & CSR_PRMD_PIE)
++ local_irq_disable();
+ #endif
+ irqentry_exit(regs, state);
+ }
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 99f34409fb60f4..4cc631fa703913 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -18,6 +18,7 @@
+ #define RV_MAX_REG_ARGS 8
+ #define RV_FENTRY_NINSNS 2
+ #define RV_FENTRY_NBYTES (RV_FENTRY_NINSNS * 4)
++#define RV_KCFI_NINSNS (IS_ENABLED(CONFIG_CFI_CLANG) ? 1 : 0)
+ /* imm that allows emit_imm to emit max count insns */
+ #define RV_MAX_COUNT_IMM 0x7FFF7FF7FF7FF7FF
+
+@@ -271,7 +272,8 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
+ if (!is_tail_call)
+ emit_addiw(RV_REG_A0, RV_REG_A5, 0, ctx);
+ emit_jalr(RV_REG_ZERO, is_tail_call ? RV_REG_T3 : RV_REG_RA,
+- is_tail_call ? (RV_FENTRY_NINSNS + 1) * 4 : 0, /* skip reserved nops and TCC init */
++ /* kcfi, fentry and TCC init insns will be skipped on tailcall */
++ is_tail_call ? (RV_KCFI_NINSNS + RV_FENTRY_NINSNS + 1) * 4 : 0,
+ ctx);
+ }
+
+@@ -548,8 +550,8 @@ static void emit_atomic(u8 rd, u8 rs, s16 off, s32 imm, bool is64,
+ rv_lr_w(r0, 0, rd, 0, 0), ctx);
+ jmp_offset = ninsns_rvoff(8);
+ emit(rv_bne(RV_REG_T2, r0, jmp_offset >> 1), ctx);
+- emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 0) :
+- rv_sc_w(RV_REG_T3, rs, rd, 0, 0), ctx);
++ emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 1) :
++ rv_sc_w(RV_REG_T3, rs, rd, 0, 1), ctx);
+ jmp_offset = ninsns_rvoff(-6);
+ emit(rv_bne(RV_REG_T3, 0, jmp_offset >> 1), ctx);
+ emit(rv_fence(0x3, 0x3), ctx);
+diff --git a/arch/s390/include/asm/perf_event.h b/arch/s390/include/asm/perf_event.h
+index 9917e2717b2b42..66aff768f8151d 100644
+--- a/arch/s390/include/asm/perf_event.h
++++ b/arch/s390/include/asm/perf_event.h
+@@ -73,6 +73,7 @@ struct perf_sf_sde_regs {
+ #define SAMPLE_FREQ_MODE(hwc) (SAMPL_FLAGS(hwc) & PERF_CPUM_SF_FREQ_MODE)
+
+ #define perf_arch_fetch_caller_regs(regs, __ip) do { \
++ (regs)->psw.mask = 0; \
+ (regs)->psw.addr = (__ip); \
+ (regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) - \
+ offsetof(struct stack_frame, back_chain); \
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index dbe95ec5917e57..d4f19d33914cbc 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -280,18 +280,19 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
+ goto no_pdev;
+
+ switch (ccdf->pec) {
+- case 0x003a: /* Service Action or Error Recovery Successful */
++ case 0x002a: /* Error event concerns FMB */
++ case 0x002b:
++ case 0x002c:
++ break;
++ case 0x0040: /* Service Action or Error Recovery Failed */
++ case 0x003b:
++ zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
++ break;
++ default: /* PCI function left in the error state, attempt to recover */
+ ers_res = zpci_event_attempt_error_recovery(pdev);
+ if (ers_res != PCI_ERS_RESULT_RECOVERED)
+ zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
+ break;
+- default:
+- /*
+- * Mark as frozen not permanently failed because the device
+- * could be subsequently recovered by the platform.
+- */
+- zpci_event_io_failure(pdev, pci_channel_io_frozen);
+- break;
+ }
+ pci_dev_put(pdev);
+ no_pdev:
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 007bab9f2a0e39..4e0b05a6bb5c36 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2260,6 +2260,7 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
+ config ADDRESS_MASKING
+ bool "Linear Address Masking support"
+ depends on X86_64
++ depends on COMPILE_TEST || !CPU_MITIGATIONS # wait for LASS
+ help
+ Linear Address Masking (LAM) modifies the checking that is applied
+ to 64-bit linear addresses, allowing software to use of the
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index b985ca79cf97ba..a481a939862e54 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -103,6 +103,19 @@ static struct perf_pmu_events_attr event_attr_##v = { \
+ .event_str = str, \
+ };
+
++/*
++ * RAPL Package energy counter scope:
++ * 1. AMD/HYGON platforms have a per-PKG package energy counter
++ * 2. For Intel platforms
++ * 2.1. CLX-AP is multi-die and its RAPL MSRs are die-scope
++ * 2.2. Other Intel platforms are single die systems so the scope can be
++ * considered as either pkg-scope or die-scope, and we are considering
++ * them as die-scope.
++ */
++#define rapl_pmu_is_pkg_scope() \
++ (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || \
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
++
+ struct rapl_pmu {
+ raw_spinlock_t lock;
+ int n_active;
+@@ -140,9 +153,25 @@ static unsigned int rapl_cntr_mask;
+ static u64 rapl_timer_ms;
+ static struct perf_msr *rapl_msrs;
+
++/*
++ * Helper functions to get the correct topology macros according to the
++ * RAPL PMU scope.
++ */
++static inline unsigned int get_rapl_pmu_idx(int cpu)
++{
++ return rapl_pmu_is_pkg_scope() ? topology_logical_package_id(cpu) :
++ topology_logical_die_id(cpu);
++}
++
++static inline const struct cpumask *get_rapl_pmu_cpumask(int cpu)
++{
++ return rapl_pmu_is_pkg_scope() ? topology_core_cpumask(cpu) :
++ topology_die_cpumask(cpu);
++}
++
+ static inline struct rapl_pmu *cpu_to_rapl_pmu(unsigned int cpu)
+ {
+- unsigned int rapl_pmu_idx = topology_logical_die_id(cpu);
++ unsigned int rapl_pmu_idx = get_rapl_pmu_idx(cpu);
+
+ /*
+ * The unsigned check also catches the '-1' return value for non
+@@ -552,7 +581,7 @@ static int rapl_cpu_offline(unsigned int cpu)
+
+ pmu->cpu = -1;
+ /* Find a new cpu to collect rapl events */
+- target = cpumask_any_but(topology_die_cpumask(cpu), cpu);
++ target = cpumask_any_but(get_rapl_pmu_cpumask(cpu), cpu);
+
+ /* Migrate rapl events to the new target */
+ if (target < nr_cpu_ids) {
+@@ -565,6 +594,11 @@ static int rapl_cpu_offline(unsigned int cpu)
+
+ static int rapl_cpu_online(unsigned int cpu)
+ {
++ s32 rapl_pmu_idx = get_rapl_pmu_idx(cpu);
++ if (rapl_pmu_idx < 0) {
++ pr_err("topology_logical_(package/die)_id() returned a negative value");
++ return -EINVAL;
++ }
+ struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
+ int target;
+
+@@ -579,14 +613,14 @@ static int rapl_cpu_online(unsigned int cpu)
+ pmu->timer_interval = ms_to_ktime(rapl_timer_ms);
+ rapl_hrtimer_init(pmu);
+
+- rapl_pmus->pmus[topology_logical_die_id(cpu)] = pmu;
++ rapl_pmus->pmus[rapl_pmu_idx] = pmu;
+ }
+
+ /*
+ * Check if there is an online cpu in the package which collects rapl
+ * events already.
+ */
+- target = cpumask_any_and(&rapl_cpu_mask, topology_die_cpumask(cpu));
++ target = cpumask_any_and(&rapl_cpu_mask, get_rapl_pmu_cpumask(cpu));
+ if (target < nr_cpu_ids)
+ return 0;
+
+@@ -675,7 +709,10 @@ static const struct attribute_group *rapl_attr_update[] = {
+
+ static int __init init_rapl_pmus(void)
+ {
+- int nr_rapl_pmu = topology_max_packages() * topology_max_dies_per_package();
++ int nr_rapl_pmu = topology_max_packages();
++
++ if (!rapl_pmu_is_pkg_scope())
++ nr_rapl_pmu *= topology_max_dies_per_package();
+
+ rapl_pmus = kzalloc(struct_size(rapl_pmus, pmus, nr_rapl_pmu), GFP_KERNEL);
+ if (!rapl_pmus)
+diff --git a/arch/x86/include/asm/runtime-const.h b/arch/x86/include/asm/runtime-const.h
+index 24e3a53ca2551e..6652ebddfd02f8 100644
+--- a/arch/x86/include/asm/runtime-const.h
++++ b/arch/x86/include/asm/runtime-const.h
+@@ -6,7 +6,7 @@
+ typeof(sym) __ret; \
+ asm_inline("mov %1,%0\n1:\n" \
+ ".pushsection runtime_ptr_" #sym ",\"a\"\n\t" \
+- ".long 1b - %c2 - .\n\t" \
++ ".long 1b - %c2 - .\n" \
+ ".popsection" \
+ :"=r" (__ret) \
+ :"i" ((unsigned long)0x0123456789abcdefull), \
+@@ -20,7 +20,7 @@
+ typeof(0u+(val)) __ret = (val); \
+ asm_inline("shrl $12,%k0\n1:\n" \
+ ".pushsection runtime_shift_" #sym ",\"a\"\n\t" \
+- ".long 1b - 1 - .\n\t" \
++ ".long 1b - 1 - .\n" \
+ ".popsection" \
+ :"+r" (__ret)); \
+ __ret; })
+diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
+index 04789f45ab2b2f..250e28cba0a0f6 100644
+--- a/arch/x86/include/asm/uaccess_64.h
++++ b/arch/x86/include/asm/uaccess_64.h
+@@ -12,6 +12,13 @@
+ #include <asm/cpufeatures.h>
+ #include <asm/page.h>
+ #include <asm/percpu.h>
++#include <asm/runtime-const.h>
++
++/*
++ * Virtual variable: there's no actual backing store for this,
++ * it can purely be used as 'runtime_const_ptr(USER_PTR_MAX)'
++ */
++extern unsigned long USER_PTR_MAX;
+
+ #ifdef CONFIG_ADDRESS_MASKING
+ /*
+@@ -46,35 +53,41 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
+
+ #endif
+
++#define valid_user_address(x) \
++ ((__force unsigned long)(x) <= runtime_const_ptr(USER_PTR_MAX))
++
+ /*
+- * The virtual address space space is logically divided into a kernel
+- * half and a user half. When cast to a signed type, user pointers
+- * are positive and kernel pointers are negative.
++ * Masking the user address is an alternative to a conditional
++ * user_access_begin that can avoid the fencing. This only works
++ * for dense accesses starting at the address.
+ */
+-#define valid_user_address(x) ((__force long)(x) >= 0)
++static inline void __user *mask_user_address(const void __user *ptr)
++{
++ unsigned long mask;
++ asm("cmp %1,%0\n\t"
++ "sbb %0,%0"
++ :"=r" (mask)
++ :"r" (ptr),
++ "0" (runtime_const_ptr(USER_PTR_MAX)));
++ return (__force void __user *)(mask | (__force unsigned long)ptr);
++}
++#define masked_user_access_begin(x) ({ __uaccess_begin(); mask_user_address(x); })
+
+ /*
+ * User pointers can have tag bits on x86-64. This scheme tolerates
+ * arbitrary values in those bits rather then masking them off.
+ *
+ * Enforce two rules:
+- * 1. 'ptr' must be in the user half of the address space
++ * 1. 'ptr' must be in the user part of the address space
+ * 2. 'ptr+size' must not overflow into kernel addresses
+ *
+- * Note that addresses around the sign change are not valid addresses,
+- * and will GP-fault even with LAM enabled if the sign bit is set (see
+- * "CR3.LAM_SUP" that can narrow the canonicality check if we ever
+- * enable it, but not remove it entirely).
+- *
+- * So the "overflow into kernel addresses" does not imply some sudden
+- * exact boundary at the sign bit, and we can allow a lot of slop on the
+- * size check.
++ * Note that we always have at least one guard page between the
++ * max user address and the non-canonical gap, allowing us to
++ * ignore small sizes entirely.
+ *
+ * In fact, we could probably remove the size check entirely, since
+ * any kernel accesses will be in increasing address order starting
+- * at 'ptr', and even if the end might be in kernel space, we'll
+- * hit the GP faults for non-canonical accesses before we ever get
+- * there.
++ * at 'ptr'.
+ *
+ * That's a separate optimization, for now just handle the small
+ * constant case.
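The mask_user_address() hunk above replaces the old sign-bit check with a compare against the USER_PTR_MAX runtime constant: cmp computes USER_PTR_MAX - ptr, sbb turns the resulting borrow into an all-ones mask, and OR-ing that mask into the pointer forces any out-of-range pointer to an always-faulting all-ones address with no conditional branch to mispredict. A portable C model of the same idea (illustrative only; the kernel uses the inline asm shown above):

#include <stdio.h>

/*
 * mask is all-ones when ptr exceeds the limit and zero otherwise, so
 * out-of-range pointers are clamped branchlessly.
 */
static unsigned long mask_user_ptr(unsigned long ptr, unsigned long user_max)
{
    unsigned long mask = 0UL - (unsigned long)(ptr > user_max);
    return ptr | mask;
}

int main(void)
{
    unsigned long user_max = 0x00007ffffffff000UL; /* illustrative limit */

    printf("%#lx\n", mask_user_ptr(0x1000UL, user_max));   /* unchanged */
    printf("%#lx\n", mask_user_ptr(~0UL - 42, user_max));  /* all ones */
    return 0;
}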
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 61eadde085114d..9fe9972d2071b9 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -44,6 +44,9 @@
+ #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4
+ #define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc
+ #define PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4 0x12c4
++#define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4 0x16fc
++#define PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4 0x124c
++#define PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4 0x12bc
+ #define PCI_DEVICE_ID_AMD_MI200_DF_F4 0x14d4
+ #define PCI_DEVICE_ID_AMD_MI300_DF_F4 0x152c
+
+@@ -125,6 +128,9 @@ static const struct pci_device_id amd_nb_link_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI200_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI300_DF_F4) },
+ {}
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index ab0e2da7c9ef50..b7d97f97cd9822 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -69,6 +69,7 @@
+ #include <asm/sev.h>
+ #include <asm/tdx.h>
+ #include <asm/posted_intr.h>
++#include <asm/runtime-const.h>
+
+ #include "cpu.h"
+
+@@ -2371,6 +2372,15 @@ void __init arch_cpu_finalize_init(void)
+ alternative_instructions();
+
+ if (IS_ENABLED(CONFIG_X86_64)) {
++ unsigned long USER_PTR_MAX = TASK_SIZE_MAX-1;
++
++ /*
++ * Enable this when LAM is gated on LASS support
++ if (cpu_feature_enabled(X86_FEATURE_LAM))
++ USER_PTR_MAX = (1ul << 63) - PAGE_SIZE - 1;
++ */
++ runtime_const_init(ptr, USER_PTR_MAX);
++
+ /*
+ * Make sure the first 2MB area is not mapped by huge pages
+ * There are typically fixed size MTRRs in there and overlapping
+diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+index 50fa1fe9a073f5..200d89a6402708 100644
+--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
++++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+@@ -29,10 +29,10 @@
+ * hardware. The allocated bandwidth percentage is rounded to the next
+ * control step available on the hardware.
+ */
+-static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
++static bool bw_validate(char *buf, u32 *data, struct rdt_resource *r)
+ {
+- unsigned long bw;
+ int ret;
++ u32 bw;
+
+ /*
+ * Only linear delay values is supported for current Intel SKUs.
+@@ -42,16 +42,21 @@ static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
+ return false;
+ }
+
+- ret = kstrtoul(buf, 10, &bw);
++ ret = kstrtou32(buf, 10, &bw);
+ if (ret) {
+- rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
++ rdt_last_cmd_printf("Invalid MB value %s\n", buf);
+ return false;
+ }
+
+- if ((bw < r->membw.min_bw || bw > r->default_ctrl) &&
+- !is_mba_sc(r)) {
+- rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", bw,
+- r->membw.min_bw, r->default_ctrl);
++ /* Nothing else to do if software controller is enabled. */
++ if (is_mba_sc(r)) {
++ *data = bw;
++ return true;
++ }
++
++ if (bw < r->membw.min_bw || bw > r->default_ctrl) {
++ rdt_last_cmd_printf("MB value %u out of range [%d,%d]\n",
++ bw, r->membw.min_bw, r->default_ctrl);
+ return false;
+ }
+
+@@ -65,7 +70,7 @@ int parse_bw(struct rdt_parse_data *data, struct resctrl_schema *s,
+ struct resctrl_staged_config *cfg;
+ u32 closid = data->rdtgrp->closid;
+ struct rdt_resource *r = s->res;
+- unsigned long bw_val;
++ u32 bw_val;
+
+ cfg = &d->staged_config[s->conf_type];
+ if (cfg->have_new_ctrl) {
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 6e73403e874fc6..6fb8d7ba9b50aa 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -359,6 +359,7 @@ SECTIONS
+
+ RUNTIME_CONST(shift, d_hash_shift)
+ RUNTIME_CONST(ptr, dentry_hashtable)
++ RUNTIME_CONST(ptr, USER_PTR_MAX)
+
+ . = ALIGN(PAGE_SIZE);
+
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 6f704c1037e514..6b3e0c75db945c 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -63,8 +63,12 @@ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
+ u64 pdpte;
+ int ret;
+
++ /*
++ * Note, nCR3 is "assumed" to be 32-byte aligned, i.e. the CPU ignores
++ * nCR3[4:0] when loading PDPTEs from memory.
++ */
+ ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte,
+- offset_in_page(cr3) + index * 8, 8);
++ (cr3 & GENMASK(11, 5)) + index * 8, 8);
+ if (ret)
+ return 0;
+ return pdpte;
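The SVM hunk above stops using offset_in_page(cr3) because the CPU treats nCR3 as 32-byte aligned for PDPTE loads: bits 4:0 are ignored, and only bits 11:5 select the table within the page. A small model of the corrected offset computation (standalone sketch, not KVM code):

#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/*
 * PAE PDPTEs live in a 32-byte aligned table, so the low five bits of
 * nCR3 must be masked off rather than kept the way offset_in_page()
 * would keep them.
 */
static uint64_t pdpte_offset(uint64_t cr3, int index)
{
    return (cr3 & GENMASK_ULL(11, 5)) + index * 8;
}

int main(void)
{
    uint64_t cr3 = 0x12345fe7; /* deliberately misaligned low bits */

    for (int i = 0; i < 4; i++)
        printf("PDPTE%d at page offset %#llx\n", i,
               (unsigned long long)pdpte_offset(cr3, i));
    return 0;
}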
+diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
+index d066aecf8aeb22..4357ec2a0bfc2c 100644
+--- a/arch/x86/lib/getuser.S
++++ b/arch/x86/lib/getuser.S
+@@ -39,8 +39,13 @@
+
+ .macro check_range size:req
+ .if IS_ENABLED(CONFIG_X86_64)
+- mov %rax, %rdx
+- sar $63, %rdx
++ movq $0x0123456789abcdef,%rdx
++ 1:
++ .pushsection runtime_ptr_USER_PTR_MAX,"a"
++ .long 1b - 8 - .
++ .popsection
++ cmp %rax, %rdx
++ sbb %rdx, %rdx
+ or %rdx, %rax
+ .else
+ cmp $TASK_SIZE_MAX-\size+1, %eax
+diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
+index 0ce17766c0e523..9a6a943d8e410c 100644
+--- a/arch/x86/virt/svm/sev.c
++++ b/arch/x86/virt/svm/sev.c
+@@ -173,6 +173,8 @@ static void __init __snp_fixup_e820_tables(u64 pa)
+ e820__range_update(pa, PMD_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
+ e820__range_update_table(e820_table_kexec, pa, PMD_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
+ e820__range_update_table(e820_table_firmware, pa, PMD_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
++ if (!memblock_is_region_reserved(pa, PMD_SIZE))
++ memblock_reserve(pa, PMD_SIZE);
+ }
+ }
+
+diff --git a/block/elevator.c b/block/elevator.c
+index 4122026b11f1a1..640fcc891b0d2b 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -106,8 +106,7 @@ static struct elevator_type *__elevator_find(const char *name)
+ return NULL;
+ }
+
+-static struct elevator_type *elevator_find_get(struct request_queue *q,
+- const char *name)
++static struct elevator_type *elevator_find_get(const char *name)
+ {
+ struct elevator_type *e;
+
+@@ -569,7 +568,7 @@ static struct elevator_type *elevator_get_default(struct request_queue *q)
+ !blk_mq_is_shared_tags(q->tag_set->flags))
+ return NULL;
+
+- return elevator_find_get(q, "mq-deadline");
++ return elevator_find_get("mq-deadline");
+ }
+
+ /*
+@@ -697,7 +696,7 @@ static int elevator_change(struct request_queue *q, const char *elevator_name)
+ if (q->elevator && elevator_match(q->elevator->type, elevator_name))
+ return 0;
+
+- e = elevator_find_get(q, elevator_name);
++ e = elevator_find_get(elevator_name);
+ if (!e)
+ return -EINVAL;
+ ret = elevator_switch(q, e);
+@@ -709,13 +708,21 @@ int elv_iosched_load_module(struct gendisk *disk, const char *buf,
+ size_t count)
+ {
+ char elevator_name[ELV_NAME_MAX];
++ struct elevator_type *found;
++ const char *name;
+
+ if (!elv_support_iosched(disk->queue))
+ return -EOPNOTSUPP;
+
+ strscpy(elevator_name, buf, sizeof(elevator_name));
++ name = strstrip(elevator_name);
+
+- request_module("%s-iosched", strstrip(elevator_name));
++ spin_lock(&elv_list_lock);
++ found = __elevator_find(name);
++ spin_unlock(&elv_list_lock);
++
++ if (!found)
++ request_module("%s-iosched", name);
+
+ return 0;
+ }
+diff --git a/drivers/accel/qaic/qaic_control.c b/drivers/accel/qaic/qaic_control.c
+index 9e8a8cbadf6bb9..d8bdab69f80095 100644
+--- a/drivers/accel/qaic/qaic_control.c
++++ b/drivers/accel/qaic/qaic_control.c
+@@ -496,7 +496,7 @@ static int encode_addr_size_pairs(struct dma_xfer *xfer, struct wrapper_list *wr
+ nents = sgt->nents;
+ nents_dma = nents;
+ *size = QAIC_MANAGE_EXT_MSG_LENGTH - msg_hdr_len - sizeof(**out_trans);
+- for_each_sgtable_sg(sgt, sg, i) {
++ for_each_sgtable_dma_sg(sgt, sg, i) {
+ *size -= sizeof(*asp);
+ /* Save 1K for possible follow-up transactions. */
+ if (*size < SZ_1K) {
+diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
+index e86e71c1cdd868..c20eb63750f517 100644
+--- a/drivers/accel/qaic/qaic_data.c
++++ b/drivers/accel/qaic/qaic_data.c
+@@ -184,7 +184,7 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
+ nents = 0;
+
+ size = size ? size : PAGE_SIZE;
+- for (sg = sgt_in->sgl; sg; sg = sg_next(sg)) {
++ for_each_sgtable_dma_sg(sgt_in, sg, j) {
+ len = sg_dma_len(sg);
+
+ if (!len)
+@@ -221,7 +221,7 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
+
+ /* copy relevant sg node and fix page and length */
+ sgn = sgf;
+- for_each_sgtable_sg(sgt, sg, j) {
++ for_each_sgtable_dma_sg(sgt, sg, j) {
+ memcpy(sg, sgn, sizeof(*sg));
+ if (sgn == sgf) {
+ sg_dma_address(sg) += offf;
+@@ -301,7 +301,7 @@ static int encode_reqs(struct qaic_device *qdev, struct bo_slice *slice,
+ * fence.
+ */
+ dev_addr = req->dev_addr;
+- for_each_sgtable_sg(slice->sgt, sg, i) {
++ for_each_sgtable_dma_sg(slice->sgt, sg, i) {
+ slice->reqs[i].cmd = cmd;
+ slice->reqs[i].src_addr = cpu_to_le64(slice->dir == DMA_TO_DEVICE ?
+ sg_dma_address(sg) : dev_addr);
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index cc61020756beb8..1bb61425fa0b9c 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -130,6 +130,17 @@ static const struct dmi_system_id dmi_lid_quirks[] = {
+ },
+ .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN,
+ },
++ {
++ /*
++ * Samsung galaxybook2: the initial _LID device notification returns
++ * lid closed.
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "750XED"),
++ },
++ .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN,
++ },
+ {}
+ };
+
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 5b06e236aabef2..ed91dfd4fdca7e 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1916,9 +1916,15 @@ unsigned int cppc_perf_to_khz(struct cppc_perf_caps *caps, unsigned int perf)
+ u64 mul, div;
+
+ if (caps->lowest_freq && caps->nominal_freq) {
+- mul = caps->nominal_freq - caps->lowest_freq;
++ /* Avoid special case when nominal_freq is equal to lowest_freq */
++ if (caps->lowest_freq == caps->nominal_freq) {
++ mul = caps->nominal_freq;
++ div = caps->nominal_perf;
++ } else {
++ mul = caps->nominal_freq - caps->lowest_freq;
++ div = caps->nominal_perf - caps->lowest_perf;
++ }
+ mul *= KHZ_PER_MHZ;
+- div = caps->nominal_perf - caps->lowest_perf;
+ offset = caps->nominal_freq * KHZ_PER_MHZ -
+ div64_u64(caps->nominal_perf * mul, div);
+ } else {
+@@ -1939,11 +1945,17 @@ unsigned int cppc_khz_to_perf(struct cppc_perf_caps *caps, unsigned int freq)
+ {
+ s64 retval, offset = 0;
+ static u64 max_khz;
+- u64 mul, div;
++ u64 mul, div;
+
+ if (caps->lowest_freq && caps->nominal_freq) {
+- mul = caps->nominal_perf - caps->lowest_perf;
+- div = caps->nominal_freq - caps->lowest_freq;
++ /* Avoid special case when nominal_freq is equal to lowest_freq */
++ if (caps->lowest_freq == caps->nominal_freq) {
++ mul = caps->nominal_perf;
++ div = caps->nominal_freq;
++ } else {
++ mul = caps->nominal_perf - caps->lowest_perf;
++ div = caps->nominal_freq - caps->lowest_freq;
++ }
+ /*
+ * We don't need to convert to kHz for computing offset and can
+ * directly use nominal_freq and lowest_freq as the div64_u64
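The cppc_acpi.c hunks above guard the linear perf-to-frequency interpolation against a zero divisor when lowest_freq equals nominal_freq. A standalone model of the conversion with that special case (invented struct and simplified arithmetic, without the kernel's div64_u64 helpers):

#include <stdint.h>
#include <stdio.h>

struct caps { uint64_t lowest_perf, nominal_perf, lowest_freq, nominal_freq; };

/*
 * Frequency is a linear function of perf through the two points
 * (lowest_perf, lowest_freq) and (nominal_perf, nominal_freq); when
 * those points coincide, fall back to the simple ratio so the slope's
 * divisor cannot be zero.
 */
static uint64_t perf_to_khz(const struct caps *c, uint64_t perf)
{
    uint64_t mul, div;
    int64_t offset = 0;

    if (c->lowest_freq == c->nominal_freq) {
        mul = c->nominal_freq;
        div = c->nominal_perf;
    } else {
        mul = c->nominal_freq - c->lowest_freq;
        div = c->nominal_perf - c->lowest_perf;
        offset = (int64_t)(c->nominal_freq * 1000) -
                 (int64_t)(c->nominal_perf * mul * 1000 / div);
    }
    return offset + perf * mul * 1000 / div;
}

int main(void)
{
    struct caps c = { .lowest_perf = 10, .nominal_perf = 100,
                      .lowest_freq = 400, .nominal_freq = 2000 }; /* MHz */

    printf("perf 100 -> %llu kHz\n", (unsigned long long)perf_to_khz(&c, 100));
    printf("perf  10 -> %llu kHz\n", (unsigned long long)perf_to_khz(&c, 10));
    return 0;
}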
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index c78453c74ef5aa..6ea80369e213b6 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -52,7 +52,7 @@ struct prm_context_buffer {
+ static LIST_HEAD(prm_module_list);
+
+ struct prm_handler_info {
+- guid_t guid;
++ efi_guid_t guid;
+ efi_status_t (__efiapi *handler_addr)(u64, void *);
+ u64 static_data_buffer_addr;
+ u64 acpi_param_buffer_addr;
+@@ -72,17 +72,21 @@ struct prm_module_info {
+ struct prm_handler_info handlers[] __counted_by(handler_count);
+ };
+
+-static u64 efi_pa_va_lookup(u64 pa)
++static u64 efi_pa_va_lookup(efi_guid_t *guid, u64 pa)
+ {
+ efi_memory_desc_t *md;
+ u64 pa_offset = pa & ~PAGE_MASK;
+ u64 page = pa & PAGE_MASK;
+
+ for_each_efi_memory_desc(md) {
+- if (md->phys_addr < pa && pa < md->phys_addr + PAGE_SIZE * md->num_pages)
++ if ((md->attribute & EFI_MEMORY_RUNTIME) &&
++ (md->phys_addr < pa && pa < md->phys_addr + PAGE_SIZE * md->num_pages)) {
+ return pa_offset + md->virt_addr + page - md->phys_addr;
++ }
+ }
+
++ pr_warn("Failed to find VA for GUID: %pUL, PA: 0x%llx", guid, pa);
++
+ return 0;
+ }
+
+@@ -148,9 +152,15 @@ acpi_parse_prmt(union acpi_subtable_headers *header, const unsigned long end)
+ th = &tm->handlers[cur_handler];
+
+ guid_copy(&th->guid, (guid_t *)handler_info->handler_guid);
+- th->handler_addr = (void *)efi_pa_va_lookup(handler_info->handler_address);
+- th->static_data_buffer_addr = efi_pa_va_lookup(handler_info->static_data_buffer_address);
+- th->acpi_param_buffer_addr = efi_pa_va_lookup(handler_info->acpi_param_buffer_address);
++ th->handler_addr =
++ (void *)efi_pa_va_lookup(&th->guid, handler_info->handler_address);
++
++ th->static_data_buffer_addr =
++ efi_pa_va_lookup(&th->guid, handler_info->static_data_buffer_address);
++
++ th->acpi_param_buffer_addr =
++ efi_pa_va_lookup(&th->guid, handler_info->acpi_param_buffer_address);
++
+ } while (++cur_handler < tm->handler_count && (handler_info = get_next_handler(handler_info)));
+
+ return 0;
+@@ -253,6 +263,13 @@ static acpi_status acpi_platformrt_space_handler(u32 function,
+ if (!handler || !module)
+ goto invalid_guid;
+
++ if (!handler->handler_addr ||
++ !handler->static_data_buffer_addr ||
++ !handler->acpi_param_buffer_addr) {
++ buffer->prm_status = PRM_HANDLER_ERROR;
++ return AE_OK;
++ }
++
+ ACPI_COPY_NAMESEG(context.signature, "PRMC");
+ context.revision = 0x0;
+ context.reserved = 0x0;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 0eb52e37246773..a904bf9a7f7db8 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -538,6 +538,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "17U70P"),
+ },
+ },
++ {
++ /* LG Electronics 16T90SP */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
++ DMI_MATCH(DMI_BOARD_NAME, "16T90SP"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 002e1e9501d1a4..b05cc0e46542a7 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -648,6 +648,7 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
+ /* the scmd has an associated qc */
+ if (!(qc->flags & ATA_QCFLAG_EH)) {
+ /* which hasn't failed yet, timeout */
++ set_host_byte(scmd, DID_TIME_OUT);
+ qc->err_mask |= AC_ERR_TIMEOUT;
+ qc->flags |= ATA_QCFLAG_EH;
+ nr_timedout++;
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index 9b0f37d4b9d49a..6a99a459b80b2d 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -2313,7 +2313,7 @@ static int cdrom_ioctl_media_changed(struct cdrom_device_info *cdi,
+ return -EINVAL;
+
+ /* Prevent arg from speculatively bypassing the length check */
+- barrier_nospec();
++ arg = array_index_nospec(arg, cdi->capacity);
+
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
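The cdrom hunk above swaps a speculation barrier for array_index_nospec(), which clamps the index through a data dependency so that even a mispredicted bounds check cannot feed an out-of-range value into a dependent load. A userspace model of the clamping idea (simplified; the kernel's macro computes the mask differently):

#include <stddef.h>
#include <stdio.h>

/* Returns index when index < size, and 0 otherwise, branchlessly. */
static size_t index_nospec(size_t index, size_t size)
{
    /* all-ones when in range, zero otherwise */
    size_t mask = 0 - (size_t)(index < size);
    return index & mask;
}

int main(void)
{
    int capacity[4] = { 10, 20, 30, 40 };
    size_t idx = 2;

    if (idx < 4)
        printf("slot %zu -> %d\n", idx, capacity[index_nospec(idx, 4)]);
    return 0;
}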
+diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
+index 2fa7253c73b2cd..88629a9abc9c90 100644
+--- a/drivers/clk/rockchip/clk.c
++++ b/drivers/clk/rockchip/clk.c
+@@ -439,7 +439,7 @@ unsigned long rockchip_clk_find_max_clk_id(struct rockchip_clk_branch *list,
+ if (list->id > max)
+ max = list->id;
+ if (list->child && list->child->id > max)
+- max = list->id;
++ max = list->child->id;
+ }
+
+ return max;
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 589fde37ccd7ab..929b9097a6c17c 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1281,11 +1281,21 @@ static int amd_pstate_register_driver(int mode)
+ return -EINVAL;
+
+ cppc_state = mode;
++
++ ret = amd_pstate_enable(true);
++ if (ret) {
++ pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n",
++ ret);
++ amd_pstate_driver_cleanup();
++ return ret;
++ }
++
+ ret = cpufreq_register_driver(current_pstate_driver);
+ if (ret) {
+ amd_pstate_driver_cleanup();
+ return ret;
+ }
++
+ return 0;
+ }
+
+diff --git a/drivers/firewire/core-topology.c b/drivers/firewire/core-topology.c
+index b4e637aa693214..7e98562d2eb824 100644
+--- a/drivers/firewire/core-topology.c
++++ b/drivers/firewire/core-topology.c
+@@ -204,7 +204,7 @@ static struct fw_node *build_tree(struct fw_card *card, const u32 *sid, int self
+ // the node->ports array where the parent node should be. Later,
+ // when we handle the parent node, we fix up the reference.
+ ++parent_count;
+- node->color = i;
++ node->color = port_index;
+ break;
+
+ case PHY_PACKET_SELF_ID_PORT_STATUS_CHILD:
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 6b6957f4743fec..dc09f2d755f414 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -2902,10 +2902,8 @@ static struct scmi_debug_info *scmi_debugfs_common_setup(struct scmi_info *info)
+ dbg->top_dentry = top_dentry;
+
+ if (devm_add_action_or_reset(info->dev,
+- scmi_debugfs_common_cleanup, dbg)) {
+- scmi_debugfs_common_cleanup(dbg);
++ scmi_debugfs_common_cleanup, dbg))
+ return NULL;
+- }
+
+ return dbg;
+ }
+diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
+index 0219a12e3209a7..06087cb785f364 100644
+--- a/drivers/firmware/arm_scmi/mailbox.c
++++ b/drivers/firmware/arm_scmi/mailbox.c
+@@ -24,6 +24,7 @@
+ * @chan_platform_receiver: Optional Platform Receiver mailbox unidirectional channel
+ * @cinfo: SCMI channel info
+ * @shmem: Transmit/Receive shared memory area
++ * @chan_lock: Lock that prevents multiple xfers from being queued
+ */
+ struct scmi_mailbox {
+ struct mbox_client cl;
+@@ -32,6 +33,7 @@ struct scmi_mailbox {
+ struct mbox_chan *chan_platform_receiver;
+ struct scmi_chan_info *cinfo;
+ struct scmi_shared_mem __iomem *shmem;
++ struct mutex chan_lock;
+ };
+
+ #define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl)
+@@ -255,6 +257,7 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
+
+ cinfo->transport_info = smbox;
+ smbox->cinfo = cinfo;
++ mutex_init(&smbox->chan_lock);
+
+ return 0;
+ }
+@@ -284,13 +287,23 @@ static int mailbox_send_message(struct scmi_chan_info *cinfo,
+ struct scmi_mailbox *smbox = cinfo->transport_info;
+ int ret;
+
+- ret = mbox_send_message(smbox->chan, xfer);
++ /*
++ * The mailbox layer has its own queue. However the mailbox queue
++ * confuses the per message SCMI timeouts since the clock starts when
++ * the message is submitted into the mailbox queue. So when multiple
++ * messages are queued up the clock starts on all messages instead of
++ * only the one inflight.
++ */
++ mutex_lock(&smbox->chan_lock);
+
+- /* mbox_send_message returns non-negative value on success, so reset */
+- if (ret > 0)
+- ret = 0;
++ ret = mbox_send_message(smbox->chan, xfer);
++ /* mbox_send_message returns non-negative value on success */
++ if (ret < 0) {
++ mutex_unlock(&smbox->chan_lock);
++ return ret;
++ }
+
+- return ret;
++ return 0;
+ }
+
+ static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
+@@ -298,13 +311,10 @@ static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
+ {
+ struct scmi_mailbox *smbox = cinfo->transport_info;
+
+- /*
+- * NOTE: we might prefer not to need the mailbox ticker to manage the
+- * transfer queueing since the protocol layer queues things by itself.
+- * Unfortunately, we have to kick the mailbox framework after we have
+- * received our message.
+- */
+ mbox_client_txdone(smbox->chan, ret);
++
++ /* Release channel */
++ mutex_unlock(&smbox->chan_lock);
+ }
+
+ static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
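The mailbox hunks above serialize xfers with a per-channel mutex held from send until txdone, so the SCMI per-message timeout only measures the single in-flight message rather than time spent sitting in the mailbox queue. A toy pthread model of that send/txdone pairing (not the SCMI code):

#include <pthread.h>
#include <stdio.h>

/*
 * Hold a per-channel lock from send until txdone: the next caller
 * blocks in send_message() until the previous transfer completes, so
 * its timeout clock never starts while it is still queued.
 */
struct chan {
    pthread_mutex_t lock;
};

static void send_message(struct chan *c, int msg)
{
    pthread_mutex_lock(&c->lock);   /* blocks while another xfer is in flight */
    printf("msg %d submitted; timeout clock starts now\n", msg);
}

static void mark_txdone(struct chan *c, int msg)
{
    printf("msg %d completed\n", msg);
    pthread_mutex_unlock(&c->lock); /* release the channel for the next xfer */
}

int main(void)
{
    struct chan c = { .lock = PTHREAD_MUTEX_INITIALIZER };

    for (int i = 0; i < 3; i++) {
        send_message(&c, i);
        mark_txdone(&c, i);
    }
    return 0;
}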
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index f85ace0384d218..1f5a296f5ed2f4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -147,6 +147,7 @@ static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif,
+ struct acpi_buffer *params)
+ {
+ acpi_status status;
++ union acpi_object *obj;
+ union acpi_object atif_arg_elements[2];
+ struct acpi_object_list atif_arg;
+ struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+@@ -169,16 +170,24 @@ static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif,
+
+ status = acpi_evaluate_object(atif->handle, NULL, &atif_arg,
+ &buffer);
++ obj = (union acpi_object *)buffer.pointer;
+
+- /* Fail only if calling the method fails and ATIF is supported */
++ /* Fail if calling the method fails and ATIF is supported */
+ if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
+ DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n",
+ acpi_format_exception(status));
+- kfree(buffer.pointer);
++ kfree(obj);
+ return NULL;
+ }
+
+- return buffer.pointer;
++ if (obj->type != ACPI_TYPE_BUFFER) {
++ DRM_DEBUG_DRIVER("bad object returned from ATIF: %d\n",
++ obj->type);
++ kfree(obj);
++ return NULL;
++ }
++
++ return obj;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 1cb1ec7beefed3..2304a13fcb0483 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -1124,8 +1124,10 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
+
+ r = amdgpu_ring_init(adev, ring, 1024, NULL, 0,
+ AMDGPU_RING_PRIO_DEFAULT, NULL);
+- if (r)
++ if (r) {
++ amdgpu_mes_unlock(&adev->mes);
+ goto clean_up_memory;
++ }
+
+ amdgpu_mes_ring_to_queue_props(adev, ring, &qprops);
+
+@@ -1158,7 +1160,6 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
+ amdgpu_ring_fini(ring);
+ clean_up_memory:
+ kfree(ring);
+- amdgpu_mes_unlock(&adev->mes);
+ return r;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 65abbfb6972ed7..28db4c1a0a2ad8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -44,6 +44,7 @@
+
+ #include "dm_helpers.h"
+ #include "ddc_service_types.h"
++#include "clk_mgr.h"
+
+ static u32 edid_extract_panel_id(struct edid *edid)
+ {
+@@ -1121,6 +1122,8 @@ bool dm_helpers_dp_handle_test_pattern_request(
+ struct pipe_ctx *pipe_ctx = NULL;
+ struct amdgpu_dm_connector *aconnector = link->priv;
+ struct drm_device *dev = aconnector->base.dev;
++ struct dc_state *dc_state = ctx->dc->current_state;
++ struct clk_mgr *clk_mgr = ctx->dc->clk_mgr;
+ int i;
+
+ for (i = 0; i < MAX_PIPES; i++) {
+@@ -1221,6 +1224,16 @@ bool dm_helpers_dp_handle_test_pattern_request(
+ pipe_ctx->stream->test_pattern.type = test_pattern;
+ pipe_ctx->stream->test_pattern.color_space = test_pattern_color_space;
+
++ /* Temp W/A for compliance test failure */
++ dc_state->bw_ctx.bw.dcn.clk.p_state_change_support = false;
++ dc_state->bw_ctx.bw.dcn.clk.dramclk_khz = clk_mgr->dc_mode_softmax_enabled ?
++ clk_mgr->bw_params->dc_mode_softmax_memclk : clk_mgr->bw_params->max_memclk_mhz;
++ dc_state->bw_ctx.bw.dcn.clk.idle_dramclk_khz = dc_state->bw_ctx.bw.dcn.clk.dramclk_khz;
++ ctx->dc->clk_mgr->funcs->update_clocks(
++ ctx->dc->clk_mgr,
++ dc_state,
++ false);
++
+ dc_link_dp_set_test_pattern(
+ (struct dc_link *) link,
+ test_pattern,
+diff --git a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+index 3cd52e7a9c77c2..95838c7ab05431 100644
+--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
++++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+@@ -841,6 +841,8 @@ bool is_psr_su_specific_panel(struct dc_link *link)
+ isPSRSUSupported = false;
+ else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
+ isPSRSUSupported = false;
++ else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x01)
++ isPSRSUSupported = false;
+ else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
+ isPSRSUSupported = true;
+ }
+diff --git a/drivers/gpu/drm/bridge/aux-bridge.c b/drivers/gpu/drm/bridge/aux-bridge.c
+index b29980f95379ec..295e9d031e2dc8 100644
+--- a/drivers/gpu/drm/bridge/aux-bridge.c
++++ b/drivers/gpu/drm/bridge/aux-bridge.c
+@@ -58,9 +58,10 @@ int drm_aux_bridge_register(struct device *parent)
+ adev->id = ret;
+ adev->name = "aux_bridge";
+ adev->dev.parent = parent;
+- adev->dev.of_node = of_node_get(parent->of_node);
+ adev->dev.release = drm_aux_bridge_release;
+
++ device_set_of_node_from_dev(&adev->dev, parent);
++
+ ret = auxiliary_device_init(adev);
+ if (ret) {
+ ida_free(&drm_aux_bridge_ida, adev->id);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index bcaec86ac67a5c..89b379060596d4 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -101,9 +101,10 @@ static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
+ }
+
+ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+- struct msm_ringbuffer *ring, struct msm_file_private *ctx)
++ struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
+ {
+ bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
++ struct msm_file_private *ctx = submit->queue->ctx;
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ phys_addr_t ttbr;
+ u32 asid;
+@@ -115,6 +116,15 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+ if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+ return;
+
++ if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
++ /* Wait for previous submit to complete before continuing: */
++ OUT_PKT7(ring, CP_WAIT_TIMESTAMP, 4);
++ OUT_RING(ring, 0);
++ OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence)));
++ OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence)));
++ OUT_RING(ring, submit->seqno - 1);
++ }
++
+ if (!sysprof) {
+ if (!adreno_is_a7xx(adreno_gpu)) {
+ /* Turn off protected mode to write to special registers */
+@@ -193,7 +203,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ struct msm_ringbuffer *ring = submit->ring;
+ unsigned int i, ibs = 0;
+
+- a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
++ a6xx_set_pagetable(a6xx_gpu, ring, submit);
+
+ get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0),
+ rbmemptr_stats(ring, index, cpcycles_start));
+@@ -283,7 +293,7 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
+ OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
+
+- a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
++ a6xx_set_pagetable(a6xx_gpu, ring, submit);
+
+ get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
+ rbmemptr_stats(ring, index, cpcycles_start));
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 4c1be2f0555f7e..db6c57900781d9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -711,12 +711,13 @@ void dpu_crtc_complete_commit(struct drm_crtc *crtc)
+ _dpu_crtc_complete_flip(crtc);
+ }
+
+-static void _dpu_crtc_setup_lm_bounds(struct drm_crtc *crtc,
++static int _dpu_crtc_check_and_setup_lm_bounds(struct drm_crtc *crtc,
+ struct drm_crtc_state *state)
+ {
+ struct dpu_crtc_state *cstate = to_dpu_crtc_state(state);
+ struct drm_display_mode *adj_mode = &state->adjusted_mode;
+ u32 crtc_split_width = adj_mode->hdisplay / cstate->num_mixers;
++ struct dpu_kms *dpu_kms = _dpu_crtc_get_kms(crtc);
+ int i;
+
+ for (i = 0; i < cstate->num_mixers; i++) {
+@@ -727,7 +728,12 @@ static void _dpu_crtc_setup_lm_bounds(struct drm_crtc *crtc,
+ r->y2 = adj_mode->vdisplay;
+
+ trace_dpu_crtc_setup_lm_bounds(DRMID(crtc), i, r);
++
++ if (drm_rect_width(r) > dpu_kms->catalog->caps->max_mixer_width)
++ return -E2BIG;
+ }
++
++ return 0;
+ }
+
+ static void _dpu_crtc_get_pcc_coeff(struct drm_crtc_state *state,
+@@ -803,7 +809,7 @@ static void dpu_crtc_atomic_begin(struct drm_crtc *crtc,
+
+ DRM_DEBUG_ATOMIC("crtc%d\n", crtc->base.id);
+
+- _dpu_crtc_setup_lm_bounds(crtc, crtc->state);
++ _dpu_crtc_check_and_setup_lm_bounds(crtc, crtc->state);
+
+ /* encoder will trigger pending mask now */
+ drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask)
+@@ -1091,9 +1097,6 @@ static void dpu_crtc_disable(struct drm_crtc *crtc,
+
+ dpu_core_perf_crtc_update(crtc, 0);
+
+- memset(cstate->mixers, 0, sizeof(cstate->mixers));
+- cstate->num_mixers = 0;
+-
+ /* disable clk & bw control until clk & bw properties are set */
+ cstate->bw_control = false;
+ cstate->bw_split_vote = false;
+@@ -1192,8 +1195,11 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+ if (crtc_state->active_changed)
+ crtc_state->mode_changed = true;
+
+- if (cstate->num_mixers)
+- _dpu_crtc_setup_lm_bounds(crtc, crtc_state);
++ if (cstate->num_mixers) {
++ rc = _dpu_crtc_check_and_setup_lm_bounds(crtc, crtc_state);
++ if (rc)
++ return rc;
++ }
+
+ /* FIXME: move this to dpu_plane_atomic_check? */
+ drm_atomic_crtc_state_for_each_plane_state(plane, pstate, crtc_state) {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 3b171bf227d16f..bd3698bf0cf740 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -624,6 +624,40 @@ static struct msm_display_topology dpu_encoder_get_topology(
+ return topology;
+ }
+
++static void dpu_encoder_assign_crtc_resources(struct dpu_kms *dpu_kms,
++ struct drm_encoder *drm_enc,
++ struct dpu_global_state *global_state,
++ struct drm_crtc_state *crtc_state)
++{
++ struct dpu_crtc_state *cstate;
++ struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC];
++ struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC];
++ struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_ENC];
++ int num_lm, num_ctl, num_dspp, i;
++
++ cstate = to_dpu_crtc_state(crtc_state);
++
++ memset(cstate->mixers, 0, sizeof(cstate->mixers));
++
++ num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
++ drm_enc->base.id, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl));
++ num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
++ drm_enc->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm));
++ num_dspp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
++ drm_enc->base.id, DPU_HW_BLK_DSPP, hw_dspp,
++ ARRAY_SIZE(hw_dspp));
++
++ for (i = 0; i < num_lm; i++) {
++ int ctl_idx = (i < num_ctl) ? i : (num_ctl-1);
++
++ cstate->mixers[i].hw_lm = to_dpu_hw_mixer(hw_lm[i]);
++ cstate->mixers[i].lm_ctl = to_dpu_hw_ctl(hw_ctl[ctl_idx]);
++ cstate->mixers[i].hw_dspp = i < num_dspp ? to_dpu_hw_dspp(hw_dspp[i]) : NULL;
++ }
++
++ cstate->num_mixers = num_lm;
++}
++
+ static int dpu_encoder_virt_atomic_check(
+ struct drm_encoder *drm_enc,
+ struct drm_crtc_state *crtc_state,
+@@ -692,6 +726,9 @@ static int dpu_encoder_virt_atomic_check(
+ if (!crtc_state->active_changed || crtc_state->enable)
+ ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
+ drm_enc, crtc_state, topology);
++ if (!ret)
++ dpu_encoder_assign_crtc_resources(dpu_kms, drm_enc,
++ global_state, crtc_state);
+ }
+
+ trace_dpu_enc_atomic_check_flags(DRMID(drm_enc), adj_mode->flags);
+@@ -1093,14 +1130,11 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
+ struct dpu_encoder_virt *dpu_enc;
+ struct msm_drm_private *priv;
+ struct dpu_kms *dpu_kms;
+- struct dpu_crtc_state *cstate;
+ struct dpu_global_state *global_state;
+ struct dpu_hw_blk *hw_pp[MAX_CHANNELS_PER_ENC];
+ struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC];
+- struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC];
+- struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_ENC] = { NULL };
+ struct dpu_hw_blk *hw_dsc[MAX_CHANNELS_PER_ENC];
+- int num_lm, num_ctl, num_pp, num_dsc;
++ int num_ctl, num_pp, num_dsc;
+ unsigned int dsc_mask = 0;
+ int i;
+
+@@ -1129,11 +1163,6 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
+ ARRAY_SIZE(hw_pp));
+ num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
+ drm_enc->base.id, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl));
+- num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
+- drm_enc->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm));
+- dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
+- drm_enc->base.id, DPU_HW_BLK_DSPP, hw_dspp,
+- ARRAY_SIZE(hw_dspp));
+
+ for (i = 0; i < MAX_CHANNELS_PER_ENC; i++)
+ dpu_enc->hw_pp[i] = i < num_pp ? to_dpu_hw_pingpong(hw_pp[i])
+@@ -1159,36 +1188,23 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
+ dpu_enc->cur_master->hw_cdm = hw_cdm ? to_dpu_hw_cdm(hw_cdm) : NULL;
+ }
+
+- cstate = to_dpu_crtc_state(crtc_state);
+-
+- for (i = 0; i < num_lm; i++) {
+- int ctl_idx = (i < num_ctl) ? i : (num_ctl-1);
+-
+- cstate->mixers[i].hw_lm = to_dpu_hw_mixer(hw_lm[i]);
+- cstate->mixers[i].lm_ctl = to_dpu_hw_ctl(hw_ctl[ctl_idx]);
+- cstate->mixers[i].hw_dspp = to_dpu_hw_dspp(hw_dspp[i]);
+- }
+-
+- cstate->num_mixers = num_lm;
+-
+ for (i = 0; i < dpu_enc->num_phys_encs; i++) {
+ struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
+
+- if (!dpu_enc->hw_pp[i]) {
++ phys->hw_pp = dpu_enc->hw_pp[i];
++ if (!phys->hw_pp) {
+ DPU_ERROR_ENC(dpu_enc,
+ "no pp block assigned at idx: %d\n", i);
+ return;
+ }
+
+- if (!hw_ctl[i]) {
++ phys->hw_ctl = i < num_ctl ? to_dpu_hw_ctl(hw_ctl[i]) : NULL;
++ if (!phys->hw_ctl) {
+ DPU_ERROR_ENC(dpu_enc,
+ "no ctl block assigned at idx: %d\n", i);
+ return;
+ }
+
+- phys->hw_pp = dpu_enc->hw_pp[i];
+- phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]);
+-
+ phys->cached_mode = crtc_state->adjusted_mode;
+ if (phys->ops.atomic_mode_set)
+ phys->ops.atomic_mode_set(phys, crtc_state, conn_state);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+index ba8878d21cf0e1..d8a2edebfe8c3c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+@@ -302,7 +302,7 @@ static void dpu_encoder_phys_vid_setup_timing_engine(
+ intf_cfg.stream_sel = 0; /* Don't care value for video mode */
+ intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc);
+ intf_cfg.dsc = dpu_encoder_helper_get_dsc(phys_enc);
+- if (phys_enc->hw_pp->merge_3d)
++ if (intf_cfg.mode_3d && phys_enc->hw_pp->merge_3d)
+ intf_cfg.merge_3d = phys_enc->hw_pp->merge_3d->idx;
+
+ spin_lock_irqsave(phys_enc->enc_spinlock, lock_flags);
+@@ -440,10 +440,12 @@ static void dpu_encoder_phys_vid_enable(struct dpu_encoder_phys *phys_enc)
+ struct dpu_hw_ctl *ctl;
+ const struct msm_format *fmt;
+ u32 fmt_fourcc;
++ u32 mode_3d;
+
+ ctl = phys_enc->hw_ctl;
+ fmt_fourcc = dpu_encoder_get_drm_fmt(phys_enc);
+ fmt = mdp_get_format(&phys_enc->dpu_kms->base, fmt_fourcc, 0);
++ mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc);
+
+ DPU_DEBUG_VIDENC(phys_enc, "\n");
+
+@@ -466,7 +468,8 @@ static void dpu_encoder_phys_vid_enable(struct dpu_encoder_phys *phys_enc)
+ goto skip_flush;
+
+ ctl->ops.update_pending_flush_intf(ctl, phys_enc->hw_intf->idx);
+- if (ctl->ops.update_pending_flush_merge_3d && phys_enc->hw_pp->merge_3d)
++ if (mode_3d && ctl->ops.update_pending_flush_merge_3d &&
++ phys_enc->hw_pp->merge_3d)
+ ctl->ops.update_pending_flush_merge_3d(ctl, phys_enc->hw_pp->merge_3d->idx);
+
+ if (ctl->ops.update_pending_flush_cdm && phys_enc->hw_cdm)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+index 882c717859cec6..07035ab77b792e 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+@@ -275,6 +275,7 @@ static void _dpu_encoder_phys_wb_update_flush(struct dpu_encoder_phys *phys_enc)
+ struct dpu_hw_pingpong *hw_pp;
+ struct dpu_hw_cdm *hw_cdm;
+ u32 pending_flush = 0;
++ u32 mode_3d;
+
+ if (!phys_enc)
+ return;
+@@ -283,6 +284,7 @@ static void _dpu_encoder_phys_wb_update_flush(struct dpu_encoder_phys *phys_enc)
+ hw_pp = phys_enc->hw_pp;
+ hw_ctl = phys_enc->hw_ctl;
+ hw_cdm = phys_enc->hw_cdm;
++ mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc);
+
+ DPU_DEBUG("[wb:%d]\n", hw_wb->idx - WB_0);
+
+@@ -294,7 +296,8 @@ static void _dpu_encoder_phys_wb_update_flush(struct dpu_encoder_phys *phys_enc)
+ if (hw_ctl->ops.update_pending_flush_wb)
+ hw_ctl->ops.update_pending_flush_wb(hw_ctl, hw_wb->idx);
+
+- if (hw_ctl->ops.update_pending_flush_merge_3d && hw_pp && hw_pp->merge_3d)
++ if (mode_3d && hw_ctl->ops.update_pending_flush_merge_3d &&
++ hw_pp && hw_pp->merge_3d)
+ hw_ctl->ops.update_pending_flush_merge_3d(hw_ctl,
+ hw_pp->merge_3d->idx);
+
+diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+index add72bbc28b176..4d55e3cf570f0b 100644
+--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
++++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+@@ -26,7 +26,7 @@ static void msm_disp_state_dump_regs(u32 **reg, u32 aligned_len, void __iomem *b
+ end_addr = base_addr + aligned_len;
+
+ if (!(*reg))
+- *reg = kzalloc(len_padded, GFP_KERNEL);
++ *reg = kvzalloc(len_padded, GFP_KERNEL);
+
+ if (*reg)
+ dump_addr = *reg;
+@@ -48,20 +48,21 @@ static void msm_disp_state_dump_regs(u32 **reg, u32 aligned_len, void __iomem *b
+ }
+ }
+
+-static void msm_disp_state_print_regs(u32 **reg, u32 len, void __iomem *base_addr,
+- struct drm_printer *p)
++static void msm_disp_state_print_regs(const u32 *dump_addr, u32 len,
++ void __iomem *base_addr, struct drm_printer *p)
+ {
+ int i;
+- u32 *dump_addr = NULL;
+ void __iomem *addr;
+ u32 num_rows;
+
++ if (!dump_addr) {
++ drm_printf(p, "Registers not stored\n");
++ return;
++ }
++
+ addr = base_addr;
+ num_rows = len / REG_DUMP_ALIGN;
+
+- if (*reg)
+- dump_addr = *reg;
+-
+ for (i = 0; i < num_rows; i++) {
+ drm_printf(p, "0x%lx : %08x %08x %08x %08x\n",
+ (unsigned long)(addr - base_addr),
+@@ -89,7 +90,7 @@ void msm_disp_state_print(struct msm_disp_state *state, struct drm_printer *p)
+
+ list_for_each_entry_safe(block, tmp, &state->blocks, node) {
+ drm_printf(p, "====================%s================\n", block->name);
+- msm_disp_state_print_regs(&block->state, block->size, block->base_addr, p);
++ msm_disp_state_print_regs(block->state, block->size, block->base_addr, p);
+ }
+
+ drm_printf(p, "===================dpu drm state================\n");
+@@ -161,7 +162,7 @@ void msm_disp_state_free(void *data)
+
+ list_for_each_entry_safe(block, tmp, &disp_state->blocks, node) {
+ list_del(&block->node);
+- kfree(block->state);
++ kvfree(block->state);
+ kfree(block);
+ }
+
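
The snapshot hunks above pair kvzalloc() with kvfree(): register dumps can be large, and kvzalloc() falls back to vmalloc() when a contiguous kmalloc() allocation is unavailable. A minimal sketch of the pairing rule (helper names are illustrative, not from the driver):

	#include <linux/slab.h>	/* kvzalloc(), kvfree() */

	/*
	 * kvzalloc() tries kmalloc() first and falls back to vzalloc() for
	 * larger sizes, so the returned buffer may not be physically
	 * contiguous.  It must be released with kvfree(), which dispatches
	 * to the right deallocator; a plain kfree() of a vmalloc'ed pointer
	 * is a bug -- which is why the kfree() above becomes kvfree().
	 */
	static u32 *snapshot_buf_alloc(size_t len)
	{
		return kvzalloc(len, GFP_KERNEL);
	}

	static void snapshot_buf_free(u32 *buf)
	{
		kvfree(buf);
	}
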
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 185d7de0bf3766..a98d24b7cb00b4 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -542,7 +542,7 @@ static unsigned long dsi_adjust_pclk_for_compression(const struct drm_display_mo
+
+ int new_htotal = mode->htotal - mode->hdisplay + new_hdisplay;
+
+- return new_htotal * mode->vtotal * drm_mode_vrefresh(mode);
++ return mult_frac(mode->clock * 1000u, new_htotal, mode->htotal);
+ }
+
+ static unsigned long dsi_get_pclk_rate(const struct drm_display_mode *mode,
+@@ -550,7 +550,7 @@ static unsigned long dsi_get_pclk_rate(const struct drm_display_mode *mode,
+ {
+ unsigned long pclk_rate;
+
+- pclk_rate = mode->clock * 1000;
++ pclk_rate = mode->clock * 1000u;
+
+ if (dsc)
+ pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc);
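
The dsi_host.c hunk stops re-deriving the pixel clock from the (rounded) refresh rate and instead scales the real pixel clock by the compressed-to-native htotal ratio via mult_frac(). mult_frac() is the kernel's overflow-safe scaling macro; roughly what it expands to (see include/linux/math.h):

	/* Compute x * n / d without letting x * n overflow: split x into
	 * quotient and remainder by d first.
	 */
	#define mult_frac(x, n, d)				\
	({							\
		typeof(x) quot = (x) / (d);			\
		typeof(x) rem  = (x) % (d);			\
		(quot * (n)) + ((rem * (n)) / (d));		\
	})

For example, a 148500 kHz mode (148500000 Hz) with htotal 2200 compressed to new_htotal 1320 gives 148500000 * 1320 / 2200 = 89100000 Hz, with no oversized intermediate product.
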
+diff --git a/drivers/gpu/drm/panel/panel-himax-hx83102.c b/drivers/gpu/drm/panel/panel-himax-hx83102.c
+index 6e4b7e4644ce06..8b48bba181316c 100644
+--- a/drivers/gpu/drm/panel/panel-himax-hx83102.c
++++ b/drivers/gpu/drm/panel/panel-himax-hx83102.c
+@@ -298,7 +298,7 @@ static int ivo_t109nw41_init(struct hx83102 *ctx)
+ msleep(60);
+
+ hx83102_enable_extended_cmds(&dsi_ctx, true);
+- mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETPOWER, 0x2c, 0xed, 0xed, 0x0f, 0xcf, 0x42,
++ mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETPOWER, 0x2c, 0xed, 0xed, 0x27, 0xe7, 0x52,
+ 0xf5, 0x39, 0x36, 0x36, 0x36, 0x36, 0x32, 0x8b, 0x11, 0x65, 0x00, 0x88,
+ 0xfa, 0xff, 0xff, 0x8f, 0xff, 0x08, 0xd6, 0x33);
+ mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETDISP, 0x00, 0x47, 0xb0, 0x80, 0x00, 0x12,
+@@ -343,11 +343,11 @@ static int ivo_t109nw41_init(struct hx83102 *ctx)
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xa0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);
+- mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETGMA, 0x04, 0x04, 0x06, 0x0a, 0x0a, 0x05,
+- 0x12, 0x14, 0x17, 0x13, 0x2c, 0x33, 0x39, 0x4b, 0x4c, 0x56, 0x61, 0x78,
+- 0x7a, 0x41, 0x50, 0x68, 0x73, 0x04, 0x04, 0x06, 0x0a, 0x0a, 0x05, 0x12,
+- 0x14, 0x17, 0x13, 0x2c, 0x33, 0x39, 0x4b, 0x4c, 0x56, 0x61, 0x78, 0x7a,
+- 0x41, 0x50, 0x68, 0x73);
++ mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETGMA, 0x00, 0x07, 0x10, 0x17, 0x1c, 0x33,
++ 0x48, 0x50, 0x57, 0x50, 0x68, 0x6e, 0x71, 0x7f, 0x81, 0x8a, 0x8e, 0x9b,
++ 0x9c, 0x4d, 0x56, 0x5d, 0x73, 0x00, 0x07, 0x10, 0x17, 0x1c, 0x33, 0x48,
++ 0x50, 0x57, 0x50, 0x68, 0x6e, 0x71, 0x7f, 0x81, 0x8a, 0x8e, 0x9b, 0x9c,
++ 0x4d, 0x56, 0x5d, 0x73);
+ mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETTP1, 0x07, 0x10, 0x10, 0x1a, 0x26, 0x9e,
+ 0x00, 0x4f, 0xa0, 0x14, 0x14, 0x00, 0x00, 0x00, 0x00, 0x12, 0x0a, 0x02,
+ 0x02, 0x00, 0x33, 0x02, 0x04, 0x18, 0x01);
+diff --git a/drivers/gpu/drm/vboxvideo/hgsmi_base.c b/drivers/gpu/drm/vboxvideo/hgsmi_base.c
+index 8c041d7ce4f1bd..87dccaecc3e57d 100644
+--- a/drivers/gpu/drm/vboxvideo/hgsmi_base.c
++++ b/drivers/gpu/drm/vboxvideo/hgsmi_base.c
+@@ -139,7 +139,15 @@ int hgsmi_update_pointer_shape(struct gen_pool *ctx, u32 flags,
+ flags |= VBOX_MOUSE_POINTER_VISIBLE;
+ }
+
+- p = hgsmi_buffer_alloc(ctx, sizeof(*p) + pixel_len, HGSMI_CH_VBVA,
++ /*
++ * The 4 extra bytes come from switching struct vbva_mouse_pointer_shape
++ * from having a 4 bytes fixed array at the end to using a proper VLA
++ * at the end. These 4 extra bytes were not subtracted from sizeof(*p)
++ * before the switch to the VLA, so this way the behavior is unchanged.
++ * Chances are these 4 extra bytes are not necessary but they are kept
++ * to avoid regressions.
++ */
++ p = hgsmi_buffer_alloc(ctx, sizeof(*p) + pixel_len + 4, HGSMI_CH_VBVA,
+ VBVA_MOUSE_POINTER_SHAPE);
+ if (!p)
+ return -ENOMEM;
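
The allocation comment above is easy to verify: with the old u8 data[4] member, sizeof(*p) included 4 bytes of pixel storage, and the flexible-array conversion removes them from sizeof, so the "+ 4" keeps the requested buffer size bit-for-bit identical. A compilable illustration (struct trimmed to two fields, names illustrative):

	#include <stdio.h>
	#include <stdint.h>

	struct shape_fixed {
		uint32_t flags;
		uint8_t  data[4];	/* old style: counted in sizeof */
	} __attribute__((packed));

	struct shape_vla {
		uint32_t flags;
		uint8_t  data[];	/* flexible array: not counted */
	} __attribute__((packed));

	int main(void)
	{
		/* prints "8 4": the VLA form is 4 bytes smaller, hence
		 * the "+ 4" in hgsmi_buffer_alloc() above */
		printf("%zu %zu\n", sizeof(struct shape_fixed),
		       sizeof(struct shape_vla));
		return 0;
	}
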
+diff --git a/drivers/gpu/drm/vboxvideo/vboxvideo.h b/drivers/gpu/drm/vboxvideo/vboxvideo.h
+index f60d82504da02c..79ec8481de0e48 100644
+--- a/drivers/gpu/drm/vboxvideo/vboxvideo.h
++++ b/drivers/gpu/drm/vboxvideo/vboxvideo.h
+@@ -351,10 +351,8 @@ struct vbva_mouse_pointer_shape {
+ * Bytes in the gap between the AND and the XOR mask are undefined.
+ * XOR mask scanlines have no gap between them and size of XOR mask is:
+ * xor_len = width * 4 * height.
+- *
+- * Preallocate 4 bytes for accessing actual data as p->data.
+ */
+- u8 data[4];
++ u8 data[];
+ } __packed;
+
+ /* pointer is visible */
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index fab155a68054a7..82d18b88f4a7e7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -886,6 +886,10 @@ static int vmw_stdu_connector_atomic_check(struct drm_connector *conn,
+ struct drm_crtc_state *new_crtc_state;
+
+ conn_state = drm_atomic_get_connector_state(state, conn);
++
++ if (IS_ERR(conn_state))
++ return PTR_ERR(conn_state);
++
+ du = vmw_connector_to_stdu(conn);
+
+ if (!conn_state->crtc)
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index 8a44a2b6dcbb66..fb394189d9e233 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -960,13 +960,13 @@ void xe_device_declare_wedged(struct xe_device *xe)
+ return;
+ }
+
++ xe_pm_runtime_get_noresume(xe);
++
+ if (drmm_add_action_or_reset(&xe->drm, xe_device_wedged_fini, xe)) {
+ drm_err(&xe->drm, "Failed to register xe_device_wedged_fini clean-up. Although device is wedged.\n");
+ return;
+ }
+
+- xe_pm_runtime_get_noresume(xe);
+-
+ if (!atomic_xchg(&xe->wedged.flag, 1)) {
+ xe->needs_flr_on_fini = true;
+ drm_err(&xe->drm,
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index f36980aa26e69a..6e5ba381eadedd 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -40,11 +40,6 @@
+ * user knows an exec writes to a BO and reads from the BO in the next exec, it
+ * is the user's responsibility to pass in / out fence between the two execs).
+ *
+- * Implicit dependencies for external BOs are handled by using the dma-buf
+- * implicit dependency uAPI (TODO: add link). To make this works each exec must
+- * install the job's fence into the DMA_RESV_USAGE_WRITE slot of every external
+- * BO mapped in the VM.
+- *
+ * We do not allow a user to trigger a bind at exec time rather we have a VM
+ * bind IOCTL which uses the same in / out fence interface as exec. In that
+ * sense, a VM bind is basically the same operation as an exec from the user
+@@ -58,8 +53,8 @@
+ * behind any pending kernel operations on any external BOs in VM or any BOs
+ * private to the VM. This is accomplished by the rebinds waiting on BOs
+ * DMA_RESV_USAGE_KERNEL slot (kernel ops) and kernel ops waiting on all BOs
+- * slots (inflight execs are in the DMA_RESV_USAGE_BOOKING for private BOs and
+- * in DMA_RESV_USAGE_WRITE for external BOs).
++ * slots (inflight execs are in the DMA_RESV_USAGE_BOOKKEEP slot for both
++ * private and external BOs).
+ *
+ * Rebinds / dma-resv usage applies to non-compute mode VMs only as for compute
+ * mode VMs we use preempt fences and a rebind worker (TODO: add link).
+@@ -292,7 +287,8 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ xe_sched_job_arm(job);
+ if (!xe_vm_in_lr_mode(vm))
+ drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, &job->drm.s_fence->finished,
+- DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE);
++ DMA_RESV_USAGE_BOOKKEEP,
++ DMA_RESV_USAGE_BOOKKEEP);
+
+ for (i = 0; i < num_syncs; i++) {
+ xe_sync_entry_signal(&syncs[i], &job->drm.s_fence->finished);
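
The xe_exec.c change moves documentation and code in lockstep: exec fences for external BOs are now added with DMA_RESV_USAGE_BOOKKEEP instead of DMA_RESV_USAGE_WRITE, so they no longer participate in implicit synchronization. A short reference sketch of the slot semantics (my summary of include/linux/dma-resv.h, not text from this patch; a wait at a given usage level also waits for all usages listed before it):

	#include <linux/dma-resv.h>

	/*
	 * enum dma_resv_usage:
	 *   DMA_RESV_USAGE_KERNEL   - kernel memory management work
	 *   DMA_RESV_USAGE_WRITE    - implicit-sync writers
	 *   DMA_RESV_USAGE_READ     - implicit-sync readers
	 *   DMA_RESV_USAGE_BOOKKEEP - tracked only; ignored by implicit sync
	 */
	dma_resv_add_fence(bo_resv, fence, DMA_RESV_USAGE_BOOKKEEP);
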
+diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+index 6aac7fe686735a..6bdd0a5b361226 100644
+--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
++++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+@@ -51,7 +51,9 @@ xe_sched_invalidate_job(struct xe_sched_job *job, int threshold)
+ static inline void xe_sched_add_pending_job(struct xe_gpu_scheduler *sched,
+ struct xe_sched_job *job)
+ {
++ spin_lock(&sched->base.job_list_lock);
+ list_add(&job->drm.list, &sched->base.pending_list);
++ spin_unlock(&sched->base.job_list_lock);
+ }
+
+ static inline
+diff --git a/drivers/gpu/drm/xe/xe_gt_mcr.c b/drivers/gpu/drm/xe/xe_gt_mcr.c
+index 6d948a46912642..d57a765a1a9693 100644
+--- a/drivers/gpu/drm/xe/xe_gt_mcr.c
++++ b/drivers/gpu/drm/xe/xe_gt_mcr.c
+@@ -407,7 +407,7 @@ void xe_gt_mcr_init(struct xe_gt *gt)
+ if (gt->info.type == XE_GT_TYPE_MEDIA) {
+ drm_WARN_ON(&xe->drm, MEDIA_VER(xe) < 13);
+
+- if (MEDIA_VER(xe) >= 20) {
++ if (MEDIA_VERx100(xe) >= 1301) {
+ gt->steering[OADDRM].ranges = xe2lpm_gpmxmt_steering_table;
+ gt->steering[INSTANCE0].ranges = xe2lpm_instance0_steering_table;
+ } else {
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 87cb76a8718c99..82795133e129ec 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -36,6 +36,15 @@ static long tlb_timeout_jiffies(struct xe_gt *gt)
+ return hw_tlb_timeout + 2 * delay;
+ }
+
++static void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence)
++{
++ if (WARN_ON_ONCE(!fence->gt))
++ return;
++
++ xe_pm_runtime_put(gt_to_xe(fence->gt));
++ fence->gt = NULL; /* fini() should be called once */
++}
++
+ static void
+ __invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)
+ {
+@@ -203,7 +212,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
+ tlb_timeout_jiffies(gt));
+ }
+ spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
+- } else if (ret < 0) {
++ } else {
+ __invalidation_fence_signal(xe, fence);
+ }
+ if (!ret) {
+@@ -265,10 +274,8 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
+
+ xe_gt_tlb_invalidation_fence_init(gt, &fence, true);
+ ret = xe_gt_tlb_invalidation_guc(gt, &fence);
+- if (ret < 0) {
+- xe_gt_tlb_invalidation_fence_fini(&fence);
++ if (ret)
+ return ret;
+- }
+
+ xe_gt_tlb_invalidation_fence_wait(&fence);
+ } else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
+@@ -494,7 +501,8 @@ static const struct dma_fence_ops invalidation_fence_ops = {
+ * @stack: fence is stack variable
+ *
+ * Initialize TLB invalidation fence for use. xe_gt_tlb_invalidation_fence_fini
+- * must be called if fence is not signaled.
++ * will be automatically called when fence is signalled (all fences must signal),
++ * even on error.
+ */
+ void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
+ struct xe_gt_tlb_invalidation_fence *fence,
+@@ -514,14 +522,3 @@ void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
+ dma_fence_get(&fence->base);
+ fence->gt = gt;
+ }
+-
+-/**
+- * xe_gt_tlb_invalidation_fence_fini - Finalize TLB invalidation fence
+- * @fence: TLB invalidation fence to finalize
+- *
+- * Drop PM ref which fence took durinig init.
+- */
+-void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence)
+-{
+- xe_pm_runtime_put(gt_to_xe(fence->gt));
+-}
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+index a84065fa324c76..f430d5797af701 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+@@ -28,7 +28,6 @@ int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+ void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
+ struct xe_gt_tlb_invalidation_fence *fence,
+ bool stack);
+-void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence);
+
+ static inline void
+ xe_gt_tlb_invalidation_fence_wait(struct xe_gt_tlb_invalidation_fence *fence)
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 690f821f8bf5ad..dfd809e7bbd257 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1101,10 +1101,13 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
+
+ /*
+ * TDR has fired before free job worker. Common if exec queue
+- * immediately closed after last fence signaled.
++ * immediately closed after last fence signaled. Add the job back to the
++ * pending list so it can be freed, and kick the scheduler so the free-job
++ * worker is not lost.
+ */
+ if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &job->fence->flags)) {
+- guc_exec_queue_free_job(drm_job);
++ xe_sched_add_pending_job(sched, job);
++ xe_sched_submission_start(sched);
+
+ return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 49ba9a1e375f42..3ac41f70ea6b17 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -3377,10 +3377,8 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
+
+ ret = xe_gt_tlb_invalidation_vma(tile->primary_gt,
+ &fence[fence_id], vma);
+- if (ret < 0) {
+- xe_gt_tlb_invalidation_fence_fini(&fence[fence_id]);
++ if (ret)
+ goto wait;
+- }
+ ++fence_id;
+
+ if (!tile->media_gt)
+@@ -3392,10 +3390,8 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
+
+ ret = xe_gt_tlb_invalidation_vma(tile->media_gt,
+ &fence[fence_id], vma);
+- if (ret < 0) {
+- xe_gt_tlb_invalidation_fence_fini(&fence[fence_id]);
++ if (ret)
+ goto wait;
+- }
+ ++fence_id;
+ }
+ }
+diff --git a/drivers/hwmon/jc42.c b/drivers/hwmon/jc42.c
+index a260cff750a584..c459dce496a6ed 100644
+--- a/drivers/hwmon/jc42.c
++++ b/drivers/hwmon/jc42.c
+@@ -417,7 +417,7 @@ static int jc42_detect(struct i2c_client *client, struct i2c_board_info *info)
+ return -ENODEV;
+
+ if ((devid & TSE2004_DEVID_MASK) == TSE2004_DEVID &&
+- (cap & 0x00e7) != 0x00e7)
++ (cap & 0x0062) != 0x0062)
+ return -ENODEV;
+
+ for (i = 0; i < ARRAY_SIZE(jc42_chips); i++) {
+diff --git a/drivers/iio/accel/bma400_core.c b/drivers/iio/accel/bma400_core.c
+index e90e2f01550ad3..04083b7395ab8c 100644
+--- a/drivers/iio/accel/bma400_core.c
++++ b/drivers/iio/accel/bma400_core.c
+@@ -1219,7 +1219,8 @@ static int bma400_activity_event_en(struct bma400_data *data,
+ static int bma400_tap_event_en(struct bma400_data *data,
+ enum iio_event_direction dir, int state)
+ {
+- unsigned int mask, field_value;
++ unsigned int mask;
++ unsigned int field_value = 0;
+ int ret;
+
+ /*
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index cceac30e2bb9f9..c16316664db38e 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -1486,6 +1486,8 @@ config TI_LMP92064
+ tristate "Texas Instruments LMP92064 ADC driver"
+ depends on SPI
+ select REGMAP_SPI
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build support for the LMP92064 Precision Current and Voltage
+ sensor.
+diff --git a/drivers/iio/frequency/Kconfig b/drivers/iio/frequency/Kconfig
+index 89ae09db5ca5fc..583cbdf4e8cdab 100644
+--- a/drivers/iio/frequency/Kconfig
++++ b/drivers/iio/frequency/Kconfig
+@@ -92,25 +92,26 @@ config ADMV1014
+ module will be called admv1014.
+
+ config ADMV4420
+- tristate "Analog Devices ADMV4420 K Band Downconverter"
+- depends on SPI
+- help
+- Say yes here to build support for Analog Devices K Band
+- Downconverter with integrated Fractional-N PLL and VCO.
++ tristate "Analog Devices ADMV4420 K Band Downconverter"
++ depends on SPI
++ select REGMAP_SPI
++ help
++ Say yes here to build support for Analog Devices K Band
++ Downconverter with integrated Fractional-N PLL and VCO.
+
+- To compile this driver as a module, choose M here: the
+- module will be called admv4420.
++ To compile this driver as a module, choose M here: the
++ module will be called admv4420.
+
+ config ADRF6780
+- tristate "Analog Devices ADRF6780 Microwave Upconverter"
+- depends on SPI
+- depends on COMMON_CLK
+- help
+- Say yes here to build support for Analog Devices ADRF6780
+- 5.9 GHz to 23.6 GHz, Wideband, Microwave Upconverter.
+-
+- To compile this driver as a module, choose M here: the
+- module will be called adrf6780.
++ tristate "Analog Devices ADRF6780 Microwave Upconverter"
++ depends on SPI
++ depends on COMMON_CLK
++ help
++ Say yes here to build support for Analog Devices ADRF6780
++ 5.9 GHz to 23.6 GHz, Wideband, Microwave Upconverter.
++
++ To compile this driver as a module, choose M here: the
++ module will be called adrf6780.
+
+ endmenu
+ endmenu
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index be0743dac3fff3..c4cf26f1d1496a 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -269,6 +269,8 @@ rdma_find_ndev_for_src_ip_rcu(struct net *net, const struct sockaddr *src_in)
+ break;
+ #endif
+ }
++ if (!ret && dev && is_vlan_dev(dev))
++ dev = vlan_dev_real_dev(dev);
+ return ret ? ERR_PTR(ret) : dev;
+ }
+
+diff --git a/drivers/infiniband/hw/bnxt_re/hw_counters.c b/drivers/infiniband/hw/bnxt_re/hw_counters.c
+index 128651c015956c..1e63f809174837 100644
+--- a/drivers/infiniband/hw/bnxt_re/hw_counters.c
++++ b/drivers/infiniband/hw/bnxt_re/hw_counters.c
+@@ -366,7 +366,7 @@ int bnxt_re_ib_get_hw_stats(struct ib_device *ibdev,
+ goto done;
+ }
+ }
+- if (rdev->pacing.dbr_pacing)
++ if (rdev->pacing.dbr_pacing && bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx))
+ bnxt_re_copy_db_pacing_stats(rdev, stats);
+ }
+
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+index e98cb17173385b..b368916a5bcfcb 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+@@ -77,6 +77,7 @@ struct bnxt_re_srq {
+ struct bnxt_qplib_srq qplib_srq;
+ struct ib_umem *umem;
+ spinlock_t lock; /* protect srq */
++ void *uctx_srq_page;
+ };
+
+ struct bnxt_re_qp {
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 9714b9ab75240b..9b7093eb439c6d 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -184,8 +184,11 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev, u8 wqe_mode)
+
+ bnxt_re_set_db_offset(rdev);
+ rc = bnxt_qplib_map_db_bar(&rdev->qplib_res);
+- if (rc)
++ if (rc) {
++ kfree(rdev->chip_ctx);
++ rdev->chip_ctx = NULL;
+ return rc;
++ }
+
+ if (bnxt_qplib_determine_atomics(en_dev->pdev))
+ ibdev_info(&rdev->ibdev,
+@@ -515,6 +518,7 @@ static bool is_dbr_fifo_full(struct bnxt_re_dev *rdev)
+ static void __wait_for_fifo_occupancy_below_th(struct bnxt_re_dev *rdev)
+ {
+ struct bnxt_qplib_db_pacing_data *pacing_data = rdev->qplib_res.pacing_data;
++ u32 retry_fifo_check = 1000;
+ u32 fifo_occup;
+
+ /* loop shouldn't run infinitely as the occupancy usually goes
+@@ -528,6 +532,14 @@ static void __wait_for_fifo_occupancy_below_th(struct bnxt_re_dev *rdev)
+
+ if (fifo_occup < pacing_data->pacing_th)
+ break;
++ if (!retry_fifo_check--) {
++ dev_info_once(rdev_to_dev(rdev),
++ "%s: fifo_occup = 0x%xfifo_max_depth = 0x%x pacing_th = 0x%x\n",
++ __func__, fifo_occup, pacing_data->fifo_max_depth,
++ pacing_data->pacing_th);
++ break;
++ }
++
+ }
+ }
+
+@@ -1009,12 +1021,15 @@ static int bnxt_re_handle_unaffi_async_event(struct creq_func_event
+ static int bnxt_re_handle_qp_async_event(struct creq_qp_event *qp_event,
+ struct bnxt_re_qp *qp)
+ {
+- struct bnxt_re_srq *srq = container_of(qp->qplib_qp.srq, struct bnxt_re_srq,
+- qplib_srq);
+ struct creq_qp_error_notification *err_event;
++ struct bnxt_re_srq *srq = NULL;
+ struct ib_event event = {};
+ unsigned int flags;
+
++ if (qp->qplib_qp.srq)
++ srq = container_of(qp->qplib_qp.srq, struct bnxt_re_srq,
++ qplib_srq);
++
+ if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR &&
+ rdma_is_kernel_res(&qp->ib_qp.res)) {
+ flags = bnxt_re_lock_cqs(qp);
+@@ -1242,15 +1257,9 @@ static int bnxt_re_cqn_handler(struct bnxt_qplib_nq *nq,
+ {
+ struct bnxt_re_cq *cq = container_of(handle, struct bnxt_re_cq,
+ qplib_cq);
+- u32 *cq_ptr;
+
+- if (cq->ib_cq.comp_handler) {
+- if (cq->uctx_cq_page) {
+- cq_ptr = (u32 *)cq->uctx_cq_page;
+- *cq_ptr = cq->qplib_cq.toggle;
+- }
++ if (cq->ib_cq.comp_handler)
+ (*cq->ib_cq.comp_handler)(&cq->ib_cq, cq->ib_cq.cq_context);
+- }
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 49e4a4a50bfaeb..03d517be9c52ec 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -54,6 +54,10 @@
+ #include "qplib_rcfw.h"
+ #include "qplib_sp.h"
+ #include "qplib_fp.h"
++#include <rdma/ib_addr.h>
++#include "bnxt_ulp.h"
++#include "bnxt_re.h"
++#include "ib_verbs.h"
+
+ static void __clean_cq(struct bnxt_qplib_cq *cq, u64 qp);
+
+@@ -323,6 +327,7 @@ static void bnxt_qplib_service_nq(struct tasklet_struct *t)
+ case NQ_BASE_TYPE_CQ_NOTIFICATION:
+ {
+ struct nq_cn *nqcne = (struct nq_cn *)nqe;
++ struct bnxt_re_cq *cq_p;
+
+ q_handle = le32_to_cpu(nqcne->cq_handle_low);
+ q_handle |= (u64)le32_to_cpu(nqcne->cq_handle_high)
+@@ -333,6 +338,10 @@ static void bnxt_qplib_service_nq(struct tasklet_struct *t)
+ cq->toggle = (le16_to_cpu(nqe->info10_type) &
+ NQ_CN_TOGGLE_MASK) >> NQ_CN_TOGGLE_SFT;
+ cq->dbinfo.toggle = cq->toggle;
++ cq_p = container_of(cq, struct bnxt_re_cq, qplib_cq);
++ if (cq_p->uctx_cq_page)
++ *((u32 *)cq_p->uctx_cq_page) = cq->toggle;
++
+ bnxt_qplib_armen_db(&cq->dbinfo,
+ DBC_DBC_TYPE_CQ_ARMENA);
+ spin_lock_bh(&cq->compl_lock);
+@@ -347,6 +356,7 @@ static void bnxt_qplib_service_nq(struct tasklet_struct *t)
+ case NQ_BASE_TYPE_SRQ_EVENT:
+ {
+ struct bnxt_qplib_srq *srq;
++ struct bnxt_re_srq *srq_p;
+ struct nq_srq_event *nqsrqe =
+ (struct nq_srq_event *)nqe;
+
+@@ -354,6 +364,12 @@ static void bnxt_qplib_service_nq(struct tasklet_struct *t)
+ q_handle |= (u64)le32_to_cpu(nqsrqe->srq_handle_high)
+ << 32;
+ srq = (struct bnxt_qplib_srq *)q_handle;
++ srq->toggle = (le16_to_cpu(nqe->info10_type) & NQ_CN_TOGGLE_MASK)
++ >> NQ_CN_TOGGLE_SFT;
++ srq->dbinfo.toggle = srq->toggle;
++ srq_p = container_of(srq, struct bnxt_re_srq, qplib_srq);
++ if (srq_p->uctx_srq_page)
++ *((u32 *)srq_p->uctx_srq_page) = srq->toggle;
+ bnxt_qplib_armen_db(&srq->dbinfo,
+ DBC_DBC_TYPE_SRQ_ARMENA);
+ if (nq->srqn_handler(nq,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 56538b90d6c56f..389862df818d99 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -105,6 +105,7 @@ struct bnxt_qplib_srq {
+ struct bnxt_qplib_sg_info sg_info;
+ u16 eventq_hw_ring_id;
+ spinlock_t lock; /* protect SRQE link list */
++ u8 toggle;
+ };
+
+ struct bnxt_qplib_sge {
+@@ -169,7 +170,7 @@ struct bnxt_qplib_swqe {
+ };
+ u32 q_key;
+ u32 dst_qp;
+- u16 avid;
++ u32 avid;
+ } send;
+
+ /* Send Raw Ethernet and QP1 */
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 3ffaef0c265194..7294221b3316cf 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -525,7 +525,7 @@ static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+ /* failed with status */
+ dev_err(&rcfw->pdev->dev, "cmdq[%#x]=%#x status %#x\n",
+ cookie, opcode, evnt->status);
+- rc = -EFAULT;
++ rc = -EIO;
+ }
+
+ return rc;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index dfc943fab87b4f..96ceec1e8199a6 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -244,6 +244,8 @@ int bnxt_qplib_alloc_init_hwq(struct bnxt_qplib_hwq *hwq,
+ sginfo.pgsize = npde * pg_size;
+ sginfo.npages = 1;
+ rc = __alloc_pbl(res, &hwq->pbl[PBL_LVL_0], &sginfo);
++ if (rc)
++ goto fail;
+
+ /* Alloc PBL pages */
+ sginfo.npages = npbl;
+@@ -255,22 +257,9 @@ int bnxt_qplib_alloc_init_hwq(struct bnxt_qplib_hwq *hwq,
+ dst_virt_ptr =
+ (dma_addr_t **)hwq->pbl[PBL_LVL_0].pg_arr;
+ src_phys_ptr = hwq->pbl[PBL_LVL_1].pg_map_arr;
+- if (hwq_attr->type == HWQ_TYPE_MR) {
+- /* For MR it is expected that we supply only 1 contigous
+- * page i.e only 1 entry in the PDL that will contain
+- * all the PBLs for the user supplied memory region
+- */
+- for (i = 0; i < hwq->pbl[PBL_LVL_1].pg_count;
+- i++)
+- dst_virt_ptr[0][i] = src_phys_ptr[i] |
+- flag;
+- } else {
+- for (i = 0; i < hwq->pbl[PBL_LVL_1].pg_count;
+- i++)
+- dst_virt_ptr[PTR_PG(i)][PTR_IDX(i)] =
+- src_phys_ptr[i] |
+- PTU_PDE_VALID;
+- }
++ for (i = 0; i < hwq->pbl[PBL_LVL_1].pg_count; i++)
++ dst_virt_ptr[0][i] = src_phys_ptr[i] | flag;
++
+ /* Alloc or init PTEs */
+ rc = __alloc_pbl(res, &hwq->pbl[PBL_LVL_2],
+ hwq_attr->sginfo);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 9328db92fa6db3..420f8613bcd515 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -137,6 +137,8 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+ 6 : sb->max_sge;
+ attr->max_cq = le32_to_cpu(sb->max_cq);
+ attr->max_cq_wqes = le32_to_cpu(sb->max_cqe);
++ if (!bnxt_qplib_is_chip_gen_p7(rcfw->res->cctx))
++ attr->max_cq_wqes = min_t(u32, BNXT_QPLIB_MAX_CQ_WQES, attr->max_cq_wqes);
+ attr->max_cq_sges = attr->max_qp_sges;
+ attr->max_mr = le32_to_cpu(sb->max_mr);
+ attr->max_mw = le32_to_cpu(sb->max_mw);
+@@ -154,7 +156,14 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+ if (!bnxt_qplib_is_chip_gen_p7(rcfw->res->cctx))
+ attr->l2_db_size = (sb->l2_db_space_size + 1) *
+ (0x01 << RCFW_DBR_BASE_PAGE_SHIFT);
+- attr->max_sgid = BNXT_QPLIB_NUM_GIDS_SUPPORTED;
++ /*
++ * Read the max gid supported by HW.
++ * For each GID entry in the HW table, we consume 2
++ * GID entries in the kernel GID table, so the max_gid reported
++ * to the stack can be twice the value reported by the HW, capped at 256 GIDs.
++ */
++ attr->max_sgid = le32_to_cpu(sb->max_gid);
++ attr->max_sgid = min_t(u32, BNXT_QPLIB_NUM_GIDS_SUPPORTED, 2 * attr->max_sgid);
+ attr->dev_cap_flags = le16_to_cpu(sb->dev_cap_flags);
+ attr->dev_cap_flags2 = le16_to_cpu(sb->dev_cap_ext_flags_2);
+
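
A quick worked check of the clamp above (values illustrative): if the HW reports max_gid = 64, the stack sees min(256, 2 * 64) = 128; if it reports 200, the doubled value is clamped to BNXT_QPLIB_NUM_GIDS_SUPPORTED and the stack sees 256. The doubling reflects the comment's point that one HW GID entry backs two kernel GID table entries.
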
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+index 16a67d70a6fc4b..2f16f3db093ea0 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+@@ -55,6 +55,7 @@ struct bnxt_qplib_dev_attr {
+ u32 max_qp_wqes;
+ u32 max_qp_sges;
+ u32 max_cq;
++#define BNXT_QPLIB_MAX_CQ_WQES 0xfffff
+ u32 max_cq_wqes;
+ u32 max_cq_sges;
+ u32 max_mr;
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index b3757c6a0457a1..8d753e6e0c7190 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -2086,7 +2086,7 @@ static int import_ep(struct c4iw_ep *ep, int iptype, __u8 *peer_ip,
+ err = -ENOMEM;
+ if (n->dev->flags & IFF_LOOPBACK) {
+ if (iptype == 4)
+- pdev = ip_dev_find(&init_net, *(__be32 *)peer_ip);
++ pdev = __ip_dev_find(&init_net, *(__be32 *)peer_ip, false);
+ else if (IS_ENABLED(CONFIG_IPV6))
+ for_each_netdev(&init_net, pdev) {
+ if (ipv6_chk_addr(&init_net,
+@@ -2101,12 +2101,12 @@ static int import_ep(struct c4iw_ep *ep, int iptype, __u8 *peer_ip,
+ err = -ENODEV;
+ goto out;
+ }
++ if (is_vlan_dev(pdev))
++ pdev = vlan_dev_real_dev(pdev);
+ ep->l2t = cxgb4_l2t_get(cdev->rdev.lldi.l2t,
+ n, pdev, rt_tos2priority(tos));
+- if (!ep->l2t) {
+- dev_put(pdev);
++ if (!ep->l2t)
+ goto out;
+- }
+ ep->mtu = pdev->mtu;
+ ep->tx_chan = cxgb4_port_chan(pdev);
+ ep->smac_idx = ((struct port_info *)netdev_priv(pdev))->smt_idx;
+@@ -2119,7 +2119,6 @@ static int import_ep(struct c4iw_ep *ep, int iptype, __u8 *peer_ip,
+ ep->rss_qid = cdev->rdev.lldi.rxq_ids[
+ cxgb4_port_idx(pdev) * step];
+ set_tcp_window(ep, (struct port_info *)netdev_priv(pdev));
+- dev_put(pdev);
+ } else {
+ pdev = get_real_dev(n->dev);
+ ep->l2t = cxgb4_l2t_get(cdev->rdev.lldi.l2t,
+diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
+index 36bb7e5ce63829..ce8d821bdad847 100644
+--- a/drivers/infiniband/hw/irdma/cm.c
++++ b/drivers/infiniband/hw/irdma/cm.c
+@@ -3631,7 +3631,7 @@ void irdma_free_lsmm_rsrc(struct irdma_qp *iwqp)
+ /**
+ * irdma_accept - registered call for connection to be accepted
+ * @cm_id: cm information for passive connection
+- * @conn_param: accpet parameters
++ * @conn_param: accept parameters
+ */
+ int irdma_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ {
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 9632afbd727b64..5dfb4644446ba8 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -68,6 +68,8 @@ MODULE_LICENSE("Dual BSD/GPL");
+ static u64 srpt_service_guid;
+ static DEFINE_SPINLOCK(srpt_dev_lock); /* Protects srpt_dev_list. */
+ static LIST_HEAD(srpt_dev_list); /* List of srpt_device structures. */
++static DEFINE_MUTEX(srpt_mc_mutex); /* Protects srpt_memory_caches. */
++static DEFINE_XARRAY(srpt_memory_caches); /* See also srpt_memory_cache_entry */
+
+ static unsigned srp_max_req_size = DEFAULT_MAX_REQ_SIZE;
+ module_param(srp_max_req_size, int, 0444);
+@@ -105,6 +107,63 @@ static void srpt_recv_done(struct ib_cq *cq, struct ib_wc *wc);
+ static void srpt_send_done(struct ib_cq *cq, struct ib_wc *wc);
+ static void srpt_process_wait_list(struct srpt_rdma_ch *ch);
+
++/* Type of the entries in srpt_memory_caches. */
++struct srpt_memory_cache_entry {
++ refcount_t ref;
++ struct kmem_cache *c;
++};
++
++static struct kmem_cache *srpt_cache_get(unsigned int object_size)
++{
++ struct srpt_memory_cache_entry *e;
++ char name[32];
++ void *res;
++
++ guard(mutex)(&srpt_mc_mutex);
++ e = xa_load(&srpt_memory_caches, object_size);
++ if (e) {
++ refcount_inc(&e->ref);
++ return e->c;
++ }
++ snprintf(name, sizeof(name), "srpt-%u", object_size);
++ e = kmalloc(sizeof(*e), GFP_KERNEL);
++ if (!e)
++ return NULL;
++ refcount_set(&e->ref, 1);
++ e->c = kmem_cache_create(name, object_size, /*align=*/512, 0, NULL);
++ if (!e->c)
++ goto free_entry;
++ res = xa_store(&srpt_memory_caches, object_size, e, GFP_KERNEL);
++ if (xa_is_err(res))
++ goto destroy_cache;
++ return e->c;
++
++destroy_cache:
++ kmem_cache_destroy(e->c);
++
++free_entry:
++ kfree(e);
++ return NULL;
++}
++
++static void srpt_cache_put(struct kmem_cache *c)
++{
++ struct srpt_memory_cache_entry *e = NULL;
++ unsigned long object_size;
++
++ guard(mutex)(&srpt_mc_mutex);
++ xa_for_each(&srpt_memory_caches, object_size, e)
++ if (e->c == c)
++ break;
++ if (WARN_ON_ONCE(!e))
++ return;
++ if (!refcount_dec_and_test(&e->ref))
++ return;
++ WARN_ON_ONCE(xa_erase(&srpt_memory_caches, object_size) != e);
++ kmem_cache_destroy(e->c);
++ kfree(e);
++}
++
+ /*
+ * The only allowed channel state changes are those that change the channel
+ * state into a state with a higher numerical value. Hence the new > prev test.
+@@ -2119,13 +2178,13 @@ static void srpt_release_channel_work(struct work_struct *w)
+ ch->sport->sdev, ch->rq_size,
+ ch->rsp_buf_cache, DMA_TO_DEVICE);
+
+- kmem_cache_destroy(ch->rsp_buf_cache);
++ srpt_cache_put(ch->rsp_buf_cache);
+
+ srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring,
+ sdev, ch->rq_size,
+ ch->req_buf_cache, DMA_FROM_DEVICE);
+
+- kmem_cache_destroy(ch->req_buf_cache);
++ srpt_cache_put(ch->req_buf_cache);
+
+ kref_put(&ch->kref, srpt_free_ch);
+ }
+@@ -2245,8 +2304,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ INIT_LIST_HEAD(&ch->cmd_wait_list);
+ ch->max_rsp_size = ch->sport->port_attrib.srp_max_rsp_size;
+
+- ch->rsp_buf_cache = kmem_cache_create("srpt-rsp-buf", ch->max_rsp_size,
+- 512, 0, NULL);
++ ch->rsp_buf_cache = srpt_cache_get(ch->max_rsp_size);
+ if (!ch->rsp_buf_cache)
+ goto free_ch;
+
+@@ -2280,8 +2338,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ alignment_offset = round_up(imm_data_offset, 512) -
+ imm_data_offset;
+ req_sz = alignment_offset + imm_data_offset + srp_max_req_size;
+- ch->req_buf_cache = kmem_cache_create("srpt-req-buf", req_sz,
+- 512, 0, NULL);
++ ch->req_buf_cache = srpt_cache_get(req_sz);
+ if (!ch->req_buf_cache)
+ goto free_rsp_ring;
+
+@@ -2478,7 +2535,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ ch->req_buf_cache, DMA_FROM_DEVICE);
+
+ free_recv_cache:
+- kmem_cache_destroy(ch->req_buf_cache);
++ srpt_cache_put(ch->req_buf_cache);
+
+ free_rsp_ring:
+ srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
+@@ -2486,7 +2543,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ ch->rsp_buf_cache, DMA_TO_DEVICE);
+
+ free_rsp_cache:
+- kmem_cache_destroy(ch->rsp_buf_cache);
++ srpt_cache_put(ch->rsp_buf_cache);
+
+ free_ch:
+ if (rdma_cm_id)
+@@ -3055,7 +3112,7 @@ static void srpt_free_srq(struct srpt_device *sdev)
+ srpt_free_ioctx_ring((struct srpt_ioctx **)sdev->ioctx_ring, sdev,
+ sdev->srq_size, sdev->req_buf_cache,
+ DMA_FROM_DEVICE);
+- kmem_cache_destroy(sdev->req_buf_cache);
++ srpt_cache_put(sdev->req_buf_cache);
+ sdev->srq = NULL;
+ }
+
+@@ -3082,8 +3139,7 @@ static int srpt_alloc_srq(struct srpt_device *sdev)
+ pr_debug("create SRQ #wr= %d max_allow=%d dev= %s\n", sdev->srq_size,
+ sdev->device->attrs.max_srq_wr, dev_name(&device->dev));
+
+- sdev->req_buf_cache = kmem_cache_create("srpt-srq-req-buf",
+- srp_max_req_size, 0, 0, NULL);
++ sdev->req_buf_cache = srpt_cache_get(srp_max_req_size);
+ if (!sdev->req_buf_cache)
+ goto free_srq;
+
+@@ -3105,7 +3161,7 @@ static int srpt_alloc_srq(struct srpt_device *sdev)
+ return 0;
+
+ free_cache:
+- kmem_cache_destroy(sdev->req_buf_cache);
++ srpt_cache_put(sdev->req_buf_cache);
+
+ free_srq:
+ ib_destroy_srq(srq);
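
The ib_srpt.c rework replaces per-login kmem_cache_create() calls with a refcounted, size-keyed cache registry: srpt_cache_get() reuses an existing "srpt-<size>" cache or creates one, and srpt_cache_put() destroys it on the last reference. The guard(mutex)(&srpt_mc_mutex) lines use the scope-based lock guard from linux/cleanup.h, so the mutex is dropped automatically on every return path. A usage sketch of the pairing (error handling trimmed):

	struct kmem_cache *c;
	void *buf;

	c = srpt_cache_get(4096);		/* first caller creates "srpt-4096" */
	if (!c)
		return -ENOMEM;
	buf = kmem_cache_alloc(c, GFP_KERNEL);	/* objects aligned to 512 bytes */
	/* ... use buf ... */
	kmem_cache_free(c, buf);
	srpt_cache_put(c);			/* last put destroys the cache */
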
+diff --git a/drivers/irqchip/irq-renesas-rzg2l.c b/drivers/irqchip/irq-renesas-rzg2l.c
+index 693ff285ca2c67..99e27e01b0b19f 100644
+--- a/drivers/irqchip/irq-renesas-rzg2l.c
++++ b/drivers/irqchip/irq-renesas-rzg2l.c
+@@ -8,6 +8,7 @@
+ */
+
+ #include <linux/bitfield.h>
++#include <linux/cleanup.h>
+ #include <linux/clk.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+@@ -530,12 +531,12 @@ static int rzg2l_irqc_parse_interrupts(struct rzg2l_irqc_priv *priv,
+ static int rzg2l_irqc_common_init(struct device_node *node, struct device_node *parent,
+ const struct irq_chip *irq_chip)
+ {
++ struct platform_device *pdev = of_find_device_by_node(node);
++ struct device *dev __free(put_device) = pdev ? &pdev->dev : NULL;
+ struct irq_domain *irq_domain, *parent_domain;
+- struct platform_device *pdev;
+ struct reset_control *resetn;
+ int ret;
+
+- pdev = of_find_device_by_node(node);
+ if (!pdev)
+ return -ENODEV;
+
+@@ -591,6 +592,17 @@ static int rzg2l_irqc_common_init(struct device_node *node, struct device_node *
+
+ register_syscore_ops(&rzg2l_irqc_syscore_ops);
+
++ /*
++ * Prevent the cleanup function from invoking put_device by assigning
++ * NULL to dev.
++ *
++ * make coccicheck will complain about missing put_device calls, but
++ * those are false positives, as dev will be automatically "put" via
++ * __free_put_device on the failing path.
++ * On the successful path we don't actually want to "put" dev.
++ */
++ dev = NULL;
++
+ return 0;
+
+ pm_put:
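
The rzg2l conversion leans on the same linux/cleanup.h machinery: a local annotated __free(put_device) gets put_device() called automatically when it leaves scope, and assigning NULL on the success path keeps the reference alive, as the added comment explains. A minimal sketch of the pattern under those assumptions (some_setup() is a hypothetical stand-in):

	#include <linux/cleanup.h>
	#include <linux/device.h>
	#include <linux/of_platform.h>

	static int demo_init(struct device_node *node)
	{
		struct platform_device *pdev = of_find_device_by_node(node);
		struct device *dev __free(put_device) = pdev ? &pdev->dev : NULL;

		if (!pdev)
			return -ENODEV;

		if (some_setup(pdev))		/* hypothetical failure path */
			return -EIO;		/* put_device(dev) runs automatically */

		dev = NULL;			/* success: disarm cleanup, keep the ref */
		return 0;
	}
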
+diff --git a/drivers/irqchip/irq-riscv-imsic-platform.c b/drivers/irqchip/irq-riscv-imsic-platform.c
+index 11723a763c102a..c5ec66e0bfd339 100644
+--- a/drivers/irqchip/irq-riscv-imsic-platform.c
++++ b/drivers/irqchip/irq-riscv-imsic-platform.c
+@@ -340,7 +340,7 @@ int imsic_irqdomain_init(void)
+ imsic->fwnode, global->hart_index_bits, global->guest_index_bits);
+ pr_info("%pfwP: group-index-bits: %d, group-index-shift: %d\n",
+ imsic->fwnode, global->group_index_bits, global->group_index_shift);
+- pr_info("%pfwP: per-CPU IDs %d at base PPN %pa\n",
++ pr_info("%pfwP: per-CPU IDs %d at base address %pa\n",
+ imsic->fwnode, global->nr_ids, &global->base_addr);
+ pr_info("%pfwP: total %d interrupts available\n",
+ imsic->fwnode, num_possible_cpus() * (global->nr_ids - 1));
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 2a9c4ee982e023..df6668a823ed84 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4055,9 +4055,12 @@ static int raid10_run(struct mddev *mddev)
+ }
+
+ if (!mddev_is_dm(conf->mddev)) {
+- ret = raid10_set_queue_limits(mddev);
+- if (ret)
++ int err = raid10_set_queue_limits(mddev);
++
++ if (err) {
++ ret = err;
+ goto out_free_conf;
++ }
+ }
+
+ /* need to check that every block has at least one working mirror */
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 1491099528be8d..5ec21bda8ab63b 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -2579,26 +2579,27 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
+ return MICREL_KSZ8_P1_ERRATA;
+ break;
+ case KSZ8567_CHIP_ID:
++ /* KSZ8567R Errata DS80000752C Module 4 */
++ case KSZ8765_CHIP_ID:
++ case KSZ8794_CHIP_ID:
++ case KSZ8795_CHIP_ID:
++ /* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
+ case KSZ9477_CHIP_ID:
++ /* KSZ9477S Errata DS80000754A Module 4 */
+ case KSZ9567_CHIP_ID:
++ /* KSZ9567S Errata DS80000756A Module 4 */
+ case KSZ9896_CHIP_ID:
++ /* KSZ9896C Errata DS80000757A Module 3 */
+ case KSZ9897_CHIP_ID:
+- /* KSZ9477 Errata DS80000754C
+- *
+- * Module 4: Energy Efficient Ethernet (EEE) feature select must
+- * be manually disabled
++ /* KSZ9897R Errata DS80000758C Module 4 */
++ /* Energy Efficient Ethernet (EEE) feature select must be manually disabled.
+ * The EEE feature is enabled by default, but it is not fully
+ * operational. It must be manually disabled through register
+ * controls. If not disabled, the PHY ports can auto-negotiate
+ * to enable EEE, and this feature can cause link drops when
+ * linked to another device supporting EEE.
+ *
+- * The same item appears in the errata for the KSZ9567, KSZ9896,
+- * and KSZ9897.
+- *
+- * A similar item appears in the errata for the KSZ8567, but
+- * provides an alternative workaround. For now, use the simple
+- * workaround of disabling the EEE feature for this device too.
++ * The same item appears in the errata for all switches above.
+ */
+ return MICREL_NO_EEE;
+ }
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 5b4e2ce5470d9b..284270a4ade1c1 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -6347,7 +6347,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .invalid_port_mask = BIT(1) | BIT(2) | BIT(8),
+ .num_internal_phys = 5,
+ .internal_phys_offset = 3,
+- .max_vid = 4095,
++ .max_vid = 8191,
+ .max_sid = 63,
+ .port_base_addr = 0x0,
+ .phy_base_addr = 0x0,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index c34caf9815c5cb..a5468224083962 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -206,6 +206,7 @@ struct mv88e6xxx_gpio_ops;
+ struct mv88e6xxx_avb_ops;
+ struct mv88e6xxx_ptp_ops;
+ struct mv88e6xxx_pcs_ops;
++struct mv88e6xxx_cc_coeffs;
+
+ struct mv88e6xxx_irq {
+ u16 masked;
+@@ -408,6 +409,7 @@ struct mv88e6xxx_chip {
+ struct cyclecounter tstamp_cc;
+ struct timecounter tstamp_tc;
+ struct delayed_work overflow_work;
++ const struct mv88e6xxx_cc_coeffs *cc_coeffs;
+
+ struct ptp_clock *ptp_clock;
+ struct ptp_clock_info ptp_clock_info;
+@@ -731,10 +733,6 @@ struct mv88e6xxx_ptp_ops {
+ int arr1_sts_reg;
+ int dep_sts_reg;
+ u32 rx_filters;
+- u32 cc_shift;
+- u32 cc_mult;
+- u32 cc_mult_num;
+- u32 cc_mult_dem;
+ };
+
+ struct mv88e6xxx_pcs_ops {
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 5394a8cf7bf1d4..04053fdc6489af 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -1713,6 +1713,7 @@ int mv88e6393x_port_set_policy(struct mv88e6xxx_chip *chip, int port,
+ ptr = shift / 8;
+ shift %= 8;
+ mask >>= ptr * 8;
++ ptr <<= 8;
+
+ err = mv88e6393x_port_policy_read(chip, port, ptr, &reg);
+ if (err)
+diff --git a/drivers/net/dsa/mv88e6xxx/ptp.c b/drivers/net/dsa/mv88e6xxx/ptp.c
+index 56391e09b3257e..aed4a4b07f34b1 100644
+--- a/drivers/net/dsa/mv88e6xxx/ptp.c
++++ b/drivers/net/dsa/mv88e6xxx/ptp.c
+@@ -18,6 +18,13 @@
+
+ #define MV88E6XXX_MAX_ADJ_PPB 1000000
+
++struct mv88e6xxx_cc_coeffs {
++ u32 cc_shift;
++ u32 cc_mult;
++ u32 cc_mult_num;
++ u32 cc_mult_dem;
++};
++
+ /* Family MV88E6250:
+ * Raw timestamps are in units of 10-ns clock periods.
+ *
+@@ -25,22 +32,43 @@
+ * simplifies to
+ * clkadj = scaled_ppm * 2^7 / 5^5
+ */
+-#define MV88E6250_CC_SHIFT 28
+-#define MV88E6250_CC_MULT (10 << MV88E6250_CC_SHIFT)
+-#define MV88E6250_CC_MULT_NUM (1 << 7)
+-#define MV88E6250_CC_MULT_DEM 3125ULL
++#define MV88E6XXX_CC_10NS_SHIFT 28
++static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_10ns_coeffs = {
++ .cc_shift = MV88E6XXX_CC_10NS_SHIFT,
++ .cc_mult = 10 << MV88E6XXX_CC_10NS_SHIFT,
++ .cc_mult_num = 1 << 7,
++ .cc_mult_dem = 3125ULL,
++};
+
+-/* Other families:
++/* Other families except MV88E6393X in internal clock mode:
+ * Raw timestamps are in units of 8-ns clock periods.
+ *
+ * clkadj = scaled_ppm * 8*2^28 / (10^6 * 2^16)
+ * simplifies to
+ * clkadj = scaled_ppm * 2^9 / 5^6
+ */
+-#define MV88E6XXX_CC_SHIFT 28
+-#define MV88E6XXX_CC_MULT (8 << MV88E6XXX_CC_SHIFT)
+-#define MV88E6XXX_CC_MULT_NUM (1 << 9)
+-#define MV88E6XXX_CC_MULT_DEM 15625ULL
++#define MV88E6XXX_CC_8NS_SHIFT 28
++static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_8ns_coeffs = {
++ .cc_shift = MV88E6XXX_CC_8NS_SHIFT,
++ .cc_mult = 8 << MV88E6XXX_CC_8NS_SHIFT,
++ .cc_mult_num = 1 << 9,
++ .cc_mult_dem = 15625ULL
++};
++
++/* Family MV88E6393X using internal clock:
++ * Raw timestamps are in units of 4-ns clock periods.
++ *
++ * clkadj = scaled_ppm * 4*2^28 / (10^6 * 2^16)
++ * simplifies to
++ * clkadj = scaled_ppm * 2^8 / 5^6
++ */
++#define MV88E6XXX_CC_4NS_SHIFT 28
++static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_4ns_coeffs = {
++ .cc_shift = MV88E6XXX_CC_4NS_SHIFT,
++ .cc_mult = 4 << MV88E6XXX_CC_4NS_SHIFT,
++ .cc_mult_num = 1 << 8,
++ .cc_mult_dem = 15625ULL
++};
+
+ #define TAI_EVENT_WORK_INTERVAL msecs_to_jiffies(100)
+
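
All three coefficient sets reduce the same way; spelling out the new 4 ns case as a check:

	clkadj = scaled_ppm * 4 * 2^28 / (10^6 * 2^16)
	       = scaled_ppm * 2^30 / (2^22 * 5^6)
	       = scaled_ppm * 2^8 / 5^6

hence cc_mult_num = 1 << 8 and cc_mult_dem = 5^6 = 15625. The identical reduction yields 2^9 / 5^6 for the 8 ns clock and 2^7 / 5^5 (denominator 3125) for the 10 ns clock, matching the three structs above.
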
+@@ -83,6 +111,33 @@ static int mv88e6352_set_gpio_func(struct mv88e6xxx_chip *chip, int pin,
+ return chip->info->ops->gpio_ops->set_pctl(chip, pin, func);
+ }
+
++static const struct mv88e6xxx_cc_coeffs *
++mv88e6xxx_cc_coeff_get(struct mv88e6xxx_chip *chip)
++{
++ u16 period_ps;
++ int err;
++
++ err = mv88e6xxx_tai_read(chip, MV88E6XXX_TAI_CLOCK_PERIOD, &period_ps, 1);
++ if (err) {
++ dev_err(chip->dev, "failed to read cycle counter period: %d\n",
++ err);
++ return ERR_PTR(err);
++ }
++
++ switch (period_ps) {
++ case 4000:
++ return &mv88e6xxx_cc_4ns_coeffs;
++ case 8000:
++ return &mv88e6xxx_cc_8ns_coeffs;
++ case 10000:
++ return &mv88e6xxx_cc_10ns_coeffs;
++ default:
++ dev_err(chip->dev, "unexpected cycle counter period of %u ps\n",
++ period_ps);
++ return ERR_PTR(-ENODEV);
++ }
++}
++
+ static u64 mv88e6352_ptp_clock_read(const struct cyclecounter *cc)
+ {
+ struct mv88e6xxx_chip *chip = cc_to_chip(cc);
+@@ -204,7 +259,6 @@ static void mv88e6352_tai_event_work(struct work_struct *ugly)
+ static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+ {
+ struct mv88e6xxx_chip *chip = ptp_to_chip(ptp);
+- const struct mv88e6xxx_ptp_ops *ptp_ops = chip->info->ops->ptp_ops;
+ int neg_adj = 0;
+ u32 diff, mult;
+ u64 adj;
+@@ -214,10 +268,10 @@ static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+ scaled_ppm = -scaled_ppm;
+ }
+
+- mult = ptp_ops->cc_mult;
+- adj = ptp_ops->cc_mult_num;
++ mult = chip->cc_coeffs->cc_mult;
++ adj = chip->cc_coeffs->cc_mult_num;
+ adj *= scaled_ppm;
+- diff = div_u64(adj, ptp_ops->cc_mult_dem);
++ diff = div_u64(adj, chip->cc_coeffs->cc_mult_dem);
+
+ mv88e6xxx_reg_lock(chip);
+
+@@ -364,10 +418,6 @@ const struct mv88e6xxx_ptp_ops mv88e6165_ptp_ops = {
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
+- .cc_shift = MV88E6XXX_CC_SHIFT,
+- .cc_mult = MV88E6XXX_CC_MULT,
+- .cc_mult_num = MV88E6XXX_CC_MULT_NUM,
+- .cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
+ };
+
+ const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = {
+@@ -391,10 +441,6 @@ const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = {
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
+- .cc_shift = MV88E6250_CC_SHIFT,
+- .cc_mult = MV88E6250_CC_MULT,
+- .cc_mult_num = MV88E6250_CC_MULT_NUM,
+- .cc_mult_dem = MV88E6250_CC_MULT_DEM,
+ };
+
+ const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = {
+@@ -418,10 +464,6 @@ const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = {
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
+- .cc_shift = MV88E6XXX_CC_SHIFT,
+- .cc_mult = MV88E6XXX_CC_MULT,
+- .cc_mult_num = MV88E6XXX_CC_MULT_NUM,
+- .cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
+ };
+
+ const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = {
+@@ -446,10 +488,6 @@ const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = {
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
+- .cc_shift = MV88E6XXX_CC_SHIFT,
+- .cc_mult = MV88E6XXX_CC_MULT,
+- .cc_mult_num = MV88E6XXX_CC_MULT_NUM,
+- .cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
+ };
+
+ static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc)
+@@ -462,10 +500,10 @@ static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc)
+ return 0;
+ }
+
+-/* With a 125MHz input clock, the 32-bit timestamp counter overflows in ~34.3
++/* With a 250MHz input clock, the 32-bit timestamp counter overflows in ~17.2
+ * seconds; this task forces periodic reads so that we don't miss any.
+ */
+-#define MV88E6XXX_TAI_OVERFLOW_PERIOD (HZ * 16)
++#define MV88E6XXX_TAI_OVERFLOW_PERIOD (HZ * 8)
+ static void mv88e6xxx_ptp_overflow_check(struct work_struct *work)
+ {
+ struct delayed_work *dw = to_delayed_work(work);
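
The halved polling period follows directly from the clock change: a 32-bit counter of 4 ns ticks wraps after 2^32 * 4 ns ~= 17.18 s, so reading at least once every HZ * 8 = 8 s guarantees two or more timecounter reads per wrap. The old HZ * 16 value assumed 8 ns ticks, where a wrap takes ~= 34.36 s.
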
+@@ -484,11 +522,15 @@ int mv88e6xxx_ptp_setup(struct mv88e6xxx_chip *chip)
+ int i;
+
+ /* Set up the cycle counter */
++ chip->cc_coeffs = mv88e6xxx_cc_coeff_get(chip);
++ if (IS_ERR(chip->cc_coeffs))
++ return PTR_ERR(chip->cc_coeffs);
++
+ memset(&chip->tstamp_cc, 0, sizeof(chip->tstamp_cc));
+ chip->tstamp_cc.read = mv88e6xxx_ptp_clock_read;
+ chip->tstamp_cc.mask = CYCLECOUNTER_MASK(32);
+- chip->tstamp_cc.mult = ptp_ops->cc_mult;
+- chip->tstamp_cc.shift = ptp_ops->cc_shift;
++ chip->tstamp_cc.mult = chip->cc_coeffs->cc_mult;
++ chip->tstamp_cc.shift = chip->cc_coeffs->cc_shift;
+
+ timecounter_init(&chip->tstamp_tc, &chip->tstamp_cc,
+ ktime_to_ns(ktime_get_real()));
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 212421e9d42e4f..f5a1fefb76509e 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -721,7 +721,6 @@ static int vsc73xx_setup(struct dsa_switch *ds)
+
+ dev_info(vsc->dev, "set up the switch\n");
+
+- ds->untag_bridge_pvid = true;
+ ds->max_num_bridges = DSA_TAG_8021Q_MAX_NUM_BRIDGES;
+
+ /* Issue RESET */
+diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
+index 27af7746d645b8..adf6f67c5fcbad 100644
+--- a/drivers/net/ethernet/aeroflex/greth.c
++++ b/drivers/net/ethernet/aeroflex/greth.c
+@@ -484,7 +484,7 @@ greth_start_xmit_gbit(struct sk_buff *skb, struct net_device *dev)
+
+ if (unlikely(skb->len > MAX_FRAME_SIZE)) {
+ dev->stats.tx_errors++;
+- goto out;
++ goto len_error;
+ }
+
+ /* Save skb pointer. */
+@@ -575,6 +575,7 @@ greth_start_xmit_gbit(struct sk_buff *skb, struct net_device *dev)
+ map_error:
+ if (net_ratelimit())
+ dev_warn(greth->dev, "Could not create TX DMA mapping\n");
++len_error:
+ dev_kfree_skb(skb);
+ out:
+ return err;
+diff --git a/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c b/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
+index 82768b0e90262b..9ea16ef4139d35 100644
+--- a/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
++++ b/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
+@@ -322,6 +322,7 @@ static netdev_tx_t bcmasp_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+ /* Rewind so we do not have a hole */
+ spb_index = intf->tx_spb_index;
++ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index c9faa85408593b..0a68b526e4a821 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1359,6 +1359,7 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ netif_err(priv, tx_err, dev, "DMA map failed at %p (len=%d)\n",
+ skb->data, skb_len);
+ ret = NETDEV_TX_OK;
++ dev_kfree_skb_any(skb);
+ goto out;
+ }
+
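
The greth, bcmasp and bcmsysport hunks all close the same class of leak: once an .ndo_start_xmit handler returns NETDEV_TX_OK, the core treats the skb as consumed, so any early-drop path must free it; only NETDEV_TX_BUSY returns ownership to the stack for requeueing. A sketch of the rule (MAX_FRAME_SIZE and tx_ring_has_room() are illustrative placeholders):

	static netdev_tx_t demo_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		if (skb->len > MAX_FRAME_SIZE) {
			dev->stats.tx_errors++;
			dev_kfree_skb_any(skb);	/* returning TX_OK: we own the skb */
			return NETDEV_TX_OK;
		}
		if (!tx_ring_has_room(dev))
			return NETDEV_TX_BUSY;	/* stack keeps and requeues the skb */
		/* ... map and queue the frame ... */
		return NETDEV_TX_OK;
	}
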
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 04a623b3eee29e..103e6aa604c33e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2257,10 +2257,11 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+
+ if (!bnxt_get_rx_ts_p5(bp, &ts, cmpl_ts)) {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
++ unsigned long flags;
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ ns = timecounter_cyc2time(&ptp->tc, ts);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ memset(skb_hwtstamps(skb), 0,
+ sizeof(*skb_hwtstamps(skb)));
+ skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ns);
+@@ -2760,17 +2761,18 @@ static int bnxt_async_event_process(struct bnxt *bp,
+ case ASYNC_EVENT_CMPL_PHC_UPDATE_EVENT_DATA1_FLAGS_PHC_RTC_UPDATE:
+ if (BNXT_PTP_USE_RTC(bp)) {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
++ unsigned long flags;
+ u64 ns;
+
+ if (!ptp)
+ goto async_event_process_exit;
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ bnxt_ptp_update_current_time(bp);
+ ns = (((u64)BNXT_EVENT_PHC_RTC_UPDATE(data1) <<
+ BNXT_PHC_BITS) | ptp->current_time);
+ bnxt_ptp_rtc_timecounter_init(ptp, ns);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ }
+ break;
+ }
+@@ -13484,9 +13486,11 @@ static void bnxt_force_fw_reset(struct bnxt *bp)
+ return;
+
+ if (ptp) {
+- spin_lock_bh(&ptp->ptp_lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ } else {
+ set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+ }
+@@ -13551,9 +13555,11 @@ void bnxt_fw_reset(struct bnxt *bp)
+ int n = 0, tmo;
+
+ if (ptp) {
+- spin_lock_bh(&ptp->ptp_lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ } else {
+ set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+ }
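
The bnxt.c hunks above and the bnxt_ptp.c hunks that follow perform the same mechanical conversion of ptp_lock from the _bh to the _irqsave spinlock variants. spin_lock_bh() only masks softirqs; if the lock can also be contended from hard-IRQ context (which this conversion suggests), a handler could spin on a lock its own CPU already holds. The irqsave form disables local interrupts and restores the prior state:

	unsigned long flags;

	spin_lock_irqsave(&ptp->ptp_lock, flags);	/* IRQs off, state saved */
	ns = timecounter_cyc2time(&ptp->tc, ts);
	spin_unlock_irqrestore(&ptp->ptp_lock, flags);	/* prior IRQ state restored */
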
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 37d42423459c8e..fa514be8765028 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -62,13 +62,14 @@ static int bnxt_ptp_settime(struct ptp_clock_info *ptp_info,
+ struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
+ ptp_info);
+ u64 ns = timespec64_to_ns(ts);
++ unsigned long flags;
+
+ if (BNXT_PTP_USE_RTC(ptp->bp))
+ return bnxt_ptp_cfg_settime(ptp->bp, ns);
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ timecounter_init(&ptp->tc, &ptp->cc, ns);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ return 0;
+ }
+
+@@ -100,13 +101,14 @@ static int bnxt_refclk_read(struct bnxt *bp, struct ptp_system_timestamp *sts,
+ static void bnxt_ptp_get_current_time(struct bnxt *bp)
+ {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
++ unsigned long flags;
+
+ if (!ptp)
+ return;
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ WRITE_ONCE(ptp->old_time, ptp->current_time);
+ bnxt_refclk_read(bp, NULL, &ptp->current_time);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ }
+
+ static int bnxt_hwrm_port_ts_query(struct bnxt *bp, u32 flags, u64 *ts,
+@@ -149,17 +151,18 @@ static int bnxt_ptp_gettimex(struct ptp_clock_info *ptp_info,
+ {
+ struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
+ ptp_info);
++ unsigned long flags;
+ u64 ns, cycles;
+ int rc;
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ rc = bnxt_refclk_read(ptp->bp, sts, &cycles);
+ if (rc) {
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ return rc;
+ }
+ ns = timecounter_cyc2time(&ptp->tc, cycles);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ *ts = ns_to_timespec64(ns);
+
+ return 0;
+@@ -177,6 +180,7 @@ void bnxt_ptp_update_current_time(struct bnxt *bp)
+ static int bnxt_ptp_adjphc(struct bnxt_ptp_cfg *ptp, s64 delta)
+ {
+ struct hwrm_port_mac_cfg_input *req;
++ unsigned long flags;
+ int rc;
+
+ rc = hwrm_req_init(ptp->bp, req, HWRM_PORT_MAC_CFG);
+@@ -190,9 +194,9 @@ static int bnxt_ptp_adjphc(struct bnxt_ptp_cfg *ptp, s64 delta)
+ if (rc) {
+ netdev_err(ptp->bp->dev, "ptp adjphc failed. rc = %x\n", rc);
+ } else {
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ bnxt_ptp_update_current_time(ptp->bp);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ }
+
+ return rc;
+@@ -202,13 +206,14 @@ static int bnxt_ptp_adjtime(struct ptp_clock_info *ptp_info, s64 delta)
+ {
+ struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
+ ptp_info);
++ unsigned long flags;
+
+ if (BNXT_PTP_USE_RTC(ptp->bp))
+ return bnxt_ptp_adjphc(ptp, delta);
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ timecounter_adjtime(&ptp->tc, delta);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ return 0;
+ }
+
+@@ -236,14 +241,15 @@ static int bnxt_ptp_adjfine(struct ptp_clock_info *ptp_info, long scaled_ppm)
+ struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
+ ptp_info);
+ struct bnxt *bp = ptp->bp;
++ unsigned long flags;
+
+ if (!BNXT_MH(bp))
+ return bnxt_ptp_adjfine_rtc(bp, scaled_ppm);
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ timecounter_read(&ptp->tc);
+ ptp->cc.mult = adjust_by_scaled_ppm(ptp->cmult, scaled_ppm);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ return 0;
+ }
+
+@@ -251,12 +257,13 @@ void bnxt_ptp_pps_event(struct bnxt *bp, u32 data1, u32 data2)
+ {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+ struct ptp_clock_event event;
++ unsigned long flags;
+ u64 ns, pps_ts;
+
+ pps_ts = EVENT_PPS_TS(data2, data1);
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ ns = timecounter_cyc2time(&ptp->tc, pps_ts);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+
+ switch (EVENT_DATA2_PPS_EVENT_TYPE(data2)) {
+ case ASYNC_EVENT_CMPL_PPS_TIMESTAMP_EVENT_DATA2_EVENT_TYPE_INTERNAL:
+@@ -393,16 +400,17 @@ static int bnxt_get_target_cycles(struct bnxt_ptp_cfg *ptp, u64 target_ns,
+ {
+ u64 cycles_now;
+ u64 nsec_now, nsec_delta;
++ unsigned long flags;
+ int rc;
+
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ rc = bnxt_refclk_read(ptp->bp, NULL, &cycles_now);
+ if (rc) {
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ return rc;
+ }
+ nsec_now = timecounter_cyc2time(&ptp->tc, cycles_now);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+
+ nsec_delta = target_ns - nsec_now;
+ *cycles_delta = div64_u64(nsec_delta << ptp->cc.shift, ptp->cc.mult);
+@@ -689,6 +697,7 @@ static int bnxt_stamp_tx_skb(struct bnxt *bp, int slot)
+ struct skb_shared_hwtstamps timestamp;
+ struct bnxt_ptp_tx_req *txts_req;
+ unsigned long now = jiffies;
++ unsigned long flags;
+ u64 ts = 0, ns = 0;
+ u32 tmo = 0;
+ int rc;
+@@ -702,9 +711,9 @@ static int bnxt_stamp_tx_skb(struct bnxt *bp, int slot)
+ tmo, slot);
+ if (!rc) {
+ memset(&timestamp, 0, sizeof(timestamp));
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ ns = timecounter_cyc2time(&ptp->tc, ts);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ timestamp.hwtstamp = ns_to_ktime(ns);
+ skb_tstamp_tx(txts_req->tx_skb, &timestamp);
+ ptp->stats.ts_pkts++;
+@@ -730,6 +739,7 @@ static long bnxt_ptp_ts_aux_work(struct ptp_clock_info *ptp_info)
+ unsigned long now = jiffies;
+ struct bnxt *bp = ptp->bp;
+ u16 cons = ptp->txts_cons;
++ unsigned long flags;
+ u32 num_requests;
+ int rc = 0;
+
+@@ -757,9 +767,9 @@ static long bnxt_ptp_ts_aux_work(struct ptp_clock_info *ptp_info)
+ bnxt_ptp_get_current_time(bp);
+ ptp->next_period = now + HZ;
+ if (time_after_eq(now, ptp->next_overflow_check)) {
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ timecounter_read(&ptp->tc);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ ptp->next_overflow_check = now + BNXT_PHC_OVERFLOW_PERIOD;
+ }
+ if (rc == -EAGAIN)
+@@ -819,6 +829,7 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
+ u32 opaque = tscmp->tx_ts_cmp_opaque;
+ struct bnxt_tx_ring_info *txr;
+ struct bnxt_sw_tx_bd *tx_buf;
++ unsigned long flags;
+ u64 ts, ns;
+ u16 cons;
+
+@@ -833,9 +844,9 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
+ le32_to_cpu(tscmp->tx_ts_cmp_flags_type),
+ le32_to_cpu(tscmp->tx_ts_cmp_errors_v));
+ } else {
+- spin_lock_bh(&ptp->ptp_lock);
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ ns = timecounter_cyc2time(&ptp->tc, ts);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ timestamp.hwtstamp = ns_to_ktime(ns);
+ skb_tstamp_tx(tx_buf->skb, &timestamp);
+ }
+@@ -975,6 +986,7 @@ void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns)
+ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg)
+ {
+ struct timespec64 tsp;
++ unsigned long flags;
+ u64 ns;
+ int rc;
+
+@@ -993,9 +1005,9 @@ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg)
+ if (rc)
+ return rc;
+ }
+- spin_lock_bh(&bp->ptp_cfg->ptp_lock);
++ spin_lock_irqsave(&bp->ptp_cfg->ptp_lock, flags);
+ bnxt_ptp_rtc_timecounter_init(bp->ptp_cfg, ns);
+- spin_unlock_bh(&bp->ptp_cfg->ptp_lock);
++ spin_unlock_irqrestore(&bp->ptp_cfg->ptp_lock, flags);
+
+ return 0;
+ }
+@@ -1063,10 +1075,12 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+ atomic64_set(&ptp->stats.ts_err, 0);
+
+ if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) {
+- spin_lock_bh(&ptp->ptp_lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&ptp->ptp_lock, flags);
+ bnxt_refclk_read(bp, NULL, &ptp->current_time);
+ WRITE_ONCE(ptp->old_time, ptp->current_time);
+- spin_unlock_bh(&ptp->ptp_lock);
++ spin_unlock_irqrestore(&ptp->ptp_lock, flags);
+ ptp_schedule_worker(ptp->ptp_clock, 0);
+ }
+ ptp->txts_tmo = BNXT_PTP_DFLT_TX_TMO;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index a9a2f9a18c9ca1..f322466ecad350 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -146,11 +146,13 @@ struct bnxt_ptp_cfg {
+ };
+
+ #if BITS_PER_LONG == 32
+-#define BNXT_READ_TIME64(ptp, dst, src) \
+-do { \
+- spin_lock_bh(&(ptp)->ptp_lock); \
+- (dst) = (src); \
+- spin_unlock_bh(&(ptp)->ptp_lock); \
++#define BNXT_READ_TIME64(ptp, dst, src) \
++do { \
++ unsigned long flags; \
++ \
++ spin_lock_irqsave(&(ptp)->ptp_lock, flags); \
++ (dst) = (src); \
++ spin_unlock_irqrestore(&(ptp)->ptp_lock, flags); \
+ } while (0)
+ #else
+ #define BNXT_READ_TIME64(ptp, dst, src) \
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index a8596ebcdfd60e..875fe379eea213 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1381,10 +1381,8 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
+ be_get_wrb_params_from_skb(adapter, skb, &wrb_params);
+
+ wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);
+- if (unlikely(!wrb_cnt)) {
+- dev_kfree_skb_any(skb);
+- goto drop;
+- }
++ if (unlikely(!wrb_cnt))
++ goto drop_skb;
+
+ /* if os2bmc is enabled and if the pkt is destined to bmc,
+ * enqueue the pkt a 2nd time with mgmt bit set.
+@@ -1393,7 +1391,7 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
+ BE_WRB_F_SET(wrb_params.features, OS2BMC, 1);
+ wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);
+ if (unlikely(!wrb_cnt))
+- goto drop;
++ goto drop_skb;
+ else
+ skb_get(skb);
+ }
+@@ -1407,6 +1405,8 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
+ be_xmit_flush(adapter, txo);
+
+ return NETDEV_TX_OK;
++drop_skb:
++ dev_kfree_skb_any(skb);
+ drop:
+ tx_stats(txo)->tx_drv_drops++;
+ /* Flush the already enqueued tx requests */
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index 9767586b4eb329..11da139082e1bf 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -197,55 +197,67 @@ static int mac_probe(struct platform_device *_of_dev)
+ err = -EINVAL;
+ goto _return_of_node_put;
+ }
++ mac_dev->fman_dev = &of_dev->dev;
+
+ /* Get the FMan cell-index */
+ err = of_property_read_u32(dev_node, "cell-index", &val);
+ if (err) {
+ dev_err(dev, "failed to read cell-index for %pOF\n", dev_node);
+ err = -EINVAL;
+- goto _return_of_node_put;
++ goto _return_dev_put;
+ }
+ /* cell-index 0 => FMan id 1 */
+ fman_id = (u8)(val + 1);
+
+- priv->fman = fman_bind(&of_dev->dev);
++ priv->fman = fman_bind(mac_dev->fman_dev);
+ if (!priv->fman) {
+ dev_err(dev, "fman_bind(%pOF) failed\n", dev_node);
+ err = -ENODEV;
+- goto _return_of_node_put;
++ goto _return_dev_put;
+ }
+
++ /* Two references have been taken in of_find_device_by_node()
++ * and fman_bind(). Release one of them here. The second one
++ * will be released in mac_remove().
++ */
++ put_device(mac_dev->fman_dev);
+ of_node_put(dev_node);
++ dev_node = NULL;
+
+ /* Get the address of the memory mapped registers */
+ mac_dev->res = platform_get_mem_or_io(_of_dev, 0);
+ if (!mac_dev->res) {
+ dev_err(dev, "could not get registers\n");
+- return -EINVAL;
++ err = -EINVAL;
++ goto _return_dev_put;
+ }
+
+ err = devm_request_resource(dev, fman_get_mem_region(priv->fman),
+ mac_dev->res);
+ if (err) {
+ dev_err_probe(dev, err, "could not request resource\n");
+- return err;
++ goto _return_dev_put;
+ }
+
+ mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start,
+ resource_size(mac_dev->res));
+ if (!mac_dev->vaddr) {
+ dev_err(dev, "devm_ioremap() failed\n");
+- return -EIO;
++ err = -EIO;
++ goto _return_dev_put;
+ }
+
+- if (!of_device_is_available(mac_node))
+- return -ENODEV;
++ if (!of_device_is_available(mac_node)) {
++ err = -ENODEV;
++ goto _return_dev_put;
++ }
+
+ /* Get the cell-index */
+ err = of_property_read_u32(mac_node, "cell-index", &val);
+ if (err) {
+ dev_err(dev, "failed to read cell-index for %pOF\n", mac_node);
+- return -EINVAL;
++ err = -EINVAL;
++ goto _return_dev_put;
+ }
+ priv->cell_index = (u8)val;
+
+@@ -259,22 +271,26 @@ static int mac_probe(struct platform_device *_of_dev)
+ if (unlikely(nph < 0)) {
+ dev_err(dev, "of_count_phandle_with_args(%pOF, fsl,fman-ports) failed\n",
+ mac_node);
+- return nph;
++ err = nph;
++ goto _return_dev_put;
+ }
+
+ if (nph != ARRAY_SIZE(mac_dev->port)) {
+ dev_err(dev, "Not supported number of fman-ports handles of mac node %pOF from device tree\n",
+ mac_node);
+- return -EINVAL;
++ err = -EINVAL;
++ goto _return_dev_put;
+ }
+
+- for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {
++ /* PORT_NUM determines the size of the port array */
++ for (i = 0; i < PORT_NUM; i++) {
+ /* Find the port node */
+ dev_node = of_parse_phandle(mac_node, "fsl,fman-ports", i);
+ if (!dev_node) {
+ dev_err(dev, "of_parse_phandle(%pOF, fsl,fman-ports) failed\n",
+ mac_node);
+- return -EINVAL;
++ err = -EINVAL;
++ goto _return_dev_arr_put;
+ }
+
+ of_dev = of_find_device_by_node(dev_node);
+@@ -282,17 +298,24 @@ static int mac_probe(struct platform_device *_of_dev)
+ dev_err(dev, "of_find_device_by_node(%pOF) failed\n",
+ dev_node);
+ err = -EINVAL;
+- goto _return_of_node_put;
++ goto _return_dev_arr_put;
+ }
++ mac_dev->fman_port_devs[i] = &of_dev->dev;
+
+- mac_dev->port[i] = fman_port_bind(&of_dev->dev);
++ mac_dev->port[i] = fman_port_bind(mac_dev->fman_port_devs[i]);
+ if (!mac_dev->port[i]) {
+ dev_err(dev, "dev_get_drvdata(%pOF) failed\n",
+ dev_node);
+ err = -EINVAL;
+- goto _return_of_node_put;
++ goto _return_dev_arr_put;
+ }
++ /* Two references have been taken in of_find_device_by_node()
++ * and fman_port_bind(). Release one of them here. The second
++ * one will be released in mac_remove().
++ */
++ put_device(mac_dev->fman_port_devs[i]);
+ of_node_put(dev_node);
++ dev_node = NULL;
+ }
+
+ /* Get the PHY connection type */
+@@ -312,7 +335,7 @@ static int mac_probe(struct platform_device *_of_dev)
+
+ err = init(mac_dev, mac_node, &params);
+ if (err < 0)
+- return err;
++ goto _return_dev_arr_put;
+
+ if (!is_zero_ether_addr(mac_dev->addr))
+ dev_info(dev, "FMan MAC address: %pM\n", mac_dev->addr);
+@@ -327,6 +350,12 @@ static int mac_probe(struct platform_device *_of_dev)
+
+ return err;
+
++_return_dev_arr_put:
++ /* mac_dev is kzalloc'ed */
++ for (i = 0; i < PORT_NUM; i++)
++ put_device(mac_dev->fman_port_devs[i]);
++_return_dev_put:
++ put_device(mac_dev->fman_dev);
+ _return_of_node_put:
+ of_node_put(dev_node);
+ return err;
+@@ -335,6 +364,11 @@ static int mac_probe(struct platform_device *_of_dev)
+ static void mac_remove(struct platform_device *pdev)
+ {
+ struct mac_device *mac_dev = platform_get_drvdata(pdev);
++ int i;
++
++ for (i = 0; i < PORT_NUM; i++)
++ put_device(mac_dev->fman_port_devs[i]);
++ put_device(mac_dev->fman_dev);
+
+ platform_device_unregister(mac_dev->priv->eth_dev);
+ }
+diff --git a/drivers/net/ethernet/freescale/fman/mac.h b/drivers/net/ethernet/freescale/fman/mac.h
+index fe747915cc7379..8b5b43d50f8efb 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.h
++++ b/drivers/net/ethernet/freescale/fman/mac.h
+@@ -19,12 +19,13 @@
+ struct fman_mac;
+ struct mac_priv_s;
+
++#define PORT_NUM 2
+ struct mac_device {
+ void __iomem *vaddr;
+ struct device *dev;
+ struct resource *res;
+ u8 addr[ETH_ALEN];
+- struct fman_port *port[2];
++ struct fman_port *port[PORT_NUM];
+ struct phylink *phylink;
+ struct phylink_config phylink_config;
+ phy_interface_t phy_if;
+@@ -52,6 +53,9 @@ struct mac_device {
+
+ struct fman_mac *fman_mac;
+ struct mac_priv_s *priv;
++
++ struct device *fman_dev;
++ struct device *fman_port_devs[PORT_NUM];
+ };
+
+ static inline struct mac_device
+diff --git a/drivers/net/ethernet/i825xx/sun3_82586.c b/drivers/net/ethernet/i825xx/sun3_82586.c
+index f2d4669c81cf29..58a3d28d938c32 100644
+--- a/drivers/net/ethernet/i825xx/sun3_82586.c
++++ b/drivers/net/ethernet/i825xx/sun3_82586.c
+@@ -1012,6 +1012,7 @@ sun3_82586_send_packet(struct sk_buff *skb, struct net_device *dev)
+ if(skb->len > XMIT_BUFF_SIZE)
+ {
+ printk("%s: Sorry, max. framelength is %d bytes. The length of your frame is %d bytes.\n",dev->name,XMIT_BUFF_SIZE,skb->len);
++ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+index 4746a6b258f08d..8af75cb37c3ee8 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+@@ -336,6 +336,51 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
+ return new_pkts;
+ }
+
++/**
++ * octep_oq_next_pkt() - Move to the next packet in Rx queue.
++ *
++ * @oq: Octeon Rx queue data structure.
++ * @buff_info: Current packet buffer info.
++ * @read_idx: Current packet index in the ring.
++ * @desc_used: Current packet descriptor number.
++ *
++ * Free the resources associated with a packet.
++ * Increment packet index in the ring and packet descriptor number.
++ */
++static void octep_oq_next_pkt(struct octep_oq *oq,
++ struct octep_rx_buffer *buff_info,
++ u32 *read_idx, u32 *desc_used)
++{
++ dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr,
++ PAGE_SIZE, DMA_FROM_DEVICE);
++ buff_info->page = NULL;
++ (*read_idx)++;
++ (*desc_used)++;
++ if (*read_idx == oq->max_count)
++ *read_idx = 0;
++}
++
++/**
++ * octep_oq_drop_rx() - Free the resources associated with a packet.
++ *
++ * @oq: Octeon Rx queue data structure.
++ * @buff_info: Current packet buffer info.
++ * @read_idx: Current packet index in the ring.
++ * @desc_used: Current packet descriptor number.
++ *
++ */
++static void octep_oq_drop_rx(struct octep_oq *oq,
++ struct octep_rx_buffer *buff_info,
++ u32 *read_idx, u32 *desc_used)
++{
++ int data_len = buff_info->len - oq->max_single_buffer_size;
++
++ while (data_len > 0) {
++ octep_oq_next_pkt(oq, buff_info, read_idx, desc_used);
++ data_len -= oq->buffer_size;
++ };
++}
++
+ /**
+ * __octep_oq_process_rx() - Process hardware Rx queue and push to stack.
+ *
+@@ -367,10 +412,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+ desc_used = 0;
+ for (pkt = 0; pkt < pkts_to_process; pkt++) {
+ buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx];
+- dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
+- PAGE_SIZE, DMA_FROM_DEVICE);
+ resp_hw = page_address(buff_info->page);
+- buff_info->page = NULL;
+
+ /* Swap the length field that is in Big-Endian to CPU */
+ buff_info->len = be64_to_cpu(resp_hw->length);
+@@ -394,36 +436,33 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+ data_offset = OCTEP_OQ_RESP_HW_SIZE;
+ rx_ol_flags = 0;
+ }
++
++ octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
++
++ skb = build_skb((void *)resp_hw, PAGE_SIZE);
++ if (!skb) {
++ octep_oq_drop_rx(oq, buff_info,
++ &read_idx, &desc_used);
++ oq->stats.alloc_failures++;
++ continue;
++ }
++ skb_reserve(skb, data_offset);
++
+ rx_bytes += buff_info->len;
+
+ if (buff_info->len <= oq->max_single_buffer_size) {
+- skb = build_skb((void *)resp_hw, PAGE_SIZE);
+- skb_reserve(skb, data_offset);
+ skb_put(skb, buff_info->len);
+- read_idx++;
+- desc_used++;
+- if (read_idx == oq->max_count)
+- read_idx = 0;
+ } else {
+ struct skb_shared_info *shinfo;
+ u16 data_len;
+
+- skb = build_skb((void *)resp_hw, PAGE_SIZE);
+- skb_reserve(skb, data_offset);
+ /* Head fragment includes response header(s);
+ * subsequent fragments contains only data.
+ */
+ skb_put(skb, oq->max_single_buffer_size);
+- read_idx++;
+- desc_used++;
+- if (read_idx == oq->max_count)
+- read_idx = 0;
+-
+ shinfo = skb_shinfo(skb);
+ data_len = buff_info->len - oq->max_single_buffer_size;
+ while (data_len) {
+- dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
+- PAGE_SIZE, DMA_FROM_DEVICE);
+ buff_info = (struct octep_rx_buffer *)
+ &oq->buff_info[read_idx];
+ if (data_len < oq->buffer_size) {
+@@ -438,11 +477,8 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+ buff_info->page, 0,
+ buff_info->len,
+ buff_info->len);
+- buff_info->page = NULL;
+- read_idx++;
+- desc_used++;
+- if (read_idx == oq->max_count)
+- read_idx = 0;
++
++ octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 82832a24fbd866..da69350c6f7654 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -2411,7 +2411,7 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link));
+ if (!(cfg & BIT_ULL(12)))
+ continue;
+- bmap |= (1 << i);
++ bmap |= BIT_ULL(i);
+ cfg &= ~BIT_ULL(12);
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link), cfg);
+@@ -2432,7 +2432,7 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+
+ /* Set NIX_AF_TL3_TL2_LINKX_CFG[ENA] for the TL3/TL2 queue */
+ for (i = 0; i < (rvu->hw->cgx_links + rvu->hw->lbk_links); i++) {
+- if (!(bmap & (1 << i)))
++ if (!(bmap & BIT_ULL(i)))
+ continue;
+ cfg = rvu_read64(rvu, blkaddr,
+ NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link));
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 16ca427cf4c3f4..ed7313c10a0524 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1171,7 +1171,7 @@ static int mtk_init_fq_dma(struct mtk_eth *eth)
+ if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr)))
+ return -ENOMEM;
+
+- for (i = 0; i < cnt; i++) {
++ for (i = 0; i < len; i++) {
+ struct mtk_tx_dma_v2 *txd;
+
+ txd = eth->scratch_ring + (j * MTK_FQ_DMA_LENGTH + i) * soc->tx.desc_size;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 20768ef2e9d2b2..86d63c5f27ce26 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1760,6 +1760,10 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ }
+ }
+
++#define MLX5_MAX_MANAGE_PAGES_CMD_ENT 1
++#define MLX5_CMD_MASK ((1UL << (cmd->vars.max_reg_cmds + \
++ MLX5_MAX_MANAGE_PAGES_CMD_ENT)) - 1)
++
+ static void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_cmd *cmd = &dev->cmd;
+@@ -1771,7 +1775,7 @@ static void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev)
+ /* wait for pending handlers to complete */
+ mlx5_eq_synchronize_cmd_irq(dev);
+ spin_lock_irqsave(&dev->cmd.alloc_lock, flags);
+- vector = ~dev->cmd.vars.bitmask & ((1ul << (1 << dev->cmd.vars.log_sz)) - 1);
++ vector = ~dev->cmd.vars.bitmask & MLX5_CMD_MASK;
+ if (!vector)
+ goto no_trig;
+
+@@ -2345,7 +2349,7 @@ int mlx5_cmd_enable(struct mlx5_core_dev *dev)
+
+ cmd->state = MLX5_CMDIF_STATE_DOWN;
+ cmd->vars.max_reg_cmds = (1 << cmd->vars.log_sz) - 1;
+- cmd->vars.bitmask = (1UL << cmd->vars.max_reg_cmds) - 1;
++ cmd->vars.bitmask = MLX5_CMD_MASK;
+
+ sema_init(&cmd->vars.sem, cmd->vars.max_reg_cmds);
+ sema_init(&cmd->vars.pages_sem, 1);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 16b67c457b6057..3e11c1c6d4f69b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -6508,7 +6508,9 @@ static void _mlx5e_remove(struct auxiliary_device *adev)
+ mlx5e_dcbnl_delete_app(priv);
+ unregister_netdev(priv->netdev);
+ _mlx5e_suspend(adev, false);
+- priv->profile->cleanup(priv);
++ /* Avoid cleanup if profile rollback failed. */
++ if (priv->profile)
++ priv->profile->cleanup(priv);
+ mlx5e_destroy_netdev(priv);
+ mlx5e_devlink_port_unregister(mlx5e_dev);
+ mlx5e_destroy_devlink(mlx5e_dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index cb7e7e4104aff9..99d9e8863bfd62 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -1080,6 +1080,12 @@ int mlx5_comp_eqn_get(struct mlx5_core_dev *dev, u16 vecidx, int *eqn)
+ struct mlx5_eq_comp *eq;
+ int ret = 0;
+
++ if (vecidx >= table->max_comp_eqs) {
++ mlx5_core_dbg(dev, "Requested vector index %u should be less than %u",
++ vecidx, table->max_comp_eqs);
++ return -EINVAL;
++ }
++
+ mutex_lock(&table->comp_lock);
+ eq = xa_load(&table->comp_eqs, vecidx);
+ if (eq) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 17f78091ad30ef..7aef30dbd82d6c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1489,7 +1489,7 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
+ }
+
+ if (err)
+- goto abort;
++ goto err_esw_enable;
+
+ esw->fdb_table.flags |= MLX5_ESW_FDB_CREATED;
+
+@@ -1503,7 +1503,8 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
+
+ return 0;
+
+-abort:
++err_esw_enable:
++ mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
+ mlx5_esw_acls_ns_cleanup(esw);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 800dfb64ec8303..7d6d859cef3f9f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -3197,7 +3197,6 @@ mlxsw_sp_nexthop_sh_counter_get(struct mlxsw_sp *mlxsw_sp,
+ {
+ struct mlxsw_sp_nexthop_group *nh_grp = nh->nhgi->nh_grp;
+ struct mlxsw_sp_nexthop_counter *nhct;
+- void *ptr;
+ int err;
+
+ nhct = xa_load(&nh_grp->nhgi->nexthop_counters, nh->id);
+@@ -3210,12 +3209,10 @@ mlxsw_sp_nexthop_sh_counter_get(struct mlxsw_sp *mlxsw_sp,
+ if (IS_ERR(nhct))
+ return nhct;
+
+- ptr = xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct,
+- GFP_KERNEL);
+- if (IS_ERR(ptr)) {
+- err = PTR_ERR(ptr);
++ err = xa_err(xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct,
++ GFP_KERNEL));
++ if (err)
+ goto err_store;
+- }
+
+ return nhct;
+
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_mirror.c b/drivers/net/ethernet/microchip/sparx5/sparx5_mirror.c
+index 15db423be4aa69..459a53676ae964 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_mirror.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_mirror.c
+@@ -31,10 +31,10 @@ static u64 sparx5_mirror_port_get(struct sparx5 *sparx5, u32 idx)
+ /* Add port to mirror (only front ports) */
+ static void sparx5_mirror_port_add(struct sparx5 *sparx5, u32 idx, u32 portno)
+ {
+- u32 val, reg = portno;
++ u64 reg = portno;
++ u32 val;
+
+- reg = portno / BITS_PER_BYTE;
+- val = BIT(portno % BITS_PER_BYTE);
++ val = BIT(do_div(reg, 32));
+
+ if (reg == 0)
+ return spx5_rmw(val, val, sparx5, ANA_AC_PROBE_PORT_CFG(idx));
+@@ -45,10 +45,10 @@ static void sparx5_mirror_port_add(struct sparx5 *sparx5, u32 idx, u32 portno)
+ /* Delete port from mirror (only front ports) */
+ static void sparx5_mirror_port_del(struct sparx5 *sparx5, u32 idx, u32 portno)
+ {
+- u32 val, reg = portno;
++ u64 reg = portno;
++ u32 val;
+
+- reg = portno / BITS_PER_BYTE;
+- val = BIT(portno % BITS_PER_BYTE);
++ val = BIT(do_div(reg, 32));
+
+ if (reg == 0)
+ return spx5_rmw(0, val, sparx5, ANA_AC_PROBE_PORT_CFG(idx));
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 01e18f645c0eda..91a8c62b180ae5 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4669,7 +4669,9 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+ if ((status & 0xffff) == 0xffff || !(status & tp->irq_mask))
+ return IRQ_NONE;
+
+- if (unlikely(status & SYSErr)) {
++ /* At least RTL8168fp may unexpectedly set the SYSErr bit */
++ if (unlikely(status & SYSErr &&
++ tp->mac_version <= RTL_GIGA_MAC_VER_06)) {
+ rtl8169_pcierr_interrupt(tp->dev);
+ goto out;
+ }
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 6b82df11fe8d09..907af4651c5534 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1750,20 +1750,19 @@ static int ravb_get_ts_info(struct net_device *ndev,
+ struct ravb_private *priv = netdev_priv(ndev);
+ const struct ravb_hw_info *hw_info = priv->info;
+
+- info->so_timestamping =
+- SOF_TIMESTAMPING_TX_SOFTWARE |
+- SOF_TIMESTAMPING_RX_SOFTWARE |
+- SOF_TIMESTAMPING_SOFTWARE |
+- SOF_TIMESTAMPING_TX_HARDWARE |
+- SOF_TIMESTAMPING_RX_HARDWARE |
+- SOF_TIMESTAMPING_RAW_HARDWARE;
+- info->tx_types = (1 << HWTSTAMP_TX_OFF) | (1 << HWTSTAMP_TX_ON);
+- info->rx_filters =
+- (1 << HWTSTAMP_FILTER_NONE) |
+- (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+- (1 << HWTSTAMP_FILTER_ALL);
+- if (hw_info->gptp || hw_info->ccc_gac)
++ if (hw_info->gptp || hw_info->ccc_gac) {
++ info->so_timestamping =
++ SOF_TIMESTAMPING_TX_SOFTWARE |
++ SOF_TIMESTAMPING_TX_HARDWARE |
++ SOF_TIMESTAMPING_RX_HARDWARE |
++ SOF_TIMESTAMPING_RAW_HARDWARE;
++ info->tx_types = (1 << HWTSTAMP_TX_OFF) | (1 << HWTSTAMP_TX_ON);
++ info->rx_filters =
++ (1 << HWTSTAMP_FILTER_NONE) |
++ (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
++ (1 << HWTSTAMP_FILTER_ALL);
+ info->phc_index = ptp_clock_index(priv->ptp.clock);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/renesas/rtsn.c b/drivers/net/ethernet/renesas/rtsn.c
+index 0e6cea42f00771..da90adef6b2b79 100644
+--- a/drivers/net/ethernet/renesas/rtsn.c
++++ b/drivers/net/ethernet/renesas/rtsn.c
+@@ -1057,6 +1057,7 @@ static netdev_tx_t rtsn_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ if (skb->len >= TX_DS) {
+ priv->stats.tx_dropped++;
+ priv->stats.tx_errors++;
++ dev_kfree_skb_any(skb);
+ goto out;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+index 362f85136c3efe..6fdd94c8919ec2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+@@ -127,10 +127,12 @@ static int mgbe_uphy_lane_bringup_serdes_up(struct net_device *ndev, void *mgbe_
+ value &= ~XPCS_WRAP_UPHY_RX_CONTROL_AUX_RX_IDDQ;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+
++ usleep_range(10, 20); /* 50ns min delay needed as per HW design */
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+ value &= ~XPCS_WRAP_UPHY_RX_CONTROL_RX_SLEEP;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+
++ usleep_range(10, 20); /* 500ns min delay needed as per HW design */
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+ value |= XPCS_WRAP_UPHY_RX_CONTROL_RX_CAL_EN;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+@@ -143,22 +145,30 @@ static int mgbe_uphy_lane_bringup_serdes_up(struct net_device *ndev, void *mgbe_
+ return err;
+ }
+
++ usleep_range(10, 20); /* 50ns min delay needed as per HW design */
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+ value |= XPCS_WRAP_UPHY_RX_CONTROL_RX_DATA_EN;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+- value |= XPCS_WRAP_UPHY_RX_CONTROL_RX_CDR_RESET;
++ value &= ~XPCS_WRAP_UPHY_RX_CONTROL_RX_PCS_PHY_RDY;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+
++ usleep_range(10, 20); /* 50ns min delay needed as per HW design */
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+- value &= ~XPCS_WRAP_UPHY_RX_CONTROL_RX_CDR_RESET;
++ value |= XPCS_WRAP_UPHY_RX_CONTROL_RX_CDR_RESET;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+
++ usleep_range(10, 20); /* 50ns min delay needed as per HW design */
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+ value |= XPCS_WRAP_UPHY_RX_CONTROL_RX_PCS_PHY_RDY;
+ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
+
++ msleep(30); /* 30ms delay needed as per HW design */
++ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
++ value &= ~XPCS_WRAP_UPHY_RX_CONTROL_RX_CDR_RESET;
++ writel(value, mgbe->xpcs + XPCS_WRAP_UPHY_RX_CONTROL);
++
+ err = readl_poll_timeout(mgbe->xpcs + XPCS_WRAP_IRQ_STATUS, value,
+ value & XPCS_WRAP_IRQ_STATUS_PCS_LINK_STS,
+ 500, 500 * 2000);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 5dbfee4aee43ce..0c4c57e7fddc2c 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -989,6 +989,7 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ if (net_ratelimit())
+ netdev_err(ndev, "TX DMA mapping error\n");
+ ndev->stats.tx_dropped++;
++ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+ desc_set_phys_addr(lp, phys, cur_p);
+@@ -1009,6 +1010,7 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ ndev->stats.tx_dropped++;
+ axienet_free_tx_chain(lp, orig_tail_ptr, ii + 1,
+ true, NULL, 0);
++ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+ desc_set_phys_addr(lp, phys, cur_p);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 44142245343d95..5708c6e71d1d97 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2797,6 +2797,31 @@ static struct hv_driver netvsc_drv = {
+ },
+ };
+
++/* Set VF's namespace same as the synthetic NIC */
++static void netvsc_event_set_vf_ns(struct net_device *ndev)
++{
++ struct net_device_context *ndev_ctx = netdev_priv(ndev);
++ struct net_device *vf_netdev;
++ int ret;
++
++ vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
++ if (!vf_netdev)
++ return;
++
++ if (!net_eq(dev_net(ndev), dev_net(vf_netdev))) {
++ ret = dev_change_net_namespace(vf_netdev, dev_net(ndev),
++ "eth%d");
++ if (ret)
++ netdev_err(vf_netdev,
++ "Cannot move to same namespace as %s: %d\n",
++ ndev->name, ret);
++ else
++ netdev_info(vf_netdev,
++ "Moved VF to namespace with: %s\n",
++ ndev->name);
++ }
++}
++
+ /*
+ * On Hyper-V, every VF interface is matched with a corresponding
+ * synthetic interface. The synthetic interface is presented first
+@@ -2809,6 +2834,11 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
+ int ret = 0;
+
++ if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) {
++ netvsc_event_set_vf_ns(event_dev);
++ return NOTIFY_DONE;
++ }
++
+ ret = check_dev_is_matching_vf(event_dev);
+ if (ret != 0)
+ return NOTIFY_DONE;
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 2da70bc3dd8695..2a31d09d43ed49 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -154,19 +154,6 @@ static struct macsec_rx_sa *macsec_rxsa_get(struct macsec_rx_sa __rcu *ptr)
+ return sa;
+ }
+
+-static struct macsec_rx_sa *macsec_active_rxsa_get(struct macsec_rx_sc *rx_sc)
+-{
+- struct macsec_rx_sa *sa = NULL;
+- int an;
+-
+- for (an = 0; an < MACSEC_NUM_AN; an++) {
+- sa = macsec_rxsa_get(rx_sc->sa[an]);
+- if (sa)
+- break;
+- }
+- return sa;
+-}
+-
+ static void free_rx_sc_rcu(struct rcu_head *head)
+ {
+ struct macsec_rx_sc *rx_sc = container_of(head, struct macsec_rx_sc, rcu_head);
+@@ -1208,15 +1195,12 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ /* If validateFrames is Strict or the C bit in the
+ * SecTAG is set, discard
+ */
+- struct macsec_rx_sa *active_rx_sa = macsec_active_rxsa_get(rx_sc);
+ if (hdr->tci_an & MACSEC_TCI_C ||
+ secy->validate_frames == MACSEC_VALIDATE_STRICT) {
+ u64_stats_update_begin(&rxsc_stats->syncp);
+ rxsc_stats->stats.InPktsNotUsingSA++;
+ u64_stats_update_end(&rxsc_stats->syncp);
+ DEV_STATS_INC(secy->netdev, rx_errors);
+- if (active_rx_sa)
+- this_cpu_inc(active_rx_sa->stats->InPktsNotUsingSA);
+ goto drop_nosa;
+ }
+
+@@ -1226,8 +1210,6 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ u64_stats_update_begin(&rxsc_stats->syncp);
+ rxsc_stats->stats.InPktsUnusedSA++;
+ u64_stats_update_end(&rxsc_stats->syncp);
+- if (active_rx_sa)
+- this_cpu_inc(active_rx_sa->stats->InPktsUnusedSA);
+ goto deliver;
+ }
+
+diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
+index 92a7a36b93ac0c..3e0b61202f0c98 100644
+--- a/drivers/net/netdevsim/dev.c
++++ b/drivers/net/netdevsim/dev.c
+@@ -836,7 +836,8 @@ static void nsim_dev_trap_report_work(struct work_struct *work)
+ nsim_dev = nsim_trap_data->nsim_dev;
+
+ if (!devl_trylock(priv_to_devlink(nsim_dev))) {
+- schedule_delayed_work(&nsim_dev->trap_data->trap_report_dw, 1);
++ queue_delayed_work(system_unbound_wq,
++ &nsim_dev->trap_data->trap_report_dw, 1);
+ return;
+ }
+
+@@ -848,11 +849,12 @@ static void nsim_dev_trap_report_work(struct work_struct *work)
+ continue;
+
+ nsim_dev_trap_report(nsim_dev_port);
++ cond_resched();
+ }
+ devl_unlock(priv_to_devlink(nsim_dev));
+-
+- schedule_delayed_work(&nsim_dev->trap_data->trap_report_dw,
+- msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS));
++ queue_delayed_work(system_unbound_wq,
++ &nsim_dev->trap_data->trap_report_dw,
++ msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS));
+ }
+
+ static int nsim_dev_traps_init(struct devlink *devlink)
+@@ -907,8 +909,9 @@ static int nsim_dev_traps_init(struct devlink *devlink)
+
+ INIT_DELAYED_WORK(&nsim_dev->trap_data->trap_report_dw,
+ nsim_dev_trap_report_work);
+- schedule_delayed_work(&nsim_dev->trap_data->trap_report_dw,
+- msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS));
++ queue_delayed_work(system_unbound_wq,
++ &nsim_dev->trap_data->trap_report_dw,
++ msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS));
+
+ return 0;
+
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index fc247f479257ae..3ab64e04a01c9c 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -45,8 +45,8 @@
+ /* Control Register 2 bits */
+ #define DP83822_FX_ENABLE BIT(14)
+
+-#define DP83822_HW_RESET BIT(15)
+-#define DP83822_SW_RESET BIT(14)
++#define DP83822_SW_RESET BIT(15)
++#define DP83822_DIG_RESTART BIT(14)
+
+ /* PHY STS bits */
+ #define DP83822_PHYSTS_DUPLEX BIT(2)
+diff --git a/drivers/net/plip/plip.c b/drivers/net/plip/plip.c
+index e39bfaefe8c50b..d81163bc910a3b 100644
+--- a/drivers/net/plip/plip.c
++++ b/drivers/net/plip/plip.c
+@@ -815,7 +815,7 @@ plip_send_packet(struct net_device *dev, struct net_local *nl,
+ return HS_TIMEOUT;
+ }
+ }
+- break;
++ fallthrough;
+
+ case PLIP_PK_LENGTH_LSB:
+ if (plip_send(nibble_timeout, dev,
+diff --git a/drivers/net/pse-pd/pse_core.c b/drivers/net/pse-pd/pse_core.c
+index f8e6854781e6ee..2906ce173f66cd 100644
+--- a/drivers/net/pse-pd/pse_core.c
++++ b/drivers/net/pse-pd/pse_core.c
+@@ -113,7 +113,7 @@ static void pse_release_pis(struct pse_controller_dev *pcdev)
+ {
+ int i;
+
+- for (i = 0; i <= pcdev->nr_lines; i++) {
++ for (i = 0; i < pcdev->nr_lines; i++) {
+ of_node_put(pcdev->pi[i].pairset[0].np);
+ of_node_put(pcdev->pi[i].pairset[1].np);
+ of_node_put(pcdev->pi[i].np);
+@@ -647,7 +647,7 @@ static int of_pse_match_pi(struct pse_controller_dev *pcdev,
+ {
+ int i;
+
+- for (i = 0; i <= pcdev->nr_lines; i++) {
++ for (i = 0; i < pcdev->nr_lines; i++) {
+ if (pcdev->pi[i].np == np)
+ return i;
+ }
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 2506aa8c603ec0..44179f4e807fc3 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1767,7 +1767,8 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ // can rename the link if it knows better.
+ if ((dev->driver_info->flags & FLAG_ETHER) != 0 &&
+ ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 ||
+- (net->dev_addr [0] & 0x02) == 0))
++ /* somebody touched it*/
++ !is_zero_ether_addr(net->dev_addr)))
+ strscpy(net->name, "eth%d", sizeof(net->name));
+ /* WLAN devices should always be named "wlan%d" */
+ if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+@@ -1870,6 +1871,7 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ * may trigger an error resubmitting itself and, worse,
+ * schedule a timer. So we kill it all just in case.
+ */
++ usbnet_mark_going_away(dev);
+ cancel_work_sync(&dev->kevent);
+ del_timer_sync(&dev->delay);
+ free_netdev(net);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index f8131f92a39288..792e9eadbfc3dc 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -4155,7 +4155,7 @@ struct virtnet_stats_ctx {
+ u32 desc_num[3];
+
+ /* The actual supported stat types. */
+- u32 bitmap[3];
++ u64 bitmap[3];
+
+ /* Used to calculate the reply buffer size. */
+ u32 size[3];
+diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
+index a6c787454a1aeb..1341374a4588a0 100644
+--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
++++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
+@@ -148,7 +148,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ } else { /* XDP buffer from page pool */
+ page = virt_to_page(xdpf->data);
+ tbi->dma_addr = page_pool_get_dma_addr(page) +
+- VMXNET3_XDP_HEADROOM;
++ (xdpf->data - (void *)xdpf);
+ dma_sync_single_for_device(&adapter->pdev->dev,
+ tbi->dma_addr, buf_size,
+ DMA_TO_DEVICE);
+diff --git a/drivers/net/wwan/wwan_core.c b/drivers/net/wwan/wwan_core.c
+index 17431f1b1a0c0e..65a7ed4d67660d 100644
+--- a/drivers/net/wwan/wwan_core.c
++++ b/drivers/net/wwan/wwan_core.c
+@@ -1038,7 +1038,7 @@ static const struct nla_policy wwan_rtnl_policy[IFLA_WWAN_MAX + 1] = {
+
+ static struct rtnl_link_ops wwan_rtnl_link_ops __read_mostly = {
+ .kind = "wwan",
+- .maxtype = __IFLA_WWAN_MAX,
++ .maxtype = IFLA_WWAN_MAX,
+ .alloc = wwan_rtnl_alloc,
+ .validate = wwan_rtnl_validate,
+ .newlink = wwan_rtnl_newlink,
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 7990c3f22ecf66..4b9fda0b1d9a33 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2506,17 +2506,29 @@ static unsigned int nvme_pci_nr_maps(struct nvme_dev *dev)
+ return 1;
+ }
+
+-static void nvme_pci_update_nr_queues(struct nvme_dev *dev)
++static bool nvme_pci_update_nr_queues(struct nvme_dev *dev)
+ {
+ if (!dev->ctrl.tagset) {
+ nvme_alloc_io_tag_set(&dev->ctrl, &dev->tagset, &nvme_mq_ops,
+ nvme_pci_nr_maps(dev), sizeof(struct nvme_iod));
+- return;
++ return true;
++ }
++
++ /* Give up if we are racing with nvme_dev_disable() */
++ if (!mutex_trylock(&dev->shutdown_lock))
++ return false;
++
++ /* Check if nvme_dev_disable() has been executed already */
++ if (!dev->online_queues) {
++ mutex_unlock(&dev->shutdown_lock);
++ return false;
+ }
+
+ blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1);
+ /* free previously allocated queues that are no longer usable */
+ nvme_free_queues(dev, dev->online_queues);
++ mutex_unlock(&dev->shutdown_lock);
++ return true;
+ }
+
+ static int nvme_pci_enable(struct nvme_dev *dev)
+@@ -2797,7 +2809,8 @@ static void nvme_reset_work(struct work_struct *work)
+ nvme_dbbuf_set(dev);
+ nvme_unquiesce_io_queues(&dev->ctrl);
+ nvme_wait_freeze(&dev->ctrl);
+- nvme_pci_update_nr_queues(dev);
++ if (!nvme_pci_update_nr_queues(dev))
++ goto out;
+ nvme_unfreeze(&dev->ctrl);
+ } else {
+ dev_warn(dev->ctrl.device, "IO queues lost\n");
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index e9e56bbb3b59d1..d203e23b75620c 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -3106,7 +3106,9 @@ int pci_host_probe(struct pci_host_bridge *bridge)
+ list_for_each_entry(child, &bus->children, node)
+ pcie_bus_configure_settings(child);
+
++ pci_lock_rescan_remove();
+ pci_bus_add_devices(bus);
++ pci_unlock_rescan_remove();
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(pci_host_probe);
+diff --git a/drivers/pci/pwrctl/pci-pwrctl-pwrseq.c b/drivers/pci/pwrctl/pci-pwrctl-pwrseq.c
+index f07758c9edaddc..0e6bd47671c2e6 100644
+--- a/drivers/pci/pwrctl/pci-pwrctl-pwrseq.c
++++ b/drivers/pci/pwrctl/pci-pwrctl-pwrseq.c
+@@ -6,9 +6,9 @@
+ #include <linux/device.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+-#include <linux/of.h>
+ #include <linux/pci-pwrctl.h>
+ #include <linux/platform_device.h>
++#include <linux/property.h>
+ #include <linux/pwrseq/consumer.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+@@ -18,6 +18,40 @@ struct pci_pwrctl_pwrseq_data {
+ struct pwrseq_desc *pwrseq;
+ };
+
++struct pci_pwrctl_pwrseq_pdata {
++ const char *target;
++ /*
++ * Called before doing anything else to perform device-specific
++ * verification between requesting the power sequencing handle.
++ */
++ int (*validate_device)(struct device *dev);
++};
++
++static int pci_pwrctl_pwrseq_qcm_wcn_validate_device(struct device *dev)
++{
++ /*
++ * Old device trees for some platforms already define wifi nodes for
++ * the WCN family of chips since before power sequencing was added
++ * upstream.
++ *
++ * These nodes don't consume the regulator outputs from the PMU, and
++ * if we allow this driver to bind to one of such "incomplete" nodes,
++ * we'll see a kernel log error about the indefinite probe deferral.
++ *
++ * Check the existence of the regulator supply that exists on all
++ * WCN models before moving forward.
++ */
++ if (!device_property_present(dev, "vddaon-supply"))
++ return -ENODEV;
++
++ return 0;
++}
++
++static const struct pci_pwrctl_pwrseq_pdata pci_pwrctl_pwrseq_qcom_wcn_pdata = {
++ .target = "wlan",
++ .validate_device = pci_pwrctl_pwrseq_qcm_wcn_validate_device,
++};
++
+ static void devm_pci_pwrctl_pwrseq_power_off(void *data)
+ {
+ struct pwrseq_desc *pwrseq = data;
+@@ -27,15 +61,26 @@ static void devm_pci_pwrctl_pwrseq_power_off(void *data)
+
+ static int pci_pwrctl_pwrseq_probe(struct platform_device *pdev)
+ {
++ const struct pci_pwrctl_pwrseq_pdata *pdata;
+ struct pci_pwrctl_pwrseq_data *data;
+ struct device *dev = &pdev->dev;
+ int ret;
+
++ pdata = device_get_match_data(dev);
++ if (!pdata || !pdata->target)
++ return -EINVAL;
++
++ if (pdata->validate_device) {
++ ret = pdata->validate_device(dev);
++ if (ret)
++ return ret;
++ }
++
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+- data->pwrseq = devm_pwrseq_get(dev, of_device_get_match_data(dev));
++ data->pwrseq = devm_pwrseq_get(dev, pdata->target);
+ if (IS_ERR(data->pwrseq))
+ return dev_err_probe(dev, PTR_ERR(data->pwrseq),
+ "Failed to get the power sequencer\n");
+@@ -64,12 +109,17 @@ static const struct of_device_id pci_pwrctl_pwrseq_of_match[] = {
+ {
+ /* ATH11K in QCA6390 package. */
+ .compatible = "pci17cb,1101",
+- .data = "wlan",
++ .data = &pci_pwrctl_pwrseq_qcom_wcn_pdata,
++ },
++ {
++ /* ATH11K in WCN6855 package. */
++ .compatible = "pci17cb,1103",
++ .data = &pci_pwrctl_pwrseq_qcom_wcn_pdata,
+ },
+ {
+ /* ATH12K in WCN7850 package. */
+ .compatible = "pci17cb,1107",
+- .data = "wlan",
++ .data = &pci_pwrctl_pwrseq_qcom_wcn_pdata,
+ },
+ { }
+ };
+diff --git a/drivers/platform/x86/dell/dell-wmi-base.c b/drivers/platform/x86/dell/dell-wmi-base.c
+index 502783a7adb118..24fd7ffadda952 100644
+--- a/drivers/platform/x86/dell/dell-wmi-base.c
++++ b/drivers/platform/x86/dell/dell-wmi-base.c
+@@ -264,6 +264,15 @@ static const struct key_entry dell_wmi_keymap_type_0010[] = {
+ /*Speaker Mute*/
+ { KE_KEY, 0x109, { KEY_MUTE} },
+
++ /* S2Idle screen off */
++ { KE_IGNORE, 0x120, { KEY_RESERVED }},
++
++ /* Leaving S4 or S2Idle suspend */
++ { KE_IGNORE, 0x130, { KEY_RESERVED }},
++
++ /* Entering S2Idle suspend */
++ { KE_IGNORE, 0x140, { KEY_RESERVED }},
++
+ /* Mic mute */
+ { KE_KEY, 0x150, { KEY_MICMUTE } },
+
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+index 9def7983d7d66a..40ddc6eb75624e 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+@@ -521,6 +521,7 @@ static int __init sysman_init(void)
+ int ret = 0;
+
+ if (!dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL) &&
++ !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Alienware", NULL) &&
+ !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "www.dell.com", NULL)) {
+ pr_err("Unable to run on non-Dell system\n");
+ return -ENODEV;
+diff --git a/drivers/platform/x86/intel/pmc/core_ssram.c b/drivers/platform/x86/intel/pmc/core_ssram.c
+index 1bde86c54eb979..239211486fb91c 100644
+--- a/drivers/platform/x86/intel/pmc/core_ssram.c
++++ b/drivers/platform/x86/intel/pmc/core_ssram.c
+@@ -29,7 +29,7 @@
+ #define LPM_REG_COUNT 28
+ #define LPM_MODE_OFFSET 1
+
+-DEFINE_FREE(pmc_core_iounmap, void __iomem *, iounmap(_T));
++DEFINE_FREE(pmc_core_iounmap, void __iomem *, if (_T) iounmap(_T))
+
+ static u32 pmc_core_find_guid(struct pmc_info *list, const struct pmc_reg_map *map)
+ {
+@@ -262,6 +262,8 @@ pmc_core_ssram_get_pmc(struct pmc_dev *pmcdev, int pmc_idx, u32 offset)
+
+ ssram_base = ssram_pcidev->resource[0].start;
+ tmp_ssram = ioremap(ssram_base, SSRAM_HDR_SIZE);
++ if (!tmp_ssram)
++ return -ENOMEM;
+
+ if (pmc_idx != PMC_IDX_MAIN) {
+ /*
+diff --git a/drivers/powercap/dtpm_devfreq.c b/drivers/powercap/dtpm_devfreq.c
+index f40bce8176df27..d1dff6ccab1207 100644
+--- a/drivers/powercap/dtpm_devfreq.c
++++ b/drivers/powercap/dtpm_devfreq.c
+@@ -178,7 +178,7 @@ static int __dtpm_devfreq_setup(struct devfreq *devfreq, struct dtpm *parent)
+ ret = dev_pm_qos_add_request(dev, &dtpm_devfreq->qos_req,
+ DEV_PM_QOS_MAX_FREQUENCY,
+ PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
+- if (ret) {
++ if (ret < 0) {
+ pr_err("Failed to add QoS request: %d\n", ret);
+ goto out_dtpm_unregister;
+ }
+diff --git a/drivers/reset/starfive/reset-starfive-jh71x0.c b/drivers/reset/starfive/reset-starfive-jh71x0.c
+index 55bbbd2de52cf9..29ce3486752f38 100644
+--- a/drivers/reset/starfive/reset-starfive-jh71x0.c
++++ b/drivers/reset/starfive/reset-starfive-jh71x0.c
+@@ -94,6 +94,9 @@ static int jh71x0_reset_status(struct reset_controller_dev *rcdev,
+ void __iomem *reg_status = data->status + offset * sizeof(u32);
+ u32 value = readl(reg_status);
+
++ if (!data->asserted)
++ return !(value & mask);
++
+ return !((value ^ data->asserted[offset]) & mask);
+ }
+
+diff --git a/drivers/soundwire/intel_ace2x.c b/drivers/soundwire/intel_ace2x.c
+index 781fe0aefa68ff..655766af2ea567 100644
+--- a/drivers/soundwire/intel_ace2x.c
++++ b/drivers/soundwire/intel_ace2x.c
+@@ -376,11 +376,12 @@ static int intel_hw_params(struct snd_pcm_substream *substream,
+ static int intel_prepare(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
++ struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai);
+ struct sdw_intel *sdw = cdns_to_intel(cdns);
+ struct sdw_cdns_dai_runtime *dai_runtime;
++ struct snd_pcm_hw_params *hw_params;
+ int ch, dir;
+- int ret = 0;
+
+ dai_runtime = cdns->dai_runtime_array[dai->id];
+ if (!dai_runtime) {
+@@ -389,12 +390,8 @@ static int intel_prepare(struct snd_pcm_substream *substream,
+ return -EIO;
+ }
+
++ hw_params = &rtd->dpcm[substream->stream].hw_params;
+ if (dai_runtime->suspended) {
+- struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+- struct snd_pcm_hw_params *hw_params;
+-
+- hw_params = &rtd->dpcm[substream->stream].hw_params;
+-
+ dai_runtime->suspended = false;
+
+ /*
+@@ -415,15 +412,11 @@ static int intel_prepare(struct snd_pcm_substream *substream,
+ /* the SHIM will be configured in the callback functions */
+
+ sdw_cdns_config_stream(cdns, ch, dir, dai_runtime->pdi);
+-
+- /* Inform DSP about PDI stream number */
+- ret = intel_params_stream(sdw, substream, dai,
+- hw_params,
+- sdw->instance,
+- dai_runtime->pdi->intel_alh_id);
+ }
+
+- return ret;
++ /* Inform DSP about PDI stream number */
++ return intel_params_stream(sdw, substream, dai, hw_params, sdw->instance,
++ dai_runtime->pdi->intel_alh_id);
+ }
+
+ static int
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index bf4892544cfdb4..bb84d304b07e57 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -691,7 +691,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+
+ dev->queues = kcalloc(nr_cpu_ids, sizeof(*dev->queues), GFP_KERNEL);
+ if (!dev->queues) {
+- dev->transport->free_device(dev);
++ hba->backend->ops->free_device(dev);
+ return NULL;
+ }
+
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 7eb94894bd68fa..717931267bda0c 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -2130,7 +2130,7 @@ static int tcmu_netlink_event_send(struct tcmu_dev *udev,
+ }
+
+ ret = genlmsg_multicast_allns(&tcmu_genl_family, skb, 0,
+- TCMU_MCGRP_CONFIG, GFP_KERNEL);
++ TCMU_MCGRP_CONFIG);
+
+ /* Wait during an add as the listener may not be up yet */
+ if (ret == 0 ||
+diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
+index 97c5dc290138bc..9dc8f4d8077cc4 100644
+--- a/drivers/usb/host/xhci-dbgcap.h
++++ b/drivers/usb/host/xhci-dbgcap.h
+@@ -110,7 +110,7 @@ struct dbc_port {
+ struct tasklet_struct push;
+
+ struct list_head write_pool;
+- struct kfifo write_fifo;
++ unsigned int tx_boundary;
+
+ bool registered;
+ };
+diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
+index b74e98e9439326..0266c2f5bc0d8e 100644
+--- a/drivers/usb/host/xhci-dbgtty.c
++++ b/drivers/usb/host/xhci-dbgtty.c
+@@ -25,16 +25,26 @@ static inline struct dbc_port *dbc_to_port(struct xhci_dbc *dbc)
+ }
+
+ static unsigned int
+-dbc_send_packet(struct dbc_port *port, char *packet, unsigned int size)
++dbc_kfifo_to_req(struct dbc_port *port, char *packet)
+ {
+- unsigned int len;
+-
+- len = kfifo_len(&port->write_fifo);
+- if (len < size)
+- size = len;
+- if (size != 0)
+- size = kfifo_out(&port->write_fifo, packet, size);
+- return size;
++ unsigned int len;
++
++ len = kfifo_len(&port->port.xmit_fifo);
++
++ if (len == 0)
++ return 0;
++
++ len = min(len, DBC_MAX_PACKET);
++
++ if (port->tx_boundary)
++ len = min(port->tx_boundary, len);
++
++ len = kfifo_out(&port->port.xmit_fifo, packet, len);
++
++ if (port->tx_boundary)
++ port->tx_boundary -= len;
++
++ return len;
+ }
+
+ static int dbc_start_tx(struct dbc_port *port)
+@@ -49,7 +59,7 @@ static int dbc_start_tx(struct dbc_port *port)
+
+ while (!list_empty(pool)) {
+ req = list_entry(pool->next, struct dbc_request, list_pool);
+- len = dbc_send_packet(port, req->buf, DBC_MAX_PACKET);
++ len = dbc_kfifo_to_req(port, req->buf);
+ if (len == 0)
+ break;
+ do_tty_wake = true;
+@@ -213,14 +223,32 @@ static ssize_t dbc_tty_write(struct tty_struct *tty, const u8 *buf,
+ {
+ struct dbc_port *port = tty->driver_data;
+ unsigned long flags;
++ unsigned int written = 0;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+- if (count)
+- count = kfifo_in(&port->write_fifo, buf, count);
+- dbc_start_tx(port);
++
++ /*
++ * Treat tty write as one usb transfer. Make sure the writes are turned
++ * into TRB request having the same size boundaries as the tty writes.
++ * Don't add data to kfifo before previous write is turned into TRBs
++ */
++ if (port->tx_boundary) {
++ spin_unlock_irqrestore(&port->port_lock, flags);
++ return 0;
++ }
++
++ if (count) {
++ written = kfifo_in(&port->port.xmit_fifo, buf, count);
++
++ if (written == count)
++ port->tx_boundary = kfifo_len(&port->port.xmit_fifo);
++
++ dbc_start_tx(port);
++ }
++
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+- return count;
++ return written;
+ }
+
+ static int dbc_tty_put_char(struct tty_struct *tty, u8 ch)
+@@ -230,7 +258,7 @@ static int dbc_tty_put_char(struct tty_struct *tty, u8 ch)
+ int status;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+- status = kfifo_put(&port->write_fifo, ch);
++ status = kfifo_put(&port->port.xmit_fifo, ch);
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ return status;
+@@ -253,7 +281,11 @@ static unsigned int dbc_tty_write_room(struct tty_struct *tty)
+ unsigned int room;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+- room = kfifo_avail(&port->write_fifo);
++ room = kfifo_avail(&port->port.xmit_fifo);
++
++ if (port->tx_boundary)
++ room = 0;
++
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ return room;
+@@ -266,7 +298,7 @@ static unsigned int dbc_tty_chars_in_buffer(struct tty_struct *tty)
+ unsigned int chars;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+- chars = kfifo_len(&port->write_fifo);
++ chars = kfifo_len(&port->port.xmit_fifo);
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ return chars;
+@@ -424,7 +456,8 @@ static int xhci_dbc_tty_register_device(struct xhci_dbc *dbc)
+ goto err_idr;
+ }
+
+- ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL);
++ ret = kfifo_alloc(&port->port.xmit_fifo, DBC_WRITE_BUF_SIZE,
++ GFP_KERNEL);
+ if (ret)
+ goto err_exit_port;
+
+@@ -453,7 +486,7 @@ static int xhci_dbc_tty_register_device(struct xhci_dbc *dbc)
+ xhci_dbc_free_requests(&port->read_pool);
+ xhci_dbc_free_requests(&port->write_pool);
+ err_free_fifo:
+- kfifo_free(&port->write_fifo);
++ kfifo_free(&port->port.xmit_fifo);
+ err_exit_port:
+ idr_remove(&dbc_tty_minors, port->minor);
+ err_idr:
+@@ -478,7 +511,7 @@ static void xhci_dbc_tty_unregister_device(struct xhci_dbc *dbc)
+ idr_remove(&dbc_tty_minors, port->minor);
+ mutex_unlock(&dbc_tty_minors_lock);
+
+- kfifo_free(&port->write_fifo);
++ kfifo_free(&port->port.xmit_fifo);
+ xhci_dbc_free_requests(&port->read_pool);
+ xhci_dbc_free_requests(&port->read_queue);
+ xhci_dbc_free_requests(&port->write_pool);
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 9262fcd4144f85..d61b4c74648df6 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -519,6 +519,7 @@ static void typec_altmode_release(struct device *dev)
+ typec_altmode_put_partner(alt);
+
+ altmode_id_remove(alt->adev.dev.parent, alt->id);
++ put_device(alt->adev.dev.parent);
+ kfree(alt);
+ }
+
+@@ -568,6 +569,8 @@ typec_register_altmode(struct device *parent,
+ alt->adev.dev.type = &typec_altmode_dev_type;
+ dev_set_name(&alt->adev.dev, "%s.%u", dev_name(parent), id);
+
++ get_device(alt->adev.dev.parent);
++
+ /* Link partners and plugs with the ports */
+ if (!is_port)
+ typec_altmode_set_partner(alt);
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index ea36c6956bf361..1af0ccf7967aba 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1374,6 +1374,7 @@ config FB_VT8500
+ config FB_WM8505
+ bool "Wondermedia WM8xxx-series frame buffer support"
+ depends on (FB = y) && HAS_IOMEM && (ARCH_VT8500 || COMPILE_TEST)
++ select FB_IOMEM_FOPS
+ select FB_SYS_FILLRECT if (!FB_WMT_GE_ROPS)
+ select FB_SYS_COPYAREA if (!FB_WMT_GE_ROPS)
+ select FB_SYS_IMAGEBLIT
+diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h
+index 1775fcc7f0e8ef..698c43dd5dc867 100644
+--- a/fs/9p/v9fs.h
++++ b/fs/9p/v9fs.h
+@@ -179,14 +179,16 @@ extern int v9fs_vfs_rename(struct mnt_idmap *idmap,
+ struct inode *old_dir, struct dentry *old_dentry,
+ struct inode *new_dir, struct dentry *new_dentry,
+ unsigned int flags);
+-extern struct inode *v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid,
+- bool new);
++extern struct inode *v9fs_inode_from_fid(struct v9fs_session_info *v9ses,
++ struct p9_fid *fid,
++ struct super_block *sb, int new);
+ extern const struct inode_operations v9fs_dir_inode_operations_dotl;
+ extern const struct inode_operations v9fs_file_inode_operations_dotl;
+ extern const struct inode_operations v9fs_symlink_inode_operations_dotl;
+ extern const struct netfs_request_ops v9fs_req_ops;
+-extern struct inode *v9fs_fid_iget_dotl(struct super_block *sb,
+- struct p9_fid *fid, bool new);
++extern struct inode *v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses,
++ struct p9_fid *fid,
++ struct super_block *sb, int new);
+
+ /* other default globals */
+ #define V9FS_PORT 564
+@@ -225,12 +227,30 @@ static inline int v9fs_proto_dotl(struct v9fs_session_info *v9ses)
+ */
+ static inline struct inode *
+ v9fs_get_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
+- struct super_block *sb, bool new)
++ struct super_block *sb)
+ {
+ if (v9fs_proto_dotl(v9ses))
+- return v9fs_fid_iget_dotl(sb, fid, new);
++ return v9fs_inode_from_fid_dotl(v9ses, fid, sb, 0);
+ else
+- return v9fs_fid_iget(sb, fid, new);
++ return v9fs_inode_from_fid(v9ses, fid, sb, 0);
++}
++
++/**
++ * v9fs_get_new_inode_from_fid - Helper routine to populate an inode by
++ * issuing an attribute request
++ * @v9ses: session information
++ * @fid: fid to issue attribute request for
++ * @sb: superblock on which to create inode
++ *
++ */
++static inline struct inode *
++v9fs_get_new_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
++ struct super_block *sb)
++{
++ if (v9fs_proto_dotl(v9ses))
++ return v9fs_inode_from_fid_dotl(v9ses, fid, sb, 1);
++ else
++ return v9fs_inode_from_fid(v9ses, fid, sb, 1);
+ }
+
+ #endif
+diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
+index 7923c3c347cbd5..d3aefbec4de6e3 100644
+--- a/fs/9p/v9fs_vfs.h
++++ b/fs/9p/v9fs_vfs.h
+@@ -42,7 +42,7 @@ struct inode *v9fs_alloc_inode(struct super_block *sb);
+ void v9fs_free_inode(struct inode *inode);
+ void v9fs_set_netfs_context(struct inode *inode);
+ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+- struct inode *inode, struct p9_qid *qid, umode_t mode, dev_t rdev);
++ struct inode *inode, umode_t mode, dev_t rdev);
+ void v9fs_evict_inode(struct inode *inode);
+ #if (BITS_PER_LONG == 32)
+ #define QID2INO(q) ((ino_t) (((q)->path+2) ^ (((q)->path) >> 32)))
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index fd72fc38c8f5b1..3e68521f4e2f98 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -256,12 +256,9 @@ void v9fs_set_netfs_context(struct inode *inode)
+ }
+
+ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+- struct inode *inode, struct p9_qid *qid, umode_t mode, dev_t rdev)
++ struct inode *inode, umode_t mode, dev_t rdev)
+ {
+ int err = 0;
+- struct v9fs_inode *v9inode = V9FS_I(inode);
+-
+- memcpy(&v9inode->qid, qid, sizeof(struct p9_qid));
+
+ inode_init_owner(&nop_mnt_idmap, inode, NULL, mode);
+ inode->i_blocks = 0;
+@@ -365,59 +362,105 @@ void v9fs_evict_inode(struct inode *inode)
+ clear_inode(inode);
+ }
+
+-struct inode *
+-v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid, bool new)
++static int v9fs_test_inode(struct inode *inode, void *data)
++{
++ int umode;
++ dev_t rdev;
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_wstat *st = (struct p9_wstat *)data;
++ struct v9fs_session_info *v9ses = v9fs_inode2v9ses(inode);
++
++ umode = p9mode2unixmode(v9ses, st, &rdev);
++ /* don't match inode of different type */
++ if (inode_wrong_type(inode, umode))
++ return 0;
++
++ /* compare qid details */
++ if (memcmp(&v9inode->qid.version,
++ &st->qid.version, sizeof(v9inode->qid.version)))
++ return 0;
++
++ if (v9inode->qid.type != st->qid.type)
++ return 0;
++
++ if (v9inode->qid.path != st->qid.path)
++ return 0;
++ return 1;
++}
++
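++/* Always get a new inode */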
++static int v9fs_test_new_inode(struct inode *inode, void *data)
++{
++ return 0;
++}
++
++static int v9fs_set_inode(struct inode *inode, void *data)
++{
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_wstat *st = (struct p9_wstat *)data;
++
++ memcpy(&v9inode->qid, &st->qid, sizeof(st->qid));
++ return 0;
++}
++
++static struct inode *v9fs_qid_iget(struct super_block *sb,
++ struct p9_qid *qid,
++ struct p9_wstat *st,
++ int new)
+ {
+ dev_t rdev;
+ int retval;
+ umode_t umode;
+ struct inode *inode;
+- struct p9_wstat *st;
+ struct v9fs_session_info *v9ses = sb->s_fs_info;
++ int (*test)(struct inode *inode, void *data);
+
+- inode = iget_locked(sb, QID2INO(&fid->qid));
+- if (unlikely(!inode))
+- return ERR_PTR(-ENOMEM);
+- if (!(inode->i_state & I_NEW)) {
+- if (!new) {
+- goto done;
+- } else {
+- p9_debug(P9_DEBUG_VFS, "WARNING: Inode collision %ld\n",
+- inode->i_ino);
+- iput(inode);
+- remove_inode_hash(inode);
+- inode = iget_locked(sb, QID2INO(&fid->qid));
+- WARN_ON(!(inode->i_state & I_NEW));
+- }
+- }
++ if (new)
++ test = v9fs_test_new_inode;
++ else
++ test = v9fs_test_inode;
+
++ inode = iget5_locked(sb, QID2INO(qid), test, v9fs_set_inode, st);
++ if (!inode)
++ return ERR_PTR(-ENOMEM);
++ if (!(inode->i_state & I_NEW))
++ return inode;
+ /*
+ * initialize the inode with the stat info
+ * FIXME!! we may need support for stale inodes
+ * later.
+ */
+- st = p9_client_stat(fid);
+- if (IS_ERR(st)) {
+- retval = PTR_ERR(st);
+- goto error;
+- }
+-
++ inode->i_ino = QID2INO(qid);
+ umode = p9mode2unixmode(v9ses, st, &rdev);
+- retval = v9fs_init_inode(v9ses, inode, &fid->qid, umode, rdev);
+- v9fs_stat2inode(st, inode, sb, 0);
+- p9stat_free(st);
+- kfree(st);
++ retval = v9fs_init_inode(v9ses, inode, umode, rdev);
+ if (retval)
+ goto error;
+
++ v9fs_stat2inode(st, inode, sb, 0);
+ v9fs_set_netfs_context(inode);
+ v9fs_cache_inode_get_cookie(inode);
+ unlock_new_inode(inode);
+-done:
+ return inode;
+ error:
+ iget_failed(inode);
+ return ERR_PTR(retval);
++
++}
++
++struct inode *
++v9fs_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
++ struct super_block *sb, int new)
++{
++ struct p9_wstat *st;
++ struct inode *inode = NULL;
++
++ st = p9_client_stat(fid);
++ if (IS_ERR(st))
++ return ERR_CAST(st);
++
++ inode = v9fs_qid_iget(sb, &st->qid, st, new);
++ p9stat_free(st);
++ kfree(st);
++ return inode;
+ }
+
+ /**
+@@ -449,15 +492,8 @@ static int v9fs_at_to_dotl_flags(int flags)
+ */
+ static void v9fs_dec_count(struct inode *inode)
+ {
+- if (!S_ISDIR(inode->i_mode) || inode->i_nlink > 2) {
+- if (inode->i_nlink) {
+- drop_nlink(inode);
+- } else {
+- p9_debug(P9_DEBUG_VFS,
+- "WARNING: unexpected i_nlink zero %d inode %ld\n",
+- inode->i_nlink, inode->i_ino);
+- }
+- }
++ if (!S_ISDIR(inode->i_mode) || inode->i_nlink > 2)
++ drop_nlink(inode);
+ }
+
+ /**
+@@ -508,9 +544,6 @@ static int v9fs_remove(struct inode *dir, struct dentry *dentry, int flags)
+ } else
+ v9fs_dec_count(inode);
+
+- if (inode->i_nlink <= 0) /* no more refs unhash it */
+- remove_inode_hash(inode);
+-
+ v9fs_invalidate_inode_attr(inode);
+ v9fs_invalidate_inode_attr(dir);
+
+@@ -576,7 +609,7 @@ v9fs_create(struct v9fs_session_info *v9ses, struct inode *dir,
+ /*
+ * instantiate inode and assign the unopened fid to the dentry
+ */
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb, true);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ p9_debug(P9_DEBUG_VFS,
+@@ -704,8 +737,10 @@ struct dentry *v9fs_vfs_lookup(struct inode *dir, struct dentry *dentry,
+ inode = NULL;
+ else if (IS_ERR(fid))
+ inode = ERR_CAST(fid);
++ else if (v9ses->cache & (CACHE_META|CACHE_LOOSE))
++ inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
+ else
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb, false);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ /*
+ * If we had a rename on the server and a parallel lookup
+ * for the new name, then make sure we instantiate with
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index c61b97bd13b9a7..143ac03b7425c0 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -52,50 +52,80 @@ static kgid_t v9fs_get_fsgid_for_create(struct inode *dir_inode)
+ return current_fsgid();
+ }
+
++static int v9fs_test_inode_dotl(struct inode *inode, void *data)
++{
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_stat_dotl *st = (struct p9_stat_dotl *)data;
+
++ /* don't match inode of different type */
++ if (inode_wrong_type(inode, st->st_mode))
++ return 0;
+
+-struct inode *
+-v9fs_fid_iget_dotl(struct super_block *sb, struct p9_fid *fid, bool new)
++ if (inode->i_generation != st->st_gen)
++ return 0;
++
++ /* compare qid details */
++ if (memcmp(&v9inode->qid.version,
++ &st->qid.version, sizeof(v9inode->qid.version)))
++ return 0;
++
++ if (v9inode->qid.type != st->qid.type)
++ return 0;
++
++ if (v9inode->qid.path != st->qid.path)
++ return 0;
++ return 1;
++}
++
++/* Always get a new inode */
++static int v9fs_test_new_inode_dotl(struct inode *inode, void *data)
++{
++ return 0;
++}
++
++static int v9fs_set_inode_dotl(struct inode *inode, void *data)
++{
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_stat_dotl *st = (struct p9_stat_dotl *)data;
++
++ memcpy(&v9inode->qid, &st->qid, sizeof(st->qid));
++ inode->i_generation = st->st_gen;
++ return 0;
++}
++
++static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
++ struct p9_qid *qid,
++ struct p9_fid *fid,
++ struct p9_stat_dotl *st,
++ int new)
+ {
+ int retval;
+ struct inode *inode;
+- struct p9_stat_dotl *st;
+ struct v9fs_session_info *v9ses = sb->s_fs_info;
++ int (*test)(struct inode *inode, void *data);
+
+- inode = iget_locked(sb, QID2INO(&fid->qid));
+- if (unlikely(!inode))
+- return ERR_PTR(-ENOMEM);
+- if (!(inode->i_state & I_NEW)) {
+- if (!new) {
+- goto done;
+- } else { /* deal with race condition in inode number reuse */
+- p9_debug(P9_DEBUG_ERROR, "WARNING: Inode collision %lx\n",
+- inode->i_ino);
+- iput(inode);
+- remove_inode_hash(inode);
+- inode = iget_locked(sb, QID2INO(&fid->qid));
+- WARN_ON(!(inode->i_state & I_NEW));
+- }
+- }
++ if (new)
++ test = v9fs_test_new_inode_dotl;
++ else
++ test = v9fs_test_inode_dotl;
+
++ inode = iget5_locked(sb, QID2INO(qid), test, v9fs_set_inode_dotl, st);
++ if (!inode)
++ return ERR_PTR(-ENOMEM);
++ if (!(inode->i_state & I_NEW))
++ return inode;
+ /*
+ * initialize the inode with the stat info
+ * FIXME!! we may need support for stale inodes
+ * later.
+ */
+- st = p9_client_getattr_dotl(fid, P9_STATS_BASIC | P9_STATS_GEN);
+- if (IS_ERR(st)) {
+- retval = PTR_ERR(st);
+- goto error;
+- }
+-
+- retval = v9fs_init_inode(v9ses, inode, &fid->qid,
++ inode->i_ino = QID2INO(qid);
++ retval = v9fs_init_inode(v9ses, inode,
+ st->st_mode, new_decode_dev(st->st_rdev));
+- v9fs_stat2inode_dotl(st, inode, 0);
+- kfree(st);
+ if (retval)
+ goto error;
+
++ v9fs_stat2inode_dotl(st, inode, 0);
+ v9fs_set_netfs_context(inode);
+ v9fs_cache_inode_get_cookie(inode);
+ retval = v9fs_get_acl(inode, fid);
+@@ -103,11 +133,27 @@ v9fs_fid_iget_dotl(struct super_block *sb, struct p9_fid *fid, bool new)
+ goto error;
+
+ unlock_new_inode(inode);
+-done:
+ return inode;
+ error:
+ iget_failed(inode);
+ return ERR_PTR(retval);
++
++}
++
++struct inode *
++v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses, struct p9_fid *fid,
++ struct super_block *sb, int new)
++{
++ struct p9_stat_dotl *st;
++ struct inode *inode = NULL;
++
++ st = p9_client_getattr_dotl(fid, P9_STATS_BASIC | P9_STATS_GEN);
++ if (IS_ERR(st))
++ return ERR_CAST(st);
++
++ inode = v9fs_qid_iget_dotl(sb, &st->qid, fid, st, new);
++ kfree(st);
++ return inode;
+ }
+
+ struct dotl_openflag_map {
+@@ -259,7 +305,7 @@ v9fs_vfs_atomic_open_dotl(struct inode *dir, struct dentry *dentry,
+ p9_debug(P9_DEBUG_VFS, "p9_client_walk failed %d\n", err);
+ goto out;
+ }
+- inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n", err);
+@@ -309,6 +355,7 @@ static int v9fs_vfs_mkdir_dotl(struct mnt_idmap *idmap,
+ umode_t omode)
+ {
+ int err;
++ struct v9fs_session_info *v9ses;
+ struct p9_fid *fid = NULL, *dfid = NULL;
+ kgid_t gid;
+ const unsigned char *name;
+@@ -318,6 +365,7 @@ static int v9fs_vfs_mkdir_dotl(struct mnt_idmap *idmap,
+ struct posix_acl *dacl = NULL, *pacl = NULL;
+
+ p9_debug(P9_DEBUG_VFS, "name %pd\n", dentry);
++ v9ses = v9fs_inode2v9ses(dir);
+
+ omode |= S_IFDIR;
+ if (dir->i_mode & S_ISGID)
+@@ -352,7 +400,7 @@ static int v9fs_vfs_mkdir_dotl(struct mnt_idmap *idmap,
+ }
+
+ /* instantiate inode and assign the unopened fid to the dentry */
+- inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n",
+@@ -749,6 +797,7 @@ v9fs_vfs_mknod_dotl(struct mnt_idmap *idmap, struct inode *dir,
+ kgid_t gid;
+ const unsigned char *name;
+ umode_t mode;
++ struct v9fs_session_info *v9ses;
+ struct p9_fid *fid = NULL, *dfid = NULL;
+ struct inode *inode;
+ struct p9_qid qid;
+@@ -758,6 +807,7 @@ v9fs_vfs_mknod_dotl(struct mnt_idmap *idmap, struct inode *dir,
+ dir->i_ino, dentry, omode,
+ MAJOR(rdev), MINOR(rdev));
+
++ v9ses = v9fs_inode2v9ses(dir);
+ dfid = v9fs_parent_fid(dentry);
+ if (IS_ERR(dfid)) {
+ err = PTR_ERR(dfid);
+@@ -788,7 +838,7 @@ v9fs_vfs_mknod_dotl(struct mnt_idmap *idmap, struct inode *dir,
+ err);
+ goto error;
+ }
+- inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n",
+diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
+index f52fdf42945cf1..489db161abc983 100644
+--- a/fs/9p/vfs_super.c
++++ b/fs/9p/vfs_super.c
+@@ -139,7 +139,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ else
+ sb->s_d_op = &v9fs_dentry_operations;
+
+- inode = v9fs_get_inode_from_fid(v9ses, fid, sb, true);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, sb);
+ if (IS_ERR(inode)) {
+ retval = PTR_ERR(inode);
+ goto release_sb;
+diff --git a/fs/backing-file.c b/fs/backing-file.c
+index 8860dac58c37e1..09a9be945d45e6 100644
+--- a/fs/backing-file.c
++++ b/fs/backing-file.c
+@@ -80,7 +80,7 @@ struct backing_aio {
+ refcount_t ref;
+ struct kiocb *orig_iocb;
+ /* used for aio completion */
+- void (*end_write)(struct file *);
++ void (*end_write)(struct file *, loff_t, ssize_t);
+ struct work_struct work;
+ long res;
+ };
+@@ -109,7 +109,7 @@ static void backing_aio_cleanup(struct backing_aio *aio, long res)
+ struct kiocb *orig_iocb = aio->orig_iocb;
+
+ if (aio->end_write)
+- aio->end_write(orig_iocb->ki_filp);
++ aio->end_write(orig_iocb->ki_filp, iocb->ki_pos, res);
+
+ orig_iocb->ki_pos = iocb->ki_pos;
+ backing_aio_put(aio);
+@@ -239,7 +239,7 @@ ssize_t backing_file_write_iter(struct file *file, struct iov_iter *iter,
+
+ ret = vfs_iter_write(file, iter, &iocb->ki_pos, rwf);
+ if (ctx->end_write)
+- ctx->end_write(ctx->user_file);
++ ctx->end_write(ctx->user_file, iocb->ki_pos, ret);
+ } else {
+ struct backing_aio *aio;
+
+@@ -317,7 +317,7 @@ ssize_t backing_file_splice_write(struct pipe_inode_info *pipe,
+ revert_creds(old_cred);
+
+ if (ctx->end_write)
+- ctx->end_write(ctx->user_file);
++ ctx->end_write(ctx->user_file, ppos ? *ppos : 0, ret);
+
+ return ret;
+ }
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 2e49d978f504ed..4ca22d4655aea5 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -3819,6 +3819,8 @@ void btrfs_free_reserved_bytes(struct btrfs_block_group *cache,
+ spin_lock(&cache->lock);
+ if (cache->ro)
+ space_info->bytes_readonly += num_bytes;
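++		/* Freed reservations in zoned mode can't be reused and stay zone-unusable. */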
++ else if (btrfs_is_zoned(cache->fs_info))
++ space_info->bytes_zone_unusable += num_bytes;
+ cache->reserved -= num_bytes;
+ space_info->bytes_reserved -= num_bytes;
+ space_info->max_extent_size = 0;
+diff --git a/fs/btrfs/dir-item.c b/fs/btrfs/dir-item.c
+index 001c0c2f872c99..1e8cd7c9472e1d 100644
+--- a/fs/btrfs/dir-item.c
++++ b/fs/btrfs/dir-item.c
+@@ -347,8 +347,8 @@ btrfs_search_dir_index_item(struct btrfs_root *root, struct btrfs_path *path,
+ return di;
+ }
+ /* Adjust return code if the key was not found in the next leaf. */
+- if (ret > 0)
+- ret = 0;
++ if (ret >= 0)
++ ret = -ENOENT;
+
+ return ERR_PTR(ret);
+ }
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 216b3b412aa0e1..64bead01354c9b 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1960,7 +1960,7 @@ static void btrfs_init_qgroup(struct btrfs_fs_info *fs_info)
+ fs_info->qgroup_seq = 1;
+ fs_info->qgroup_ulist = NULL;
+ fs_info->qgroup_rescan_running = false;
+- fs_info->qgroup_drop_subtree_thres = BTRFS_MAX_LEVEL;
++ fs_info->qgroup_drop_subtree_thres = BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT;
+ mutex_init(&fs_info->qgroup_rescan_lock);
+ }
+
+diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
+index 10ac5f657e3889..72ae8f64482c6b 100644
+--- a/fs/btrfs/extent_map.c
++++ b/fs/btrfs/extent_map.c
+@@ -240,13 +240,19 @@ static bool mergeable_maps(const struct extent_map *prev, const struct extent_ma
+ /*
+ * Handle the on-disk data extents merge for @prev and @next.
+ *
++ * @prev: left extent to merge
++ * @next: right extent to merge
++ * @merged: the extent we will not discard after the merge; updated with new values
++ *
++ * After this, one of the two extents is the new merged extent and the other is
++ * removed from the tree and likely freed. Note that @merged is one of @prev/@next,
++ * so there is const/non-const aliasing here.
++ *
+ * Only touches disk_bytenr/disk_num_bytes/offset/ram_bytes.
+ * For now only uncompressed regular extent can be merged.
+- *
+- * @prev and @next will be both updated to point to the new merged range.
+- * Thus one of them should be removed by the caller.
+ */
+-static void merge_ondisk_extents(struct extent_map *prev, struct extent_map *next)
++static void merge_ondisk_extents(const struct extent_map *prev, const struct extent_map *next,
++ struct extent_map *merged)
+ {
+ u64 new_disk_bytenr;
+ u64 new_disk_num_bytes;
+@@ -281,15 +287,10 @@ static void merge_ondisk_extents(struct extent_map *prev, struct extent_map *nex
+ new_disk_bytenr;
+ new_offset = prev->disk_bytenr + prev->offset - new_disk_bytenr;
+
+- prev->disk_bytenr = new_disk_bytenr;
+- prev->disk_num_bytes = new_disk_num_bytes;
+- prev->ram_bytes = new_disk_num_bytes;
+- prev->offset = new_offset;
+-
+- next->disk_bytenr = new_disk_bytenr;
+- next->disk_num_bytes = new_disk_num_bytes;
+- next->ram_bytes = new_disk_num_bytes;
+- next->offset = new_offset;
++ merged->disk_bytenr = new_disk_bytenr;
++ merged->disk_num_bytes = new_disk_num_bytes;
++ merged->ram_bytes = new_disk_num_bytes;
++ merged->offset = new_offset;
+ }
+
+ static void dump_extent_map(struct btrfs_fs_info *fs_info, const char *prefix,
+@@ -358,7 +359,7 @@ static void try_merge_map(struct btrfs_inode *inode, struct extent_map *em)
+ em->generation = max(em->generation, merge->generation);
+
+ if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)
+- merge_ondisk_extents(merge, em);
++ merge_ondisk_extents(merge, em, em);
+ em->flags |= EXTENT_FLAG_MERGED;
+
+ validate_extent_map(fs_info, em);
+@@ -375,7 +376,7 @@ static void try_merge_map(struct btrfs_inode *inode, struct extent_map *em)
+ if (rb && can_merge_extent_map(merge) && mergeable_maps(em, merge)) {
+ em->len += merge->len;
+ if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)
+- merge_ondisk_extents(em, merge);
++ merge_ondisk_extents(em, merge, em);
+ validate_extent_map(fs_info, em);
+ rb_erase(&merge->rb_node, &tree->root);
+ RB_CLEAR_NODE(&merge->rb_node);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b1b6564ab68f0c..13675e128af6e0 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4339,11 +4339,8 @@ static int btrfs_unlink_subvol(struct btrfs_trans_handle *trans,
+ */
+ if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) {
+ di = btrfs_search_dir_index_item(root, path, dir_ino, &fname.disk_name);
+- if (IS_ERR_OR_NULL(di)) {
+- if (!di)
+- ret = -ENOENT;
+- else
+- ret = PTR_ERR(di);
++ if (IS_ERR(di)) {
++ ret = PTR_ERR(di);
+ btrfs_abort_transaction(trans, ret);
+ goto out;
+ }
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index feb8f9f2f3582d..427e8887a51056 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1407,7 +1407,7 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ fs_info->quota_root = NULL;
+ fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_ON;
+ fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE;
+- fs_info->qgroup_drop_subtree_thres = BTRFS_MAX_LEVEL;
++ fs_info->qgroup_drop_subtree_thres = BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT;
+ spin_unlock(&fs_info->qgroup_lock);
+
+ btrfs_free_qgroup_config(fs_info);
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index deb479d176a96c..1ef566cf1c2a69 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -121,6 +121,8 @@ struct btrfs_inode;
+ #define BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN (1ULL << 63)
+ #define BTRFS_QGROUP_RUNTIME_FLAG_NO_ACCOUNTING (1ULL << 62)
+
++#define BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT (3)
++
+ /*
+ * Record a dirty extent, and info qgroup to update quota on it
+ */
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 98fa0f382480a2..926d7a9ed99df0 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -340,6 +340,15 @@ static int btrfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ fallthrough;
+ case Opt_compress:
+ case Opt_compress_type:
++ /*
++		 * Provide the same semantics as older kernels that don't use fs
++		 * context: specifying the "compress" option clears
++ * "force-compress" without the need to pass
++ * "compress-force=[no|none]" before specifying "compress".
++ */
++ if (opt != Opt_compress_force && opt != Opt_compress_force_type)
++ btrfs_clear_opt(ctx->mount_opt, FORCE_COMPRESS);
++
+ if (opt == Opt_compress || opt == Opt_compress_force) {
+ ctx->compress_type = BTRFS_COMPRESS_ZLIB;
+ ctx->compress_level = BTRFS_ZLIB_DEFAULT_LEVEL;
+@@ -1498,8 +1507,7 @@ static int btrfs_reconfigure(struct fs_context *fc)
+ sync_filesystem(sb);
+ set_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state);
+
+- if (!mount_reconfigure &&
+- !btrfs_check_options(fs_info, &ctx->mount_opt, fc->sb_flags))
++ if (!btrfs_check_options(fs_info, &ctx->mount_opt, fc->sb_flags))
+ return -EINVAL;
+
+ ret = btrfs_check_features(fs_info, !(fc->sb_flags & SB_RDONLY));
+diff --git a/fs/fuse/passthrough.c b/fs/fuse/passthrough.c
+index 9666d13884ce59..d1b570d39501cc 100644
+--- a/fs/fuse/passthrough.c
++++ b/fs/fuse/passthrough.c
+@@ -18,11 +18,11 @@ static void fuse_file_accessed(struct file *file)
+ fuse_invalidate_atime(inode);
+ }
+
+-static void fuse_file_modified(struct file *file)
++static void fuse_passthrough_end_write(struct file *file, loff_t pos, ssize_t ret)
+ {
+ struct inode *inode = file_inode(file);
+
+- fuse_invalidate_attr_mask(inode, FUSE_STATX_MODSIZE);
++ fuse_write_update_attr(inode, pos, ret);
+ }
+
+ ssize_t fuse_passthrough_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+@@ -63,7 +63,7 @@ ssize_t fuse_passthrough_write_iter(struct kiocb *iocb,
+ struct backing_file_ctx ctx = {
+ .cred = ff->cred,
+ .user_file = file,
+- .end_write = fuse_file_modified,
++ .end_write = fuse_passthrough_end_write,
+ };
+
+ pr_debug("%s: backing_file=0x%p, pos=%lld, len=%zu\n", __func__,
+@@ -110,7 +110,7 @@ ssize_t fuse_passthrough_splice_write(struct pipe_inode_info *pipe,
+ struct backing_file_ctx ctx = {
+ .cred = ff->cred,
+ .user_file = out,
+- .end_write = fuse_file_modified,
++ .end_write = fuse_passthrough_end_write,
+ };
+
+ pr_debug("%s: backing_file=0x%p, pos=%lld, len=%zu, flags=0x%x\n", __func__,
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 974ecf5e0d9522..3ab410059dc202 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -187,7 +187,7 @@ int dbMount(struct inode *ipbmap)
+ }
+
+ bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+- if (!bmp->db_numag || bmp->db_numag >= MAXAG) {
++ if (!bmp->db_numag || bmp->db_numag > MAXAG) {
+ err = -EINVAL;
+ goto err_release_metapage;
+ }
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 155c6abda71da6..666070457cf747 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -3917,7 +3917,9 @@ struct mnt_namespace *copy_mnt_ns(unsigned long flags, struct mnt_namespace *ns,
+ new = copy_tree(old, old->mnt.mnt_root, copy_flags);
+ if (IS_ERR(new)) {
+ namespace_unlock();
+- free_mnt_ns(new_ns);
++ ns_free_inum(&new_ns->ns);
++ dec_mnt_namespaces(new_ns->ucounts);
++ mnt_ns_release(new_ns);
+ return ERR_CAST(new);
+ }
+ if (user_ns != ns->user_ns) {
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 64cf5d7b7a4e27..d96d8cfd1ff86b 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1358,21 +1358,47 @@ static void destroy_delegation(struct nfs4_delegation *dp)
+ destroy_unhashed_deleg(dp);
+ }
+
++/**
++ * revoke_delegation - perform nfs4 delegation structure cleanup
++ * @dp: pointer to the delegation
++ *
++ * This function assumes that it's called either from the administrative
++ * interface (nfsd4_revoke_states()), which is revoking a specific delegation
++ * stateid, or from a laundromat thread (nfsd4_laundromat()) that has
++ * determined that this specific state has expired and needs to be revoked
++ * (both mark the state with the appropriate stid sc_status mode). It is also
++ * assumed that a reference was taken on the @dp state.
++ *
++ * If this function finds that the @dp state is SC_STATUS_FREED, it means
++ * that a FREE_STATEID operation for this stateid has been processed and
++ * we can proceed to removing it from the recall list. However, if the @dp
++ * state isn't marked SC_STATUS_FREED, we need to place it on the cl_revoked
++ * list and wait for the FREE_STATEID to arrive from the client. At the same
++ * time, we need to mark it as SC_STATUS_FREEABLE to indicate to the
++ * nfsd4_free_stateid() function that this stateid has already been added
++ * to the cl_revoked list and that nfsd4_free_stateid() is now responsible
++ * for removing it from the list. Inspecting where the delegation is in
++ * the revocation process is done under the clp->cl_lock.
++ */
+ static void revoke_delegation(struct nfs4_delegation *dp)
+ {
+ struct nfs4_client *clp = dp->dl_stid.sc_client;
+
+ WARN_ON(!list_empty(&dp->dl_recall_lru));
++ WARN_ON_ONCE(!(dp->dl_stid.sc_status &
++ (SC_STATUS_REVOKED | SC_STATUS_ADMIN_REVOKED)));
+
+ trace_nfsd_stid_revoke(&dp->dl_stid);
+
+- if (dp->dl_stid.sc_status &
+- (SC_STATUS_REVOKED | SC_STATUS_ADMIN_REVOKED)) {
+- spin_lock(&clp->cl_lock);
+- refcount_inc(&dp->dl_stid.sc_count);
+- list_add(&dp->dl_recall_lru, &clp->cl_revoked);
+- spin_unlock(&clp->cl_lock);
++ spin_lock(&clp->cl_lock);
++ if (dp->dl_stid.sc_status & SC_STATUS_FREED) {
++ list_del_init(&dp->dl_recall_lru);
++ goto out;
+ }
++ list_add(&dp->dl_recall_lru, &clp->cl_revoked);
++ dp->dl_stid.sc_status |= SC_STATUS_FREEABLE;
++out:
++ spin_unlock(&clp->cl_lock);
+ destroy_unhashed_deleg(dp);
+ }
+
+@@ -1781,6 +1807,7 @@ void nfsd4_revoke_states(struct net *net, struct super_block *sb)
+ mutex_unlock(&stp->st_mutex);
+ break;
+ case SC_TYPE_DELEG:
++ refcount_inc(&stid->sc_count);
+ dp = delegstateid(stid);
+ spin_lock(&state_lock);
+ if (!unhash_delegation_locked(
+@@ -6544,6 +6571,7 @@ nfs4_laundromat(struct nfsd_net *nn)
+ dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
+ if (!state_expired(<, dp->dl_time))
+ break;
++ refcount_inc(&dp->dl_stid.sc_count);
+ unhash_delegation_locked(dp, SC_STATUS_REVOKED);
+ list_add(&dp->dl_recall_lru, &reaplist);
+ }
+@@ -7161,7 +7189,9 @@ nfsd4_free_stateid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ s->sc_status |= SC_STATUS_CLOSED;
+ spin_unlock(&s->sc_lock);
+ dp = delegstateid(s);
+- list_del_init(&dp->dl_recall_lru);
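++	/*
++	 * revoke_delegation() only puts FREEABLE delegations on cl_revoked;
++	 * marking FREED here tells it this stateid has now been freed.
++	 */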
++ if (s->sc_status & SC_STATUS_FREEABLE)
++ list_del_init(&dp->dl_recall_lru);
++ s->sc_status |= SC_STATUS_FREED;
+ spin_unlock(&cl->cl_lock);
+ nfs4_put_stid(s);
+ ret = nfs_ok;
+@@ -7491,7 +7521,9 @@ nfsd4_delegreturn(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ if ((status = fh_verify(rqstp, &cstate->current_fh, S_IFREG, 0)))
+ return status;
+
+- status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG, 0, &s, nn);
++ status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG,
++ SC_STATUS_REVOKED | SC_STATUS_FREEABLE,
++ &s, nn);
+ if (status)
+ goto out;
+ dp = delegstateid(s);
+@@ -8687,7 +8719,7 @@ nfs4_state_shutdown_net(struct net *net)
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
+ shrinker_free(nn->nfsd_client_shrinker);
+- cancel_work(&nn->nfsd_shrinker_work);
++ cancel_work_sync(&nn->nfsd_shrinker_work);
+ cancel_delayed_work_sync(&nn->laundromat_work);
+ locks_end_grace(&nn->nfsd4_manager);
+
+diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
+index ec4559ecd193b2..df345896a7450f 100644
+--- a/fs/nfsd/state.h
++++ b/fs/nfsd/state.h
+@@ -113,6 +113,8 @@ struct nfs4_stid {
+ /* For a deleg stateid kept around only to process free_stateid's: */
+ #define SC_STATUS_REVOKED BIT(1)
+ #define SC_STATUS_ADMIN_REVOKED BIT(2)
++#define SC_STATUS_FREEABLE BIT(3)
++#define SC_STATUS_FREED BIT(4)
+ unsigned short sc_status;
+
+ struct list_head sc_cp_list;
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 14e470fb88706a..7f9073d5c5a567 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -77,7 +77,8 @@ void nilfs_forget_buffer(struct buffer_head *bh)
+ const unsigned long clear_bits =
+ (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) |
+ BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) |
+- BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected));
++ BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) |
++ BIT(BH_Delay));
+
+ lock_buffer(bh);
+ set_mask_bits(&bh->b_state, clear_bits, 0);
+@@ -414,7 +415,8 @@ void nilfs_clear_folio_dirty(struct folio *folio, bool silent)
+ const unsigned long clear_bits =
+ (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) |
+ BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) |
+- BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected));
++ BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) |
++ BIT(BH_Delay));
+
+ bh = head;
+ do {
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 272c8a1dab3c27..82ae8254c068be 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -183,8 +183,10 @@ static bool fsnotify_event_needs_parent(struct inode *inode, __u32 mnt_mask,
+ BUILD_BUG_ON(FS_EVENTS_POSS_ON_CHILD & ~FS_EVENTS_POSS_TO_PARENT);
+
+ /* Did either inode/sb/mount subscribe for events with parent/name? */
+- marks_mask |= fsnotify_parent_needed_mask(inode->i_fsnotify_mask);
+- marks_mask |= fsnotify_parent_needed_mask(inode->i_sb->s_fsnotify_mask);
++ marks_mask |= fsnotify_parent_needed_mask(
++ READ_ONCE(inode->i_fsnotify_mask));
++ marks_mask |= fsnotify_parent_needed_mask(
++ READ_ONCE(inode->i_sb->s_fsnotify_mask));
+ marks_mask |= fsnotify_parent_needed_mask(mnt_mask);
+
+ /* Did they subscribe for this event with parent/name info? */
+@@ -195,8 +197,8 @@ static bool fsnotify_event_needs_parent(struct inode *inode, __u32 mnt_mask,
+ static inline bool fsnotify_object_watched(struct inode *inode, __u32 mnt_mask,
+ __u32 mask)
+ {
+- __u32 marks_mask = inode->i_fsnotify_mask | mnt_mask |
+- inode->i_sb->s_fsnotify_mask;
++ __u32 marks_mask = READ_ONCE(inode->i_fsnotify_mask) | mnt_mask |
++ READ_ONCE(inode->i_sb->s_fsnotify_mask);
+
+ return mask & marks_mask & ALL_FSNOTIFY_EVENTS;
+ }
+@@ -213,7 +215,8 @@ int __fsnotify_parent(struct dentry *dentry, __u32 mask, const void *data,
+ int data_type)
+ {
+ const struct path *path = fsnotify_data_path(data, data_type);
+- __u32 mnt_mask = path ? real_mount(path->mnt)->mnt_fsnotify_mask : 0;
++ __u32 mnt_mask = path ?
++ READ_ONCE(real_mount(path->mnt)->mnt_fsnotify_mask) : 0;
+ struct inode *inode = d_inode(dentry);
+ struct dentry *parent;
+ bool parent_watched = dentry->d_flags & DCACHE_FSNOTIFY_PARENT_WATCHED;
+@@ -557,13 +560,13 @@ int fsnotify(__u32 mask, const void *data, int data_type, struct inode *dir,
+ (!inode2 || !inode2->i_fsnotify_marks))
+ return 0;
+
+- marks_mask = sb->s_fsnotify_mask;
++ marks_mask = READ_ONCE(sb->s_fsnotify_mask);
+ if (mnt)
+- marks_mask |= mnt->mnt_fsnotify_mask;
++ marks_mask |= READ_ONCE(mnt->mnt_fsnotify_mask);
+ if (inode)
+- marks_mask |= inode->i_fsnotify_mask;
++ marks_mask |= READ_ONCE(inode->i_fsnotify_mask);
+ if (inode2)
+- marks_mask |= inode2->i_fsnotify_mask;
++ marks_mask |= READ_ONCE(inode2->i_fsnotify_mask);
+
+
+ /*
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 4ffc30606e0b91..e163a4b7902244 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -569,7 +569,7 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
+ /* more bits in old than in new? */
+ int dropped = (old_mask & ~new_mask);
+ /* more bits in this fsn_mark than the inode's mask? */
+- int do_inode = (new_mask & ~inode->i_fsnotify_mask);
++ int do_inode = (new_mask & ~READ_ONCE(inode->i_fsnotify_mask));
+
+ /* update the inode with this new fsn_mark */
+ if (dropped || do_inode)
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 5e170e71308868..c45b222cf9c11c 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -128,7 +128,7 @@ __u32 fsnotify_conn_mask(struct fsnotify_mark_connector *conn)
+ if (WARN_ON(!fsnotify_valid_obj_type(conn->type)))
+ return 0;
+
+- return *fsnotify_conn_mask_p(conn);
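++	/* Pairs with the WRITE_ONCE() in __fsnotify_recalc_mask(). */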
++ return READ_ONCE(*fsnotify_conn_mask_p(conn));
+ }
+
+ static void fsnotify_get_sb_watched_objects(struct super_block *sb)
+@@ -245,7 +245,11 @@ static void *__fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+ !(mark->flags & FSNOTIFY_MARK_FLAG_NO_IREF))
+ want_iref = true;
+ }
+- *fsnotify_conn_mask_p(conn) = new_mask;
++ /*
++ * We use WRITE_ONCE() to prevent silly compiler optimizations from
++ * confusing readers not holding conn->lock with partial updates.
++ */
++ WRITE_ONCE(*fsnotify_conn_mask_p(conn), new_mask);
+
+ return fsnotify_update_iref(conn, want_iref);
+ }
+diff --git a/fs/open.c b/fs/open.c
+index 22adbef7ecc2a6..30bfcddd505de4 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1458,6 +1458,8 @@ SYSCALL_DEFINE4(openat2, int, dfd, const char __user *, filename,
+
+ if (unlikely(usize < OPEN_HOW_SIZE_VER0))
+ return -EINVAL;
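++	/* Cap usize: copy_struct_from_user() scans any tail beyond sizeof(tmp) for nonzero bytes. */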
++ if (unlikely(usize > PAGE_SIZE))
++ return -E2BIG;
+
+ err = copy_struct_from_user(&tmp, sizeof(tmp), how, usize);
+ if (err)
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 1a411cae57ed9c..7b085f89e5a152 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -229,6 +229,11 @@ static void ovl_file_modified(struct file *file)
+ ovl_copyattr(file_inode(file));
+ }
+
++static void ovl_file_end_write(struct file *file, loff_t pos, ssize_t ret)
++{
++ ovl_file_modified(file);
++}
++
+ static void ovl_file_accessed(struct file *file)
+ {
+ struct inode *inode, *upperinode;
+@@ -292,7 +297,7 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ struct backing_file_ctx ctx = {
+ .cred = ovl_creds(inode->i_sb),
+ .user_file = file,
+- .end_write = ovl_file_modified,
++ .end_write = ovl_file_end_write,
+ };
+
+ if (!iov_iter_count(iter))
+@@ -362,7 +367,7 @@ static ssize_t ovl_splice_write(struct pipe_inode_info *pipe, struct file *out,
+ struct backing_file_ctx ctx = {
+ .cred = ovl_creds(inode->i_sb),
+ .user_file = out,
+- .end_write = ovl_file_modified,
++ .end_write = ovl_file_end_write,
+ };
+
+ inode_lock(inode);
+diff --git a/fs/select.c b/fs/select.c
+index 9515c3fa1a03e8..bc185d111436cd 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -780,7 +780,9 @@ static inline int get_sigset_argpack(struct sigset_argpack *to,
+ {
+ // the path is hot enough for overhead of copy_from_user() to matter
+ if (from) {
+- if (!user_read_access_begin(from, sizeof(*from)))
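++		/* Pointer masking avoids the access_ok() speculation barrier on this hot path. */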
++ if (can_do_masked_user_access())
++ from = masked_user_access_begin(from);
++ else if (!user_read_access_begin(from, sizeof(*from)))
+ return -EFAULT;
+ unsafe_get_user(to->p, &from->p, Efault);
+ unsafe_get_user(to->size, &from->size, Efault);
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 33e2860010158c..9bdb6e7f1dc3a9 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -1780,7 +1780,7 @@ static int cifs_init_netfs(void)
+ nomem_subreqpool:
+ kmem_cache_destroy(cifs_io_subrequest_cachep);
+ nomem_subreq:
+- mempool_destroy(&cifs_io_request_pool);
++ mempool_exit(&cifs_io_request_pool);
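++	/* cifs_io_request_pool is embedded, so exit it rather than destroy (and kfree) it. */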
+ nomem_reqpool:
+ kmem_cache_destroy(cifs_io_request_cachep);
+ nomem_req:
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index bc926ab2555bcb..4069b69fbc7e04 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -920,8 +920,15 @@ static int smb3_reconfigure(struct fs_context *fc)
+ else {
+ kfree_sensitive(ses->password);
+ ses->password = kstrdup(ctx->password, GFP_KERNEL);
++ if (!ses->password)
++ return -ENOMEM;
+ kfree_sensitive(ses->password2);
+ ses->password2 = kstrdup(ctx->password2, GFP_KERNEL);
++ if (!ses->password2) {
++ kfree_sensitive(ses->password);
++ ses->password = NULL;
++ return -ENOMEM;
++ }
+ }
+ STEAL_STRING(cifs_sb, ctx, domainname);
+ STEAL_STRING(cifs_sb, ctx, nodename);
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index ad0e0de9a165d4..7429b96a6ae5ef 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -330,6 +330,18 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+
+ switch ((type = le64_to_cpu(buf->InodeType))) {
+ case NFS_SPECFILE_LNK:
++ if (len == 0 || (len % 2)) {
++ cifs_dbg(VFS, "srv returned malformed nfs symlink buffer\n");
++ return -EIO;
++ }
++ /*
++		 * Check that the buffer does not contain a UTF-16 null codepoint,
++		 * because Linux cannot process a symlink with a null byte.
++ */
++ if (UniStrnlen((wchar_t *)buf->DataBuffer, len/2) != len/2) {
++ cifs_dbg(VFS, "srv returned null byte in nfs symlink target location\n");
++ return -EIO;
++ }
+ data->symlink_target = cifs_strndup_from_utf16(buf->DataBuffer,
+ len, true,
+ cifs_sb->local_nls);
+@@ -340,8 +352,19 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+ break;
+ case NFS_SPECFILE_CHR:
+ case NFS_SPECFILE_BLK:
++ /* DataBuffer for block and char devices contains two 32-bit numbers */
++ if (len != 8) {
++ cifs_dbg(VFS, "srv returned malformed nfs buffer for type: 0x%llx\n", type);
++ return -EIO;
++ }
++ break;
+ case NFS_SPECFILE_FIFO:
+ case NFS_SPECFILE_SOCK:
++ /* DataBuffer for fifos and sockets is empty */
++ if (len != 0) {
++ cifs_dbg(VFS, "srv returned malformed nfs buffer for type: 0x%llx\n", type);
++ return -EIO;
++ }
+ break;
+ default:
+ cifs_dbg(VFS, "%s: unhandled inode type: 0x%llx\n",
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index e9be7b43bb6b8d..b9e332443b0d90 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -1156,7 +1156,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_fid fid;
+ unsigned int size[1];
+ void *data[1];
+- struct smb2_file_full_ea_info *ea = NULL;
++ struct smb2_file_full_ea_info *ea;
+ struct smb2_query_info_rsp *rsp;
+ int rc, used_len = 0;
+ int retries = 0, cur_sleep = 1;
+@@ -1177,6 +1177,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ if (!utf16_path)
+ return -ENOMEM;
+
++ ea = NULL;
+ resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;
+ vars = kzalloc(sizeof(*vars), GFP_KERNEL);
+ if (!vars) {
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 3d9e6e15dd900a..194a4262d57a09 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -3308,6 +3308,15 @@ SMB2_ioctl_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
+ return rc;
+
+ if (indatalen) {
++ unsigned int len;
++
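++		/*
++		 * Encrypted requests pad the payload in place, so the
++		 * aligned length must still fit in the small buffer.
++		 */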
++ if (WARN_ON_ONCE(smb3_encryption_required(tcon) &&
++ (check_add_overflow(total_len - 1,
++ ALIGN(indatalen, 8), &len) ||
++ len > MAX_CIFS_SMALL_BUFFER_SIZE))) {
++ cifs_small_buf_release(req);
++ return -EIO;
++ }
+ /*
+ * indatalen is usually small at a couple of bytes max, so
+ * just allocate through generic pool
+diff --git a/fs/udf/balloc.c b/fs/udf/balloc.c
+index d8fc11765d6127..807c493ed0cd5a 100644
+--- a/fs/udf/balloc.c
++++ b/fs/udf/balloc.c
+@@ -370,6 +370,7 @@ static void udf_table_free_blocks(struct super_block *sb,
+ struct extent_position oepos, epos;
+ int8_t etype;
+ struct udf_inode_info *iinfo;
++ int ret = 0;
+
+ mutex_lock(&sbi->s_alloc_mutex);
+ iinfo = UDF_I(table);
+@@ -383,8 +384,12 @@ static void udf_table_free_blocks(struct super_block *sb,
+ epos.block = oepos.block = iinfo->i_location;
+ epos.bh = oepos.bh = NULL;
+
+- while (count &&
+- (etype = udf_next_aext(table, &epos, &eloc, &elen, 1)) != -1) {
++ while (count) {
++ ret = udf_next_aext(table, &epos, &eloc, &elen, &etype, 1);
++ if (ret < 0)
++ goto error_return;
++ if (ret == 0)
++ break;
+ if (((eloc.logicalBlockNum +
+ (elen >> sb->s_blocksize_bits)) == start)) {
+ if ((0x3FFFFFFF - elen) <
+@@ -459,11 +464,8 @@ static void udf_table_free_blocks(struct super_block *sb,
+ adsize = sizeof(struct short_ad);
+ else if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_LONG)
+ adsize = sizeof(struct long_ad);
+- else {
+- brelse(oepos.bh);
+- brelse(epos.bh);
++ else
+ goto error_return;
+- }
+
+ if (epos.offset + (2 * adsize) > sb->s_blocksize) {
+ /* Steal a block from the extent being free'd */
+@@ -479,10 +481,10 @@ static void udf_table_free_blocks(struct super_block *sb,
+ __udf_add_aext(table, &epos, &eloc, elen, 1);
+ }
+
++error_return:
+ brelse(epos.bh);
+ brelse(oepos.bh);
+
+-error_return:
+ mutex_unlock(&sbi->s_alloc_mutex);
+ return;
+ }
+@@ -498,6 +500,7 @@ static int udf_table_prealloc_blocks(struct super_block *sb,
+ struct extent_position epos;
+ int8_t etype = -1;
+ struct udf_inode_info *iinfo;
++ int ret = 0;
+
+ if (first_block >= sbi->s_partmaps[partition].s_partition_len)
+ return 0;
+@@ -516,11 +519,14 @@ static int udf_table_prealloc_blocks(struct super_block *sb,
+ epos.bh = NULL;
+ eloc.logicalBlockNum = 0xFFFFFFFF;
+
+- while (first_block != eloc.logicalBlockNum &&
+- (etype = udf_next_aext(table, &epos, &eloc, &elen, 1)) != -1) {
++ while (first_block != eloc.logicalBlockNum) {
++ ret = udf_next_aext(table, &epos, &eloc, &elen, &etype, 1);
++ if (ret < 0)
++ goto err_out;
++ if (ret == 0)
++ break;
+ udf_debug("eloc=%u, elen=%u, first_block=%u\n",
+ eloc.logicalBlockNum, elen, first_block);
+- ; /* empty loop body */
+ }
+
+ if (first_block == eloc.logicalBlockNum) {
+@@ -539,6 +545,7 @@ static int udf_table_prealloc_blocks(struct super_block *sb,
+ alloc_count = 0;
+ }
+
++err_out:
+ brelse(epos.bh);
+
+ if (alloc_count)
+@@ -560,6 +567,7 @@ static udf_pblk_t udf_table_new_block(struct super_block *sb,
+ struct extent_position epos, goal_epos;
+ int8_t etype;
+ struct udf_inode_info *iinfo = UDF_I(table);
++ int ret = 0;
+
+ *err = -ENOSPC;
+
+@@ -583,8 +591,10 @@ static udf_pblk_t udf_table_new_block(struct super_block *sb,
+ epos.block = iinfo->i_location;
+ epos.bh = goal_epos.bh = NULL;
+
+- while (spread &&
+- (etype = udf_next_aext(table, &epos, &eloc, &elen, 1)) != -1) {
++ while (spread) {
++ ret = udf_next_aext(table, &epos, &eloc, &elen, &etype, 1);
++ if (ret <= 0)
++ break;
+ if (goal >= eloc.logicalBlockNum) {
+ if (goal < eloc.logicalBlockNum +
+ (elen >> sb->s_blocksize_bits))
+@@ -612,9 +622,11 @@ static udf_pblk_t udf_table_new_block(struct super_block *sb,
+
+ brelse(epos.bh);
+
+- if (spread == 0xFFFFFFFF) {
++ if (ret < 0 || spread == 0xFFFFFFFF) {
+ brelse(goal_epos.bh);
+ mutex_unlock(&sbi->s_alloc_mutex);
++ if (ret < 0)
++ *err = ret;
+ return 0;
+ }
+
+diff --git a/fs/udf/directory.c b/fs/udf/directory.c
+index 93153665eb3747..632453aa38934a 100644
+--- a/fs/udf/directory.c
++++ b/fs/udf/directory.c
+@@ -166,13 +166,19 @@ static struct buffer_head *udf_fiiter_bread_blk(struct udf_fileident_iter *iter)
+ */
+ static int udf_fiiter_advance_blk(struct udf_fileident_iter *iter)
+ {
++ int8_t etype = -1;
++ int err = 0;
++
+ iter->loffset++;
+ if (iter->loffset < DIV_ROUND_UP(iter->elen, 1<<iter->dir->i_blkbits))
+ return 0;
+
+ iter->loffset = 0;
+- if (udf_next_aext(iter->dir, &iter->epos, &iter->eloc, &iter->elen, 1)
+- != (EXT_RECORDED_ALLOCATED >> 30)) {
++ err = udf_next_aext(iter->dir, &iter->epos, &iter->eloc,
++ &iter->elen, &etype, 1);
++ if (err < 0)
++ return err;
++ else if (err == 0 || etype != (EXT_RECORDED_ALLOCATED >> 30)) {
+ if (iter->pos == iter->dir->i_size) {
+ iter->elen = 0;
+ return 0;
+@@ -240,6 +246,7 @@ int udf_fiiter_init(struct udf_fileident_iter *iter, struct inode *dir,
+ {
+ struct udf_inode_info *iinfo = UDF_I(dir);
+ int err = 0;
++ int8_t etype;
+
+ iter->dir = dir;
+ iter->bh[0] = iter->bh[1] = NULL;
+@@ -259,9 +266,9 @@ int udf_fiiter_init(struct udf_fileident_iter *iter, struct inode *dir,
+ goto out;
+ }
+
+- if (inode_bmap(dir, iter->pos >> dir->i_blkbits, &iter->epos,
+- &iter->eloc, &iter->elen, &iter->loffset) !=
+- (EXT_RECORDED_ALLOCATED >> 30)) {
++ err = inode_bmap(dir, iter->pos >> dir->i_blkbits, &iter->epos,
++ &iter->eloc, &iter->elen, &iter->loffset, &etype);
++ if (err <= 0 || etype != (EXT_RECORDED_ALLOCATED >> 30)) {
+ if (pos == dir->i_size)
+ return 0;
+ udf_err(dir->i_sb,
+@@ -457,6 +464,7 @@ int udf_fiiter_append_blk(struct udf_fileident_iter *iter)
+ sector_t block;
+ uint32_t old_elen = iter->elen;
+ int err;
++ int8_t etype;
+
+ if (WARN_ON_ONCE(iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB))
+ return -EINVAL;
+@@ -471,8 +479,9 @@ int udf_fiiter_append_blk(struct udf_fileident_iter *iter)
+ udf_fiiter_update_elen(iter, old_elen);
+ return err;
+ }
+- if (inode_bmap(iter->dir, block, &iter->epos, &iter->eloc, &iter->elen,
+- &iter->loffset) != (EXT_RECORDED_ALLOCATED >> 30)) {
++ err = inode_bmap(iter->dir, block, &iter->epos, &iter->eloc, &iter->elen,
++ &iter->loffset, &etype);
++ if (err <= 0 || etype != (EXT_RECORDED_ALLOCATED >> 30)) {
+ udf_err(iter->dir->i_sb,
+ "block %llu not allocated in directory (ino %lu)\n",
+ (unsigned long long)block, iter->dir->i_ino);
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 4726a4d014b60c..53511c726d5758 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -406,7 +406,7 @@ struct udf_map_rq {
+
+ static int udf_map_block(struct inode *inode, struct udf_map_rq *map)
+ {
+- int err;
++ int ret;
+ struct udf_inode_info *iinfo = UDF_I(inode);
+
+ if (WARN_ON_ONCE(iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB))
+@@ -418,18 +418,24 @@ static int udf_map_block(struct inode *inode, struct udf_map_rq *map)
+ uint32_t elen;
+ sector_t offset;
+ struct extent_position epos = {};
++ int8_t etype;
+
+ down_read(&iinfo->i_data_sem);
+- if (inode_bmap(inode, map->lblk, &epos, &eloc, &elen, &offset)
+- == (EXT_RECORDED_ALLOCATED >> 30)) {
++ ret = inode_bmap(inode, map->lblk, &epos, &eloc, &elen, &offset,
++ &etype);
++ if (ret < 0)
++ goto out_read;
++ if (ret > 0 && etype == (EXT_RECORDED_ALLOCATED >> 30)) {
+ map->pblk = udf_get_lb_pblock(inode->i_sb, &eloc,
+ offset);
+ map->oflags |= UDF_BLK_MAPPED;
++ ret = 0;
+ }
++out_read:
+ up_read(&iinfo->i_data_sem);
+ brelse(epos.bh);
+
+- return 0;
++ return ret;
+ }
+
+ down_write(&iinfo->i_data_sem);
+@@ -440,9 +446,9 @@ static int udf_map_block(struct inode *inode, struct udf_map_rq *map)
+ if (((loff_t)map->lblk) << inode->i_blkbits >= iinfo->i_lenExtents)
+ udf_discard_prealloc(inode);
+ udf_clear_extent_cache(inode);
+- err = inode_getblk(inode, map);
++ ret = inode_getblk(inode, map);
+ up_write(&iinfo->i_data_sem);
+- return err;
++ return ret;
+ }
+
+ static int __udf_get_block(struct inode *inode, sector_t block,
+@@ -545,6 +551,7 @@ static int udf_do_extend_file(struct inode *inode,
+ } else {
+ struct kernel_lb_addr tmploc;
+ uint32_t tmplen;
++ int8_t tmptype;
+
+ udf_write_aext(inode, last_pos, &last_ext->extLocation,
+ last_ext->extLength, 1);
+@@ -554,8 +561,12 @@ static int udf_do_extend_file(struct inode *inode,
+ * more extents, we may need to enter possible following
+ * empty indirect extent.
+ */
+- if (new_block_bytes)
+- udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
++ if (new_block_bytes) {
++ err = udf_next_aext(inode, last_pos, &tmploc, &tmplen,
++ &tmptype, 0);
++ if (err < 0)
++ goto out_err;
++ }
+ }
+ iinfo->i_lenExtents += add;
+
+@@ -659,8 +670,10 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ */
+ udf_discard_prealloc(inode);
+
+- etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
+- within_last_ext = (etype != -1);
++ err = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset, &etype);
++ if (err < 0)
++ goto out;
++ within_last_ext = (err == 1);
+ /* We don't expect extents past EOF... */
+ WARN_ON_ONCE(within_last_ext &&
+ elen > ((loff_t)offset + 1) << inode->i_blkbits);
+@@ -674,8 +687,10 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ extent.extLength = EXT_NOT_RECORDED_NOT_ALLOCATED;
+ } else {
+ epos.offset -= adsize;
+- etype = udf_next_aext(inode, &epos, &extent.extLocation,
+- &extent.extLength, 0);
++ err = udf_next_aext(inode, &epos, &extent.extLocation,
++ &extent.extLength, &etype, 0);
++ if (err <= 0)
++ goto out;
+ extent.extLength |= etype << 30;
+ }
+
+@@ -712,11 +727,11 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ loff_t lbcount = 0, b_off = 0;
+ udf_pblk_t newblocknum;
+ sector_t offset = 0;
+- int8_t etype;
++ int8_t etype, tmpetype;
+ struct udf_inode_info *iinfo = UDF_I(inode);
+ udf_pblk_t goal = 0, pgoal = iinfo->i_location.logicalBlockNum;
+ int lastblock = 0;
+- bool isBeyondEOF;
++ bool isBeyondEOF = false;
+ int ret = 0;
+
+ prev_epos.offset = udf_file_entry_alloc_offset(inode);
+@@ -748,9 +763,13 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ prev_epos.offset = cur_epos.offset;
+ cur_epos.offset = next_epos.offset;
+
+- etype = udf_next_aext(inode, &next_epos, &eloc, &elen, 1);
+- if (etype == -1)
++ ret = udf_next_aext(inode, &next_epos, &eloc, &elen, &etype, 1);
++ if (ret < 0) {
++ goto out_free;
++ } else if (ret == 0) {
++ isBeyondEOF = true;
+ break;
++ }
+
+ c = !c;
+
+@@ -771,13 +790,17 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ * Move prev_epos and cur_epos into indirect extent if we are at
+ * the pointer to it
+ */
+- udf_next_aext(inode, &prev_epos, &tmpeloc, &tmpelen, 0);
+- udf_next_aext(inode, &cur_epos, &tmpeloc, &tmpelen, 0);
++ ret = udf_next_aext(inode, &prev_epos, &tmpeloc, &tmpelen, &tmpetype, 0);
++ if (ret < 0)
++ goto out_free;
++ ret = udf_next_aext(inode, &cur_epos, &tmpeloc, &tmpelen, &tmpetype, 0);
++ if (ret < 0)
++ goto out_free;
+
+ /* if the extent is allocated and recorded, return the block
+ if the extent is not a multiple of the blocksize, round up */
+
+- if (etype == (EXT_RECORDED_ALLOCATED >> 30)) {
++ if (!isBeyondEOF && etype == (EXT_RECORDED_ALLOCATED >> 30)) {
+ if (elen & (inode->i_sb->s_blocksize - 1)) {
+ elen = EXT_RECORDED_ALLOCATED |
+ ((elen + inode->i_sb->s_blocksize - 1) &
+@@ -793,10 +816,9 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ }
+
+ /* Are we beyond EOF and preallocated extent? */
+- if (etype == -1) {
++ if (isBeyondEOF) {
+ loff_t hole_len;
+
+- isBeyondEOF = true;
+ if (count) {
+ if (c)
+ laarr[0] = laarr[1];
+@@ -832,7 +854,6 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ endnum = c + 1;
+ lastblock = 1;
+ } else {
+- isBeyondEOF = false;
+ endnum = startnum = ((count > 2) ? 2 : count);
+
+ /* if the current extent is in position 0,
+@@ -846,15 +867,17 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+
+ /* if the current block is located in an extent,
+ read the next extent */
+- etype = udf_next_aext(inode, &next_epos, &eloc, &elen, 0);
+- if (etype != -1) {
++ ret = udf_next_aext(inode, &next_epos, &eloc, &elen, &etype, 0);
++ if (ret > 0) {
+ laarr[c + 1].extLength = (etype << 30) | elen;
+ laarr[c + 1].extLocation = eloc;
+ count++;
+ startnum++;
+ endnum++;
+- } else
++ } else if (ret == 0)
+ lastblock = 1;
++ else
++ goto out_free;
+ }
+
+ /* if the current extent is not recorded but allocated, get the
+@@ -1172,6 +1195,7 @@ static int udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr,
+ int start = 0, i;
+ struct kernel_lb_addr tmploc;
+ uint32_t tmplen;
++ int8_t tmpetype;
+ int err;
+
+ if (startnum > endnum) {
+@@ -1189,14 +1213,19 @@ static int udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr,
+ */
+ if (err < 0)
+ return err;
+- udf_next_aext(inode, epos, &laarr[i].extLocation,
+- &laarr[i].extLength, 1);
++ err = udf_next_aext(inode, epos, &laarr[i].extLocation,
++ &laarr[i].extLength, &tmpetype, 1);
++ if (err < 0)
++ return err;
+ start++;
+ }
+ }
+
+ for (i = start; i < endnum; i++) {
+- udf_next_aext(inode, epos, &tmploc, &tmplen, 0);
++ err = udf_next_aext(inode, epos, &tmploc, &tmplen, &tmpetype, 0);
++ if (err < 0)
++ return err;
++
+ udf_write_aext(inode, epos, &laarr[i].extLocation,
+ laarr[i].extLength, 1);
+ }
+@@ -1955,6 +1984,7 @@ int udf_setup_indirect_aext(struct inode *inode, udf_pblk_t block,
+ struct extent_position nepos;
+ struct kernel_lb_addr neloc;
+ int ver, adsize;
++ int err = 0;
+
+ if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ adsize = sizeof(struct short_ad);
+@@ -1999,10 +2029,12 @@ int udf_setup_indirect_aext(struct inode *inode, udf_pblk_t block,
+ if (epos->offset + adsize > sb->s_blocksize) {
+ struct kernel_lb_addr cp_loc;
+ uint32_t cp_len;
+- int cp_type;
++ int8_t cp_type;
+
+ epos->offset -= adsize;
+- cp_type = udf_current_aext(inode, epos, &cp_loc, &cp_len, 0);
++ err = udf_current_aext(inode, epos, &cp_loc, &cp_len, &cp_type, 0);
++ if (err <= 0)
++ goto err_out;
+ cp_len |= ((uint32_t)cp_type) << 30;
+
+ __udf_add_aext(inode, &nepos, &cp_loc, cp_len, 1);
+@@ -2017,6 +2049,9 @@ int udf_setup_indirect_aext(struct inode *inode, udf_pblk_t block,
+ *epos = nepos;
+
+ return 0;
++err_out:
++ brelse(bh);
++ return err;
+ }
+
+ /*
+@@ -2162,21 +2197,30 @@ void udf_write_aext(struct inode *inode, struct extent_position *epos,
+ */
+ #define UDF_MAX_INDIR_EXTS 16
+
+-int8_t udf_next_aext(struct inode *inode, struct extent_position *epos,
+- struct kernel_lb_addr *eloc, uint32_t *elen, int inc)
++/*
++ * Returns 1 on success, -errno on error, 0 when EOF is hit.
++ */
++int udf_next_aext(struct inode *inode, struct extent_position *epos,
++ struct kernel_lb_addr *eloc, uint32_t *elen, int8_t *etype,
++ int inc)
+ {
+- int8_t etype;
+ unsigned int indirections = 0;
++ int ret = 0;
++ udf_pblk_t block;
+
+- while ((etype = udf_current_aext(inode, epos, eloc, elen, inc)) ==
+- (EXT_NEXT_EXTENT_ALLOCDESCS >> 30)) {
+- udf_pblk_t block;
++ while (1) {
++ ret = udf_current_aext(inode, epos, eloc, elen,
++ etype, inc);
++ if (ret <= 0)
++ return ret;
++ if (*etype != (EXT_NEXT_EXTENT_ALLOCDESCS >> 30))
++ return ret;
+
+ if (++indirections > UDF_MAX_INDIR_EXTS) {
+ udf_err(inode->i_sb,
+ "too many indirect extents in inode %lu\n",
+ inode->i_ino);
+- return -1;
++ return -EFSCORRUPTED;
+ }
+
+ epos->block = *eloc;
+@@ -2186,18 +2230,19 @@ int8_t udf_next_aext(struct inode *inode, struct extent_position *epos,
+ epos->bh = sb_bread(inode->i_sb, block);
+ if (!epos->bh) {
+ udf_debug("reading block %u failed!\n", block);
+- return -1;
++ return -EIO;
+ }
+ }
+-
+- return etype;
+ }
+
+-int8_t udf_current_aext(struct inode *inode, struct extent_position *epos,
+- struct kernel_lb_addr *eloc, uint32_t *elen, int inc)
++/*
++ * Returns 1 on success, -errno on error, 0 when EOF is hit.
++ */
++int udf_current_aext(struct inode *inode, struct extent_position *epos,
++ struct kernel_lb_addr *eloc, uint32_t *elen, int8_t *etype,
++ int inc)
+ {
+ int alen;
+- int8_t etype;
+ uint8_t *ptr;
+ struct short_ad *sad;
+ struct long_ad *lad;
+@@ -2212,20 +2257,23 @@ int8_t udf_current_aext(struct inode *inode, struct extent_position *epos,
+ alen = udf_file_entry_alloc_offset(inode) +
+ iinfo->i_lenAlloc;
+ } else {
++ struct allocExtDesc *header =
++ (struct allocExtDesc *)epos->bh->b_data;
++
+ if (!epos->offset)
+ epos->offset = sizeof(struct allocExtDesc);
+ ptr = epos->bh->b_data + epos->offset;
+- alen = sizeof(struct allocExtDesc) +
+- le32_to_cpu(((struct allocExtDesc *)epos->bh->b_data)->
+- lengthAllocDescs);
++ if (check_add_overflow(sizeof(struct allocExtDesc),
++ le32_to_cpu(header->lengthAllocDescs), &alen))
++ return -1;
+ }
+
+ switch (iinfo->i_alloc_type) {
+ case ICBTAG_FLAG_AD_SHORT:
+ sad = udf_get_fileshortad(ptr, alen, &epos->offset, inc);
+ if (!sad)
+- return -1;
+- etype = le32_to_cpu(sad->extLength) >> 30;
++ return 0;
++ *etype = le32_to_cpu(sad->extLength) >> 30;
+ eloc->logicalBlockNum = le32_to_cpu(sad->extPosition);
+ eloc->partitionReferenceNum =
+ iinfo->i_location.partitionReferenceNum;
+@@ -2234,17 +2282,17 @@ int8_t udf_current_aext(struct inode *inode, struct extent_position *epos,
+ case ICBTAG_FLAG_AD_LONG:
+ lad = udf_get_filelongad(ptr, alen, &epos->offset, inc);
+ if (!lad)
+- return -1;
+- etype = le32_to_cpu(lad->extLength) >> 30;
++ return 0;
++ *etype = le32_to_cpu(lad->extLength) >> 30;
+ *eloc = lelb_to_cpu(lad->extLocation);
+ *elen = le32_to_cpu(lad->extLength) & UDF_EXTENT_LENGTH_MASK;
+ break;
+ default:
+ udf_debug("alloc_type = %u unsupported\n", iinfo->i_alloc_type);
+- return -1;
++ return -EINVAL;
+ }
+
+- return etype;
++ return 1;
+ }
+
+ static int udf_insert_aext(struct inode *inode, struct extent_position epos,
+@@ -2253,20 +2301,24 @@ static int udf_insert_aext(struct inode *inode, struct extent_position epos,
+ struct kernel_lb_addr oeloc;
+ uint32_t oelen;
+ int8_t etype;
+- int err;
++ int ret;
+
+ if (epos.bh)
+ get_bh(epos.bh);
+
+- while ((etype = udf_next_aext(inode, &epos, &oeloc, &oelen, 0)) != -1) {
++ while (1) {
++ ret = udf_next_aext(inode, &epos, &oeloc, &oelen, &etype, 0);
++ if (ret <= 0)
++ break;
+ udf_write_aext(inode, &epos, &neloc, nelen, 1);
+ neloc = oeloc;
+ nelen = (etype << 30) | oelen;
+ }
+- err = udf_add_aext(inode, &epos, &neloc, nelen, 1);
++ if (ret == 0)
++ ret = udf_add_aext(inode, &epos, &neloc, nelen, 1);
+ brelse(epos.bh);
+
+- return err;
++ return ret;
+ }
+
+ int8_t udf_delete_aext(struct inode *inode, struct extent_position epos)
+@@ -2278,6 +2330,7 @@ int8_t udf_delete_aext(struct inode *inode, struct extent_position epos)
+ struct udf_inode_info *iinfo;
+ struct kernel_lb_addr eloc;
+ uint32_t elen;
++ int ret;
+
+ if (epos.bh) {
+ get_bh(epos.bh);
+@@ -2293,10 +2346,18 @@ int8_t udf_delete_aext(struct inode *inode, struct extent_position epos)
+ adsize = 0;
+
+ oepos = epos;
+- if (udf_next_aext(inode, &epos, &eloc, &elen, 1) == -1)
++ if (udf_next_aext(inode, &epos, &eloc, &elen, &etype, 1) <= 0)
+ return -1;
+
+- while ((etype = udf_next_aext(inode, &epos, &eloc, &elen, 1)) != -1) {
++ while (1) {
++ ret = udf_next_aext(inode, &epos, &eloc, &elen, &etype, 1);
++ if (ret < 0) {
++ brelse(epos.bh);
++ brelse(oepos.bh);
++ return -1;
++ }
++ if (ret == 0)
++ break;
+ udf_write_aext(inode, &oepos, &eloc, (etype << 30) | elen, 1);
+ if (oepos.bh != epos.bh) {
+ oepos.block = epos.block;
+@@ -2353,14 +2414,17 @@ int8_t udf_delete_aext(struct inode *inode, struct extent_position epos)
+ return (elen >> 30);
+ }
+
+-int8_t inode_bmap(struct inode *inode, sector_t block,
+- struct extent_position *pos, struct kernel_lb_addr *eloc,
+- uint32_t *elen, sector_t *offset)
++/*
++ * Returns 1 on success, -errno on error, 0 on EOF.
++ */
++int inode_bmap(struct inode *inode, sector_t block, struct extent_position *pos,
++ struct kernel_lb_addr *eloc, uint32_t *elen, sector_t *offset,
++ int8_t *etype)
+ {
+ unsigned char blocksize_bits = inode->i_sb->s_blocksize_bits;
+ loff_t lbcount = 0, bcount = (loff_t) block << blocksize_bits;
+- int8_t etype;
+ struct udf_inode_info *iinfo;
++ int err = 0;
+
+ iinfo = UDF_I(inode);
+ if (!udf_read_extent_cache(inode, bcount, &lbcount, pos)) {
+@@ -2370,11 +2434,13 @@ int8_t inode_bmap(struct inode *inode, sector_t block,
+ }
+ *elen = 0;
+ do {
+- etype = udf_next_aext(inode, pos, eloc, elen, 1);
+- if (etype == -1) {
+- *offset = (bcount - lbcount) >> blocksize_bits;
+- iinfo->i_lenExtents = lbcount;
+- return -1;
++ err = udf_next_aext(inode, pos, eloc, elen, etype, 1);
++ if (err <= 0) {
++ if (err == 0) {
++ *offset = (bcount - lbcount) >> blocksize_bits;
++ iinfo->i_lenExtents = lbcount;
++ }
++ return err;
+ }
+ lbcount += *elen;
+ } while (lbcount <= bcount);
+@@ -2382,5 +2448,5 @@ int8_t inode_bmap(struct inode *inode, sector_t block,
+ udf_update_extent_cache(inode, lbcount - *elen, pos);
+ *offset = (bcount + *elen - lbcount) >> blocksize_bits;
+
+- return etype;
++ return 1;
+ }
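
The hunks above switch the UDF extent iterators from returning an int8_t extent type, with -1 overloaded for both I/O errors and end-of-extents, to returning 1 on success, 0 on EOF, and a negative errno on error, with the extent type moved to an output parameter. A minimal standalone C sketch of the resulting caller pattern; next_extent() here is a hypothetical stand-in, not the kernel API:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* 1 = extent found (*etype valid), 0 = EOF, -errno = hard error */
static int next_extent(int idx, int8_t *etype)
{
	static const int8_t types[] = { 0, 1, 2 };	/* fake extent list */

	if (idx < 0)
		return -EINVAL;		/* hard error */
	if (idx >= 3)
		return 0;		/* EOF, not an error */
	*etype = types[idx];
	return 1;
}

int main(void)
{
	int8_t etype;
	int i, ret;

	for (i = 0; ; i++) {
		ret = next_extent(i, &etype);
		if (ret < 0)		/* errors and EOF are now distinct */
			return -ret;
		if (ret == 0)
			break;		/* clean end of extents */
		printf("extent %d: type %d\n", i, etype);
	}
	return 0;
}

The value of the split is visible in inode_bmap() above: an EOF (ret == 0) still updates the cached extent length, while a read error propagates unchanged to the caller.
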
+diff --git a/fs/udf/partition.c b/fs/udf/partition.c
+index af877991edc13a..2b85c9501bed89 100644
+--- a/fs/udf/partition.c
++++ b/fs/udf/partition.c
+@@ -282,9 +282,11 @@ static uint32_t udf_try_read_meta(struct inode *inode, uint32_t block,
+ sector_t ext_offset;
+ struct extent_position epos = {};
+ uint32_t phyblock;
++ int8_t etype;
++ int err = 0;
+
+- if (inode_bmap(inode, block, &epos, &eloc, &elen, &ext_offset) !=
+- (EXT_RECORDED_ALLOCATED >> 30))
++ err = inode_bmap(inode, block, &epos, &eloc, &elen, &ext_offset, &etype);
++ if (err <= 0 || etype != (EXT_RECORDED_ALLOCATED >> 30))
+ phyblock = 0xFFFFFFFF;
+ else {
+ map = &UDF_SB(sb)->s_partmaps[partition];
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 3460ecc826d162..1c8a736b33097e 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -2482,13 +2482,14 @@ static unsigned int udf_count_free_table(struct super_block *sb,
+ uint32_t elen;
+ struct kernel_lb_addr eloc;
+ struct extent_position epos;
++ int8_t etype;
+
+ mutex_lock(&UDF_SB(sb)->s_alloc_mutex);
+ epos.block = UDF_I(table)->i_location;
+ epos.offset = sizeof(struct unallocSpaceEntry);
+ epos.bh = NULL;
+
+- while (udf_next_aext(table, &epos, &eloc, &elen, 1) != -1)
++ while (udf_next_aext(table, &epos, &eloc, &elen, &etype, 1) > 0)
+ accum += (elen >> table->i_sb->s_blocksize_bits);
+
+ brelse(epos.bh);
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index a686c10fd709d1..4f33a4a4888613 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -69,6 +69,7 @@ void udf_truncate_tail_extent(struct inode *inode)
+ int8_t etype = -1, netype;
+ int adsize;
+ struct udf_inode_info *iinfo = UDF_I(inode);
++ int ret;
+
+ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB ||
+ inode->i_size == iinfo->i_lenExtents)
+@@ -85,7 +86,10 @@ void udf_truncate_tail_extent(struct inode *inode)
+ BUG();
+
+ /* Find the last extent in the file */
+- while ((netype = udf_next_aext(inode, &epos, &eloc, &elen, 1)) != -1) {
++ while (1) {
++ ret = udf_next_aext(inode, &epos, &eloc, &elen, &netype, 1);
++ if (ret <= 0)
++ break;
+ etype = netype;
+ lbcount += elen;
+ if (lbcount > inode->i_size) {
+@@ -101,7 +105,8 @@ void udf_truncate_tail_extent(struct inode *inode)
+ epos.offset -= adsize;
+ extent_trunc(inode, &epos, &eloc, etype, elen, nelen);
+ epos.offset += adsize;
+- if (udf_next_aext(inode, &epos, &eloc, &elen, 1) != -1)
++ if (udf_next_aext(inode, &epos, &eloc, &elen,
++ &netype, 1) > 0)
+ udf_err(inode->i_sb,
+ "Extent after EOF in inode %u\n",
+ (unsigned)inode->i_ino);
+@@ -110,7 +115,8 @@ void udf_truncate_tail_extent(struct inode *inode)
+ }
+ /* This inode entry is in-memory only and thus we don't have to mark
+ * the inode dirty */
+- iinfo->i_lenExtents = inode->i_size;
++ if (ret == 0)
++ iinfo->i_lenExtents = inode->i_size;
+ brelse(epos.bh);
+ }
+
+@@ -124,6 +130,8 @@ void udf_discard_prealloc(struct inode *inode)
+ int8_t etype = -1;
+ struct udf_inode_info *iinfo = UDF_I(inode);
+ int bsize = i_blocksize(inode);
++ int8_t tmpetype = -1;
++ int ret;
+
+ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB ||
+ ALIGN(inode->i_size, bsize) == ALIGN(iinfo->i_lenExtents, bsize))
+@@ -132,15 +140,23 @@ void udf_discard_prealloc(struct inode *inode)
+ epos.block = iinfo->i_location;
+
+ /* Find the last extent in the file */
+- while (udf_next_aext(inode, &epos, &eloc, &elen, 0) != -1) {
++ while (1) {
++ ret = udf_next_aext(inode, &epos, &eloc, &elen, &tmpetype, 0);
++ if (ret < 0)
++ goto out;
++ if (ret == 0)
++ break;
+ brelse(prev_epos.bh);
+ prev_epos = epos;
+ if (prev_epos.bh)
+ get_bh(prev_epos.bh);
+
+- etype = udf_next_aext(inode, &epos, &eloc, &elen, 1);
++ ret = udf_next_aext(inode, &epos, &eloc, &elen, &etype, 1);
++ if (ret < 0)
++ goto out;
+ lbcount += elen;
+ }
++
+ if (etype == (EXT_NOT_RECORDED_ALLOCATED >> 30)) {
+ lbcount -= elen;
+ udf_delete_aext(inode, prev_epos);
+@@ -150,6 +166,7 @@ void udf_discard_prealloc(struct inode *inode)
+ /* This inode entry is in-memory only and thus we don't have to mark
+ * the inode dirty */
+ iinfo->i_lenExtents = lbcount;
++out:
+ brelse(epos.bh);
+ brelse(prev_epos.bh);
+ }
+@@ -188,6 +205,7 @@ int udf_truncate_extents(struct inode *inode)
+ loff_t byte_offset;
+ int adsize;
+ struct udf_inode_info *iinfo = UDF_I(inode);
++ int ret = 0;
+
+ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ adsize = sizeof(struct short_ad);
+@@ -196,10 +214,12 @@ int udf_truncate_extents(struct inode *inode)
+ else
+ BUG();
+
+- etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
++ ret = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset, &etype);
++ if (ret < 0)
++ return ret;
+ byte_offset = (offset << sb->s_blocksize_bits) +
+ (inode->i_size & (sb->s_blocksize - 1));
+- if (etype == -1) {
++ if (ret == 0) {
+ /* We should extend the file? */
+ WARN_ON(byte_offset);
+ return 0;
+@@ -217,8 +237,8 @@ int udf_truncate_extents(struct inode *inode)
+ else
+ lenalloc -= sizeof(struct allocExtDesc);
+
+- while ((etype = udf_current_aext(inode, &epos, &eloc,
+- &elen, 0)) != -1) {
++ while ((ret = udf_current_aext(inode, &epos, &eloc,
++ &elen, &etype, 0)) > 0) {
+ if (etype == (EXT_NEXT_EXTENT_ALLOCDESCS >> 30)) {
+ udf_write_aext(inode, &epos, &neloc, nelen, 0);
+ if (indirect_ext_len) {
+@@ -253,6 +273,11 @@ int udf_truncate_extents(struct inode *inode)
+ }
+ }
+
++ if (ret < 0) {
++ brelse(epos.bh);
++ return ret;
++ }
++
+ if (indirect_ext_len) {
+ BUG_ON(!epos.bh);
+ udf_free_blocks(sb, NULL, &epos.block, 0, indirect_ext_len);
+diff --git a/fs/udf/udfdecl.h b/fs/udf/udfdecl.h
+index 88692512a46687..d159f20d61e89a 100644
+--- a/fs/udf/udfdecl.h
++++ b/fs/udf/udfdecl.h
+@@ -157,8 +157,9 @@ extern struct buffer_head *udf_bread(struct inode *inode, udf_pblk_t block,
+ extern int udf_setsize(struct inode *, loff_t);
+ extern void udf_evict_inode(struct inode *);
+ extern int udf_write_inode(struct inode *, struct writeback_control *wbc);
+-extern int8_t inode_bmap(struct inode *, sector_t, struct extent_position *,
+- struct kernel_lb_addr *, uint32_t *, sector_t *);
++extern int inode_bmap(struct inode *inode, sector_t block,
++ struct extent_position *pos, struct kernel_lb_addr *eloc,
++ uint32_t *elen, sector_t *offset, int8_t *etype);
+ int udf_get_block(struct inode *, sector_t, struct buffer_head *, int);
+ extern int udf_setup_indirect_aext(struct inode *inode, udf_pblk_t block,
+ struct extent_position *epos);
+@@ -169,10 +170,12 @@ extern int udf_add_aext(struct inode *, struct extent_position *,
+ extern void udf_write_aext(struct inode *, struct extent_position *,
+ struct kernel_lb_addr *, uint32_t, int);
+ extern int8_t udf_delete_aext(struct inode *, struct extent_position);
+-extern int8_t udf_next_aext(struct inode *, struct extent_position *,
+- struct kernel_lb_addr *, uint32_t *, int);
+-extern int8_t udf_current_aext(struct inode *, struct extent_position *,
+- struct kernel_lb_addr *, uint32_t *, int);
++extern int udf_next_aext(struct inode *inode, struct extent_position *epos,
++ struct kernel_lb_addr *eloc, uint32_t *elen,
++ int8_t *etype, int inc);
++extern int udf_current_aext(struct inode *inode, struct extent_position *epos,
++ struct kernel_lb_addr *eloc, uint32_t *elen,
++ int8_t *etype, int inc);
+ extern void udf_update_extra_perms(struct inode *inode, umode_t mode);
+
+ /* misc.c */
+diff --git a/fs/xfs/scrub/repair.c b/fs/xfs/scrub/repair.c
+index 67478294f11ae8..155bbaaa496e44 100644
+--- a/fs/xfs/scrub/repair.c
++++ b/fs/xfs/scrub/repair.c
+@@ -1084,9 +1084,11 @@ xrep_metadata_inode_forks(
+ return error;
+
+ /* Make sure the attr fork looks ok before we delete it. */
+- error = xrep_metadata_inode_subtype(sc, XFS_SCRUB_TYPE_BMBTA);
+- if (error)
+- return error;
++ if (xfs_inode_hasattr(sc->ip)) {
++ error = xrep_metadata_inode_subtype(sc, XFS_SCRUB_TYPE_BMBTA);
++ if (error)
++ return error;
++ }
+
+ /* Clear the reflink flag since metadata never shares. */
+ if (xfs_is_reflink_inode(sc->ip)) {
+diff --git a/include/linux/backing-file.h b/include/linux/backing-file.h
+index 4b61b0e577209e..2eed0ffb5e8f83 100644
+--- a/include/linux/backing-file.h
++++ b/include/linux/backing-file.h
+@@ -16,7 +16,7 @@ struct backing_file_ctx {
+ const struct cred *cred;
+ struct file *user_file;
+ void (*accessed)(struct file *);
+- void (*end_write)(struct file *);
++ void (*end_write)(struct file *, loff_t, ssize_t);
+ };
+
+ struct file *backing_file_open(const struct path *user_path, int flags,
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index f3e5ce397b8ef7..eb1d3a2fe3339b 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -635,6 +635,7 @@ enum bpf_type_flag {
+ */
+ PTR_UNTRUSTED = BIT(6 + BPF_BASE_TYPE_BITS),
+
++ /* MEM can be uninitialized. */
+ MEM_UNINIT = BIT(7 + BPF_BASE_TYPE_BITS),
+
+ /* DYNPTR points to memory local to the bpf program. */
+@@ -700,6 +701,13 @@ enum bpf_type_flag {
+ */
+ MEM_ALIGNED = BIT(17 + BPF_BASE_TYPE_BITS),
+
++ /* MEM is being written to, often combined with MEM_UNINIT. Absence of
++ * MEM_WRITE means that MEM is only being read. MEM_WRITE without
++ * MEM_UNINIT means that the memory needs to be initialized since it is
++ * also read.
++ */
++ MEM_WRITE = BIT(18 + BPF_BASE_TYPE_BITS),
++
+ __BPF_TYPE_FLAG_MAX,
+ __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1,
+ };
+@@ -758,10 +766,10 @@ enum bpf_arg_type {
+ ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET,
+ ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK,
+ ARG_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID,
+- /* pointer to memory does not need to be initialized, helper function must fill
+- * all bytes or clear them in error case.
++ /* Pointer to memory does not need to be initialized, since the helper
++ * function fills all bytes or clears them in the error case.
+ */
+- ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | ARG_PTR_TO_MEM,
++ ARG_PTR_TO_UNINIT_MEM = MEM_UNINIT | MEM_WRITE | ARG_PTR_TO_MEM,
+ /* Pointer to valid memory of size known at compile time. */
+ ARG_PTR_TO_FIXED_SIZE_MEM = MEM_FIXED_SIZE | ARG_PTR_TO_MEM,
+
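
The new MEM_WRITE flag lets the verifier derive the access direction from the argument type itself instead of from meta->raw_mode, as the verifier.c hunks later in this patch do with "arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ". A small sketch of that selection, using illustrative constants rather than the kernel's:

#include <stdio.h>

enum access_type { ACC_READ, ACC_WRITE };

#define F_UNINIT (1u << 7)	/* stand-in for MEM_UNINIT */
#define F_WRITE  (1u << 18)	/* stand-in for MEM_WRITE */

static enum access_type access_for(unsigned int arg_type)
{
	return (arg_type & F_WRITE) ? ACC_WRITE : ACC_READ;
}

int main(void)
{
	unsigned int out_buf = F_UNINIT | F_WRITE;	/* helper fills it */
	unsigned int in_buf = 0;			/* helper only reads */

	printf("%d %d\n", access_for(out_buf), access_for(in_buf));
	return 0;
}
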
+diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
+index 9f2a6b83b49e14..fa78f49d4a9a64 100644
+--- a/include/linux/bpf_types.h
++++ b/include/linux/bpf_types.h
+@@ -146,6 +146,7 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_XDP, xdp)
+ BPF_LINK_TYPE(BPF_LINK_TYPE_NETFILTER, netfilter)
+ BPF_LINK_TYPE(BPF_LINK_TYPE_TCX, tcx)
+ BPF_LINK_TYPE(BPF_LINK_TYPE_NETKIT, netkit)
++BPF_LINK_TYPE(BPF_LINK_TYPE_SOCKMAP, sockmap)
+ #endif
+ #ifdef CONFIG_PERF_EVENTS
+ BPF_LINK_TYPE(BPF_LINK_TYPE_PERF_EVENT, perf)
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index e25d9ebfdf89aa..6d334c211176c0 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -308,6 +308,24 @@ static inline void count_mthp_stat(int order, enum mthp_stat_item item)
+ (transparent_hugepage_flags & \
+ (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
+
++static inline bool vma_thp_disabled(struct vm_area_struct *vma,
++ unsigned long vm_flags)
++{
++ /*
++ * Explicitly disabled through madvise or prctl; some
++ * architectures may also disable THP for certain mappings,
++ * for example, s390 kvm.
++ */
++ return (vm_flags & VM_NOHUGEPAGE) ||
++ test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags);
++}
++
++static inline bool thp_disabled_by_hw(void)
++{
++ /* True if the hardware/firmware has marked hugepage support as disabled. */
++ return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED);
++}
++
+ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff, unsigned long flags);
+ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index b26954dc9ed773..93548d71cf6974 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3336,6 +3336,12 @@ static inline void netif_tx_wake_all_queues(struct net_device *dev)
+
+ static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
+ {
++ /* Paired with READ_ONCE() from dev_watchdog() */
++ WRITE_ONCE(dev_queue->trans_start, jiffies);
++
++ /* This barrier is paired with smp_mb() from dev_watchdog() */
++ smp_mb__before_atomic();
++
+ /* Must be an atomic op see netif_txq_try_stop() */
+ set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
+ }
+@@ -3462,6 +3468,12 @@ static inline void netdev_tx_sent_queue(struct netdev_queue *dev_queue,
+ if (likely(dql_avail(&dev_queue->dql) >= 0))
+ return;
+
++ /* Paired with READ_ONCE() from dev_watchdog() */
++ WRITE_ONCE(dev_queue->trans_start, jiffies);
++
++ /* This barrier is paired with smp_mb() from dev_watchdog() */
++ smp_mb__before_atomic();
++
+ set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state);
+
+ /*
+diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
+index 1d06b1e5408a53..1564d7d3ca6151 100644
+--- a/include/linux/shmem_fs.h
++++ b/include/linux/shmem_fs.h
+@@ -111,20 +111,13 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
+ int shmem_unuse(unsigned int type);
+
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
+- struct mm_struct *mm, unsigned long vm_flags);
+ unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ struct vm_area_struct *vma, pgoff_t index,
+- bool global_huge);
++ bool shmem_huge_force);
+ #else
+-static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
+- struct mm_struct *mm, unsigned long vm_flags)
+-{
+- return false;
+-}
+ static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ struct vm_area_struct *vma, pgoff_t index,
+- bool global_huge)
++ bool shmem_huge_force)
+ {
+ return 0;
+ }
+diff --git a/include/linux/task_work.h b/include/linux/task_work.h
+index cf5e7e891a7762..2964171856e00d 100644
+--- a/include/linux/task_work.h
++++ b/include/linux/task_work.h
+@@ -14,11 +14,14 @@ init_task_work(struct callback_head *twork, task_work_func_t func)
+ }
+
+ enum task_work_notify_mode {
+- TWA_NONE,
++ TWA_NONE = 0,
+ TWA_RESUME,
+ TWA_SIGNAL,
+ TWA_SIGNAL_NO_IPI,
+ TWA_NMI_CURRENT,
++
++ TWA_FLAGS = 0xff00,
++ TWAF_NO_ALLOC = 0x0100,
+ };
+
+ static inline bool task_work_pending(struct task_struct *task)
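
TWA_FLAGS packs modifier bits into the high byte of the notify argument, so existing task_work_add() callers that pass a bare mode keep working; the task_work.c hunk later in this patch strips them with "notify & TWA_FLAGS" before dispatching. A standalone sketch of the encoding, with names mirroring but not identical to the header:

#include <assert.h>

enum notify_mode {
	MODE_NONE = 0,
	MODE_RESUME,

	MODE_FLAGS = 0xff00,		/* mask for the flag byte */
	MODEF_NO_ALLOC = 0x0100,
};

int main(void)
{
	int notify = MODE_RESUME | MODEF_NO_ALLOC;
	int flags = notify & MODE_FLAGS;	/* extract the flag bits */

	notify &= ~MODE_FLAGS;			/* recover the plain mode */
	assert(notify == MODE_RESUME);
	assert(flags & MODEF_NO_ALLOC);
	return 0;
}
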
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index d8e4105a2f21b1..39c7cf82b0c221 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -33,6 +33,13 @@
+ })
+ #endif
+
++#ifdef masked_user_access_begin
++ #define can_do_masked_user_access() 1
++#else
++ #define can_do_masked_user_access() 0
++ #define masked_user_access_begin(src) NULL
++#endif
++
+ /*
+ * Architectures should provide two primitives (raw_copy_{to,from}_user())
+ * and get rid of their private instances of copy_{to,from}_user() and
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 5d655e109b2c0a..f66bc85c6411dd 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -403,6 +403,7 @@ int bt_sock_register(int proto, const struct net_proto_family *ops);
+ void bt_sock_unregister(int proto);
+ void bt_sock_link(struct bt_sock_list *l, struct sock *s);
+ void bt_sock_unlink(struct bt_sock_list *l, struct sock *s);
++bool bt_sock_linked(struct bt_sock_list *l, struct sock *s);
+ struct sock *bt_sock_alloc(struct net *net, struct socket *sock,
+ struct proto *prot, int proto, gfp_t prio, int kern);
+ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+diff --git a/include/net/genetlink.h b/include/net/genetlink.h
+index 9ab49bfeae789a..c1d91f1d20f6c9 100644
+--- a/include/net/genetlink.h
++++ b/include/net/genetlink.h
+@@ -531,13 +531,12 @@ static inline int genlmsg_multicast(const struct genl_family *family,
+ * @skb: netlink message as socket buffer
+ * @portid: own netlink portid to avoid sending to yourself
+ * @group: offset of multicast group in groups array
+- * @flags: allocation flags
+ *
+ * This function must hold the RTNL or rcu_read_lock().
+ */
+ int genlmsg_multicast_allns(const struct genl_family *family,
+ struct sk_buff *skb, u32 portid,
+- unsigned int group, gfp_t flags);
++ unsigned int group);
+
+ /**
+ * genlmsg_unicast - unicast a netlink message
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 2d4149075091b1..f127fc268a5ef3 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2715,6 +2715,11 @@ static inline bool sk_is_stream_unix(const struct sock *sk)
+ return sk->sk_family == AF_UNIX && sk->sk_type == SOCK_STREAM;
+ }
+
++static inline bool sk_is_vsock(const struct sock *sk)
++{
++ return sk->sk_family == AF_VSOCK;
++}
++
+ /**
+ * sk_eat_skb - Release a skb if it is no longer needed
+ * @sk: socket to eat this skb from
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 54cef89f6c1ec0..2a98d14b036fab 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -349,20 +349,25 @@ struct xfrm_if_cb {
+ void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb);
+ void xfrm_if_unregister_cb(void);
+
++struct xfrm_dst_lookup_params {
++ struct net *net;
++ int tos;
++ int oif;
++ xfrm_address_t *saddr;
++ xfrm_address_t *daddr;
++ u32 mark;
++ __u8 ipproto;
++ union flowi_uli uli;
++};
++
+ struct net_device;
+ struct xfrm_type;
+ struct xfrm_dst;
+ struct xfrm_policy_afinfo {
+ struct dst_ops *dst_ops;
+- struct dst_entry *(*dst_lookup)(struct net *net,
+- int tos, int oif,
+- const xfrm_address_t *saddr,
+- const xfrm_address_t *daddr,
+- u32 mark);
+- int (*get_saddr)(struct net *net, int oif,
+- xfrm_address_t *saddr,
+- xfrm_address_t *daddr,
+- u32 mark);
++ struct dst_entry *(*dst_lookup)(const struct xfrm_dst_lookup_params *params);
++ int (*get_saddr)(xfrm_address_t *saddr,
++ const struct xfrm_dst_lookup_params *params);
+ int (*fill_dst)(struct xfrm_dst *xdst,
+ struct net_device *dev,
+ const struct flowi *fl);
+@@ -1735,10 +1740,7 @@ static inline int xfrm_user_policy(struct sock *sk, int optname,
+ }
+ #endif
+
+-struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
+- const xfrm_address_t *saddr,
+- const xfrm_address_t *daddr,
+- int family, u32 mark);
++struct dst_entry *__xfrm_dst_lookup(int family, const struct xfrm_dst_lookup_params *params);
+
+ struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp);
+
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 35bcf52dbc6526..b44b3b6c6a8f47 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1121,6 +1121,9 @@ enum bpf_attach_type {
+
+ #define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE
+
++/* Add BPF_LINK_TYPE(type, name) in bpf_types.h to keep bpf_link_type_strs[]
++ * in sync with the definitions below.
++ */
+ enum bpf_link_type {
+ BPF_LINK_TYPE_UNSPEC = 0,
+ BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
+@@ -6046,11 +6049,6 @@ enum {
+ BPF_F_MARK_ENFORCE = (1ULL << 6),
+ };
+
+-/* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */
+-enum {
+- BPF_F_INGRESS = (1ULL << 0),
+-};
+-
+ /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
+ enum {
+ BPF_F_TUNINFO_IPV6 = (1ULL << 0),
+@@ -6197,10 +6195,12 @@ enum {
+ BPF_F_BPRM_SECUREEXEC = (1ULL << 0),
+ };
+
+-/* Flags for bpf_redirect_map helper */
++/* Flags for bpf_redirect and bpf_redirect_map helpers */
+ enum {
+- BPF_F_BROADCAST = (1ULL << 3),
+- BPF_F_EXCLUDE_INGRESS = (1ULL << 4),
++ BPF_F_INGRESS = (1ULL << 0), /* used for skb path */
++ BPF_F_BROADCAST = (1ULL << 3), /* used for XDP path */
++ BPF_F_EXCLUDE_INGRESS = (1ULL << 4), /* used for XDP path */
++#define BPF_F_REDIRECT_FLAGS (BPF_F_INGRESS | BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS)
+ };
+
+ #define __bpf_md_ptr(type, name) \
+diff --git a/include/uapi/sound/asoc.h b/include/uapi/sound/asoc.h
+index 99333cbd3114ec..c117672d44394c 100644
+--- a/include/uapi/sound/asoc.h
++++ b/include/uapi/sound/asoc.h
+@@ -88,7 +88,7 @@
+
+ /* ABI version */
+ #define SND_SOC_TPLG_ABI_VERSION 0x5 /* current version */
+-#define SND_SOC_TPLG_ABI_VERSION_MIN 0x4 /* oldest version supported */
++#define SND_SOC_TPLG_ABI_VERSION_MIN 0x5 /* oldest version supported */
+
+ /* Max size of TLV data */
+ #define SND_SOC_TPLG_TLV_SIZE 32
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 7783b16b87cfef..5f4f1d0bc23a47 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -3528,7 +3528,7 @@ static int btf_get_field_type(const struct btf *btf, const struct btf_type *var_
+ * (i + 1) * elem_size
+ * where i is the repeat index and elem_size is the size of an element.
+ */
+-static int btf_repeat_fields(struct btf_field_info *info,
++static int btf_repeat_fields(struct btf_field_info *info, int info_cnt,
+ u32 field_cnt, u32 repeat_cnt, u32 elem_size)
+ {
+ u32 i, j;
+@@ -3548,6 +3548,12 @@ static int btf_repeat_fields(struct btf_field_info *info,
+ }
+ }
+
++ /* Struct sizes and variable sizes are both u32, so the
++ * multiplication cannot overflow.
++ */
++ if (field_cnt * (repeat_cnt + 1) > info_cnt)
++ return -E2BIG;
++
+ cur = field_cnt;
+ for (i = 0; i < repeat_cnt; i++) {
+ memcpy(&info[cur], &info[0], field_cnt * sizeof(info[0]));
+@@ -3592,7 +3598,7 @@ static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *
+ info[i].off += off;
+
+ if (nelems > 1) {
+- err = btf_repeat_fields(info, ret, nelems - 1, t->size);
++ err = btf_repeat_fields(info, info_cnt, ret, nelems - 1, t->size);
+ if (err == 0)
+ ret *= nelems;
+ else
+@@ -3686,10 +3692,10 @@ static int btf_find_field_one(const struct btf *btf,
+
+ if (ret == BTF_FIELD_IGNORE)
+ return 0;
+- if (nelems > info_cnt)
++ if (!info_cnt)
+ return -E2BIG;
+ if (nelems > 1) {
+- ret = btf_repeat_fields(info, 1, nelems - 1, sz);
++ ret = btf_repeat_fields(info, info_cnt, 1, nelems - 1, sz);
+ if (ret < 0)
+ return ret;
+ }
+@@ -8905,6 +8911,7 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
+ if (!type) {
+ bpf_log(ctx->log, "relo #%u: bad type id %u\n",
+ relo_idx, relo->type_id);
++ kfree(specs);
+ return -EINVAL;
+ }
+
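
The btf_repeat_fields() change bounds the expanded field count against the caller's array capacity before the copy loop runs, since field_cnt * (repeat_cnt + 1) entries are written in total. A standalone sketch of the same guard, with simplified types:

#include <errno.h>
#include <stdio.h>
#include <string.h>

#define INFO_CNT 8

static int repeat_fields(int *info, int info_cnt, int field_cnt, int repeat_cnt)
{
	int cur, i;

	/* total entries written: field_cnt * (repeat_cnt + 1) */
	if (field_cnt * (repeat_cnt + 1) > info_cnt)
		return -E2BIG;

	cur = field_cnt;
	for (i = 0; i < repeat_cnt; i++) {
		memcpy(&info[cur], &info[0], field_cnt * sizeof(info[0]));
		cur += field_cnt;
	}
	return 0;
}

int main(void)
{
	int info[INFO_CNT] = { 1, 2 };

	printf("%d\n", repeat_fields(info, INFO_CNT, 2, 3));	/* 0: fits exactly */
	printf("%d\n", repeat_fields(info, INFO_CNT, 2, 7));	/* -E2BIG: would overrun */
	return 0;
}
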
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 9e0e3b0a18e406..7878be18e9d264 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -333,9 +333,11 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
+
+ static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog,
+ struct xdp_frame **frames, int n,
+- struct net_device *dev)
++ struct net_device *tx_dev,
++ struct net_device *rx_dev)
+ {
+- struct xdp_txq_info txq = { .dev = dev };
++ struct xdp_txq_info txq = { .dev = tx_dev };
++ struct xdp_rxq_info rxq = { .dev = rx_dev };
+ struct xdp_buff xdp;
+ int i, nframes = 0;
+
+@@ -346,6 +348,7 @@ static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog,
+
+ xdp_convert_frame_to_buff(xdpf, &xdp);
+ xdp.txq = &txq;
++ xdp.rxq = &rxq;
+
+ act = bpf_prog_run_xdp(xdp_prog, &xdp);
+ switch (act) {
+@@ -360,7 +363,7 @@ static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog,
+ bpf_warn_invalid_xdp_action(NULL, xdp_prog, act);
+ fallthrough;
+ case XDP_ABORTED:
+- trace_xdp_exception(dev, xdp_prog, act);
++ trace_xdp_exception(tx_dev, xdp_prog, act);
+ fallthrough;
+ case XDP_DROP:
+ xdp_return_frame_rx_napi(xdpf);
+@@ -388,7 +391,7 @@ static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
+ }
+
+ if (bq->xdp_prog) {
+- to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev);
++ to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev, bq->dev_rx);
+ if (!to_send)
+ goto out;
+ }
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index c9e235807caca0..cc8c00864a6807 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -111,7 +111,7 @@ const struct bpf_func_proto bpf_map_pop_elem_proto = {
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_CONST_MAP_PTR,
+- .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT,
++ .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT | MEM_WRITE,
+ };
+
+ BPF_CALL_2(bpf_map_peek_elem, struct bpf_map *, map, void *, value)
+@@ -124,7 +124,7 @@ const struct bpf_func_proto bpf_map_peek_elem_proto = {
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_CONST_MAP_PTR,
+- .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT,
++ .arg2_type = ARG_PTR_TO_MAP_VALUE | MEM_UNINIT | MEM_WRITE,
+ };
+
+ BPF_CALL_3(bpf_map_lookup_percpu_elem, struct bpf_map *, map, void *, key, u32, cpu)
+@@ -539,7 +539,7 @@ const struct bpf_func_proto bpf_strtol_proto = {
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
+ .arg2_type = ARG_CONST_SIZE,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED,
+ .arg4_size = sizeof(s64),
+ };
+
+@@ -569,7 +569,7 @@ const struct bpf_func_proto bpf_strtoul_proto = {
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
+ .arg2_type = ARG_CONST_SIZE,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED,
+ .arg4_size = sizeof(u64),
+ };
+
+@@ -1745,7 +1745,7 @@ static const struct bpf_func_proto bpf_dynptr_from_mem_proto = {
+ .arg1_type = ARG_PTR_TO_UNINIT_MEM,
+ .arg2_type = ARG_CONST_SIZE_OR_ZERO,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL | MEM_UNINIT,
++ .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL | MEM_UNINIT | MEM_WRITE,
+ };
+
+ BPF_CALL_5(bpf_dynptr_read, void *, dst, u32, len, const struct bpf_dynptr_kern *, src,
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index af5d2ffadd70bc..00b8dc8ef73855 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -880,7 +880,7 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ const struct btf_type *enum_t;
+ const char *enum_pfx;
+ u64 *delegate_msk, msk = 0;
+- char *p;
++ char *p, *str;
+ int val;
+
+ /* ignore errors, fallback to hex */
+@@ -911,7 +911,8 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ return -EINVAL;
+ }
+
+- while ((p = strsep(¶m->string, ":"))) {
++ str = param->string;
++ while ((p = strsep(&str, ":"))) {
+ if (strcmp(p, "any") == 0) {
+ msk |= ~0ULL;
+ } else if (find_btf_enum_const(info.btf, enum_t, enum_pfx, p, &val)) {
+diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
+index 5aebfc3051e3a6..4a858fdb6476f8 100644
+--- a/kernel/bpf/log.c
++++ b/kernel/bpf/log.c
+@@ -688,8 +688,7 @@ static void print_reg_state(struct bpf_verifier_env *env,
+ if (t == SCALAR_VALUE && reg->precise)
+ verbose(env, "P");
+ if (t == SCALAR_VALUE && tnum_is_const(reg->var_off)) {
+- /* reg->off should be 0 for SCALAR_VALUE */
+- verbose_snum(env, reg->var_off.value + reg->off);
++ verbose_snum(env, reg->var_off.value);
+ return;
+ }
+
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index e20b90c3613169..e1cfe890e0be64 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -29,7 +29,7 @@ struct bpf_ringbuf {
+ u64 mask;
+ struct page **pages;
+ int nr_pages;
+- spinlock_t spinlock ____cacheline_aligned_in_smp;
++ raw_spinlock_t spinlock ____cacheline_aligned_in_smp;
+ /* For user-space producer ring buffers, an atomic_t busy bit is used
+ * to synchronize access to the ring buffers in the kernel, rather than
+ * the spinlock that is used for kernel-producer ring buffers. This is
+@@ -173,7 +173,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
+ if (!rb)
+ return NULL;
+
+- spin_lock_init(&rb->spinlock);
++ raw_spin_lock_init(&rb->spinlock);
+ atomic_set(&rb->busy, 0);
+ init_waitqueue_head(&rb->waitq);
+ init_irq_work(&rb->work, bpf_ringbuf_notify);
+@@ -421,10 +421,10 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ cons_pos = smp_load_acquire(&rb->consumer_pos);
+
+ if (in_nmi()) {
+- if (!spin_trylock_irqsave(&rb->spinlock, flags))
++ if (!raw_spin_trylock_irqsave(&rb->spinlock, flags))
+ return NULL;
+ } else {
+- spin_lock_irqsave(&rb->spinlock, flags);
++ raw_spin_lock_irqsave(&rb->spinlock, flags);
+ }
+
+ pend_pos = rb->pending_pos;
+@@ -450,7 +450,7 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ */
+ if (new_prod_pos - cons_pos > rb->mask ||
+ new_prod_pos - pend_pos > rb->mask) {
+- spin_unlock_irqrestore(&rb->spinlock, flags);
++ raw_spin_unlock_irqrestore(&rb->spinlock, flags);
+ return NULL;
+ }
+
+@@ -462,7 +462,7 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ /* pairs with consumer's smp_load_acquire() */
+ smp_store_release(&rb->producer_pos, new_prod_pos);
+
+- spin_unlock_irqrestore(&rb->spinlock, flags);
++ raw_spin_unlock_irqrestore(&rb->spinlock, flags);
+
+ return (void *)hdr + BPF_RINGBUF_HDR_SZ;
+ }
+@@ -632,7 +632,7 @@ const struct bpf_func_proto bpf_ringbuf_reserve_dynptr_proto = {
+ .arg1_type = ARG_CONST_MAP_PTR,
+ .arg2_type = ARG_ANYTHING,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | MEM_UNINIT,
++ .arg4_type = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | MEM_UNINIT | MEM_WRITE,
+ };
+
+ BPF_CALL_2(bpf_ringbuf_submit_dynptr, struct bpf_dynptr_kern *, ptr, u64, flags)
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 21fb9c4d498fb0..19b590b5aaec93 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -3636,15 +3636,16 @@ static void bpf_perf_link_dealloc(struct bpf_link *link)
+ }
+
+ static int bpf_perf_link_fill_common(const struct perf_event *event,
+- char __user *uname, u32 ulen,
++ char __user *uname, u32 *ulenp,
+ u64 *probe_offset, u64 *probe_addr,
+ u32 *fd_type, unsigned long *missed)
+ {
+ const char *buf;
+- u32 prog_id;
++ u32 prog_id, ulen;
+ size_t len;
+ int err;
+
++ ulen = *ulenp;
+ if (!ulen ^ !uname)
+ return -EINVAL;
+
+@@ -3652,10 +3653,17 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
+ probe_offset, probe_addr, missed);
+ if (err)
+ return err;
++
++ if (buf) {
++ len = strlen(buf);
++ *ulenp = len + 1;
++ } else {
++ *ulenp = 1;
++ }
+ if (!uname)
+ return 0;
++
+ if (buf) {
+- len = strlen(buf);
+ err = bpf_copy_to_user(uname, buf, ulen, len);
+ if (err)
+ return err;
+@@ -3680,7 +3688,7 @@ static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
+
+ uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
+ ulen = info->perf_event.kprobe.name_len;
+- err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
++ err = bpf_perf_link_fill_common(event, uname, &ulen, &offset, &addr,
+ &type, &missed);
+ if (err)
+ return err;
+@@ -3688,7 +3696,7 @@ static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
+ info->perf_event.type = BPF_PERF_EVENT_KRETPROBE;
+ else
+ info->perf_event.type = BPF_PERF_EVENT_KPROBE;
+-
++ info->perf_event.kprobe.name_len = ulen;
+ info->perf_event.kprobe.offset = offset;
+ info->perf_event.kprobe.missed = missed;
+ if (!kallsyms_show_value(current_cred()))
+@@ -3710,7 +3718,7 @@ static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
+
+ uname = u64_to_user_ptr(info->perf_event.uprobe.file_name);
+ ulen = info->perf_event.uprobe.name_len;
+- err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
++ err = bpf_perf_link_fill_common(event, uname, &ulen, &offset, &addr,
+ &type, NULL);
+ if (err)
+ return err;
+@@ -3719,6 +3727,7 @@ static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
+ info->perf_event.type = BPF_PERF_EVENT_URETPROBE;
+ else
+ info->perf_event.type = BPF_PERF_EVENT_UPROBE;
++ info->perf_event.uprobe.name_len = ulen;
+ info->perf_event.uprobe.offset = offset;
+ info->perf_event.uprobe.cookie = event->bpf_cookie;
+ return 0;
+@@ -3744,12 +3753,18 @@ static int bpf_perf_link_fill_tracepoint(const struct perf_event *event,
+ {
+ char __user *uname;
+ u32 ulen;
++ int err;
+
+ uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name);
+ ulen = info->perf_event.tracepoint.name_len;
++ err = bpf_perf_link_fill_common(event, uname, &ulen, NULL, NULL, NULL, NULL);
++ if (err)
++ return err;
++
+ info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT;
++ info->perf_event.tracepoint.name_len = ulen;
+ info->perf_event.tracepoint.cookie = event->bpf_cookie;
+- return bpf_perf_link_fill_common(event, uname, ulen, NULL, NULL, NULL, NULL);
++ return 0;
+ }
+
+ static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
+@@ -5958,7 +5973,7 @@ static const struct bpf_func_proto bpf_kallsyms_lookup_name_proto = {
+ .arg1_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_CONST_SIZE_OR_ZERO,
+ .arg3_type = ARG_ANYTHING,
+- .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg4_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED,
+ .arg4_size = sizeof(u64),
+ };
+
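
bpf_perf_link_fill_common() now takes the name length as an in/out parameter: the kernel reports back the size it actually has (strlen + 1, or 1 when there is no name), so userspace can probe with a zero length and retry with a buffer of the right size. A sketch of that in/out pattern, with a hypothetical fill_name() standing in for the kernel helper:

#include <stdio.h>
#include <string.h>

static int fill_name(char *ubuf, unsigned int *ulenp, const char *kname)
{
	unsigned int want = (unsigned int)strlen(kname) + 1;

	if (ubuf && *ulenp)
		snprintf(ubuf, *ulenp, "%s", kname);	/* copy what fits */
	*ulenp = want;		/* always report the required size */
	return 0;
}

int main(void)
{
	const char *name = "sched/sched_switch";
	unsigned int len = 0;
	char buf[64];

	fill_name(NULL, &len, name);		/* size query, no copy */
	if (len <= sizeof(buf)) {
		fill_name(buf, &len, name);	/* second call fetches it */
		printf("%s (%u bytes)\n", buf, len);
	}
	return 0;
}
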
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index 02aa9db8d79616..5af9e130e500fa 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -99,7 +99,7 @@ static struct task_struct *task_seq_get_next(struct bpf_iter_seq_task_common *co
+ rcu_read_lock();
+ pid = find_pid_ns(common->pid, common->ns);
+ if (pid) {
+- task = get_pid_task(pid, PIDTYPE_TGID);
++ task = get_pid_task(pid, PIDTYPE_PID);
+ *tid = common->pid;
+ }
+ rcu_read_unlock();
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index d5215cb1747f16..77b60896200ef0 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2715,10 +2715,16 @@ static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
+ b->module = mod;
+ b->offset = offset;
+
++ /* sort() reorders entries by value, so b may no longer point
++ * to the right entry after this
++ */
+ sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
+ kfunc_btf_cmp_by_off, NULL);
++ } else {
++ btf = b->btf;
+ }
+- return b->btf;
++
++ return btf;
+ }
+
+ void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab)
+@@ -6249,10 +6255,10 @@ static void coerce_reg_to_size_sx(struct bpf_reg_state *reg, int size)
+
+ /* both of s64_max/s64_min positive or negative */
+ if ((s64_max >= 0) == (s64_min >= 0)) {
+- reg->smin_value = reg->s32_min_value = s64_min;
+- reg->smax_value = reg->s32_max_value = s64_max;
+- reg->umin_value = reg->u32_min_value = s64_min;
+- reg->umax_value = reg->u32_max_value = s64_max;
++ reg->s32_min_value = reg->smin_value = s64_min;
++ reg->s32_max_value = reg->smax_value = s64_max;
++ reg->u32_min_value = reg->umin_value = s64_min;
++ reg->u32_max_value = reg->umax_value = s64_max;
+ reg->var_off = tnum_range(s64_min, s64_max);
+ return;
+ }
+@@ -7338,7 +7344,8 @@ static int check_stack_range_initialized(
+ }
+
+ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+- int access_size, bool zero_size_allowed,
++ int access_size, enum bpf_access_type access_type,
++ bool zero_size_allowed,
+ struct bpf_call_arg_meta *meta)
+ {
+ struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno];
+@@ -7350,7 +7357,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ return check_packet_access(env, regno, reg->off, access_size,
+ zero_size_allowed);
+ case PTR_TO_MAP_KEY:
+- if (meta && meta->raw_mode) {
++ if (access_type == BPF_WRITE) {
+ verbose(env, "R%d cannot write into %s\n", regno,
+ reg_type_str(env, reg->type));
+ return -EACCES;
+@@ -7358,15 +7365,13 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ return check_mem_region_access(env, regno, reg->off, access_size,
+ reg->map_ptr->key_size, false);
+ case PTR_TO_MAP_VALUE:
+- if (check_map_access_type(env, regno, reg->off, access_size,
+- meta && meta->raw_mode ? BPF_WRITE :
+- BPF_READ))
++ if (check_map_access_type(env, regno, reg->off, access_size, access_type))
+ return -EACCES;
+ return check_map_access(env, regno, reg->off, access_size,
+ zero_size_allowed, ACCESS_HELPER);
+ case PTR_TO_MEM:
+ if (type_is_rdonly_mem(reg->type)) {
+- if (meta && meta->raw_mode) {
++ if (access_type == BPF_WRITE) {
+ verbose(env, "R%d cannot write into %s\n", regno,
+ reg_type_str(env, reg->type));
+ return -EACCES;
+@@ -7377,7 +7382,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ zero_size_allowed);
+ case PTR_TO_BUF:
+ if (type_is_rdonly_mem(reg->type)) {
+- if (meta && meta->raw_mode) {
++ if (access_type == BPF_WRITE) {
+ verbose(env, "R%d cannot write into %s\n", regno,
+ reg_type_str(env, reg->type));
+ return -EACCES;
+@@ -7405,7 +7410,6 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ * Dynamically check it now.
+ */
+ if (!env->ops->convert_ctx_access) {
+- enum bpf_access_type atype = meta && meta->raw_mode ? BPF_WRITE : BPF_READ;
+ int offset = access_size - 1;
+
+ /* Allow zero-byte read from PTR_TO_CTX */
+@@ -7413,7 +7417,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ return zero_size_allowed ? 0 : -EACCES;
+
+ return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
+- atype, -1, false, false);
++ access_type, -1, false, false);
+ }
+
+ fallthrough;
+@@ -7438,6 +7442,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ */
+ static int check_mem_size_reg(struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg, u32 regno,
++ enum bpf_access_type access_type,
+ bool zero_size_allowed,
+ struct bpf_call_arg_meta *meta)
+ {
+@@ -7453,15 +7458,12 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
+ */
+ meta->msize_max_value = reg->umax_value;
+
+- /* The register is SCALAR_VALUE; the access check
+- * happens using its boundaries.
++ /* The register is SCALAR_VALUE; the access check happens using
++ * its boundaries. For unprivileged variable accesses, disable
++ * raw mode so that the program is required to initialize all
++ * the memory that the helper could just partially fill up.
+ */
+ if (!tnum_is_const(reg->var_off))
+- /* For unprivileged variable accesses, disable raw
+- * mode so that the program is required to
+- * initialize all the memory that the helper could
+- * just partially fill up.
+- */
+ meta = NULL;
+
+ if (reg->smin_value < 0) {
+@@ -7481,9 +7483,8 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
+ regno);
+ return -EACCES;
+ }
+- err = check_helper_mem_access(env, regno - 1,
+- reg->umax_value,
+- zero_size_allowed, meta);
++ err = check_helper_mem_access(env, regno - 1, reg->umax_value,
++ access_type, zero_size_allowed, meta);
+ if (!err)
+ err = mark_chain_precision(env, regno);
+ return err;
+@@ -7494,13 +7495,11 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
+ {
+ bool may_be_null = type_may_be_null(reg->type);
+ struct bpf_reg_state saved_reg;
+- struct bpf_call_arg_meta meta;
+ int err;
+
+ if (register_is_null(reg))
+ return 0;
+
+- memset(&meta, 0, sizeof(meta));
+ /* Assuming that the register contains a value check if the memory
+ * access is safe. Temporarily save and restore the register's state as
+ * the conversion shouldn't be visible to a caller.
+@@ -7510,10 +7509,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
+ mark_ptr_not_null_reg(reg);
+ }
+
+- err = check_helper_mem_access(env, regno, mem_size, true, &meta);
+- /* Check access for BPF_WRITE */
+- meta.raw_mode = true;
+- err = err ?: check_helper_mem_access(env, regno, mem_size, true, &meta);
++ err = check_helper_mem_access(env, regno, mem_size, BPF_READ, true, NULL);
++ err = err ?: check_helper_mem_access(env, regno, mem_size, BPF_WRITE, true, NULL);
+
+ if (may_be_null)
+ *reg = saved_reg;
+@@ -7539,13 +7536,12 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
+ mark_ptr_not_null_reg(mem_reg);
+ }
+
+- err = check_mem_size_reg(env, reg, regno, true, &meta);
+- /* Check access for BPF_WRITE */
+- meta.raw_mode = true;
+- err = err ?: check_mem_size_reg(env, reg, regno, true, &meta);
++ err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta);
++ err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta);
+
+ if (may_be_null)
+ *mem_reg = saved_reg;
++
+ return err;
+ }
+
+@@ -8819,9 +8815,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ verbose(env, "invalid map_ptr to access map->key\n");
+ return -EACCES;
+ }
+- err = check_helper_mem_access(env, regno,
+- meta->map_ptr->key_size, false,
+- NULL);
++ err = check_helper_mem_access(env, regno, meta->map_ptr->key_size,
++ BPF_READ, false, NULL);
+ break;
+ case ARG_PTR_TO_MAP_VALUE:
+ if (type_may_be_null(arg_type) && register_is_null(reg))
+@@ -8836,9 +8831,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ return -EACCES;
+ }
+ meta->raw_mode = arg_type & MEM_UNINIT;
+- err = check_helper_mem_access(env, regno,
+- meta->map_ptr->value_size, false,
+- meta);
++ err = check_helper_mem_access(env, regno, meta->map_ptr->value_size,
++ arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
++ false, meta);
+ break;
+ case ARG_PTR_TO_PERCPU_BTF_ID:
+ if (!reg->btf_id) {
+@@ -8880,7 +8875,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ */
+ meta->raw_mode = arg_type & MEM_UNINIT;
+ if (arg_type & MEM_FIXED_SIZE) {
+- err = check_helper_mem_access(env, regno, fn->arg_size[arg], false, meta);
++ err = check_helper_mem_access(env, regno, fn->arg_size[arg],
++ arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
++ false, meta);
+ if (err)
+ return err;
+ if (arg_type & MEM_ALIGNED)
+@@ -8888,10 +8885,16 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ }
+ break;
+ case ARG_CONST_SIZE:
+- err = check_mem_size_reg(env, reg, regno, false, meta);
++ err = check_mem_size_reg(env, reg, regno,
++ fn->arg_type[arg - 1] & MEM_WRITE ?
++ BPF_WRITE : BPF_READ,
++ false, meta);
+ break;
+ case ARG_CONST_SIZE_OR_ZERO:
+- err = check_mem_size_reg(env, reg, regno, true, meta);
++ err = check_mem_size_reg(env, reg, regno,
++ fn->arg_type[arg - 1] & MEM_WRITE ?
++ BPF_WRITE : BPF_READ,
++ true, meta);
+ break;
+ case ARG_PTR_TO_DYNPTR:
+ err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
+@@ -14130,12 +14133,13 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
+ * r1 += 0x1
+ * if r2 < 1000 goto ...
+ * use r1 in memory access
+- * So remember constant delta between r2 and r1 and update r1 after
+- * 'if' condition.
++ * So for 64-bit ALU ops, remember the constant delta between r2 and
++ * r1 and update r1 after the 'if' condition.
+ */
+- if (env->bpf_capable && BPF_OP(insn->code) == BPF_ADD &&
+- dst_reg->id && is_reg_const(src_reg, alu32)) {
+- u64 val = reg_const_value(src_reg, alu32);
++ if (env->bpf_capable &&
++ BPF_OP(insn->code) == BPF_ADD && !alu32 &&
++ dst_reg->id && is_reg_const(src_reg, false)) {
++ u64 val = reg_const_value(src_reg, false);
+
+ if ((dst_reg->id & BPF_ADD_CONST) ||
+ /* prevent overflow in find_equal_scalars() later */
+@@ -15140,8 +15144,12 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
+ continue;
+ if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) ||
+ reg->off == known_reg->off) {
++ s32 saved_subreg_def = reg->subreg_def;
++
+ copy_register_state(reg, known_reg);
++ reg->subreg_def = saved_subreg_def;
+ } else {
++ s32 saved_subreg_def = reg->subreg_def;
+ s32 saved_off = reg->off;
+
+ fake_reg.type = SCALAR_VALUE;
+@@ -15154,6 +15162,7 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
+ * otherwise another find_equal_scalars() will be incorrect.
+ */
+ reg->off = saved_off;
++ reg->subreg_def = saved_subreg_def;
+
+ scalar32_min_max_add(reg, &fake_reg);
+ scalar_min_max_add(reg, &fake_reg);
+@@ -20666,7 +20675,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ delta += cnt - 1;
+ env->prog = prog = new_prog;
+ insn = new_prog->insnsi + i + delta;
+- continue;
++ goto next_insn;
+ }
+
+ /* Implement bpf_kptr_xchg inline */
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1af59cf714cd3d..3713341c2e7205 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -10262,7 +10262,9 @@ void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
+ return;
+ if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
+ return;
+- task_work_add(curr, work, TWA_RESUME);
++
++ /* No page allocation under rq lock */
++ task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
+ }
+
+ void sched_mm_cid_exit_signals(struct task_struct *t)
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index 5d14d639ac71b5..c969f1f26be58a 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -55,15 +55,26 @@ int task_work_add(struct task_struct *task, struct callback_head *work,
+ enum task_work_notify_mode notify)
+ {
+ struct callback_head *head;
++ int flags = notify & TWA_FLAGS;
+
++ notify &= ~TWA_FLAGS;
+ if (notify == TWA_NMI_CURRENT) {
+ if (WARN_ON_ONCE(task != current))
+ return -EINVAL;
+ if (!IS_ENABLED(CONFIG_IRQ_WORK))
+ return -EINVAL;
+ } else {
+- /* record the work call stack in order to print it in KASAN reports */
+- kasan_record_aux_stack(work);
++ /*
++ * Record the work call stack in order to print it in KASAN
++ * reports.
++ *
++ * Note that stack allocation can fail if the TWAF_NO_ALLOC flag
++ * is set and a new page is needed to expand the stack buffer.
++ */
++ if (flags & TWAF_NO_ALLOC)
++ kasan_record_aux_stack_noalloc(work);
++ else
++ kasan_record_aux_stack(work);
+ }
+
+ head = READ_ONCE(task->task_works);
+diff --git a/kernel/time/posix-clock.c b/kernel/time/posix-clock.c
+index 6b5d5c0021fae6..a6487a9d608533 100644
+--- a/kernel/time/posix-clock.c
++++ b/kernel/time/posix-clock.c
+@@ -310,6 +310,9 @@ static int pc_clock_settime(clockid_t id, const struct timespec64 *ts)
+ struct posix_clock_desc cd;
+ int err;
+
++ if (!timespec64_valid_strict(ts))
++ return -EINVAL;
++
+ err = get_clock_desc(id, &cd);
+ if (err)
+ return err;
+@@ -319,9 +322,6 @@ static int pc_clock_settime(clockid_t id, const struct timespec64 *ts)
+ goto out;
+ }
+
+- if (!timespec64_valid_strict(ts))
+- return -EINVAL;
+-
+ if (cd.clk->ops.clock_settime)
+ err = cd.clk->ops.clock_settime(cd.clk, ts);
+ else
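
Hoisting the timespec64 check above get_clock_desc() matters for more than style: in the old order the -EINVAL return bypassed the "goto out" cleanup, leaking the clock descriptor reference. Validating inputs before acquiring resources keeps every failure path trivially balanced, as in this standalone sketch:

#include <errno.h>
#include <stdio.h>

static int refs;

static int acquire(void) { refs++; return 0; }
static void release(void) { refs--; }

static int do_settime(long tv_nsec)
{
	int err;

	if (tv_nsec < 0 || tv_nsec >= 1000000000L)
		return -EINVAL;		/* nothing held yet, plain return is safe */

	err = acquire();
	if (err)
		return err;
	/* ... perform the operation ... */
	release();
	return 0;
}

int main(void)
{
	do_settime(-1);				/* rejected before acquire() */
	printf("leaked refs: %d\n", refs);	/* 0 */
	return 0;
}
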
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index add26dc27d7e37..6dbbb3683ab2e9 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1226,7 +1226,7 @@ static const struct bpf_func_proto bpf_get_func_arg_proto = {
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+- .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED,
+ .arg3_size = sizeof(u64),
+ };
+
+@@ -1243,7 +1243,7 @@ static const struct bpf_func_proto bpf_get_func_ret_proto = {
+ .func = get_func_ret,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+- .arg2_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg2_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_WRITE | MEM_ALIGNED,
+ .arg2_size = sizeof(u64),
+ };
+
+@@ -2306,8 +2306,6 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
+
+ old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
+ ret = bpf_prog_array_copy(old_array, event->prog, NULL, 0, &new_array);
+- if (ret == -ENOENT)
+- goto unlock;
+ if (ret < 0) {
+ bpf_prog_array_delete_safe(old_array, event->prog);
+ } else {
+@@ -3222,7 +3220,8 @@ static int bpf_uprobe_multi_link_fill_link_info(const struct bpf_link *link,
+ struct bpf_uprobe_multi_link *umulti_link;
+ u32 ucount = info->uprobe_multi.count;
+ int err = 0, i;
+- long left;
++ char *p, *buf;
++ long left = 0;
+
+ if (!upath ^ !upath_size)
+ return -EINVAL;
+@@ -3236,26 +3235,23 @@ static int bpf_uprobe_multi_link_fill_link_info(const struct bpf_link *link,
+ info->uprobe_multi.pid = umulti_link->task ?
+ task_pid_nr_ns(umulti_link->task, task_active_pid_ns(current)) : 0;
+
+- if (upath) {
+- char *p, *buf;
+-
+- upath_size = min_t(u32, upath_size, PATH_MAX);
+-
+- buf = kmalloc(upath_size, GFP_KERNEL);
+- if (!buf)
+- return -ENOMEM;
+- p = d_path(&umulti_link->path, buf, upath_size);
+- if (IS_ERR(p)) {
+- kfree(buf);
+- return PTR_ERR(p);
+- }
+- upath_size = buf + upath_size - p;
+- left = copy_to_user(upath, p, upath_size);
++ upath_size = upath_size ? min_t(u32, upath_size, PATH_MAX) : PATH_MAX;
++ buf = kmalloc(upath_size, GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
++ p = d_path(&umulti_link->path, buf, upath_size);
++ if (IS_ERR(p)) {
+ kfree(buf);
+- if (left)
+- return -EFAULT;
+- info->uprobe_multi.path_size = upath_size;
++ return PTR_ERR(p);
+ }
++ upath_size = buf + upath_size - p;
++
++ if (upath)
++ left = copy_to_user(upath, p, upath_size);
++ kfree(buf);
++ if (left)
++ return -EFAULT;
++ info->uprobe_multi.path_size = upath_size;
+
+ if (!uoffsets && !ucookies && !uref_ctr_offsets)
+ return 0;
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 43f4e3f57438b4..69e226a48daa92 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1162,7 +1162,8 @@ static int start_graph_tracing(void)
+ unsigned long **ret_stack_list;
+ int ret;
+
+- ret_stack_list = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
++ ret_stack_list = kcalloc(FTRACE_RETSTACK_ALLOC_SIZE,
++ sizeof(*ret_stack_list), GFP_KERNEL);
+
+ if (!ret_stack_list)
+ return -ENOMEM;
+@@ -1251,10 +1252,10 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ int ret = 0;
+ int i = -1;
+
+- mutex_lock(&ftrace_lock);
++ guard(mutex)(&ftrace_lock);
+
+ if (!fgraph_initialized) {
+- ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph_idle_init",
++ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph:online",
+ fgraph_cpu_init, NULL);
+ if (ret < 0) {
+ pr_warn("fgraph: Error to init cpu hotplug support\n");
+@@ -1272,10 +1273,8 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ }
+
+ i = fgraph_lru_alloc_index();
+- if (i < 0 || WARN_ON_ONCE(fgraph_array[i] != &fgraph_stub)) {
+- ret = -ENOSPC;
+- goto out;
+- }
++ if (i < 0 || WARN_ON_ONCE(fgraph_array[i] != &fgraph_stub))
++ return -ENOSPC;
+ gops->idx = i;
+
+ ftrace_graph_active++;
+@@ -1312,8 +1311,6 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ gops->saved_func = NULL;
+ fgraph_lru_release_index(i);
+ }
+-out:
+- mutex_unlock(&ftrace_lock);
+ return ret;
+ }
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index cebd879a30cbd1..fb7b092e793137 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -6008,39 +6008,38 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ }
+
+ for_each_buffer_cpu(buffer, cpu) {
++ struct buffer_data_page *old_free_data_page;
++ struct list_head old_pages;
++ unsigned long flags;
+
+ if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ continue;
+
+ cpu_buffer = buffer->buffers[cpu];
+
++ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++
+ /* Clear the head bit to make the link list normal to read */
+ rb_head_page_deactivate(cpu_buffer);
+
+- /* Now walk the list and free all the old sub buffers */
+- list_for_each_entry_safe(bpage, tmp, cpu_buffer->pages, list) {
+- list_del_init(&bpage->list);
+- free_buffer_page(bpage);
+- }
+- /* The above loop stopped an the last page needing to be freed */
+- bpage = list_entry(cpu_buffer->pages, struct buffer_page, list);
+- free_buffer_page(bpage);
+-
+- /* Free the current reader page */
+- free_buffer_page(cpu_buffer->reader_page);
++ /*
++ * Collect buffers from the cpu_buffer pages list and the
++ * reader_page on old_pages, so they can be freed later when not
++ * under a spinlock. The pages list is a linked list with no
++ * head; adding old_pages turns it into a regular list with
++ * old_pages as the head.
++ */
++ list_add(&old_pages, cpu_buffer->pages);
++ list_add(&cpu_buffer->reader_page->list, &old_pages);
+
+ /* One page was allocated for the reader page */
+ cpu_buffer->reader_page = list_entry(cpu_buffer->new_pages.next,
+ struct buffer_page, list);
+ list_del_init(&cpu_buffer->reader_page->list);
+
+- /* The cpu_buffer pages are a link list with no head */
++ /* Install the new pages, remove the head from the list */
+ cpu_buffer->pages = cpu_buffer->new_pages.next;
+- cpu_buffer->new_pages.next->prev = cpu_buffer->new_pages.prev;
+- cpu_buffer->new_pages.prev->next = cpu_buffer->new_pages.next;
+-
+- /* Clear the new_pages list */
+- INIT_LIST_HEAD(&cpu_buffer->new_pages);
++ list_del_init(&cpu_buffer->new_pages);
+
+ cpu_buffer->head_page
+ = list_entry(cpu_buffer->pages, struct buffer_page, list);
+@@ -6049,11 +6048,20 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ cpu_buffer->nr_pages = cpu_buffer->nr_pages_to_update;
+ cpu_buffer->nr_pages_to_update = 0;
+
+- free_pages((unsigned long)cpu_buffer->free_page, old_order);
++ old_free_data_page = cpu_buffer->free_page;
+ cpu_buffer->free_page = NULL;
+
+ rb_head_page_activate(cpu_buffer);
+
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++
++ /* Free old sub buffers */
++ list_for_each_entry_safe(bpage, tmp, &old_pages, list) {
++ list_del_init(&bpage->list);
++ free_buffer_page(bpage);
++ }
++ free_pages((unsigned long)old_free_data_page, old_order);
++
+ rb_check_pages(cpu_buffer);
+ }
+
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index b0e0ec85912e98..ebda68ee9abff9 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -912,6 +912,11 @@ static int __trace_eprobe_create(int argc, const char *argv[])
+ }
+ }
+
++ if (argc - 2 > MAX_TRACE_ARGS) {
++ ret = -E2BIG;
++ goto error;
++ }
++
+ mutex_lock(&event_mutex);
+ event_call = find_and_get_event(sys_name, sys_event);
+ ep = alloc_event_probe(group, event, event_call, argc - 2);
+@@ -937,7 +942,7 @@ static int __trace_eprobe_create(int argc, const char *argv[])
+
+ argc -= 2; argv += 2;
+ /* parse arguments */
+- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {
++ for (i = 0; i < argc; i++) {
+ trace_probe_log_set_index(i + 2);
+ ret = trace_eprobe_tp_update_arg(ep, argv, i);
+ if (ret)
+diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
+index 62e6a8f4aae9b7..5107d466a12930 100644
+--- a/kernel/trace/trace_fprobe.c
++++ b/kernel/trace/trace_fprobe.c
+@@ -1104,6 +1104,10 @@ static int __trace_fprobe_create(int argc, const char *argv[])
+ argc = new_argc;
+ argv = new_argv;
+ }
++ if (argc > MAX_TRACE_ARGS) {
++ ret = -E2BIG;
++ goto out;
++ }
+
+ ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);
+ if (ret)
+@@ -1124,7 +1128,7 @@ static int __trace_fprobe_create(int argc, const char *argv[])
+ (unsigned long)tf->tpoint->probestub);
+
+ /* parse arguments */
+- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {
++ for (i = 0; i < argc; i++) {
+ trace_probe_log_set_index(i + 2);
+ ctx.offset = 0;
+ ret = traceprobe_parse_probe_arg(&tf->tp, i, argv[i], &ctx);
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 61a6da808203e0..263fac44d3ca32 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1013,6 +1013,10 @@ static int __trace_kprobe_create(int argc, const char *argv[])
+ argc = new_argc;
+ argv = new_argv;
+ }
++ if (argc > MAX_TRACE_ARGS) {
++ ret = -E2BIG;
++ goto out;
++ }
+
+ ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);
+ if (ret)
+@@ -1029,7 +1033,7 @@ static int __trace_kprobe_create(int argc, const char *argv[])
+ }
+
+ /* parse arguments */
+- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {
++ for (i = 0; i < argc; i++) {
+ trace_probe_log_set_index(i + 2);
+ ctx.offset = 0;
+ ret = traceprobe_parse_probe_arg(&tk->tp, i, argv[i], &ctx);
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 39877c80d6cb9a..16a5e368e7b77c 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -276,7 +276,7 @@ int traceprobe_parse_event_name(const char **pevent, const char **pgroup,
+ }
+ trace_probe_log_err(offset, NO_EVENT_NAME);
+ return -EINVAL;
+- } else if (len > MAX_EVENT_NAME_LEN) {
++ } else if (len >= MAX_EVENT_NAME_LEN) {
+ trace_probe_log_err(offset, EVENT_TOO_LONG);
+ return -EINVAL;
+ }
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index c98e3b3386badb..6de07df05ed8c0 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -556,6 +556,8 @@ static int __trace_uprobe_create(int argc, const char **argv)
+
+ if (argc < 2)
+ return -ECANCELED;
++ if (argc - 2 > MAX_TRACE_ARGS)
++ return -E2BIG;
+
+ if (argv[0][1] == ':')
+ event = &argv[0][2];
+@@ -681,7 +683,7 @@ static int __trace_uprobe_create(int argc, const char **argv)
+ tu->filename = filename;
+
+ /* parse arguments */
+- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {
++ for (i = 0; i < argc; i++) {
+ struct traceprobe_parse_context ctx = {
+ .flags = (is_return ? TPARG_FL_RETURN : 0) | TPARG_FL_USER,
+ };
+@@ -858,6 +860,7 @@ struct uprobe_cpu_buffer {
+ };
+ static struct uprobe_cpu_buffer __percpu *uprobe_cpu_buffer;
+ static int uprobe_buffer_refcnt;
++#define MAX_UCB_BUFFER_SIZE PAGE_SIZE
+
+ static int uprobe_buffer_init(void)
+ {
+@@ -962,6 +965,11 @@ static struct uprobe_cpu_buffer *prepare_uprobe_buffer(struct trace_uprobe *tu,
+ ucb = uprobe_buffer_get();
+ ucb->dsize = tu->tp.size + dsize;
+
++ if (WARN_ON_ONCE(ucb->dsize > MAX_UCB_BUFFER_SIZE)) {
++ ucb->dsize = MAX_UCB_BUFFER_SIZE;
++ dsize = MAX_UCB_BUFFER_SIZE - tu->tp.size;
++ }
++
+ store_trace_args(ucb->buf, &tu->tp, regs, NULL, esize, dsize);
+
+ *ucbp = ucb;
+@@ -981,9 +989,6 @@ static void __uprobe_trace_func(struct trace_uprobe *tu,
+
+ WARN_ON(call != trace_file->event_call);
+
+- if (WARN_ON_ONCE(ucb->dsize > PAGE_SIZE))
+- return;
+-
+ if (trace_trigger_soft_disabled(trace_file))
+ return;
+
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index a30c03a6617263..8079f5c2dfe4f8 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -3023,7 +3023,7 @@ config RUST_BUILD_ASSERT_ALLOW
+ bool "Allow unoptimized build-time assertions"
+ depends on RUST
+ help
+- Controls how are `build_error!` and `build_assert!` handled during build.
++ Controls how `build_error!` and `build_assert!` are handled during the build.
+
+ If calls to them exist in the binary, it may indicate a violated invariant
+ or that the optimizer failed to verify the invariant during compilation.
+diff --git a/lib/objpool.c b/lib/objpool.c
+index 234f9d0bd081a5..fd108fe0d095a7 100644
+--- a/lib/objpool.c
++++ b/lib/objpool.c
+@@ -76,7 +76,7 @@ objpool_init_percpu_slots(struct objpool_head *pool, int nr_objs,
+ * mimimal size of vmalloc is one page since vmalloc would
+ * always align the requested size to page size
+ */
+- if (pool->gfp & GFP_ATOMIC)
++ if ((pool->gfp & GFP_ATOMIC) == GFP_ATOMIC)
+ slot = kmalloc_node(size, pool->gfp, cpu_to_node(i));
+ else
+ slot = __vmalloc_node(size, sizeof(void *), pool->gfp,
+diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
+index 6432b8c3e431ec..989a12a6787214 100644
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -120,6 +120,15 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (unlikely(count <= 0))
+ return 0;
+
++ if (can_do_masked_user_access()) {
++ long retval;
++
++ src = masked_user_access_begin(src);
++ retval = do_strncpy_from_user(dst, src, count, count);
++ user_read_access_end();
++ return retval;
++ }
++
+ max_addr = TASK_SIZE_MAX;
+ src_addr = (unsigned long)untagged_addr(src);
+ if (likely(src_addr < max_addr)) {
+diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
+index feeb935a229911..6e489f9e90f158 100644
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -96,6 +96,15 @@ long strnlen_user(const char __user *str, long count)
+ if (unlikely(count <= 0))
+ return 0;
+
++ if (can_do_masked_user_access()) {
++ long retval;
++
++ str = masked_user_access_begin(str);
++ retval = do_strnlen_user(str, count, count);
++ user_read_access_end();
++ return retval;
++ }
++
+ max_addr = TASK_SIZE_MAX;
+ src_addr = (unsigned long)untagged_addr(str);
+ if (likely(src_addr < max_addr)) {
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 99b146d16a1850..e44508e46e8979 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -106,18 +106,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
+ if (!vma->vm_mm) /* vdso */
+ return 0;
+
+- /*
+- * Explicitly disabled through madvise or prctl, or some
+- * architectures may disable THP for some mappings, for
+- * example, s390 kvm.
+- * */
+- if ((vm_flags & VM_NOHUGEPAGE) ||
+- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+- return 0;
+- /*
+- * If the hardware/firmware marked hugepage support disabled.
+- */
+- if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
++ if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags))
+ return 0;
+
+ /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
+@@ -159,15 +148,10 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
+ * Must be done before hugepage flags check since shmem has its
+ * own flags.
+ */
+- if (!in_pf && shmem_file(vma->vm_file)) {
+- bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
+- !enforce_sysfs, vma->vm_mm, vm_flags);
+-
+- if (!vma_is_anon_shmem(vma))
+- return global_huge ? orders : 0;
++ if (!in_pf && shmem_file(vma->vm_file))
+ return shmem_allowable_huge_orders(file_inode(vma->vm_file),
+- vma, vma->vm_pgoff, global_huge);
+- }
++ vma, vma->vm_pgoff,
++ !enforce_sysfs);
+
+ if (!vma_is_anonymous(vma)) {
+ /*
+diff --git a/mm/memory.c b/mm/memory.c
+index cda2c12c500b8d..cb66345e398d29 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4719,6 +4719,15 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
+ pmd_t entry;
+ vm_fault_t ret = VM_FAULT_FALLBACK;
+
++ /*
++ * It is too late to allocate a small folio, we already have a large
++ * folio in the pagecache: especially s390 KVM cannot tolerate any
++ * PMD mappings, but PTE-mapped THP are fine. So let's simply refuse any
++ * PMD mappings if THPs are disabled.
++ */
++ if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags))
++ return ret;
++
+ if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
+ return ret;
+
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 5a77acf6ac6a62..27f496d6e43eb6 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -548,10 +548,11 @@ static bool shmem_confirm_swap(struct address_space *mapping,
+
+ static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
+
+-static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
+- bool shmem_huge_force, struct mm_struct *mm,
+- unsigned long vm_flags)
++static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
++ bool shmem_huge_force, struct vm_area_struct *vma,
++ unsigned long vm_flags)
+ {
++ struct mm_struct *mm = vma ? vma->vm_mm : NULL;
+ loff_t i_size;
+
+ if (!S_ISREG(inode->i_mode))
+@@ -581,14 +582,15 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
+ }
+ }
+
+-bool shmem_is_huge(struct inode *inode, pgoff_t index,
+- bool shmem_huge_force, struct mm_struct *mm,
++static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
++ bool shmem_huge_force, struct vm_area_struct *vma,
+ unsigned long vm_flags)
+ {
+ if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
+ return false;
+
+- return __shmem_is_huge(inode, index, shmem_huge_force, mm, vm_flags);
++ return __shmem_huge_global_enabled(inode, index, shmem_huge_force,
++ vma, vm_flags);
+ }
+
+ #if defined(CONFIG_SYSFS)
+@@ -771,6 +773,13 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ {
+ return 0;
+ }
++
++static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
++ bool shmem_huge_force, struct vm_area_struct *vma,
++ unsigned long vm_flags)
++{
++ return false;
++}
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+ /*
+@@ -1156,7 +1165,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
+ STATX_ATTR_NODUMP);
+ generic_fillattr(idmap, request_mask, inode, stat);
+
+- if (shmem_is_huge(inode, 0, false, NULL, 0))
++ if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
+ stat->blksize = HPAGE_PMD_SIZE;
+
+ if (request_mask & STATX_BTIME) {
+@@ -1624,21 +1633,27 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ struct vm_area_struct *vma, pgoff_t index,
+- bool global_huge)
++ bool shmem_huge_force)
+ {
+ unsigned long mask = READ_ONCE(huge_shmem_orders_always);
+ unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
+- unsigned long vm_flags = vma->vm_flags;
++ unsigned long vm_flags = vma ? vma->vm_flags : 0;
++ bool global_huge;
+ loff_t i_size;
+ int order;
+
+- if ((vm_flags & VM_NOHUGEPAGE) ||
+- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++ if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags)))
+ return 0;
+
+- /* If the hardware/firmware marked hugepage support disabled. */
+- if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
+- return 0;
++ global_huge = shmem_huge_global_enabled(inode, index, shmem_huge_force,
++ vma, vm_flags);
++ if (!vma || !vma_is_anon_shmem(vma)) {
++ /*
++ * For tmpfs, we now only support PMD sized THP if huge page
++ * is enabled, otherwise fallback to order 0.
++ */
++ return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
++ }
+
+ /*
+ * Following the 'deny' semantics of the top level, force the huge
+@@ -2085,7 +2100,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
+ struct mm_struct *fault_mm;
+ struct folio *folio;
+ int error;
+- bool alloced, huge;
++ bool alloced;
+ unsigned long orders = 0;
+
+ if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
+@@ -2158,14 +2173,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
+ return 0;
+ }
+
+- huge = shmem_is_huge(inode, index, false, fault_mm,
+- vma ? vma->vm_flags : 0);
+- /* Find hugepage orders that are allowed for anonymous shmem. */
+- if (vma && vma_is_anon_shmem(vma))
+- orders = shmem_allowable_huge_orders(inode, vma, index, huge);
+- else if (huge)
+- orders = BIT(HPAGE_PMD_ORDER);
+-
++ /* Find hugepage orders that are allowed for anonymous shmem and tmpfs. */
++ orders = shmem_allowable_huge_orders(inode, vma, index, false);
+ if (orders > 0) {
+ gfp_t huge_gfp;
+
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index e39fba5565c5d4..0b4d0a8bd36141 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -185,6 +185,28 @@ void bt_sock_unlink(struct bt_sock_list *l, struct sock *sk)
+ }
+ EXPORT_SYMBOL(bt_sock_unlink);
+
++bool bt_sock_linked(struct bt_sock_list *l, struct sock *s)
++{
++ struct sock *sk;
++
++ if (!l || !s)
++ return false;
++
++ read_lock(&l->lock);
++
++ sk_for_each(sk, &l->head) {
++ if (s == sk) {
++ read_unlock(&l->lock);
++ return true;
++ }
++ }
++
++ read_unlock(&l->lock);
++
++ return false;
++}
++EXPORT_SYMBOL(bt_sock_linked);
++
+ void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh)
+ {
+ const struct cred *old_cred;
+diff --git a/net/bluetooth/bnep/core.c b/net/bluetooth/bnep/core.c
+index ec45f77fce2181..344e2e063be684 100644
+--- a/net/bluetooth/bnep/core.c
++++ b/net/bluetooth/bnep/core.c
+@@ -745,8 +745,7 @@ static int __init bnep_init(void)
+ if (flt[0])
+ BT_INFO("BNEP filters: %s", flt);
+
+- bnep_sock_init();
+- return 0;
++ return bnep_sock_init();
+ }
+
+ static void __exit bnep_exit(void)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index b2f8f9c5b61066..6e07350817bec2 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1644,12 +1644,12 @@ void hci_adv_instances_clear(struct hci_dev *hdev)
+ struct adv_info *adv_instance, *n;
+
+ if (hdev->adv_instance_timeout) {
+- cancel_delayed_work(&hdev->adv_instance_expire);
++ disable_delayed_work(&hdev->adv_instance_expire);
+ hdev->adv_instance_timeout = 0;
+ }
+
+ list_for_each_entry_safe(adv_instance, n, &hdev->adv_instances, list) {
+- cancel_delayed_work_sync(&adv_instance->rpa_expired_cb);
++ disable_delayed_work_sync(&adv_instance->rpa_expired_cb);
+ list_del(&adv_instance->list);
+ kfree(adv_instance);
+ }
+@@ -2685,11 +2685,11 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ list_del(&hdev->list);
+ write_unlock(&hci_dev_list_lock);
+
+- cancel_work_sync(&hdev->rx_work);
+- cancel_work_sync(&hdev->cmd_work);
+- cancel_work_sync(&hdev->tx_work);
+- cancel_work_sync(&hdev->power_on);
+- cancel_work_sync(&hdev->error_reset);
++ disable_work_sync(&hdev->rx_work);
++ disable_work_sync(&hdev->cmd_work);
++ disable_work_sync(&hdev->tx_work);
++ disable_work_sync(&hdev->power_on);
++ disable_work_sync(&hdev->error_reset);
+
+ hci_cmd_sync_clear(hdev);
+
+@@ -2796,8 +2796,14 @@ static void hci_cancel_cmd_sync(struct hci_dev *hdev, int err)
+ {
+ bt_dev_dbg(hdev, "err 0x%2.2x", err);
+
+- cancel_delayed_work_sync(&hdev->cmd_timer);
+- cancel_delayed_work_sync(&hdev->ncmd_timer);
++ if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
++ disable_delayed_work_sync(&hdev->cmd_timer);
++ disable_delayed_work_sync(&hdev->ncmd_timer);
++ } else {
++ cancel_delayed_work_sync(&hdev->cmd_timer);
++ cancel_delayed_work_sync(&hdev->ncmd_timer);
++ }
++
+ atomic_set(&hdev->cmd_cnt, 1);
+
+ hci_cmd_sync_cancel_sync(hdev, err);
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 40ccdef168d7da..ae7a5817883aad 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -5131,9 +5131,15 @@ int hci_dev_close_sync(struct hci_dev *hdev)
+
+ bt_dev_dbg(hdev, "");
+
+- cancel_delayed_work(&hdev->power_off);
+- cancel_delayed_work(&hdev->ncmd_timer);
+- cancel_delayed_work(&hdev->le_scan_disable);
++ if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
++ disable_delayed_work(&hdev->power_off);
++ disable_delayed_work(&hdev->ncmd_timer);
++ disable_delayed_work(&hdev->le_scan_disable);
++ } else {
++ cancel_delayed_work(&hdev->power_off);
++ cancel_delayed_work(&hdev->ncmd_timer);
++ cancel_delayed_work(&hdev->le_scan_disable);
++ }
+
+ hci_cmd_sync_cancel_sync(hdev, ENODEV);
+
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index c9eefb43bf47e3..7a83e400ac77a0 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -93,6 +93,16 @@ static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
+ #define ISO_CONN_TIMEOUT (HZ * 40)
+ #define ISO_DISCONN_TIMEOUT (HZ * 2)
+
++static struct sock *iso_sock_hold(struct iso_conn *conn)
++{
++ if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk))
++ return NULL;
++
++ sock_hold(conn->sk);
++
++ return conn->sk;
++}
++
+ static void iso_sock_timeout(struct work_struct *work)
+ {
+ struct iso_conn *conn = container_of(work, struct iso_conn,
+@@ -100,9 +110,7 @@ static void iso_sock_timeout(struct work_struct *work)
+ struct sock *sk;
+
+ iso_conn_lock(conn);
+- sk = conn->sk;
+- if (sk)
+- sock_hold(sk);
++ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
+
+ if (!sk)
+@@ -209,9 +217,7 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+
+ /* Kill socket */
+ iso_conn_lock(conn);
+- sk = conn->sk;
+- if (sk)
+- sock_hold(sk);
++ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
+
+ if (sk) {
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index a5ac160c592eb9..1c7252a3686694 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -76,6 +76,16 @@ struct sco_pinfo {
+ #define SCO_CONN_TIMEOUT (HZ * 40)
+ #define SCO_DISCONN_TIMEOUT (HZ * 2)
+
++static struct sock *sco_sock_hold(struct sco_conn *conn)
++{
++ if (!conn || !bt_sock_linked(&sco_sk_list, conn->sk))
++ return NULL;
++
++ sock_hold(conn->sk);
++
++ return conn->sk;
++}
++
+ static void sco_sock_timeout(struct work_struct *work)
+ {
+ struct sco_conn *conn = container_of(work, struct sco_conn,
+@@ -87,9 +97,7 @@ static void sco_sock_timeout(struct work_struct *work)
+ sco_conn_unlock(conn);
+ return;
+ }
+- sk = conn->sk;
+- if (sk)
+- sock_hold(sk);
++ sk = sco_sock_hold(conn);
+ sco_conn_unlock(conn);
+
+ if (!sk)
+@@ -194,9 +202,7 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+
+ /* Kill socket */
+ sco_conn_lock(conn);
+- sk = conn->sk;
+- if (sk)
+- sock_hold(sk);
++ sk = sco_sock_hold(conn);
+ sco_conn_unlock(conn);
+
+ if (sk) {
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 0e719c7c43bb7f..b2b551401bc298 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2437,9 +2437,9 @@ static int __bpf_redirect_neigh(struct sk_buff *skb, struct net_device *dev,
+
+ /* Internal, non-exposed redirect flags. */
+ enum {
+- BPF_F_NEIGH = (1ULL << 1),
+- BPF_F_PEER = (1ULL << 2),
+- BPF_F_NEXTHOP = (1ULL << 3),
++ BPF_F_NEIGH = (1ULL << 16),
++ BPF_F_PEER = (1ULL << 17),
++ BPF_F_NEXTHOP = (1ULL << 18),
+ #define BPF_F_REDIRECT_INTERNAL (BPF_F_NEIGH | BPF_F_PEER | BPF_F_NEXTHOP)
+ };
+
+@@ -2449,6 +2449,8 @@ BPF_CALL_3(bpf_clone_redirect, struct sk_buff *, skb, u32, ifindex, u64, flags)
+ struct sk_buff *clone;
+ int ret;
+
++ BUILD_BUG_ON(BPF_F_REDIRECT_INTERNAL & BPF_F_REDIRECT_FLAGS);
++
+ if (unlikely(flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL)))
+ return -EINVAL;
+
+@@ -6261,24 +6263,16 @@ BPF_CALL_5(bpf_skb_check_mtu, struct sk_buff *, skb,
+ {
+ int ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
+ struct net_device *dev = skb->dev;
+- int skb_len, dev_len;
+- int mtu = 0;
+-
+- if (unlikely(flags & ~(BPF_MTU_CHK_SEGS))) {
+- ret = -EINVAL;
+- goto out;
+- }
++ int mtu, dev_len, skb_len;
+
+- if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len))) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (unlikely(flags & ~(BPF_MTU_CHK_SEGS)))
++ return -EINVAL;
++ if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len)))
++ return -EINVAL;
+
+ dev = __dev_via_ifindex(dev, ifindex);
+- if (unlikely(!dev)) {
+- ret = -ENODEV;
+- goto out;
+- }
++ if (unlikely(!dev))
++ return -ENODEV;
+
+ mtu = READ_ONCE(dev->mtu);
+ dev_len = mtu + dev->hard_header_len;
+@@ -6313,19 +6307,15 @@ BPF_CALL_5(bpf_xdp_check_mtu, struct xdp_buff *, xdp,
+ struct net_device *dev = xdp->rxq->dev;
+ int xdp_len = xdp->data_end - xdp->data;
+ int ret = BPF_MTU_CHK_RET_SUCCESS;
+- int mtu = 0, dev_len;
++ int mtu, dev_len;
+
+ /* XDP variant doesn't support multi-buffer segment check (yet) */
+- if (unlikely(flags)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (unlikely(flags))
++ return -EINVAL;
+
+ dev = __dev_via_ifindex(dev, ifindex);
+- if (unlikely(!dev)) {
+- ret = -ENODEV;
+- goto out;
+- }
++ if (unlikely(!dev))
++ return -ENODEV;
+
+ mtu = READ_ONCE(dev->mtu);
+ dev_len = mtu + dev->hard_header_len;
+@@ -6337,7 +6327,7 @@ BPF_CALL_5(bpf_xdp_check_mtu, struct xdp_buff *, xdp,
+ xdp_len += len_diff; /* minus result pass check */
+ if (xdp_len > dev_len)
+ ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
+-out:
++
+ *mtu_len = mtu;
+ return ret;
+ }
+@@ -6348,7 +6338,7 @@ static const struct bpf_func_proto bpf_skb_check_mtu_proto = {
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+- .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_WRITE | MEM_ALIGNED,
+ .arg3_size = sizeof(u32),
+ .arg4_type = ARG_ANYTHING,
+ .arg5_type = ARG_ANYTHING,
+@@ -6360,7 +6350,7 @@ static const struct bpf_func_proto bpf_xdp_check_mtu_proto = {
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+- .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++ .arg3_type = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_WRITE | MEM_ALIGNED,
+ .arg3_size = sizeof(u32),
+ .arg4_type = ARG_ANYTHING,
+ .arg5_type = ARG_ANYTHING,
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 724b6856fcc3e9..219fd8f1ca2a42 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -656,6 +656,8 @@ BPF_CALL_4(bpf_sk_redirect_map, struct sk_buff *, skb,
+ sk = __sock_map_lookup_elem(map, key);
+ if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
+ return SK_DROP;
++ if ((flags & BPF_F_INGRESS) && sk_is_vsock(sk))
++ return SK_DROP;
+
+ skb_bpf_set_redir(skb, sk, flags & BPF_F_INGRESS);
+ return SK_PASS;
+@@ -684,6 +686,8 @@ BPF_CALL_4(bpf_msg_redirect_map, struct sk_msg *, msg,
+ return SK_DROP;
+ if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk))
+ return SK_DROP;
++ if (sk_is_vsock(sk))
++ return SK_DROP;
+
+ msg->flags = flags;
+ msg->sk_redir = sk;
+@@ -1258,6 +1262,8 @@ BPF_CALL_4(bpf_sk_redirect_hash, struct sk_buff *, skb,
+ sk = __sock_hash_lookup_elem(map, key);
+ if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
+ return SK_DROP;
++ if ((flags & BPF_F_INGRESS) && sk_is_vsock(sk))
++ return SK_DROP;
+
+ skb_bpf_set_redir(skb, sk, flags & BPF_F_INGRESS);
+ return SK_PASS;
+@@ -1286,6 +1292,8 @@ BPF_CALL_4(bpf_msg_redirect_hash, struct sk_msg *, msg,
+ return SK_DROP;
+ if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk))
+ return SK_DROP;
++ if (sk_is_vsock(sk))
++ return SK_DROP;
+
+ msg->flags = flags;
+ msg->sk_redir = sk;
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index ddab1511645425..4d0c8d501ab7cf 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -288,17 +288,19 @@ static struct in_device *inetdev_init(struct net_device *dev)
+ /* Account for reference dev->ip_ptr (below) */
+ refcount_set(&in_dev->refcnt, 1);
+
+- err = devinet_sysctl_register(in_dev);
+- if (err) {
+- in_dev->dead = 1;
+- neigh_parms_release(&arp_tbl, in_dev->arp_parms);
+- in_dev_put(in_dev);
+- in_dev = NULL;
+- goto out;
++ if (dev != blackhole_netdev) {
++ err = devinet_sysctl_register(in_dev);
++ if (err) {
++ in_dev->dead = 1;
++ neigh_parms_release(&arp_tbl, in_dev->arp_parms);
++ in_dev_put(in_dev);
++ in_dev = NULL;
++ goto out;
++ }
++ ip_mc_init_dev(in_dev);
++ if (dev->flags & IFF_UP)
++ ip_mc_up(in_dev);
+ }
+- ip_mc_init_dev(in_dev);
+- if (dev->flags & IFF_UP)
+- ip_mc_up(in_dev);
+
+ /* we can receive as soon as ip_ptr is set -- do this last */
+ rcu_assign_pointer(dev->ip_ptr, in_dev);
+@@ -337,6 +339,19 @@ static void inetdev_destroy(struct in_device *in_dev)
+ in_dev_put(in_dev);
+ }
+
++static int __init inet_blackhole_dev_init(void)
++{
++ int err = 0;
++
++ rtnl_lock();
++ if (!inetdev_init(blackhole_netdev))
++ err = -ENOMEM;
++ rtnl_unlock();
++
++ return err;
++}
++late_initcall(inet_blackhole_dev_init);
++
+ int inet_addr_onlink(struct in_device *in_dev, __be32 a, __be32 b)
+ {
+ const struct in_ifaddr *ifa;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 64d07b842e736f..cd7989b514eaa7 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1044,21 +1044,31 @@ static bool reqsk_queue_unlink(struct request_sock *req)
+ found = __sk_nulls_del_node_init_rcu(sk);
+ spin_unlock(lock);
+ }
+- if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
+- reqsk_put(req);
++
+ return found;
+ }
+
+-bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
++static bool __inet_csk_reqsk_queue_drop(struct sock *sk,
++ struct request_sock *req,
++ bool from_timer)
+ {
+ bool unlinked = reqsk_queue_unlink(req);
+
++ if (!from_timer && timer_delete_sync(&req->rsk_timer))
++ reqsk_put(req);
++
+ if (unlinked) {
+ reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
+ reqsk_put(req);
+ }
++
+ return unlinked;
+ }
++
++bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
++{
++ return __inet_csk_reqsk_queue_drop(sk, req, false);
++}
+ EXPORT_SYMBOL(inet_csk_reqsk_queue_drop);
+
+ void inet_csk_reqsk_queue_drop_and_put(struct sock *sk, struct request_sock *req)
+@@ -1151,7 +1161,7 @@ static void reqsk_timer_handler(struct timer_list *t)
+
+ if (!inet_ehash_insert(req_to_sk(nreq), req_to_sk(oreq), NULL)) {
+ /* delete timer */
+- inet_csk_reqsk_queue_drop(sk_listener, nreq);
++ __inet_csk_reqsk_queue_drop(sk_listener, nreq, true);
+ goto no_ownership;
+ }
+
+@@ -1177,7 +1187,8 @@ static void reqsk_timer_handler(struct timer_list *t)
+ }
+
+ drop:
+- inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq);
++ __inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
++ reqsk_put(req);
+ }
+
+ static bool reqsk_queue_hash_req(struct request_sock *req,
+diff --git a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c
+index 0294fef577fab1..7e1c2faed1ff99 100644
+--- a/net/ipv4/xfrm4_policy.c
++++ b/net/ipv4/xfrm4_policy.c
+@@ -17,47 +17,43 @@
+ #include <net/ip.h>
+ #include <net/l3mdev.h>
+
+-static struct dst_entry *__xfrm4_dst_lookup(struct net *net, struct flowi4 *fl4,
+- int tos, int oif,
+- const xfrm_address_t *saddr,
+- const xfrm_address_t *daddr,
+- u32 mark)
++static struct dst_entry *__xfrm4_dst_lookup(struct flowi4 *fl4,
++ const struct xfrm_dst_lookup_params *params)
+ {
+ struct rtable *rt;
+
+ memset(fl4, 0, sizeof(*fl4));
+- fl4->daddr = daddr->a4;
+- fl4->flowi4_tos = tos;
+- fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(net, oif);
+- fl4->flowi4_mark = mark;
+- if (saddr)
+- fl4->saddr = saddr->a4;
+-
+- rt = __ip_route_output_key(net, fl4);
++ fl4->daddr = params->daddr->a4;
++ fl4->flowi4_tos = params->tos;
++ fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(params->net,
++ params->oif);
++ fl4->flowi4_mark = params->mark;
++ if (params->saddr)
++ fl4->saddr = params->saddr->a4;
++ fl4->flowi4_proto = params->ipproto;
++ fl4->uli = params->uli;
++
++ rt = __ip_route_output_key(params->net, fl4);
+ if (!IS_ERR(rt))
+ return &rt->dst;
+
+ return ERR_CAST(rt);
+ }
+
+-static struct dst_entry *xfrm4_dst_lookup(struct net *net, int tos, int oif,
+- const xfrm_address_t *saddr,
+- const xfrm_address_t *daddr,
+- u32 mark)
++static struct dst_entry *xfrm4_dst_lookup(const struct xfrm_dst_lookup_params *params)
+ {
+ struct flowi4 fl4;
+
+- return __xfrm4_dst_lookup(net, &fl4, tos, oif, saddr, daddr, mark);
++ return __xfrm4_dst_lookup(&fl4, params);
+ }
+
+-static int xfrm4_get_saddr(struct net *net, int oif,
+- xfrm_address_t *saddr, xfrm_address_t *daddr,
+- u32 mark)
++static int xfrm4_get_saddr(xfrm_address_t *saddr,
++ const struct xfrm_dst_lookup_params *params)
+ {
+ struct dst_entry *dst;
+ struct flowi4 fl4;
+
+- dst = __xfrm4_dst_lookup(net, &fl4, 0, oif, NULL, daddr, mark);
++ dst = __xfrm4_dst_lookup(&fl4, params);
+ if (IS_ERR(dst))
+ return -EHOSTUNREACH;
+
+diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
+index b1d81c4270ab3a..1f19b6f14484c6 100644
+--- a/net/ipv6/xfrm6_policy.c
++++ b/net/ipv6/xfrm6_policy.c
+@@ -23,23 +23,24 @@
+ #include <net/ip6_route.h>
+ #include <net/l3mdev.h>
+
+-static struct dst_entry *xfrm6_dst_lookup(struct net *net, int tos, int oif,
+- const xfrm_address_t *saddr,
+- const xfrm_address_t *daddr,
+- u32 mark)
++static struct dst_entry *xfrm6_dst_lookup(const struct xfrm_dst_lookup_params *params)
+ {
+ struct flowi6 fl6;
+ struct dst_entry *dst;
+ int err;
+
+ memset(&fl6, 0, sizeof(fl6));
+- fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(net, oif);
+- fl6.flowi6_mark = mark;
+- memcpy(&fl6.daddr, daddr, sizeof(fl6.daddr));
+- if (saddr)
+- memcpy(&fl6.saddr, saddr, sizeof(fl6.saddr));
++ fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(params->net,
++ params->oif);
++ fl6.flowi6_mark = params->mark;
++ memcpy(&fl6.daddr, params->daddr, sizeof(fl6.daddr));
++ if (params->saddr)
++ memcpy(&fl6.saddr, params->saddr, sizeof(fl6.saddr));
+
+- dst = ip6_route_output(net, NULL, &fl6);
++ fl6.flowi4_proto = params->ipproto;
++ fl6.uli = params->uli;
++
++ dst = ip6_route_output(params->net, NULL, &fl6);
+
+ err = dst->error;
+ if (dst->error) {
+@@ -50,15 +51,14 @@ static struct dst_entry *xfrm6_dst_lookup(struct net *net, int tos, int oif,
+ return dst;
+ }
+
+-static int xfrm6_get_saddr(struct net *net, int oif,
+- xfrm_address_t *saddr, xfrm_address_t *daddr,
+- u32 mark)
++static int xfrm6_get_saddr(xfrm_address_t *saddr,
++ const struct xfrm_dst_lookup_params *params)
+ {
+ struct dst_entry *dst;
+ struct net_device *dev;
+ struct inet6_dev *idev;
+
+- dst = xfrm6_dst_lookup(net, 0, oif, NULL, daddr, mark);
++ dst = xfrm6_dst_lookup(params);
+ if (IS_ERR(dst))
+ return -EHOSTUNREACH;
+
+@@ -68,7 +68,8 @@ static int xfrm6_get_saddr(struct net *net, int oif,
+ return -EHOSTUNREACH;
+ }
+ dev = idev->dev;
+- ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6);
++ ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0,
++ &saddr->in6);
+ dst_release(dst);
+ return 0;
+ }
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index fc43ecbd128cc5..28e77a222a39bd 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -116,7 +116,7 @@ static int l2tp_tunnel_notify(struct genl_family *family,
+ NLM_F_ACK, tunnel, cmd);
+
+ if (ret >= 0) {
+- ret = genlmsg_multicast_allns(family, msg, 0, 0, GFP_ATOMIC);
++ ret = genlmsg_multicast_allns(family, msg, 0, 0);
+ /* We don't care if no one is listening */
+ if (ret == -ESRCH)
+ ret = 0;
+@@ -144,7 +144,7 @@ static int l2tp_session_notify(struct genl_family *family,
+ NLM_F_ACK, session, cmd);
+
+ if (ret >= 0) {
+- ret = genlmsg_multicast_allns(family, msg, 0, 0, GFP_ATOMIC);
++ ret = genlmsg_multicast_allns(family, msg, 0, 0);
+ /* We don't care if no one is listening */
+ if (ret == -ESRCH)
+ ret = 0;
+diff --git a/net/netfilter/nf_bpf_link.c b/net/netfilter/nf_bpf_link.c
+index 5257d5e7eb09d8..3d64a4511fcfdd 100644
+--- a/net/netfilter/nf_bpf_link.c
++++ b/net/netfilter/nf_bpf_link.c
+@@ -23,6 +23,7 @@ static unsigned int nf_hook_run_bpf(void *bpf_prog, struct sk_buff *skb,
+ struct bpf_nf_link {
+ struct bpf_link link;
+ struct nf_hook_ops hook_ops;
++ netns_tracker ns_tracker;
+ struct net *net;
+ u32 dead;
+ const struct nf_defrag_hook *defrag_hook;
+@@ -120,6 +121,7 @@ static void bpf_nf_link_release(struct bpf_link *link)
+ if (!cmpxchg(&nf_link->dead, 0, 1)) {
+ nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops);
+ bpf_nf_disable_defrag(nf_link);
++ put_net_track(nf_link->net, &nf_link->ns_tracker);
+ }
+ }
+
+@@ -150,11 +152,12 @@ static int bpf_nf_link_fill_link_info(const struct bpf_link *link,
+ struct bpf_link_info *info)
+ {
+ struct bpf_nf_link *nf_link = container_of(link, struct bpf_nf_link, link);
++ const struct nf_defrag_hook *hook = nf_link->defrag_hook;
+
+ info->netfilter.pf = nf_link->hook_ops.pf;
+ info->netfilter.hooknum = nf_link->hook_ops.hooknum;
+ info->netfilter.priority = nf_link->hook_ops.priority;
+- info->netfilter.flags = 0;
++ info->netfilter.flags = hook ? BPF_F_NETFILTER_IP_DEFRAG : 0;
+
+ return 0;
+ }
+@@ -257,6 +260,8 @@ int bpf_nf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ return err;
+ }
+
++ get_net_track(net, &link->ns_tracker, GFP_KERNEL);
++
+ return bpf_link_settle(&link_primer);
+ }
+
+diff --git a/net/netfilter/xt_NFLOG.c b/net/netfilter/xt_NFLOG.c
+index d80abd6ccaf8f7..6dcf4bc7e30b2a 100644
+--- a/net/netfilter/xt_NFLOG.c
++++ b/net/netfilter/xt_NFLOG.c
+@@ -79,7 +79,7 @@ static struct xt_target nflog_tg_reg[] __read_mostly = {
+ {
+ .name = "NFLOG",
+ .revision = 0,
+- .family = NFPROTO_IPV4,
++ .family = NFPROTO_IPV6,
+ .checkentry = nflog_tg_check,
+ .destroy = nflog_tg_destroy,
+ .target = nflog_tg,
+diff --git a/net/netfilter/xt_TRACE.c b/net/netfilter/xt_TRACE.c
+index f3fa4f11348cd8..a642ff09fc8e8c 100644
+--- a/net/netfilter/xt_TRACE.c
++++ b/net/netfilter/xt_TRACE.c
+@@ -49,6 +49,7 @@ static struct xt_target trace_tg_reg[] __read_mostly = {
+ .target = trace_tg,
+ .checkentry = trace_tg_check,
+ .destroy = trace_tg_destroy,
++ .me = THIS_MODULE,
+ },
+ #endif
+ };
+diff --git a/net/netfilter/xt_mark.c b/net/netfilter/xt_mark.c
+index f76fe04fc9a4e1..65b965ca40ea7e 100644
+--- a/net/netfilter/xt_mark.c
++++ b/net/netfilter/xt_mark.c
+@@ -62,7 +62,7 @@ static struct xt_target mark_tg_reg[] __read_mostly = {
+ {
+ .name = "MARK",
+ .revision = 2,
+- .family = NFPROTO_IPV4,
++ .family = NFPROTO_IPV6,
+ .target = mark_tg,
+ .targetsize = sizeof(struct xt_mark_tginfo2),
+ .me = THIS_MODULE,
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index feb54c63a1165f..07ad65774fe298 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1501,15 +1501,11 @@ static int genl_ctrl_event(int event, const struct genl_family *family,
+ if (IS_ERR(msg))
+ return PTR_ERR(msg);
+
+- if (!family->netnsok) {
++ if (!family->netnsok)
+ genlmsg_multicast_netns(&genl_ctrl, &init_net, msg, 0,
+ 0, GFP_KERNEL);
+- } else {
+- rcu_read_lock();
+- genlmsg_multicast_allns(&genl_ctrl, msg, 0,
+- 0, GFP_ATOMIC);
+- rcu_read_unlock();
+- }
++ else
++ genlmsg_multicast_allns(&genl_ctrl, msg, 0, 0);
+
+ return 0;
+ }
+@@ -1929,23 +1925,23 @@ static int __init genl_init(void)
+
+ core_initcall(genl_init);
+
+-static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+- gfp_t flags)
++static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group)
+ {
+ struct sk_buff *tmp;
+ struct net *net, *prev = NULL;
+ bool delivered = false;
+ int err;
+
++ rcu_read_lock();
+ for_each_net_rcu(net) {
+ if (prev) {
+- tmp = skb_clone(skb, flags);
++ tmp = skb_clone(skb, GFP_ATOMIC);
+ if (!tmp) {
+ err = -ENOMEM;
+ goto error;
+ }
+ err = nlmsg_multicast(prev->genl_sock, tmp,
+- portid, group, flags);
++ portid, group, GFP_ATOMIC);
+ if (!err)
+ delivered = true;
+ else if (err != -ESRCH)
+@@ -1954,27 +1950,31 @@ static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+
+ prev = net;
+ }
++ err = nlmsg_multicast(prev->genl_sock, skb, portid, group, GFP_ATOMIC);
++
++ rcu_read_unlock();
+
+- err = nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
+ if (!err)
+ delivered = true;
+ else if (err != -ESRCH)
+ return err;
+ return delivered ? 0 : -ESRCH;
+ error:
++ rcu_read_unlock();
++
+ kfree_skb(skb);
+ return err;
+ }
+
+ int genlmsg_multicast_allns(const struct genl_family *family,
+ struct sk_buff *skb, u32 portid,
+- unsigned int group, gfp_t flags)
++ unsigned int group)
+ {
+ if (WARN_ON_ONCE(group >= family->n_mcgrps))
+ return -EINVAL;
+
+ group = family->mcgrp_offset + group;
+- return genlmsg_mcast(skb, portid, group, flags);
++ return genlmsg_mcast(skb, portid, group);
+ }
+ EXPORT_SYMBOL(genlmsg_multicast_allns);
+
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 2714c4ed928e5a..eecad65fec92ca 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -1498,8 +1498,29 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ bool skip_sw = tc_skip_sw(fl_flags);
+ bool skip_hw = tc_skip_hw(fl_flags);
+
+- if (tc_act_bind(act->tcfa_flags))
++ if (tc_act_bind(act->tcfa_flags)) {
++ /* Action is created by classifier and is not
++ * standalone. Check that the user did not set
++ * any action flags different than the
++ * classifier flags, and inherit the flags from
++ * the classifier for the compatibility case
++ * where no flags were specified at all.
++ */
++ if ((tc_act_skip_sw(act->tcfa_flags) && !skip_sw) ||
++ (tc_act_skip_hw(act->tcfa_flags) && !skip_hw)) {
++ NL_SET_ERR_MSG(extack,
++ "Mismatch between action and filter offload flags");
++ err = -EINVAL;
++ goto err;
++ }
++ if (skip_sw)
++ act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_SW;
++ if (skip_hw)
++ act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_HW;
+ continue;
++ }
++
++ /* Action is standalone */
+ if (skip_sw != tc_act_skip_sw(act->tcfa_flags) ||
+ skip_hw != tc_act_skip_hw(act->tcfa_flags)) {
+ NL_SET_ERR_MSG(extack,
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 2af24547a82c49..38ec18f73de43a 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -512,9 +512,15 @@ static void dev_watchdog(struct timer_list *t)
+ struct netdev_queue *txq;
+
+ txq = netdev_get_tx_queue(dev, i);
+- trans_start = READ_ONCE(txq->trans_start);
+ if (!netif_xmit_stopped(txq))
+ continue;
++
++ /* Paired with WRITE_ONCE() + smp_mb...() in
++ * netdev_tx_sent_queue() and netif_tx_stop_queue().
++ */
++ smp_mb();
++ trans_start = READ_ONCE(txq->trans_start);
++
+ if (time_after(jiffies, trans_start + dev->watchdog_timeo)) {
+ timedout_ms = jiffies_to_msecs(jiffies - trans_start);
+ atomic_long_inc(&txq->trans_timeout);
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 8498d0606b248b..8623dc0bafc09b 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1965,7 +1965,8 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+
+ taprio_start_sched(sch, start, new_admin);
+
+- rcu_assign_pointer(q->admin_sched, new_admin);
++ admin = rcu_replace_pointer(q->admin_sched, new_admin,
++ lockdep_rtnl_is_held());
+ if (admin)
+ call_rcu(&admin->rcu, taprio_free_sched_cb);
+
+@@ -2373,9 +2374,6 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
+ struct tc_mqprio_qopt opt = { 0 };
+ struct nlattr *nest, *sched_nest;
+
+- oper = rtnl_dereference(q->oper_sched);
+- admin = rtnl_dereference(q->admin_sched);
+-
+ mqprio_qopt_reconstruct(dev, &opt);
+
+ nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+@@ -2396,18 +2394,23 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
+ nla_put_u32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay))
+ goto options_error;
+
++ rcu_read_lock();
++
++ oper = rtnl_dereference(q->oper_sched);
++ admin = rtnl_dereference(q->admin_sched);
++
+ if (oper && taprio_dump_tc_entries(skb, q, oper))
+- goto options_error;
++ goto options_error_rcu;
+
+ if (oper && dump_schedule(skb, oper))
+- goto options_error;
++ goto options_error_rcu;
+
+ if (!admin)
+ goto done;
+
+ sched_nest = nla_nest_start_noflag(skb, TCA_TAPRIO_ATTR_ADMIN_SCHED);
+ if (!sched_nest)
+- goto options_error;
++ goto options_error_rcu;
+
+ if (dump_schedule(skb, admin))
+ goto admin_error;
+@@ -2415,11 +2418,15 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
+ nla_nest_end(skb, sched_nest);
+
+ done:
++ rcu_read_unlock();
+ return nla_nest_end(skb, nest);
+
+ admin_error:
+ nla_nest_cancel(skb, sched_nest);
+
++options_error_rcu:
++ rcu_read_unlock();
++
+ options_error:
+ nla_nest_cancel(skb, nest);
+
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index 2adb92b8c4699c..dbcc72b43d0c08 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -753,7 +753,7 @@ static int smc_pnet_add_pnetid(struct net *net, u8 *pnetid)
+
+ write_lock(&sn->pnetids_ndev.lock);
+ list_for_each_entry(pi, &sn->pnetids_ndev.list, list) {
+- if (smc_pnet_match(pnetid, pe->pnetid)) {
++ if (smc_pnet_match(pnetid, pi->pnetid)) {
+ refcount_inc(&pi->refcnt);
+ kfree(pe);
+ goto unlock;
+diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
+index 0021065a600a03..994c0cd4fddbf1 100644
+--- a/net/smc/smc_wr.c
++++ b/net/smc/smc_wr.c
+@@ -648,8 +648,10 @@ void smc_wr_free_link(struct smc_link *lnk)
+ smc_wr_tx_wait_no_pending_sends(lnk);
+ percpu_ref_kill(&lnk->wr_reg_refs);
+ wait_for_completion(&lnk->reg_ref_comp);
++ percpu_ref_exit(&lnk->wr_reg_refs);
+ percpu_ref_kill(&lnk->wr_tx_refs);
+ wait_for_completion(&lnk->tx_ref_comp);
++ percpu_ref_exit(&lnk->wr_tx_refs);
+
+ if (lnk->wr_rx_dma_addr) {
+ ib_dma_unmap_single(ibdev, lnk->wr_rx_dma_addr,
+@@ -912,11 +914,13 @@ int smc_wr_create_link(struct smc_link *lnk)
+ init_waitqueue_head(&lnk->wr_reg_wait);
+ rc = percpu_ref_init(&lnk->wr_reg_refs, smcr_wr_reg_refs_free, 0, GFP_KERNEL);
+ if (rc)
+- goto dma_unmap;
++ goto cancel_ref;
+ init_completion(&lnk->reg_ref_comp);
+ init_waitqueue_head(&lnk->wr_rx_empty_wait);
+ return rc;
+
++cancel_ref:
++ percpu_ref_exit(&lnk->wr_tx_refs);
+ dma_unmap:
+ if (lnk->wr_rx_v2_dma_addr) {
+ ib_dma_unmap_single(ibdev, lnk->wr_rx_v2_dma_addr,
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 16ff976a86e3ed..645222ac84e3fb 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1672,6 +1672,7 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ {
+ struct virtio_vsock_sock *vvs = vsk->trans;
+ struct sock *sk = sk_vsock(vsk);
++ struct virtio_vsock_hdr *hdr;
+ struct sk_buff *skb;
+ int off = 0;
+ int err;
+@@ -1681,10 +1682,19 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ * works for types other than dgrams.
+ */
+ skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err);
++ if (!skb) {
++ spin_unlock_bh(&vvs->rx_lock);
++ return err;
++ }
++
++ hdr = virtio_vsock_hdr(skb);
++ if (le32_to_cpu(hdr->flags) & VIRTIO_VSOCK_SEQ_EOM)
++ vvs->msg_count--;
++
++ virtio_transport_dec_rx_pkt(vvs, le32_to_cpu(hdr->len));
+ spin_unlock_bh(&vvs->rx_lock);
+
+- if (!skb)
+- return err;
++ virtio_transport_send_credit_update(vsk);
+
+ return recv_actor(sk, skb);
+ }
+diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
+index c42c5cc18f3241..4aa6e74ec2957b 100644
+--- a/net/vmw_vsock/vsock_bpf.c
++++ b/net/vmw_vsock/vsock_bpf.c
+@@ -114,14 +114,6 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ return copied;
+ }
+
+-/* Copy of original proto with updated sock_map methods */
+-static struct proto vsock_bpf_prot = {
+- .close = sock_map_close,
+- .recvmsg = vsock_bpf_recvmsg,
+- .sock_is_readable = sk_msg_is_readable,
+- .unhash = sock_map_unhash,
+-};
+-
+ static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
+ {
+ *prot = *base;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f18e1716339e0d..3766efacfd64fa 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -17967,10 +17967,8 @@ void nl80211_common_reg_change_event(enum nl80211_commands cmd_id,
+
+ genlmsg_end(msg, hdr);
+
+- rcu_read_lock();
+ genlmsg_multicast_allns(&nl80211_fam, msg, 0,
+- NL80211_MCGRP_REGULATORY, GFP_ATOMIC);
+- rcu_read_unlock();
++ NL80211_MCGRP_REGULATORY);
+
+ return;
+
+@@ -18703,10 +18701,8 @@ void nl80211_send_beacon_hint_event(struct wiphy *wiphy,
+
+ genlmsg_end(msg, hdr);
+
+- rcu_read_lock();
+ genlmsg_multicast_allns(&nl80211_fam, msg, 0,
+- NL80211_MCGRP_REGULATORY, GFP_ATOMIC);
+- rcu_read_unlock();
++ NL80211_MCGRP_REGULATORY);
+
+ return;
+
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index 9a44d363ba6205..fcd67fdfe79bdc 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -269,6 +269,8 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+
+ dev = dev_get_by_index(net, xuo->ifindex);
+ if (!dev) {
++ struct xfrm_dst_lookup_params params;
++
+ if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
+ saddr = &x->props.saddr;
+ daddr = &x->id.daddr;
+@@ -277,9 +279,12 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ daddr = &x->props.saddr;
+ }
+
+- dst = __xfrm_dst_lookup(net, 0, 0, saddr, daddr,
+- x->props.family,
+- xfrm_smark_get(0, x));
++ memset(&params, 0, sizeof(params));
++ params.net = net;
++ params.saddr = saddr;
++ params.daddr = daddr;
++ params.mark = xfrm_smark_get(0, x);
++ dst = __xfrm_dst_lookup(x->props.family, &params);
+ if (IS_ERR(dst))
+ return (is_packet_offload) ? -EINVAL : 0;
+
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index c56c61b0c12ef2..d30a22cd5c621d 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -267,10 +267,8 @@ static const struct xfrm_if_cb *xfrm_if_get_cb(void)
+ return rcu_dereference(xfrm_if_cb);
+ }
+
+-struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
+- const xfrm_address_t *saddr,
+- const xfrm_address_t *daddr,
+- int family, u32 mark)
++struct dst_entry *__xfrm_dst_lookup(int family,
++ const struct xfrm_dst_lookup_params *params)
+ {
+ const struct xfrm_policy_afinfo *afinfo;
+ struct dst_entry *dst;
+@@ -279,7 +277,7 @@ struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
+ if (unlikely(afinfo == NULL))
+ return ERR_PTR(-EAFNOSUPPORT);
+
+- dst = afinfo->dst_lookup(net, tos, oif, saddr, daddr, mark);
++ dst = afinfo->dst_lookup(params);
+
+ rcu_read_unlock();
+
+@@ -293,6 +291,7 @@ static inline struct dst_entry *xfrm_dst_lookup(struct xfrm_state *x,
+ xfrm_address_t *prev_daddr,
+ int family, u32 mark)
+ {
++ struct xfrm_dst_lookup_params params;
+ struct net *net = xs_net(x);
+ xfrm_address_t *saddr = &x->props.saddr;
+ xfrm_address_t *daddr = &x->id.daddr;
+@@ -307,7 +306,29 @@ static inline struct dst_entry *xfrm_dst_lookup(struct xfrm_state *x,
+ daddr = x->coaddr;
+ }
+
+- dst = __xfrm_dst_lookup(net, tos, oif, saddr, daddr, family, mark);
++ params.net = net;
++ params.saddr = saddr;
++ params.daddr = daddr;
++ params.tos = tos;
++ params.oif = oif;
++ params.mark = mark;
++ params.ipproto = x->id.proto;
++ if (x->encap) {
++ switch (x->encap->encap_type) {
++ case UDP_ENCAP_ESPINUDP:
++ params.ipproto = IPPROTO_UDP;
++ params.uli.ports.sport = x->encap->encap_sport;
++ params.uli.ports.dport = x->encap->encap_dport;
++ break;
++ case TCP_ENCAP_ESPINTCP:
++ params.ipproto = IPPROTO_TCP;
++ params.uli.ports.sport = x->encap->encap_sport;
++ params.uli.ports.dport = x->encap->encap_dport;
++ break;
++ }
++ }
++
++ dst = __xfrm_dst_lookup(family, &params);
+
+ if (!IS_ERR(dst)) {
+ if (prev_saddr != saddr)
+@@ -2440,15 +2461,15 @@ int __xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)
+ }
+
+ static int
+-xfrm_get_saddr(struct net *net, int oif, xfrm_address_t *local,
+- xfrm_address_t *remote, unsigned short family, u32 mark)
++xfrm_get_saddr(unsigned short family, xfrm_address_t *saddr,
++ const struct xfrm_dst_lookup_params *params)
+ {
+ int err;
+ const struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+- err = afinfo->get_saddr(net, oif, local, remote, mark);
++ err = afinfo->get_saddr(saddr, params);
+ rcu_read_unlock();
+ return err;
+ }
+@@ -2477,9 +2498,14 @@ xfrm_tmpl_resolve_one(struct xfrm_policy *policy, const struct flowi *fl,
+ remote = &tmpl->id.daddr;
+ local = &tmpl->saddr;
+ if (xfrm_addr_any(local, tmpl->encap_family)) {
+- error = xfrm_get_saddr(net, fl->flowi_oif,
+- &tmp, remote,
+- tmpl->encap_family, 0);
++ struct xfrm_dst_lookup_params params;
++
++ memset(&params, 0, sizeof(params));
++ params.net = net;
++ params.oif = fl->flowi_oif;
++ params.daddr = remote;
++ error = xfrm_get_saddr(tmpl->encap_family, &tmp,
++ &params);
+ if (error)
+ goto fail;
+ local = &tmp;
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 55f039ec3d5900..eb146f574bbde7 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -201,6 +201,7 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ {
+ int err;
+ u8 sa_dir = attrs[XFRMA_SA_DIR] ? nla_get_u8(attrs[XFRMA_SA_DIR]) : 0;
++ u16 family = p->sel.family;
+
+ err = -EINVAL;
+ switch (p->family) {
+@@ -221,7 +222,10 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ goto out;
+ }
+
+- switch (p->sel.family) {
++ if (!family && !(p->flags & XFRM_STATE_AF_UNSPEC))
++ family = p->family;
++
++ switch (family) {
+ case AF_UNSPEC:
+ break;
+
+@@ -1098,7 +1102,9 @@ static int copy_to_user_auth(struct xfrm_algo_auth *auth, struct sk_buff *skb)
+ if (!nla)
+ return -EMSGSIZE;
+ ap = nla_data(nla);
+- memcpy(ap, auth, sizeof(struct xfrm_algo_auth));
++ strscpy_pad(ap->alg_name, auth->alg_name, sizeof(ap->alg_name));
++ ap->alg_key_len = auth->alg_key_len;
++ ap->alg_trunc_len = auth->alg_trunc_len;
+ if (redact_secret && auth->alg_key_len)
+ memset(ap->alg_key, 0, (auth->alg_key_len + 7) / 8);
+ else
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index c827d7d8d80035..ea3a7d69f1c461 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -172,6 +172,9 @@ static int apply_constraint_to_size(struct snd_pcm_hw_params *params,
+ step = max(step, amdtp_syt_intervals[i]);
+ }
+
++ if (step == 0)
++ return -EINVAL;
++
+ t.min = roundup(s->min, step);
+ t.max = rounddown(s->max, step);
+ t.integer = 1;
+diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
+index bb15a0248250cc..68f1eee9e5c938 100644
+--- a/sound/pci/hda/Kconfig
++++ b/sound/pci/hda/Kconfig
+@@ -198,7 +198,7 @@ config SND_HDA_SCODEC_TAS2781_I2C
+ depends on SND_SOC
+ select SND_SOC_TAS2781_COMLIB
+ select SND_SOC_TAS2781_FMWLIB
+- select CRC32_SARWATE
++ select CRC32
+ help
+ Say Y or M here to include TAS2781 I2C HD-audio side codec support
+ in snd-hda-intel driver, such as ALC287.
+diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
+index 26f3c31600d7bf..614327218634c0 100644
+--- a/sound/pci/hda/patch_cs8409.c
++++ b/sound/pci/hda/patch_cs8409.c
+@@ -1403,8 +1403,9 @@ void dolphin_fixups(struct hda_codec *codec, const struct hda_fixup *fix, int ac
+ kctrl = snd_hda_gen_add_kctl(&spec->gen, "Line Out Playback Volume",
+ &cs42l42_dac_volume_mixer);
+ /* Update Line Out kcontrol template */
+- kctrl->private_value = HDA_COMPOSE_AMP_VAL_OFS(DOLPHIN_HP_PIN_NID, 3, CS8409_CODEC1,
+- HDA_OUTPUT, CS42L42_VOL_DAC) | HDA_AMP_VAL_MIN_MUTE;
++ if (kctrl)
++ kctrl->private_value = HDA_COMPOSE_AMP_VAL_OFS(DOLPHIN_HP_PIN_NID, 3, CS8409_CODEC1,
++ HDA_OUTPUT, CS42L42_VOL_DAC) | HDA_AMP_VAL_MIN_MUTE;
+ cs8409_enable_ur(codec, 0);
+ snd_hda_codec_set_name(codec, "CS8409/CS42L42");
+ break;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a2737c1ff92040..2583081c0a3a54 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3857,20 +3857,18 @@ static void alc_default_init(struct hda_codec *codec)
+
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp_pin_sense)
+- msleep(85);
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ msleep(75);
+
+- if (hp_pin_sense)
+- msleep(100);
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++ msleep(75);
++ }
+ }
+
+ static void alc_default_shutup(struct hda_codec *codec)
+@@ -3886,22 +3884,20 @@ static void alc_default_shutup(struct hda_codec *codec)
+
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp_pin_sense)
+- msleep(85);
+-
+- if (!spec->no_shutup_pins)
+ snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp_pin_sense)
+- msleep(100);
++ msleep(75);
+
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++
++ msleep(75);
++ }
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+ }
+@@ -7635,6 +7631,7 @@ enum {
+ ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
+ ALC256_FIXUP_ASUS_HEADSET_MIC,
+ ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++ ALC255_FIXUP_PREDATOR_SUBWOOFER,
+ ALC299_FIXUP_PREDATOR_SPK,
+ ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE,
+ ALC289_FIXUP_DELL_SPK1,
+@@ -9051,6 +9048,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
+ },
++ [ALC255_FIXUP_PREDATOR_SUBWOOFER] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x17, 0x90170151 }, /* use as internal speaker (LFE) */
++ { 0x1b, 0x90170152 } /* use as internal speaker (back) */
++ }
++ },
+ [ALC299_FIXUP_PREDATOR_SPK] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -10142,6 +10146,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK),
+ SND_PCI_QUIRK(0x1025, 0x1167, "Acer Veriton N6640G", ALC269_FIXUP_LIFEBOOK),
++ SND_PCI_QUIRK(0x1025, 0x1177, "Acer Predator G9-593", ALC255_FIXUP_PREDATOR_SUBWOOFER),
++ SND_PCI_QUIRK(0x1025, 0x1178, "Acer Predator G9-593", ALC255_FIXUP_PREDATOR_SUBWOOFER),
+ SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
+ SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 06349bf0b65874..ace6328e91e31c 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -444,6 +444,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "8A3E"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++ DMI_MATCH(DMI_BOARD_NAME, "8A7F"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
+index ce42749660c87e..ac759f4a880d09 100644
+--- a/sound/soc/codecs/lpass-rx-macro.c
++++ b/sound/soc/codecs/lpass-rx-macro.c
+@@ -958,7 +958,7 @@ static const struct reg_default rx_defaults[] = {
+ { CDC_RX_BCL_VBAT_PK_EST2, 0x01 },
+ { CDC_RX_BCL_VBAT_PK_EST3, 0x40 },
+ { CDC_RX_BCL_VBAT_RF_PROC1, 0x2A },
+- { CDC_RX_BCL_VBAT_RF_PROC1, 0x00 },
++ { CDC_RX_BCL_VBAT_RF_PROC2, 0x00 },
+ { CDC_RX_BCL_VBAT_TAC1, 0x00 },
+ { CDC_RX_BCL_VBAT_TAC2, 0x18 },
+ { CDC_RX_BCL_VBAT_TAC3, 0x18 },
+diff --git a/sound/soc/codecs/max98388.c b/sound/soc/codecs/max98388.c
+index b847d7c59ec012..99986090b4a63a 100644
+--- a/sound/soc/codecs/max98388.c
++++ b/sound/soc/codecs/max98388.c
+@@ -763,6 +763,7 @@ static int max98388_dai_tdm_slot(struct snd_soc_dai *dai,
+ addr = MAX98388_R2044_PCM_TX_CTRL1 + (cnt / 8);
+ bits = cnt % 8;
+ regmap_update_bits(max98388->regmap, addr, bits, bits);
++ slot_found++;
+ if (slot_found >= MAX_NUM_CH)
+ break;
+ }
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 22b240a70ad48e..a3d580b2bbf463 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -67,6 +67,7 @@ struct fsl_micfil_soc_data {
+ bool imx;
+ bool use_edma;
+ bool use_verid;
++ bool volume_sx;
+ u64 formats;
+ };
+
+@@ -76,6 +77,7 @@ static struct fsl_micfil_soc_data fsl_micfil_imx8mm = {
+ .fifo_depth = 8,
+ .dataline = 0xf,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
++ .volume_sx = true,
+ };
+
+ static struct fsl_micfil_soc_data fsl_micfil_imx8mp = {
+@@ -84,6 +86,7 @@ static struct fsl_micfil_soc_data fsl_micfil_imx8mp = {
+ .fifo_depth = 32,
+ .dataline = 0xf,
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
++ .volume_sx = false,
+ };
+
+ static struct fsl_micfil_soc_data fsl_micfil_imx93 = {
+@@ -94,6 +97,7 @@ static struct fsl_micfil_soc_data fsl_micfil_imx93 = {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .use_edma = true,
+ .use_verid = true,
++ .volume_sx = false,
+ };
+
+ static const struct of_device_id fsl_micfil_dt_ids[] = {
+@@ -317,7 +321,26 @@ static int hwvad_detected(struct snd_kcontrol *kcontrol,
+ return 0;
+ }
+
+-static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = {
++static const struct snd_kcontrol_new fsl_micfil_volume_controls[] = {
++ SOC_SINGLE_TLV("CH0 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(0), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH1 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(1), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH2 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(2), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH3 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(3), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH4 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(4), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH5 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(5), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH6 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(6), 0xF, 0, gain_tlv),
++ SOC_SINGLE_TLV("CH7 Volume", REG_MICFIL_OUT_CTRL,
++ MICFIL_OUTGAIN_CHX_SHIFT(7), 0xF, 0, gain_tlv),
++};
++
++static const struct snd_kcontrol_new fsl_micfil_volume_sx_controls[] = {
+ SOC_SINGLE_SX_TLV("CH0 Volume", REG_MICFIL_OUT_CTRL,
+ MICFIL_OUTGAIN_CHX_SHIFT(0), 0x8, 0xF, gain_tlv),
+ SOC_SINGLE_SX_TLV("CH1 Volume", REG_MICFIL_OUT_CTRL,
+@@ -334,6 +357,9 @@ static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = {
+ MICFIL_OUTGAIN_CHX_SHIFT(6), 0x8, 0xF, gain_tlv),
+ SOC_SINGLE_SX_TLV("CH7 Volume", REG_MICFIL_OUT_CTRL,
+ MICFIL_OUTGAIN_CHX_SHIFT(7), 0x8, 0xF, gain_tlv),
++};
++
++static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = {
+ SOC_ENUM_EXT("MICFIL Quality Select",
+ fsl_micfil_quality_enum,
+ micfil_quality_get, micfil_quality_set),
+@@ -801,6 +827,20 @@ static int fsl_micfil_dai_probe(struct snd_soc_dai *cpu_dai)
+ return 0;
+ }
+
++static int fsl_micfil_component_probe(struct snd_soc_component *component)
++{
++ struct fsl_micfil *micfil = snd_soc_component_get_drvdata(component);
++
++ if (micfil->soc->volume_sx)
++ snd_soc_add_component_controls(component, fsl_micfil_volume_sx_controls,
++ ARRAY_SIZE(fsl_micfil_volume_sx_controls));
++ else
++ snd_soc_add_component_controls(component, fsl_micfil_volume_controls,
++ ARRAY_SIZE(fsl_micfil_volume_controls));
++
++ return 0;
++}
++
+ static const struct snd_soc_dai_ops fsl_micfil_dai_ops = {
+ .probe = fsl_micfil_dai_probe,
+ .startup = fsl_micfil_startup,
+@@ -821,6 +861,7 @@ static struct snd_soc_dai_driver fsl_micfil_dai = {
+
+ static const struct snd_soc_component_driver fsl_micfil_component = {
+ .name = "fsl-micfil-dai",
++ .probe = fsl_micfil_component_probe,
+ .controls = fsl_micfil_snd_controls,
+ .num_controls = ARRAY_SIZE(fsl_micfil_snd_controls),
+ .legacy_dai_naming = 1,
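
A hedged sketch of the pattern the micfil change introduces (example_* names are hypothetical, control arrays left empty for brevity): variant-specific kcontrols move out of the static component driver and the right set is registered at component probe time based on a per-SoC flag.

    #include <sound/soc.h>

    struct example_priv {
            bool volume_sx; /* per-SoC flag, like fsl_micfil_soc_data.volume_sx */
    };

    static const struct snd_kcontrol_new example_sx_controls[] = { };
    static const struct snd_kcontrol_new example_plain_controls[] = { };

    static int example_component_probe(struct snd_soc_component *component)
    {
            struct example_priv *priv = snd_soc_component_get_drvdata(component);

            if (priv->volume_sx)
                    return snd_soc_add_component_controls(component,
                                    example_sx_controls,
                                    ARRAY_SIZE(example_sx_controls));

            return snd_soc_add_component_controls(component,
                            example_plain_controls,
                            ARRAY_SIZE(example_plain_controls));
    }
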
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index d03b0172b8ad24..a1f03c97b7bb84 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -613,6 +613,9 @@ static int fsl_sai_hw_params(struct snd_pcm_substream *substream,
+
+ val_cr4 |= FSL_SAI_CR4_FRSZ(slots);
+
++ /* Set to avoid channel swap */
++ val_cr4 |= FSL_SAI_CR4_FCONT;
++
+ /* Set to output mode to avoid tri-stated data pins */
+ if (tx)
+ val_cr4 |= FSL_SAI_CR4_CHMOD;
+@@ -699,7 +702,7 @@ static int fsl_sai_hw_params(struct snd_pcm_substream *substream,
+
+ regmap_update_bits(sai->regmap, FSL_SAI_xCR4(tx, ofs),
+ FSL_SAI_CR4_SYWD_MASK | FSL_SAI_CR4_FRSZ_MASK |
+- FSL_SAI_CR4_CHMOD_MASK,
++ FSL_SAI_CR4_CHMOD_MASK | FSL_SAI_CR4_FCONT_MASK,
+ val_cr4);
+ regmap_update_bits(sai->regmap, FSL_SAI_xCR5(tx, ofs),
+ FSL_SAI_CR5_WNW_MASK | FSL_SAI_CR5_W0W_MASK |
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index dadbd16ee39457..9c4d19fe22c654 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -137,6 +137,7 @@
+
+ /* SAI Transmit and Receive Configuration 4 Register */
+
++#define FSL_SAI_CR4_FCONT_MASK BIT(28)
+ #define FSL_SAI_CR4_FCONT BIT(28)
+ #define FSL_SAI_CR4_FCOMB_SHIFT BIT(26)
+ #define FSL_SAI_CR4_FCOMB_SOFT BIT(27)
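
A hedged sketch (hypothetical register layout) of why the new FSL_SAI_CR4_FCONT_MASK matters: regmap_update_bits() only writes register bits covered by the mask argument, so a bit set in the value but missing from the mask is silently dropped and never reaches the hardware.

    #include <linux/bits.h>
    #include <linux/regmap.h>

    #define EXAMPLE_CR4         0x10        /* hypothetical register offset */
    #define EXAMPLE_FCONT       BIT(28)
    #define EXAMPLE_FCONT_MASK  BIT(28)

    static int example_set_fcont(struct regmap *map)
    {
            /* without EXAMPLE_FCONT_MASK in the mask argument this is a no-op */
            return regmap_update_bits(map, EXAMPLE_CR4,
                                      EXAMPLE_FCONT_MASK, EXAMPLE_FCONT);
    }
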
+diff --git a/sound/soc/loongson/loongson_card.c b/sound/soc/loongson/loongson_card.c
+index 2c8dbdba27c5f8..cc0bb6f9772e18 100644
+--- a/sound/soc/loongson/loongson_card.c
++++ b/sound/soc/loongson/loongson_card.c
+@@ -137,6 +137,7 @@ static int loongson_card_parse_of(struct loongson_card_data *data)
+ dev_err(dev, "getting cpu dlc error (%d)\n", ret);
+ goto err;
+ }
++ loongson_dai_links[i].platforms->of_node = loongson_dai_links[i].cpus->of_node;
+
+ ret = snd_soc_of_get_dlc(codec, NULL, loongson_dai_links[i].codecs, 0);
+ if (ret < 0) {
+diff --git a/sound/soc/qcom/Kconfig b/sound/soc/qcom/Kconfig
+index 762491d6f2f2e9..ca7a30ebd26ab1 100644
+--- a/sound/soc/qcom/Kconfig
++++ b/sound/soc/qcom/Kconfig
+@@ -157,6 +157,7 @@ config SND_SOC_SDM845
+ depends on COMMON_CLK
+ select SND_SOC_QDSP6
+ select SND_SOC_QCOM_COMMON
++ select SND_SOC_QCOM_SDW
+ select SND_SOC_RT5663
+ select SND_SOC_MAX98927
+ imply SND_SOC_CROS_EC_CODEC
+@@ -208,6 +209,7 @@ config SND_SOC_SC7280
+ tristate "SoC Machine driver for SC7280 boards"
+ depends on I2C && SOUNDWIRE
+ select SND_SOC_QCOM_COMMON
++ select SND_SOC_QCOM_SDW
+ select SND_SOC_LPASS_SC7280
+ select SND_SOC_MAX98357A
+ select SND_SOC_WCD938X_SDW
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index 5a47f661e0c6f7..242bc16da36daf 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -1242,6 +1242,8 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ /* Allocation for i2sctl regmap fields */
+ drvdata->i2sctl = devm_kzalloc(&pdev->dev, sizeof(struct lpaif_i2sctl),
+ GFP_KERNEL);
++ if (!drvdata->i2sctl)
++ return -ENOMEM;
+
+ /* Initialize bitfields for dai I2SCTL register */
+ ret = lpass_cpu_init_i2sctl_bitfields(dev, drvdata->i2sctl,
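
A hedged sketch of the allocation-check idiom the hunk above restores (example_* names are hypothetical): every devm_kzalloc() can fail, and dereferencing the result without a NULL check turns memory pressure into a crash during probe.

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/slab.h>

    struct example_state { int dummy; };

    static int example_probe_alloc(struct device *dev, struct example_state **out)
    {
            struct example_state *st = devm_kzalloc(dev, sizeof(*st), GFP_KERNEL);

            if (!st)
                    return -ENOMEM;

            *out = st;
            return 0;
    }
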
+diff --git a/sound/soc/qcom/sc7280.c b/sound/soc/qcom/sc7280.c
+index 207ac5da4dd430..230af8d7b205d4 100644
+--- a/sound/soc/qcom/sc7280.c
++++ b/sound/soc/qcom/sc7280.c
+@@ -23,6 +23,7 @@
+ #include "common.h"
+ #include "lpass.h"
+ #include "qdsp6/q6afe.h"
++#include "sdw.h"
+
+ #define DEFAULT_MCLK_RATE 19200000
+ #define RT5682_PLL_FREQ (48000 * 512)
+@@ -316,6 +317,7 @@ static void sc7280_snd_shutdown(struct snd_pcm_substream *substream)
+ struct snd_soc_card *card = rtd->card;
+ struct sc7280_snd_data *data = snd_soc_card_get_drvdata(card);
+ struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(rtd, 0);
++ struct sdw_stream_runtime *sruntime = data->sruntime[cpu_dai->id];
+
+ switch (cpu_dai->id) {
+ case MI2S_PRIMARY:
+@@ -333,6 +335,9 @@ static void sc7280_snd_shutdown(struct snd_pcm_substream *substream)
+ default:
+ break;
+ }
++
++ data->sruntime[cpu_dai->id] = NULL;
++ sdw_release_stream(sruntime);
+ }
+
+ static int sc7280_snd_startup(struct snd_pcm_substream *substream)
+@@ -347,6 +352,8 @@ static int sc7280_snd_startup(struct snd_pcm_substream *substream)
+ switch (cpu_dai->id) {
+ case MI2S_PRIMARY:
+ ret = sc7280_rt5682_init(rtd);
++ if (ret)
++ return ret;
+ break;
+ case SECONDARY_MI2S_RX:
+ codec_dai_fmt |= SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_I2S;
+@@ -360,7 +367,8 @@ static int sc7280_snd_startup(struct snd_pcm_substream *substream)
+ default:
+ break;
+ }
+- return ret;
++
++ return qcom_snd_sdw_startup(substream);
+ }
+
+ static const struct snd_soc_ops sc7280_ops = {
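
A hedged sketch of the error propagation fixed in the sc7280 startup path above (hypothetical helpers standing in for sc7280_rt5682_init() and qcom_snd_sdw_startup()): the init helper's return code must be checked before continuing, and on success the startup funnels into the shared SoundWire startup helper.

    static int example_init(void)
    {
            return 0;
    }

    static int example_sdw_startup(void)
    {
            return 0;
    }

    static int example_startup(void)
    {
            int ret = example_init();

            if (ret)
                    return ret;     /* do not mask the failure */

            return example_sdw_startup();
    }
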
+diff --git a/sound/soc/qcom/sdm845.c b/sound/soc/qcom/sdm845.c
+index 75701546b6ea8b..a479d7e5b7fbdc 100644
+--- a/sound/soc/qcom/sdm845.c
++++ b/sound/soc/qcom/sdm845.c
+@@ -15,6 +15,7 @@
+ #include <uapi/linux/input-event-codes.h>
+ #include "common.h"
+ #include "qdsp6/q6afe.h"
++#include "sdw.h"
+ #include "../codecs/rt5663.h"
+
+ #define DRIVER_NAME "sdm845"
+@@ -416,7 +417,7 @@ static int sdm845_snd_startup(struct snd_pcm_substream *substream)
+ pr_err("%s: invalid dai id 0x%x\n", __func__, cpu_dai->id);
+ break;
+ }
+- return 0;
++ return qcom_snd_sdw_startup(substream);
+ }
+
+ static void sdm845_snd_shutdown(struct snd_pcm_substream *substream)
+@@ -425,6 +426,7 @@ static void sdm845_snd_shutdown(struct snd_pcm_substream *substream)
+ struct snd_soc_card *card = rtd->card;
+ struct sdm845_snd_data *data = snd_soc_card_get_drvdata(card);
+ struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(rtd, 0);
++ struct sdw_stream_runtime *sruntime = data->sruntime[cpu_dai->id];
+
+ switch (cpu_dai->id) {
+ case PRIMARY_MI2S_RX:
+@@ -463,6 +465,9 @@ static void sdm845_snd_shutdown(struct snd_pcm_substream *substream)
+ pr_err("%s: invalid dai id 0x%x\n", __func__, cpu_dai->id);
+ break;
+ }
++
++ data->sruntime[cpu_dai->id] = NULL;
++ sdw_release_stream(sruntime);
+ }
+
+ static int sdm845_snd_prepare(struct snd_pcm_substream *substream)
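
A hedged sketch of the shutdown pattern added in both Qualcomm machine drivers (hypothetical data struct): take the cached SoundWire stream runtime, clear the cache first so a later startup cannot reuse a stale pointer, then release the runtime.

    #include <linux/soundwire/sdw.h>

    struct example_data {
            struct sdw_stream_runtime *sruntime[8]; /* indexed by DAI id */
    };

    static void example_shutdown(struct example_data *data, int dai_id)
    {
            struct sdw_stream_runtime *sruntime = data->sruntime[dai_id];

            data->sruntime[dai_id] = NULL;
            sdw_release_stream(sruntime);
    }
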
+diff --git a/sound/soc/qcom/sm8250.c b/sound/soc/qcom/sm8250.c
+index a15dafb99b3370..50e175fd521ce9 100644
+--- a/sound/soc/qcom/sm8250.c
++++ b/sound/soc/qcom/sm8250.c
+@@ -166,6 +166,7 @@ static int sm8250_platform_probe(struct platform_device *pdev)
+
+ static const struct of_device_id snd_sm8250_dt_match[] = {
+ {.compatible = "qcom,sm8250-sndcard"},
++ {.compatible = "qcom,qrb4210-rb2-sndcard"},
+ {.compatible = "qcom,qrb5165-rb5-sndcard"},
+ {}
+ };
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 63b3c8bf0fdef5..eda8de22643950 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1298,7 +1298,9 @@ static int rsnd_dai_of_node(struct rsnd_priv *priv, int *is_graph)
+ if (!of_node_name_eq(ports, "ports") &&
+ !of_node_name_eq(ports, "port"))
+ continue;
+- priv->component_dais[i] = of_graph_get_endpoint_count(ports);
++ priv->component_dais[i] =
++ of_graph_get_endpoint_count(of_node_name_eq(ports, "ports") ?
++ ports : np);
+ nr += priv->component_dais[i];
+ i++;
+ if (i >= RSND_MAX_COMPONENT) {
+@@ -1510,7 +1512,8 @@ static int rsnd_dai_probe(struct rsnd_priv *priv)
+ if (!of_node_name_eq(ports, "ports") &&
+ !of_node_name_eq(ports, "port"))
+ continue;
+- for_each_endpoint_of_node(ports, dai_np) {
++ for_each_endpoint_of_node(of_node_name_eq(ports, "ports") ?
++ ports : np, dai_np) {
+ __rsnd_dai_probe(priv, dai_np, dai_np, 0, dai_i);
+ if (!rsnd_is_gen1(priv) && !rsnd_is_gen2(priv)) {
+ rdai = rsnd_rdai_get(priv, dai_i);
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 32c556c625577d..e39df5d10b07df 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2786,10 +2786,10 @@ EXPORT_SYMBOL_GPL(snd_soc_dapm_update_dai);
+
+ int snd_soc_dapm_widget_name_cmp(struct snd_soc_dapm_widget *widget, const char *s)
+ {
+- struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm);
++ struct snd_soc_component *component = widget->dapm->component;
+ const char *wname = widget->name;
+
+- if (component->name_prefix)
++ if (component && component->name_prefix)
+ wname += strlen(component->name_prefix) + 1; /* plus space */
+
+ return strcmp(wname, s);
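
A hedged sketch of the comparison being hardened above (standalone helper, not the DAPM code): a DAPM context may not belong to a component at all, so the name prefix is only skipped when a component with a name_prefix actually exists.

    #include <linux/string.h>

    static int example_name_cmp(const char *wname, const char *prefix,
                                const char *s)
    {
            if (prefix)
                    wname += strlen(prefix) + 1;    /* plus the joining space */

            return strcmp(wname, s);
    }
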
+diff --git a/sound/soc/sof/intel/hda-dai-ops.c b/sound/soc/sof/intel/hda-dai-ops.c
+index 484c761478853f..92681ca7f24def 100644
+--- a/sound/soc/sof/intel/hda-dai-ops.c
++++ b/sound/soc/sof/intel/hda-dai-ops.c
+@@ -346,20 +346,21 @@ static int hda_trigger(struct snd_sof_dev *sdev, struct snd_soc_dai *cpu_dai,
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ snd_hdac_ext_stream_start(hext_stream);
+ break;
+- case SNDRV_PCM_TRIGGER_SUSPEND:
+- case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+- snd_hdac_ext_stream_clear(hext_stream);
+-
+ /*
+- * Save the LLP registers in case the stream is
+- * restarting due PAUSE_RELEASE, or START without a pcm
+- * close/open since in this case the LLP register is not reset
+- * to 0 and the delay calculation will return with invalid
+- * results.
++ * Save the LLP registers: in the PAUSE case the LLP registers
++ * are not reset to 0, so the delay calculation uses the saved
++ * offsets to compensate for the time spent in pause.
+ */
+ hext_stream->pplcllpl = readl(hext_stream->pplc_addr + AZX_REG_PPLCLLPL);
+ hext_stream->pplcllpu = readl(hext_stream->pplc_addr + AZX_REG_PPLCLLPU);
++ snd_hdac_ext_stream_clear(hext_stream);
++ break;
++ case SNDRV_PCM_TRIGGER_SUSPEND:
++ case SNDRV_PCM_TRIGGER_STOP:
++ hext_stream->pplcllpl = 0;
++ hext_stream->pplcllpu = 0;
++ snd_hdac_ext_stream_clear(hext_stream);
+ break;
+ default:
+ dev_err(sdev->dev, "unknown trigger command %d\n", cmd);
+@@ -512,7 +513,6 @@ static const struct hda_dai_widget_dma_ops sdw_ipc4_chain_dma_ops = {
+ static int hda_ipc3_post_trigger(struct snd_sof_dev *sdev, struct snd_soc_dai *cpu_dai,
+ struct snd_pcm_substream *substream, int cmd)
+ {
+- struct hdac_ext_stream *hext_stream = hda_get_hext_stream(sdev, cpu_dai, substream);
+ struct snd_soc_dapm_widget *w = snd_soc_dai_get_widget(cpu_dai, substream->stream);
+
+ switch (cmd) {
+@@ -527,9 +527,6 @@ static int hda_ipc3_post_trigger(struct snd_sof_dev *sdev, struct snd_soc_dai *c
+ if (ret < 0)
+ return ret;
+
+- if (cmd == SNDRV_PCM_TRIGGER_STOP)
+- return hda_link_dma_cleanup(substream, hext_stream, cpu_dai);
+-
+ break;
+ }
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
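
A hedged sketch (hypothetical stream struct) of the trigger split made above: PAUSE snapshots the link position (LLP) registers for later delay compensation, while STOP and SUSPEND zero the snapshot because the stream restarts from scratch.

    #include <sound/pcm.h>

    struct example_stream { unsigned int llpl, llpu; };

    static void example_trigger(struct example_stream *s, int cmd,
                                unsigned int cur_l, unsigned int cur_u)
    {
            switch (cmd) {
            case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
                    s->llpl = cur_l;        /* snapshot for delay math */
                    s->llpu = cur_u;
                    break;
            case SNDRV_PCM_TRIGGER_SUSPEND:
            case SNDRV_PCM_TRIGGER_STOP:
                    s->llpl = 0;            /* stale snapshot must not be reused */
                    s->llpu = 0;
                    break;
            }
    }
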
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index 1c823f9eea5700..ac505c7ad34295 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -302,6 +302,7 @@ static int __maybe_unused hda_dai_trigger(struct snd_pcm_substream *substream, i
+ }
+
+ switch (cmd) {
++ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ ret = hda_link_dma_cleanup(substream, hext_stream, dai);
+ if (ret < 0) {
+@@ -370,6 +371,13 @@ static int non_hda_dai_hw_params_data(struct snd_pcm_substream *substream,
+ return -EINVAL;
+ }
+
++ sdev = widget_to_sdev(w);
++ hext_stream = ops->get_hext_stream(sdev, cpu_dai, substream);
++
++ /* nothing more to do if the link is already prepared */
++ if (hext_stream && hext_stream->link_prepared)
++ return 0;
++
+ /* use HDaudio stream handling */
+ ret = hda_dai_hw_params_data(substream, params, cpu_dai, data, flags);
+ if (ret < 0) {
+@@ -377,7 +385,6 @@ static int non_hda_dai_hw_params_data(struct snd_pcm_substream *substream,
+ return ret;
+ }
+
+- sdev = widget_to_sdev(w);
+ if (sdev->dspless_mode_selected)
+ return 0;
+
+@@ -482,6 +489,31 @@ int sdw_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ int ret;
+ int i;
+
++ ops = hda_dai_get_ops(substream, cpu_dai);
++ if (!ops) {
++ dev_err(cpu_dai->dev, "DAI widget ops not set\n");
++ return -EINVAL;
++ }
++
++ sdev = widget_to_sdev(w);
++ hext_stream = ops->get_hext_stream(sdev, cpu_dai, substream);
++
++ /* nothing more to do if the link is already prepared */
++ if (hext_stream && hext_stream->link_prepared)
++ return 0;
++
++ /*
++ * reset the PCMSyCM registers to handle a prepare callback when the PCM is restarted
++ * due to xruns or after a call to snd_pcm_drain/drop()
++ */
++ ret = hdac_bus_eml_sdw_map_stream_ch(sof_to_bus(sdev), link_id, cpu_dai->id,
++ 0, 0, substream->stream);
++ if (ret < 0) {
++ dev_err(cpu_dai->dev, "%s: hdac_bus_eml_sdw_map_stream_ch failed %d\n",
++ __func__, ret);
++ return ret;
++ }
++
+ data.dai_index = (link_id << 8) | cpu_dai->id;
+ data.dai_node_id = intel_alh_id;
+ ret = non_hda_dai_hw_params_data(substream, params, cpu_dai, &data, flags);
+@@ -490,10 +522,7 @@ int sdw_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ return ret;
+ }
+
+- ops = hda_dai_get_ops(substream, cpu_dai);
+- sdev = widget_to_sdev(w);
+ hext_stream = ops->get_hext_stream(sdev, cpu_dai, substream);
+-
+ if (!hext_stream)
+ return -ENODEV;
+
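A hedged sketch of the idempotence guard added twice above (hypothetical struct): hw_params can run again after an xrun or a drain/drop, so a link that is already prepared is left untouched instead of being reprogrammed.

    #include <linux/types.h>

    struct example_link { bool link_prepared; };

    static int example_hw_params(struct example_link *link)
    {
            if (link && link->link_prepared)
                    return 0;       /* nothing more to do */

            /* full programming path would follow here */
            return 0;
    }
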
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index 75f6240cf3e1d3..9d8ebb7c6a1060 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -294,14 +294,9 @@ int hda_cl_copy_fw(struct snd_sof_dev *sdev, struct hdac_ext_stream *hext_stream
+ {
+ struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata;
+ const struct sof_intel_dsp_desc *chip = hda->desc;
+- struct sof_intel_hda_stream *hda_stream;
+- unsigned long time_left;
+ unsigned int reg;
+ int ret, status;
+
+- hda_stream = container_of(hext_stream, struct sof_intel_hda_stream,
+- hext_stream);
+-
+ dev_dbg(sdev->dev, "Code loader DMA starting\n");
+
+ ret = hda_cl_trigger(sdev->dev, hext_stream, SNDRV_PCM_TRIGGER_START);
+@@ -310,18 +305,6 @@ int hda_cl_copy_fw(struct snd_sof_dev *sdev, struct hdac_ext_stream *hext_stream
+ return ret;
+ }
+
+- if (sdev->pdata->ipc_type == SOF_IPC_TYPE_4) {
+- /* Wait for completion of transfer */
+- time_left = wait_for_completion_timeout(&hda_stream->ioc,
+- msecs_to_jiffies(HDA_CL_DMA_IOC_TIMEOUT_MS));
+-
+- if (!time_left) {
+- dev_err(sdev->dev, "Code loader DMA did not complete\n");
+- return -ETIMEDOUT;
+- }
+- dev_dbg(sdev->dev, "Code loader DMA done\n");
+- }
+-
+ dev_dbg(sdev->dev, "waiting for FW_ENTERED status\n");
+
+ status = snd_sof_dsp_read_poll_timeout(sdev, HDA_DSP_BAR,
+diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c
+index 87be7f16e8c2b6..240fee2166d125 100644
+--- a/sound/soc/sof/ipc4-topology.c
++++ b/sound/soc/sof/ipc4-topology.c
+@@ -3129,9 +3129,20 @@ static int sof_ipc4_dai_config(struct snd_sof_dev *sdev, struct snd_sof_widget *
+ * group_id during copier's ipc_prepare op.
+ */
+ if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS) {
++ struct sof_ipc4_alh_configuration_blob *blob;
++
++ blob = (struct sof_ipc4_alh_configuration_blob *)ipc4_copier->copier_config;
+ ipc4_copier->dai_index = data->dai_node_id;
+- copier_data->gtw_cfg.node_id &= ~SOF_IPC4_NODE_INDEX_MASK;
+- copier_data->gtw_cfg.node_id |= SOF_IPC4_NODE_INDEX(data->dai_node_id);
++
++ /*
++ * no need to set the node_id for aggregated DAI's. These will be assigned
++ * a group_id during widget ipc_prepare
++ */
++ if (blob->alh_cfg.device_count == 1) {
++ copier_data->gtw_cfg.node_id &= ~SOF_IPC4_NODE_INDEX_MASK;
++ copier_data->gtw_cfg.node_id |=
++ SOF_IPC4_NODE_INDEX(data->dai_node_id);
++ }
+ }
+
+ break;
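
A hedged sketch of the guard above (EXAMPLE_* macros are hypothetical stand-ins for the SOF node-index macros): only a non-aggregated ALH copier, i.e. a blob describing a single device, gets its node index patched in; aggregated DAIs receive a group id later during prepare.

    #include <linux/types.h>

    #define EXAMPLE_NODE_INDEX_MASK 0xff
    #define EXAMPLE_NODE_INDEX(x)   ((x) & EXAMPLE_NODE_INDEX_MASK)

    static void example_set_node(u32 *node_id, u32 dai_node_id,
                                 unsigned int device_count)
    {
            if (device_count != 1)
                    return;         /* aggregated: group id assigned elsewhere */

            *node_id &= ~EXAMPLE_NODE_INDEX_MASK;
            *node_id |= EXAMPLE_NODE_INDEX(dai_node_id);
    }
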
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 35bcf52dbc6526..cf9b43b872f9d1 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1121,6 +1121,9 @@ enum bpf_attach_type {
+
+ #define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE
+
++/* Add BPF_LINK_TYPE(type, name) in bpf_types.h to keep bpf_link_type_strs[]
++ * in sync with the definitions below.
++ */
+ enum bpf_link_type {
+ BPF_LINK_TYPE_UNSPEC = 0,
+ BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 848fffa250227f..555fd34c6e1fc3 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -221,7 +221,7 @@ $(OUTPUT)/%:%.c
+ ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 riscv))
+ LLD := lld
+ else
+-LLD := ld
++LLD := $(shell command -v $(LD))
+ endif
+
+ # Filter out -static for liburandom_read.so and its dependent targets so that static builds
+diff --git a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+index f3932941bbaafc..745c5ada4c4bfd 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
++++ b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+@@ -67,8 +67,9 @@ static int verify_perf_link_info(int fd, enum bpf_perf_event_type type, long add
+
+ ASSERT_EQ(info.perf_event.kprobe.cookie, PERF_EVENT_COOKIE, "kprobe_cookie");
+
++ ASSERT_EQ(info.perf_event.kprobe.name_len, strlen(KPROBE_FUNC) + 1,
++ "name_len");
+ if (!info.perf_event.kprobe.func_name) {
+- ASSERT_EQ(info.perf_event.kprobe.name_len, 0, "name_len");
+ info.perf_event.kprobe.func_name = ptr_to_u64(&buf);
+ info.perf_event.kprobe.name_len = sizeof(buf);
+ goto again;
+@@ -79,8 +80,9 @@ static int verify_perf_link_info(int fd, enum bpf_perf_event_type type, long add
+ ASSERT_EQ(err, 0, "cmp_kprobe_func_name");
+ break;
+ case BPF_PERF_EVENT_TRACEPOINT:
++ ASSERT_EQ(info.perf_event.tracepoint.name_len, strlen(TP_NAME) + 1,
++ "name_len");
+ if (!info.perf_event.tracepoint.tp_name) {
+- ASSERT_EQ(info.perf_event.tracepoint.name_len, 0, "name_len");
+ info.perf_event.tracepoint.tp_name = ptr_to_u64(&buf);
+ info.perf_event.tracepoint.name_len = sizeof(buf);
+ goto again;
+@@ -96,8 +98,9 @@ static int verify_perf_link_info(int fd, enum bpf_perf_event_type type, long add
+ case BPF_PERF_EVENT_URETPROBE:
+ ASSERT_EQ(info.perf_event.uprobe.offset, offset, "uprobe_offset");
+
++ ASSERT_EQ(info.perf_event.uprobe.name_len, strlen(UPROBE_FILE) + 1,
++ "name_len");
+ if (!info.perf_event.uprobe.file_name) {
+- ASSERT_EQ(info.perf_event.uprobe.name_len, 0, "name_len");
+ info.perf_event.uprobe.file_name = ptr_to_u64(&buf);
+ info.perf_event.uprobe.name_len = sizeof(buf);
+ goto again;
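
A hedged userspace sketch (hypothetical helper) of the invariant these selftest changes assert: the kernel now reports name_len as strlen(name) + 1 (string plus NUL terminator) even on the first query pass, before any buffer is supplied.

    #include <string.h>

    /* Returns 0 when the reported length matches strlen(name) + 1. */
    static int example_check_name_len(unsigned int reported, const char *name)
    {
            return reported == strlen(name) + 1 ? 0 : -1;
    }
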
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-01 11:54 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-01 11:54 UTC (permalink / raw
To: gentoo-commits
commit: cee99db765b4c747cad50b180f66cf57320335b3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 1 11:54:08 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 1 11:54:08 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cee99db7
Remove redundant patch
Removed:
2005_netfilter-xtables-fix-typo.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
2005_netfilter-xtables-fix-typo.patch | 71 -----------------------------------
2 files changed, 75 deletions(-)
diff --git a/0000_README b/0000_README
index cc196549..432ffec3 100644
--- a/0000_README
+++ b/0000_README
@@ -83,10 +83,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2005_netfilter-xtables-fix-typo.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/net/netfilter?id=306ed1728e8438caed30332e1ab46b28c25fe3d8
-Desc: netfilter: xtables: fix typo causing some targets not to load on IPv6
-
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2005_netfilter-xtables-fix-typo.patch b/2005_netfilter-xtables-fix-typo.patch
deleted file mode 100644
index 6a7dfc7c..00000000
--- a/2005_netfilter-xtables-fix-typo.patch
+++ /dev/null
@@ -1,71 +0,0 @@
-From 306ed1728e8438caed30332e1ab46b28c25fe3d8 Mon Sep 17 00:00:00 2001
-From: Pablo Neira Ayuso <pablo@netfilter.org>
-Date: Sun, 20 Oct 2024 14:49:51 +0200
-Subject: netfilter: xtables: fix typo causing some targets not to load on IPv6
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-- There is no NFPROTO_IPV6 family for mark and NFLOG.
-- TRACE is also missing module autoload with NFPROTO_IPV6.
-
-This results in ip6tables failing to restore a ruleset. This issue has been
-reported by several users providing incomplete patches.
-
-Very similar to Ilya Katsnelson's patch including a missing chunk in the
-TRACE extension.
-
-Fixes: 0bfcb7b71e73 ("netfilter: xtables: avoid NFPROTO_UNSPEC where needed")
-Reported-by: Ignat Korchagin <ignat@cloudflare.com>
-Reported-by: Ilya Katsnelson <me@0upti.me>
-Reported-by: Krzysztof Olędzki <ole@ans.pl>
-Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
----
- net/netfilter/xt_NFLOG.c | 2 +-
- net/netfilter/xt_TRACE.c | 1 +
- net/netfilter/xt_mark.c | 2 +-
- 3 files changed, 3 insertions(+), 2 deletions(-)
-
-(limited to 'net/netfilter')
-
-diff --git a/net/netfilter/xt_NFLOG.c b/net/netfilter/xt_NFLOG.c
-index d80abd6ccaf8f7..6dcf4bc7e30b2a 100644
---- a/net/netfilter/xt_NFLOG.c
-+++ b/net/netfilter/xt_NFLOG.c
-@@ -79,7 +79,7 @@ static struct xt_target nflog_tg_reg[] __read_mostly = {
- {
- .name = "NFLOG",
- .revision = 0,
-- .family = NFPROTO_IPV4,
-+ .family = NFPROTO_IPV6,
- .checkentry = nflog_tg_check,
- .destroy = nflog_tg_destroy,
- .target = nflog_tg,
-diff --git a/net/netfilter/xt_TRACE.c b/net/netfilter/xt_TRACE.c
-index f3fa4f11348cd8..a642ff09fc8e8c 100644
---- a/net/netfilter/xt_TRACE.c
-+++ b/net/netfilter/xt_TRACE.c
-@@ -49,6 +49,7 @@ static struct xt_target trace_tg_reg[] __read_mostly = {
- .target = trace_tg,
- .checkentry = trace_tg_check,
- .destroy = trace_tg_destroy,
-+ .me = THIS_MODULE,
- },
- #endif
- };
-diff --git a/net/netfilter/xt_mark.c b/net/netfilter/xt_mark.c
-index f76fe04fc9a4e1..65b965ca40ea7e 100644
---- a/net/netfilter/xt_mark.c
-+++ b/net/netfilter/xt_mark.c
-@@ -62,7 +62,7 @@ static struct xt_target mark_tg_reg[] __read_mostly = {
- {
- .name = "MARK",
- .revision = 2,
-- .family = NFPROTO_IPV4,
-+ .family = NFPROTO_IPV6,
- .target = mark_tg,
- .targetsize = sizeof(struct xt_mark_tginfo2),
- .me = THIS_MODULE,
---
-cgit 1.2.3-korg
-
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-03 11:29 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-03 11:29 UTC (permalink / raw
To: gentoo-commits
commit: 5f855e39a596d0d07a7c11276b1a6463c4387667
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 3 11:28:35 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 3 11:28:35 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5f855e39
Revert: HID: multitouch: Add support for lenovo Y9000P Touchpad
Thanks to Ulrich Müller
Bug: https://bugs.gentoo.org/942797
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
...ID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch | 24 ++++++++++++++++++++++
2 files changed, 28 insertions(+)
diff --git a/0000_README b/0000_README
index 432ffec3..282aabe7 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
+From: https://bugs.gentoo.org/942797
+Desc: Revert: HID: multitouch: Add support for lenovo Y9000P Touchpad
+
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch b/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
new file mode 100644
index 00000000..a6c6939a
--- /dev/null
+++ b/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
@@ -0,0 +1,24 @@
+--- linux-6.6.58-gentoo-r1/drivers/hid/hid-multitouch.c
++++ linux-6.6.58-gentoo-r1/drivers/hid/hid-multitouch.c
+@@ -1447,8 +1447,7 @@ static __u8 *mt_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ {
+ if (hdev->vendor == I2C_VENDOR_ID_GOODIX &&
+ (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E9 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E0)) {
++ hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) {
+ if (rdesc[607] == 0x15) {
+ rdesc[607] = 0x25;
+ dev_info(
+@@ -2069,10 +2068,7 @@ static const struct hid_device_id mt_devices[] = {
+ I2C_DEVICE_ID_GOODIX_01E8) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E9) },
+- { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+- HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E0) },
++ I2C_DEVICE_ID_GOODIX_01E8) },
+
+ /* GoodTouch panels */
+ { .driver_data = MT_CLS_NSMU,
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-04 20:27 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-04 20:27 UTC (permalink / raw
To: gentoo-commits
commit: 5913ad3100e4b60f94c5e49c7f1c60c696eae20d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Nov 4 20:26:12 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Nov 4 20:26:12 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5913ad31
Update hid-multitouch patch
Bug: https://bugs.gentoo.org/942797
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch b/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
index a6c6939a..a27c081b 100644
--- a/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
+++ b/2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
@@ -10,15 +10,13 @@
if (rdesc[607] == 0x15) {
rdesc[607] = 0x25;
dev_info(
-@@ -2069,10 +2068,7 @@ static const struct hid_device_id mt_devices[] = {
- I2C_DEVICE_ID_GOODIX_01E8) },
+@@ -2070,9 +2069,6 @@ static const struct hid_device_id mt_devices[] = {
{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
-- I2C_DEVICE_ID_GOODIX_01E9) },
+ I2C_DEVICE_ID_GOODIX_01E9) },
- { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
- HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
- I2C_DEVICE_ID_GOODIX_01E0) },
-+ I2C_DEVICE_ID_GOODIX_01E8) },
/* GoodTouch panels */
{ .driver_data = MT_CLS_NSMU,
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-08 16:29 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-08 16:29 UTC (permalink / raw
To: gentoo-commits
commit: 860d87c5af8d9aafb961f9ff29e6ad54944dff9b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 8 16:29:24 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 8 16:29:24 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=860d87c5
Linux patch 6.11.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-6.11.7.patch | 9627 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9631 insertions(+)
diff --git a/0000_README b/0000_README
index 282aabe7..89bcfe71 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-6.11.6.patch
From: https://www.kernel.org
Desc: Linux 6.11.6
+Patch: 1006_linux-6.11.7.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.7
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1006_linux-6.11.7.patch b/1006_linux-6.11.7.patch
new file mode 100644
index 00000000..49da654e
--- /dev/null
+++ b/1006_linux-6.11.7.patch
@@ -0,0 +1,9627 @@
+diff --git a/Documentation/devicetree/bindings/iio/adc/adi,ad7380.yaml b/Documentation/devicetree/bindings/iio/adc/adi,ad7380.yaml
+index 899b777017ce3b..7b299ddfea3024 100644
+--- a/Documentation/devicetree/bindings/iio/adc/adi,ad7380.yaml
++++ b/Documentation/devicetree/bindings/iio/adc/adi,ad7380.yaml
+@@ -54,6 +54,10 @@ properties:
+ A 2.5V to 3.3V supply for the external reference voltage. When omitted,
+ the internal 2.5V reference is used.
+
++ refin-supply:
++ description:
++ A 2.5V to 3.3V supply for external reference voltage, for ad7380-4 only.
++
+ aina-supply:
+ description:
+ The common mode voltage supply for the AINA- pin on pseudo-differential
+@@ -122,6 +126,23 @@ allOf:
+ ainc-supply: false
+ aind-supply: false
+
++ # ad7380-4 uses refin-supply as external reference.
++ # All other chips from ad738x family use refio as optional external reference.
++ # When refio-supply is omitted, internal reference is used.
++ - if:
++ properties:
++ compatible:
++ enum:
++ - adi,ad7380-4
++ then:
++ properties:
++ refio-supply: false
++ required:
++ - refin-supply
++ else:
++ properties:
++ refin-supply: false
++
+ examples:
+ - |
+ #include <dt-bindings/interrupt-controller/irq.h>
+diff --git a/Documentation/driver-api/dpll.rst b/Documentation/driver-api/dpll.rst
+index ea8d16600e16a8..e6855cd37e8525 100644
+--- a/Documentation/driver-api/dpll.rst
++++ b/Documentation/driver-api/dpll.rst
+@@ -214,6 +214,27 @@ offset values are fractional with 3-digit decimal places and shell be
+ divided with ``DPLL_PIN_PHASE_OFFSET_DIVIDER`` to get integer part and
+ modulo divided to get fractional part.
+
++Embedded SYNC
++=============
++
++A device may provide the ability to use the Embedded SYNC feature. It
++allows embedding an additional SYNC signal into the base frequency of a
++pin - one special pulse of the base frequency signal every time a SYNC
++signal pulse happens. The user can configure the Embedded SYNC frequency.
++The Embedded SYNC capability is always related to a given base frequency
++and HW capabilities. The user is provided a range of Embedded SYNC
++frequencies supported, depending on current base frequency configured for
++the pin.
++
++ ========================================= =================================
++ ``DPLL_A_PIN_ESYNC_FREQUENCY`` current Embedded SYNC frequency
++ ``DPLL_A_PIN_ESYNC_FREQUENCY_SUPPORTED`` nested available Embedded SYNC
++ frequency ranges
++ ``DPLL_A_PIN_FREQUENCY_MIN`` attr minimum value of frequency
++ ``DPLL_A_PIN_FREQUENCY_MAX`` attr maximum value of frequency
++ ``DPLL_A_PIN_ESYNC_PULSE`` pulse type of Embedded SYNC
++ ========================================= =================================
++
+ Configuration commands group
+ ============================
+
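A hedged userspace sketch of the decoding rule described in the hunk context above (assuming DPLL_PIN_PHASE_OFFSET_DIVIDER is 1000, as defined in the uapi header): the reported offset is a scaled integer, split into a whole part by integer division and a 3-digit fractional part by modulo.

    #include <linux/dpll.h>   /* DPLL_PIN_PHASE_OFFSET_DIVIDER */
    #include <stdio.h>

    static void example_print_offset(long long raw)
    {
            long long whole = raw / DPLL_PIN_PHASE_OFFSET_DIVIDER;
            long long frac = raw % DPLL_PIN_PHASE_OFFSET_DIVIDER;

            /* prints e.g. "12.345" for raw == 12345 */
            printf("%lld.%03lld\n", whole, frac < 0 ? -frac : frac);
    }
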
+diff --git a/Documentation/netlink/specs/dpll.yaml b/Documentation/netlink/specs/dpll.yaml
+index 94132d30e0e035..f2894ca35de840 100644
+--- a/Documentation/netlink/specs/dpll.yaml
++++ b/Documentation/netlink/specs/dpll.yaml
+@@ -345,6 +345,26 @@ attribute-sets:
+ Value is in PPM (parts per million).
+ This may be implemented for example for pin of type
+ PIN_TYPE_SYNCE_ETH_PORT.
++ -
++ name: esync-frequency
++ type: u64
++ doc: |
++ Frequency of Embedded SYNC signal. If provided, the pin is configured
++ with a SYNC signal embedded into its base clock frequency.
++ -
++ name: esync-frequency-supported
++ type: nest
++ multi-attr: true
++ nested-attributes: frequency-range
++ doc: |
++ If provided, a pin is capable of embedding a SYNC signal (within the
++ given range) into its base frequency signal.
++ -
++ name: esync-pulse
++ type: u32
++ doc: |
++ A ratio of high to low state of a SYNC signal pulse embedded
++ into base clock frequency. Value is in percents.
+ -
+ name: pin-parent-device
+ subset-of: pin
+@@ -510,6 +530,9 @@ operations:
+ - phase-adjust-max
+ - phase-adjust
+ - fractional-frequency-offset
++ - esync-frequency
++ - esync-frequency-supported
++ - esync-pulse
+
+ dump:
+ request:
+@@ -536,6 +559,7 @@ operations:
+ - parent-device
+ - parent-pin
+ - phase-adjust
++ - esync-frequency
+ -
+ name: pin-create-ntf
+ doc: Notification about pin appearing
+diff --git a/Documentation/rust/arch-support.rst b/Documentation/rust/arch-support.rst
+index 750ff371570a03..54be7ddf3e57a7 100644
+--- a/Documentation/rust/arch-support.rst
++++ b/Documentation/rust/arch-support.rst
+@@ -17,7 +17,7 @@ Architecture Level of support Constraints
+ ============= ================ ==============================================
+ ``arm64`` Maintained Little Endian only.
+ ``loongarch`` Maintained \-
+-``riscv`` Maintained ``riscv64`` only.
++``riscv`` Maintained ``riscv64`` and LLVM/Clang only.
+ ``um`` Maintained \-
+ ``x86`` Maintained ``x86_64`` only.
+ ============= ================ ==============================================
+diff --git a/Makefile b/Makefile
+index 318a5d60088e0d..692bbdf40fb5f2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8ulp.dtsi b/arch/arm64/boot/dts/freescale/imx8ulp.dtsi
+index e32d5afcf4a962..43f5437684448b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8ulp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8ulp.dtsi
+@@ -384,7 +384,7 @@ pcc4: clock-controller@29800000 {
+ };
+
+ flexspi2: spi@29810000 {
+- compatible = "nxp,imx8mm-fspi";
++ compatible = "nxp,imx8ulp-fspi";
+ reg = <0x29810000 0x10000>, <0x60000000 0x10000000>;
+ reg-names = "fspi_base", "fspi_mmap";
+ #address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8939.dtsi b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+index 46d9480cd46456..39405713329ba1 100644
+--- a/arch/arm64/boot/dts/qcom/msm8939.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+@@ -248,7 +248,7 @@ rpm: remoteproc {
+
+ smd-edge {
+ interrupts = <GIC_SPI 168 IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&apcs1_mbox 0>;
++ qcom,ipc = <&apcs1_mbox 8 0>;
+ qcom,smd-edge = <15>;
+
+ rpm_requests: rpm-requests {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index 9caa14dda58552..530db0913f91d9 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -134,6 +134,8 @@ vreg_nvme: regulator-nvme {
+
+ pinctrl-0 = <&nvme_reg_en>;
+ pinctrl-names = "default";
++
++ regulator-boot-on;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index e17ab8251e2a55..f566bbf50d6ea3 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -283,6 +283,8 @@ vreg_nvme: regulator-nvme {
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&nvme_reg_en>;
++
++ regulator-boot-on;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index 1943bdbfb8c00c..884595d98e6412 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -206,6 +206,8 @@ vreg_nvme: regulator-nvme {
+
+ pinctrl-0 = <&nvme_reg_en>;
+ pinctrl-names = "default";
++
++ regulator-boot-on;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index 8098e6730ae52f..7bee11a5f6ee6b 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -253,6 +253,8 @@ vreg_nvme: regulator-nvme {
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&nvme_reg_en>;
++
++ regulator-boot-on;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 6174160b56c49f..bb2de6469972cd 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -2895,9 +2895,9 @@ pcie6a: pci@1bf8000 {
+ "mhi";
+ #address-cells = <3>;
+ #size-cells = <2>;
+- ranges = <0x01000000 0 0x00000000 0 0x70200000 0 0x100000>,
+- <0x02000000 0 0x70300000 0 0x70300000 0 0x3d00000>;
+- bus-range = <0 0xff>;
++ ranges = <0x01000000 0x0 0x00000000 0x0 0x70200000 0x0 0x100000>,
++ <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x1d00000>;
++ bus-range = <0x00 0xff>;
+
+ dma-coherent;
+
+@@ -2973,14 +2973,16 @@ pcie6a_phy: phy@1bfc000 {
+
+ clocks = <&gcc GCC_PCIE_6A_PHY_AUX_CLK>,
+ <&gcc GCC_PCIE_6A_CFG_AHB_CLK>,
+- <&rpmhcc RPMH_CXO_CLK>,
++ <&tcsr TCSR_PCIE_4L_CLKREF_EN>,
+ <&gcc GCC_PCIE_6A_PHY_RCHNG_CLK>,
+- <&gcc GCC_PCIE_6A_PIPE_CLK>;
++ <&gcc GCC_PCIE_6A_PIPE_CLK>,
++ <&gcc GCC_PCIE_6A_PIPEDIV2_CLK>;
+ clock-names = "aux",
+ "cfg_ahb",
+ "ref",
+ "rchng",
+- "pipe";
++ "pipe",
++ "pipediv2";
+
+ resets = <&gcc GCC_PCIE_6A_PHY_BCR>,
+ <&gcc GCC_PCIE_6A_NOCSR_COM_PHY_BCR>;
+@@ -3017,8 +3019,8 @@ pcie4: pci@1c08000 {
+ "mhi";
+ #address-cells = <3>;
+ #size-cells = <2>;
+- ranges = <0x01000000 0 0x00000000 0 0x7c200000 0 0x100000>,
+- <0x02000000 0 0x7c300000 0 0x7c300000 0 0x3d00000>;
++ ranges = <0x01000000 0x0 0x00000000 0x0 0x7c200000 0x0 0x100000>,
++ <0x02000000 0x0 0x7c300000 0x0 0x7c300000 0x0 0x1d00000>;
+ bus-range = <0x00 0xff>;
+
+ dma-coherent;
+@@ -3068,7 +3070,7 @@ pcie4: pci@1c08000 {
+ assigned-clocks = <&gcc GCC_PCIE_4_AUX_CLK>;
+ assigned-clock-rates = <19200000>;
+
+- interconnects = <&pcie_south_anoc MASTER_PCIE_4 QCOM_ICC_TAG_ALWAYS
++ interconnects = <&pcie_north_anoc MASTER_PCIE_4 QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ <&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
+ &cnoc_main SLAVE_PCIE_4 QCOM_ICC_TAG_ALWAYS>;
+@@ -3105,14 +3107,16 @@ pcie4_phy: phy@1c0e000 {
+
+ clocks = <&gcc GCC_PCIE_4_AUX_CLK>,
+ <&gcc GCC_PCIE_4_CFG_AHB_CLK>,
+- <&rpmhcc RPMH_CXO_CLK>,
++ <&tcsr TCSR_PCIE_2L_4_CLKREF_EN>,
+ <&gcc GCC_PCIE_4_PHY_RCHNG_CLK>,
+- <&gcc GCC_PCIE_4_PIPE_CLK>;
++ <&gcc GCC_PCIE_4_PIPE_CLK>,
++ <&gcc GCC_PCIE_4_PIPEDIV2_CLK>;
+ clock-names = "aux",
+ "cfg_ahb",
+ "ref",
+ "rchng",
+- "pipe";
++ "pipe",
++ "pipediv2";
+
+ resets = <&gcc GCC_PCIE_4_PHY_BCR>;
+ reset-names = "phy";
+@@ -5700,7 +5704,8 @@ system-cache-controller@25000000 {
+ <0 0x25a00000 0 0x200000>,
+ <0 0x25c00000 0 0x200000>,
+ <0 0x25e00000 0 0x200000>,
+- <0 0x26000000 0 0x200000>;
++ <0 0x26000000 0 0x200000>,
++ <0 0x26200000 0 0x200000>;
+ reg-names = "llcc0_base",
+ "llcc1_base",
+ "llcc2_base",
+@@ -5709,7 +5714,8 @@ system-cache-controller@25000000 {
+ "llcc5_base",
+ "llcc6_base",
+ "llcc7_base",
+- "llcc_broadcast_base";
++ "llcc_broadcast_base",
++ "llcc_broadcast_and_base";
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+diff --git a/arch/mips/kernel/cmpxchg.c b/arch/mips/kernel/cmpxchg.c
+index e974a4954df853..c371def2302d27 100644
+--- a/arch/mips/kernel/cmpxchg.c
++++ b/arch/mips/kernel/cmpxchg.c
+@@ -102,3 +102,4 @@ unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
+ return old;
+ }
+ }
++EXPORT_SYMBOL(__cmpxchg_small);
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index d11c2479d8e170..6651a5cbdc2717 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -172,7 +172,7 @@ config RISCV
+ select HAVE_REGS_AND_STACK_ACCESS_API
+ select HAVE_RETHOOK if !XIP_KERNEL
+ select HAVE_RSEQ
+- select HAVE_RUST if 64BIT
++ select HAVE_RUST if 64BIT && CC_IS_CLANG
+ select HAVE_SAMPLE_FTRACE_DIRECT
+ select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
+ select HAVE_STACKPROTECTOR
+diff --git a/arch/riscv/boot/dts/starfive/jh7110-common.dtsi b/arch/riscv/boot/dts/starfive/jh7110-common.dtsi
+index c7771b3b647588..d6c55f1cc96a92 100644
+--- a/arch/riscv/boot/dts/starfive/jh7110-common.dtsi
++++ b/arch/riscv/boot/dts/starfive/jh7110-common.dtsi
+@@ -128,7 +128,6 @@ &camss {
+ assigned-clocks = <&ispcrg JH7110_ISPCLK_DOM4_APB_FUNC>,
+ <&ispcrg JH7110_ISPCLK_MIPI_RX0_PXL>;
+ assigned-clock-rates = <49500000>, <198000000>;
+- status = "okay";
+
+ ports {
+ #address-cells = <1>;
+@@ -151,7 +150,6 @@ camss_from_csi2rx: endpoint {
+ &csi2rx {
+ assigned-clocks = <&ispcrg JH7110_ISPCLK_VIN_SYS>;
+ assigned-clock-rates = <297000000>;
+- status = "okay";
+
+ ports {
+ #address-cells = <1>;
+diff --git a/arch/riscv/boot/dts/starfive/jh7110-pine64-star64.dts b/arch/riscv/boot/dts/starfive/jh7110-pine64-star64.dts
+index b720cdd15ed6e8..8e39fdc73ecb81 100644
+--- a/arch/riscv/boot/dts/starfive/jh7110-pine64-star64.dts
++++ b/arch/riscv/boot/dts/starfive/jh7110-pine64-star64.dts
+@@ -44,8 +44,7 @@ &pcie1 {
+ };
+
+ &phy0 {
+- rx-internal-delay-ps = <1900>;
+- tx-internal-delay-ps = <1500>;
++ rx-internal-delay-ps = <1500>;
+ motorcomm,rx-clk-drv-microamp = <2910>;
+ motorcomm,rx-data-drv-microamp = <2910>;
+ motorcomm,tx-clk-adj-enabled;
+diff --git a/arch/riscv/kernel/acpi.c b/arch/riscv/kernel/acpi.c
+index ba957aaca5cbb8..71a3546c6f9e37 100644
+--- a/arch/riscv/kernel/acpi.c
++++ b/arch/riscv/kernel/acpi.c
+@@ -210,7 +210,7 @@ void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
+ if (!size)
+ return NULL;
+
+- return early_ioremap(phys, size);
++ return early_memremap(phys, size);
+ }
+
+ void __init __acpi_unmap_table(void __iomem *map, unsigned long size)
+@@ -218,7 +218,7 @@ void __init __acpi_unmap_table(void __iomem *map, unsigned long size)
+ if (!map || !size)
+ return;
+
+- early_iounmap(map, size);
++ early_memunmap(map, size);
+ }
+
+ void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
+diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
+index b09ca5f944f77b..cb09f0c4f62c7a 100644
+--- a/arch/riscv/kernel/asm-offsets.c
++++ b/arch/riscv/kernel/asm-offsets.c
+@@ -4,8 +4,6 @@
+ * Copyright (C) 2017 SiFive
+ */
+
+-#define GENERATING_ASM_OFFSETS
+-
+ #include <linux/kbuild.h>
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
+index d6c108c50cba92..d32dfdba083e18 100644
+--- a/arch/riscv/kernel/cacheinfo.c
++++ b/arch/riscv/kernel/cacheinfo.c
+@@ -75,8 +75,7 @@ int populate_cache_leaves(unsigned int cpu)
+ {
+ struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+- struct device_node *np = of_cpu_device_node_get(cpu);
+- struct device_node *prev = NULL;
++ struct device_node *np, *prev;
+ int levels = 1, level = 1;
+
+ if (!acpi_disabled) {
+@@ -100,6 +99,10 @@ int populate_cache_leaves(unsigned int cpu)
+ return 0;
+ }
+
++ np = of_cpu_device_node_get(cpu);
++ if (!np)
++ return -ENOENT;
++
+ if (of_property_read_bool(np, "cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level);
+ if (of_property_read_bool(np, "i-cache-size"))
+diff --git a/arch/riscv/kernel/cpu-hotplug.c b/arch/riscv/kernel/cpu-hotplug.c
+index 28b58fc5ad1996..a1e38ecfc8be21 100644
+--- a/arch/riscv/kernel/cpu-hotplug.c
++++ b/arch/riscv/kernel/cpu-hotplug.c
+@@ -58,7 +58,7 @@ void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+ if (cpu_ops->cpu_is_stopped)
+ ret = cpu_ops->cpu_is_stopped(cpu);
+ if (ret)
+- pr_warn("CPU%d may not have stopped: %d\n", cpu, ret);
++ pr_warn("CPU%u may not have stopped: %d\n", cpu, ret);
+ }
+
+ /*
+diff --git a/arch/riscv/kernel/efi-header.S b/arch/riscv/kernel/efi-header.S
+index 515b2dfbca75ba..c5f17c2710b58f 100644
+--- a/arch/riscv/kernel/efi-header.S
++++ b/arch/riscv/kernel/efi-header.S
+@@ -64,7 +64,7 @@ extra_header_fields:
+ .long efi_header_end - _start // SizeOfHeaders
+ .long 0 // CheckSum
+ .short IMAGE_SUBSYSTEM_EFI_APPLICATION // Subsystem
+- .short 0 // DllCharacteristics
++ .short IMAGE_DLL_CHARACTERISTICS_NX_COMPAT // DllCharacteristics
+ .quad 0 // SizeOfStackReserve
+ .quad 0 // SizeOfStackCommit
+ .quad 0 // SizeOfHeapReserve
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index d4fd8af7aaf5a9..1b9867136b6100 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -136,8 +136,6 @@
+ #define REG_PTR(insn, pos, regs) \
+ (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos))
+
+-#define GET_RM(insn) (((insn) >> 12) & 7)
+-
+ #define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs))
+ #define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs))
+ #define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs))
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index f7ef8ad9b550d0..54a7fec25d5f8d 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -18,6 +18,7 @@ obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o
+
+ ccflags-y := -fno-stack-protector
+ ccflags-y += -DDISABLE_BRANCH_PROFILING
++ccflags-y += -fno-builtin
+
+ ifneq ($(c-gettimeofday-y),)
+ CFLAGS_vgettimeofday.o += -fPIC -include $(c-gettimeofday-y)
+diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
+index a3ec87d198ac83..806649c7f23dc6 100644
+--- a/arch/x86/include/asm/bug.h
++++ b/arch/x86/include/asm/bug.h
+@@ -13,6 +13,18 @@
+ #define INSN_UD2 0x0b0f
+ #define LEN_UD2 2
+
++/*
++ * In clang we have UD1s reporting UBSAN failures on X86, 64 and 32bit.
++ */
++#define INSN_ASOP 0x67
++#define OPCODE_ESCAPE 0x0f
++#define SECOND_BYTE_OPCODE_UD1 0xb9
++#define SECOND_BYTE_OPCODE_UD2 0x0b
++
++#define BUG_NONE 0xffff
++#define BUG_UD1 0xfffe
++#define BUG_UD2 0xfffd
++
+ #ifdef CONFIG_GENERIC_BUG
+
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 4fa0b17e5043aa..29ec49209ae010 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -42,6 +42,7 @@
+ #include <linux/hardirq.h>
+ #include <linux/atomic.h>
+ #include <linux/iommu.h>
++#include <linux/ubsan.h>
+
+ #include <asm/stacktrace.h>
+ #include <asm/processor.h>
+@@ -91,6 +92,47 @@ __always_inline int is_valid_bugaddr(unsigned long addr)
+ return *(unsigned short *)addr == INSN_UD2;
+ }
+
++/*
++ * Check for UD1 or UD2, accounting for Address Size Override Prefixes.
++ * If it's a UD1, get the ModRM byte to pass along to UBSan.
++ */
++__always_inline int decode_bug(unsigned long addr, u32 *imm)
++{
++ u8 v;
++
++ if (addr < TASK_SIZE_MAX)
++ return BUG_NONE;
++
++ v = *(u8 *)(addr++);
++ if (v == INSN_ASOP)
++ v = *(u8 *)(addr++);
++ if (v != OPCODE_ESCAPE)
++ return BUG_NONE;
++
++ v = *(u8 *)(addr++);
++ if (v == SECOND_BYTE_OPCODE_UD2)
++ return BUG_UD2;
++
++ if (!IS_ENABLED(CONFIG_UBSAN_TRAP) || v != SECOND_BYTE_OPCODE_UD1)
++ return BUG_NONE;
++
++ /* Retrieve the immediate (type value) for the UBSAN UD1 */
++ v = *(u8 *)(addr++);
++ if (X86_MODRM_RM(v) == 4)
++ addr++;
++
++ *imm = 0;
++ if (X86_MODRM_MOD(v) == 1)
++ *imm = *(u8 *)addr;
++ else if (X86_MODRM_MOD(v) == 2)
++ *imm = *(u32 *)addr;
++ else
++ WARN_ONCE(1, "Unexpected MODRM_MOD: %u\n", X86_MODRM_MOD(v));
++
++ return BUG_UD1;
++}
++
++
+ static nokprobe_inline int
+ do_trap_no_signal(struct task_struct *tsk, int trapnr, const char *str,
+ struct pt_regs *regs, long error_code)
+@@ -216,30 +258,37 @@ static inline void handle_invalid_op(struct pt_regs *regs)
+ static noinstr bool handle_bug(struct pt_regs *regs)
+ {
+ bool handled = false;
++ int ud_type;
++ u32 imm;
+
+- /*
+- * Normally @regs are unpoisoned by irqentry_enter(), but handle_bug()
+- * is a rare case that uses @regs without passing them to
+- * irqentry_enter().
+- */
+- kmsan_unpoison_entry_regs(regs);
+- if (!is_valid_bugaddr(regs->ip))
++ ud_type = decode_bug(regs->ip, &imm);
++ if (ud_type == BUG_NONE)
+ return handled;
+
+ /*
+ * All lies, just get the WARN/BUG out.
+ */
+ instrumentation_begin();
++ /*
++ * Normally @regs are unpoisoned by irqentry_enter(), but handle_bug()
++ * is a rare case that uses @regs without passing them to
++ * irqentry_enter().
++ */
++ kmsan_unpoison_entry_regs(regs);
+ /*
+ * Since we're emulating a CALL with exceptions, restore the interrupt
+ * state to what it was at the exception site.
+ */
+ if (regs->flags & X86_EFLAGS_IF)
+ raw_local_irq_enable();
+- if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
+- handle_cfi_failure(regs) == BUG_TRAP_TYPE_WARN) {
+- regs->ip += LEN_UD2;
+- handled = true;
++ if (ud_type == BUG_UD2) {
++ if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
++ handle_cfi_failure(regs) == BUG_TRAP_TYPE_WARN) {
++ regs->ip += LEN_UD2;
++ handled = true;
++ }
++ } else if (IS_ENABLED(CONFIG_UBSAN_TRAP)) {
++ pr_crit("%s at %pS\n", report_ubsan_failure(regs, imm), (void *)regs->ip);
+ }
+ if (regs->flags & X86_EFLAGS_IF)
+ raw_local_irq_disable();
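
A hedged sketch (standalone helper, not the kernel's decoder) of the instruction shapes decode_bug() above accepts: an optional 0x67 address-size override prefix, the 0x0f escape byte, then 0x0b for UD2 or 0xb9 for UD1, whose ModRM byte selects the 8- or 32-bit immediate UBSan uses as the check-type value.

    #include <linux/types.h>

    static bool example_is_ud2(const u8 *insn)
    {
            if (insn[0] == 0x67)    /* address-size override prefix */
                    insn++;
            return insn[0] == 0x0f && insn[1] == 0x0b;
    }
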
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 0e1167b239342f..6ef2ec1f7d78bb 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -600,9 +600,7 @@ static int blk_rq_map_user_bvec(struct request *rq, const struct iov_iter *iter)
+ if (nsegs >= nr_segs || bytes > UINT_MAX - bv->bv_len)
+ goto put_bio;
+ if (bytes + bv->bv_len > nr_iter)
+- goto put_bio;
+- if (bv->bv_offset + bv->bv_len > PAGE_SIZE)
+- goto put_bio;
++ break;
+
+ nsegs++;
+ bytes += bv->bv_len;
+diff --git a/drivers/accel/ivpu/ivpu_debugfs.c b/drivers/accel/ivpu/ivpu_debugfs.c
+index 6f86f8df30db0f..8d50981594d153 100644
+--- a/drivers/accel/ivpu/ivpu_debugfs.c
++++ b/drivers/accel/ivpu/ivpu_debugfs.c
+@@ -108,6 +108,14 @@ static int reset_pending_show(struct seq_file *s, void *v)
+ return 0;
+ }
+
++static int firewall_irq_counter_show(struct seq_file *s, void *v)
++{
++ struct ivpu_device *vdev = seq_to_ivpu(s);
++
++ seq_printf(s, "%d\n", atomic_read(&vdev->hw->firewall_irq_counter));
++ return 0;
++}
++
+ static const struct drm_debugfs_info vdev_debugfs_list[] = {
+ {"bo_list", bo_list_show, 0},
+ {"fw_name", fw_name_show, 0},
+@@ -116,6 +124,7 @@ static const struct drm_debugfs_info vdev_debugfs_list[] = {
+ {"last_bootmode", last_bootmode_show, 0},
+ {"reset_counter", reset_counter_show, 0},
+ {"reset_pending", reset_pending_show, 0},
++ {"firewall_irq_counter", firewall_irq_counter_show, 0},
+ };
+
+ static ssize_t
+diff --git a/drivers/accel/ivpu/ivpu_hw.c b/drivers/accel/ivpu/ivpu_hw.c
+index 27f0fe4d54e006..e69c0613513f11 100644
+--- a/drivers/accel/ivpu/ivpu_hw.c
++++ b/drivers/accel/ivpu/ivpu_hw.c
+@@ -249,6 +249,7 @@ int ivpu_hw_init(struct ivpu_device *vdev)
+ platform_init(vdev);
+ wa_init(vdev);
+ timeouts_init(vdev);
++ atomic_set(&vdev->hw->firewall_irq_counter, 0);
+
+ return 0;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_hw.h b/drivers/accel/ivpu/ivpu_hw.h
+index 1c0c98e3afb88d..a96a05b2acda9a 100644
+--- a/drivers/accel/ivpu/ivpu_hw.h
++++ b/drivers/accel/ivpu/ivpu_hw.h
+@@ -52,6 +52,7 @@ struct ivpu_hw_info {
+ int dma_bits;
+ ktime_t d0i3_entry_host_ts;
+ u64 d0i3_entry_vpu_ts;
++ atomic_t firewall_irq_counter;
+ };
+
+ int ivpu_hw_init(struct ivpu_device *vdev);
+diff --git a/drivers/accel/ivpu/ivpu_hw_ip.c b/drivers/accel/ivpu/ivpu_hw_ip.c
+index dfd2f4a5b52685..60b33fc59d96e3 100644
+--- a/drivers/accel/ivpu/ivpu_hw_ip.c
++++ b/drivers/accel/ivpu/ivpu_hw_ip.c
+@@ -1062,7 +1062,10 @@ static void irq_wdt_mss_handler(struct ivpu_device *vdev)
+
+ static void irq_noc_firewall_handler(struct ivpu_device *vdev)
+ {
+- ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ");
++ atomic_inc(&vdev->hw->firewall_irq_counter);
++
++ ivpu_dbg(vdev, IRQ, "NOC Firewall interrupt detected, counter %d\n",
++ atomic_read(&vdev->hw->firewall_irq_counter));
+ }
+
+ /* Handler for IRQs from NPU core */
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index ed91dfd4fdca7e..544f53ae9cc0c3 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -867,7 +867,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+
+ /* Store CPU Logical ID */
+ cpc_ptr->cpu_id = pr->id;
+- spin_lock_init(&cpc_ptr->rmw_lock);
++ raw_spin_lock_init(&cpc_ptr->rmw_lock);
+
+ /* Parse PSD data for this CPU */
+ ret = acpi_get_psd(cpc_ptr, handle);
+@@ -1087,6 +1087,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
+ struct cpc_reg *reg = &reg_res->cpc_entry.reg;
+ struct cpc_desc *cpc_desc;
++ unsigned long flags;
+
+ size = GET_BIT_WIDTH(reg);
+
+@@ -1126,7 +1127,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ return -ENODEV;
+ }
+
+- spin_lock(&cpc_desc->rmw_lock);
++ raw_spin_lock_irqsave(&cpc_desc->rmw_lock, flags);
+ switch (size) {
+ case 8:
+ prev_val = readb_relaxed(vaddr);
+@@ -1141,7 +1142,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ prev_val = readq_relaxed(vaddr);
+ break;
+ default:
+- spin_unlock(&cpc_desc->rmw_lock);
++ raw_spin_unlock_irqrestore(&cpc_desc->rmw_lock, flags);
+ return -EFAULT;
+ }
+ val = MASK_VAL_WRITE(reg, prev_val, val);
+@@ -1174,7 +1175,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ }
+
+ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+- spin_unlock(&cpc_desc->rmw_lock);
++ raw_spin_unlock_irqrestore(&cpc_desc->rmw_lock, flags);
+
+ return ret_val;
+ }
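
A hedged sketch of the locking change above (example_* names are hypothetical): the read-modify-write of a shared register needs a raw spinlock with interrupts saved, so the RMW cannot be interrupted mid-sequence and the lock stays a true spinlock even where ordinary spinlocks become sleeping locks (PREEMPT_RT).

    #include <linux/io.h>
    #include <linux/spinlock.h>

    static DEFINE_RAW_SPINLOCK(example_rmw_lock);

    static void example_rmw(void __iomem *vaddr, u32 set_bits)
    {
            unsigned long flags;
            u32 val;

            raw_spin_lock_irqsave(&example_rmw_lock, flags);
            val = readl_relaxed(vaddr);
            writel_relaxed(val | set_bits, vaddr);
            raw_spin_unlock_irqrestore(&example_rmw_lock, flags);
    }
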
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index a904bf9a7f7db8..42c490149a431e 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -511,24 +511,10 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ },
+ },
+ {
+- /* Asus Vivobook Pro N6506MV */
++ /* Asus Vivobook Pro N6506M* */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "N6506MV"),
+- },
+- },
+- {
+- /* Asus Vivobook Pro N6506MU */
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "N6506MU"),
+- },
+- },
+- {
+- /* Asus Vivobook Pro N6506MJ */
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_BOARD_NAME, "N6506MJ"),
++ DMI_MATCH(DMI_BOARD_NAME, "N6506M"),
+ },
+ },
+ {
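
The three per-variant entries collapse into one because a DMI_MATCH() entry matches as a substring of the firmware-reported field (exact comparison is what DMI_EXACT_MATCH is for), so the shorter "N6506M" board name covers N6506MV, N6506MU, N6506MJ and any future sibling. The equivalent check in plain C:

    #include <stdio.h>
    #include <string.h>

    /* DMI_MATCH() semantics: the table entry matches if its string
     * occurs anywhere in the reported identifier */
    static int dmi_field_matches(const char *reported, const char *pattern)
    {
            return strstr(reported, pattern) != NULL;
    }

    int main(void)
    {
            const char *boards[] = { "N6506MV", "N6506MU", "N6506MJ" };

            for (size_t i = 0; i < sizeof(boards) / sizeof(boards[0]); i++)
                    printf("%s -> %s\n", boards[i],
                           dmi_field_matches(boards[i], "N6506M") ?
                           "match" : "no match");
            return 0;
    }
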
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 3b0f4b6153fc52..68b8cb518463e3 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -25,7 +25,6 @@
+ #include <linux/mutex.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/netdevice.h>
+-#include <linux/rcupdate.h>
+ #include <linux/sched/signal.h>
+ #include <linux/sched/mm.h>
+ #include <linux/string_helpers.h>
+@@ -2641,7 +2640,6 @@ static const char *dev_uevent_name(const struct kobject *kobj)
+ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
+ {
+ const struct device *dev = kobj_to_dev(kobj);
+- struct device_driver *driver;
+ int retval = 0;
+
+ /* add device node properties if present */
+@@ -2670,12 +2668,8 @@ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
+ if (dev->type && dev->type->name)
+ add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
+
+- /* Synchronize with module_remove_driver() */
+- rcu_read_lock();
+- driver = READ_ONCE(dev->driver);
+- if (driver)
+- add_uevent_var(env, "DRIVER=%s", driver->name);
+- rcu_read_unlock();
++ if (dev->driver)
++ add_uevent_var(env, "DRIVER=%s", dev->driver->name);
+
+ /* Add common DT information about the device */
+ of_device_uevent(dev, env);
+@@ -2745,8 +2739,11 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
+ if (!env)
+ return -ENOMEM;
+
++ /* Synchronize with really_probe() */
++ device_lock(dev);
+ /* let the kset specific function add its keys */
+ retval = kset->uevent_ops->uevent(&dev->kobj, env);
++ device_unlock(dev);
+ if (retval)
+ goto out;
+
+@@ -4044,6 +4041,41 @@ int device_for_each_child_reverse(struct device *parent, void *data,
+ }
+ EXPORT_SYMBOL_GPL(device_for_each_child_reverse);
+
++/**
++ * device_for_each_child_reverse_from - device child iterator in reversed order.
++ * @parent: parent struct device.
++ * @from: optional starting point in child list
++ * @fn: function to be called for each device.
++ * @data: data for the callback.
++ *
++ * Iterate over @parent's child devices, starting at @from, and call @fn
++ * for each, passing it @data. This helper is identical to
++ * device_for_each_child_reverse() when @from is NULL.
++ *
++ * @fn is checked each iteration. If it returns anything other than 0,
++ * iteration stops and that value is returned to the caller of
++ * device_for_each_child_reverse_from().
++ */
++int device_for_each_child_reverse_from(struct device *parent,
++ struct device *from, const void *data,
++ int (*fn)(struct device *, const void *))
++{
++ struct klist_iter i;
++ struct device *child;
++ int error = 0;
++
++ if (!parent->p)
++ return 0;
++
++ klist_iter_init_node(&parent->p->klist_children, &i,
++ (from ? &from->p->knode_parent : NULL));
++ while ((child = prev_device(&i)) && !error)
++ error = fn(child, data);
++ klist_iter_exit(&i);
++ return error;
++}
++EXPORT_SYMBOL_GPL(device_for_each_child_reverse_from);
++
+ /**
+ * device_find_child - device iterator for locating a particular device.
+ * @parent: parent struct device
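
device_for_each_child_reverse_from() is the existing reverse iterator plus an optional starting node. Reading the hunk, the klist iterator is initialized on @from's node and prev_device() steps backwards first, so @from itself appears to be excluded from the walk. A sketch over a plain array standing in for the klist (all names here are illustrative):

    #include <stdio.h>

    /* walk items in reverse; a non-negative 'from' starts at the element
     * just before that index (the start node is excluded), from < 0
     * starts at the tail; stop on the first non-zero return */
    static int for_each_reverse_from(const int *items, int count, int from,
                                     int (*fn)(int item, const void *data),
                                     const void *data)
    {
            int error = 0;

            for (int i = (from < 0 ? count - 1 : from - 1);
                 i >= 0 && !error; i--)
                    error = fn(items[i], data);
            return error;
    }

    static int visit(int item, const void *data)
    {
            (void)data;
            printf("visit %d\n", item);
            return 0;
    }

    int main(void)
    {
            int items[] = { 10, 11, 12, 13 };

            /* from index 2: visits 11 then 10 */
            return for_each_reverse_from(items, 4, 2, visit, NULL);
    }
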
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+index c4eaa1158d54ed..5bc71bea883a06 100644
+--- a/drivers/base/module.c
++++ b/drivers/base/module.c
+@@ -7,7 +7,6 @@
+ #include <linux/errno.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
+-#include <linux/rcupdate.h>
+ #include "base.h"
+
+ static char *make_driver_name(const struct device_driver *drv)
+@@ -102,9 +101,6 @@ void module_remove_driver(const struct device_driver *drv)
+ if (!drv)
+ return;
+
+- /* Synchronize with dev_uevent() */
+- synchronize_rcu();
+-
+ sysfs_remove_link(&drv->p->kobj, "module");
+
+ if (drv->owner)
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 854546000c92bf..1ff99a7091bbba 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -674,6 +674,16 @@ EXPORT_SYMBOL_GPL(tpm_chip_register);
+ */
+ void tpm_chip_unregister(struct tpm_chip *chip)
+ {
++#ifdef CONFIG_TCG_TPM2_HMAC
++ int rc;
++
++ rc = tpm_try_get_ops(chip);
++ if (!rc) {
++ tpm2_end_auth_session(chip);
++ tpm_put_ops(chip);
++ }
++#endif
++
+ tpm_del_legacy_sysfs(chip);
+ if (tpm_is_hwrng_enabled(chip))
+ hwrng_unregister(&chip->hwrng);
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index c3fbbf4d3db79a..48ff87444f8519 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -27,6 +27,9 @@ static ssize_t tpm_dev_transmit(struct tpm_chip *chip, struct tpm_space *space,
+ struct tpm_header *header = (void *)buf;
+ ssize_t ret, len;
+
++ if (chip->flags & TPM_CHIP_FLAG_TPM2)
++ tpm2_end_auth_session(chip);
++
+ ret = tpm2_prepare_space(chip, space, buf, bufsiz);
+ /* If the command is not implemented by the TPM, synthesize a
+ * response with a TPM2_RC_COMMAND_CODE return for user-space.
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 5da134f12c9a47..8134f002b121f8 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -379,10 +379,12 @@ int tpm_pm_suspend(struct device *dev)
+
+ rc = tpm_try_get_ops(chip);
+ if (!rc) {
+- if (chip->flags & TPM_CHIP_FLAG_TPM2)
++ if (chip->flags & TPM_CHIP_FLAG_TPM2) {
++ tpm2_end_auth_session(chip);
+ tpm2_shutdown(chip, TPM2_SU_STATE);
+- else
++ } else {
+ rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
++ }
+
+ tpm_put_ops(chip);
+ }
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index 44f60730cff441..c8fdfe901dfb7c 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -333,6 +333,9 @@ void tpm_buf_append_hmac_session(struct tpm_chip *chip, struct tpm_buf *buf,
+ }
+
+ #ifdef CONFIG_TCG_TPM2_HMAC
++ /* The first write to /dev/tpm{rm0} will flush the session. */
++ attributes |= TPM2_SA_CONTINUE_SESSION;
++
+ /*
+ * The Architecture Guide requires us to strip trailing zeros
+ * before computing the HMAC
+@@ -484,7 +487,8 @@ static void tpm2_KDFe(u8 z[EC_PT_SZ], const char *str, u8 *pt_u, u8 *pt_v,
+ sha256_final(&sctx, out);
+ }
+
+-static void tpm_buf_append_salt(struct tpm_buf *buf, struct tpm_chip *chip)
++static void tpm_buf_append_salt(struct tpm_buf *buf, struct tpm_chip *chip,
++ struct tpm2_auth *auth)
+ {
+ struct crypto_kpp *kpp;
+ struct kpp_request *req;
+@@ -543,7 +547,7 @@ static void tpm_buf_append_salt(struct tpm_buf *buf, struct tpm_chip *chip)
+ sg_set_buf(&s[0], chip->null_ec_key_x, EC_PT_SZ);
+ sg_set_buf(&s[1], chip->null_ec_key_y, EC_PT_SZ);
+ kpp_request_set_input(req, s, EC_PT_SZ*2);
+- sg_init_one(d, chip->auth->salt, EC_PT_SZ);
++ sg_init_one(d, auth->salt, EC_PT_SZ);
+ kpp_request_set_output(req, d, EC_PT_SZ);
+ crypto_kpp_compute_shared_secret(req);
+ kpp_request_free(req);
+@@ -554,8 +558,7 @@ static void tpm_buf_append_salt(struct tpm_buf *buf, struct tpm_chip *chip)
+ * This works because KDFe fully consumes the secret before it
+ * writes the salt
+ */
+- tpm2_KDFe(chip->auth->salt, "SECRET", x, chip->null_ec_key_x,
+- chip->auth->salt);
++ tpm2_KDFe(auth->salt, "SECRET", x, chip->null_ec_key_x, auth->salt);
+
+ out:
+ crypto_free_kpp(kpp);
+@@ -853,7 +856,9 @@ int tpm_buf_check_hmac_response(struct tpm_chip *chip, struct tpm_buf *buf,
+ if (rc)
+ /* manually close the session if it wasn't consumed */
+ tpm2_flush_context(chip, auth->handle);
+- memzero_explicit(auth, sizeof(*auth));
++
++ kfree_sensitive(auth);
++ chip->auth = NULL;
+ } else {
+ /* reset for next use */
+ auth->session = TPM_HEADER_SIZE;
+@@ -881,7 +886,8 @@ void tpm2_end_auth_session(struct tpm_chip *chip)
+ return;
+
+ tpm2_flush_context(chip, auth->handle);
+- memzero_explicit(auth, sizeof(*auth));
++ kfree_sensitive(auth);
++ chip->auth = NULL;
+ }
+ EXPORT_SYMBOL(tpm2_end_auth_session);
+
+@@ -915,33 +921,37 @@ static int tpm2_parse_start_auth_session(struct tpm2_auth *auth,
+
+ static int tpm2_load_null(struct tpm_chip *chip, u32 *null_key)
+ {
+- int rc;
+ unsigned int offset = 0; /* dummy offset for null seed context */
+ u8 name[SHA256_DIGEST_SIZE + 2];
++ u32 tmp_null_key;
++ int rc;
+
+ rc = tpm2_load_context(chip, chip->null_key_context, &offset,
+- null_key);
+- if (rc != -EINVAL)
+- return rc;
++ &tmp_null_key);
++ if (rc != -EINVAL) {
++ if (!rc)
++ *null_key = tmp_null_key;
++ goto err;
++ }
+
+- /* an integrity failure may mean the TPM has been reset */
+- dev_err(&chip->dev, "NULL key integrity failure!\n");
+- /* check the null name against what we know */
+- tpm2_create_primary(chip, TPM2_RH_NULL, NULL, name);
+- if (memcmp(name, chip->null_key_name, sizeof(name)) == 0)
+- /* name unchanged, assume transient integrity failure */
+- return rc;
+- /*
+- * Fatal TPM failure: the NULL seed has actually changed, so
+- * the TPM must have been illegally reset. All in-kernel TPM
+- * operations will fail because the NULL primary can't be
+- * loaded to salt the sessions, but disable the TPM anyway so
+- * userspace programmes can't be compromised by it.
+- */
+- dev_err(&chip->dev, "NULL name has changed, disabling TPM due to interference\n");
++ /* Try to re-create null key, given the integrity failure: */
++ rc = tpm2_create_primary(chip, TPM2_RH_NULL, &tmp_null_key, name);
++ if (rc)
++ goto err;
++
++ /* Return null key if the name has not been changed: */
++ if (!memcmp(name, chip->null_key_name, sizeof(name))) {
++ *null_key = tmp_null_key;
++ return 0;
++ }
++
++ /* A changed name implies the TPM has been interfered with: */
++ dev_err(&chip->dev, "null key integrity check failed\n");
++ tpm2_flush_context(chip, tmp_null_key);
+ chip->flags |= TPM_CHIP_FLAG_DISABLE;
+
+- return rc;
++err:
++ return rc ? -ENODEV : 0;
+ }
+
+ /**
+@@ -958,16 +968,20 @@ static int tpm2_load_null(struct tpm_chip *chip, u32 *null_key)
+ */
+ int tpm2_start_auth_session(struct tpm_chip *chip)
+ {
++ struct tpm2_auth *auth;
+ struct tpm_buf buf;
+- struct tpm2_auth *auth = chip->auth;
+- int rc;
+ u32 null_key;
++ int rc;
+
+- if (!auth) {
+- dev_warn_once(&chip->dev, "auth session is not active\n");
++ if (chip->auth) {
++ dev_warn_once(&chip->dev, "auth session is active\n");
+ return 0;
+ }
+
++ auth = kzalloc(sizeof(*auth), GFP_KERNEL);
++ if (!auth)
++ return -ENOMEM;
++
+ rc = tpm2_load_null(chip, &null_key);
+ if (rc)
+ goto out;
+@@ -988,7 +1002,7 @@ int tpm2_start_auth_session(struct tpm_chip *chip)
+ tpm_buf_append(&buf, auth->our_nonce, sizeof(auth->our_nonce));
+
+ /* append encrypted salt and squirrel away unencrypted in auth */
+- tpm_buf_append_salt(&buf, chip);
++ tpm_buf_append_salt(&buf, chip, auth);
+ /* session type (HMAC, audit or policy) */
+ tpm_buf_append_u8(&buf, TPM2_SE_HMAC);
+
+@@ -1010,10 +1024,13 @@ int tpm2_start_auth_session(struct tpm_chip *chip)
+
+ tpm_buf_destroy(&buf);
+
+- if (rc)
+- goto out;
++ if (rc == TPM2_RC_SUCCESS) {
++ chip->auth = auth;
++ return 0;
++ }
+
+- out:
++out:
++ kfree_sensitive(auth);
+ return rc;
+ }
+ EXPORT_SYMBOL(tpm2_start_auth_session);
+@@ -1347,18 +1364,21 @@ static int tpm2_create_null_primary(struct tpm_chip *chip)
+ *
+ * Derive and context save the null primary and allocate memory in the
+ * struct tpm_chip for the authorizations.
++ *
++ * Return:
++ * * 0 - OK
++ * * -errno - A system error
++ * * TPM_RC - A TPM error
+ */
+ int tpm2_sessions_init(struct tpm_chip *chip)
+ {
+ int rc;
+
+ rc = tpm2_create_null_primary(chip);
+- if (rc)
+- dev_err(&chip->dev, "TPM: security failed (NULL seed derivation): %d\n", rc);
+-
+- chip->auth = kmalloc(sizeof(*chip->auth), GFP_KERNEL);
+- if (!chip->auth)
+- return -ENOMEM;
++ if (rc) {
++ dev_err(&chip->dev, "null key creation failed with %d\n", rc);
++ return rc;
++ }
+
+ return rc;
+ }
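
Taken together, these hunks move chip->auth from a chip-lifetime kmalloc() in tpm2_sessions_init() to a per-session kzalloc()/kfree_sensitive() pair owned by tpm2_start_auth_session() and tpm2_end_auth_session(), so session key material never outlives the session. A user-space analog of that ownership pattern, with explicit_bzero() (glibc >= 2.25) standing in for kfree_sensitive() and a placeholder struct body:

    #include <stdlib.h>
    #include <string.h>

    struct auth_session {
            unsigned char salt[32];     /* placeholder for real session state */
    };

    struct chip {
            struct auth_session *auth;
    };

    static int start_auth_session(struct chip *chip)
    {
            if (chip->auth)
                    return 0;           /* already active, like the warn path */

            chip->auth = calloc(1, sizeof(*chip->auth));
            return chip->auth ? 0 : -1;
    }

    /* analog of kfree_sensitive(): wipe before free, drop the pointer */
    static void end_auth_session(struct chip *chip)
    {
            if (!chip->auth)
                    return;
            explicit_bzero(chip->auth, sizeof(*chip->auth));
            free(chip->auth);
            chip->auth = NULL;
    }

    int main(void)
    {
            struct chip chip = { 0 };

            if (start_auth_session(&chip))
                    return 1;
            end_auth_session(&chip);
            return 0;
    }
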
+diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
+index 99b5c25be07943..b3fd0f1b7578dc 100644
+--- a/drivers/cxl/Kconfig
++++ b/drivers/cxl/Kconfig
+@@ -60,6 +60,7 @@ config CXL_ACPI
+ default CXL_BUS
+ select ACPI_TABLE_LIB
+ select ACPI_HMAT
++ select CXL_PORT
+ help
+ Enable support for host managed device memory (HDM) resources
+ published by a platform's ACPI CXL memory layout description. See
+diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
+index db321f48ba52e7..2caa90fa4bf253 100644
+--- a/drivers/cxl/Makefile
++++ b/drivers/cxl/Makefile
+@@ -1,13 +1,21 @@
+ # SPDX-License-Identifier: GPL-2.0
++
++# Order is important here for the built-in case:
++# - 'core' first for fundamental init
++# - 'port' before platform root drivers like 'acpi' so that CXL-root ports
++# are immediately enabled
++# - 'mem' and 'pmem' before endpoint drivers so that memdevs are
++# immediately enabled
++# - 'pci' last, also mirrors the hardware enumeration hierarchy
+ obj-y += core/
+-obj-$(CONFIG_CXL_PCI) += cxl_pci.o
+-obj-$(CONFIG_CXL_MEM) += cxl_mem.o
++obj-$(CONFIG_CXL_PORT) += cxl_port.o
+ obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
+ obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o
+-obj-$(CONFIG_CXL_PORT) += cxl_port.o
++obj-$(CONFIG_CXL_MEM) += cxl_mem.o
++obj-$(CONFIG_CXL_PCI) += cxl_pci.o
+
+-cxl_mem-y := mem.o
+-cxl_pci-y := pci.o
++cxl_port-y := port.o
+ cxl_acpi-y := acpi.o
+ cxl_pmem-y := pmem.o security.o
+-cxl_port-y := port.o
++cxl_mem-y := mem.o
++cxl_pci-y := pci.o
+diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
+index 82b78e331d8ed2..432b7cfd12a8e1 100644
+--- a/drivers/cxl/acpi.c
++++ b/drivers/cxl/acpi.c
+@@ -924,6 +924,13 @@ static void __exit cxl_acpi_exit(void)
+
+ /* load before dax_hmem sees 'Soft Reserved' CXL ranges */
+ subsys_initcall(cxl_acpi_init);
++
++/*
++ * Arrange for host-bridge ports to be active synchronously with
++ * cxl_acpi_probe() exit.
++ */
++MODULE_SOFTDEP("pre: cxl_port");
++
+ module_exit(cxl_acpi_exit);
+ MODULE_DESCRIPTION("CXL ACPI: Platform Support");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
+index 3df10517a3278f..223c273c0cd179 100644
+--- a/drivers/cxl/core/hdm.c
++++ b/drivers/cxl/core/hdm.c
+@@ -712,7 +712,44 @@ static int cxl_decoder_commit(struct cxl_decoder *cxld)
+ return 0;
+ }
+
+-static int cxl_decoder_reset(struct cxl_decoder *cxld)
++static int commit_reap(struct device *dev, const void *data)
++{
++ struct cxl_port *port = to_cxl_port(dev->parent);
++ struct cxl_decoder *cxld;
++
++ if (!is_switch_decoder(dev) && !is_endpoint_decoder(dev))
++ return 0;
++
++ cxld = to_cxl_decoder(dev);
++ if (port->commit_end == cxld->id &&
++ ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)) {
++ port->commit_end--;
++ dev_dbg(&port->dev, "reap: %s commit_end: %d\n",
++ dev_name(&cxld->dev), port->commit_end);
++ }
++
++ return 0;
++}
++
++void cxl_port_commit_reap(struct cxl_decoder *cxld)
++{
++ struct cxl_port *port = to_cxl_port(cxld->dev.parent);
++
++ lockdep_assert_held_write(&cxl_region_rwsem);
++
++ /*
++ * Once the highest committed decoder is disabled, free any other
++ * decoders that were left pinned by out-of-order release.
++ */
++ port->commit_end--;
++ dev_dbg(&port->dev, "reap: %s commit_end: %d\n", dev_name(&cxld->dev),
++ port->commit_end);
++ device_for_each_child_reverse_from(&port->dev, &cxld->dev, NULL,
++ commit_reap);
++}
++EXPORT_SYMBOL_NS_GPL(cxl_port_commit_reap, CXL);
++
++static void cxl_decoder_reset(struct cxl_decoder *cxld)
+ {
+ struct cxl_port *port = to_cxl_port(cxld->dev.parent);
+ struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);
+@@ -721,14 +758,14 @@ static int cxl_decoder_reset(struct cxl_decoder *cxld)
+ u32 ctrl;
+
+ if ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)
+- return 0;
++ return;
+
+- if (port->commit_end != id) {
++ if (port->commit_end == id)
++ cxl_port_commit_reap(cxld);
++ else
+ dev_dbg(&port->dev,
+ "%s: out of order reset, expected decoder%d.%d\n",
+ dev_name(&cxld->dev), port->id, port->commit_end);
+- return -EBUSY;
+- }
+
+ down_read(&cxl_dpa_rwsem);
+ ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(id));
+@@ -741,7 +778,6 @@ static int cxl_decoder_reset(struct cxl_decoder *cxld)
+ writel(0, hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(id));
+ up_read(&cxl_dpa_rwsem);
+
+- port->commit_end--;
+ cxld->flags &= ~CXL_DECODER_F_ENABLE;
+
+ /* Userspace is now responsible for reconfiguring this decoder */
+@@ -751,8 +787,6 @@ static int cxl_decoder_reset(struct cxl_decoder *cxld)
+ cxled = to_cxl_endpoint_decoder(&cxld->dev);
+ cxled->state = CXL_DECODER_STATE_MANUAL;
+ }
+-
+- return 0;
+ }
+
+ static int cxl_setup_hdm_decoder_from_dvsec(
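
The reap logic treats port->commit_end as a high-water mark: after the highest committed decoder is disabled it walks the remaining decoders in reverse and keeps pulling commit_end back while the decoder at that id is no longer enabled. Reduced to an array of enable flags, the invariant looks like this (a sketch, not the driver's data model):

    #include <stdbool.h>
    #include <stdio.h>

    /* commit_end is the id of the highest committed decoder, or -1 when
     * none; pull it back past every decoder that is no longer enabled */
    static void commit_reap(const bool *enabled, int *commit_end)
    {
            while (*commit_end >= 0 && !enabled[*commit_end])
                    (*commit_end)--;
    }

    int main(void)
    {
            bool enabled[] = { true, false, false, true };
            int commit_end = 3;

            enabled[3] = false;   /* out-of-order release left 1 and 2 disabled */
            commit_reap(enabled, &commit_end);
            printf("commit_end = %d\n", commit_end);   /* now 0 */
            return 0;
    }
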
+diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
+index 1d5007e3795a3c..d3237346f68776 100644
+--- a/drivers/cxl/core/port.c
++++ b/drivers/cxl/core/port.c
+@@ -2088,11 +2088,18 @@ static void cxl_bus_remove(struct device *dev)
+
+ static struct workqueue_struct *cxl_bus_wq;
+
+-static void cxl_bus_rescan_queue(struct work_struct *w)
++static int cxl_rescan_attach(struct device *dev, void *data)
+ {
+- int rc = bus_rescan_devices(&cxl_bus_type);
++ int rc = device_attach(dev);
++
++ dev_vdbg(dev, "rescan: %s\n", rc ? "attached" : "detached");
+
+- pr_debug("CXL bus rescan result: %d\n", rc);
++ return 0;
++}
++
++static void cxl_bus_rescan_queue(struct work_struct *w)
++{
++ bus_for_each_dev(&cxl_bus_type, NULL, NULL, cxl_rescan_attach);
+ }
+
+ void cxl_bus_rescan(void)
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 21ad5f24287581..38507c4a3743b1 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -232,8 +232,8 @@ static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
+ "Bypassing cpu_cache_invalidate_memregion() for testing!\n");
+ return 0;
+ } else {
+- dev_err(&cxlr->dev,
+- "Failed to synchronize CPU cache state\n");
++ dev_WARN(&cxlr->dev,
++ "Failed to synchronize CPU cache state\n");
+ return -ENXIO;
+ }
+ }
+@@ -242,19 +242,17 @@ static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
+ return 0;
+ }
+
+-static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
++static void cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ {
+ struct cxl_region_params *p = &cxlr->params;
+- int i, rc = 0;
++ int i;
+
+ /*
+- * Before region teardown attempt to flush, and if the flush
+- * fails cancel the region teardown for data consistency
+- * concerns
++ * Before region teardown, attempt to flush and evict any data cached
++ * for this region, or scream loudly about missing arch / platform
++ * support for CXL teardown.
+ */
+- rc = cxl_region_invalidate_memregion(cxlr);
+- if (rc)
+- return rc;
++ cxl_region_invalidate_memregion(cxlr);
+
+ for (i = count - 1; i >= 0; i--) {
+ struct cxl_endpoint_decoder *cxled = p->targets[i];
+@@ -277,23 +275,17 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ cxl_rr = cxl_rr_load(iter, cxlr);
+ cxld = cxl_rr->decoder;
+ if (cxld->reset)
+- rc = cxld->reset(cxld);
+- if (rc)
+- return rc;
++ cxld->reset(cxld);
+ set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+ }
+
+ endpoint_reset:
+- rc = cxled->cxld.reset(&cxled->cxld);
+- if (rc)
+- return rc;
++ cxled->cxld.reset(&cxled->cxld);
+ set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+ }
+
+ /* all decoders associated with this region have been torn down */
+ clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+-
+- return 0;
+ }
+
+ static int commit_decoder(struct cxl_decoder *cxld)
+@@ -409,16 +401,8 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
+ * still pending.
+ */
+ if (p->state == CXL_CONFIG_RESET_PENDING) {
+- rc = cxl_region_decode_reset(cxlr, p->interleave_ways);
+- /*
+- * Revert to committed since there may still be active
+- * decoders associated with this region, or move forward
+- * to active to mark the reset successful
+- */
+- if (rc)
+- p->state = CXL_CONFIG_COMMIT;
+- else
+- p->state = CXL_CONFIG_ACTIVE;
++ cxl_region_decode_reset(cxlr, p->interleave_ways);
++ p->state = CXL_CONFIG_ACTIVE;
+ }
+ }
+
+@@ -2052,13 +2036,7 @@ static int cxl_region_detach(struct cxl_endpoint_decoder *cxled)
+ get_device(&cxlr->dev);
+
+ if (p->state > CXL_CONFIG_ACTIVE) {
+- /*
+- * TODO: tear down all impacted regions if a device is
+- * removed out of order
+- */
+- rc = cxl_region_decode_reset(cxlr, p->interleave_ways);
+- if (rc)
+- goto out;
++ cxl_region_decode_reset(cxlr, p->interleave_ways);
+ p->state = CXL_CONFIG_ACTIVE;
+ }
+
+diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h
+index 9167cfba7f592c..cdffebcf20a4db 100644
+--- a/drivers/cxl/core/trace.h
++++ b/drivers/cxl/core/trace.h
+@@ -279,7 +279,7 @@ TRACE_EVENT(cxl_generic_event,
+ #define CXL_GMER_MEM_EVT_TYPE_ECC_ERROR 0x00
+ #define CXL_GMER_MEM_EVT_TYPE_INV_ADDR 0x01
+ #define CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR 0x02
+-#define show_mem_event_type(type) __print_symbolic(type, \
++#define show_gmer_mem_event_type(type) __print_symbolic(type, \
+ { CXL_GMER_MEM_EVT_TYPE_ECC_ERROR, "ECC Error" }, \
+ { CXL_GMER_MEM_EVT_TYPE_INV_ADDR, "Invalid Address" }, \
+ { CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR, "Data Path Error" } \
+@@ -373,7 +373,7 @@ TRACE_EVENT(cxl_general_media,
+ "hpa=%llx region=%s region_uuid=%pUb",
+ __entry->dpa, show_dpa_flags(__entry->dpa_flags),
+ show_event_desc_flags(__entry->descriptor),
+- show_mem_event_type(__entry->type),
++ show_gmer_mem_event_type(__entry->type),
+ show_trans_type(__entry->transaction_type),
+ __entry->channel, __entry->rank, __entry->device,
+ __print_hex(__entry->comp_id, CXL_EVENT_GEN_MED_COMP_ID_SIZE),
+@@ -391,6 +391,17 @@ TRACE_EVENT(cxl_general_media,
+ * DRAM Event Record defines many fields the same as the General Media Event
+ * Record. Reuse those definitions as appropriate.
+ */
++#define CXL_DER_MEM_EVT_TYPE_ECC_ERROR 0x00
++#define CXL_DER_MEM_EVT_TYPE_SCRUB_MEDIA_ECC_ERROR 0x01
++#define CXL_DER_MEM_EVT_TYPE_INV_ADDR 0x02
++#define CXL_DER_MEM_EVT_TYPE_DATA_PATH_ERROR 0x03
++#define show_dram_mem_event_type(type) __print_symbolic(type, \
++ { CXL_DER_MEM_EVT_TYPE_ECC_ERROR, "ECC Error" }, \
++ { CXL_DER_MEM_EVT_TYPE_SCRUB_MEDIA_ECC_ERROR, "Scrub Media ECC Error" }, \
++ { CXL_DER_MEM_EVT_TYPE_INV_ADDR, "Invalid Address" }, \
++ { CXL_DER_MEM_EVT_TYPE_DATA_PATH_ERROR, "Data Path Error" } \
++)
++
+ #define CXL_DER_VALID_CHANNEL BIT(0)
+ #define CXL_DER_VALID_RANK BIT(1)
+ #define CXL_DER_VALID_NIBBLE BIT(2)
+@@ -477,7 +488,7 @@ TRACE_EVENT(cxl_dram,
+ "hpa=%llx region=%s region_uuid=%pUb",
+ __entry->dpa, show_dpa_flags(__entry->dpa_flags),
+ show_event_desc_flags(__entry->descriptor),
+- show_mem_event_type(__entry->type),
++ show_dram_mem_event_type(__entry->type),
+ show_trans_type(__entry->transaction_type),
+ __entry->channel, __entry->rank, __entry->nibble_mask,
+ __entry->bank_group, __entry->bank,
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index 9afb407d438fae..d437745ab1725a 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -359,7 +359,7 @@ struct cxl_decoder {
+ struct cxl_region *region;
+ unsigned long flags;
+ int (*commit)(struct cxl_decoder *cxld);
+- int (*reset)(struct cxl_decoder *cxld);
++ void (*reset)(struct cxl_decoder *cxld);
+ };
+
+ /*
+@@ -730,6 +730,7 @@ static inline bool is_cxl_root(struct cxl_port *port)
+ int cxl_num_decoders_committed(struct cxl_port *port);
+ bool is_cxl_port(const struct device *dev);
+ struct cxl_port *to_cxl_port(const struct device *dev);
++void cxl_port_commit_reap(struct cxl_decoder *cxld);
+ struct pci_bus;
+ int devm_cxl_register_pci_bus(struct device *host, struct device *uport_dev,
+ struct pci_bus *bus);
+diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
+index d7d5d982ce69f3..49733d2c38f10b 100644
+--- a/drivers/cxl/port.c
++++ b/drivers/cxl/port.c
+@@ -208,7 +208,22 @@ static struct cxl_driver cxl_port_driver = {
+ },
+ };
+
+-module_cxl_driver(cxl_port_driver);
++static int __init cxl_port_init(void)
++{
++ return cxl_driver_register(&cxl_port_driver);
++}
++/*
++ * Be ready to immediately enable ports emitted by the platform CXL root
++ * (e.g. cxl_acpi) when CONFIG_CXL_PORT=y.
++ */
++subsys_initcall(cxl_port_init);
++
++static void __exit cxl_port_exit(void)
++{
++ cxl_driver_unregister(&cxl_port_driver);
++}
++module_exit(cxl_port_exit);
++
+ MODULE_DESCRIPTION("CXL: Port enumeration and services");
+ MODULE_LICENSE("GPL v2");
+ MODULE_IMPORT_NS(CXL);
+diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
+index 98e6ad8528d37c..fc0280dcddd107 100644
+--- a/drivers/dpll/dpll_netlink.c
++++ b/drivers/dpll/dpll_netlink.c
+@@ -342,6 +342,51 @@ dpll_msg_add_pin_freq(struct sk_buff *msg, struct dpll_pin *pin,
+ return 0;
+ }
+
++static int
++dpll_msg_add_pin_esync(struct sk_buff *msg, struct dpll_pin *pin,
++ struct dpll_pin_ref *ref, struct netlink_ext_ack *extack)
++{
++ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
++ struct dpll_device *dpll = ref->dpll;
++ struct dpll_pin_esync esync;
++ struct nlattr *nest;
++ int ret, i;
++
++ if (!ops->esync_get)
++ return 0;
++ ret = ops->esync_get(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
++ dpll_priv(dpll), &esync, extack);
++ if (ret == -EOPNOTSUPP)
++ return 0;
++ else if (ret)
++ return ret;
++ if (nla_put_64bit(msg, DPLL_A_PIN_ESYNC_FREQUENCY, sizeof(esync.freq),
++ &esync.freq, DPLL_A_PIN_PAD))
++ return -EMSGSIZE;
++ if (nla_put_u32(msg, DPLL_A_PIN_ESYNC_PULSE, esync.pulse))
++ return -EMSGSIZE;
++ for (i = 0; i < esync.range_num; i++) {
++ nest = nla_nest_start(msg,
++ DPLL_A_PIN_ESYNC_FREQUENCY_SUPPORTED);
++ if (!nest)
++ return -EMSGSIZE;
++ if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY_MIN,
++ sizeof(esync.range[i].min),
++ &esync.range[i].min, DPLL_A_PIN_PAD))
++ goto nest_cancel;
++ if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY_MAX,
++ sizeof(esync.range[i].max),
++ &esync.range[i].max, DPLL_A_PIN_PAD))
++ goto nest_cancel;
++ nla_nest_end(msg, nest);
++ }
++ return 0;
++
++nest_cancel:
++ nla_nest_cancel(msg, nest);
++ return -EMSGSIZE;
++}
++
+ static bool dpll_pin_is_freq_supported(struct dpll_pin *pin, u32 freq)
+ {
+ int fs;
+@@ -481,6 +526,9 @@ dpll_cmd_pin_get_one(struct sk_buff *msg, struct dpll_pin *pin,
+ if (ret)
+ return ret;
+ ret = dpll_msg_add_ffo(msg, pin, ref, extack);
++ if (ret)
++ return ret;
++ ret = dpll_msg_add_pin_esync(msg, pin, ref, extack);
+ if (ret)
+ return ret;
+ if (xa_empty(&pin->parent_refs))
+@@ -738,6 +786,83 @@ dpll_pin_freq_set(struct dpll_pin *pin, struct nlattr *a,
+ return ret;
+ }
+
++static int
++dpll_pin_esync_set(struct dpll_pin *pin, struct nlattr *a,
++ struct netlink_ext_ack *extack)
++{
++ struct dpll_pin_ref *ref, *failed;
++ const struct dpll_pin_ops *ops;
++ struct dpll_pin_esync esync;
++ u64 freq = nla_get_u64(a);
++ struct dpll_device *dpll;
++ bool supported = false;
++ unsigned long i;
++ int ret;
++
++ xa_for_each(&pin->dpll_refs, i, ref) {
++ ops = dpll_pin_ops(ref);
++ if (!ops->esync_set || !ops->esync_get) {
++ NL_SET_ERR_MSG(extack,
++ "embedded sync feature is not supported by this device");
++ return -EOPNOTSUPP;
++ }
++ }
++ ref = dpll_xa_ref_dpll_first(&pin->dpll_refs);
++ ops = dpll_pin_ops(ref);
++ dpll = ref->dpll;
++ ret = ops->esync_get(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
++ dpll_priv(dpll), &esync, extack);
++ if (ret) {
++ NL_SET_ERR_MSG(extack, "unable to get current embedded sync frequency value");
++ return ret;
++ }
++ if (freq == esync.freq)
++ return 0;
++ for (i = 0; i < esync.range_num; i++)
++ if (freq <= esync.range[i].max && freq >= esync.range[i].min)
++ supported = true;
++ if (!supported) {
++ NL_SET_ERR_MSG_ATTR(extack, a,
++ "requested embedded sync frequency value is not supported by this device");
++ return -EINVAL;
++ }
++
++ xa_for_each(&pin->dpll_refs, i, ref) {
++ void *pin_dpll_priv;
++
++ ops = dpll_pin_ops(ref);
++ dpll = ref->dpll;
++ pin_dpll_priv = dpll_pin_on_dpll_priv(dpll, pin);
++ ret = ops->esync_set(pin, pin_dpll_priv, dpll, dpll_priv(dpll),
++ freq, extack);
++ if (ret) {
++ failed = ref;
++ NL_SET_ERR_MSG_FMT(extack,
++ "embedded sync frequency set failed for dpll_id: %u",
++ dpll->id);
++ goto rollback;
++ }
++ }
++ __dpll_pin_change_ntf(pin);
++
++ return 0;
++
++rollback:
++ xa_for_each(&pin->dpll_refs, i, ref) {
++ void *pin_dpll_priv;
++
++ if (ref == failed)
++ break;
++ ops = dpll_pin_ops(ref);
++ dpll = ref->dpll;
++ pin_dpll_priv = dpll_pin_on_dpll_priv(dpll, pin);
++ if (ops->esync_set(pin, pin_dpll_priv, dpll, dpll_priv(dpll),
++ esync.freq, extack))
++ NL_SET_ERR_MSG(extack, "set embedded sync frequency rollback failed");
++ }
++ return ret;
++}
++
+ static int
+ dpll_pin_on_pin_state_set(struct dpll_pin *pin, u32 parent_idx,
+ enum dpll_pin_state state,
+@@ -1039,6 +1164,11 @@ dpll_pin_set_from_nlattr(struct dpll_pin *pin, struct genl_info *info)
+ if (ret)
+ return ret;
+ break;
++ case DPLL_A_PIN_ESYNC_FREQUENCY:
++ ret = dpll_pin_esync_set(pin, a, info->extack);
++ if (ret)
++ return ret;
++ break;
+ }
+ }
+
+diff --git a/drivers/dpll/dpll_nl.c b/drivers/dpll/dpll_nl.c
+index 1e95f5397cfce6..fe9b6893d26147 100644
+--- a/drivers/dpll/dpll_nl.c
++++ b/drivers/dpll/dpll_nl.c
+@@ -62,7 +62,7 @@ static const struct nla_policy dpll_pin_get_dump_nl_policy[DPLL_A_PIN_ID + 1] =
+ };
+
+ /* DPLL_CMD_PIN_SET - do */
+-static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_PHASE_ADJUST + 1] = {
++static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_ESYNC_FREQUENCY + 1] = {
+ [DPLL_A_PIN_ID] = { .type = NLA_U32, },
+ [DPLL_A_PIN_FREQUENCY] = { .type = NLA_U64, },
+ [DPLL_A_PIN_DIRECTION] = NLA_POLICY_RANGE(NLA_U32, 1, 2),
+@@ -71,6 +71,7 @@ static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_PHASE_ADJUST +
+ [DPLL_A_PIN_PARENT_DEVICE] = NLA_POLICY_NESTED(dpll_pin_parent_device_nl_policy),
+ [DPLL_A_PIN_PARENT_PIN] = NLA_POLICY_NESTED(dpll_pin_parent_pin_nl_policy),
+ [DPLL_A_PIN_PHASE_ADJUST] = { .type = NLA_S32, },
++ [DPLL_A_PIN_ESYNC_FREQUENCY] = { .type = NLA_U64, },
+ };
+
+ /* Ops table for dpll */
+@@ -138,7 +139,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
+ .doit = dpll_nl_pin_set_doit,
+ .post_doit = dpll_pin_post_doit,
+ .policy = dpll_pin_set_nl_policy,
+- .maxattr = DPLL_A_PIN_PHASE_ADJUST,
++ .maxattr = DPLL_A_PIN_ESYNC_FREQUENCY,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ };
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index 285fe7ad490d1d..3e8051fe829657 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -763,7 +763,7 @@ static int sdei_device_freeze(struct device *dev)
+ int err;
+
+ /* unregister private events */
+- cpuhp_remove_state(sdei_entry_point);
++ cpuhp_remove_state(sdei_hp_state);
+
+ err = sdei_unregister_shared();
+ if (err)
+diff --git a/drivers/firmware/microchip/mpfs-auto-update.c b/drivers/firmware/microchip/mpfs-auto-update.c
+index 9ca5ee58edbdf8..0f7ec88482022c 100644
+--- a/drivers/firmware/microchip/mpfs-auto-update.c
++++ b/drivers/firmware/microchip/mpfs-auto-update.c
+@@ -76,14 +76,11 @@
+ #define AUTO_UPDATE_INFO_SIZE SZ_1M
+ #define AUTO_UPDATE_BITSTREAM_BASE (AUTO_UPDATE_DIRECTORY_SIZE + AUTO_UPDATE_INFO_SIZE)
+
+-#define AUTO_UPDATE_TIMEOUT_MS 60000
+-
+ struct mpfs_auto_update_priv {
+ struct mpfs_sys_controller *sys_controller;
+ struct device *dev;
+ struct mtd_info *flash;
+ struct fw_upload *fw_uploader;
+- struct completion programming_complete;
+ size_t size_per_bitstream;
+ bool cancel_request;
+ };
+@@ -156,19 +153,6 @@ static void mpfs_auto_update_cancel(struct fw_upload *fw_uploader)
+
+ static enum fw_upload_err mpfs_auto_update_poll_complete(struct fw_upload *fw_uploader)
+ {
+- struct mpfs_auto_update_priv *priv = fw_uploader->dd_handle;
+- int ret;
+-
+- /*
+- * There is no meaningful way to get the status of the programming while
+- * it is in progress, so attempting anything other than waiting for it
+- * to complete would be misplaced.
+- */
+- ret = wait_for_completion_timeout(&priv->programming_complete,
+- msecs_to_jiffies(AUTO_UPDATE_TIMEOUT_MS));
+- if (!ret)
+- return FW_UPLOAD_ERR_TIMEOUT;
+-
+ return FW_UPLOAD_ERR_NONE;
+ }
+
+@@ -349,33 +333,23 @@ static enum fw_upload_err mpfs_auto_update_write(struct fw_upload *fw_uploader,
+ u32 offset, u32 size, u32 *written)
+ {
+ struct mpfs_auto_update_priv *priv = fw_uploader->dd_handle;
+- enum fw_upload_err err = FW_UPLOAD_ERR_NONE;
+ int ret;
+
+- reinit_completion(&priv->programming_complete);
+-
+ ret = mpfs_auto_update_write_bitstream(fw_uploader, data, offset, size, written);
+- if (ret) {
+- err = FW_UPLOAD_ERR_RW_ERROR;
+- goto out;
+- }
++ if (ret)
++ return FW_UPLOAD_ERR_RW_ERROR;
+
+- if (priv->cancel_request) {
+- err = FW_UPLOAD_ERR_CANCELED;
+- goto out;
+- }
++ if (priv->cancel_request)
++ return FW_UPLOAD_ERR_CANCELED;
+
+ if (mpfs_auto_update_is_bitstream_info(data, size))
+- goto out;
++ return FW_UPLOAD_ERR_NONE;
+
+ ret = mpfs_auto_update_verify_image(fw_uploader);
+ if (ret)
+- err = FW_UPLOAD_ERR_FW_INVALID;
++ return FW_UPLOAD_ERR_FW_INVALID;
+
+-out:
+- complete(&priv->programming_complete);
+-
+- return err;
++ return FW_UPLOAD_ERR_NONE;
+ }
+
+ static const struct fw_upload_ops mpfs_auto_update_ops = {
+@@ -461,8 +435,6 @@ static int mpfs_auto_update_probe(struct platform_device *pdev)
+ return dev_err_probe(dev, ret,
+ "The current bitstream does not support auto-update\n");
+
+- init_completion(&priv->programming_complete);
+-
+ fw_uploader = firmware_upload_register(THIS_MODULE, dev, "mpfs-auto-update",
+ &mpfs_auto_update_ops, priv);
+ if (IS_ERR(fw_uploader))
+diff --git a/drivers/gpio/gpio-sloppy-logic-analyzer.c b/drivers/gpio/gpio-sloppy-logic-analyzer.c
+index aed6d1f6cfc308..6440d55bf2e1fe 100644
+--- a/drivers/gpio/gpio-sloppy-logic-analyzer.c
++++ b/drivers/gpio/gpio-sloppy-logic-analyzer.c
+@@ -235,7 +235,9 @@ static int gpio_la_poll_probe(struct platform_device *pdev)
+ if (!priv)
+ return -ENOMEM;
+
+- devm_mutex_init(dev, &priv->blob_lock);
++ ret = devm_mutex_init(dev, &priv->blob_lock);
++ if (ret)
++ return ret;
+
+ fops_buf_size_set(priv, GPIO_LA_DEFAULT_BUF_SIZE);
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 148bcfbf98e024..337971080dfde9 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -4834,6 +4834,8 @@ static void *gpiolib_seq_start(struct seq_file *s, loff_t *pos)
+ return NULL;
+
+ s->private = priv;
++ if (*pos > 0)
++ priv->newline = true;
+ priv->idx = srcu_read_lock(&gpio_devices_srcu);
+
+ list_for_each_entry_srcu(gdev, &gpio_devices, list,
+@@ -4877,7 +4879,7 @@ static int gpiolib_seq_show(struct seq_file *s, void *v)
+
+ gc = srcu_dereference(gdev->chip, &gdev->srcu);
+ if (!gc) {
+- seq_printf(s, "%s%s: (dangling chip)",
++ seq_printf(s, "%s%s: (dangling chip)\n",
+ priv->newline ? "\n" : "",
+ dev_name(&gdev->dev));
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+index 403c177f24349d..bbf43e668c1c49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+@@ -51,6 +51,12 @@ MODULE_FIRMWARE("amdgpu/sdma_7_0_1.bin");
+ #define SDMA0_HYP_DEC_REG_END 0x589a
+ #define SDMA1_HYP_DEC_REG_OFFSET 0x20
+
++/* define for compression field for sdma7 */
++#define SDMA_PKT_CONSTANT_FILL_HEADER_compress_offset 0
++#define SDMA_PKT_CONSTANT_FILL_HEADER_compress_mask 0x00000001
++#define SDMA_PKT_CONSTANT_FILL_HEADER_compress_shift 16
++#define SDMA_PKT_CONSTANT_FILL_HEADER_COMPRESS(x) (((x) & SDMA_PKT_CONSTANT_FILL_HEADER_compress_mask) << SDMA_PKT_CONSTANT_FILL_HEADER_compress_shift)
++
+ static void sdma_v7_0_set_ring_funcs(struct amdgpu_device *adev);
+ static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev);
+ static void sdma_v7_0_set_vm_pte_funcs(struct amdgpu_device *adev);
+@@ -1611,7 +1617,8 @@ static void sdma_v7_0_emit_fill_buffer(struct amdgpu_ib *ib,
+ uint64_t dst_offset,
+ uint32_t byte_count)
+ {
+- ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_CONST_FILL);
++ ib->ptr[ib->length_dw++] = SDMA_PKT_CONSTANT_FILL_HEADER_OP(SDMA_OP_CONST_FILL) |
++ SDMA_PKT_CONSTANT_FILL_HEADER_COMPRESS(1);
+ ib->ptr[ib->length_dw++] = lower_32_bits(dst_offset);
+ ib->ptr[ib->length_dw++] = upper_32_bits(dst_offset);
+ ib->ptr[ib->length_dw++] = src_data;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
+index 11c904ae29586d..c4c52173ef2240 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
+@@ -303,6 +303,7 @@ void build_unoptimized_policy_settings(enum dml_project_id project, struct dml_m
+ if (project == dml_project_dcn35 ||
+ project == dml_project_dcn351) {
+ policy->DCCProgrammingAssumesScanDirectionUnknownFinal = false;
++ policy->EnhancedPrefetchScheduleAccelerationFinal = 0;
+ policy->AllowForPStateChangeOrStutterInVBlankFinal = dml_prefetch_support_uclk_fclk_and_stutter_if_possible; /*new*/
+ policy->UseOnlyMaxPrefetchModes = 1;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 87672ca714de5b..80e60ea2d11e3c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1234,6 +1234,14 @@ static void smu_init_xgmi_plpd_mode(struct smu_context *smu)
+ }
+ }
+
++static bool smu_is_workload_profile_available(struct smu_context *smu,
++ u32 profile)
++{
++ if (profile >= PP_SMC_POWER_PROFILE_COUNT)
++ return false;
++ return smu->workload_map && smu->workload_map[profile].valid_mapping;
++}
++
+ static int smu_sw_init(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -1257,7 +1265,6 @@ static int smu_sw_init(void *handle)
+ atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
+
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
+ smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
+ smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
+ smu->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
+@@ -1266,6 +1273,12 @@ static int smu_sw_init(void *handle)
+ smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
+ smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
+
++ if (smu->is_apu ||
++ !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D))
++ smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
++ else
++ smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
++
+ smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+ smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 22737b11b1bfb1..1fe020f1f4dbe2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -242,7 +242,9 @@ static int vangogh_tables_init(struct smu_context *smu)
+ goto err0_out;
+ smu_table->metrics_time = 0;
+
+- smu_table->gpu_metrics_table_size = max(sizeof(struct gpu_metrics_v2_3), sizeof(struct gpu_metrics_v2_2));
++ smu_table->gpu_metrics_table_size = sizeof(struct gpu_metrics_v2_2);
++ smu_table->gpu_metrics_table_size = max(smu_table->gpu_metrics_table_size, sizeof(struct gpu_metrics_v2_3));
++ smu_table->gpu_metrics_table_size = max(smu_table->gpu_metrics_table_size, sizeof(struct gpu_metrics_v2_4));
+ smu_table->gpu_metrics_table = kzalloc(smu_table->gpu_metrics_table_size, GFP_KERNEL);
+ if (!smu_table->gpu_metrics_table)
+ goto err1_out;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index cb923e33fd6fc7..d53e162dcd8de2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2485,7 +2485,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+ int workload_type, ret = 0;
+- u32 workload_mask;
++ u32 workload_mask, selected_workload_mask;
+
+ smu->power_profile_mode = input[size];
+
+@@ -2552,7 +2552,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ if (workload_type < 0)
+ return -EINVAL;
+
+- workload_mask = 1 << workload_type;
++ selected_workload_mask = workload_mask = 1 << workload_type;
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+ if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+@@ -2572,7 +2572,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ workload_mask,
+ NULL);
+ if (!ret)
+- smu->workload_mask = workload_mask;
++ smu->workload_mask = selected_workload_mask;
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_alpm.c b/drivers/gpu/drm/i915/display/intel_alpm.c
+index 10689480338eb2..90a960fb1d1435 100644
+--- a/drivers/gpu/drm/i915/display/intel_alpm.c
++++ b/drivers/gpu/drm/i915/display/intel_alpm.c
+@@ -280,7 +280,7 @@ void intel_alpm_lobf_compute_config(struct intel_dp *intel_dp,
+ if (DISPLAY_VER(i915) < 20)
+ return;
+
+- if (!intel_dp_as_sdp_supported(intel_dp))
++ if (!intel_dp->as_sdp_supported)
+ return;
+
+ if (crtc_state->has_psr)
+diff --git a/drivers/gpu/drm/i915/display/intel_backlight.c b/drivers/gpu/drm/i915/display/intel_backlight.c
+index 6c3333136737e7..ca28558a2c93f3 100644
+--- a/drivers/gpu/drm/i915/display/intel_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_backlight.c
+@@ -1011,7 +1011,7 @@ static u32 cnp_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ {
+ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+
+- return DIV_ROUND_CLOSEST(KHz(RUNTIME_INFO(i915)->rawclk_freq),
++ return DIV_ROUND_CLOSEST(KHz(DISPLAY_RUNTIME_INFO(i915)->rawclk_freq),
+ pwm_freq_hz);
+ }
+
+@@ -1073,7 +1073,7 @@ static u32 pch_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ {
+ struct drm_i915_private *i915 = to_i915(connector->base.dev);
+
+- return DIV_ROUND_CLOSEST(KHz(RUNTIME_INFO(i915)->rawclk_freq),
++ return DIV_ROUND_CLOSEST(KHz(DISPLAY_RUNTIME_INFO(i915)->rawclk_freq),
+ pwm_freq_hz * 128);
+ }
+
+@@ -1091,7 +1091,7 @@ static u32 i9xx_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ int clock;
+
+ if (IS_PINEVIEW(i915))
+- clock = KHz(RUNTIME_INFO(i915)->rawclk_freq);
++ clock = KHz(DISPLAY_RUNTIME_INFO(i915)->rawclk_freq);
+ else
+ clock = KHz(i915->display.cdclk.hw.cdclk);
+
+@@ -1109,7 +1109,7 @@ static u32 i965_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ int clock;
+
+ if (IS_G4X(i915))
+- clock = KHz(RUNTIME_INFO(i915)->rawclk_freq);
++ clock = KHz(DISPLAY_RUNTIME_INFO(i915)->rawclk_freq);
+ else
+ clock = KHz(i915->display.cdclk.hw.cdclk);
+
+@@ -1133,7 +1133,7 @@ static u32 vlv_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
+ clock = MHz(25);
+ mul = 16;
+ } else {
+- clock = KHz(RUNTIME_INFO(i915)->rawclk_freq);
++ clock = KHz(DISPLAY_RUNTIME_INFO(i915)->rawclk_freq);
+ mul = 128;
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_display_device.c b/drivers/gpu/drm/i915/display/intel_display_device.c
+index dd7dce4b0e7a1e..ff3bac7dd9e33a 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_device.c
++++ b/drivers/gpu/drm/i915/display/intel_display_device.c
+@@ -1474,6 +1474,9 @@ static void __intel_display_device_info_runtime_init(struct drm_i915_private *i9
+ }
+ }
+
++ display_runtime->rawclk_freq = intel_read_rawclk(i915);
++ drm_dbg_kms(&i915->drm, "rawclk rate: %d kHz\n", display_runtime->rawclk_freq);
++
+ return;
+
+ display_fused_off:
+@@ -1516,6 +1519,8 @@ void intel_display_device_info_print(const struct intel_display_device_info *inf
+ drm_printf(p, "has_hdcp: %s\n", str_yes_no(runtime->has_hdcp));
+ drm_printf(p, "has_dmc: %s\n", str_yes_no(runtime->has_dmc));
+ drm_printf(p, "has_dsc: %s\n", str_yes_no(runtime->has_dsc));
++
++ drm_printf(p, "rawclk rate: %u kHz\n", runtime->rawclk_freq);
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/i915/display/intel_display_device.h b/drivers/gpu/drm/i915/display/intel_display_device.h
+index 13453ea4daea09..ad60c676c84d14 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_device.h
++++ b/drivers/gpu/drm/i915/display/intel_display_device.h
+@@ -204,6 +204,8 @@ struct intel_display_runtime_info {
+ u16 step;
+ } ip;
+
++ u32 rawclk_freq;
++
+ u8 pipe_mask;
+ u8 cpu_transcoder_mask;
+ u16 port_mask;
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index e288a1b21d7e60..0af1e34ef2a70f 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -1704,6 +1704,14 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
+ /* Wa_14011503030:xelpd */
+ if (DISPLAY_VER(dev_priv) == 13)
+ intel_de_write(dev_priv, XELPD_DISPLAY_ERR_FATAL_MASK, ~0);
++
++ /* Wa_15013987218 */
++ if (DISPLAY_VER(dev_priv) == 20) {
++ intel_de_rmw(dev_priv, SOUTH_DSPCLK_GATE_D,
++ 0, PCH_GMBUSUNIT_CLOCK_GATE_DISABLE);
++ intel_de_rmw(dev_priv, SOUTH_DSPCLK_GATE_D,
++ PCH_GMBUSUNIT_CLOCK_GATE_DISABLE, 0);
++ }
+ }
+
+ static void icl_display_core_uninit(struct drm_i915_private *dev_priv)
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power_well.c b/drivers/gpu/drm/i915/display/intel_display_power_well.c
+index 919f712fef131c..adf5d1fbccb562 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power_well.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power_well.c
+@@ -1176,9 +1176,9 @@ static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv)
+ MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
+ intel_de_write(dev_priv, CBR1_VLV, 0);
+
+- drm_WARN_ON(&dev_priv->drm, RUNTIME_INFO(dev_priv)->rawclk_freq == 0);
++ drm_WARN_ON(&dev_priv->drm, DISPLAY_RUNTIME_INFO(dev_priv)->rawclk_freq == 0);
+ intel_de_write(dev_priv, RAWCLK_FREQ_VLV,
+- DIV_ROUND_CLOSEST(RUNTIME_INFO(dev_priv)->rawclk_freq,
++ DIV_ROUND_CLOSEST(DISPLAY_RUNTIME_INFO(dev_priv)->rawclk_freq,
+ 1000));
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index f9d3cc3c342bb6..160098708eba28 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -1806,6 +1806,7 @@ struct intel_dp {
+
+ /* connector directly attached - won't be use for modeset in mst world */
+ struct intel_connector *attached_connector;
++ bool as_sdp_supported;
+
+ struct drm_dp_tunnel *tunnel;
+ bool tunnel_suspended:1;
+diff --git a/drivers/gpu/drm/i915/display/intel_display_wa.h b/drivers/gpu/drm/i915/display/intel_display_wa.h
+index 63201d09852c56..be644ab6ae0061 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_wa.h
++++ b/drivers/gpu/drm/i915/display/intel_display_wa.h
+@@ -6,8 +6,16 @@
+ #ifndef __INTEL_DISPLAY_WA_H__
+ #define __INTEL_DISPLAY_WA_H__
+
++#include <linux/types.h>
++
+ struct drm_i915_private;
+
+ void intel_display_wa_apply(struct drm_i915_private *i915);
+
++#ifdef I915
++static inline bool intel_display_needs_wa_16023588340(struct drm_i915_private *i915) { return false; }
++#else
++bool intel_display_needs_wa_16023588340(struct drm_i915_private *i915);
++#endif
++
+ #endif
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index ffc0d1b1404554..4ec724e8b2207a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -130,14 +130,6 @@ bool intel_dp_is_edp(struct intel_dp *intel_dp)
+ return dig_port->base.type == INTEL_OUTPUT_EDP;
+ }
+
+-bool intel_dp_as_sdp_supported(struct intel_dp *intel_dp)
+-{
+- struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+-
+- return HAS_AS_SDP(i915) &&
+- drm_dp_as_sdp_supported(&intel_dp->aux, intel_dp->dpcd);
+-}
+-
+ static void intel_dp_unset_edid(struct intel_dp *intel_dp);
+
+ /* Is link rate UHBR and thus 128b/132b? */
+@@ -2635,8 +2627,7 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
+ const struct drm_display_mode *adjusted_mode =
+ &crtc_state->hw.adjusted_mode;
+
+- if (!crtc_state->vrr.enable ||
+- !intel_dp_as_sdp_supported(intel_dp))
++ if (!crtc_state->vrr.enable || !intel_dp->as_sdp_supported)
+ return;
+
+ crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_ADAPTIVE_SYNC);
+@@ -4402,8 +4393,11 @@ void intel_dp_set_infoframes(struct intel_encoder *encoder,
+ if (!enable && HAS_DSC(dev_priv))
+ val &= ~VDIP_ENABLE_PPS;
+
+- /* When PSR is enabled, this routine doesn't disable VSC DIP */
+- if (!crtc_state->has_psr)
++ /*
++ * This routine disables the VSC DIP when it is called to disable
++ * the SDPs or when the crtc state does not have PSR.
++ */
++ if (!enable || !crtc_state->has_psr)
+ val &= ~VIDEO_DIP_ENABLE_VSC_HSW;
+
+ intel_de_write(dev_priv, reg, val);
+@@ -5921,6 +5915,15 @@ intel_dp_detect_dsc_caps(struct intel_dp *intel_dp, struct intel_connector *conn
+ connector);
+ }
+
++static void
++intel_dp_detect_sdp_caps(struct intel_dp *intel_dp)
++{
++ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
++
++ intel_dp->as_sdp_supported = HAS_AS_SDP(i915) &&
++ drm_dp_as_sdp_supported(&intel_dp->aux, intel_dp->dpcd);
++}
++
+ static int
+ intel_dp_detect(struct drm_connector *connector,
+ struct drm_modeset_acquire_ctx *ctx,
+@@ -5991,6 +5994,8 @@ intel_dp_detect(struct drm_connector *connector,
+
+ intel_dp_detect_dsc_caps(intel_dp, intel_connector);
+
++ intel_dp_detect_sdp_caps(intel_dp);
++
+ intel_dp_mst_configure(intel_dp);
+
+ if (intel_dp->reset_link_params) {
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index a0f990a95ecca3..9be539edf817b7 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -85,7 +85,6 @@ void intel_dp_audio_compute_config(struct intel_encoder *encoder,
+ struct drm_connector_state *conn_state);
+ bool intel_dp_has_hdmi_sink(struct intel_dp *intel_dp);
+ bool intel_dp_is_edp(struct intel_dp *intel_dp);
+-bool intel_dp_as_sdp_supported(struct intel_dp *intel_dp);
+ bool intel_dp_is_uhbr(const struct intel_crtc_state *crtc_state);
+ bool intel_dp_has_dsc(const struct intel_connector *connector);
+ int intel_dp_link_symbol_size(int rate);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.c b/drivers/gpu/drm/i915/display/intel_dp_aux.c
+index be58185a77c019..6420da69f3bbc6 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux.c
+@@ -84,7 +84,7 @@ static u32 g4x_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
+ * The clock divider is based off the hrawclk, and would like to run at
+ * 2MHz. So, take the hrawclk value and divide by 2000 and use that
+ */
+- return DIV_ROUND_CLOSEST(RUNTIME_INFO(i915)->rawclk_freq, 2000);
++ return DIV_ROUND_CLOSEST(DISPLAY_RUNTIME_INFO(i915)->rawclk_freq, 2000);
+ }
+
+ static u32 ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
+@@ -104,7 +104,7 @@ static u32 ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index)
+ if (dig_port->aux_ch == AUX_CH_A)
+ freq = i915->display.cdclk.hw.cdclk;
+ else
+- freq = RUNTIME_INFO(i915)->rawclk_freq;
++ freq = DISPLAY_RUNTIME_INFO(i915)->rawclk_freq;
+ return DIV_ROUND_CLOSEST(freq, 2000);
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_hdcp.c b/drivers/gpu/drm/i915/display/intel_dp_hdcp.c
+index b0101d72b9c1ae..71bf014a57e5c7 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_hdcp.c
+@@ -677,8 +677,15 @@ static
+ int intel_dp_hdcp2_get_capability(struct intel_connector *connector,
+ bool *capable)
+ {
+- struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
+- struct drm_dp_aux *aux = &dig_port->dp.aux;
++ struct intel_digital_port *dig_port;
++ struct drm_dp_aux *aux;
++
++ *capable = false;
++ if (!intel_attached_encoder(connector))
++ return -EINVAL;
++
++ dig_port = intel_attached_dig_port(connector);
++ aux = &dig_port->dp.aux;
+
+ return _intel_dp_hdcp2_get_capability(aux, capable);
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
+index 67116c9f14643b..8488f82143a40c 100644
+--- a/drivers/gpu/drm/i915/display/intel_fbc.c
++++ b/drivers/gpu/drm/i915/display/intel_fbc.c
+@@ -56,6 +56,7 @@
+ #include "intel_display_device.h"
+ #include "intel_display_trace.h"
+ #include "intel_display_types.h"
++#include "intel_display_wa.h"
+ #include "intel_fbc.h"
+ #include "intel_fbc_regs.h"
+ #include "intel_frontbuffer.h"
+@@ -1237,6 +1238,11 @@ static int intel_fbc_check_plane(struct intel_atomic_state *state,
+ return 0;
+ }
+
++ if (intel_display_needs_wa_16023588340(i915)) {
++ plane_state->no_fbc_reason = "Wa_16023588340";
++ return 0;
++ }
++
+ /* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
+ if (i915_vtd_active(i915) && (IS_SKYLAKE(i915) || IS_BROXTON(i915))) {
+ plane_state->no_fbc_reason = "VT-d enabled";
+diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c
+index b0440cc59c2345..c2f42be26128de 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
+@@ -203,11 +203,16 @@ int intel_hdcp_read_valid_bksv(struct intel_digital_port *dig_port,
+ /* Is HDCP1.4 capable on Platform and Sink */
+ bool intel_hdcp_get_capability(struct intel_connector *connector)
+ {
+- struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
++ struct intel_digital_port *dig_port;
+ const struct intel_hdcp_shim *shim = connector->hdcp.shim;
+ bool capable = false;
+ u8 bksv[5];
+
++ if (!intel_attached_encoder(connector))
++ return capable;
++
++ dig_port = intel_attached_dig_port(connector);
++
+ if (!shim)
+ return capable;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
+index 7ce926241e83a2..68141af4da540f 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.c
++++ b/drivers/gpu/drm/i915/display/intel_pps.c
+@@ -951,6 +951,14 @@ void intel_pps_on_unlocked(struct intel_dp *intel_dp)
+ intel_de_posting_read(dev_priv, pp_ctrl_reg);
+ }
+
++ /*
++ * WA: 22019252566
++ * Disable DPLS gating around power sequence.
++ */
++ if (IS_DISPLAY_VER(dev_priv, 13, 14))
++ intel_de_rmw(dev_priv, SOUTH_DSPCLK_GATE_D,
++ 0, PCH_DPLSUNIT_CLOCK_GATE_DISABLE);
++
+ pp |= PANEL_POWER_ON;
+ if (!IS_IRONLAKE(dev_priv))
+ pp |= PANEL_POWER_RESET;
+@@ -961,6 +969,10 @@ void intel_pps_on_unlocked(struct intel_dp *intel_dp)
+ wait_panel_on(intel_dp);
+ intel_dp->pps.last_power_on = jiffies;
+
++ if (IS_DISPLAY_VER(dev_priv, 13, 14))
++ intel_de_rmw(dev_priv, SOUTH_DSPCLK_GATE_D,
++ PCH_DPLSUNIT_CLOCK_GATE_DISABLE, 0);
++
+ if (IS_IRONLAKE(dev_priv)) {
+ pp |= PANEL_POWER_RESET; /* restore panel reset bit */
+ intel_de_write(dev_priv, pp_ctrl_reg, pp);
+@@ -1471,7 +1483,7 @@ static void pps_init_registers(struct intel_dp *intel_dp, bool force_disable_vdd
+ {
+ struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ u32 pp_on, pp_off, port_sel = 0;
+- int div = RUNTIME_INFO(dev_priv)->rawclk_freq / 1000;
++ int div = DISPLAY_RUNTIME_INFO(dev_priv)->rawclk_freq / 1000;
+ struct pps_registers regs;
+ enum port port = dp_to_dig_port(intel_dp)->base.port;
+ const struct edp_power_seq *seq = &intel_dp->pps.pps_delays;
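
The Wa_22019252566 hunks bracket the panel power-on sequence: set PCH_DPLSUNIT_CLOCK_GATE_DISABLE before it starts, clear it once wait_panel_on() returns. Reduced to plain bit operations on a register image (the register name and bit position below are stand-ins, not the real layout):

    #include <stdint.h>
    #include <stdio.h>

    #define CLOCK_GATE_DISABLE (1u << 29)   /* stand-in bit */

    /* analog of intel_de_rmw(): clear 'clear' bits, then set 'set' bits */
    static void rmw(uint32_t *reg, uint32_t clear, uint32_t set)
    {
            *reg = (*reg & ~clear) | set;
    }

    int main(void)
    {
            uint32_t south_dspclk_gate_d = 0;

            rmw(&south_dspclk_gate_d, 0, CLOCK_GATE_DISABLE);  /* before power-on */
            printf("during sequence: %#x\n", south_dspclk_gate_d);
            /* ... panel power-on sequence would run here ... */
            rmw(&south_dspclk_gate_d, CLOCK_GATE_DISABLE, 0);  /* after wait */
            printf("after sequence:  %#x\n", south_dspclk_gate_d);
            return 0;
    }
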
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index da242ba19ed95b..0876fe53a6f918 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -1605,6 +1605,12 @@ _panel_replay_compute_config(struct intel_dp *intel_dp,
+ if (!alpm_config_valid(intel_dp, crtc_state, true))
+ return false;
+
++ if (crtc_state->crc_enabled) {
++ drm_dbg_kms(&i915->drm,
++ "Panel Replay not enabled because it would inhibit pipe CRC calculation\n");
++ return false;
++ }
++
+ return true;
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
+index 9887967b2ca5c5..6f2ee7dbc43b35 100644
+--- a/drivers/gpu/drm/i915/display/intel_tc.c
++++ b/drivers/gpu/drm/i915/display/intel_tc.c
+@@ -393,6 +393,9 @@ void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
+ bool lane_reversal = dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
+ u32 val;
+
++ if (DISPLAY_VER(i915) >= 14)
++ return;
++
+ drm_WARN_ON(&i915->drm,
+ lane_reversal && tc->mode != TC_PORT_LEGACY);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_vrr.c b/drivers/gpu/drm/i915/display/intel_vrr.c
+index 5a0da64c7db33e..7e1d9c718214c1 100644
+--- a/drivers/gpu/drm/i915/display/intel_vrr.c
++++ b/drivers/gpu/drm/i915/display/intel_vrr.c
+@@ -233,8 +233,7 @@ intel_vrr_compute_config(struct intel_crtc_state *crtc_state,
+ crtc_state->mode_flags |= I915_MODE_FLAG_VRR;
+ }
+
+- if (intel_dp_as_sdp_supported(intel_dp) &&
+- crtc_state->vrr.enable) {
++ if (intel_dp->as_sdp_supported && crtc_state->vrr.enable) {
+ crtc_state->vrr.vsync_start =
+ (crtc_state->hw.adjusted_mode.crtc_vtotal -
+ crtc_state->hw.adjusted_mode.vsync_start);
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index ba5a628b4757c1..a1ab64db0130c2 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -1085,11 +1085,6 @@ static u32 skl_plane_ctl(const struct intel_crtc_state *crtc_state,
+ if (DISPLAY_VER(dev_priv) == 13)
+ plane_ctl |= adlp_plane_ctl_arb_slots(plane_state);
+
+- if (GRAPHICS_VER(dev_priv) >= 20 &&
+- fb->modifier == I915_FORMAT_MOD_4_TILED) {
+- plane_ctl |= PLANE_CTL_RENDER_DECOMPRESSION_ENABLE;
+- }
+-
+ return plane_ctl;
+ }
+
+diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
+index eede5417cb3fed..01a6502530501a 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.c
++++ b/drivers/gpu/drm/i915/intel_device_info.c
+@@ -124,7 +124,6 @@ void intel_device_info_print(const struct intel_device_info *info,
+ #undef PRINT_FLAG
+
+ drm_printf(p, "has_pooled_eu: %s\n", str_yes_no(runtime->has_pooled_eu));
+- drm_printf(p, "rawclk rate: %u kHz\n", runtime->rawclk_freq);
+ }
+
+ #define ID(id) (id)
+@@ -377,10 +376,6 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ "Disabling ppGTT for VT-d support\n");
+ runtime->ppgtt_type = INTEL_PPGTT_NONE;
+ }
+-
+- runtime->rawclk_freq = intel_read_rawclk(dev_priv);
+- drm_dbg(&dev_priv->drm, "rawclk rate: %d kHz\n", runtime->rawclk_freq);
+-
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
+index df73ef94615dd8..643ff1bf74eeb0 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.h
++++ b/drivers/gpu/drm/i915/intel_device_info.h
+@@ -207,8 +207,6 @@ struct intel_runtime_info {
+
+ u16 device_id;
+
+- u32 rawclk_freq;
+-
+ struct intel_step_info step;
+
+ unsigned int page_sizes; /* page sizes supported by the HW */
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index a90504359e8d27..e5d412b2d61b62 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -120,44 +120,6 @@ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
+ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+ }
+
+-#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+-static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *pkt,
+- size_t size)
+-{
+- struct device *dev;
+- dma_addr_t dma_addr;
+-
+- pkt->va_base = kzalloc(size, GFP_KERNEL);
+- if (!pkt->va_base)
+- return -ENOMEM;
+-
+- pkt->buf_size = size;
+- pkt->cl = (void *)client;
+-
+- dev = client->chan->mbox->dev;
+- dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size,
+- DMA_TO_DEVICE);
+- if (dma_mapping_error(dev, dma_addr)) {
+- dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size);
+- kfree(pkt->va_base);
+- return -ENOMEM;
+- }
+-
+- pkt->pa_base = dma_addr;
+-
+- return 0;
+-}
+-
+-static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
+-{
+- struct cmdq_client *client = (struct cmdq_client *)pkt->cl;
+-
+- dma_unmap_single(client->chan->mbox->dev, pkt->pa_base, pkt->buf_size,
+- DMA_TO_DEVICE);
+- kfree(pkt->va_base);
+-}
+-#endif
+-
+ static void mtk_crtc_destroy(struct drm_crtc *crtc)
+ {
+ struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+@@ -165,9 +127,8 @@ static void mtk_crtc_destroy(struct drm_crtc *crtc)
+
+ mtk_mutex_put(mtk_crtc->mutex);
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+- mtk_drm_cmdq_pkt_destroy(&mtk_crtc->cmdq_handle);
+-
+ if (mtk_crtc->cmdq_client.chan) {
++ cmdq_pkt_destroy(&mtk_crtc->cmdq_client, &mtk_crtc->cmdq_handle);
+ mbox_free_channel(mtk_crtc->cmdq_client.chan);
+ mtk_crtc->cmdq_client.chan = NULL;
+ }
+@@ -1122,9 +1083,9 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
+ mbox_free_channel(mtk_crtc->cmdq_client.chan);
+ mtk_crtc->cmdq_client.chan = NULL;
+ } else {
+- ret = mtk_drm_cmdq_pkt_create(&mtk_crtc->cmdq_client,
+- &mtk_crtc->cmdq_handle,
+- PAGE_SIZE);
++ ret = cmdq_pkt_create(&mtk_crtc->cmdq_client,
++ &mtk_crtc->cmdq_handle,
++ PAGE_SIZE);
+ if (ret) {
+ dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
+ drm_crtc_index(&mtk_crtc->base));
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index 9d6d9fd8342e41..064d03598ea2e9 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -61,8 +61,8 @@
+ #define OVL_CON_CLRFMT_RGB (1 << 12)
+ #define OVL_CON_CLRFMT_ARGB8888 (2 << 12)
+ #define OVL_CON_CLRFMT_RGBA8888 (3 << 12)
+-#define OVL_CON_CLRFMT_ABGR8888 (OVL_CON_CLRFMT_RGBA8888 | OVL_CON_BYTE_SWAP)
+-#define OVL_CON_CLRFMT_BGRA8888 (OVL_CON_CLRFMT_ARGB8888 | OVL_CON_BYTE_SWAP)
++#define OVL_CON_CLRFMT_ABGR8888 (OVL_CON_CLRFMT_ARGB8888 | OVL_CON_BYTE_SWAP)
++#define OVL_CON_CLRFMT_BGRA8888 (OVL_CON_CLRFMT_RGBA8888 | OVL_CON_BYTE_SWAP)
+ #define OVL_CON_CLRFMT_UYVY (4 << 12)
+ #define OVL_CON_CLRFMT_YUYV (5 << 12)
+ #define OVL_CON_CLRFMT_RGB565(ovl) ((ovl)->data->fmt_rgb565_is_0 ? \
+@@ -379,11 +379,6 @@ void mtk_ovl_layer_off(struct device *dev, unsigned int idx,
+
+ static unsigned int ovl_fmt_convert(struct mtk_disp_ovl *ovl, unsigned int fmt)
+ {
+- /* The return value in switch "MEM_MODE_INPUT_FORMAT_XXX"
+- * is defined in mediatek HW data sheet.
+- * The alphabet order in XXX is no relation to data
+- * arrangement in memory.
+- */
+ switch (fmt) {
+ default:
+ case DRM_FORMAT_RGB565:
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index d8796a904eca4d..f2bee617f063a7 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -145,6 +145,89 @@ struct mtk_dp_data {
+ u16 audio_m_div2_bit;
+ };
+
++static const struct mtk_dp_efuse_fmt mt8188_dp_efuse_fmt[MTK_DP_CAL_MAX] = {
++ [MTK_DP_CAL_GLB_BIAS_TRIM] = {
++ .idx = 0,
++ .shift = 10,
++ .mask = 0x1f,
++ .min_val = 1,
++ .max_val = 0x1e,
++ .default_val = 0xf,
++ },
++ [MTK_DP_CAL_CLKTX_IMPSE] = {
++ .idx = 0,
++ .shift = 15,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_0] = {
++ .idx = 1,
++ .shift = 0,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_1] = {
++ .idx = 1,
++ .shift = 8,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_2] = {
++ .idx = 1,
++ .shift = 16,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_3] = {
++ .idx = 1,
++ .shift = 24,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_0] = {
++ .idx = 1,
++ .shift = 4,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_1] = {
++ .idx = 1,
++ .shift = 12,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_2] = {
++ .idx = 1,
++ .shift = 20,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++ [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_3] = {
++ .idx = 1,
++ .shift = 28,
++ .mask = 0xf,
++ .min_val = 1,
++ .max_val = 0xe,
++ .default_val = 0x8,
++ },
++};
++
+ static const struct mtk_dp_efuse_fmt mt8195_edp_efuse_fmt[MTK_DP_CAL_MAX] = {
+ [MTK_DP_CAL_GLB_BIAS_TRIM] = {
+ .idx = 3,
+@@ -2771,7 +2854,7 @@ static SIMPLE_DEV_PM_OPS(mtk_dp_pm_ops, mtk_dp_suspend, mtk_dp_resume);
+ static const struct mtk_dp_data mt8188_dp_data = {
+ .bridge_type = DRM_MODE_CONNECTOR_DisplayPort,
+ .smc_cmd = MTK_DP_SIP_ATF_VIDEO_UNMUTE,
+- .efuse_fmt = mt8195_dp_efuse_fmt,
++ .efuse_fmt = mt8188_dp_efuse_fmt,
+ .audio_supported = true,
+ .audio_pkt_in_hblank_area = true,
+ .audio_m_div2_bit = MT8188_AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_2,
+diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c
+index ef232c0c204932..4e2d3a02ea0689 100644
+--- a/drivers/gpu/drm/panthor/panthor_fw.c
++++ b/drivers/gpu/drm/panthor/panthor_fw.c
+@@ -487,6 +487,7 @@ static int panthor_fw_load_section_entry(struct panthor_device *ptdev,
+ struct panthor_fw_binary_iter *iter,
+ u32 ehdr)
+ {
++ ssize_t vm_pgsz = panthor_vm_page_size(ptdev->fw->vm);
+ struct panthor_fw_binary_section_entry_hdr hdr;
+ struct panthor_fw_section *section;
+ u32 section_size;
+@@ -515,8 +516,7 @@ static int panthor_fw_load_section_entry(struct panthor_device *ptdev,
+ return -EINVAL;
+ }
+
+- if ((hdr.va.start & ~PAGE_MASK) != 0 ||
+- (hdr.va.end & ~PAGE_MASK) != 0) {
++ if (!IS_ALIGNED(hdr.va.start, vm_pgsz) || !IS_ALIGNED(hdr.va.end, vm_pgsz)) {
+ drm_err(&ptdev->base, "Firmware corrupted, virtual addresses not page aligned: 0x%x-0x%x\n",
+ hdr.va.start, hdr.va.end);
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
+index 38f560864879c5..be97d56bc011d2 100644
+--- a/drivers/gpu/drm/panthor/panthor_gem.c
++++ b/drivers/gpu/drm/panthor/panthor_gem.c
+@@ -44,8 +44,7 @@ void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo)
+ to_panthor_bo(bo->obj)->exclusive_vm_root_gem != panthor_vm_root_gem(vm)))
+ goto out_free_bo;
+
+- ret = panthor_vm_unmap_range(vm, bo->va_node.start,
+- panthor_kernel_bo_size(bo));
++ ret = panthor_vm_unmap_range(vm, bo->va_node.start, bo->va_node.size);
+ if (ret)
+ goto out_free_bo;
+
+@@ -95,10 +94,16 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
+ }
+
+ bo = to_panthor_bo(&obj->base);
+- size = obj->base.size;
+ kbo->obj = &obj->base;
+ bo->flags = bo_flags;
+
++ /* The system and GPU MMU page size might differ, which becomes a
++ * problem for FW sections that need to be mapped at an explicit address
++ * since our PAGE_SIZE alignment might cover a VA range that's
++ * expected to be used for another section.
++ * Make sure we never map more than we need.
++ */
++ size = ALIGN(size, panthor_vm_page_size(vm));
+ ret = panthor_vm_alloc_va(vm, gpu_va, size, &kbo->va_node);
+ if (ret)
+ goto err_put_obj;
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index ce8e8a93d70767..837ba312f3a8b4 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -826,6 +826,14 @@ void panthor_vm_idle(struct panthor_vm *vm)
+ mutex_unlock(&ptdev->mmu->as.slots_lock);
+ }
+
++u32 panthor_vm_page_size(struct panthor_vm *vm)
++{
++ const struct io_pgtable *pgt = io_pgtable_ops_to_pgtable(vm->pgtbl_ops);
++ u32 pg_shift = ffs(pgt->cfg.pgsize_bitmap) - 1;
++
++ return 1u << pg_shift;
++}
++
+ static void panthor_vm_stop(struct panthor_vm *vm)
+ {
+ drm_sched_stop(&vm->sched, NULL);
+@@ -1025,12 +1033,13 @@ int
+ panthor_vm_alloc_va(struct panthor_vm *vm, u64 va, u64 size,
+ struct drm_mm_node *va_node)
+ {
++ ssize_t vm_pgsz = panthor_vm_page_size(vm);
+ int ret;
+
+- if (!size || (size & ~PAGE_MASK))
++ if (!size || !IS_ALIGNED(size, vm_pgsz))
+ return -EINVAL;
+
+- if (va != PANTHOR_VM_KERNEL_AUTO_VA && (va & ~PAGE_MASK))
++ if (va != PANTHOR_VM_KERNEL_AUTO_VA && !IS_ALIGNED(va, vm_pgsz))
+ return -EINVAL;
+
+ mutex_lock(&vm->mm_lock);
+@@ -2366,11 +2375,12 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
+ const struct drm_panthor_vm_bind_op *op,
+ struct panthor_vm_op_ctx *op_ctx)
+ {
++ ssize_t vm_pgsz = panthor_vm_page_size(vm);
+ struct drm_gem_object *gem;
+ int ret;
+
+ /* Aligned on page size. */
+- if ((op->va | op->size) & ~PAGE_MASK)
++ if (!IS_ALIGNED(op->va | op->size, vm_pgsz))
+ return -EINVAL;
+
+ switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
+index 6788771071e355..8d21e83d8aba1e 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.h
++++ b/drivers/gpu/drm/panthor/panthor_mmu.h
+@@ -30,6 +30,7 @@ panthor_vm_get_bo_for_va(struct panthor_vm *vm, u64 va, u64 *bo_offset);
+
+ int panthor_vm_active(struct panthor_vm *vm);
+ void panthor_vm_idle(struct panthor_vm *vm);
++u32 panthor_vm_page_size(struct panthor_vm *vm);
+ int panthor_vm_as(struct panthor_vm *vm);
+ int panthor_vm_flush_all(struct panthor_vm *vm);
+
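The three panthor hunks above share one idea: the GPU VM's page size comes from the io-pgtable's pgsize_bitmap (its lowest set bit), and every firmware-section VA and allocation size must be aligned to that, not to the CPU's PAGE_SIZE. A minimal userspace sketch of the same math; the bitmap value is made up for illustration:

	#include <stdint.h>
	#include <stdio.h>
	#include <strings.h>	/* ffs() */

	#define IS_ALIGNED(x, a)	(((x) & ((uint64_t)(a) - 1)) == 0)

	int main(void)
	{
		/* hypothetical io-pgtable bitmap: 64K and 2M pages supported */
		unsigned long pgsize_bitmap = (1ul << 16) | (1ul << 21);
		/* smallest supported GPU page, as panthor_vm_page_size() computes it */
		uint32_t vm_pgsz = 1u << (ffs((int)pgsize_bitmap) - 1);
		uint64_t va = 0x10000, size = 0x4000;	/* 16K: CPU-page aligned only */

		printf("vm page size: %u\n", vm_pgsz);
		printf("va aligned: %d, size aligned: %d\n",
		       (int)IS_ALIGNED(va, vm_pgsz), (int)IS_ALIGNED(size, vm_pgsz));
		return 0;
	}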
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 4d1d5a342a4a6e..e9234488dc2b43 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -589,10 +589,11 @@ struct panthor_group {
+ * @timedout: True when a timeout occurred on any of the queues owned by
+ * this group.
+ *
+- * Timeouts can be reported by drm_sched or by the FW. In any case, any
+- * timeout situation is unrecoverable, and the group becomes useless.
+- * We simply wait for all references to be dropped so we can release the
+- * group object.
++ * Timeouts can be reported by drm_sched or by the FW. If a reset is required,
++ * and the group can't be suspended, this also leads to a timeout. In any case,
++ * any timeout situation is unrecoverable, and the group becomes useless. We
++ * simply wait for all references to be dropped so we can release the group
++ * object.
+ */
+ bool timedout;
+
+@@ -2640,6 +2641,12 @@ void panthor_sched_suspend(struct panthor_device *ptdev)
+ csgs_upd_ctx_init(&upd_ctx);
+ while (slot_mask) {
+ u32 csg_id = ffs(slot_mask) - 1;
++ struct panthor_csg_slot *csg_slot = &sched->csg_slots[csg_id];
++
++ /* We consider group suspension failures as fatal and flag the
++ * group as unusable by setting timedout=true.
++ */
++ csg_slot->group->timedout = true;
+
+ csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, csg_id,
+ CSG_STATE_TERMINATE,
+@@ -3409,6 +3416,11 @@ panthor_job_create(struct panthor_file *pfile,
+ goto err_put_job;
+ }
+
++ if (!group_can_run(job->group)) {
++ ret = -EINVAL;
++ goto err_put_job;
++ }
++
+ if (job->queue_idx >= job->group->queue_count ||
+ !job->group->queues[job->queue_idx]) {
+ ret = -EINVAL;
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index d79c76a287f22f..6a29ac1b2d2193 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -1152,8 +1152,8 @@ static int host1x_drm_probe(struct host1x_device *dev)
+
+ if (host1x_drm_wants_iommu(dev) && device_iommu_mapped(dma_dev)) {
+ tegra->domain = iommu_paging_domain_alloc(dma_dev);
+- if (!tegra->domain) {
+- err = -ENOMEM;
++ if (IS_ERR(tegra->domain)) {
++ err = PTR_ERR(tegra->domain);
+ goto free;
+ }
+
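The tegra fix relies on the kernel's ERR_PTR convention: iommu_paging_domain_alloc() reports failure as an errno encoded into the pointer, so the old NULL check silently passed errors through. A small self-contained sketch of the convention, with the macros re-implemented here for illustration (alloc_domain() is a stand-in, not a real API):

	#include <stdio.h>
	#include <errno.h>

	#define MAX_ERRNO	4095
	#define ERR_PTR(err)	((void *)(long)(err))
	#define PTR_ERR(ptr)	((long)(ptr))
	#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

	static void *alloc_domain(int fail)
	{
		return fail ? ERR_PTR(-ENOMEM) : (void *)0x1000; /* fake object */
	}

	int main(void)
	{
		void *d = alloc_domain(1);

		printf("NULL check sees failure: %s\n", d ? "no" : "yes");
		printf("IS_ERR sees failure: %s (err=%ld)\n",
		       IS_ERR(d) ? "yes" : "no", IS_ERR(d) ? PTR_ERR(d) : 0L);
		return 0;
	}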
+diff --git a/drivers/gpu/drm/tests/drm_connector_test.c b/drivers/gpu/drm/tests/drm_connector_test.c
+index 15e36a8db6858a..6bba97d0be88ef 100644
+--- a/drivers/gpu/drm/tests/drm_connector_test.c
++++ b/drivers/gpu/drm/tests/drm_connector_test.c
+@@ -996,7 +996,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb(struct kunit *test)
+ unsigned long long rate;
+ struct drm_device *drm = &priv->drm;
+
+- mode = drm_display_mode_from_cea_vic(drm, 16);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1017,7 +1017,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc(struct kunit *test)
+ unsigned long long rate;
+ struct drm_device *drm = &priv->drm;
+
+- mode = drm_display_mode_from_cea_vic(drm, 16);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1038,7 +1038,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1(struct kunit *t
+ unsigned long long rate;
+ struct drm_device *drm = &priv->drm;
+
+- mode = drm_display_mode_from_cea_vic(drm, 1);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ rate = drm_hdmi_compute_mode_clock(mode, 10, HDMI_COLORSPACE_RGB);
+@@ -1056,7 +1056,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc(struct kunit *test)
+ unsigned long long rate;
+ struct drm_device *drm = &priv->drm;
+
+- mode = drm_display_mode_from_cea_vic(drm, 16);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1077,7 +1077,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1(struct kunit *t
+ unsigned long long rate;
+ struct drm_device *drm = &priv->drm;
+
+- mode = drm_display_mode_from_cea_vic(drm, 1);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ rate = drm_hdmi_compute_mode_clock(mode, 12, HDMI_COLORSPACE_RGB);
+@@ -1095,7 +1095,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_double(struct kunit *test)
+ unsigned long long rate;
+ struct drm_device *drm = &priv->drm;
+
+- mode = drm_display_mode_from_cea_vic(drm, 6);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 6);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_TRUE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1118,7 +1118,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv420_valid(struct kunit
+ unsigned long long rate;
+ unsigned int vic = *(unsigned int *)test->param_value;
+
+- mode = drm_display_mode_from_cea_vic(drm, vic);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, vic);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1155,7 +1155,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc(struct kuni
+ drm_hdmi_compute_mode_clock_yuv420_vic_valid_tests[0];
+ unsigned long long rate;
+
+- mode = drm_display_mode_from_cea_vic(drm, vic);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, vic);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1180,7 +1180,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc(struct kuni
+ drm_hdmi_compute_mode_clock_yuv420_vic_valid_tests[0];
+ unsigned long long rate;
+
+- mode = drm_display_mode_from_cea_vic(drm, vic);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, vic);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1203,7 +1203,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc(struct kunit
+ struct drm_device *drm = &priv->drm;
+ unsigned long long rate;
+
+- mode = drm_display_mode_from_cea_vic(drm, 16);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1225,7 +1225,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc(struct kuni
+ struct drm_device *drm = &priv->drm;
+ unsigned long long rate;
+
+- mode = drm_display_mode_from_cea_vic(drm, 16);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1247,7 +1247,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc(struct kuni
+ struct drm_device *drm = &priv->drm;
+ unsigned long long rate;
+
+- mode = drm_display_mode_from_cea_vic(drm, 16);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
+diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+index 34ee95d41f2966..294773342e710d 100644
+--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+@@ -441,7 +441,7 @@ static void drm_test_check_broadcast_rgb_auto_cea_mode_vic_1(struct kunit *test)
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+- mode = drm_display_mode_from_cea_vic(drm, 1);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ drm = &priv->drm;
+@@ -555,7 +555,7 @@ static void drm_test_check_broadcast_rgb_full_cea_mode_vic_1(struct kunit *test)
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+- mode = drm_display_mode_from_cea_vic(drm, 1);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ drm = &priv->drm;
+@@ -671,7 +671,7 @@ static void drm_test_check_broadcast_rgb_limited_cea_mode_vic_1(struct kunit *te
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+- mode = drm_display_mode_from_cea_vic(drm, 1);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ drm = &priv->drm;
+@@ -1263,7 +1263,7 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test)
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+- mode = drm_display_mode_from_cea_vic(drm, 1);
++ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+ /*
+diff --git a/drivers/gpu/drm/tests/drm_kunit_helpers.c b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+index aa62719dab0e47..04a6b8cc62ac67 100644
+--- a/drivers/gpu/drm/tests/drm_kunit_helpers.c
++++ b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+@@ -3,6 +3,7 @@
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_drv.h>
++#include <drm/drm_edid.h>
+ #include <drm/drm_fourcc.h>
+ #include <drm/drm_kunit_helpers.h>
+ #include <drm/drm_managed.h>
+@@ -311,6 +312,47 @@ drm_kunit_helper_create_crtc(struct kunit *test,
+ }
+ EXPORT_SYMBOL_GPL(drm_kunit_helper_create_crtc);
+
++static void kunit_action_drm_mode_destroy(void *ptr)
++{
++ struct drm_display_mode *mode = ptr;
++
++ drm_mode_destroy(NULL, mode);
++}
++
++/**
++ * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test
++ * @test: The test context object
++ * @dev: DRM device
++ * @video_code: CEA VIC of the mode
++ *
++ * Creates a new mode matching the specified CEA VIC for a KUnit test.
++ *
++ * Resources will be cleaned up automatically.
++ *
++ * Returns: A new drm_display_mode on success or NULL on failure
++ */
++struct drm_display_mode *
++drm_kunit_display_mode_from_cea_vic(struct kunit *test, struct drm_device *dev,
++ u8 video_code)
++{
++ struct drm_display_mode *mode;
++ int ret;
++
++ mode = drm_display_mode_from_cea_vic(dev, video_code);
++ if (!mode)
++ return NULL;
++
++ ret = kunit_add_action_or_reset(test,
++ kunit_action_drm_mode_destroy,
++ mode);
++ if (ret)
++ return NULL;
++
++ return mode;
++}
++EXPORT_SYMBOL_GPL(drm_kunit_display_mode_from_cea_vic);
++
+ MODULE_AUTHOR("Maxime Ripard <maxime@cerno.tech>");
+ MODULE_DESCRIPTION("KUnit test suite helper functions");
+ MODULE_LICENSE("GPL");
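The new helper leans on kunit_add_action_or_reset(): cleanup is registered up front, and if the registration itself fails the action runs immediately, so the mode can never leak. A toy userspace version of that "add action or reset" pattern; a simplified sketch, not the real KUnit API:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	typedef void (*action_fn)(void *);

	struct action { action_fn fn; void *arg; struct action *next; };
	static struct action *actions;

	static int add_action_or_reset(action_fn fn, void *arg)
	{
		struct action *a = malloc(sizeof(*a));

		if (!a) {		/* registration failed: clean up right now */
			fn(arg);
			return -1;
		}
		a->fn = fn; a->arg = arg; a->next = actions; actions = a;
		return 0;
	}

	static void run_actions(void)	/* test teardown */
	{
		while (actions) {
			struct action *a = actions;

			actions = a->next;
			a->fn(a->arg);
			free(a);
		}
	}

	static void destroy_mode(void *mode)
	{
		printf("destroy %s\n", (char *)mode);
		free(mode);
	}

	int main(void)
	{
		char *mode = strdup("mode-vic-16");

		if (add_action_or_reset(destroy_mode, mode))
			return 1;	/* mode already destroyed */
		run_actions();
		return 0;
	}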
+diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
+index 1979614a90bddd..1ff9602a52f67a 100644
+--- a/drivers/gpu/drm/xe/Makefile
++++ b/drivers/gpu/drm/xe/Makefile
+@@ -175,6 +175,7 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \
+ display/xe_display.o \
+ display/xe_display_misc.o \
+ display/xe_display_rps.o \
++ display/xe_display_wa.o \
+ display/xe_dsb_buffer.o \
+ display/xe_fb_pin.o \
+ display/xe_hdcp_gsc.o \
+diff --git a/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h b/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
+index 1f1ad4d3ef517b..a7d20613392237 100644
+--- a/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
++++ b/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
+@@ -116,7 +116,6 @@ struct i915_sched_attr {
+ #define i915_gem_fence_wait_priority(fence, attr) do { (void) attr; } while (0)
+
+ #define pdev_to_i915 pdev_to_xe_device
+-#define RUNTIME_INFO(xe) (&(xe)->info.i915_runtime)
+
+ #define FORCEWAKE_ALL XE_FORCEWAKE_ALL
+
+diff --git a/drivers/gpu/drm/xe/display/xe_display_wa.c b/drivers/gpu/drm/xe/display/xe_display_wa.c
+new file mode 100644
+index 00000000000000..68e3d1959ad6a8
+--- /dev/null
++++ b/drivers/gpu/drm/xe/display/xe_display_wa.c
+@@ -0,0 +1,16 @@
++// SPDX-License-Identifier: MIT
++/*
++ * Copyright © 2024 Intel Corporation
++ */
++
++#include "intel_display_wa.h"
++
++#include "xe_device.h"
++#include "xe_wa.h"
++
++#include <generated/xe_wa_oob.h>
++
++bool intel_display_needs_wa_16023588340(struct drm_i915_private *i915)
++{
++ return XE_WA(xe_root_mmio_gt(i915), 16023588340);
++}
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index 3c286504005862..660ff42e45a6f4 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -80,7 +80,10 @@
+ #define LE_CACHEABILITY_MASK REG_GENMASK(1, 0)
+ #define LE_CACHEABILITY(value) REG_FIELD_PREP(LE_CACHEABILITY_MASK, value)
+
+-#define XE2_GAMREQSTRM_CTRL XE_REG(0x4194)
++#define STATELESS_COMPRESSION_CTRL XE_REG_MCR(0x4148)
++#define UNIFIED_COMPRESSION_FORMAT REG_GENMASK(3, 0)
++
++#define XE2_GAMREQSTRM_CTRL XE_REG_MCR(0x4194)
+ #define CG_DIS_CNTLBUS REG_BIT(6)
+
+ #define CCS_AUX_INV XE_REG(0x4208)
+@@ -91,6 +94,8 @@
+ #define VE1_AUX_INV XE_REG(0x42b8)
+ #define AUX_INV REG_BIT(0)
+
++#define XE2_LMEM_CFG XE_REG(0x48b0)
++
+ #define XEHP_TILE_ADDR_RANGE(_idx) XE_REG_MCR(0x4900 + (_idx) * 4)
+ #define XEHP_FLAT_CCS_BASE_ADDR XE_REG_MCR(0x4910)
+ #define XEHP_FLAT_CCS_PTR REG_GENMASK(31, 8)
+@@ -100,12 +105,14 @@
+
+ #define CHICKEN_RASTER_1 XE_REG_MCR(0x6204, XE_REG_OPTION_MASKED)
+ #define DIS_SF_ROUND_NEAREST_EVEN REG_BIT(8)
++#define DIS_CLIP_NEGATIVE_BOUNDING_BOX REG_BIT(6)
+
+ #define CHICKEN_RASTER_2 XE_REG_MCR(0x6208, XE_REG_OPTION_MASKED)
+ #define TBIMR_FAST_CLIP REG_BIT(5)
+
+ #define FF_MODE XE_REG_MCR(0x6210)
+ #define DIS_TE_AUTOSTRIP REG_BIT(31)
++#define VS_HIT_MAX_VALUE_MASK REG_GENMASK(25, 20)
+ #define DIS_MESH_PARTIAL_AUTOSTRIP REG_BIT(16)
+ #define DIS_MESH_AUTOSTRIP REG_BIT(15)
+
+@@ -190,6 +197,7 @@
+ #define GSCPSMI_BASE XE_REG(0x880c)
+
+ #define CCCHKNREG1 XE_REG_MCR(0x8828)
++#define L3CMPCTRL REG_BIT(23)
+ #define ENCOMPPERFFIX REG_BIT(18)
+
+ /* Fuse readout registers for GT */
+@@ -364,6 +372,9 @@
+ #define XEHP_L3NODEARBCFG XE_REG_MCR(0xb0b4)
+ #define XEHP_LNESPARE REG_BIT(19)
+
++#define L3SQCREG2 XE_REG_MCR(0xb104)
++#define COMPMEMRD256BOVRFETCHEN REG_BIT(20)
++
+ #define L3SQCREG3 XE_REG_MCR(0xb108)
+ #define COMPPWOVERFETCHEN REG_BIT(28)
+
+@@ -403,6 +414,10 @@
+ #define INVALIDATION_BROADCAST_MODE_DIS REG_BIT(12)
+ #define GLOBAL_INVALIDATION_MODE REG_BIT(2)
+
++#define LMEM_CFG XE_REG(0xcf58)
++#define LMEM_EN REG_BIT(31)
++#define LMTT_DIR_PTR REG_GENMASK(30, 0) /* in multiples of 64KB */
++
+ #define HALF_SLICE_CHICKEN5 XE_REG_MCR(0xe188, XE_REG_OPTION_MASKED)
+ #define DISABLE_SAMPLE_G_PERFORMANCE REG_BIT(0)
+
+diff --git a/drivers/gpu/drm/xe/regs/xe_regs.h b/drivers/gpu/drm/xe/regs/xe_regs.h
+index 23e33ec8490221..55bf47c9901698 100644
+--- a/drivers/gpu/drm/xe/regs/xe_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_regs.h
+@@ -24,11 +24,14 @@
+ #define LMEM_INIT REG_BIT(7)
+ #define DRIVERFLR REG_BIT(31)
+
++#define XEHP_CLOCK_GATE_DIS XE_REG(0x101014)
++#define SGSI_SIDECLK_DIS REG_BIT(17)
++
+ #define GU_DEBUG XE_REG(0x101018)
+ #define DRIVERFLR_STATUS REG_BIT(31)
+
+-#define XEHP_CLOCK_GATE_DIS XE_REG(0x101014)
+-#define SGSI_SIDECLK_DIS REG_BIT(17)
++#define VIRTUAL_CTRL_REG XE_REG(0x10108c)
++#define GUEST_GTT_UPDATE_EN REG_BIT(8)
+
+ #define XEHP_MTCFG_ADDR XE_REG(0x101800)
+ #define TILE_COUNT REG_GENMASK(15, 8)
+@@ -66,6 +69,9 @@
+ #define DISPLAY_IRQ REG_BIT(16)
+ #define GT_DW_IRQ(x) REG_BIT(x)
+
++#define VF_CAP_REG XE_REG(0x1901f8, XE_REG_OPTION_VF)
++#define VF_CAP REG_BIT(0)
++
+ #define PVC_RP_STATE_CAP XE_REG(0x281014)
+
+ #endif
+diff --git a/drivers/gpu/drm/xe/regs/xe_sriov_regs.h b/drivers/gpu/drm/xe/regs/xe_sriov_regs.h
+deleted file mode 100644
+index 017b4ddd1ecf4f..00000000000000
+--- a/drivers/gpu/drm/xe/regs/xe_sriov_regs.h
++++ /dev/null
+@@ -1,23 +0,0 @@
+-/* SPDX-License-Identifier: MIT */
+-/*
+- * Copyright © 2023 Intel Corporation
+- */
+-
+-#ifndef _REGS_XE_SRIOV_REGS_H_
+-#define _REGS_XE_SRIOV_REGS_H_
+-
+-#include "regs/xe_reg_defs.h"
+-
+-#define XE2_LMEM_CFG XE_REG(0x48b0)
+-
+-#define LMEM_CFG XE_REG(0xcf58)
+-#define LMEM_EN REG_BIT(31)
+-#define LMTT_DIR_PTR REG_GENMASK(30, 0) /* in multiples of 64KB */
+-
+-#define VIRTUAL_CTRL_REG XE_REG(0x10108c)
+-#define GUEST_GTT_UPDATE_EN REG_BIT(8)
+-
+-#define VF_CAP_REG XE_REG(0x1901f8, XE_REG_OPTION_VF)
+-#define VF_CAP REG_BIT(0)
+-
+-#endif
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index a7c7812d579157..ebdb6f2d1ca7cb 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -297,12 +297,6 @@ struct xe_device {
+ u8 has_atomic_enable_pte_bit:1;
+ /** @info.has_device_atomics_on_smem: Supports device atomics on SMEM */
+ u8 has_device_atomics_on_smem:1;
+-
+-#if IS_ENABLED(CONFIG_DRM_XE_DISPLAY)
+- struct {
+- u32 rawclk_freq;
+- } i915_runtime;
+-#endif
+ } info;
+
+ /** @irq: device interrupt state */
+diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
+index 0cdbc1296e8857..226542bb1442e0 100644
+--- a/drivers/gpu/drm/xe/xe_ggtt.c
++++ b/drivers/gpu/drm/xe/xe_ggtt.c
+@@ -309,6 +309,16 @@ static void ggtt_invalidate_gt_tlb(struct xe_gt *gt)
+
+ static void xe_ggtt_invalidate(struct xe_ggtt *ggtt)
+ {
++ struct xe_device *xe = tile_to_xe(ggtt->tile);
++
++ /*
++ * XXX: Barrier for GGTT pages. Unsure exactly why this is required, but
++ * without it LNL is having issues with the GuC reading the scratch page
++ * instead of the correct GGTT page. Not particularly a hot code path, so
++ * blindly do an mmio read here, which results in the GuC reading the
++ * correct GGTT page.
++ */
++ xe_mmio_read32(xe_root_mmio_gt(xe), VF_CAP_REG);
++
+ /* Each GT in a tile has its own TLB to cache GGTT lookups */
+ ggtt_invalidate_gt_tlb(ggtt->tile->primary_gt);
+ ggtt_invalidate_gt_tlb(ggtt->tile->media_gt);
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 0062a5e4d5fac7..ba9f50c1faa679 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -109,9 +109,9 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
+
+ if (!xe_gt_is_media_type(gt)) {
+ xe_mmio_write32(gt, SCRATCH1LPFC, EN_L3_RW_CCS_CACHE_FLUSH);
+- reg = xe_mmio_read32(gt, XE2_GAMREQSTRM_CTRL);
++ reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
+ reg |= CG_DIS_CNTLBUS;
+- xe_mmio_write32(gt, XE2_GAMREQSTRM_CTRL, reg);
++ xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+ }
+
+ xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0x3);
+@@ -133,9 +133,9 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
+ if (WARN_ON(err))
+ return;
+
+- reg = xe_mmio_read32(gt, XE2_GAMREQSTRM_CTRL);
++ reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
+ reg &= ~CG_DIS_CNTLBUS;
+- xe_mmio_write32(gt, XE2_GAMREQSTRM_CTRL, reg);
++ xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+
+ xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+ }
+@@ -555,7 +555,6 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
+
+ xe_gt_mcr_init_early(gt);
+ xe_pat_init(gt);
+- xe_gt_enable_host_l2_vram(gt);
+
+ err = xe_uc_init(&gt->uc);
+ if (err)
+@@ -567,6 +566,7 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
+
+ xe_gt_topology_init(gt);
+ xe_gt_mcr_init(gt);
++ xe_gt_enable_host_l2_vram(gt);
+
+ out_fw:
+ xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+index 9dbba9ab7a9abb..ef239440963ce1 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+@@ -5,7 +5,7 @@
+
+ #include <drm/drm_managed.h>
+
+-#include "regs/xe_sriov_regs.h"
++#include "regs/xe_regs.h"
+
+ #include "xe_gt_sriov_pf.h"
+ #include "xe_gt_sriov_pf_config.h"
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index dfd809e7bbd257..cbdd44567d1072 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -989,12 +989,22 @@ static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
+ static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
+ {
+ struct xe_gt *gt = guc_to_gt(exec_queue_to_guc(q));
+- u32 ctx_timestamp = xe_lrc_ctx_timestamp(q->lrc[0]);
+- u32 ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
++ u32 ctx_timestamp, ctx_job_timestamp;
+ u32 timeout_ms = q->sched_props.job_timeout_ms;
+ u32 diff;
+ u64 running_time_ms;
+
++ if (!xe_sched_job_started(job)) {
++ xe_gt_warn(gt, "Check job timeout: seqno=%u, lrc_seqno=%u, guc_id=%d, not started",
++ xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
++ q->guc->id);
++
++ return xe_sched_invalidate_job(job, 2);
++ }
++
++ ctx_timestamp = xe_lrc_ctx_timestamp(q->lrc[0]);
++ ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
++
+ /*
+ * Counter wraps at ~223s at the usual 19.2MHz, be paranoid catch
+ * possible overflows with a high timeout.
+@@ -1120,10 +1130,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
+ exec_queue_killed_or_banned_or_wedged(q) ||
+ exec_queue_destroyed(q);
+
+- /* Job hasn't started, can't be timed out */
+- if (!skip_timeout_check && !xe_sched_job_started(job))
+- goto rearm;
+-
+ /*
+ * XXX: Sampling timeout doesn't work in wedged mode as we have to
+ * modify scheduling state to read timestamp. We could read the
+diff --git a/drivers/gpu/drm/xe/xe_lmtt.c b/drivers/gpu/drm/xe/xe_lmtt.c
+index 418661a8891839..c5fdb36b6d3365 100644
+--- a/drivers/gpu/drm/xe/xe_lmtt.c
++++ b/drivers/gpu/drm/xe/xe_lmtt.c
+@@ -7,7 +7,7 @@
+
+ #include <drm/drm_managed.h>
+
+-#include "regs/xe_sriov_regs.h"
++#include "regs/xe_gt_regs.h"
+
+ #include "xe_assert.h"
+ #include "xe_bo.h"
+diff --git a/drivers/gpu/drm/xe/xe_module.c b/drivers/gpu/drm/xe/xe_module.c
+index 499540add465a4..464ec24e2dd206 100644
+--- a/drivers/gpu/drm/xe/xe_module.c
++++ b/drivers/gpu/drm/xe/xe_module.c
+@@ -8,6 +8,8 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+
++#include <drm/drm_module.h>
++
+ #include "xe_drv.h"
+ #include "xe_hw_fence.h"
+ #include "xe_pci.h"
+@@ -61,12 +63,23 @@ module_param_named_unsafe(wedged_mode, xe_modparam.wedged_mode, int, 0600);
+ MODULE_PARM_DESC(wedged_mode,
+ "Module's default policy for the wedged mode - 0=never, 1=upon-critical-errors[default], 2=upon-any-hang");
+
++static int xe_check_nomodeset(void)
++{
++ if (drm_firmware_drivers_only())
++ return -ENODEV;
++
++ return 0;
++}
++
+ struct init_funcs {
+ int (*init)(void);
+ void (*exit)(void);
+ };
+
+ static const struct init_funcs init_funcs[] = {
++ {
++ .init = xe_check_nomodeset,
++ },
+ {
+ .init = xe_hw_fence_module_init,
+ .exit = xe_hw_fence_module_exit,
+@@ -85,15 +98,35 @@ static const struct init_funcs init_funcs[] = {
+ },
+ };
+
++static int __init xe_call_init_func(unsigned int i)
++{
++ if (WARN_ON(i >= ARRAY_SIZE(init_funcs)))
++ return 0;
++ if (!init_funcs[i].init)
++ return 0;
++
++ return init_funcs[i].init();
++}
++
++static void xe_call_exit_func(unsigned int i)
++{
++ if (WARN_ON(i >= ARRAY_SIZE(init_funcs)))
++ return;
++ if (!init_funcs[i].exit)
++ return;
++
++ init_funcs[i].exit();
++}
++
+ static int __init xe_init(void)
+ {
+ int err, i;
+
+ for (i = 0; i < ARRAY_SIZE(init_funcs); i++) {
+- err = init_funcs[i].init();
++ err = xe_call_init_func(i);
+ if (err) {
+ while (i--)
+- init_funcs[i].exit();
++ xe_call_exit_func(i);
+ return err;
+ }
+ }
+@@ -106,7 +139,7 @@ static void __exit xe_exit(void)
+ int i;
+
+ for (i = ARRAY_SIZE(init_funcs) - 1; i >= 0; i--)
+- init_funcs[i].exit();
++ xe_call_exit_func(i);
+ }
+
+ module_init(xe_init);
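The xe_module change turns the init sequence into a table in which either callback may be absent, with NULL-tolerant wrappers and an unwind of only what was already initialized. A compact userspace sketch of the same pattern; the names are illustrative:

	#include <stdio.h>

	struct init_funcs {
		int (*init)(void);
		void (*exit)(void);
	};

	static int check_only(void) { printf("check\n"); return 0; }
	static int sub_init(void)   { printf("sub init\n"); return 0; }
	static void sub_exit(void)  { printf("sub exit\n"); }

	static const struct init_funcs init_funcs[] = {
		{ .init = check_only },			/* no matching exit */
		{ .init = sub_init, .exit = sub_exit },
	};

	#define N (sizeof(init_funcs) / sizeof(init_funcs[0]))

	static int call_init(unsigned int i)
	{
		return init_funcs[i].init ? init_funcs[i].init() : 0;
	}

	static void call_exit(unsigned int i)
	{
		if (init_funcs[i].exit)
			init_funcs[i].exit();
	}

	int main(void)
	{
		unsigned int i;
		int err;

		for (i = 0; i < N; i++) {
			err = call_init(i);
			if (err) {
				while (i--)	/* unwind earlier entries only */
					call_exit(i);
				return 1;
			}
		}
		for (i = N; i-- > 0;)
			call_exit(i);
		return 0;
	}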
+diff --git a/drivers/gpu/drm/xe/xe_sriov.c b/drivers/gpu/drm/xe/xe_sriov.c
+index a274a5fb14018b..5a1d65e4f19f2b 100644
+--- a/drivers/gpu/drm/xe/xe_sriov.c
++++ b/drivers/gpu/drm/xe/xe_sriov.c
+@@ -5,7 +5,7 @@
+
+ #include <drm/drm_managed.h>
+
+-#include "regs/xe_sriov_regs.h"
++#include "regs/xe_regs.h"
+
+ #include "xe_assert.h"
+ #include "xe_device.h"
+diff --git a/drivers/gpu/drm/xe/xe_tuning.c b/drivers/gpu/drm/xe/xe_tuning.c
+index d4e6fa918942be..faa1bf42e50edf 100644
+--- a/drivers/gpu/drm/xe/xe_tuning.c
++++ b/drivers/gpu/drm/xe/xe_tuning.c
+@@ -39,12 +39,23 @@ static const struct xe_rtp_entry_sr gt_tunings[] = {
+ },
+ { XE_RTP_NAME("Tuning: Compression Overfetch"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, XE_RTP_END_VERSION_UNDEFINED)),
+- XE_RTP_ACTIONS(CLR(CCCHKNREG1, ENCOMPPERFFIX)),
++ XE_RTP_ACTIONS(CLR(CCCHKNREG1, ENCOMPPERFFIX),
++ SET(CCCHKNREG1, L3CMPCTRL))
+ },
+ { XE_RTP_NAME("Tuning: Enable compressible partial write overfetch in L3"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, XE_RTP_END_VERSION_UNDEFINED)),
+ XE_RTP_ACTIONS(SET(L3SQCREG3, COMPPWOVERFETCHEN))
+ },
++ { XE_RTP_NAME("Tuning: L2 Overfetch Compressible Only"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, XE_RTP_END_VERSION_UNDEFINED)),
++ XE_RTP_ACTIONS(SET(L3SQCREG2,
++ COMPMEMRD256BOVRFETCHEN))
++ },
++ { XE_RTP_NAME("Tuning: Stateless compression control"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, XE_RTP_END_VERSION_UNDEFINED)),
++ XE_RTP_ACTIONS(FIELD_SET(STATELESS_COMPRESSION_CTRL, UNIFIED_COMPRESSION_FORMAT,
++ REG_FIELD_PREP(UNIFIED_COMPRESSION_FORMAT, 0)))
++ },
+ {}
+ };
+
+@@ -93,6 +104,14 @@ static const struct xe_rtp_entry_sr lrc_tunings[] = {
+ REG_FIELD_PREP(L3_PWM_TIMER_INIT_VAL_MASK, 0x7f)))
+ },
+
++ /* Xe2_HPG */
++
++ { XE_RTP_NAME("Tuning: vs hit max value"),
++ XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
++ XE_RTP_ACTIONS(FIELD_SET(FF_MODE, VS_HIT_MAX_VALUE_MASK,
++ REG_FIELD_PREP(VS_HIT_MAX_VALUE_MASK, 0x3f)))
++ },
++
+ {}
+ };
+
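The tuning entries are bitfield writes built from REG_GENMASK()/REG_FIELD_PREP(). A standalone sketch of that math with simplified stand-in macros (not the kernel's), using the VS_HIT_MAX_VALUE_MASK field from the hunk above:

	#include <stdio.h>
	#include <stdint.h>

	#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))
	#define FIELD_PREP(mask, val) \
		(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

	#define VS_HIT_MAX_VALUE_MASK	GENMASK(25, 20)

	int main(void)
	{
		uint32_t reg = 0xdeadbeef;

		/* clear the field, then insert 0x3f, as a FIELD_SET action would */
		reg = (reg & ~VS_HIT_MAX_VALUE_MASK) |
		      FIELD_PREP(VS_HIT_MAX_VALUE_MASK, 0x3f);
		printf("mask=0x%08x reg=0x%08x\n", VS_HIT_MAX_VALUE_MASK, reg);
		return 0;
	}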
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index e648265d081be2..e2d7ccc6f144b6 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -733,6 +733,10 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
+ DIS_PARTIAL_AUTOSTRIP |
+ DIS_AUTOSTRIP))
+ },
++ { XE_RTP_NAME("15016589081"),
++ XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
++ XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX))
++ },
+
+ {}
+ };
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 108e9ccab1ef06..e00074c8852253 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -645,7 +645,7 @@ static int ad7124_write_raw(struct iio_dev *indio_dev,
+
+ switch (info) {
+ case IIO_CHAN_INFO_SAMP_FREQ:
+- if (val2 != 0) {
++ if (val2 != 0 || val == 0) {
+ ret = -EINVAL;
+ break;
+ }
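The tightened check rejects a zero sampling frequency as well as fractional ones: IIO passes the requested rate as an integer/micro pair, and a zero ODR would presumably feed a zero divisor into the later rate setup. A tiny sketch of the validation; the helper name is hypothetical:

	#include <stdio.h>
	#include <errno.h>

	static int set_samp_freq(int val, int val2)
	{
		if (val2 != 0 || val == 0)	/* must be a positive integer */
			return -EINVAL;
		return 0;
	}

	int main(void)
	{
		printf("100 Hz  -> %d\n", set_samp_freq(100, 0));	/* accepted */
		printf("0 Hz    -> %d\n", set_samp_freq(0, 0));		/* rejected */
		printf("10.5 Hz -> %d\n", set_samp_freq(10, 500000));	/* rejected */
		return 0;
	}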
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 59d7615c0f565c..5f131bc1a01e97 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -307,13 +307,15 @@ static int iio_gts_build_avail_scale_table(struct iio_gts *gts)
+ if (ret)
+ goto err_free_out;
+
++ for (i = 0; i < gts->num_itime; i++)
++ kfree(per_time_gains[i]);
+ kfree(per_time_gains);
+ gts->per_time_avail_scale_tables = per_time_scales;
+
+ return 0;
+
+ err_free_out:
+- for (i--; i; i--) {
++ for (i--; i >= 0; i--) {
+ kfree(per_time_scales[i]);
+ kfree(per_time_gains[i]);
+ }
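Two bugs are fixed in this hunk: the success path leaked the per_time_gains row arrays, and the error-path loop `for (i--; i; i--)` stopped before freeing row 0. The corrected unwind shape, as a self-contained userspace sketch under simplified assumptions:

	#include <stdlib.h>

	static int build_tables(int **gains, int **scales, int n, int fail_at)
	{
		int i;

		for (i = 0; i < n; i++) {
			gains[i] = malloc(16 * sizeof(int));
			scales[i] = (i == fail_at) ? NULL : malloc(16 * sizeof(int));
			if (!gains[i] || !scales[i])
				goto err_free_out;
		}
		return 0;

	err_free_out:
		free(gains[i]);		/* current row may be partially allocated */
		free(scales[i]);	/* free(NULL) is a no-op */
		for (i--; i >= 0; i--) {	/* includes row 0 */
			free(gains[i]);
			free(scales[i]);
		}
		return -1;
	}

	int main(void)
	{
		int *gains[4], *scales[4];
		int i, ret = build_tables(gains, scales, 4, 2);

		if (!ret)
			for (i = 0; i < 4; i++) {
				free(gains[i]);
				free(scales[i]);
			}
		return ret ? 1 : 0;
	}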
+diff --git a/drivers/iio/light/veml6030.c b/drivers/iio/light/veml6030.c
+index 9630de1c578ecb..621428885455c0 100644
+--- a/drivers/iio/light/veml6030.c
++++ b/drivers/iio/light/veml6030.c
+@@ -522,7 +522,7 @@ static int veml6030_read_raw(struct iio_dev *indio_dev,
+ }
+ if (mask == IIO_CHAN_INFO_PROCESSED) {
+ *val = (reg * data->cur_resolution) / 10000;
+- *val2 = (reg * data->cur_resolution) % 10000;
++ *val2 = (reg * data->cur_resolution) % 10000 * 100;
+ return IIO_VAL_INT_PLUS_MICRO;
+ }
+ *val = reg;
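The scale fix is pure fixed-point arithmetic: with IIO_VAL_INT_PLUS_MICRO the fractional part must be expressed in micro-units, and the remainder of a division by 10000 is in units of 1/10000, i.e. 100 micro, hence the "* 100". A worked example; the register and resolution values are made up:

	#include <stdio.h>

	int main(void)
	{
		int reg = 12345, cur_resolution = 42;
		long long total = (long long)reg * cur_resolution;	/* 518490 */

		int val  = total / 10000;		/* 51 lux */
		int val2 = total % 10000 * 100;		/* 849000 microlux */

		printf("%d.%06d lux\n", val, val2);	/* 51.849000 lux */
		return 0;
	}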
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 03d517be9c52ec..560a0f7bff85e9 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -1532,9 +1532,11 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
+ u32 tbl_indx;
+ int rc;
+
++ spin_lock_bh(&rcfw->tbl_lock);
+ tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
+ rcfw->qp_tbl[tbl_indx].qp_id = BNXT_QPLIB_QP_ID_INVALID;
+ rcfw->qp_tbl[tbl_indx].qp_handle = NULL;
++ spin_unlock_bh(&rcfw->tbl_lock);
+
+ bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+ CMDQ_BASE_OPCODE_DESTROY_QP,
+@@ -1545,8 +1547,10 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
+ sizeof(resp), 0);
+ rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
+ if (rc) {
++ spin_lock_bh(&rcfw->tbl_lock);
+ rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
+ rcfw->qp_tbl[tbl_indx].qp_handle = qp;
++ spin_unlock_bh(&rcfw->tbl_lock);
+ return rc;
+ }
+
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 7294221b3316cf..e82bd37158ad6c 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -290,7 +290,6 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
+ struct bnxt_qplib_hwq *hwq;
+ u32 sw_prod, cmdq_prod;
+ struct pci_dev *pdev;
+- unsigned long flags;
+ u16 cookie;
+ u8 *preq;
+
+@@ -301,7 +300,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
+ /* Cmdq are in 16-byte units, each request can consume 1 or more
+ * cmdqe
+ */
+- spin_lock_irqsave(&hwq->lock, flags);
++ spin_lock_bh(&hwq->lock);
+ required_slots = bnxt_qplib_get_cmd_slots(msg->req);
+ free_slots = HWQ_FREE_SLOTS(hwq);
+ cookie = cmdq->seq_num & RCFW_MAX_COOKIE_VALUE;
+@@ -311,7 +310,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
+ dev_info_ratelimited(&pdev->dev,
+ "CMDQ is full req/free %d/%d!",
+ required_slots, free_slots);
+- spin_unlock_irqrestore(&hwq->lock, flags);
++ spin_unlock_bh(&hwq->lock);
+ return -EAGAIN;
+ }
+ if (msg->block)
+@@ -367,7 +366,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
+ wmb();
+ writel(cmdq_prod, cmdq->cmdq_mbox.prod);
+ writel(RCFW_CMDQ_TRIG_VAL, cmdq->cmdq_mbox.db);
+- spin_unlock_irqrestore(&hwq->lock, flags);
++ spin_unlock_bh(&hwq->lock);
+ /* Return the CREQ response pointer */
+ return 0;
+ }
+@@ -486,7 +485,6 @@ static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+ {
+ struct creq_qp_event *evnt = (struct creq_qp_event *)msg->resp;
+ struct bnxt_qplib_crsqe *crsqe;
+- unsigned long flags;
+ u16 cookie;
+ int rc;
+ u8 opcode;
+@@ -512,12 +510,12 @@ static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+ rc = __poll_for_resp(rcfw, cookie);
+
+ if (rc) {
+- spin_lock_irqsave(&rcfw->cmdq.hwq.lock, flags);
++ spin_lock_bh(&rcfw->cmdq.hwq.lock);
+ crsqe = &rcfw->crsqe_tbl[cookie];
+ crsqe->is_waiter_alive = false;
+ if (rc == -ENODEV)
+ set_bit(FIRMWARE_STALL_DETECTED, &rcfw->cmdq.flags);
+- spin_unlock_irqrestore(&rcfw->cmdq.hwq.lock, flags);
++ spin_unlock_bh(&rcfw->cmdq.hwq.lock);
+ return -ETIMEDOUT;
+ }
+
+@@ -628,7 +626,6 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ u16 cookie, blocked = 0;
+ bool is_waiter_alive;
+ struct pci_dev *pdev;
+- unsigned long flags;
+ u32 wait_cmds = 0;
+ int rc = 0;
+
+@@ -637,17 +634,21 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ case CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION:
+ err_event = (struct creq_qp_error_notification *)qp_event;
+ qp_id = le32_to_cpu(err_event->xid);
++ spin_lock(&rcfw->tbl_lock);
+ tbl_indx = map_qp_id_to_tbl_indx(qp_id, rcfw);
+ qp = rcfw->qp_tbl[tbl_indx].qp_handle;
++ if (!qp) {
++ spin_unlock(&rcfw->tbl_lock);
++ break;
++ }
++ bnxt_qplib_mark_qp_error(qp);
++ rc = rcfw->creq.aeq_handler(rcfw, qp_event, qp);
++ spin_unlock(&rcfw->tbl_lock);
+ dev_dbg(&pdev->dev, "Received QP error notification\n");
+ dev_dbg(&pdev->dev,
+ "qpid 0x%x, req_err=0x%x, resp_err=0x%x\n",
+ qp_id, err_event->req_err_state_reason,
+ err_event->res_err_state_reason);
+- if (!qp)
+- break;
+- bnxt_qplib_mark_qp_error(qp);
+- rc = rcfw->creq.aeq_handler(rcfw, qp_event, qp);
+ break;
+ default:
+ /*
+@@ -659,8 +660,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ *
+ */
+
+- spin_lock_irqsave_nested(&hwq->lock, flags,
+- SINGLE_DEPTH_NESTING);
++ spin_lock_nested(&hwq->lock, SINGLE_DEPTH_NESTING);
+ cookie = le16_to_cpu(qp_event->cookie);
+ blocked = cookie & RCFW_CMD_IS_BLOCKING;
+ cookie &= RCFW_MAX_COOKIE_VALUE;
+@@ -672,7 +672,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ dev_info(&pdev->dev,
+ "rcfw timedout: cookie = %#x, free_slots = %d",
+ cookie, crsqe->free_slots);
+- spin_unlock_irqrestore(&hwq->lock, flags);
++ spin_unlock(&hwq->lock);
+ return rc;
+ }
+
+@@ -720,7 +720,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ __destroy_timedout_ah(rcfw,
+ (struct creq_create_ah_resp *)
+ qp_event);
+- spin_unlock_irqrestore(&hwq->lock, flags);
++ spin_unlock(&hwq->lock);
+ }
+ *num_wait += wait_cmds;
+ return rc;
+@@ -734,12 +734,11 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ u32 type, budget = CREQ_ENTRY_POLL_BUDGET;
+ struct bnxt_qplib_hwq *hwq = &creq->hwq;
+ struct creq_base *creqe;
+- unsigned long flags;
+ u32 num_wakeup = 0;
+ u32 hw_polled = 0;
+
+ /* Service the CREQ until budget is over */
+- spin_lock_irqsave(&hwq->lock, flags);
++ spin_lock_bh(&hwq->lock);
+ while (budget > 0) {
+ creqe = bnxt_qplib_get_qe(hwq, hwq->cons, NULL);
+ if (!CREQ_CMP_VALID(creqe, creq->creq_db.dbinfo.flags))
+@@ -782,7 +781,7 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ if (hw_polled)
+ bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo,
+ rcfw->res->cctx, true);
+- spin_unlock_irqrestore(&hwq->lock, flags);
++ spin_unlock_bh(&hwq->lock);
+ if (num_wakeup)
+ wake_up_nr(&rcfw->cmdq.waitq, num_wakeup);
+ }
+@@ -978,6 +977,7 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+ GFP_KERNEL);
+ if (!rcfw->qp_tbl)
+ goto fail;
++ spin_lock_init(&rcfw->tbl_lock);
+
+ rcfw->max_timeout = res->cctx->hwrm_cmd_max_timeout;
+
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 45996e60a0d03e..07779aeb75759d 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -224,6 +224,8 @@ struct bnxt_qplib_rcfw {
+ struct bnxt_qplib_crsqe *crsqe_tbl;
+ int qp_tbl_size;
+ struct bnxt_qplib_qp_node *qp_tbl;
++ /* To synchronize the qp-handle hash table */
++ spinlock_t tbl_lock;
+ u64 oos_prev;
+ u32 init_oos_stats;
+ u32 cmdq_depth;
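The new tbl_lock closes a lookup-vs-destroy race: the qp_tbl lookup and the use of the returned handle must happen under one lock, otherwise a concurrent destroy can invalidate the entry in between. The series also switches the hwq lock from the irqsave to the _bh variants, since it is only taken in process and bottom-half context. A userspace analog of the locking pattern, sketched with a pthread mutex in place of a kernel spinlock:

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct qp { int id; };

	static struct qp *qp_tbl[8];
	static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;

	static void handle_qp_event(int idx)
	{
		struct qp *qp;

		pthread_mutex_lock(&tbl_lock);
		qp = qp_tbl[idx];		/* lookup and use under one lock */
		if (qp)				/* may already have been destroyed */
			printf("event for qp %d\n", qp->id);
		pthread_mutex_unlock(&tbl_lock);
	}

	static void destroy_qp(int idx)
	{
		struct qp *qp;

		pthread_mutex_lock(&tbl_lock);
		qp = qp_tbl[idx];
		qp_tbl[idx] = NULL;		/* invalidate under the lock */
		pthread_mutex_unlock(&tbl_lock);
		free(qp);
	}

	int main(void)
	{
		qp_tbl[0] = malloc(sizeof(struct qp));
		qp_tbl[0]->id = 7;

		handle_qp_event(0);
		destroy_qp(0);
		handle_qp_event(0);	/* safely sees NULL */
		return 0;
	}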
+diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
+index 246b739ddb2b21..9008584946c622 100644
+--- a/drivers/infiniband/hw/cxgb4/provider.c
++++ b/drivers/infiniband/hw/cxgb4/provider.c
+@@ -474,6 +474,7 @@ static const struct ib_device_ops c4iw_dev_ops = {
+ .fill_res_cq_entry = c4iw_fill_res_cq_entry,
+ .fill_res_cm_id_entry = c4iw_fill_res_cm_id_entry,
+ .fill_res_mr_entry = c4iw_fill_res_mr_entry,
++ .fill_res_qp_entry = c4iw_fill_res_qp_entry,
+ .get_dev_fw_str = get_dev_fw_str,
+ .get_dma_mr = c4iw_get_dma_mr,
+ .get_hw_stats = c4iw_get_mib,
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index e39b1a101e9721..10ce3b44f645f4 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -4268,14 +4268,14 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
+ MLX5_SET(qpc, qpc, retry_count, attr->retry_cnt);
+
+ if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC && attr->max_rd_atomic)
+- MLX5_SET(qpc, qpc, log_sra_max, ilog2(attr->max_rd_atomic));
++ MLX5_SET(qpc, qpc, log_sra_max, fls(attr->max_rd_atomic - 1));
+
+ if (attr_mask & IB_QP_SQ_PSN)
+ MLX5_SET(qpc, qpc, next_send_psn, attr->sq_psn);
+
+ if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC && attr->max_dest_rd_atomic)
+ MLX5_SET(qpc, qpc, log_rra_max,
+- ilog2(attr->max_dest_rd_atomic));
++ fls(attr->max_dest_rd_atomic - 1));
+
+ if (attr_mask & (IB_QP_ACCESS_FLAGS | IB_QP_MAX_DEST_RD_ATOMIC)) {
+ err = set_qpc_atomic_flags(qp, attr, attr_mask, qpc);
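ilog2() rounds down, but log_sra_max/log_rra_max encode a capacity, so a non-power-of-two request has to round up; fls(x - 1) is exactly ceil(log2(x)) for x >= 1. A standalone comparison, with userspace stand-ins for the kernel helpers:

	#include <stdio.h>

	static int fls(unsigned int x)   { return x ? 32 - __builtin_clz(x) : 0; }
	static int ilog2(unsigned int x) { return fls(x) - 1; }

	int main(void)
	{
		unsigned int x;

		/* e.g. x = 5: ilog2 caps at 4 (too small), fls(x-1) caps at 8 */
		for (x = 1; x <= 8; x++)
			printf("x=%u ilog2(x)=%d (caps at %u) fls(x-1)=%d (caps at %u)\n",
			       x, ilog2(x), 1u << ilog2(x),
			       fls(x - 1), 1u << fls(x - 1));
		return 0;
	}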
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 54c57b267b25f7..865d3f8e97a660 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -119,12 +119,12 @@ static void input_pass_values(struct input_dev *dev,
+
+ handle = rcu_dereference(dev->grab);
+ if (handle) {
+- count = handle->handler->events(handle, vals, count);
++ count = handle->handle_events(handle, vals, count);
+ } else {
+ list_for_each_entry_rcu(handle, &dev->h_list, d_node)
+ if (handle->open) {
+- count = handle->handler->events(handle, vals,
+- count);
++ count = handle->handle_events(handle, vals,
++ count);
+ if (!count)
+ break;
+ }
+@@ -2537,57 +2537,6 @@ static int input_handler_check_methods(const struct input_handler *handler)
+ return 0;
+ }
+
+-/*
+- * An implementation of input_handler's events() method that simply
+- * invokes handler->event() method for each event one by one.
+- */
+-static unsigned int input_handler_events_default(struct input_handle *handle,
+- struct input_value *vals,
+- unsigned int count)
+-{
+- struct input_handler *handler = handle->handler;
+- struct input_value *v;
+-
+- for (v = vals; v != vals + count; v++)
+- handler->event(handle, v->type, v->code, v->value);
+-
+- return count;
+-}
+-
+-/*
+- * An implementation of input_handler's events() method that invokes
+- * handler->filter() method for each event one by one and removes events
+- * that were filtered out from the "vals" array.
+- */
+-static unsigned int input_handler_events_filter(struct input_handle *handle,
+- struct input_value *vals,
+- unsigned int count)
+-{
+- struct input_handler *handler = handle->handler;
+- struct input_value *end = vals;
+- struct input_value *v;
+-
+- for (v = vals; v != vals + count; v++) {
+- if (handler->filter(handle, v->type, v->code, v->value))
+- continue;
+- if (end != v)
+- *end = *v;
+- end++;
+- }
+-
+- return end - vals;
+-}
+-
+-/*
+- * An implementation of input_handler's events() method that does nothing.
+- */
+-static unsigned int input_handler_events_null(struct input_handle *handle,
+- struct input_value *vals,
+- unsigned int count)
+-{
+- return count;
+-}
+-
+ /**
+ * input_register_handler - register a new input handler
+ * @handler: handler to be registered
+@@ -2607,13 +2556,6 @@ int input_register_handler(struct input_handler *handler)
+
+ INIT_LIST_HEAD(&handler->h_list);
+
+- if (handler->filter)
+- handler->events = input_handler_events_filter;
+- else if (handler->event)
+- handler->events = input_handler_events_default;
+- else if (!handler->events)
+- handler->events = input_handler_events_null;
+-
+ error = mutex_lock_interruptible(&input_mutex);
+ if (error)
+ return error;
+@@ -2687,6 +2629,75 @@ int input_handler_for_each_handle(struct input_handler *handler, void *data,
+ }
+ EXPORT_SYMBOL(input_handler_for_each_handle);
+
++/*
++ * An implementation of input_handle's handle_events() method that simply
++ * invokes handler->event() method for each event one by one.
++ */
++static unsigned int input_handle_events_default(struct input_handle *handle,
++ struct input_value *vals,
++ unsigned int count)
++{
++ struct input_handler *handler = handle->handler;
++ struct input_value *v;
++
++ for (v = vals; v != vals + count; v++)
++ handler->event(handle, v->type, v->code, v->value);
++
++ return count;
++}
++
++/*
++ * An implementation of input_handle's handle_events() method that invokes
++ * handler->filter() method for each event one by one and removes events
++ * that were filtered out from the "vals" array.
++ */
++static unsigned int input_handle_events_filter(struct input_handle *handle,
++ struct input_value *vals,
++ unsigned int count)
++{
++ struct input_handler *handler = handle->handler;
++ struct input_value *end = vals;
++ struct input_value *v;
++
++ for (v = vals; v != vals + count; v++) {
++ if (handler->filter(handle, v->type, v->code, v->value))
++ continue;
++ if (end != v)
++ *end = *v;
++ end++;
++ }
++
++ return end - vals;
++}
++
++/*
++ * An implementation of input_handle's handle_events() method that does nothing.
++ */
++static unsigned int input_handle_events_null(struct input_handle *handle,
++ struct input_value *vals,
++ unsigned int count)
++{
++ return count;
++}
++
++/*
++ * Sets up appropriate handle->event_handler based on the input_handler
++ * associated with the handle.
++ */
++static void input_handle_setup_event_handler(struct input_handle *handle)
++{
++ struct input_handler *handler = handle->handler;
++
++ if (handler->filter)
++ handle->handle_events = input_handle_events_filter;
++ else if (handler->event)
++ handle->handle_events = input_handle_events_default;
++ else if (handler->events)
++ handle->handle_events = handler->events;
++ else
++ handle->handle_events = input_handle_events_null;
++}
++
+ /**
+ * input_register_handle - register a new input handle
+ * @handle: handle to register
+@@ -2704,6 +2715,7 @@ int input_register_handle(struct input_handle *handle)
+ struct input_dev *dev = handle->dev;
+ int error;
+
++ input_handle_setup_event_handler(handle);
+ /*
+ * We take dev->mutex here to prevent race with
+ * input_release_device().
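The input core change moves event dispatch from the handler to the handle: the strategy (filter, per-event callback, or pass-through) is picked once at registration and cached as handle->handle_events, so both the grab path and the hot loop make a single indirect call. A reduced userspace sketch of that setup, with the structures simplified:

	#include <stdio.h>

	struct handle;

	struct handler {
		int  (*filter)(struct handle *, int code);
		void (*event)(struct handle *, int code);
	};

	struct handle {
		struct handler *handler;
		unsigned int (*handle_events)(struct handle *, int *, unsigned int);
	};

	static unsigned int handle_events_default(struct handle *h, int *vals,
						  unsigned int count)
	{
		for (unsigned int i = 0; i < count; i++)
			h->handler->event(h, vals[i]);
		return count;
	}

	static unsigned int handle_events_filter(struct handle *h, int *vals,
						 unsigned int count)
	{
		unsigned int out = 0;

		for (unsigned int i = 0; i < count; i++)
			if (!h->handler->filter(h, vals[i]))
				vals[out++] = vals[i];	/* keep unfiltered events */
		return out;
	}

	static unsigned int handle_events_null(struct handle *h, int *vals,
					       unsigned int count)
	{
		return count;
	}

	static void setup_event_handler(struct handle *h)	/* done at registration */
	{
		if (h->handler->filter)
			h->handle_events = handle_events_filter;
		else if (h->handler->event)
			h->handle_events = handle_events_default;
		else
			h->handle_events = handle_events_null;
	}

	static int drop_odd(struct handle *h, int code) { return code & 1; }

	int main(void)
	{
		struct handler flt = { .filter = drop_odd };
		struct handle h = { .handler = &flt };
		int vals[] = { 1, 2, 3, 4 };

		setup_event_handler(&h);
		printf("kept %u events\n", h.handle_events(&h, vals, 4)); /* kept 2 */
		return 0;
	}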
+diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c
+index e70415f189a556..126b0ed85aa50c 100644
+--- a/drivers/input/touchscreen/edt-ft5x06.c
++++ b/drivers/input/touchscreen/edt-ft5x06.c
+@@ -1121,6 +1121,14 @@ static void edt_ft5x06_ts_set_regs(struct edt_ft5x06_ts_data *tsdata)
+ }
+ }
+
++static void edt_ft5x06_exit_regmap(void *arg)
++{
++ struct edt_ft5x06_ts_data *data = arg;
++
++ if (!IS_ERR_OR_NULL(data->regmap))
++ regmap_exit(data->regmap);
++}
++
+ static void edt_ft5x06_disable_regulators(void *arg)
+ {
+ struct edt_ft5x06_ts_data *data = arg;
+@@ -1154,6 +1162,16 @@ static int edt_ft5x06_ts_probe(struct i2c_client *client)
+ return PTR_ERR(tsdata->regmap);
+ }
+
++ /*
++ * We are not using devm_regmap_init_i2c() and instead install a
++ * custom action because we may replace regmap with an M06-specific one
++ * and we need to make sure that it will not be released too early.
++ */
++ error = devm_add_action_or_reset(&client->dev, edt_ft5x06_exit_regmap,
++ tsdata);
++ if (error)
++ return error;
++
+ chip_data = device_get_match_data(&client->dev);
+ if (!chip_data)
+ chip_data = (const struct edt_i2c_chip_data *)id->driver_data;
+@@ -1347,7 +1365,6 @@ static void edt_ft5x06_ts_remove(struct i2c_client *client)
+ struct edt_ft5x06_ts_data *tsdata = i2c_get_clientdata(client);
+
+ edt_ft5x06_ts_teardown_debugfs(tsdata);
+- regmap_exit(tsdata->regmap);
+ }
+
+ static int edt_ft5x06_ts_suspend(struct device *dev)
+diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
+index 9d090fa07516f0..be011cef12e5d5 100644
+--- a/drivers/misc/mei/client.c
++++ b/drivers/misc/mei/client.c
+@@ -321,7 +321,7 @@ void mei_io_cb_free(struct mei_cl_cb *cb)
+ return;
+
+ list_del(&cb->list);
+- kfree(cb->buf.data);
++ kvfree(cb->buf.data);
+ kfree(cb->ext_hdr);
+ kfree(cb);
+ }
+@@ -497,7 +497,7 @@ struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
+ if (length == 0)
+ return cb;
+
+- cb->buf.data = kmalloc(roundup(length, MEI_SLOT_SIZE), GFP_KERNEL);
++ cb->buf.data = kvmalloc(roundup(length, MEI_SLOT_SIZE), GFP_KERNEL);
+ if (!cb->buf.data) {
+ mei_io_cb_free(cb);
+ return NULL;
+diff --git a/drivers/misc/sgi-gru/grukservices.c b/drivers/misc/sgi-gru/grukservices.c
+index 37e804bbb1f281..205945ce9e86a6 100644
+--- a/drivers/misc/sgi-gru/grukservices.c
++++ b/drivers/misc/sgi-gru/grukservices.c
+@@ -258,7 +258,6 @@ static int gru_get_cpu_resources(int dsr_bytes, void **cb, void **dsr)
+ int lcpu;
+
+ BUG_ON(dsr_bytes > GRU_NUM_KERNEL_DSR_BYTES);
+- preempt_disable();
+ bs = gru_lock_kernel_context(-1);
+ lcpu = uv_blade_processor_id();
+ *cb = bs->kernel_cb + lcpu * GRU_HANDLE_STRIDE;
+@@ -272,7 +271,6 @@ static int gru_get_cpu_resources(int dsr_bytes, void **cb, void **dsr)
+ static void gru_free_cpu_resources(void *cb, void *dsr)
+ {
+ gru_unlock_kernel_context(uv_numa_blade_id());
+- preempt_enable();
+ }
+
+ /*
+diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
+index 0f5b09e290c899..3036c15f368925 100644
+--- a/drivers/misc/sgi-gru/grumain.c
++++ b/drivers/misc/sgi-gru/grumain.c
+@@ -937,10 +937,8 @@ vm_fault_t gru_fault(struct vm_fault *vmf)
+
+ again:
+ mutex_lock(&gts->ts_ctxlock);
+- preempt_disable();
+
+ if (gru_check_context_placement(gts)) {
+- preempt_enable();
+ mutex_unlock(&gts->ts_ctxlock);
+ gru_unload_context(gts, 1);
+ return VM_FAULT_NOPAGE;
+@@ -949,7 +947,6 @@ vm_fault_t gru_fault(struct vm_fault *vmf)
+ if (!gts->ts_gru) {
+ STAT(load_user_context);
+ if (!gru_assign_gru_context(gts)) {
+- preempt_enable();
+ mutex_unlock(&gts->ts_ctxlock);
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(GRU_ASSIGN_DELAY); /* true hack ZZZ */
+@@ -965,7 +962,6 @@ vm_fault_t gru_fault(struct vm_fault *vmf)
+ vma->vm_page_prot);
+ }
+
+- preempt_enable();
+ mutex_unlock(&gts->ts_ctxlock);
+
+ return VM_FAULT_NOPAGE;
+diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
+index 10921cd2608dfa..1107dd3e2e9fa4 100644
+--- a/drivers/misc/sgi-gru/grutlbpurge.c
++++ b/drivers/misc/sgi-gru/grutlbpurge.c
+@@ -65,7 +65,6 @@ static struct gru_tlb_global_handle *get_lock_tgh_handle(struct gru_state
+ struct gru_tlb_global_handle *tgh;
+ int n;
+
+- preempt_disable();
+ if (uv_numa_blade_id() == gru->gs_blade_id)
+ n = get_on_blade_tgh(gru);
+ else
+@@ -79,7 +78,6 @@ static struct gru_tlb_global_handle *get_lock_tgh_handle(struct gru_state
+ static void get_unlock_tgh_handle(struct gru_tlb_global_handle *tgh)
+ {
+ unlock_tgh_handle(tgh);
+- preempt_enable();
+ }
+
+ /*
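+
The three sgi-gru hunks above (grukservices.c, grumain.c, grutlbpurge.c) all delete preempt_disable()/preempt_enable() pairs around sections that can sleep, for example by taking a mutex; sleeping with preemption disabled is invalid and fails loudly on PREEMPT_RT. A hedged sketch of the anti-pattern being removed, with hypothetical names:

  static void broken(struct ctx *c)
  {
          preempt_disable();
          mutex_lock(&c->lock);    /* BUG: mutex_lock() may sleep in
                                    * atomic (preempt-disabled) context */
          do_work(c);
          mutex_unlock(&c->lock);
          preempt_enable();
  }

  static void fixed(struct ctx *c)
  {
          mutex_lock(&c->lock);    /* the sleeping lock alone already
                                    * serializes the critical section */
          do_work(c);
          mutex_unlock(&c->lock);
  }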
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index 0f81586a19df38..68ce4920e01e35 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -892,28 +892,40 @@ static void gl9767_disable_ssc_pll(struct pci_dev *pdev)
+ gl9767_vhs_read(pdev);
+ }
+
++static void gl9767_set_low_power_negotiation(struct pci_dev *pdev, bool enable)
++{
++ u32 value;
++
++ gl9767_vhs_write(pdev);
++
++ pci_read_config_dword(pdev, PCIE_GLI_9767_CFG, &value);
++ if (enable)
++ value &= ~PCIE_GLI_9767_CFG_LOW_PWR_OFF;
++ else
++ value |= PCIE_GLI_9767_CFG_LOW_PWR_OFF;
++ pci_write_config_dword(pdev, PCIE_GLI_9767_CFG, value);
++
++ gl9767_vhs_read(pdev);
++}
++
+ static void sdhci_gl9767_set_clock(struct sdhci_host *host, unsigned int clock)
+ {
+ struct sdhci_pci_slot *slot = sdhci_priv(host);
+ struct mmc_ios *ios = &host->mmc->ios;
+ struct pci_dev *pdev;
+- u32 value;
+ u16 clk;
+
+ pdev = slot->chip->pdev;
+ host->mmc->actual_clock = 0;
+
+- gl9767_vhs_write(pdev);
+-
+- pci_read_config_dword(pdev, PCIE_GLI_9767_CFG, &value);
+- value |= PCIE_GLI_9767_CFG_LOW_PWR_OFF;
+- pci_write_config_dword(pdev, PCIE_GLI_9767_CFG, value);
+-
++ gl9767_set_low_power_negotiation(pdev, false);
+ gl9767_disable_ssc_pll(pdev);
+ sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
+- if (clock == 0)
++ if (clock == 0) {
++ gl9767_set_low_power_negotiation(pdev, true);
+ return;
++ }
+
+ clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+ if (clock == 200000000 && ios->timing == MMC_TIMING_UHS_SDR104) {
+@@ -922,12 +934,7 @@ static void sdhci_gl9767_set_clock(struct sdhci_host *host, unsigned int clock)
+ }
+
+ sdhci_enable_clk(host, clk);
+-
+- pci_read_config_dword(pdev, PCIE_GLI_9767_CFG, &value);
+- value &= ~PCIE_GLI_9767_CFG_LOW_PWR_OFF;
+- pci_write_config_dword(pdev, PCIE_GLI_9767_CFG, value);
+-
+- gl9767_vhs_read(pdev);
++ gl9767_set_low_power_negotiation(pdev, true);
+ }
+
+ static void gli_set_9767(struct sdhci_host *host)
+@@ -1061,6 +1068,9 @@ static int gl9767_init_sd_express(struct mmc_host *mmc, struct mmc_ios *ios)
+ sdhci_writew(host, value, SDHCI_CLOCK_CONTROL);
+ }
+
++ pci_read_config_dword(pdev, PCIE_GLI_9767_CFG, &value);
++ value &= ~PCIE_GLI_9767_CFG_LOW_PWR_OFF;
++ pci_write_config_dword(pdev, PCIE_GLI_9767_CFG, value);
+ gl9767_vhs_read(pdev);
+
+ return 0;
+diff --git a/drivers/net/ethernet/amd/mvme147.c b/drivers/net/ethernet/amd/mvme147.c
+index c156566c09064c..f19b04b92fa9f3 100644
+--- a/drivers/net/ethernet/amd/mvme147.c
++++ b/drivers/net/ethernet/amd/mvme147.c
+@@ -105,10 +105,6 @@ static struct net_device * __init mvme147lance_probe(void)
+ macaddr[3] = address&0xff;
+ eth_hw_addr_set(dev, macaddr);
+
+- printk("%s: MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n",
+- dev->name, dev->base_addr, MVME147_LANCE_IRQ,
+- dev->dev_addr);
+-
+ lp = netdev_priv(dev);
+ lp->ram = __get_dma_pages(GFP_ATOMIC, 3); /* 32K */
+ if (!lp->ram) {
+@@ -138,6 +134,9 @@ static struct net_device * __init mvme147lance_probe(void)
+ return ERR_PTR(err);
+ }
+
++ netdev_info(dev, "MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n",
++ dev->base_addr, MVME147_LANCE_IRQ, dev->dev_addr);
++
+ return dev;
+ }
+
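The mvme147 hunks move the probe banner to after register_netdev() and switch it to netdev_info(): before registration, dev->name still holds the unresolved interface-name template, so the old printk() printed a bogus name. Condensed shape of the fix:

  err = register_netdev(dev);
  if (err)
          return ERR_PTR(err);

  /* dev->name is only resolved once registration succeeds,
   * so log the banner here rather than earlier in probe */
  netdev_info(dev, "MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n",
              dev->base_addr, MVME147_LANCE_IRQ, dev->dev_addr);
+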
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
+index cc35d29ac9e6c2..d5ad6d84007c21 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
+@@ -9,6 +9,8 @@
+ #define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50
+ #define ICE_DPLL_PIN_IDX_INVALID 0xff
+ #define ICE_DPLL_RCLK_NUM_PER_PF 1
++#define ICE_DPLL_PIN_ESYNC_PULSE_HIGH_PERCENT 25
++#define ICE_DPLL_PIN_GEN_RCLK_FREQ 1953125
+
+ /**
+ * enum ice_dpll_pin_type - enumerate ice pin types:
+@@ -30,6 +32,10 @@ static const char * const pin_type_name[] = {
+ [ICE_DPLL_PIN_TYPE_RCLK_INPUT] = "rclk-input",
+ };
+
++static const struct dpll_pin_frequency ice_esync_range[] = {
++ DPLL_PIN_FREQUENCY_RANGE(0, DPLL_PIN_FREQUENCY_1_HZ),
++};
++
+ /**
+ * ice_dpll_is_reset - check if reset is in progress
+ * @pf: private board structure
+@@ -394,8 +400,8 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
+
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+- ret = ice_aq_get_input_pin_cfg(&pf->hw, pin->idx, NULL, NULL,
+- NULL, &pin->flags[0],
++ ret = ice_aq_get_input_pin_cfg(&pf->hw, pin->idx, &pin->status,
++ NULL, NULL, &pin->flags[0],
+ &pin->freq, &pin->phase_adjust);
+ if (ret)
+ goto err;
+@@ -430,7 +436,7 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ goto err;
+
+ parent &= ICE_AQC_GET_CGU_OUT_CFG_DPLL_SRC_SEL;
+- if (ICE_AQC_SET_CGU_OUT_CFG_OUT_EN & pin->flags[0]) {
++ if (ICE_AQC_GET_CGU_OUT_CFG_OUT_EN & pin->flags[0]) {
+ pin->state[pf->dplls.eec.dpll_idx] =
+ parent == pf->dplls.eec.dpll_idx ?
+ DPLL_PIN_STATE_CONNECTED :
+@@ -1100,6 +1106,214 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv,
+ return 0;
+ }
+
++/**
++ * ice_dpll_output_esync_set - callback for setting embedded sync
++ * @pin: pointer to a pin
++ * @pin_priv: private data pointer passed on pin registration
++ * @dpll: registered dpll pointer
++ * @dpll_priv: private data pointer passed on dpll registration
++ * @freq: requested embedded sync frequency
++ * @extack: error reporting
++ *
++ * Dpll subsystem callback. Handler for setting embedded sync frequency value
++ * on output pin.
++ *
++ * Context: Acquires pf->dplls.lock
++ * Return:
++ * * 0 - success
++ * * negative - error
++ */
++static int
++ice_dpll_output_esync_set(const struct dpll_pin *pin, void *pin_priv,
++ const struct dpll_device *dpll, void *dpll_priv,
++ u64 freq, struct netlink_ext_ack *extack)
++{
++ struct ice_dpll_pin *p = pin_priv;
++ struct ice_dpll *d = dpll_priv;
++ struct ice_pf *pf = d->pf;
++ u8 flags = 0;
++ int ret;
++
++ if (ice_dpll_is_reset(pf, extack))
++ return -EBUSY;
++ mutex_lock(&pf->dplls.lock);
++ if (p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_OUT_EN)
++ flags = ICE_AQC_SET_CGU_OUT_CFG_OUT_EN;
++ if (freq == DPLL_PIN_FREQUENCY_1_HZ) {
++ if (p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN) {
++ ret = 0;
++ } else {
++ flags |= ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN;
++ ret = ice_aq_set_output_pin_cfg(&pf->hw, p->idx, flags,
++ 0, 0, 0);
++ }
++ } else {
++ if (!(p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN)) {
++ ret = 0;
++ } else {
++ flags &= ~ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN;
++ ret = ice_aq_set_output_pin_cfg(&pf->hw, p->idx, flags,
++ 0, 0, 0);
++ }
++ }
++ mutex_unlock(&pf->dplls.lock);
++
++ return ret;
++}
++
++/**
++ * ice_dpll_output_esync_get - callback for getting embedded sync config
++ * @pin: pointer to a pin
++ * @pin_priv: private data pointer passed on pin registration
++ * @dpll: registered dpll pointer
++ * @dpll_priv: private data pointer passed on dpll registration
++ * @esync: on success holds embedded sync pin properties
++ * @extack: error reporting
++ *
++ * Dpll subsystem callback. Handler for getting embedded sync frequency value
++ * and capabilities on output pin.
++ *
++ * Context: Acquires pf->dplls.lock
++ * Return:
++ * * 0 - success
++ * * negative - error
++ */
++static int
++ice_dpll_output_esync_get(const struct dpll_pin *pin, void *pin_priv,
++ const struct dpll_device *dpll, void *dpll_priv,
++ struct dpll_pin_esync *esync,
++ struct netlink_ext_ack *extack)
++{
++ struct ice_dpll_pin *p = pin_priv;
++ struct ice_dpll *d = dpll_priv;
++ struct ice_pf *pf = d->pf;
++
++ if (ice_dpll_is_reset(pf, extack))
++ return -EBUSY;
++ mutex_lock(&pf->dplls.lock);
++ if (!(p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_ABILITY) ||
++ p->freq != DPLL_PIN_FREQUENCY_10_MHZ) {
++ mutex_unlock(&pf->dplls.lock);
++ return -EOPNOTSUPP;
++ }
++ esync->range = ice_esync_range;
++ esync->range_num = ARRAY_SIZE(ice_esync_range);
++ if (p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN) {
++ esync->freq = DPLL_PIN_FREQUENCY_1_HZ;
++ esync->pulse = ICE_DPLL_PIN_ESYNC_PULSE_HIGH_PERCENT;
++ } else {
++ esync->freq = 0;
++ esync->pulse = 0;
++ }
++ mutex_unlock(&pf->dplls.lock);
++
++ return 0;
++}
++
++/**
++ * ice_dpll_input_esync_set - callback for setting embedded sync
++ * @pin: pointer to a pin
++ * @pin_priv: private data pointer passed on pin registration
++ * @dpll: registered dpll pointer
++ * @dpll_priv: private data pointer passed on dpll registration
++ * @freq: requested embedded sync frequency
++ * @extack: error reporting
++ *
++ * Dpll subsystem callback. Handler for setting embedded sync frequency value
++ * on input pin.
++ *
++ * Context: Acquires pf->dplls.lock
++ * Return:
++ * * 0 - success
++ * * negative - error
++ */
++static int
++ice_dpll_input_esync_set(const struct dpll_pin *pin, void *pin_priv,
++ const struct dpll_device *dpll, void *dpll_priv,
++ u64 freq, struct netlink_ext_ack *extack)
++{
++ struct ice_dpll_pin *p = pin_priv;
++ struct ice_dpll *d = dpll_priv;
++ struct ice_pf *pf = d->pf;
++ u8 flags_en = 0;
++ int ret;
++
++ if (ice_dpll_is_reset(pf, extack))
++ return -EBUSY;
++ mutex_lock(&pf->dplls.lock);
++ if (p->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_INPUT_EN)
++ flags_en = ICE_AQC_SET_CGU_IN_CFG_FLG2_INPUT_EN;
++ if (freq == DPLL_PIN_FREQUENCY_1_HZ) {
++ if (p->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN) {
++ ret = 0;
++ } else {
++ flags_en |= ICE_AQC_SET_CGU_IN_CFG_FLG2_ESYNC_EN;
++ ret = ice_aq_set_input_pin_cfg(&pf->hw, p->idx, 0,
++ flags_en, 0, 0);
++ }
++ } else {
++ if (!(p->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN)) {
++ ret = 0;
++ } else {
++ flags_en &= ~ICE_AQC_SET_CGU_IN_CFG_FLG2_ESYNC_EN;
++ ret = ice_aq_set_input_pin_cfg(&pf->hw, p->idx, 0,
++ flags_en, 0, 0);
++ }
++ }
++ mutex_unlock(&pf->dplls.lock);
++
++ return ret;
++}
++
++/**
++ * ice_dpll_input_esync_get - callback for getting embedded sync config
++ * @pin: pointer to a pin
++ * @pin_priv: private data pointer passed on pin registration
++ * @dpll: registered dpll pointer
++ * @dpll_priv: private data pointer passed on dpll registration
++ * @esync: on success holds embedded sync pin properties
++ * @extack: error reporting
++ *
++ * Dpll subsystem callback. Handler for getting embedded sync frequency value
++ * and capabilities on input pin.
++ *
++ * Context: Acquires pf->dplls.lock
++ * Return:
++ * * 0 - success
++ * * negative - error
++ */
++static int
++ice_dpll_input_esync_get(const struct dpll_pin *pin, void *pin_priv,
++ const struct dpll_device *dpll, void *dpll_priv,
++ struct dpll_pin_esync *esync,
++ struct netlink_ext_ack *extack)
++{
++ struct ice_dpll_pin *p = pin_priv;
++ struct ice_dpll *d = dpll_priv;
++ struct ice_pf *pf = d->pf;
++
++ if (ice_dpll_is_reset(pf, extack))
++ return -EBUSY;
++ mutex_lock(&pf->dplls.lock);
++ if (!(p->status & ICE_AQC_GET_CGU_IN_CFG_STATUS_ESYNC_CAP) ||
++ p->freq != DPLL_PIN_FREQUENCY_10_MHZ) {
++ mutex_unlock(&pf->dplls.lock);
++ return -EOPNOTSUPP;
++ }
++ esync->range = ice_esync_range;
++ esync->range_num = ARRAY_SIZE(ice_esync_range);
++ if (p->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN) {
++ esync->freq = DPLL_PIN_FREQUENCY_1_HZ;
++ esync->pulse = ICE_DPLL_PIN_ESYNC_PULSE_HIGH_PERCENT;
++ } else {
++ esync->freq = 0;
++ esync->pulse = 0;
++ }
++ mutex_unlock(&pf->dplls.lock);
++
++ return 0;
++}
++
+ /**
+ * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin
+ * @pin: pointer to a pin
+@@ -1224,6 +1438,8 @@ static const struct dpll_pin_ops ice_dpll_input_ops = {
+ .phase_adjust_get = ice_dpll_pin_phase_adjust_get,
+ .phase_adjust_set = ice_dpll_input_phase_adjust_set,
+ .phase_offset_get = ice_dpll_phase_offset_get,
++ .esync_set = ice_dpll_input_esync_set,
++ .esync_get = ice_dpll_input_esync_get,
+ };
+
+ static const struct dpll_pin_ops ice_dpll_output_ops = {
+@@ -1234,6 +1450,8 @@ static const struct dpll_pin_ops ice_dpll_output_ops = {
+ .direction_get = ice_dpll_output_direction,
+ .phase_adjust_get = ice_dpll_pin_phase_adjust_get,
+ .phase_adjust_set = ice_dpll_output_phase_adjust_set,
++ .esync_set = ice_dpll_output_esync_set,
++ .esync_get = ice_dpll_output_esync_get,
+ };
+
+ static const struct dpll_device_ops ice_dpll_ops = {
+@@ -1846,6 +2064,73 @@ static int ice_dpll_init_worker(struct ice_pf *pf)
+ return 0;
+ }
+
++/**
++ * ice_dpll_init_info_pins_generic - initializes generic pins info
++ * @pf: board private structure
++ * @input: if input pins initialized
++ *
++ * Init information for generic pins, cache them in PF's pins structures.
++ *
++ * Return:
++ * * 0 - success
++ * * negative - init failure reason
++ */
++static int ice_dpll_init_info_pins_generic(struct ice_pf *pf, bool input)
++{
++ struct ice_dpll *de = &pf->dplls.eec, *dp = &pf->dplls.pps;
++ static const char labels[][sizeof("99")] = {
++ "0", "1", "2", "3", "4", "5", "6", "7", "8",
++ "9", "10", "11", "12", "13", "14", "15" };
++ u32 cap = DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE;
++ enum ice_dpll_pin_type pin_type;
++ int i, pin_num, ret = -EINVAL;
++ struct ice_dpll_pin *pins;
++ u32 phase_adj_max;
++
++ if (input) {
++ pin_num = pf->dplls.num_inputs;
++ pins = pf->dplls.inputs;
++ phase_adj_max = pf->dplls.input_phase_adj_max;
++ pin_type = ICE_DPLL_PIN_TYPE_INPUT;
++ cap |= DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE;
++ } else {
++ pin_num = pf->dplls.num_outputs;
++ pins = pf->dplls.outputs;
++ phase_adj_max = pf->dplls.output_phase_adj_max;
++ pin_type = ICE_DPLL_PIN_TYPE_OUTPUT;
++ }
++ if (pin_num > ARRAY_SIZE(labels))
++ return ret;
++
++ for (i = 0; i < pin_num; i++) {
++ pins[i].idx = i;
++ pins[i].prop.board_label = labels[i];
++ pins[i].prop.phase_range.min = phase_adj_max;
++ pins[i].prop.phase_range.max = -phase_adj_max;
++ pins[i].prop.capabilities = cap;
++ pins[i].pf = pf;
++ ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
++ if (ret)
++ break;
++ if (input && pins[i].freq == ICE_DPLL_PIN_GEN_RCLK_FREQ)
++ pins[i].prop.type = DPLL_PIN_TYPE_MUX;
++ else
++ pins[i].prop.type = DPLL_PIN_TYPE_EXT;
++ if (!input)
++ continue;
++ ret = ice_aq_get_cgu_ref_prio(&pf->hw, de->dpll_idx, i,
++ &de->input_prio[i]);
++ if (ret)
++ break;
++ ret = ice_aq_get_cgu_ref_prio(&pf->hw, dp->dpll_idx, i,
++ &dp->input_prio[i]);
++ if (ret)
++ break;
++ }
++
++ return ret;
++}
++
+ /**
+ * ice_dpll_init_info_direct_pins - initializes direct pins info
+ * @pf: board private structure
+@@ -1884,6 +2169,8 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ default:
+ return -EINVAL;
+ }
++ if (num_pins != ice_cgu_get_num_pins(hw, input))
++ return ice_dpll_init_info_pins_generic(pf, input);
+
+ for (i = 0; i < num_pins; i++) {
+ caps = 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h
+index 93172e93995b94..c320f1bf7d6d66 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.h
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.h
+@@ -31,6 +31,7 @@ struct ice_dpll_pin {
+ struct dpll_pin_properties prop;
+ u32 freq;
+ s32 phase_adjust;
++ u8 status;
+ };
+
+ /** ice_dpll - store info required for DPLL control
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index 3a33e6b9b313d7..ec8db830ac73ae 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -34,7 +34,6 @@ static const struct ice_cgu_pin_desc ice_e810t_sfp_cgu_inputs[] = {
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "GNSS-1PPS", ZL_REF4P, DPLL_PIN_TYPE_GNSS,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+- { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, 0, },
+ };
+
+ static const struct ice_cgu_pin_desc ice_e810t_qsfp_cgu_inputs[] = {
+@@ -52,7 +51,6 @@ static const struct ice_cgu_pin_desc ice_e810t_qsfp_cgu_inputs[] = {
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "GNSS-1PPS", ZL_REF4P, DPLL_PIN_TYPE_GNSS,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+- { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, },
+ };
+
+ static const struct ice_cgu_pin_desc ice_e810t_sfp_cgu_outputs[] = {
+@@ -5964,6 +5962,25 @@ ice_cgu_get_pin_desc(struct ice_hw *hw, bool input, int *size)
+ return t;
+ }
+
++/**
++ * ice_cgu_get_num_pins - get pin description array size
++ * @hw: pointer to the hw struct
++ * @input: if request is done against input or output pins
++ *
++ * Return: size of pin description array for given hw.
++ */
++int ice_cgu_get_num_pins(struct ice_hw *hw, bool input)
++{
++ const struct ice_cgu_pin_desc *t;
++ int size;
++
++ t = ice_cgu_get_pin_desc(hw, input, &size);
++ if (t)
++ return size;
++
++ return 0;
++}
++
+ /**
+ * ice_cgu_get_pin_type - get pin's type
+ * @hw: pointer to the hw struct
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 0852a34ade918b..6cedc1a906afb6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -404,6 +404,7 @@ int ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data);
+ int ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data);
+ int ice_read_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 *data);
+ bool ice_is_pca9575_present(struct ice_hw *hw);
++int ice_cgu_get_num_pins(struct ice_hw *hw, bool input);
+ enum dpll_pin_type ice_cgu_get_pin_type(struct ice_hw *hw, u8 pin, bool input);
+ struct dpll_pin_frequency *
+ ice_cgu_get_pin_freq_supp(struct ice_hw *hw, u8 pin, bool input, u8 *num);
+diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h
+index 87a67fa3868d34..c01b1e8428f6d5 100644
+--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h
++++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h
+@@ -91,8 +91,8 @@ enum mtk_wed_dummy_cr_idx {
+ #define MT7981_FIRMWARE_WO "mediatek/mt7981_wo.bin"
+ #define MT7986_FIRMWARE_WO0 "mediatek/mt7986_wo_0.bin"
+ #define MT7986_FIRMWARE_WO1 "mediatek/mt7986_wo_1.bin"
+-#define MT7988_FIRMWARE_WO0 "mediatek/mt7988_wo_0.bin"
+-#define MT7988_FIRMWARE_WO1 "mediatek/mt7988_wo_1.bin"
++#define MT7988_FIRMWARE_WO0 "mediatek/mt7988/mt7988_wo_0.bin"
++#define MT7988_FIRMWARE_WO1 "mediatek/mt7988/mt7988_wo_1.bin"
+
+ #define MTK_WO_MCU_CFG_LS_BASE 0
+ #define MTK_WO_MCU_CFG_LS_HW_VER_ADDR (MTK_WO_MCU_CFG_LS_BASE + 0x000)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index 060e5b93921148..d6f37456fb317d 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -389,15 +389,27 @@ static void mlxsw_pci_wqe_frag_unmap(struct mlxsw_pci *mlxsw_pci, char *wqe,
+ dma_unmap_single(&pdev->dev, mapaddr, frag_len, direction);
+ }
+
+-static struct sk_buff *mlxsw_pci_rdq_build_skb(struct page *pages[],
++static struct sk_buff *mlxsw_pci_rdq_build_skb(struct mlxsw_pci_queue *q,
++ struct page *pages[],
+ u16 byte_count)
+ {
++ struct mlxsw_pci_queue *cq = q->u.rdq.cq;
+ unsigned int linear_data_size;
++ struct page_pool *page_pool;
+ struct sk_buff *skb;
+ int page_index = 0;
+ bool linear_only;
+ void *data;
+
++ linear_only = byte_count + MLXSW_PCI_RX_BUF_SW_OVERHEAD <= PAGE_SIZE;
++ linear_data_size = linear_only ? byte_count :
++ PAGE_SIZE -
++ MLXSW_PCI_RX_BUF_SW_OVERHEAD;
++
++ page_pool = cq->u.cq.page_pool;
++ page_pool_dma_sync_for_cpu(page_pool, pages[page_index],
++ MLXSW_PCI_SKB_HEADROOM, linear_data_size);
++
+ data = page_address(pages[page_index]);
+ net_prefetch(data);
+
+@@ -405,11 +417,6 @@ static struct sk_buff *mlxsw_pci_rdq_build_skb(struct page *pages[],
+ if (unlikely(!skb))
+ return ERR_PTR(-ENOMEM);
+
+- linear_only = byte_count + MLXSW_PCI_RX_BUF_SW_OVERHEAD <= PAGE_SIZE;
+- linear_data_size = linear_only ? byte_count :
+- PAGE_SIZE -
+- MLXSW_PCI_RX_BUF_SW_OVERHEAD;
+-
+ skb_reserve(skb, MLXSW_PCI_SKB_HEADROOM);
+ skb_put(skb, linear_data_size);
+
+@@ -425,6 +432,7 @@ static struct sk_buff *mlxsw_pci_rdq_build_skb(struct page *pages[],
+
+ page = pages[page_index];
+ frag_size = min(byte_count, PAGE_SIZE);
++ page_pool_dma_sync_for_cpu(page_pool, page, 0, frag_size);
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ page, 0, frag_size, PAGE_SIZE);
+ byte_count -= frag_size;
+@@ -760,7 +768,7 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
+ if (err)
+ goto out;
+
+- skb = mlxsw_pci_rdq_build_skb(pages, byte_count);
++ skb = mlxsw_pci_rdq_build_skb(q, pages, byte_count);
+ if (IS_ERR(skb)) {
+ dev_err_ratelimited(&pdev->dev, "Failed to build skb for RDQ\n");
+ mlxsw_pci_rdq_pages_recycle(q, pages, num_sg_entries);
+@@ -988,12 +996,13 @@ static int mlxsw_pci_cq_page_pool_init(struct mlxsw_pci_queue *q,
+ if (cq_type != MLXSW_PCI_CQ_RDQ)
+ return 0;
+
+- pp_params.flags = PP_FLAG_DMA_MAP;
++ pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+ pp_params.pool_size = MLXSW_PCI_WQE_COUNT * mlxsw_pci->num_sg_entries;
+ pp_params.nid = dev_to_node(&mlxsw_pci->pdev->dev);
+ pp_params.dev = &mlxsw_pci->pdev->dev;
+ pp_params.napi = &q->u.cq.napi;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
++ pp_params.max_len = PAGE_SIZE;
+
+ page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(page_pool))
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+index d761a1235994cc..7ea798a4949e2d 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+@@ -481,11 +481,33 @@ mlxsw_sp_ipip_ol_netdev_change_gre6(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp_ipip_entry *ipip_entry,
+ struct netlink_ext_ack *extack)
+ {
++ u32 new_kvdl_index, old_kvdl_index = ipip_entry->dip_kvdl_index;
++ struct in6_addr old_addr6 = ipip_entry->parms.daddr.addr6;
+ struct mlxsw_sp_ipip_parms new_parms;
++ int err;
+
+ new_parms = mlxsw_sp_ipip_netdev_parms_init_gre6(ipip_entry->ol_dev);
+- return mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry,
+- &new_parms, extack);
++
++ err = mlxsw_sp_ipv6_addr_kvdl_index_get(mlxsw_sp,
++ &new_parms.daddr.addr6,
++ &new_kvdl_index);
++ if (err)
++ return err;
++ ipip_entry->dip_kvdl_index = new_kvdl_index;
++
++ err = mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry,
++ &new_parms, extack);
++ if (err)
++ goto err_change_gre;
++
++ mlxsw_sp_ipv6_addr_put(mlxsw_sp, &old_addr6);
++
++ return 0;
++
++err_change_gre:
++ ipip_entry->dip_kvdl_index = old_kvdl_index;
++ mlxsw_sp_ipv6_addr_put(mlxsw_sp, &new_parms.daddr.addr6);
++ return err;
+ }
+
+ static int
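+
The spectrum_ipip change turns a plain tunnel-parameter update into acquire-new / commit / release-old, with a rollback leg that restores the old KVDL index and drops the new reference when the commit fails. The general shape of that ordering, as a hedged sketch with hypothetical names:

  int change_dest(struct entry *e, struct addr *new_addr)
  {
          u32 new_idx, old_idx = e->idx;
          struct addr old_addr = e->addr;
          int err;

          err = index_get(new_addr, &new_idx);  /* grab the new ref first */
          if (err)
                  return err;
          e->idx = new_idx;

          err = apply_change(e, new_addr);
          if (err)
                  goto err_apply;

          index_put(&old_addr);                 /* success: drop old ref */
          return 0;

  err_apply:
          e->idx = old_idx;                     /* restore old state ... */
          index_put(new_addr);                  /* ... and drop new ref */
          return err;
  }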
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
+index 5b174cb95eb8a3..d94081c7658e32 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
+@@ -16,6 +16,7 @@
+ #include "spectrum.h"
+ #include "spectrum_ptp.h"
+ #include "core.h"
++#include "txheader.h"
+
+ #define MLXSW_SP1_PTP_CLOCK_CYCLES_SHIFT 29
+ #define MLXSW_SP1_PTP_CLOCK_FREQ_KHZ 156257 /* 6.4nSec */
+@@ -1684,6 +1685,12 @@ int mlxsw_sp_ptp_txhdr_construct(struct mlxsw_core *mlxsw_core,
+ struct sk_buff *skb,
+ const struct mlxsw_tx_info *tx_info)
+ {
++ if (skb_cow_head(skb, MLXSW_TXHDR_LEN)) {
++ this_cpu_inc(mlxsw_sp_port->pcpu_stats->tx_dropped);
++ dev_kfree_skb_any(skb);
++ return -ENOMEM;
++ }
++
+ mlxsw_sp_txhdr_construct(skb, tx_info);
+ return 0;
+ }
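+
spectrum_ptp's tx-header construction now starts with skb_cow_head(): skbs reaching this PTP path are presumably not guaranteed the headroom the regular transmit path reserves, and skb_cow_head() makes the requested bytes of headroom writable, reallocating if the skb is cloned or short, before anything is pushed. The canonical pattern:

  if (skb_cow_head(skb, hdr_len)) {   /* hdr_len writable bytes, or fail */
          dev_kfree_skb_any(skb);
          return -ENOMEM;
  }
  memset(skb_push(skb, hdr_len), 0, hdr_len);   /* now safe to prepend */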
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index 84d3a8551b0322..071f128aa49073 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -203,8 +203,12 @@ static void _dwmac4_dump_dma_regs(struct stmmac_priv *priv,
+ readl(ioaddr + DMA_CHAN_TX_CONTROL(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_RX_CONTROL(default_addrs, channel) / 4] =
+ readl(ioaddr + DMA_CHAN_RX_CONTROL(dwmac4_addrs, channel));
++ reg_space[DMA_CHAN_TX_BASE_ADDR_HI(default_addrs, channel) / 4] =
++ readl(ioaddr + DMA_CHAN_TX_BASE_ADDR_HI(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_TX_BASE_ADDR(default_addrs, channel) / 4] =
+ readl(ioaddr + DMA_CHAN_TX_BASE_ADDR(dwmac4_addrs, channel));
++ reg_space[DMA_CHAN_RX_BASE_ADDR_HI(default_addrs, channel) / 4] =
++ readl(ioaddr + DMA_CHAN_RX_BASE_ADDR_HI(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_RX_BASE_ADDR(default_addrs, channel) / 4] =
+ readl(ioaddr + DMA_CHAN_RX_BASE_ADDR(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_TX_END_ADDR(default_addrs, channel) / 4] =
+@@ -225,8 +229,12 @@ static void _dwmac4_dump_dma_regs(struct stmmac_priv *priv,
+ readl(ioaddr + DMA_CHAN_CUR_TX_DESC(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_CUR_RX_DESC(default_addrs, channel) / 4] =
+ readl(ioaddr + DMA_CHAN_CUR_RX_DESC(dwmac4_addrs, channel));
++ reg_space[DMA_CHAN_CUR_TX_BUF_ADDR_HI(default_addrs, channel) / 4] =
++ readl(ioaddr + DMA_CHAN_CUR_TX_BUF_ADDR_HI(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_CUR_TX_BUF_ADDR(default_addrs, channel) / 4] =
+ readl(ioaddr + DMA_CHAN_CUR_TX_BUF_ADDR(dwmac4_addrs, channel));
++ reg_space[DMA_CHAN_CUR_RX_BUF_ADDR_HI(default_addrs, channel) / 4] =
++ readl(ioaddr + DMA_CHAN_CUR_RX_BUF_ADDR_HI(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_CUR_RX_BUF_ADDR(default_addrs, channel) / 4] =
+ readl(ioaddr + DMA_CHAN_CUR_RX_BUF_ADDR(dwmac4_addrs, channel));
+ reg_space[DMA_CHAN_STATUS(default_addrs, channel) / 4] =
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
+index 17d9120db5fe90..4f980dcd395823 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
+@@ -127,7 +127,9 @@ static inline u32 dma_chanx_base_addr(const struct dwmac4_addrs *addrs,
+ #define DMA_CHAN_SLOT_CTRL_STATUS(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x3c)
+ #define DMA_CHAN_CUR_TX_DESC(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x44)
+ #define DMA_CHAN_CUR_RX_DESC(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x4c)
++#define DMA_CHAN_CUR_TX_BUF_ADDR_HI(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x50)
+ #define DMA_CHAN_CUR_TX_BUF_ADDR(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x54)
++#define DMA_CHAN_CUR_RX_BUF_ADDR_HI(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x58)
+ #define DMA_CHAN_CUR_RX_BUF_ADDR(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x5c)
+ #define DMA_CHAN_STATUS(addrs, x) (dma_chanx_base_addr(addrs, x) + 0x60)
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index f3a1b179aaeaca..02368917efb4ad 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4330,11 +4330,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (dma_mapping_error(priv->device, des))
+ goto dma_map_err;
+
+- tx_q->tx_skbuff_dma[first_entry].buf = des;
+- tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
+- tx_q->tx_skbuff_dma[first_entry].map_as_page = false;
+- tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB;
+-
+ if (priv->dma_cap.addr64 <= 32) {
+ first->des0 = cpu_to_le32(des);
+
+@@ -4353,6 +4348,23 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
+
++ /* In case two or more DMA transmit descriptors are allocated for this
++ * non-paged SKB data, the DMA buffer address should be saved to
++ * tx_q->tx_skbuff_dma[].buf corresponding to the last descriptor,
++ * and leave the other tx_q->tx_skbuff_dma[].buf as NULL to guarantee
++ * that stmmac_tx_clean() does not unmap the entire DMA buffer too early
++ * since the tail areas of the DMA buffer can be accessed by DMA engine
++ * sooner or later.
++ * By saving the DMA buffer address to tx_q->tx_skbuff_dma[].buf
++ * corresponding to the last descriptor, stmmac_tx_clean() will unmap
++ * this DMA buffer right after the DMA engine completely finishes the
++ * full buffer transmission.
++ */
++ tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
++ tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
++ tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
++ tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
++
+ /* Prepare fragments */
+ for (i = 0; i < nfrags; i++) {
+ const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 2e94d10348cceb..4cb925321785ec 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1702,20 +1702,24 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[])
+ return -EINVAL;
+
+ if (data[IFLA_GTP_FD0]) {
+- u32 fd0 = nla_get_u32(data[IFLA_GTP_FD0]);
++ int fd0 = nla_get_u32(data[IFLA_GTP_FD0]);
+
+- sk0 = gtp_encap_enable_socket(fd0, UDP_ENCAP_GTP0, gtp);
+- if (IS_ERR(sk0))
+- return PTR_ERR(sk0);
++ if (fd0 >= 0) {
++ sk0 = gtp_encap_enable_socket(fd0, UDP_ENCAP_GTP0, gtp);
++ if (IS_ERR(sk0))
++ return PTR_ERR(sk0);
++ }
+ }
+
+ if (data[IFLA_GTP_FD1]) {
+- u32 fd1 = nla_get_u32(data[IFLA_GTP_FD1]);
++ int fd1 = nla_get_u32(data[IFLA_GTP_FD1]);
+
+- sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp);
+- if (IS_ERR(sk1u)) {
+- gtp_encap_disable_sock(sk0);
+- return PTR_ERR(sk1u);
++ if (fd1 >= 0) {
++ sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp);
++ if (IS_ERR(sk1u)) {
++ gtp_encap_disable_sock(sk0);
++ return PTR_ERR(sk1u);
++ }
+ }
+ }
+
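The gtp hunk is a signedness fix: IFLA_GTP_FD0/FD1 were read into a u32, so a userspace value of -1, meaning "no socket for this GTP version", became 0xffffffff and was passed on to the socket lookup. Reading the attribute into an int restores the sentinel and lets the driver skip the lookup entirely. Reduced to its essence:

  int fd = nla_get_u32(data[IFLA_GTP_FD0]);  /* reinterpret as signed */

  if (fd >= 0) {                             /* -1 == "not provided" */
          sk = gtp_encap_enable_socket(fd, UDP_ENCAP_GTP0, gtp);
          if (IS_ERR(sk))
                  return PTR_ERR(sk);
  }
+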
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 2a31d09d43ed49..edee2870f62ab7 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3798,8 +3798,7 @@ static void macsec_free_netdev(struct net_device *dev)
+ {
+ struct macsec_dev *macsec = macsec_priv(dev);
+
+- if (macsec->secy.tx_sc.md_dst)
+- metadata_dst_free(macsec->secy.tx_sc.md_dst);
++ dst_release(&macsec->secy.tx_sc.md_dst->dst);
+ free_percpu(macsec->stats);
+ free_percpu(macsec->secy.tx_sc.stats);
+
+diff --git a/drivers/net/mctp/mctp-i2c.c b/drivers/net/mctp/mctp-i2c.c
+index 4dc057c121f5d0..e70fb66879941f 100644
+--- a/drivers/net/mctp/mctp-i2c.c
++++ b/drivers/net/mctp/mctp-i2c.c
+@@ -588,6 +588,9 @@ static int mctp_i2c_header_create(struct sk_buff *skb, struct net_device *dev,
+ if (len > MCTP_I2C_MAXMTU)
+ return -EMSGSIZE;
+
++ if (!daddr || !saddr)
++ return -EINVAL;
++
+ lldst = *((u8 *)daddr);
+ llsrc = *((u8 *)saddr);
+
+diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
+index a1f91ff8ec5688..f108e363b716ad 100644
+--- a/drivers/net/netdevsim/fib.c
++++ b/drivers/net/netdevsim/fib.c
+@@ -1377,10 +1377,12 @@ static ssize_t nsim_nexthop_bucket_activity_write(struct file *file,
+
+ if (pos != 0)
+ return -EINVAL;
+- if (size > sizeof(buf))
++ if (size > sizeof(buf) - 1)
+ return -EINVAL;
+ if (copy_from_user(buf, user_buf, size))
+ return -EFAULT;
++ buf[size] = 0;
++
+ if (sscanf(buf, "%u %hu", &nhid, &bucket_index) != 2)
+ return -EINVAL;
+
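The netdevsim fix is the classic off-by-one for user-supplied debugfs writes: reserve a byte for the terminator, then NUL-terminate before handing the buffer to sscanf(). The same rule as a self-contained userspace demonstration:

  #include <stdio.h>
  #include <string.h>

  static int parse_activity(const char *user_buf, size_t size)
  {
          char buf[128];
          unsigned int nhid;
          unsigned short bucket;

          if (size > sizeof(buf) - 1)  /* was: size > sizeof(buf) */
                  return -1;
          memcpy(buf, user_buf, size);
          buf[size] = '\0';            /* sscanf() needs a terminated string */

          if (sscanf(buf, "%u %hu", &nhid, &bucket) != 2)
                  return -1;
          return 0;
  }

  int main(void)
  {
          return parse_activity("12 3", 4) == 0 ? 0 : 1;
  }
+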
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index dbaf26d6a7a618..16d07d619b4df9 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -3043,9 +3043,14 @@ ath10k_wmi_tlv_op_cleanup_mgmt_tx_send(struct ath10k *ar,
+ struct sk_buff *msdu)
+ {
+ struct ath10k_skb_cb *cb = ATH10K_SKB_CB(msdu);
++ struct ath10k_mgmt_tx_pkt_addr *pkt_addr;
+ struct ath10k_wmi *wmi = &ar->wmi;
+
+- idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id);
++ spin_lock_bh(&ar->data_lock);
++ pkt_addr = idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id);
++ spin_unlock_bh(&ar->data_lock);
++
++ kfree(pkt_addr);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index fe234459836497..1d58452ed8ca6d 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -2441,6 +2441,7 @@ wmi_process_mgmt_tx_comp(struct ath10k *ar, struct mgmt_tx_compl_params *param)
+ dma_unmap_single(ar->dev, pkt_addr->paddr,
+ msdu->len, DMA_TO_DEVICE);
+ info = IEEE80211_SKB_CB(msdu);
++ kfree(pkt_addr);
+
+ if (param->status) {
+ info->flags &= ~IEEE80211_TX_STAT_ACK;
+@@ -9612,6 +9613,7 @@ static int ath10k_wmi_mgmt_tx_clean_up_pending(int msdu_id, void *ptr,
+ dma_unmap_single(ar->dev, pkt_addr->paddr,
+ msdu->len, DMA_TO_DEVICE);
+ ieee80211_free_txskb(ar->hw, msdu);
++ kfree(pkt_addr);
+
+ return 0;
+ }
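+
Both ath10k hunks plug the same leak: the pkt_addr entries stored in the mgmt_pending_tx IDR were removed but never freed, and the cleanup path removed them without the lock guarding the IDR. Since idr_remove() returns the entry that was stored, each site can now free it:

  spin_lock_bh(&ar->data_lock);
  pkt_addr = idr_remove(&wmi->mgmt_pending_tx, msdu_id);
  spin_unlock_bh(&ar->data_lock);

  kfree(pkt_addr);   /* kfree(NULL) is a no-op if the id was absent */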
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index c087d8a0f5b255..40088e62572e12 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -5291,8 +5291,11 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
+ hal_status == HAL_TLV_STATUS_PPDU_DONE) {
+ rx_mon_stats->status_ppdu_done++;
+ pmon->mon_ppdu_status = DP_PPDU_STATUS_DONE;
+- ath11k_dp_rx_mon_dest_process(ar, mac_id, budget, napi);
+- pmon->mon_ppdu_status = DP_PPDU_STATUS_START;
++ if (!ab->hw_params.full_monitor_mode) {
++ ath11k_dp_rx_mon_dest_process(ar, mac_id,
++ budget, napi);
++ pmon->mon_ppdu_status = DP_PPDU_STATUS_START;
++ }
+ }
+
+ if (ppdu_info->peer_id == HAL_INVALID_PEERID ||
+diff --git a/drivers/net/wireless/broadcom/brcm80211/Kconfig b/drivers/net/wireless/broadcom/brcm80211/Kconfig
+index 3a1a35b5672f1a..19d0c003f62626 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/Kconfig
++++ b/drivers/net/wireless/broadcom/brcm80211/Kconfig
+@@ -27,6 +27,7 @@ source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig"
+ config BRCM_TRACING
+ bool "Broadcom device tracing"
+ depends on BRCMSMAC || BRCMFMAC
++ depends on TRACING
+ help
+ If you say Y here, the Broadcom wireless drivers will register
+ with ftrace to dump event information into the trace ringbuffer.
+diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
+index 9d33a66a49b593..958dd4f9bc6920 100644
+--- a/drivers/net/wireless/intel/iwlegacy/common.c
++++ b/drivers/net/wireless/intel/iwlegacy/common.c
+@@ -3122,6 +3122,7 @@ il_enqueue_hcmd(struct il_priv *il, struct il_host_cmd *cmd)
+ struct il_cmd_meta *out_meta;
+ dma_addr_t phys_addr;
+ unsigned long flags;
++ u8 *out_payload;
+ u32 idx;
+ u16 fix_size;
+
+@@ -3157,6 +3158,16 @@ il_enqueue_hcmd(struct il_priv *il, struct il_host_cmd *cmd)
+ out_cmd = txq->cmd[idx];
+ out_meta = &txq->meta[idx];
+
++ /* The payload is in the same place in regular and huge
++ * command buffers, but we need to let the compiler know when
++ * we're using a larger payload buffer to avoid "field-
++ * spanning write" warnings at run-time for huge commands.
++ */
++ if (cmd->flags & CMD_SIZE_HUGE)
++ out_payload = ((struct il_device_cmd_huge *)out_cmd)->cmd.payload;
++ else
++ out_payload = out_cmd->cmd.payload;
++
+ if (WARN_ON(out_meta->flags & CMD_MAPPED)) {
+ spin_unlock_irqrestore(&il->hcmd_lock, flags);
+ return -ENOSPC;
+@@ -3170,7 +3181,7 @@ il_enqueue_hcmd(struct il_priv *il, struct il_host_cmd *cmd)
+ out_meta->callback = cmd->callback;
+
+ out_cmd->hdr.cmd = cmd->id;
+- memcpy(&out_cmd->cmd.payload, cmd->data, cmd->len);
++ memcpy(out_payload, cmd->data, cmd->len);
+
+ /* At this point, the out_cmd now has all of the incoming cmd
+ * information */
+@@ -4962,6 +4973,8 @@ il_pci_resume(struct device *device)
+ */
+ pci_write_config_byte(pdev, PCI_CFG_RETRY_TIMEOUT, 0x00);
+
++ _il_wr(il, CSR_INT, 0xffffffff);
++ _il_wr(il, CSR_FH_INT_STATUS, 0xffffffff);
+ il_enable_interrupts(il);
+
+ if (!(_il_rd(il, CSR_GP_CNTRL) & CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
+diff --git a/drivers/net/wireless/intel/iwlegacy/common.h b/drivers/net/wireless/intel/iwlegacy/common.h
+index 69687fcf963fc1..027dae5619a371 100644
+--- a/drivers/net/wireless/intel/iwlegacy/common.h
++++ b/drivers/net/wireless/intel/iwlegacy/common.h
+@@ -560,6 +560,18 @@ struct il_device_cmd {
+
+ #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct il_device_cmd))
+
++/**
++ * struct il_device_cmd_huge
++ *
++ * For use when sending huge commands.
++ */
++struct il_device_cmd_huge {
++ struct il_cmd_header hdr; /* uCode API */
++ union {
++ u8 payload[IL_MAX_CMD_SIZE - sizeof(struct il_cmd_header)];
++ } __packed cmd;
++} __packed;
++
+ struct il_host_cmd {
+ const void *data;
+ unsigned long reply_page;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index aaaabd67f9593c..3709039a294d8b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1413,25 +1413,35 @@ _iwl_op_mode_start(struct iwl_drv *drv, struct iwlwifi_opmode_table *op)
+ const struct iwl_op_mode_ops *ops = op->ops;
+ struct dentry *dbgfs_dir = NULL;
+ struct iwl_op_mode *op_mode = NULL;
++ int retry, max_retry = !!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY;
+
+ /* also protects start/stop from racing against each other */
+ lockdep_assert_held(&iwlwifi_opmode_table_mtx);
+
++ for (retry = 0; retry <= max_retry; retry++) {
++
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+- drv->dbgfs_op_mode = debugfs_create_dir(op->name,
+- drv->dbgfs_drv);
+- dbgfs_dir = drv->dbgfs_op_mode;
++ drv->dbgfs_op_mode = debugfs_create_dir(op->name,
++ drv->dbgfs_drv);
++ dbgfs_dir = drv->dbgfs_op_mode;
+ #endif
+
+- op_mode = ops->start(drv->trans, drv->trans->cfg,
+- &drv->fw, dbgfs_dir);
+- if (op_mode)
+- return op_mode;
++ op_mode = ops->start(drv->trans, drv->trans->cfg,
++ &drv->fw, dbgfs_dir);
++
++ if (op_mode)
++ return op_mode;
++
++ if (test_bit(STATUS_TRANS_DEAD, &drv->trans->status))
++ break;
++
++ IWL_ERR(drv, "retry init count %d\n", retry);
+
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+- debugfs_remove_recursive(drv->dbgfs_op_mode);
+- drv->dbgfs_op_mode = NULL;
++ debugfs_remove_recursive(drv->dbgfs_op_mode);
++ drv->dbgfs_op_mode = NULL;
+ #endif
++ }
+
+ return NULL;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.h b/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
+index 1549ff42954978..6a1d31892417b4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
+@@ -98,6 +98,9 @@ void iwl_drv_stop(struct iwl_drv *drv);
+ #define VISIBLE_IF_IWLWIFI_KUNIT static
+ #endif
+
++/* max retry for init flow */
++#define IWL_MAX_INIT_RETRY 2
++
+ #define FW_NAME_PRE_BUFSIZE 64
+ struct iwl_trans;
+ const char *iwl_drv_get_fwname_pre(struct iwl_trans *trans, char *buf);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 08c4898c8f1a33..49d5278d078a58 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1288,8 +1288,8 @@ static void iwl_mvm_disconnect_iterator(void *data, u8 *mac,
+ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags)
+ {
+ u32 error_log_size = mvm->fw->ucode_capa.error_log_size;
++ u32 status = 0;
+ int ret;
+- u32 resp;
+
+ struct iwl_fw_error_recovery_cmd recovery_cmd = {
+ .flags = cpu_to_le32(flags),
+@@ -1297,7 +1297,6 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags)
+ };
+ struct iwl_host_cmd host_cmd = {
+ .id = WIDE_ID(SYSTEM_GROUP, FW_ERROR_RECOVERY_CMD),
+- .flags = CMD_WANT_SKB,
+ .data = {&recovery_cmd, },
+ .len = {sizeof(recovery_cmd), },
+ };
+@@ -1317,7 +1316,7 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags)
+ recovery_cmd.buf_size = cpu_to_le32(error_log_size);
+ }
+
+- ret = iwl_mvm_send_cmd(mvm, &host_cmd);
++ ret = iwl_mvm_send_cmd_status(mvm, &host_cmd, &status);
+ kfree(mvm->error_recovery_buf);
+ mvm->error_recovery_buf = NULL;
+
+@@ -1328,11 +1327,10 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags)
+
+ /* skb respond is only relevant in ERROR_RECOVERY_UPDATE_DB */
+ if (flags & ERROR_RECOVERY_UPDATE_DB) {
+- resp = le32_to_cpu(*(__le32 *)host_cmd.resp_pkt->data);
+- if (resp) {
++ if (status) {
+ IWL_ERR(mvm,
+ "Failed to send recovery cmd blob was invalid %d\n",
+- resp);
++ status);
+
+ ieee80211_iterate_interfaces(mvm->hw, 0,
+ iwl_mvm_disconnect_iterator,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 1ebcc6417ecef1..63b2c6fe3f8abd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1292,12 +1292,14 @@ int iwl_mvm_mac_start(struct ieee80211_hw *hw)
+ {
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ int ret;
++ int retry, max_retry = 0;
+
+ mutex_lock(&mvm->mutex);
+
+ /* we are starting the mac not in error flow, and restart is enabled */
+ if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) &&
+ iwlwifi_mod_params.fw_restart) {
++ max_retry = IWL_MAX_INIT_RETRY;
+ /*
+ * This will prevent mac80211 recovery flows to trigger during
+ * init failures
+@@ -1305,7 +1307,13 @@ int iwl_mvm_mac_start(struct ieee80211_hw *hw)
+ set_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
+ }
+
+- ret = __iwl_mvm_mac_start(mvm);
++ for (retry = 0; retry <= max_retry; retry++) {
++ ret = __iwl_mvm_mac_start(mvm);
++ if (!ret)
++ break;
++
++ IWL_ERR(mvm, "mac start retry %d\n", retry);
++ }
+ clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
+
+ mutex_unlock(&mvm->mutex);
+@@ -1951,7 +1959,6 @@ static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw,
+ mvm->p2p_device_vif = NULL;
+ }
+
+- iwl_mvm_unset_link_mapping(mvm, vif, &vif->bss_conf);
+ iwl_mvm_mac_ctxt_remove(mvm, vif);
+
+ RCU_INIT_POINTER(mvm->vif_id_to_mac[mvmvif->id], NULL);
+@@ -1960,6 +1967,7 @@ static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw,
+ mvm->monitor_on = false;
+
+ out:
++ iwl_mvm_unset_link_mapping(mvm, vif, &vif->bss_conf);
+ if (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC) {
+ iwl_mvm_dealloc_int_sta(mvm, &mvmvif->deflink.mcast_sta);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+index 3c99396ad36922..c4ccc353a8fd43 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+@@ -41,8 +41,6 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
+ /* reset deflink MLO parameters */
+ mvmvif->deflink.fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
+ mvmvif->deflink.active = 0;
+- /* the first link always points to the default one */
+- mvmvif->link[0] = &mvmvif->deflink;
+
+ ret = iwl_mvm_mld_mac_ctxt_add(mvm, vif);
+ if (ret)
+@@ -60,9 +58,19 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
+ IEEE80211_VIF_SUPPORTS_CQM_RSSI;
+ }
+
+- ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
+- if (ret)
+- goto out_free_bf;
++ /* We want link[0] to point to the default link, unless we have MLO and
++ * in this case this will be modified later by .change_vif_links()
++ * If we are in the restart flow with an MLD connection, we will wait
++ * to .change_vif_links() to setup the links.
++ */
++ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
++ !ieee80211_vif_is_mld(vif)) {
++ mvmvif->link[0] = &mvmvif->deflink;
++
++ ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
++ if (ret)
++ goto out_free_bf;
++ }
+
+ /* Save a pointer to p2p device vif, so it can later be used to
+ * update the p2p device MAC when a GO is started/stopped
+@@ -347,11 +355,6 @@ __iwl_mvm_mld_assign_vif_chanctx(struct iwl_mvm *mvm,
+ rcu_read_unlock();
+ }
+
+- if (vif->type == NL80211_IFTYPE_STATION)
+- iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif,
+- link_conf,
+- false);
+-
+ /* then activate */
+ ret = iwl_mvm_link_changed(mvm, vif, link_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE |
+@@ -360,6 +363,11 @@ __iwl_mvm_mld_assign_vif_chanctx(struct iwl_mvm *mvm,
+ if (ret)
+ goto out;
+
++ if (vif->type == NL80211_IFTYPE_STATION)
++ iwl_mvm_send_ap_tx_power_constraint_cmd(mvm, vif,
++ link_conf,
++ false);
++
+ /*
+ * Power state must be updated before quotas,
+ * otherwise fw will complain.
+@@ -1188,7 +1196,11 @@ iwl_mvm_mld_change_vif_links(struct ieee80211_hw *hw,
+
+ mutex_lock(&mvm->mutex);
+
+- if (old_links == 0) {
++ /* If we're in RESTART flow, the default link wasn't added in
++ * drv_add_interface(), and link[0] doesn't point to it.
++ */
++ if (old_links == 0 && !test_bit(IWL_MVM_STATUS_IN_HW_RESTART,
++ &mvm->status)) {
+ err = iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
+ if (err)
+ goto out_err;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 3a9018595ea909..cfc50c7b294922 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1774,7 +1774,7 @@ iwl_mvm_umac_scan_cfg_channels_v7_6g(struct iwl_mvm *mvm,
+ &cp->channel_config[ch_cnt];
+
+ u32 s_ssid_bitmap = 0, bssid_bitmap = 0, flags = 0;
+- u8 j, k, n_s_ssids = 0, n_bssids = 0;
++ u8 k, n_s_ssids = 0, n_bssids = 0;
+ u8 max_s_ssids, max_bssids;
+ bool force_passive = false, found = false, allow_passive = true,
+ unsolicited_probe_on_chan = false, psc_no_listen = false;
+@@ -1799,7 +1799,7 @@ iwl_mvm_umac_scan_cfg_channels_v7_6g(struct iwl_mvm *mvm,
+ cfg->v5.iter_count = 1;
+ cfg->v5.iter_interval = 0;
+
+- for (j = 0; j < params->n_6ghz_params; j++) {
++ for (u32 j = 0; j < params->n_6ghz_params; j++) {
+ s8 tmp_psd_20;
+
+ if (!(scan_6ghz_params[j].channel_idx == i))
+@@ -1873,7 +1873,7 @@ iwl_mvm_umac_scan_cfg_channels_v7_6g(struct iwl_mvm *mvm,
+ * SSID.
+ * TODO: improve this logic
+ */
+- for (j = 0; j < params->n_6ghz_params; j++) {
++ for (u32 j = 0; j < params->n_6ghz_params; j++) {
+ if (!(scan_6ghz_params[j].channel_idx == i))
+ continue;
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192du/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192du/sw.c
+index d069a81ac61752..cc699efa9c7938 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192du/sw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192du/sw.c
+@@ -352,7 +352,6 @@ static const struct usb_device_id rtl8192d_usb_ids[] = {
+ {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x8194, rtl92du_hal_cfg)},
+ {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x8111, rtl92du_hal_cfg)},
+ {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x0193, rtl92du_hal_cfg)},
+- {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0x8171, rtl92du_hal_cfg)},
+ {RTL_USB_DEVICE(USB_VENDOR_ID_REALTEK, 0xe194, rtl92du_hal_cfg)},
+ {RTL_USB_DEVICE(0x2019, 0xab2c, rtl92du_hal_cfg)},
+ {RTL_USB_DEVICE(0x2019, 0xab2d, rtl92du_hal_cfg)},
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index 02afeb3acce469..5aef7fa378788c 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -3026,24 +3026,54 @@ static void rtw89_pci_declaim_device(struct rtw89_dev *rtwdev,
+ pci_disable_device(pdev);
+ }
+
+-static void rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev)
++static bool rtw89_pci_chip_is_manual_dac(struct rtw89_dev *rtwdev)
+ {
+- struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- if (!rtwpci->enable_dac)
+- return;
+-
+ switch (chip->chip_id) {
+ case RTL8852A:
+ case RTL8852B:
+ case RTL8851B:
+ case RTL8852BT:
+- break;
++ return true;
+ default:
+- return;
++ return false;
++ }
++}
++
++static bool rtw89_pci_is_dac_compatible_bridge(struct rtw89_dev *rtwdev)
++{
++ struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
++ struct pci_dev *bridge = pci_upstream_bridge(rtwpci->pdev);
++
++ if (!rtw89_pci_chip_is_manual_dac(rtwdev))
++ return true;
++
++ if (!bridge)
++ return false;
++
++ switch (bridge->vendor) {
++ case PCI_VENDOR_ID_INTEL:
++ return true;
++ case PCI_VENDOR_ID_ASMEDIA:
++ if (bridge->device == 0x2806)
++ return true;
++ break;
+ }
+
++ return false;
++}
++
++static void rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev)
++{
++ struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
++
++ if (!rtwpci->enable_dac)
++ return;
++
++ if (!rtw89_pci_chip_is_manual_dac(rtwdev))
++ return;
++
+ rtw89_pci_config_byte_set(rtwdev, RTW89_PCIE_L1_CTRL, RTW89_PCIE_BIT_EN_64BITS);
+ }
+
+@@ -3061,6 +3091,9 @@ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+ goto err;
+ }
+
++ if (!rtw89_pci_is_dac_compatible_bridge(rtwdev))
++ goto no_dac;
++
+ ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36));
+ if (!ret) {
+ rtwpci->enable_dac = true;
+@@ -3073,6 +3106,7 @@ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+ goto err_release_regions;
+ }
+ }
++no_dac:
+
+ resource_len = pci_resource_len(pdev, bar_id);
+ rtwpci->mmap = pci_iomap(pdev, bar_id, resource_len);
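+
The rtw89 change stops asking for 36-bit DMA addressing unless the upstream PCIe bridge is one known to cope with it, falling back to the ordinary 32-bit mask otherwise. The underlying negotiation is the standard wide-mask-then-fallback pattern:

  /* prefer the wide mask; fall back to 32-bit when the platform
   * (here: an unvetted bridge) can't be trusted with it */
  if (dac_ok && !dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36))) {
          rtwpci->enable_dac = true;
  } else {
          ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
          if (ret)
                  goto err_release_regions;
  }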
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index a6fb1359a7e148..89ad4217f86068 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -90,6 +90,17 @@ module_param(apst_secondary_latency_tol_us, ulong, 0644);
+ MODULE_PARM_DESC(apst_secondary_latency_tol_us,
+ "secondary APST latency tolerance in us");
+
++/*
++ * Older kernels didn't enable protection information if it was at an offset.
++ * Newer kernels do, so it breaks reads on the upgrade if such formats were
++ * used in prior kernels since the metadata written did not contain a valid
++ * checksum.
++ */
++static bool disable_pi_offsets = false;
++module_param(disable_pi_offsets, bool, 0444);
++MODULE_PARM_DESC(disable_pi_offsets,
++ "disable protection information if it has an offset");
++
+ /*
+ * nvme_wq - hosts nvme related works that are not reset or delete
+ * nvme_reset_wq - hosts nvme reset works
+@@ -1921,8 +1932,12 @@ static void nvme_configure_metadata(struct nvme_ctrl *ctrl,
+
+ if (head->pi_size && head->ms >= head->pi_size)
+ head->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
+- if (!(id->dps & NVME_NS_DPS_PI_FIRST))
+- info->pi_offset = head->ms - head->pi_size;
++ if (!(id->dps & NVME_NS_DPS_PI_FIRST)) {
++ if (disable_pi_offsets)
++ head->pi_type = 0;
++ else
++ info->pi_offset = head->ms - head->pi_size;
++ }
+
+ if (ctrl->ops->flags & NVME_F_FABRICS) {
+ /*
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index 15c93ce07e2636..2cb35c4528a93a 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -423,10 +423,13 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
+ struct io_uring_cmd *ioucmd = req->end_io_data;
+ struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
+
+- if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
++ if (nvme_req(req)->flags & NVME_REQ_CANCELLED) {
+ pdu->status = -EINTR;
+- else
++ } else {
+ pdu->status = nvme_req(req)->status;
++ if (!pdu->status)
++ pdu->status = blk_status_to_errno(err);
++ }
+ pdu->result = le64_to_cpu(nvme_req(req)->result.u64);
+
+ /*
+diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
+index 8bc3f431c77f60..8c41a47dfed170 100644
+--- a/drivers/nvme/target/auth.c
++++ b/drivers/nvme/target/auth.c
+@@ -103,6 +103,7 @@ int nvmet_setup_dhgroup(struct nvmet_ctrl *ctrl, u8 dhgroup_id)
+ pr_debug("%s: ctrl %d failed to generate private key, err %d\n",
+ __func__, ctrl->cntlid, ret);
+ kfree_sensitive(ctrl->dh_key);
++ ctrl->dh_key = NULL;
+ return ret;
+ }
+ ctrl->dh_keysize = crypto_kpp_maxsize(ctrl->dh_tfm);
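+
The nvmet-auth one-liner is the standard guard against a double free: the error path frees ctrl->dh_key but previously left the stale pointer behind, so a later teardown that also frees dh_key would free it twice. Clearing the pointer makes the second free a no-op:

  kfree_sensitive(ctrl->dh_key);
  ctrl->dh_key = NULL;   /* kfree_sensitive(NULL) is a no-op, so the
                          * destroy path can run safely afterwards */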
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 85ced6958d6d11..51407c376a2227 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1067,8 +1067,15 @@ static void pci_std_enable_acs(struct pci_dev *dev, struct pci_acs *caps)
+ static void pci_enable_acs(struct pci_dev *dev)
+ {
+ struct pci_acs caps;
++ bool enable_acs = false;
+ int pos;
+
++ /* If an iommu is present we start with kernel default caps */
++ if (pci_acs_enable) {
++ if (pci_dev_specific_enable_acs(dev))
++ enable_acs = true;
++ }
++
+ pos = dev->acs_cap;
+ if (!pos)
+ return;
+@@ -1077,11 +1084,8 @@ static void pci_enable_acs(struct pci_dev *dev)
+ pci_read_config_word(dev, pos + PCI_ACS_CTRL, &caps.ctrl);
+ caps.fw_ctrl = caps.ctrl;
+
+- /* If an iommu is present we start with kernel default caps */
+- if (pci_acs_enable) {
+- if (pci_dev_specific_enable_acs(dev))
+- pci_std_enable_acs(dev, &caps);
+- }
++ if (enable_acs)
++ pci_std_enable_acs(dev, &caps);
+
+ /*
+ * Always apply caps from the command line, even if there is no iommu.
+diff --git a/drivers/phy/freescale/phy-fsl-imx8m-pcie.c b/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
+index 11fcb1867118c3..e98361dcdeadfe 100644
+--- a/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
++++ b/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
+@@ -141,11 +141,6 @@ static int imx8_pcie_phy_power_on(struct phy *phy)
+ IMX8MM_GPR_PCIE_REF_CLK_PLL);
+ usleep_range(100, 200);
+
+- /* Do the PHY common block reset */
+- regmap_update_bits(imx8_phy->iomuxc_gpr, IOMUXC_GPR14,
+- IMX8MM_GPR_PCIE_CMN_RST,
+- IMX8MM_GPR_PCIE_CMN_RST);
+-
+ switch (imx8_phy->drvdata->variant) {
+ case IMX8MP:
+ reset_control_deassert(imx8_phy->perst);
+@@ -156,6 +151,11 @@ static int imx8_pcie_phy_power_on(struct phy *phy)
+ break;
+ }
+
++ /* Do the PHY common block reset */
++ regmap_update_bits(imx8_phy->iomuxc_gpr, IOMUXC_GPR14,
++ IMX8MM_GPR_PCIE_CMN_RST,
++ IMX8MM_GPR_PCIE_CMN_RST);
++
+ /* Polling to check the phy is ready or not. */
+ ret = readl_poll_timeout(imx8_phy->base + IMX8MM_PCIE_PHY_CMN_REG075,
+ val, val == ANA_PLL_DONE, 10, 20000);
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb-legacy.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb-legacy.c
+index 6d0ba39c19431e..8bf951b0490cfd 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb-legacy.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb-legacy.c
+@@ -1248,6 +1248,7 @@ static int qmp_usb_legacy_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ qmp->dev = dev;
++ dev_set_drvdata(dev, qmp);
+
+ qmp->cfg = of_device_get_match_data(dev);
+ if (!qmp->cfg)
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index 9b0eb87b16804a..f91446d1d2c358 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -2179,6 +2179,7 @@ static int qmp_usb_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ qmp->dev = dev;
++ dev_set_drvdata(dev, qmp);
+
+ qmp->cfg = of_device_get_match_data(dev);
+ if (!qmp->cfg)
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usbc.c b/drivers/phy/qualcomm/phy-qcom-qmp-usbc.c
+index 5cbc5fd529ebe1..dea3456f88b1fa 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usbc.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usbc.c
+@@ -1049,6 +1049,7 @@ static int qmp_usbc_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ qmp->dev = dev;
++ dev_set_drvdata(dev, qmp);
+
+ qmp->orientation = TYPEC_ORIENTATION_NORMAL;
+
+diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
+index 733a36f67fbc69..1f4c5389676ace 100644
+--- a/drivers/powercap/intel_rapl_msr.c
++++ b/drivers/powercap/intel_rapl_msr.c
+@@ -147,6 +147,7 @@ static const struct x86_cpu_id pl4_support_ids[] = {
+ X86_MATCH_VFM(INTEL_RAPTORLAKE_P, NULL),
+ X86_MATCH_VFM(INTEL_METEORLAKE, NULL),
+ X86_MATCH_VFM(INTEL_METEORLAKE_L, NULL),
++ X86_MATCH_VFM(INTEL_ARROWLAKE_U, NULL),
+ {}
+ };
+
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index a9d8a9c62663e3..e41698218e62f3 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -3652,7 +3652,7 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
+ enum dma_data_direction dir;
+ struct scsi_data_buffer *sdb = &scp->sdb;
+ u8 *fsp;
+- int i;
++ int i, total = 0;
+
+ /*
+ * Even though reads are inherently atomic (in this driver), we expect
+@@ -3689,18 +3689,16 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
+ fsp + (block * sdebug_sector_size),
+ sdebug_sector_size, sg_skip, do_write);
+ sdeb_data_sector_unlock(sip, do_write);
+- if (ret != sdebug_sector_size) {
+- ret += (i * sdebug_sector_size);
++ total += ret;
++ if (ret != sdebug_sector_size)
+ break;
+- }
+ sg_skip += sdebug_sector_size;
+ if (++block >= sdebug_store_sectors)
+ block = 0;
+ }
+- ret = num * sdebug_sector_size;
+ sdeb_data_unlock(sip, atomic);
+
+- return ret;
++ return total;
+ }
+
+ /* Returns number of bytes copied or -1 if error. */
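
A note on the do_device_access() change above: the function now returns the
number of bytes actually copied, accumulated per sector, instead of assuming
all num sectors transferred in full. A minimal standalone sketch of that
accounting pattern, with a hypothetical copy_sector() in place of
sg_copy_buffer():

    #include <stddef.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512

    /* Hypothetical per-sector copy that can return a short count. */
    static size_t copy_sector(size_t i)
    {
        return i == 3 ? 100 : SECTOR_SIZE; /* pretend sector 3 comes up short */
    }

    static size_t do_access(size_t num)
    {
        size_t total = 0;

        for (size_t i = 0; i < num; i++) {
            size_t ret = copy_sector(i);

            total += ret;              /* count what actually moved */
            if (ret != SECTOR_SIZE)
                break;                 /* short copy: stop, keep the total */
        }
        return total;
    }

    int main(void)
    {
        /* Prints 1636 (3 * 512 + 100), not 8 * 512 = 4096. */
        printf("copied %zu bytes\n", do_access(8));
        return 0;
    }
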
+diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
+index 7d088b8da07578..2270732b353c83 100644
+--- a/drivers/scsi/scsi_transport_fc.c
++++ b/drivers/scsi/scsi_transport_fc.c
+@@ -1255,7 +1255,7 @@ static ssize_t fc_rport_set_marginal_state(struct device *dev,
+ */
+ if (rport->port_state == FC_PORTSTATE_ONLINE)
+ rport->port_state = port_state;
+- else
++ else if (port_state != rport->port_state)
+ return -EINVAL;
+ } else if (port_state == FC_PORTSTATE_ONLINE) {
+ /*
+@@ -1265,7 +1265,7 @@ static ssize_t fc_rport_set_marginal_state(struct device *dev,
+ */
+ if (rport->port_state == FC_PORTSTATE_MARGINAL)
+ rport->port_state = port_state;
+- else
++ else if (port_state != rport->port_state)
+ return -EINVAL;
+ } else
+ return -EINVAL;
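
The fc_rport_set_marginal_state() hunks above make re-requesting the port's
current state a no-op rather than -EINVAL. A reduced sketch of the resulting
transition logic, with a two-value enum standing in for the FC port states:

    #include <errno.h>
    #include <stdio.h>

    enum port_state { PORT_ONLINE, PORT_MARGINAL };

    /* Requesting the state the port is already in succeeds as a no-op. */
    static int set_marginal_state(enum port_state *cur, enum port_state want)
    {
        if (want == PORT_MARGINAL) {
            if (*cur == PORT_ONLINE)
                *cur = want;
            else if (want != *cur)
                return -EINVAL;
        } else {
            if (*cur == PORT_MARGINAL)
                *cur = want;
            else if (want != *cur)
                return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        enum port_state st = PORT_MARGINAL;

        /* Re-requesting MARGINAL prints 0 now, where it used to fail. */
        printf("%d\n", set_marginal_state(&st, PORT_MARGINAL));
        return 0;
    }
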
+diff --git a/drivers/soc/qcom/pmic_glink.c b/drivers/soc/qcom/pmic_glink.c
+index 9606222993fd78..baa4ac6704a901 100644
+--- a/drivers/soc/qcom/pmic_glink.c
++++ b/drivers/soc/qcom/pmic_glink.c
+@@ -4,6 +4,7 @@
+ * Copyright (c) 2022, Linaro Ltd
+ */
+ #include <linux/auxiliary_bus.h>
++#include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -13,6 +14,8 @@
+ #include <linux/soc/qcom/pmic_glink.h>
+ #include <linux/spinlock.h>
+
++#define PMIC_GLINK_SEND_TIMEOUT (5 * HZ)
++
+ enum {
+ PMIC_GLINK_CLIENT_BATT = 0,
+ PMIC_GLINK_CLIENT_ALTMODE,
+@@ -112,13 +115,29 @@ EXPORT_SYMBOL_GPL(pmic_glink_client_register);
+ int pmic_glink_send(struct pmic_glink_client *client, void *data, size_t len)
+ {
+ struct pmic_glink *pg = client->pg;
++ bool timeout_reached = false;
++ unsigned long start;
+ int ret;
+
+ mutex_lock(&pg->state_lock);
+- if (!pg->ept)
++ if (!pg->ept) {
+ ret = -ECONNRESET;
+- else
+- ret = rpmsg_send(pg->ept, data, len);
++ } else {
++ start = jiffies;
++ for (;;) {
++ ret = rpmsg_send(pg->ept, data, len);
++ if (ret != -EAGAIN)
++ break;
++
++ if (timeout_reached) {
++ ret = -ETIMEDOUT;
++ break;
++ }
++
++ usleep_range(1000, 5000);
++ timeout_reached = time_after(jiffies, start + PMIC_GLINK_SEND_TIMEOUT);
++ }
++ }
+ mutex_unlock(&pg->state_lock);
+
+ return ret;
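
The pmic_glink_send() loop above retries rpmsg_send() while it returns
-EAGAIN, sleeping briefly between attempts and giving up once a 5-second
deadline passes. A user-space sketch of the same bounded-retry shape, with a
hypothetical send_once() in place of rpmsg_send():

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical transport: busy (-EAGAIN) for the first few calls,
       then accepts the message -- a stand-in for rpmsg_send(). */
    static int send_once(const void *data, size_t len)
    {
        static int busy = 3;
        (void)data; (void)len;
        return busy-- > 0 ? -EAGAIN : 0;
    }

    static int send_with_timeout(const void *data, size_t len, int timeout_sec)
    {
        time_t start = time(NULL);
        bool timeout_reached = false;

        for (;;) {
            int ret = send_once(data, len);

            if (ret != -EAGAIN)
                return ret;            /* success or a hard error */
            if (timeout_reached)
                return -ETIMEDOUT;     /* deadline passed on the last pass */
            usleep(1000);              /* brief back-off between attempts */
            timeout_reached = (time(NULL) - start) >= timeout_sec;
        }
    }

    int main(void)
    {
        char msg[] = "hello";

        printf("send: %d\n", send_with_timeout(msg, sizeof(msg), 5));
        return 0;
    }

Note the ordering: the deadline flag is evaluated after the sleep, so one
final attempt is always made once the timeout elapses, matching the driver.
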
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 191de1917f8316..3fa990fb59c78b 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1003,6 +1003,7 @@ static int dspi_setup(struct spi_device *spi)
+ u32 cs_sck_delay = 0, sck_cs_delay = 0;
+ struct fsl_dspi_platform_data *pdata;
+ unsigned char pasc = 0, asc = 0;
++ struct gpio_desc *gpio_cs;
+ struct chip_data *chip;
+ unsigned long clkrate;
+ bool cs = true;
+@@ -1077,7 +1078,10 @@ static int dspi_setup(struct spi_device *spi)
+ chip->ctar_val |= SPI_CTAR_LSBFE;
+ }
+
+- gpiod_direction_output(spi_get_csgpiod(spi, 0), false);
++ gpio_cs = spi_get_csgpiod(spi, 0);
++ if (gpio_cs)
++ gpiod_direction_output(gpio_cs, false);
++
+ dspi_deassert_cs(spi, &cs);
+
+ spi_set_ctldata(spi, chip);
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index 6f4057330444d5..fa967be4f9a17e 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -1108,6 +1108,11 @@ static int spi_geni_probe(struct platform_device *pdev)
+ init_completion(&mas->tx_reset_done);
+ init_completion(&mas->rx_reset_done);
+ spin_lock_init(&mas->lock);
++
++ ret = geni_icc_get(&mas->se, NULL);
++ if (ret)
++ return ret;
++
+ pm_runtime_use_autosuspend(&pdev->dev);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, 250);
+ ret = devm_pm_runtime_enable(dev);
+@@ -1117,9 +1122,6 @@ static int spi_geni_probe(struct platform_device *pdev)
+ if (device_property_read_bool(&pdev->dev, "spi-slave"))
+ spi->target = true;
+
+- ret = geni_icc_get(&mas->se, NULL);
+- if (ret)
+- return ret;
+ /* Set the bus quota to a reasonable value for register access */
+ mas->se.icc_paths[GENI_TO_CORE].avg_bw = Bps_to_icc(CORE_2X_50_MHZ);
+ mas->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
+diff --git a/drivers/staging/iio/frequency/ad9832.c b/drivers/staging/iio/frequency/ad9832.c
+index 6c390c4eb26dea..492612e8f8bad5 100644
+--- a/drivers/staging/iio/frequency/ad9832.c
++++ b/drivers/staging/iio/frequency/ad9832.c
+@@ -129,12 +129,15 @@ static unsigned long ad9832_calc_freqreg(unsigned long mclk, unsigned long fout)
+ static int ad9832_write_frequency(struct ad9832_state *st,
+ unsigned int addr, unsigned long fout)
+ {
++ unsigned long clk_freq;
+ unsigned long regval;
+
+- if (fout > (clk_get_rate(st->mclk) / 2))
++ clk_freq = clk_get_rate(st->mclk);
++
++ if (!clk_freq || fout > (clk_freq / 2))
+ return -EINVAL;
+
+- regval = ad9832_calc_freqreg(clk_get_rate(st->mclk), fout);
++ regval = ad9832_calc_freqreg(clk_freq, fout);
+
+ st->freq_data[0] = cpu_to_be16((AD9832_CMD_FRE8BITSW << CMD_SHIFT) |
+ (addr << ADD_SHIFT) |
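
The ad9832_write_frequency() fix above reads clk_get_rate() once into a
local, rejects a zero rate (which the old code would divide by) and reuses
the cached value for the register calculation. The guard in isolation, with
a stub clock getter in place of the clk API:

    #include <errno.h>
    #include <stdio.h>

    /* Stub clock getter; 0 models an unset/unavailable clock. */
    static unsigned long get_clk_rate(void)
    {
        return 0;
    }

    static int write_frequency(unsigned long fout)
    {
        unsigned long clk_freq = get_clk_rate();   /* read once */

        if (!clk_freq || fout > clk_freq / 2)      /* also rejects rate 0 */
            return -EINVAL;
        /* ... compute the register value from clk_freq, not a re-read ... */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", write_frequency(1000));     /* -EINVAL: zero clock */
        return 0;
    }
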
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c
+index e9aa9e23aab9ec..bde2cc386afdda 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c
+@@ -13,48 +13,12 @@ static struct rapl_if_priv rapl_mmio_priv;
+
+ static const struct rapl_mmio_regs rapl_mmio_default = {
+ .reg_unit = 0x5938,
+- .regs[RAPL_DOMAIN_PACKAGE] = { 0x59a0, 0x593c, 0x58f0, 0, 0x5930},
++ .regs[RAPL_DOMAIN_PACKAGE] = { 0x59a0, 0x593c, 0x58f0, 0, 0x5930, 0x59b0},
+ .regs[RAPL_DOMAIN_DRAM] = { 0x58e0, 0x58e8, 0x58ec, 0, 0},
+- .limits[RAPL_DOMAIN_PACKAGE] = BIT(POWER_LIMIT2),
++ .limits[RAPL_DOMAIN_PACKAGE] = BIT(POWER_LIMIT2) | BIT(POWER_LIMIT4),
+ .limits[RAPL_DOMAIN_DRAM] = BIT(POWER_LIMIT2),
+ };
+
+-static int rapl_mmio_cpu_online(unsigned int cpu)
+-{
+- struct rapl_package *rp;
+-
+- /* mmio rapl supports package 0 only for now */
+- if (topology_physical_package_id(cpu))
+- return 0;
+-
+- rp = rapl_find_package_domain_cpuslocked(cpu, &rapl_mmio_priv, true);
+- if (!rp) {
+- rp = rapl_add_package_cpuslocked(cpu, &rapl_mmio_priv, true);
+- if (IS_ERR(rp))
+- return PTR_ERR(rp);
+- }
+- cpumask_set_cpu(cpu, &rp->cpumask);
+- return 0;
+-}
+-
+-static int rapl_mmio_cpu_down_prep(unsigned int cpu)
+-{
+- struct rapl_package *rp;
+- int lead_cpu;
+-
+- rp = rapl_find_package_domain_cpuslocked(cpu, &rapl_mmio_priv, true);
+- if (!rp)
+- return 0;
+-
+- cpumask_clear_cpu(cpu, &rp->cpumask);
+- lead_cpu = cpumask_first(&rp->cpumask);
+- if (lead_cpu >= nr_cpu_ids)
+- rapl_remove_package_cpuslocked(rp);
+- else if (rp->lead_cpu == cpu)
+- rp->lead_cpu = lead_cpu;
+- return 0;
+-}
+-
+ static int rapl_mmio_read_raw(int cpu, struct reg_action *ra)
+ {
+ if (!ra->reg.mmio)
+@@ -82,6 +46,7 @@ static int rapl_mmio_write_raw(int cpu, struct reg_action *ra)
+ int proc_thermal_rapl_add(struct pci_dev *pdev, struct proc_thermal_device *proc_priv)
+ {
+ const struct rapl_mmio_regs *rapl_regs = &rapl_mmio_default;
++ struct rapl_package *rp;
+ enum rapl_domain_reg_id reg;
+ enum rapl_domain_type domain;
+ int ret;
+@@ -109,25 +74,38 @@ int proc_thermal_rapl_add(struct pci_dev *pdev, struct proc_thermal_device *proc
+ return PTR_ERR(rapl_mmio_priv.control_type);
+ }
+
+- ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powercap/rapl:online",
+- rapl_mmio_cpu_online, rapl_mmio_cpu_down_prep);
+- if (ret < 0) {
+- powercap_unregister_control_type(rapl_mmio_priv.control_type);
+- rapl_mmio_priv.control_type = NULL;
+- return ret;
++ /* Register a RAPL package device for package 0 which is always online */
++ rp = rapl_find_package_domain(0, &rapl_mmio_priv, false);
++ if (rp) {
++ ret = -EEXIST;
++ goto err;
++ }
++
++ rp = rapl_add_package(0, &rapl_mmio_priv, false);
++ if (IS_ERR(rp)) {
++ ret = PTR_ERR(rp);
++ goto err;
+ }
+- rapl_mmio_priv.pcap_rapl_online = ret;
+
+ return 0;
++
++err:
++ powercap_unregister_control_type(rapl_mmio_priv.control_type);
++ rapl_mmio_priv.control_type = NULL;
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(proc_thermal_rapl_add);
+
+ void proc_thermal_rapl_remove(void)
+ {
++ struct rapl_package *rp;
++
+ if (IS_ERR_OR_NULL(rapl_mmio_priv.control_type))
+ return;
+
+- cpuhp_remove_state(rapl_mmio_priv.pcap_rapl_online);
++ rp = rapl_find_package_domain(0, &rapl_mmio_priv, false);
++ if (rp)
++ rapl_remove_package(rp);
+ powercap_unregister_control_type(rapl_mmio_priv.control_type);
+ }
+ EXPORT_SYMBOL_GPL(proc_thermal_rapl_remove);
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 721319329afa96..7db9869a9f3fe7 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -516,7 +516,7 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ */
+ tb_retimer_set_inbound_sbtx(port);
+
+- for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++) {
++ for (max = 1, i = 1; i <= TB_MAX_RETIMER_INDEX; i++) {
+ /*
+ * Last retimer is true only for the last on-board
+ * retimer (the one connected directly to the Type-C
+@@ -527,9 +527,10 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ last_idx = i;
+ else if (ret < 0)
+ break;
++
++ max = i;
+ }
+
+- max = i;
+ ret = 0;
+
+ /* Add retimers if they do not exist already */
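
In the tb_retimer_scan() hunks above, max now records the last index that
was actually reached, rather than reusing the loop counter after a break
(which overshoots by one when probing stops early). The off-by-one, reduced
to a toy probe loop with a hypothetical probe():

    #include <stdio.h>

    #define MAX_INDEX 6

    /* Hypothetical probe: indices 1..3 respond, the rest fail. */
    static int probe(int i)
    {
        return i <= 3 ? 0 : -1;
    }

    int main(void)
    {
        int i, max;

        for (max = 1, i = 1; i <= MAX_INDEX; i++) {
            if (probe(i) < 0)
                break;
            max = i;                 /* last index that answered */
        }
        /* A post-loop "max = i" would report 4 here; this prints 3. */
        printf("last reachable index: %d\n", max);
        return 0;
    }
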
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 10e719dd837cec..4f777788e9179c 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -288,6 +288,24 @@ static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
+ device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
+ }
+
++static int tb_switch_tmu_hifi_uni_required(struct device *dev, void *not_used)
++{
++ struct tb_switch *sw = tb_to_switch(dev);
++
++ if (sw && tb_switch_tmu_is_enabled(sw) &&
++ tb_switch_tmu_is_configured(sw, TB_SWITCH_TMU_MODE_HIFI_UNI))
++ return 1;
++
++ return device_for_each_child(dev, NULL,
++ tb_switch_tmu_hifi_uni_required);
++}
++
++static bool tb_tmu_hifi_uni_required(struct tb *tb)
++{
++ return device_for_each_child(&tb->dev, NULL,
++ tb_switch_tmu_hifi_uni_required) == 1;
++}
++
+ static int tb_enable_tmu(struct tb_switch *sw)
+ {
+ int ret;
+@@ -302,12 +320,30 @@ static int tb_enable_tmu(struct tb_switch *sw)
+ ret = tb_switch_tmu_configure(sw,
+ TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI);
+ if (ret == -EOPNOTSUPP) {
+- if (tb_switch_clx_is_enabled(sw, TB_CL1))
+- ret = tb_switch_tmu_configure(sw,
+- TB_SWITCH_TMU_MODE_LOWRES);
+- else
+- ret = tb_switch_tmu_configure(sw,
+- TB_SWITCH_TMU_MODE_HIFI_BI);
++ if (tb_switch_clx_is_enabled(sw, TB_CL1)) {
++ /*
++ * Figure out uni-directional HiFi TMU requirements
++ * currently in the domain. If there are no
++ * uni-directional HiFi requirements we can put the TMU
++ * into LowRes mode.
++ *
++ * Deliberately skip bi-directional HiFi links
++ * as these work independently of other links
++ * (and they do not allow any CL states anyway).
++ */
++ if (tb_tmu_hifi_uni_required(sw->tb))
++ ret = tb_switch_tmu_configure(sw,
++ TB_SWITCH_TMU_MODE_HIFI_UNI);
++ else
++ ret = tb_switch_tmu_configure(sw,
++ TB_SWITCH_TMU_MODE_LOWRES);
++ } else {
++ ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_MODE_HIFI_BI);
++ }
++
++ /* If not supported, fall back to bi-directional HiFi */
++ if (ret == -EOPNOTSUPP)
++ ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_MODE_HIFI_BI);
+ }
+ if (ret)
+ return ret;
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 09408642a6efba..83567388a7b58e 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8224,7 +8224,7 @@ static void ufshcd_update_rtc(struct ufs_hba *hba)
+
+ err = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, QUERY_ATTR_IDN_SECONDS_PASSED,
+ 0, 0, &val);
+- ufshcd_rpm_put_sync(hba);
++ ufshcd_rpm_put(hba);
+
+ if (err)
+ dev_err(hba->dev, "%s: Failed to update rtc %d\n", __func__, err);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index df0c5c4f4508a6..e629b3442640a6 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -654,7 +654,7 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ pm_runtime_put_noidle(&dev->dev);
+
+ if (pci_choose_state(dev, PMSG_SUSPEND) == PCI_D0)
+- pm_runtime_forbid(&dev->dev);
++ pm_runtime_get(&dev->dev);
+ else if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
+ pm_runtime_allow(&dev->dev);
+
+@@ -681,7 +681,9 @@ static void xhci_pci_remove(struct pci_dev *dev)
+
+ xhci->xhc_state |= XHCI_STATE_REMOVING;
+
+- if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
++ if (pci_choose_state(dev, PMSG_SUSPEND) == PCI_D0)
++ pm_runtime_put(&dev->dev);
++ else if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
+ pm_runtime_forbid(&dev->dev);
+
+ if (xhci->shared_hcd) {
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 785183f0b5f9de..4479e949fa64b9 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1718,6 +1718,14 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
+
+ trace_xhci_handle_command(xhci->cmd_ring, &cmd_trb->generic);
+
++ cmd_comp_code = GET_COMP_CODE(le32_to_cpu(event->status));
++
++ /* If CMD ring stopped we own the trbs between enqueue and dequeue */
++ if (cmd_comp_code == COMP_COMMAND_RING_STOPPED) {
++ complete_all(&xhci->cmd_ring_stop_completion);
++ return;
++ }
++
+ cmd_dequeue_dma = xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg,
+ cmd_trb);
+ /*
+@@ -1734,14 +1742,6 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
+
+ cancel_delayed_work(&xhci->cmd_timer);
+
+- cmd_comp_code = GET_COMP_CODE(le32_to_cpu(event->status));
+-
+- /* If CMD ring stopped we own the trbs between enqueue and dequeue */
+- if (cmd_comp_code == COMP_COMMAND_RING_STOPPED) {
+- complete_all(&xhci->cmd_ring_stop_completion);
+- return;
+- }
+-
+ if (cmd->command_trb != xhci->cmd_ring->dequeue) {
+ xhci_err(xhci,
+ "Command completion event does not match command\n");
+diff --git a/drivers/usb/phy/phy.c b/drivers/usb/phy/phy.c
+index 06e0fb23566cef..06f789097989f1 100644
+--- a/drivers/usb/phy/phy.c
++++ b/drivers/usb/phy/phy.c
+@@ -628,7 +628,7 @@ void devm_usb_put_phy(struct device *dev, struct usb_phy *phy)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_usb_phy_release, devm_usb_phy_match, phy);
++ r = devres_release(dev, devm_usb_phy_release, devm_usb_phy_match, phy);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_usb_put_phy);
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index d61b4c74648df6..1eb240604cf6f8 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -2341,6 +2341,7 @@ void typec_port_register_altmodes(struct typec_port *port,
+ altmodes[index] = alt;
+ index++;
+ }
++ fwnode_handle_put(altmodes_node);
+ }
+ EXPORT_SYMBOL_GPL(typec_port_register_altmodes);
+
+diff --git a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec.c b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec.c
+index 501eddb294e432..b80eb2d78d88b4 100644
+--- a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec.c
++++ b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec.c
+@@ -93,8 +93,10 @@ static int qcom_pmic_typec_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ bridge_dev = devm_drm_dp_hpd_bridge_alloc(tcpm->dev, to_of_node(tcpm->tcpc.fwnode));
+- if (IS_ERR(bridge_dev))
+- return PTR_ERR(bridge_dev);
++ if (IS_ERR(bridge_dev)) {
++ ret = PTR_ERR(bridge_dev);
++ goto fwnode_remove;
++ }
+
+ tcpm->tcpm_port = tcpm_register_port(tcpm->dev, &tcpm->tcpc);
+ if (IS_ERR(tcpm->tcpm_port)) {
+@@ -123,7 +125,7 @@ static int qcom_pmic_typec_probe(struct platform_device *pdev)
+ port_unregister:
+ tcpm_unregister_port(tcpm->tcpm_port);
+ fwnode_remove:
+- fwnode_remove_software_node(tcpm->tcpc.fwnode);
++ fwnode_handle_put(tcpm->tcpc.fwnode);
+
+ return ret;
+ }
+@@ -135,7 +137,7 @@ static void qcom_pmic_typec_remove(struct platform_device *pdev)
+ tcpm->pdphy_stop(tcpm);
+ tcpm->port_stop(tcpm);
+ tcpm_unregister_port(tcpm->tcpm_port);
+- fwnode_remove_software_node(tcpm->tcpc.fwnode);
++ fwnode_handle_put(tcpm->tcpc.fwnode);
+ }
+
+ static const struct pmic_typec_resources pm8150b_typec_res = {
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 4b02d647425910..3bcf104543e967 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4515,7 +4515,8 @@ static inline enum tcpm_state hard_reset_state(struct tcpm_port *port)
+ return ERROR_RECOVERY;
+ if (port->pwr_role == TYPEC_SOURCE)
+ return SRC_UNATTACHED;
+- if (port->state == SNK_WAIT_CAPABILITIES_TIMEOUT)
++ if (port->state == SNK_WAIT_CAPABILITIES ||
++ port->state == SNK_WAIT_CAPABILITIES_TIMEOUT)
+ return SNK_READY;
+ return SNK_UNATTACHED;
+ }
+@@ -5043,8 +5044,11 @@ static void run_state_machine(struct tcpm_port *port)
+ tcpm_set_state(port, SNK_SOFT_RESET,
+ PD_T_SINK_WAIT_CAP);
+ } else {
+- tcpm_set_state(port, SNK_WAIT_CAPABILITIES_TIMEOUT,
+- PD_T_SINK_WAIT_CAP);
++ if (!port->self_powered)
++ upcoming_state = SNK_WAIT_CAPABILITIES_TIMEOUT;
++ else
++ upcoming_state = hard_reset_state(port);
++ tcpm_set_state(port, upcoming_state, PD_T_SINK_WAIT_CAP);
+ }
+ break;
+ case SNK_WAIT_CAPABILITIES_TIMEOUT:
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index f8622ed72e0812..ada363af5aab8e 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -12,6 +12,7 @@
+ #include <linux/swap.h>
+ #include <linux/ctype.h>
+ #include <linux/sched.h>
++#include <linux/iversion.h>
+ #include <linux/task_io_accounting_ops.h>
+ #include "internal.h"
+ #include "afs_fs.h"
+@@ -1823,6 +1824,8 @@ static int afs_symlink(struct mnt_idmap *idmap, struct inode *dir,
+
+ static void afs_rename_success(struct afs_operation *op)
+ {
++ struct afs_vnode *vnode = AFS_FS_I(d_inode(op->dentry));
++
+ _enter("op=%08x", op->debug_id);
+
+ op->ctime = op->file[0].scb.status.mtime_client;
+@@ -1832,6 +1835,22 @@ static void afs_rename_success(struct afs_operation *op)
+ op->ctime = op->file[1].scb.status.mtime_client;
+ afs_vnode_commit_status(op, &op->file[1]);
+ }
++
++ /* If we're moving a subdir between dirs, we need to update
++ * its DV counter too as the ".." will be altered.
++ */
++ if (S_ISDIR(vnode->netfs.inode.i_mode) &&
++ op->file[0].vnode != op->file[1].vnode) {
++ u64 new_dv;
++
++ write_seqlock(&vnode->cb_lock);
++
++ new_dv = vnode->status.data_version + 1;
++ vnode->status.data_version = new_dv;
++ inode_set_iversion_raw(&vnode->netfs.inode, new_dv);
++
++ write_sequnlock(&vnode->cb_lock);
++ }
+ }
+
+ static void afs_rename_edit_dir(struct afs_operation *op)
+@@ -1873,6 +1892,12 @@ static void afs_rename_edit_dir(struct afs_operation *op)
+ &vnode->fid, afs_edit_dir_for_rename_2);
+ }
+
++ if (S_ISDIR(vnode->netfs.inode.i_mode) &&
++ new_dvnode != orig_dvnode &&
++ test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
++ afs_edit_dir_update_dotdot(vnode, new_dvnode,
++ afs_edit_dir_for_rename_sub);
++
+ new_inode = d_inode(new_dentry);
+ if (new_inode) {
+ spin_lock(&new_inode->i_lock);
+diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
+index a71bff10496b2b..fe223fb781111a 100644
+--- a/fs/afs/dir_edit.c
++++ b/fs/afs/dir_edit.c
+@@ -127,10 +127,10 @@ static struct folio *afs_dir_get_folio(struct afs_vnode *vnode, pgoff_t index)
+ /*
+ * Scan a directory block looking for a dirent of the right name.
+ */
+-static int afs_dir_scan_block(union afs_xdr_dir_block *block, struct qstr *name,
++static int afs_dir_scan_block(const union afs_xdr_dir_block *block, const struct qstr *name,
+ unsigned int blocknum)
+ {
+- union afs_xdr_dirent *de;
++ const union afs_xdr_dirent *de;
+ u64 bitmap;
+ int d, len, n;
+
+@@ -492,3 +492,90 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
+ clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
+ goto out_unmap;
+ }
++
++/*
++ * Edit a subdirectory that has been moved between directories to update the
++ * ".." entry.
++ */
++void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_dvnode,
++ enum afs_edit_dir_reason why)
++{
++ union afs_xdr_dir_block *block;
++ union afs_xdr_dirent *de;
++ struct folio *folio;
++ unsigned int nr_blocks, b;
++ pgoff_t index;
++ loff_t i_size;
++ int slot;
++
++ _enter("");
++
++ i_size = i_size_read(&vnode->netfs.inode);
++ if (i_size < AFS_DIR_BLOCK_SIZE) {
++ clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
++ return;
++ }
++ nr_blocks = i_size / AFS_DIR_BLOCK_SIZE;
++
++ /* Scan the directory blocks for the ".." entry. Each folio
++ * contains two or more directory blocks.
++ */
++ for (b = 0; b < nr_blocks; b++) {
++ index = b / AFS_DIR_BLOCKS_PER_PAGE;
++ folio = afs_dir_get_folio(vnode, index);
++ if (!folio)
++ goto error;
++
++ block = kmap_local_folio(folio, b * AFS_DIR_BLOCK_SIZE - folio_pos(folio));
++
++ /* Abandon the edit if we got a callback break. */
++ if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
++ goto invalidated;
++
++ slot = afs_dir_scan_block(block, &dotdot_name, b);
++ if (slot >= 0)
++ goto found_dirent;
++
++ kunmap_local(block);
++ folio_unlock(folio);
++ folio_put(folio);
++ }
++
++ /* Didn't find the dirent to clobber. Download the directory again. */
++ trace_afs_edit_dir(vnode, why, afs_edit_dir_update_nodd,
++ 0, 0, 0, 0, "..");
++ clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
++ goto out;
++
++found_dirent:
++ de = &block->dirents[slot];
++ de->u.vnode = htonl(new_dvnode->fid.vnode);
++ de->u.unique = htonl(new_dvnode->fid.unique);
++
++ trace_afs_edit_dir(vnode, why, afs_edit_dir_update_dd, b, slot,
++ ntohl(de->u.vnode), ntohl(de->u.unique), "..");
++
++ kunmap_local(block);
++ folio_unlock(folio);
++ folio_put(folio);
++ inode_set_iversion_raw(&vnode->netfs.inode, vnode->status.data_version);
++
++out:
++ _leave("");
++ return;
++
++invalidated:
++ kunmap_local(block);
++ folio_unlock(folio);
++ folio_put(folio);
++ trace_afs_edit_dir(vnode, why, afs_edit_dir_update_inval,
++ 0, 0, 0, 0, "..");
++ clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
++ goto out;
++
++error:
++ trace_afs_edit_dir(vnode, why, afs_edit_dir_update_error,
++ 0, 0, 0, 0, "..");
++ clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
++ goto out;
++}
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 6e1d3c4daf72c6..b306c09808706b 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -1072,6 +1072,8 @@ extern void afs_check_for_remote_deletion(struct afs_operation *);
+ extern void afs_edit_dir_add(struct afs_vnode *, struct qstr *, struct afs_fid *,
+ enum afs_edit_dir_reason);
+ extern void afs_edit_dir_remove(struct afs_vnode *, struct qstr *, enum afs_edit_dir_reason);
++void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_dvnode,
++ enum afs_edit_dir_reason why);
+
+ /*
+ * dir_silly.c
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index b4e31ae17cd95a..31e437d94869de 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -49,6 +49,7 @@ void btrfs_bio_init(struct btrfs_bio *bbio, struct btrfs_fs_info *fs_info,
+ bbio->end_io = end_io;
+ bbio->private = private;
+ atomic_set(&bbio->pending_ios, 1);
++ WRITE_ONCE(bbio->status, BLK_STS_OK);
+ }
+
+ /*
+@@ -123,43 +124,26 @@ static void __btrfs_bio_end_io(struct btrfs_bio *bbio)
+ void btrfs_bio_end_io(struct btrfs_bio *bbio, blk_status_t status)
+ {
+ bbio->bio.bi_status = status;
+- __btrfs_bio_end_io(bbio);
+-}
+-
+-static void btrfs_orig_write_end_io(struct bio *bio);
+-
+-static void btrfs_bbio_propagate_error(struct btrfs_bio *bbio,
+- struct btrfs_bio *orig_bbio)
+-{
+- /*
+- * For writes we tolerate nr_mirrors - 1 write failures, so we can't
+- * just blindly propagate a write failure here. Instead increment the
+- * error count in the original I/O context so that it is guaranteed to
+- * be larger than the error tolerance.
+- */
+- if (bbio->bio.bi_end_io == &btrfs_orig_write_end_io) {
+- struct btrfs_io_stripe *orig_stripe = orig_bbio->bio.bi_private;
+- struct btrfs_io_context *orig_bioc = orig_stripe->bioc;
+-
+- atomic_add(orig_bioc->max_errors, &orig_bioc->error);
+- } else {
+- orig_bbio->bio.bi_status = bbio->bio.bi_status;
+- }
+-}
+-
+-static void btrfs_orig_bbio_end_io(struct btrfs_bio *bbio)
+-{
+ if (bbio->bio.bi_pool == &btrfs_clone_bioset) {
+ struct btrfs_bio *orig_bbio = bbio->private;
+
+- if (bbio->bio.bi_status)
+- btrfs_bbio_propagate_error(bbio, orig_bbio);
+ btrfs_cleanup_bio(bbio);
+ bbio = orig_bbio;
+ }
+
+- if (atomic_dec_and_test(&bbio->pending_ios))
++ /*
++ * At this point, bbio always points to the original btrfs_bio. Save
++ * the first error in it.
++ */
++ if (status != BLK_STS_OK)
++ cmpxchg(&bbio->status, BLK_STS_OK, status);
++
++ if (atomic_dec_and_test(&bbio->pending_ios)) {
++ /* Load split bio's error which might be set above. */
++ if (status == BLK_STS_OK)
++ bbio->bio.bi_status = READ_ONCE(bbio->status);
+ __btrfs_bio_end_io(bbio);
++ }
+ }
+
+ static int next_repair_mirror(struct btrfs_failed_bio *fbio, int cur_mirror)
+@@ -179,7 +163,7 @@ static int prev_repair_mirror(struct btrfs_failed_bio *fbio, int cur_mirror)
+ static void btrfs_repair_done(struct btrfs_failed_bio *fbio)
+ {
+ if (atomic_dec_and_test(&fbio->repair_count)) {
+- btrfs_orig_bbio_end_io(fbio->bbio);
++ btrfs_bio_end_io(fbio->bbio, fbio->bbio->bio.bi_status);
+ mempool_free(fbio, &btrfs_failed_bio_pool);
+ }
+ }
+@@ -326,7 +310,7 @@ static void btrfs_check_read_bio(struct btrfs_bio *bbio, struct btrfs_device *de
+ if (fbio)
+ btrfs_repair_done(fbio);
+ else
+- btrfs_orig_bbio_end_io(bbio);
++ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+ }
+
+ static void btrfs_log_dev_io_error(struct bio *bio, struct btrfs_device *dev)
+@@ -360,7 +344,7 @@ static void btrfs_end_bio_work(struct work_struct *work)
+ if (is_data_bbio(bbio))
+ btrfs_check_read_bio(bbio, bbio->bio.bi_private);
+ else
+- btrfs_orig_bbio_end_io(bbio);
++ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+ }
+
+ static void btrfs_simple_end_io(struct bio *bio)
+@@ -380,7 +364,7 @@ static void btrfs_simple_end_io(struct bio *bio)
+ } else {
+ if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
+ btrfs_record_physical_zoned(bbio);
+- btrfs_orig_bbio_end_io(bbio);
++ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+ }
+ }
+
+@@ -394,7 +378,7 @@ static void btrfs_raid56_end_io(struct bio *bio)
+ if (bio_op(bio) == REQ_OP_READ && is_data_bbio(bbio))
+ btrfs_check_read_bio(bbio, NULL);
+ else
+- btrfs_orig_bbio_end_io(bbio);
++ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+
+ btrfs_put_bioc(bioc);
+ }
+@@ -424,7 +408,7 @@ static void btrfs_orig_write_end_io(struct bio *bio)
+ if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+
+- btrfs_orig_bbio_end_io(bbio);
++ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+ btrfs_put_bioc(bioc);
+ }
+
+@@ -593,7 +577,7 @@ static void run_one_async_done(struct btrfs_work *work, bool do_free)
+
+ /* If an error occurred we just want to clean up the bio and move on. */
+ if (bio->bi_status) {
+- btrfs_orig_bbio_end_io(async->bbio);
++ btrfs_bio_end_io(async->bbio, async->bbio->bio.bi_status);
+ return;
+ }
+
+@@ -765,11 +749,9 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ ASSERT(bbio->bio.bi_pool == &btrfs_clone_bioset);
+ ASSERT(remaining);
+
+- remaining->bio.bi_status = ret;
+- btrfs_orig_bbio_end_io(remaining);
++ btrfs_bio_end_io(remaining, ret);
+ }
+- bbio->bio.bi_status = ret;
+- btrfs_orig_bbio_end_io(bbio);
++ btrfs_bio_end_io(bbio, ret);
+ /* Do not submit another chunk */
+ return true;
+ }
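
The new bbio->status field above is updated with cmpxchg() so that, when
several split bios complete in parallel, only the first non-OK status
sticks. The same first-error-wins idea, sketched in portable C11 atomics:

    #include <stdatomic.h>
    #include <stdio.h>

    #define STS_OK 0   /* stand-in for the kernel's BLK_STS_OK */

    static _Atomic int bbio_status = STS_OK;

    static void record_status(int status)
    {
        int expected = STS_OK;

        /* Store only if no error has been recorded yet. */
        if (status != STS_OK)
            atomic_compare_exchange_strong(&bbio_status, &expected, status);
    }

    int main(void)
    {
        record_status(STS_OK);  /* OK: ignored */
        record_status(10);      /* first error: recorded */
        record_status(5);       /* later error: dropped */
        printf("final status: %d\n", atomic_load(&bbio_status));
        return 0;
    }
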
+diff --git a/fs/btrfs/bio.h b/fs/btrfs/bio.h
+index d9dd5276093df0..043f94562166ba 100644
+--- a/fs/btrfs/bio.h
++++ b/fs/btrfs/bio.h
+@@ -79,6 +79,9 @@ struct btrfs_bio {
+ /* File system that this I/O operates on. */
+ struct btrfs_fs_info *fs_info;
+
++ /* Save the first error status of split bio. */
++ blk_status_t status;
++
+ /*
+ * This member must come last, bio_alloc_bioset will allocate enough
+ * bytes for entire btrfs_bio but relies on bio being last.
+diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
+index f6dbda37a3615a..990ef97accec48 100644
+--- a/fs/btrfs/defrag.c
++++ b/fs/btrfs/defrag.c
+@@ -772,12 +772,12 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start,
+ * We can get a merged extent, in that case, we need to re-search
+ * tree to get the original em for defrag.
+ *
+- * If @newer_than is 0 or em::generation < newer_than, we can trust
+- * this em, as either we don't care about the generation, or the
+- * merged extent map will be rejected anyway.
++ * This is because even if we have adjacent extents that are contiguous
++ * and compatible (same type and flags), we still want to defrag them
++ * so that we use less metadata (extent items in the extent tree and
++ * file extent items in the inode's subvolume tree).
+ */
+- if (em && (em->flags & EXTENT_FLAG_MERGED) &&
+- newer_than && em->generation >= newer_than) {
++ if (em && (em->flags & EXTENT_FLAG_MERGED)) {
+ free_extent_map(em);
+ em = NULL;
+ }
+diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
+index 72ae8f64482c6b..b56ec83bf9528f 100644
+--- a/fs/btrfs/extent_map.c
++++ b/fs/btrfs/extent_map.c
+@@ -227,7 +227,12 @@ static bool mergeable_maps(const struct extent_map *prev, const struct extent_ma
+ if (extent_map_end(prev) != next->start)
+ return false;
+
+- if (prev->flags != next->flags)
++ /*
++ * The merged flag is not an on-disk flag; it just indicates that the
++ * extent maps of 2 (or more) adjacent extents were merged, so factor it out.
++ */
++ if ((prev->flags & ~EXTENT_FLAG_MERGED) !=
++ (next->flags & ~EXTENT_FLAG_MERGED))
+ return false;
+
+ if (next->disk_bytenr < EXTENT_MAP_LAST_BYTE - 1)
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index fcedc43ef291a2..0485143cd75e08 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1103,6 +1103,7 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ if (device->bdev) {
+ fs_devices->open_devices--;
+ device->bdev = NULL;
++ device->bdev_file = NULL;
+ }
+ clear_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
+ btrfs_destroy_dev_zone_info(device);
+diff --git a/fs/dax.c b/fs/dax.c
+index c62acd2812f8d4..21b47402b3dca4 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1262,35 +1262,46 @@ static s64 dax_unshare_iter(struct iomap_iter *iter)
+ {
+ struct iomap *iomap = &iter->iomap;
+ const struct iomap *srcmap = iomap_iter_srcmap(iter);
+- loff_t pos = iter->pos;
+- loff_t length = iomap_length(iter);
++ loff_t copy_pos = iter->pos;
++ u64 copy_len = iomap_length(iter);
++ u32 mod;
+ int id = 0;
+ s64 ret = 0;
+ void *daddr = NULL, *saddr = NULL;
+
+- /* don't bother with blocks that are not shared to start with */
+- if (!(iomap->flags & IOMAP_F_SHARED))
+- return length;
++ if (!iomap_want_unshare_iter(iter))
++ return iomap_length(iter);
++
++ /*
++ * Extend the file range to be aligned to fsblock/pagesize, because
++ * we need to copy entire blocks, not just the byte range specified.
++ * Invalidate the mapping because we're about to CoW.
++ */
++ mod = offset_in_page(copy_pos);
++ if (mod) {
++ copy_len += mod;
++ copy_pos -= mod;
++ }
++
++ mod = offset_in_page(copy_pos + copy_len);
++ if (mod)
++ copy_len += PAGE_SIZE - mod;
++
++ invalidate_inode_pages2_range(iter->inode->i_mapping,
++ copy_pos >> PAGE_SHIFT,
++ (copy_pos + copy_len - 1) >> PAGE_SHIFT);
+
+ id = dax_read_lock();
+- ret = dax_iomap_direct_access(iomap, pos, length, &daddr, NULL);
++ ret = dax_iomap_direct_access(iomap, copy_pos, copy_len, &daddr, NULL);
+ if (ret < 0)
+ goto out_unlock;
+
+- /* zero the distance if srcmap is HOLE or UNWRITTEN */
+- if (srcmap->flags & IOMAP_F_SHARED || srcmap->type == IOMAP_UNWRITTEN) {
+- memset(daddr, 0, length);
+- dax_flush(iomap->dax_dev, daddr, length);
+- ret = length;
+- goto out_unlock;
+- }
+-
+- ret = dax_iomap_direct_access(srcmap, pos, length, &saddr, NULL);
++ ret = dax_iomap_direct_access(srcmap, copy_pos, copy_len, &saddr, NULL);
+ if (ret < 0)
+ goto out_unlock;
+
+- if (copy_mc_to_kernel(daddr, saddr, length) == 0)
+- ret = length;
++ if (copy_mc_to_kernel(daddr, saddr, copy_len) == 0)
++ ret = iomap_length(iter);
+ else
+ ret = -EIO;
+
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 8e6edb6628183a..c8e984be39823a 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1337,16 +1337,11 @@ EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
+ static loff_t iomap_unshare_iter(struct iomap_iter *iter)
+ {
+ struct iomap *iomap = &iter->iomap;
+- const struct iomap *srcmap = iomap_iter_srcmap(iter);
+ loff_t pos = iter->pos;
+ loff_t length = iomap_length(iter);
+ loff_t written = 0;
+
+- /* don't bother with blocks that are not shared to start with */
+- if (!(iomap->flags & IOMAP_F_SHARED))
+- return length;
+- /* don't bother with holes or unwritten extents */
+- if (srcmap->type == IOMAP_HOLE || srcmap->type == IOMAP_UNWRITTEN)
++ if (!iomap_want_unshare_iter(iter))
+ return length;
+
+ do {
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 20cb2008f9e469..035ba52742a504 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -1001,6 +1001,11 @@ void nfs_delegation_mark_returned(struct inode *inode,
+ }
+
+ nfs_mark_delegation_revoked(delegation);
++ clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
++ spin_unlock(&delegation->lock);
++ if (nfs_detach_delegation(NFS_I(inode), delegation, NFS_SERVER(inode)))
++ nfs_put_delegation(delegation);
++ goto out_rcu_unlock;
+
+ out_clear_returning:
+ clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 5768b2ff1d1d13..8b78d493e301f3 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1838,14 +1838,12 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ if (!async_copy)
+ goto out_err;
+ async_copy->cp_nn = nn;
++ INIT_LIST_HEAD(&async_copy->copies);
++ refcount_set(&async_copy->refcount, 1);
+ /* Arbitrary cap on number of pending async copy operations */
+ if (atomic_inc_return(&nn->pending_async_copies) >
+- (int)rqstp->rq_pool->sp_nrthreads) {
+- atomic_dec(&nn->pending_async_copies);
++ (int)rqstp->rq_pool->sp_nrthreads)
+ goto out_err;
+- }
+- INIT_LIST_HEAD(&async_copy->copies);
+- refcount_set(&async_copy->refcount, 1);
+ async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL);
+ if (!async_copy->cp_src)
+ goto out_err;
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index 4905063790c578..9b108052d9f71f 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -157,6 +157,9 @@ static int nilfs_symlink(struct mnt_idmap *idmap, struct inode *dir,
+ /* slow symlink */
+ inode->i_op = &nilfs_symlink_inode_operations;
+ inode_nohighmem(inode);
++ mapping_set_gfp_mask(inode->i_mapping,
++ mapping_gfp_constraint(inode->i_mapping,
++ ~__GFP_FS));
+ inode->i_mapping->a_ops = &nilfs_aops;
+ err = page_symlink(inode, symname, l);
+ if (err)
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 7f9073d5c5a567..097e2c7f7f718c 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -409,6 +409,7 @@ void nilfs_clear_folio_dirty(struct folio *folio, bool silent)
+
+ folio_clear_uptodate(folio);
+ folio_clear_mappedtodisk(folio);
++ folio_clear_checked(folio);
+
+ head = folio_buffers(folio);
+ if (head) {
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index d31eae611fe066..906ee60d91cb11 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -1285,7 +1285,14 @@ static int ntfs_file_release(struct inode *inode, struct file *file)
+ /* If we are last writer on the inode, drop the block reservation. */
+ if (sbi->options->prealloc &&
+ ((file->f_mode & FMODE_WRITE) &&
+- atomic_read(&inode->i_writecount) == 1)) {
++ atomic_read(&inode->i_writecount) == 1)
++ /*
++ * The MFT is the only file for which inode->i_fop = &ntfs_file_operations
++ * while init_rwsem(&ni->file.run_lock) is not called explicitly.
++ *
++ * Add an additional check for it here.
++ */
++ && inode->i_ino != MFT_REC_MFT) {
+ ni_lock(ni);
+ down_write(&ni->file.run_lock);
+
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 60c975ac38e610..2a017b5c8101f3 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -102,7 +102,9 @@ void ni_clear(struct ntfs_inode *ni)
+ {
+ struct rb_node *node;
+
+- if (!ni->vfs_inode.i_nlink && ni->mi.mrec && is_rec_inuse(ni->mi.mrec))
++ if (!ni->vfs_inode.i_nlink && ni->mi.mrec &&
++ is_rec_inuse(ni->mi.mrec) &&
++ !(ni->mi.sbi->flags & NTFS_FLAGS_LOG_REPLAYING))
+ ni_delete_all(ni);
+
+ al_destroy(ni);
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 6b0bdc474e763f..4804eb9628bb29 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -536,11 +536,15 @@ struct inode *ntfs_iget5(struct super_block *sb, const struct MFT_REF *ref,
+ if (inode->i_state & I_NEW)
+ inode = ntfs_read_mft(inode, name, ref);
+ else if (ref->seq != ntfs_i(inode)->mi.mrec->seq) {
+- /* Inode overlaps? */
+- _ntfs_bad_inode(inode);
++ /*
++ * The sequence number does not match: it looks like the inode
++ * was reused, but the caller still holds the old reference.
++ */
++ iput(inode);
++ inode = ERR_PTR(-ESTALE);
+ }
+
+- if (IS_ERR(inode) && name)
++ if (IS_ERR(inode))
+ ntfs_set_state(sb->s_fs_info, NTFS_DIRTY_ERROR);
+
+ return inode;
+@@ -1713,7 +1717,10 @@ int ntfs_create_inode(struct mnt_idmap *idmap, struct inode *dir,
+ attr = ni_find_attr(ni, NULL, NULL, ATTR_EA, NULL, 0, NULL, NULL);
+ if (attr && attr->non_res) {
+ /* Delete ATTR_EA, if non-resident. */
+- attr_set_size(ni, ATTR_EA, NULL, 0, NULL, 0, NULL, false, NULL);
++ struct runs_tree run;
++ run_init(&run);
++ attr_set_size(ni, ATTR_EA, NULL, 0, &run, 0, NULL, false, NULL);
++ run_close(&run);
+ }
+
+ if (rp_inserted)
+diff --git a/fs/ntfs3/lznt.c b/fs/ntfs3/lznt.c
+index 4aae598d6d8846..fdc9b2ebf3410e 100644
+--- a/fs/ntfs3/lznt.c
++++ b/fs/ntfs3/lznt.c
+@@ -236,6 +236,9 @@ static inline ssize_t decompress_chunk(u8 *unc, u8 *unc_end, const u8 *cmpr,
+
+ /* Do decompression until pointers are inside range. */
+ while (up < unc_end && cmpr < cmpr_end) {
++ /* Return an error if more than LZNT_CHUNK_SIZE bytes are written. */
++ if (up - unc > LZNT_CHUNK_SIZE)
++ return -EINVAL;
+ /* Correct index */
+ while (unc + s_max_off[index] < up)
+ index += 1;
+diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c
+index 0a70c36585463b..02b745f117a511 100644
+--- a/fs/ntfs3/namei.c
++++ b/fs/ntfs3/namei.c
+@@ -81,7 +81,7 @@ static struct dentry *ntfs_lookup(struct inode *dir, struct dentry *dentry,
+ if (err < 0)
+ inode = ERR_PTR(err);
+ else {
+- ni_lock(ni);
++ ni_lock_dir(ni);
+ inode = dir_search_u(dir, uni, NULL);
+ ni_unlock(ni);
+ }
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index e5255a251929af..79047cd5461171 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -334,7 +334,7 @@ struct mft_inode {
+
+ /* Nested class for ntfs_inode::ni_lock. */
+ enum ntfs_inode_mutex_lock_class {
+- NTFS_INODE_MUTEX_DIRTY,
++ NTFS_INODE_MUTEX_DIRTY = 1,
+ NTFS_INODE_MUTEX_SECURITY,
+ NTFS_INODE_MUTEX_OBJID,
+ NTFS_INODE_MUTEX_REPARSE,
+diff --git a/fs/ntfs3/record.c b/fs/ntfs3/record.c
+index 6c76503edc200d..f810f0419d25ea 100644
+--- a/fs/ntfs3/record.c
++++ b/fs/ntfs3/record.c
+@@ -223,29 +223,21 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ prev_type = 0;
+ attr = Add2Ptr(rec, off);
+ } else {
+- /* Check if input attr inside record. */
++ /*
++ * We don't need to check the previous attr here. There is
++ * bounds checking in the previous round.
++ */
+ off = PtrOffset(rec, attr);
+- if (off >= used)
+- return NULL;
+
+ asize = le32_to_cpu(attr->size);
+- if (asize < SIZEOF_RESIDENT) {
+- /* Impossible 'cause we should not return such attribute. */
+- return NULL;
+- }
+-
+- /* Overflow check. */
+- if (off + asize < off)
+- return NULL;
+
+ prev_type = le32_to_cpu(attr->type);
+ attr = Add2Ptr(attr, asize);
+ off += asize;
+ }
+
+- asize = le32_to_cpu(attr->size);
+-
+ /* Can we use the first field (attr->type). */
++ /* NOTE: this code also checks attr->size availability. */
+ if (off + 8 > used) {
+ static_assert(ALIGN(sizeof(enum ATTR_TYPE), 8) == 8);
+ return NULL;
+@@ -265,6 +257,8 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ if (t32 < prev_type)
+ return NULL;
+
++ asize = le32_to_cpu(attr->size);
++
+ /* Check overflow and boundary. */
+ if (off + asize < off || off + asize > used)
+ return NULL;
+@@ -293,6 +287,10 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ if (attr->non_res != 1)
+ return NULL;
+
++ /* Can we use memory including attr->nres.valid_size? */
++ if (asize < SIZEOF_NONRESIDENT)
++ return NULL;
++
+ t16 = le16_to_cpu(attr->nres.run_off);
+ if (t16 > asize)
+ return NULL;
+@@ -319,7 +317,8 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+
+ if (!attr->nres.svcn && is_attr_ext(attr)) {
+ /* First segment of sparse/compressed attribute */
+- if (asize + 8 < SIZEOF_NONRESIDENT_EX)
++ /* Can we use memory including attr->nres.total_size? */
++ if (asize < SIZEOF_NONRESIDENT_EX)
+ return NULL;
+
+ tot_size = le64_to_cpu(attr->nres.total_size);
+@@ -329,10 +328,10 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ if (tot_size > alloc_size)
+ return NULL;
+ } else {
+- if (asize + 8 < SIZEOF_NONRESIDENT)
++ if (attr->nres.c_unit)
+ return NULL;
+
+- if (attr->nres.c_unit)
++ if (alloc_size > mi->sbi->volume.size)
+ return NULL;
+ }
+
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index ccc57038a9779f..02d2beb7ddb953 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1783,6 +1783,14 @@ int ocfs2_remove_inode_range(struct inode *inode,
+ return 0;
+
+ if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) {
++ int id_count = ocfs2_max_inline_data_with_xattr(inode->i_sb, di);
++
++ if (byte_start > id_count || byte_start + byte_len > id_count) {
++ ret = -EINVAL;
++ mlog_errno(ret);
++ goto out;
++ }
++
+ ret = ocfs2_truncate_inline(inode, di_bh, byte_start,
+ byte_start + byte_len, 0);
+ if (ret) {
+diff --git a/fs/smb/client/cifs_unicode.c b/fs/smb/client/cifs_unicode.c
+index 79d99a9139441e..4cc6e0896fad37 100644
+--- a/fs/smb/client/cifs_unicode.c
++++ b/fs/smb/client/cifs_unicode.c
+@@ -484,10 +484,21 @@ cifsConvertToUTF16(__le16 *target, const char *source, int srclen,
+ /**
+ * Remap spaces and periods found at the end of every
+ * component of the path. The special cases of '.' and
+- * '..' do not need to be dealt with explicitly because
+- * they are addressed in namei.c:link_path_walk().
++ * '..' need to be handled because of symlinks.
++ * They are treated as non-end-of-string to avoid
++ * remapping and breaking symlinks pointing to . or ..
+ **/
+- if ((i == srclen - 1) || (source[i+1] == '\\'))
++ if ((i == 0 || source[i-1] == '\\') &&
++ source[i] == '.' &&
++ (i == srclen-1 || source[i+1] == '\\'))
++ end_of_string = false; /* "." case */
++ else if (i >= 1 &&
++ (i == 1 || source[i-2] == '\\') &&
++ source[i-1] == '.' &&
++ source[i] == '.' &&
++ (i == srclen-1 || source[i+1] == '\\'))
++ end_of_string = false; /* ".." case */
++ else if ((i == srclen - 1) || (source[i+1] == '\\'))
+ end_of_string = true;
+ else
+ end_of_string = false;
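
The new conditions above leave a '.' or '..' path component alone so its
trailing dot is never remapped, which previously broke symlinks whose target
is "." or "..". The component test, reduced to a standalone predicate over a
'\\'-separated path:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Is s[i] the (final) dot of a "." or ".." component bounded by
       '\\' separators or the string ends? */
    static bool in_dot_component(const char *s, size_t i)
    {
        size_t n = strlen(s);

        if ((i == 0 || s[i - 1] == '\\') && s[i] == '.' &&
            (i == n - 1 || s[i + 1] == '\\'))
            return true;                    /* "." case */
        if (i >= 1 && (i == 1 || s[i - 2] == '\\') &&
            s[i - 1] == '.' && s[i] == '.' &&
            (i == n - 1 || s[i + 1] == '\\'))
            return true;                    /* ".." case */
        return false;
    }

    int main(void)
    {
        const char *p = "a\\..\\.";

        /* Both the ".." at index 3 and the trailing "." at index 5 match. */
        printf("%d %d\n", in_dot_component(p, 3), in_dot_component(p, 5));
        return 0;
    }
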
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 7429b96a6ae5ef..74abbdf5026c73 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -14,6 +14,12 @@
+ #include "fs_context.h"
+ #include "reparse.h"
+
++static int detect_directory_symlink_target(struct cifs_sb_info *cifs_sb,
++ const unsigned int xid,
++ const char *full_path,
++ const char *symname,
++ bool *directory);
++
+ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, const char *symname)
+@@ -24,6 +30,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ struct inode *new;
+ struct kvec iov;
+ __le16 *path;
++ bool directory;
+ char *sym, sep = CIFS_DIR_SEP(cifs_sb);
+ u16 len, plen;
+ int rc = 0;
+@@ -45,6 +52,18 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ goto out;
+ }
+
++ /*
++ * SMB distinguishes between symlinks to directories and symlinks to
++ * files. They cannot be exchanged (a file-type symlink which points to
++ * a directory cannot be resolved, and vice versa). Try to detect whether
++ * the symlink target could be a directory. When detection fails, treat
++ * the symlink as a file (non-directory) symlink.
++ */
++ directory = false;
++ rc = detect_directory_symlink_target(cifs_sb, xid, full_path, symname, &directory);
++ if (rc < 0)
++ goto out;
++
+ plen = 2 * UniStrnlen((wchar_t *)path, PATH_MAX);
+ len = sizeof(*buf) + plen * 2;
+ buf = kzalloc(len, GFP_KERNEL);
+@@ -69,7 +88,8 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ iov.iov_base = buf;
+ iov.iov_len = len;
+ new = smb2_get_reparse_inode(&data, inode->i_sb, xid,
+- tcon, full_path, &iov, NULL);
++ tcon, full_path, directory,
++ &iov, NULL);
+ if (!IS_ERR(new))
+ d_instantiate(dentry, new);
+ else
+@@ -81,6 +101,144 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ return rc;
+ }
+
++static int detect_directory_symlink_target(struct cifs_sb_info *cifs_sb,
++ const unsigned int xid,
++ const char *full_path,
++ const char *symname,
++ bool *directory)
++{
++ char sep = CIFS_DIR_SEP(cifs_sb);
++ struct cifs_open_parms oparms;
++ struct tcon_link *tlink;
++ struct cifs_tcon *tcon;
++ const char *basename;
++ struct cifs_fid fid;
++ char *resolved_path;
++ int full_path_len;
++ int basename_len;
++ int symname_len;
++ char *path_sep;
++ __u32 oplock;
++ int open_rc;
++
++ /*
++ * First do some simple checks. If the original Linux symlink target ends
++ * with a slash, or the last path component is dot or dot-dot, then it is
++ * certainly a symlink to a directory.
++ */
++ basename = kbasename(symname);
++ basename_len = strlen(basename);
++ if (basename_len == 0 || /* symname ends with slash */
++ (basename_len == 1 && basename[0] == '.') || /* last component is "." */
++ (basename_len == 2 && basename[0] == '.' && basename[1] == '.')) { /* or ".." */
++ *directory = true;
++ return 0;
++ }
++
++ /*
++ * For absolute symlinks it is not possible to determine
++ * whether it should point to a directory or a file.
++ */
++ if (symname[0] == '/') {
++ cifs_dbg(FYI,
++ "%s: cannot determinate if the symlink target path '%s' "
++ "is directory or not, creating '%s' as file symlink\n",
++ __func__, symname, full_path);
++ return 0;
++ }
++
++ /*
++ * If it was not detected as a directory yet and the symlink is relative,
++ * then try to resolve the path on the SMB server, check if the path
++ * exists and determine if it is a directory or not.
++ */
++
++ full_path_len = strlen(full_path);
++ symname_len = strlen(symname);
++
++ tlink = cifs_sb_tlink(cifs_sb);
++ if (IS_ERR(tlink))
++ return PTR_ERR(tlink);
++
++ resolved_path = kzalloc(full_path_len + symname_len + 1, GFP_KERNEL);
++ if (!resolved_path) {
++ cifs_put_tlink(tlink);
++ return -ENOMEM;
++ }
++
++ /*
++ * Compose the resolved SMB symlink path from the SMB full path
++ * and Linux target symlink path.
++ */
++ memcpy(resolved_path, full_path, full_path_len+1);
++ path_sep = strrchr(resolved_path, sep);
++ if (path_sep)
++ path_sep++;
++ else
++ path_sep = resolved_path;
++ memcpy(path_sep, symname, symname_len+1);
++ if (sep == '\\')
++ convert_delimiter(path_sep, sep);
++
++ tcon = tlink_tcon(tlink);
++ oparms = CIFS_OPARMS(cifs_sb, tcon, resolved_path,
++ FILE_READ_ATTRIBUTES, FILE_OPEN, 0, ACL_NO_MODE);
++ oparms.fid = &fid;
++
++ /* Try to open as a directory (NOT_FILE) */
++ oplock = 0;
++ oparms.create_options = cifs_create_options(cifs_sb,
++ CREATE_NOT_FILE | OPEN_REPARSE_POINT);
++ open_rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, NULL);
++ if (open_rc == 0) {
++ /* Successful open means that the target path is definitely a directory. */
++ *directory = true;
++ tcon->ses->server->ops->close(xid, tcon, &fid);
++ } else if (open_rc == -ENOTDIR) {
++ /* -ENOTDIR means that the target path is definitely a file. */
++ *directory = false;
++ } else if (open_rc == -ENOENT) {
++ /* -ENOENT means that the target path does not exist. */
++ cifs_dbg(FYI,
++ "%s: symlink target path '%s' does not exist, "
++ "creating '%s' as file symlink\n",
++ __func__, symname, full_path);
++ } else {
++ /* Try to open as a file (NOT_DIR) */
++ oplock = 0;
++ oparms.create_options = cifs_create_options(cifs_sb,
++ CREATE_NOT_DIR | OPEN_REPARSE_POINT);
++ open_rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, NULL);
++ if (open_rc == 0) {
++ /* Successful open means that the target path is definitely a file. */
++ *directory = false;
++ tcon->ses->server->ops->close(xid, tcon, &fid);
++ } else if (open_rc == -EISDIR) {
++ /* -EISDIR means that the target path is definitely a directory. */
++ *directory = true;
++ } else {
++ /*
++ * This code branch is called when we do not have permission to
++ * open the resolved_path, or some other client/process denied
++ * opening the resolved_path.
++ *
++ * TODO: Try to use ops->query_dir_first on the parent directory
++ * of resolved_path, search for the basename of resolved_path and
++ * check if ATTR_DIRECTORY is set in fi.Attributes. In some
++ * cases this could work even when opening the path is denied.
++ */
++ cifs_dbg(FYI,
++ "%s: cannot determinate if the symlink target path '%s' "
++ "is directory or not, creating '%s' as file symlink\n",
++ __func__, symname, full_path);
++ }
++ }
++
++ kfree(resolved_path);
++ cifs_put_tlink(tlink);
++ return 0;
++}
++
+ static int nfs_set_reparse_buf(struct reparse_posix_data *buf,
+ mode_t mode, dev_t dev,
+ struct kvec *iov)
+@@ -108,8 +266,8 @@ static int nfs_set_reparse_buf(struct reparse_posix_data *buf,
+ buf->InodeType = cpu_to_le64(type);
+ buf->ReparseDataLength = cpu_to_le16(len + dlen -
+ sizeof(struct reparse_data_buffer));
+- *(__le64 *)buf->DataBuffer = cpu_to_le64(((u64)MAJOR(dev) << 32) |
+- MINOR(dev));
++ *(__le64 *)buf->DataBuffer = cpu_to_le64(((u64)MINOR(dev) << 32) |
++ MAJOR(dev));
+ iov->iov_base = buf;
+ iov->iov_len = len + dlen;
+ return 0;
+@@ -137,7 +295,7 @@ static int mknod_nfs(unsigned int xid, struct inode *inode,
+ };
+
+ new = smb2_get_reparse_inode(&data, inode->i_sb, xid,
+- tcon, full_path, &iov, NULL);
++ tcon, full_path, false, &iov, NULL);
+ if (!IS_ERR(new))
+ d_instantiate(dentry, new);
+ else
+@@ -283,7 +441,7 @@ static int mknod_wsl(unsigned int xid, struct inode *inode,
+ data.wsl.eas_len = len;
+
+ new = smb2_get_reparse_inode(&data, inode->i_sb,
+- xid, tcon, full_path,
++ xid, tcon, full_path, false,
+ &reparse_iov, &xattr_iov);
+ if (!IS_ERR(new))
+ d_instantiate(dentry, new);
+@@ -497,7 +655,7 @@ static void wsl_to_fattr(struct cifs_open_info_data *data,
+ else if (!strncmp(name, SMB2_WSL_XATTR_MODE, nlen))
+ fattr->cf_mode = (umode_t)le32_to_cpu(*(__le32 *)v);
+ else if (!strncmp(name, SMB2_WSL_XATTR_DEV, nlen))
+- fattr->cf_rdev = wsl_mkdev(v);
++ fattr->cf_rdev = reparse_mkdev(v);
+ } while (next);
+ out:
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
+@@ -518,13 +676,13 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
+ return false;
+ fattr->cf_mode |= S_IFCHR;
+- fattr->cf_rdev = reparse_nfs_mkdev(buf);
++ fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
+ break;
+ case NFS_SPECFILE_BLK:
+ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
+ return false;
+ fattr->cf_mode |= S_IFBLK;
+- fattr->cf_rdev = reparse_nfs_mkdev(buf);
++ fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
+ break;
+ case NFS_SPECFILE_FIFO:
+ fattr->cf_mode |= S_IFIFO;
+diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
+index 2c0644bc4e65a7..158e7b7aae646c 100644
+--- a/fs/smb/client/reparse.h
++++ b/fs/smb/client/reparse.h
+@@ -18,14 +18,7 @@
+ */
+ #define IO_REPARSE_TAG_INTERNAL ((__u32)~0U)
+
+-static inline dev_t reparse_nfs_mkdev(struct reparse_posix_data *buf)
+-{
+- u64 v = le64_to_cpu(*(__le64 *)buf->DataBuffer);
+-
+- return MKDEV(v >> 32, v & 0xffffffff);
+-}
+-
+-static inline dev_t wsl_mkdev(void *ptr)
++static inline dev_t reparse_mkdev(void *ptr)
+ {
+ u64 v = le64_to_cpu(*(__le64 *)ptr);
+
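
Taken together with the nfs_set_reparse_buf() change earlier in this patch,
the device number appears to be stored with the minor in the high 32 bits
and the major in the low 32 bits of a little-endian u64. A round-trip sketch
of that inferred layout (endianness handling omitted):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack as nfs_set_reparse_buf() above suggests: minor high, major low. */
    static uint64_t pack_dev(uint32_t major, uint32_t minor)
    {
        return ((uint64_t)minor << 32) | major;
    }

    static void unpack_dev(uint64_t v, uint32_t *major, uint32_t *minor)
    {
        *major = (uint32_t)v;
        *minor = (uint32_t)(v >> 32);
    }

    int main(void)
    {
        uint32_t ma, mi;

        unpack_dev(pack_dev(8, 1), &ma, &mi);
        printf("major=%u minor=%u\n", ma, mi); /* round-trips to 8, 1 */
        return 0;
    }
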
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index a6dab60e2c01ef..cdb0e028e73c46 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -1198,6 +1198,7 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ const unsigned int xid,
+ struct cifs_tcon *tcon,
+ const char *full_path,
++ bool directory,
+ struct kvec *reparse_iov,
+ struct kvec *xattr_iov)
+ {
+@@ -1217,7 +1218,7 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ FILE_READ_ATTRIBUTES |
+ FILE_WRITE_ATTRIBUTES,
+ FILE_CREATE,
+- CREATE_NOT_DIR | OPEN_REPARSE_POINT,
++ (directory ? CREATE_NOT_FILE : CREATE_NOT_DIR) | OPEN_REPARSE_POINT,
+ ACL_NO_MODE);
+ if (xattr_iov)
+ oparms.ea_cctx = xattr_iov;
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index b208232b12a24d..5e0855fefcfe66 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -61,6 +61,7 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ const unsigned int xid,
+ struct cifs_tcon *tcon,
+ const char *full_path,
++ bool directory,
+ struct kvec *reparse_iov,
+ struct kvec *xattr_iov);
+ int smb2_query_reparse_point(const unsigned int xid,
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 27a3e9285fbf68..2f302da629cb4f 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -731,6 +731,34 @@ void dup_userfaultfd_complete(struct list_head *fcs)
+ }
+ }
+
++void dup_userfaultfd_fail(struct list_head *fcs)
++{
++ struct userfaultfd_fork_ctx *fctx, *n;
++
++ /*
++ * An error has occurred on fork; we will tear memory down, but we have
++ * allocated memory for fctx's and raised reference counts for both the
++ * original and child contexts (and on the mm for each as a result).
++ *
++ * These would ordinarily be taken care of by a user handling the event,
++ * but we are no longer doing so, so manually clean up here.
++ *
++ * mm tear down will take care of cleaning up VMA contexts.
++ */
++ list_for_each_entry_safe(fctx, n, fcs, list) {
++ struct userfaultfd_ctx *octx = fctx->orig;
++ struct userfaultfd_ctx *ctx = fctx->new;
++
++ atomic_dec(&octx->mmap_changing);
++ VM_BUG_ON(atomic_read(&octx->mmap_changing) < 0);
++ userfaultfd_ctx_put(octx);
++ userfaultfd_ctx_put(ctx);
++
++ list_del(&fctx->list);
++ kfree(fctx);
++ }
++}
++
+ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
+ struct vm_userfaultfd_ctx *vm_ctx)
+ {
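
dup_userfaultfd_fail() above iterates with list_for_each_entry_safe()
precisely because each node is unlinked and freed mid-walk. The essence of
that safe-iteration teardown, sketched on a plain singly linked list:

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        struct node *next;
    };

    static void teardown(struct node **head)
    {
        struct node *n = *head;

        while (n) {
            struct node *next = n->next;  /* save the successor first */

            free(n);                      /* unlink + release, as above */
            n = next;
        }
        *head = NULL;
    }

    int main(void)
    {
        struct node *a = calloc(1, sizeof(*a));
        struct node *b = calloc(1, sizeof(*b));

        if (!a || !b)
            return 1;
        a->next = b;
        teardown(&a);
        puts("list torn down");
        return 0;
    }
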
+diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c
+index e3aaa055559781..88bd23ce74cde1 100644
+--- a/fs/xfs/xfs_filestream.c
++++ b/fs/xfs/xfs_filestream.c
+@@ -64,7 +64,7 @@ xfs_filestream_pick_ag(
+ struct xfs_perag *pag;
+ struct xfs_perag *max_pag = NULL;
+ xfs_extlen_t minlen = *longest;
+- xfs_extlen_t free = 0, minfree, maxfree = 0;
++ xfs_extlen_t minfree, maxfree = 0;
+ xfs_agnumber_t agno;
+ bool first_pass = true;
+ int err;
+@@ -107,7 +107,6 @@ xfs_filestream_pick_ag(
+ !(flags & XFS_PICK_USERDATA) ||
+ (flags & XFS_PICK_LOWSPACE))) {
+ /* Break out, retaining the reference on the AG. */
+- free = pag->pagf_freeblks;
+ break;
+ }
+ }
+@@ -150,23 +149,25 @@ xfs_filestream_pick_ag(
+ * grab.
+ */
+ if (!max_pag) {
+- for_each_perag_wrap(args->mp, 0, start_agno, args->pag)
++ for_each_perag_wrap(args->mp, 0, start_agno, pag) {
++ max_pag = pag;
+ break;
+- atomic_inc(&args->pag->pagf_fstrms);
+- *longest = 0;
+- } else {
+- pag = max_pag;
+- free = maxfree;
+- atomic_inc(&pag->pagf_fstrms);
++ }
++
++ /* Bail if there are no AGs at all to select from. */
++ if (!max_pag)
++ return -ENOSPC;
+ }
++
++ pag = max_pag;
++ atomic_inc(&pag->pagf_fstrms);
+ } else if (max_pag) {
+ xfs_perag_rele(max_pag);
+ }
+
+- trace_xfs_filestream_pick(pag, pino, free);
++ trace_xfs_filestream_pick(pag, pino);
+ args->pag = pag;
+ return 0;
+-
+ }
+
+ static struct xfs_inode *
+diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
+index 180ce697305a92..f681a195a7441c 100644
+--- a/fs/xfs/xfs_trace.h
++++ b/fs/xfs/xfs_trace.h
+@@ -691,8 +691,8 @@ DEFINE_FILESTREAM_EVENT(xfs_filestream_lookup);
+ DEFINE_FILESTREAM_EVENT(xfs_filestream_scan);
+
+ TRACE_EVENT(xfs_filestream_pick,
+- TP_PROTO(struct xfs_perag *pag, xfs_ino_t ino, xfs_extlen_t free),
+- TP_ARGS(pag, ino, free),
++ TP_PROTO(struct xfs_perag *pag, xfs_ino_t ino),
++ TP_ARGS(pag, ino),
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(xfs_ino_t, ino)
+@@ -703,14 +703,9 @@ TRACE_EVENT(xfs_filestream_pick,
+ TP_fast_assign(
+ __entry->dev = pag->pag_mount->m_super->s_dev;
+ __entry->ino = ino;
+- if (pag) {
+- __entry->agno = pag->pag_agno;
+- __entry->streams = atomic_read(&pag->pagf_fstrms);
+- } else {
+- __entry->agno = NULLAGNUMBER;
+- __entry->streams = 0;
+- }
+- __entry->free = free;
++ __entry->agno = pag->pag_agno;
++ __entry->streams = atomic_read(&pag->pagf_fstrms);
++ __entry->free = pag->pagf_freeblks;
+ ),
+ TP_printk("dev %d:%d ino 0x%llx agno 0x%x streams %d free %d",
+ MAJOR(__entry->dev), MINOR(__entry->dev),
+diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
+index e1720d93066695..a451ca4c207bbe 100644
+--- a/include/acpi/cppc_acpi.h
++++ b/include/acpi/cppc_acpi.h
+@@ -65,7 +65,7 @@ struct cpc_desc {
+ int write_cmd_status;
+ int write_cmd_id;
+ /* Lock used for RMW operations in cpc_write() */
+- spinlock_t rmw_lock;
++ raw_spinlock_t rmw_lock;
+ struct cpc_register_resource cpc_regs[MAX_CPC_REG_ENT];
+ struct acpi_psd_package domain_info;
+ struct kobject kobj;
+diff --git a/include/drm/drm_kunit_helpers.h b/include/drm/drm_kunit_helpers.h
+index e7cc17ee4934a5..afdd46ef04f70d 100644
+--- a/include/drm/drm_kunit_helpers.h
++++ b/include/drm/drm_kunit_helpers.h
+@@ -120,4 +120,8 @@ drm_kunit_helper_create_crtc(struct kunit *test,
+ const struct drm_crtc_funcs *funcs,
+ const struct drm_crtc_helper_funcs *helper_funcs);
+
++struct drm_display_mode *
++drm_kunit_display_mode_from_cea_vic(struct kunit *test, struct drm_device *dev,
++ u8 video_code);
++
+ #endif // DRM_KUNIT_HELPERS_H_
+diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
+index aaf004d943228a..e45162ef59bb1a 100644
+--- a/include/linux/bpf_mem_alloc.h
++++ b/include/linux/bpf_mem_alloc.h
+@@ -33,6 +33,9 @@ int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma, struct obj_cgroup *objcg
+ int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size);
+ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma);
+
++/* Check the allocation size for kmalloc equivalent allocator */
++int bpf_mem_alloc_check_size(bool percpu, size_t size);
++
+ /* kmalloc/kfree equivalent: */
+ void *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size);
+ void bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr);
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index f805adaa316e9a..cd6f9aae311fca 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -80,7 +80,11 @@
+ #define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
+ #endif
+
++#ifdef __SANITIZE_HWADDRESS__
++#define __no_sanitize_address __attribute__((__no_sanitize__("hwaddress")))
++#else
+ #define __no_sanitize_address __attribute__((__no_sanitize_address__))
++#endif
+
+ #if defined(__SANITIZE_THREAD__)
+ #define __no_sanitize_thread __attribute__((__no_sanitize_thread__))
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 34eb20f5966f95..02d840f5c226ce 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -1073,6 +1073,9 @@ int device_for_each_child(struct device *dev, void *data,
+ int (*fn)(struct device *dev, void *data));
+ int device_for_each_child_reverse(struct device *dev, void *data,
+ int (*fn)(struct device *dev, void *data));
++int device_for_each_child_reverse_from(struct device *parent,
++ struct device *from, const void *data,
++ int (*fn)(struct device *, const void *));
+ struct device *device_find_child(struct device *dev, void *data,
+ int (*match)(struct device *dev, void *data));
+ struct device *device_find_child_by_name(struct device *parent,
+diff --git a/include/linux/dpll.h b/include/linux/dpll.h
+index d275736230b3b1..81f7b623d0ba67 100644
+--- a/include/linux/dpll.h
++++ b/include/linux/dpll.h
+@@ -15,6 +15,7 @@
+
+ struct dpll_device;
+ struct dpll_pin;
++struct dpll_pin_esync;
+
+ struct dpll_device_ops {
+ int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv,
+@@ -83,6 +84,13 @@ struct dpll_pin_ops {
+ int (*ffo_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s64 *ffo, struct netlink_ext_ack *extack);
++ int (*esync_set)(const struct dpll_pin *pin, void *pin_priv,
++ const struct dpll_device *dpll, void *dpll_priv,
++ u64 freq, struct netlink_ext_ack *extack);
++ int (*esync_get)(const struct dpll_pin *pin, void *pin_priv,
++ const struct dpll_device *dpll, void *dpll_priv,
++ struct dpll_pin_esync *esync,
++ struct netlink_ext_ack *extack);
+ };
+
+ struct dpll_pin_frequency {
+@@ -111,6 +119,13 @@ struct dpll_pin_phase_adjust_range {
+ s32 max;
+ };
+
++struct dpll_pin_esync {
++ u64 freq;
++ const struct dpll_pin_frequency *range;
++ u8 range_num;
++ u8 pulse;
++};
++
+ struct dpll_pin_properties {
+ const char *board_label;
+ const char *panel_label;
+diff --git a/include/linux/input.h b/include/linux/input.h
+index 89a0be6ee0e23b..cd866b020a01d6 100644
+--- a/include/linux/input.h
++++ b/include/linux/input.h
+@@ -339,12 +339,16 @@ struct input_handler {
+ * @name: name given to the handle by handler that created it
+ * @dev: input device the handle is attached to
+ * @handler: handler that works with the device through this handle
++ * @handle_events: event sequence handler. It is set up by the input core
++ *	according to the event handling method specified in @handler. See
++ *	input_handle_setup_event_handler().
++ *	This method is called by the input core with interrupts disabled and
++ *	the dev->event_lock spinlock held, so it must not sleep.
+ * @d_node: used to put the handle on device's list of attached handles
+ * @h_node: used to put the handle on handler's list of handles from which
+ * it gets events
+ */
+ struct input_handle {
+-
+ void *private;
+
+ int open;
+@@ -353,6 +357,10 @@ struct input_handle {
+ struct input_dev *dev;
+ struct input_handler *handler;
+
++ unsigned int (*handle_events)(struct input_handle *handle,
++ struct input_value *vals,
++ unsigned int count);
++
+ struct list_head d_node;
+ struct list_head h_node;
+ };
+diff --git a/include/linux/iomap.h b/include/linux/iomap.h
+index 6fc1c858013d1e..034399030609e4 100644
+--- a/include/linux/iomap.h
++++ b/include/linux/iomap.h
+@@ -256,6 +256,25 @@ static inline const struct iomap *iomap_iter_srcmap(const struct iomap_iter *i)
+ return &i->iomap;
+ }
+
++/*
++ * Check if the range needs to be unshared for a FALLOC_FL_UNSHARE_RANGE
++ * operation.
++ *
++ * Don't bother with blocks that are not shared to start with, or mappings that
++ * cannot be shared, such as inline data, delalloc reservations, holes or
++ * unwritten extents.
++ *
++ * Note that we use srcmap directly instead of iomap_iter_srcmap as unsharing
++ * requires providing a separate source map, and the presence of one is a good
++ * indicator that unsharing is needed, unlike IOMAP_F_SHARED which can be set
++ * for any data that goes into the COW fork for XFS.
++ */
++static inline bool iomap_want_unshare_iter(const struct iomap_iter *iter)
++{
++ return (iter->iomap.flags & IOMAP_F_SHARED) &&
++ iter->srcmap.type == IOMAP_MAPPED;
++}
++
+ ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
+ const struct iomap_ops *ops);
+ int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
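iomap_want_unshare_iter() is intended as the single gate in an unshare
loop. A rough sketch of how an iterator might use it (simplified; not the
actual fs/iomap implementation):

	static loff_t unshare_iter_sketch(struct iomap_iter *iter)
	{
		/* Skip extents that are not shared or cannot be unshared. */
		if (!iomap_want_unshare_iter(iter))
			return iomap_length(iter);
		/* ... rewrite the shared blocks in place to break sharing ... */
		return iomap_length(iter);
	}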
+diff --git a/include/linux/ksm.h b/include/linux/ksm.h
+index 11690dacd98684..ec9c05044d4fed 100644
+--- a/include/linux/ksm.h
++++ b/include/linux/ksm.h
+@@ -54,12 +54,11 @@ static inline long mm_ksm_zero_pages(struct mm_struct *mm)
+ return atomic_long_read(&mm->ksm_zero_pages);
+ }
+
+-static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
++static inline void ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+ {
++ /* Adding mm to ksm is best effort on fork. */
+ if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
+- return __ksm_enter(mm);
+-
+- return 0;
++ __ksm_enter(mm);
+ }
+
+ static inline int ksm_execve(struct mm_struct *mm)
+@@ -107,9 +106,8 @@ static inline int ksm_disable(struct mm_struct *mm)
+ return 0;
+ }
+
+-static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
++static inline void ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+ {
+- return 0;
+ }
+
+ static inline int ksm_execve(struct mm_struct *mm)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 1dc6248feb8324..fd04c8e942250c 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -458,9 +458,7 @@ struct lru_gen_folio {
+
+ enum {
+ MM_LEAF_TOTAL, /* total leaf entries */
+- MM_LEAF_OLD, /* old leaf entries */
+ MM_LEAF_YOUNG, /* young leaf entries */
+- MM_NONLEAF_TOTAL, /* total non-leaf entries */
+ MM_NONLEAF_FOUND, /* non-leaf entries found in Bloom filters */
+ MM_NONLEAF_ADDED, /* non-leaf entries added to Bloom filters */
+ NR_MM_STATS
+@@ -557,7 +555,7 @@ struct lru_gen_memcg {
+
+ void lru_gen_init_pgdat(struct pglist_data *pgdat);
+ void lru_gen_init_lruvec(struct lruvec *lruvec);
+-void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
++bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
+
+ void lru_gen_init_memcg(struct mem_cgroup *memcg);
+ void lru_gen_exit_memcg(struct mem_cgroup *memcg);
+@@ -576,8 +574,9 @@ static inline void lru_gen_init_lruvec(struct lruvec *lruvec)
+ {
+ }
+
+-static inline void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
++static inline bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+ {
++ return false;
+ }
+
+ static inline void lru_gen_init_memcg(struct mem_cgroup *memcg)
+diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
+index d9ac7b136aeab2..522123050ff863 100644
+--- a/include/linux/rcutiny.h
++++ b/include/linux/rcutiny.h
+@@ -111,6 +111,11 @@ static inline void __kvfree_call_rcu(struct rcu_head *head, void *ptr)
+ kvfree(ptr);
+ }
+
++static inline void kvfree_rcu_barrier(void)
++{
++ rcu_barrier();
++}
++
+ #ifdef CONFIG_KASAN_GENERIC
+ void kvfree_call_rcu(struct rcu_head *head, void *ptr);
+ #else
+diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
+index 254244202ea976..58e7db80f3a8e9 100644
+--- a/include/linux/rcutree.h
++++ b/include/linux/rcutree.h
+@@ -35,6 +35,7 @@ static inline void rcu_virt_note_context_switch(void)
+
+ void synchronize_rcu_expedited(void);
+ void kvfree_call_rcu(struct rcu_head *head, void *ptr);
++void kvfree_rcu_barrier(void);
+
+ void rcu_barrier(void);
+ void rcu_momentary_dyntick_idle(void);
+diff --git a/include/linux/tick.h b/include/linux/tick.h
+index 72744638c5b0fd..99c9c5a7252aab 100644
+--- a/include/linux/tick.h
++++ b/include/linux/tick.h
+@@ -251,12 +251,19 @@ static inline void tick_dep_set_task(struct task_struct *tsk,
+ if (tick_nohz_full_enabled())
+ tick_nohz_dep_set_task(tsk, bit);
+ }
++
+ static inline void tick_dep_clear_task(struct task_struct *tsk,
+ enum tick_dep_bits bit)
+ {
+ if (tick_nohz_full_enabled())
+ tick_nohz_dep_clear_task(tsk, bit);
+ }
++
++static inline void tick_dep_init_task(struct task_struct *tsk)
++{
++ atomic_set(&tsk->tick_dep_mask, 0);
++}
++
+ static inline void tick_dep_set_signal(struct task_struct *tsk,
+ enum tick_dep_bits bit)
+ {
+@@ -290,6 +297,7 @@ static inline void tick_dep_set_task(struct task_struct *tsk,
+ enum tick_dep_bits bit) { }
+ static inline void tick_dep_clear_task(struct task_struct *tsk,
+ enum tick_dep_bits bit) { }
++static inline void tick_dep_init_task(struct task_struct *tsk) { }
+ static inline void tick_dep_set_signal(struct task_struct *tsk,
+ enum tick_dep_bits bit) { }
+ static inline void tick_dep_clear_signal(struct signal_struct *signal,
+diff --git a/include/linux/ubsan.h b/include/linux/ubsan.h
+index bff7445498ded4..d8219cbe09ff8d 100644
+--- a/include/linux/ubsan.h
++++ b/include/linux/ubsan.h
+@@ -4,6 +4,11 @@
+
+ #ifdef CONFIG_UBSAN_TRAP
+ const char *report_ubsan_failure(struct pt_regs *regs, u32 check_type);
++#else
++static inline const char *report_ubsan_failure(struct pt_regs *regs, u32 check_type)
++{
++ return NULL;
++}
+ #endif
+
+ #endif
+diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
+index a12bcf042551ea..f4a45a37229ade 100644
+--- a/include/linux/userfaultfd_k.h
++++ b/include/linux/userfaultfd_k.h
+@@ -249,6 +249,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
+
+ extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
+ extern void dup_userfaultfd_complete(struct list_head *);
++void dup_userfaultfd_fail(struct list_head *);
+
+ extern void mremap_userfaultfd_prep(struct vm_area_struct *,
+ struct vm_userfaultfd_ctx *);
+@@ -332,6 +333,10 @@ static inline void dup_userfaultfd_complete(struct list_head *l)
+ {
+ }
+
++static inline void dup_userfaultfd_fail(struct list_head *l)
++{
++}
++
+ static inline void mremap_userfaultfd_prep(struct vm_area_struct *vma,
+ struct vm_userfaultfd_ctx *ctx)
+ {
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index 1db2417b8ff52d..35d1e09940b278 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -354,7 +354,7 @@ static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
+ memset(fl4, 0, sizeof(*fl4));
+
+ if (oif) {
+- fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index_rcu(net, oif);
++ fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index(net, oif);
+ /* Legacy VRF/l3mdev use case */
+ fl4->flowi4_oif = fl4->flowi4_l3mdev ? 0 : oif;
+ }
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index 450c44c83a5d21..a0aed1a428a183 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -331,7 +331,11 @@ enum yfs_cm_operation {
+ EM(afs_edit_dir_delete, "delete") \
+ EM(afs_edit_dir_delete_error, "d_err ") \
+ EM(afs_edit_dir_delete_inval, "d_invl") \
+- E_(afs_edit_dir_delete_noent, "d_nent")
++ EM(afs_edit_dir_delete_noent, "d_nent") \
++ EM(afs_edit_dir_update_dd, "u_ddot") \
++ EM(afs_edit_dir_update_error, "u_fail") \
++ EM(afs_edit_dir_update_inval, "u_invl") \
++ E_(afs_edit_dir_update_nodd, "u_nodd")
+
+ #define afs_edit_dir_reasons \
+ EM(afs_edit_dir_for_create, "Create") \
+@@ -340,6 +344,7 @@ enum yfs_cm_operation {
+ EM(afs_edit_dir_for_rename_0, "Renam0") \
+ EM(afs_edit_dir_for_rename_1, "Renam1") \
+ EM(afs_edit_dir_for_rename_2, "Renam2") \
++ EM(afs_edit_dir_for_rename_sub, "RnmSub") \
+ EM(afs_edit_dir_for_rmdir, "RmDir ") \
+ EM(afs_edit_dir_for_silly_0, "S_Ren0") \
+ EM(afs_edit_dir_for_silly_1, "S_Ren1") \
+diff --git a/include/uapi/linux/dpll.h b/include/uapi/linux/dpll.h
+index 0c13d7f1a1bc3b..b0654ade7b7eba 100644
+--- a/include/uapi/linux/dpll.h
++++ b/include/uapi/linux/dpll.h
+@@ -210,6 +210,9 @@ enum dpll_a_pin {
+ DPLL_A_PIN_PHASE_ADJUST,
+ DPLL_A_PIN_PHASE_OFFSET,
+ DPLL_A_PIN_FRACTIONAL_FREQUENCY_OFFSET,
++ DPLL_A_PIN_ESYNC_FREQUENCY,
++ DPLL_A_PIN_ESYNC_FREQUENCY_SUPPORTED,
++ DPLL_A_PIN_ESYNC_PULSE,
+
+ __DPLL_A_PIN_MAX,
+ DPLL_A_PIN_MAX = (__DPLL_A_PIN_MAX - 1)
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 6b3bc0876f7fe0..19e2c1f9c4a212 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -1016,6 +1016,25 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ return IOU_OK;
+ }
+
++static bool io_kiocb_start_write(struct io_kiocb *req, struct kiocb *kiocb)
++{
++ struct inode *inode;
++ bool ret;
++
++ if (!(req->flags & REQ_F_ISREG))
++ return true;
++ if (!(kiocb->ki_flags & IOCB_NOWAIT)) {
++ kiocb_start_write(kiocb);
++ return true;
++ }
++
++ inode = file_inode(kiocb->ki_filp);
++ ret = sb_start_write_trylock(inode->i_sb);
++ if (ret)
++ __sb_writers_release(inode->i_sb, SB_FREEZE_WRITE);
++ return ret;
++}
++
+ int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+@@ -1053,8 +1072,8 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ if (unlikely(ret))
+ return ret;
+
+- if (req->flags & REQ_F_ISREG)
+- kiocb_start_write(kiocb);
++ if (unlikely(!io_kiocb_start_write(req, kiocb)))
++ return -EAGAIN;
+ kiocb->ki_flags |= IOCB_WRITE;
+
+ if (likely(req->file->f_op->write_iter))
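The io_kiocb_start_write() helper above takes SB_FREEZE_WRITE protection
without blocking when IOCB_NOWAIT is set. The core of the trick, as a
sketch of the underlying primitives:

	/* Sketch: non-blocking freeze protection for a nowait write.
	 * sb_start_write_trylock() fails instead of sleeping on a frozen
	 * superblock; __sb_writers_release() drops the lockdep tracking so
	 * kiocb_end_write() can release the protection from another task.
	 */
	if (!sb_start_write_trylock(inode->i_sb))
		return -EAGAIN;		/* retry from a blocking context */
	__sb_writers_release(inode->i_sb, SB_FREEZE_WRITE);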
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 8ba73042a23952..479a2ea5d9af65 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -24,6 +24,23 @@
+ DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
+ EXPORT_SYMBOL(cgroup_bpf_enabled_key);
+
++/*
++ * cgroup bpf destruction makes heavy use of work items and there can be a lot
++ * of concurrent destructions. Use a separate workqueue so that cgroup bpf
++ * destruction work items don't end up filling up max_active of system_wq,
++ * which may lead to deadlock.
++ */
++static struct workqueue_struct *cgroup_bpf_destroy_wq;
++
++static int __init cgroup_bpf_wq_init(void)
++{
++ cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy", 0, 1);
++ if (!cgroup_bpf_destroy_wq)
++ panic("Failed to alloc workqueue for cgroup bpf destroy.\n");
++ return 0;
++}
++core_initcall(cgroup_bpf_wq_init);
++
+ /* __always_inline is necessary to prevent indirect call through run_prog
+ * function pointer.
+ */
+@@ -334,7 +351,7 @@ static void cgroup_bpf_release_fn(struct percpu_ref *ref)
+ struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt);
+
+ INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release);
+- queue_work(system_wq, &cgrp->bpf.release_work);
++ queue_work(cgroup_bpf_destroy_wq, &cgrp->bpf.release_work);
+ }
+
+ /* Get underlying bpf_prog of bpf_prog_list entry, regardless if it's through
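The same fix pattern in isolation, with hypothetical names: a subsystem
whose release callbacks can pile up queues them on a dedicated workqueue
instead of competing for system_wq's max_active budget (a sketch, not the
cgroup code):

	static struct workqueue_struct *my_destroy_wq;	/* hypothetical */

	static int __init my_wq_init(void)
	{
		/* max_active = 1: destruction items run one at a time */
		my_destroy_wq = alloc_workqueue("my_destroy", 0, 1);
		return my_destroy_wq ? 0 : -ENOMEM;
	}

	/* ... and at release time: */
	queue_work(my_destroy_wq, &obj->release_work);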
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index cc8c00864a6807..8419de44c51749 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -2830,12 +2830,14 @@ struct bpf_iter_bits {
+ __u64 __opaque[2];
+ } __aligned(8);
+
++#define BITS_ITER_NR_WORDS_MAX 511
++
+ struct bpf_iter_bits_kern {
+ union {
+ unsigned long *bits;
+ unsigned long bits_copy;
+ };
+- u32 nr_bits;
++ int nr_bits;
+ int bit;
+ } __aligned(8);
+
+@@ -2844,7 +2846,8 @@ struct bpf_iter_bits_kern {
+ * @it: The new bpf_iter_bits to be created
+ * @unsafe_ptr__ign: A pointer pointing to a memory area to be iterated over
+ * @nr_words: The size of the specified memory area, measured in 8-byte units.
+- * Due to the limitation of memalloc, it can't be greater than 512.
++ * The maximum value of @nr_words is @BITS_ITER_NR_WORDS_MAX. This limit may be
++ * further reduced by the BPF memory allocator implementation.
+ *
+ * This function initializes a new bpf_iter_bits structure for iterating over
+ * a memory area which is specified by the @unsafe_ptr__ign and @nr_words. It
+@@ -2871,6 +2874,8 @@ bpf_iter_bits_new(struct bpf_iter_bits *it, const u64 *unsafe_ptr__ign, u32 nr_w
+
+ if (!unsafe_ptr__ign || !nr_words)
+ return -EINVAL;
++ if (nr_words > BITS_ITER_NR_WORDS_MAX)
++ return -E2BIG;
+
+ /* Optimization for u64 mask */
+ if (nr_bits == 64) {
+@@ -2882,6 +2887,9 @@ bpf_iter_bits_new(struct bpf_iter_bits *it, const u64 *unsafe_ptr__ign, u32 nr_w
+ return 0;
+ }
+
++ if (bpf_mem_alloc_check_size(false, nr_bytes))
++ return -E2BIG;
++
+ /* Fallback to memalloc */
+ kit->bits = bpf_mem_alloc(&bpf_global_ma, nr_bytes);
+ if (!kit->bits)
+@@ -2909,17 +2917,16 @@ bpf_iter_bits_new(struct bpf_iter_bits *it, const u64 *unsafe_ptr__ign, u32 nr_w
+ __bpf_kfunc int *bpf_iter_bits_next(struct bpf_iter_bits *it)
+ {
+ struct bpf_iter_bits_kern *kit = (void *)it;
+- u32 nr_bits = kit->nr_bits;
++ int bit = kit->bit, nr_bits = kit->nr_bits;
+ const unsigned long *bits;
+- int bit;
+
+- if (nr_bits == 0)
++ if (!nr_bits || bit >= nr_bits)
+ return NULL;
+
+ bits = nr_bits == 64 ? &kit->bits_copy : kit->bits;
+- bit = find_next_bit(bits, nr_bits, kit->bit + 1);
++ bit = find_next_bit(bits, nr_bits, bit + 1);
+ if (bit >= nr_bits) {
+- kit->nr_bits = 0;
++ kit->bit = bit;
+ return NULL;
+ }
+
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 0218a5132ab562..9b60eda0f727b3 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -655,7 +655,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
+ if (!key || key->prefixlen > trie->max_prefixlen)
+ goto find_leftmost;
+
+- node_stack = kmalloc_array(trie->max_prefixlen,
++ node_stack = kmalloc_array(trie->max_prefixlen + 1,
+ sizeof(struct lpm_trie_node *),
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (!node_stack)
+diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
+index dec892ded031e0..b2c7a4c49be775 100644
+--- a/kernel/bpf/memalloc.c
++++ b/kernel/bpf/memalloc.c
+@@ -35,6 +35,8 @@
+ */
+ #define LLIST_NODE_SZ sizeof(struct llist_node)
+
++#define BPF_MEM_ALLOC_SIZE_MAX 4096
++
+ /* similar to kmalloc, but sizeof == 8 bucket is gone */
+ static u8 size_index[24] __ro_after_init = {
+ 3, /* 8 */
+@@ -65,7 +67,7 @@ static u8 size_index[24] __ro_after_init = {
+
+ static int bpf_mem_cache_idx(size_t size)
+ {
+- if (!size || size > 4096)
++ if (!size || size > BPF_MEM_ALLOC_SIZE_MAX)
+ return -1;
+
+ if (size <= 192)
+@@ -1005,3 +1007,13 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
+
+ return !ret ? NULL : ret + LLIST_NODE_SZ;
+ }
++
++int bpf_mem_alloc_check_size(bool percpu, size_t size)
++{
++ /* The size of percpu allocation doesn't have LLIST_NODE_SZ overhead */
++ if ((percpu && size > BPF_MEM_ALLOC_SIZE_MAX) ||
++ (!percpu && size > BPF_MEM_ALLOC_SIZE_MAX - LLIST_NODE_SZ))
++ return -E2BIG;
++
++ return 0;
++}
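Callers are expected to validate a size before taking the kmalloc-equivalent
path, exactly as the bpf_iter_bits hunk above does; the pattern restated as
a sketch:

	/* Sketch: reject sizes the BPF allocator cannot satisfy up front. */
	if (bpf_mem_alloc_check_size(false /* !percpu */, nr_bytes))
		return -E2BIG;
	ptr = bpf_mem_alloc(&bpf_global_ma, nr_bytes);
	if (!ptr)
		return -ENOMEM;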
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 77b60896200ef0..626c5284ca5a84 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -17416,9 +17416,11 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
+ struct bpf_verifier_state_list *sl, **pprev;
+ struct bpf_verifier_state *cur = env->cur_state, *new, *loop_entry;
+ int i, j, n, err, states_cnt = 0;
+- bool force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx);
+- bool add_new_state = force_new_state;
+- bool force_exact;
++ bool force_new_state, add_new_state, force_exact;
++
++ force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx) ||
++ /* Avoid accumulating infinitely long jmp history */
++ cur->jmp_history_cnt > 40;
+
+ /* bpf progs typically have pruning point every 4 instructions
+ * http://vger.kernel.org/bpfconf2019.html#session-1
+@@ -17428,6 +17430,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
+ * In tests that amounts to up to 50% reduction into total verifier
+ * memory consumption and 20% verifier time speedup.
+ */
++ add_new_state = force_new_state;
+ if (env->jmps_processed - env->prev_jmps_processed >= 2 &&
+ env->insn_processed - env->prev_insn_processed >= 8)
+ add_new_state = true;
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index c8e4b62b436a48..6ba7dd2ab771d0 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -5722,7 +5722,7 @@ static bool cgroup_check_hierarchy_limits(struct cgroup *parent)
+ {
+ struct cgroup *cgroup;
+ int ret = false;
+- int level = 1;
++ int level = 0;
+
+ lockdep_assert_held(&cgroup_mutex);
+
+@@ -5730,7 +5730,7 @@ static bool cgroup_check_hierarchy_limits(struct cgroup *parent)
+ if (cgroup->nr_descendants >= cgroup->max_descendants)
+ goto fail;
+
+- if (level > cgroup->max_depth)
++ if (level >= cgroup->max_depth)
+ goto fail;
+
+ level++;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 6b97fb2ac4af56..dc08a23747338c 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -104,6 +104,7 @@
+ #include <linux/rseq.h>
+ #include <uapi/linux/pidfd.h>
+ #include <linux/pidfs.h>
++#include <linux/tick.h>
+
+ #include <asm/pgalloc.h>
+ #include <linux/uaccess.h>
+@@ -652,11 +653,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ mm->exec_vm = oldmm->exec_vm;
+ mm->stack_vm = oldmm->stack_vm;
+
+- retval = ksm_fork(mm, oldmm);
+- if (retval)
+- goto out;
+- khugepaged_fork(mm, oldmm);
+-
+ /* Use __mt_dup() to efficiently build an identical maple tree. */
+ retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
+ if (unlikely(retval))
+@@ -759,6 +755,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ vma_iter_free(&vmi);
+ if (!retval) {
+ mt_set_in_rcu(vmi.mas.tree);
++ ksm_fork(mm, oldmm);
++ khugepaged_fork(mm, oldmm);
+ } else if (mpnt) {
+ /*
+ * The entire maple tree has already been duplicated. If the
+@@ -774,7 +772,10 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ mmap_write_unlock(mm);
+ flush_tlb_mm(oldmm);
+ mmap_write_unlock(oldmm);
+- dup_userfaultfd_complete(&uf);
++ if (!retval)
++ dup_userfaultfd_complete(&uf);
++ else
++ dup_userfaultfd_fail(&uf);
+ fail_uprobe_end:
+ uprobe_end_dup_mmap();
+ return retval;
+@@ -2290,6 +2291,7 @@ __latent_entropy struct task_struct *copy_process(
+ acct_clear_integrals(p);
+
+ posix_cputimers_init(&p->posix_cputimers);
++ tick_dep_init_task(p);
+
+ p->io_context = NULL;
+ audit_set_context(p, NULL);
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index e641cc681901a5..8af1354b223194 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3584,18 +3584,15 @@ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
+ }
+
+ /*
+- * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
++ * Return: %true if a work is queued, %false otherwise.
+ */
+-static void kfree_rcu_monitor(struct work_struct *work)
++static bool
++kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
+ {
+- struct kfree_rcu_cpu *krcp = container_of(work,
+- struct kfree_rcu_cpu, monitor_work.work);
+ unsigned long flags;
++ bool queued = false;
+ int i, j;
+
+- // Drain ready for reclaim.
+- kvfree_rcu_drain_ready(krcp);
+-
+ raw_spin_lock_irqsave(&krcp->lock, flags);
+
+ // Attempt to start a new batch.
+@@ -3630,15 +3627,32 @@ static void kfree_rcu_monitor(struct work_struct *work)
+ }
+
+ // One work is per one batch, so there are three
+- // "free channels", the batch can handle. It can
+- // be that the work is in the pending state when
+- // channels have been detached following by each
+- // other.
+- queue_rcu_work(system_wq, &krwp->rcu_work);
++			// "free channels" the batch can handle. Break
++			// out of the loop once this CPU is done, since
++			// queuing the RCU work here always succeeds.
++ queued = queue_rcu_work(system_wq, &krwp->rcu_work);
++ WARN_ON_ONCE(!queued);
++ break;
+ }
+ }
+
+ raw_spin_unlock_irqrestore(&krcp->lock, flags);
++ return queued;
++}
++
++/*
++ * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
++ */
++static void kfree_rcu_monitor(struct work_struct *work)
++{
++ struct kfree_rcu_cpu *krcp = container_of(work,
++ struct kfree_rcu_cpu, monitor_work.work);
++
++ // Drain ready for reclaim.
++ kvfree_rcu_drain_ready(krcp);
++
++	// Queue a batch for the rest.
++ kvfree_rcu_queue_batch(krcp);
+
+ // If there is nothing to detach, it means that our job is
+ // successfully done here. In case of having at least one
+@@ -3859,6 +3873,86 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
+ }
+ EXPORT_SYMBOL_GPL(kvfree_call_rcu);
+
++/**
++ * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
++ *
++ * Note that the single-argument form of kvfree_rcu() has a slow path that
++ * triggers synchronize_rcu() followed by freeing the pointer, all before
++ * the function returns. Therefore, for any single-argument call that will
++ * result in a kfree() to a cache that is to be destroyed during module
++ * exit, it is the developer's responsibility to ensure that all such
++ * calls have returned before the call to kmem_cache_destroy().
++ */
++void kvfree_rcu_barrier(void)
++{
++ struct kfree_rcu_cpu_work *krwp;
++ struct kfree_rcu_cpu *krcp;
++ bool queued;
++ int i, cpu;
++
++ /*
++	 * First, we detach objects and queue them into an RCU batch for
++	 * all CPUs. Finally, the queued works are flushed for each CPU.
++	 *
++	 * Please note: if there are outstanding batches for a particular
++	 * CPU, those have to finish first, followed by queuing new ones.
++ */
++ for_each_possible_cpu(cpu) {
++ krcp = per_cpu_ptr(&krc, cpu);
++
++ /*
++		 * Check if this CPU has any objects queued for a new GP
++		 * completion. If not (nothing to detach), we are done with
++		 * it. If any batch is pending/running for this "krcp", the
++		 * per-CPU flush_rcu_work() below waits for it (see last step).
++ */
++ if (!need_offload_krc(krcp))
++ continue;
++
++ while (1) {
++ /*
++			 * If we are not able to queue new RCU work, it means:
++			 * - batches for this CPU are still in flight; they
++			 *   should be flushed first, and then we repeat;
++			 * - there are no objects to detach, due to concurrency.
++ */
++ queued = kvfree_rcu_queue_batch(krcp);
++
++ /*
++			 * Bail out if there is no need to offload this "krcp"
++			 * anymore. As noted earlier, it can run concurrently.
++ */
++ if (queued || !need_offload_krc(krcp))
++ break;
++
++ /* There are ongoing batches. */
++ for (i = 0; i < KFREE_N_BATCHES; i++) {
++ krwp = &(krcp->krw_arr[i]);
++ flush_rcu_work(&krwp->rcu_work);
++ }
++ }
++ }
++
++ /*
++ * Now we guarantee that all objects are flushed.
++ */
++ for_each_possible_cpu(cpu) {
++ krcp = per_cpu_ptr(&krc, cpu);
++
++ /*
++		 * A monitor work can drain ready-to-reclaim objects
++		 * directly. Wait for its completion if running or pending.
++ */
++ cancel_delayed_work_sync(&krcp->monitor_work);
++
++ for (i = 0; i < KFREE_N_BATCHES; i++) {
++ krwp = &(krcp->krw_arr[i]);
++ flush_rcu_work(&krwp->rcu_work);
++ }
++ }
++}
++EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
++
+ static unsigned long
+ kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
+ {
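As the kerneldoc above implies, the typical caller of kvfree_rcu_barrier()
is a module-exit path that must not destroy a cache while kvfree_rcu() work
still references it (compare the lib/codetag.c hunk below). A sketch with
hypothetical names:

	static void __exit my_module_exit(void)
	{
		/* Flush all in-flight kvfree_rcu() batches first... */
		kvfree_rcu_barrier();
		/* ... then it is safe to destroy the backing cache. */
		kmem_cache_destroy(my_cache);
	}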
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 1681ab5012e12b..4f3df25176caaa 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -460,9 +460,7 @@ int walk_system_ram_res_rev(u64 start, u64 end, void *arg,
+ rams_size += 16;
+ }
+
+- rams[i].start = res.start;
+- rams[i++].end = res.end;
+-
++ rams[i++] = res;
+ start = res.end + 1;
+ }
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 1d2cbdb162a67d..425348b8d9eb38 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3289,7 +3289,7 @@ static void task_numa_work(struct callback_head *work)
+ vma = vma_next(&vmi);
+ }
+
+- do {
++ for (; vma; vma = vma_next(&vmi)) {
+ if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
+ is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
+ trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_UNSUITABLE);
+@@ -3411,7 +3411,7 @@ static void task_numa_work(struct callback_head *work)
+ */
+ if (vma_pids_forced)
+ break;
+- } for_each_vma(vmi, vma);
++ }
+
+ /*
+ * If no VMAs are remaining and VMAs were skipped due to the PID
+diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
+index bdda600f8dfbeb..1d4aa7a83b3a55 100644
+--- a/lib/Kconfig.ubsan
++++ b/lib/Kconfig.ubsan
+@@ -29,8 +29,8 @@ config UBSAN_TRAP
+
+ Also note that selecting Y will cause your kernel to Oops
+ with an "illegal instruction" error with no further details
+- when a UBSAN violation occurs. (Except on arm64, which will
+- report which Sanitizer failed.) This may make it hard to
++ when a UBSAN violation occurs. (Except on arm64 and x86, which
++ will report which Sanitizer failed.) This may make it hard to
+ determine whether an Oops was caused by UBSAN or to figure
+ out the details of a UBSAN violation. It makes the kernel log
+ output less useful for bug reports.
+diff --git a/lib/codetag.c b/lib/codetag.c
+index afa8a2d4f31733..d1fbbb7c2ec3d0 100644
+--- a/lib/codetag.c
++++ b/lib/codetag.c
+@@ -228,6 +228,9 @@ bool codetag_unload_module(struct module *mod)
+ if (!mod)
+ return true;
+
++ /* await any module's kfree_rcu() operations to complete */
++ kvfree_rcu_barrier();
++
+ mutex_lock(&codetag_lock);
+ list_for_each_entry(cttype, &codetag_types, link) {
+ struct codetag_module *found = NULL;
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 4a6a9f419bd7eb..b892894228b03e 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -461,6 +461,8 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
+ size_t bytes, struct iov_iter *i)
+ {
+ size_t n, copied = 0;
++ bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
++ PageHighMem(page);
+
+ if (!page_copy_sane(page, offset, bytes))
+ return 0;
+@@ -471,7 +473,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
+ char *p;
+
+ n = bytes - copied;
+- if (PageHighMem(page)) {
++ if (uses_kmap) {
+ page += offset / PAGE_SIZE;
+ offset %= PAGE_SIZE;
+ n = min_t(size_t, n, PAGE_SIZE - offset);
+@@ -482,7 +484,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
+ kunmap_atomic(p);
+ copied += n;
+ offset += n;
+- } while (PageHighMem(page) && copied != bytes && n > 0);
++ } while (uses_kmap && copied != bytes && n > 0);
+
+ return copied;
+ }
+diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
+index e6667a28c01491..af5b9c41d5b30c 100644
+--- a/lib/slub_kunit.c
++++ b/lib/slub_kunit.c
+@@ -140,7 +140,7 @@ static void test_kmalloc_redzone_access(struct kunit *test)
+ {
+ struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
+ SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
+- u8 *p = __kmalloc_cache_noprof(s, GFP_KERNEL, 18);
++ u8 *p = alloc_hooks(__kmalloc_cache_noprof(s, GFP_KERNEL, 18));
+
+ kasan_disable_current();
+
+diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
+index 7b32be2a3cf0e8..9efde47f80698f 100644
+--- a/mm/kasan/kasan_test.c
++++ b/mm/kasan/kasan_test.c
+@@ -1765,32 +1765,6 @@ static void vm_map_ram_tags(struct kunit *test)
+ free_pages((unsigned long)p_ptr, 1);
+ }
+
+-static void vmalloc_percpu(struct kunit *test)
+-{
+- char __percpu *ptr;
+- int cpu;
+-
+- /*
+- * This test is specifically crafted for the software tag-based mode,
+- * the only tag-based mode that poisons percpu mappings.
+- */
+- KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);
+-
+- ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
+-
+- for_each_possible_cpu(cpu) {
+- char *c_ptr = per_cpu_ptr(ptr, cpu);
+-
+- KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN);
+- KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL);
+-
+- /* Make sure that in-bounds accesses don't crash the kernel. */
+- *c_ptr = 0;
+- }
+-
+- free_percpu(ptr);
+-}
+-
+ /*
+ * Check that the assigned pointer tag falls within the [KASAN_TAG_MIN,
+ * KASAN_TAG_KERNEL) range (note: excluding the match-all tag) for tag-based
+@@ -1967,7 +1941,6 @@ static struct kunit_case kasan_kunit_test_cases[] = {
+ KUNIT_CASE(vmalloc_oob),
+ KUNIT_CASE(vmap_tags),
+ KUNIT_CASE(vm_map_ram_tags),
+- KUNIT_CASE(vmalloc_percpu),
+ KUNIT_CASE(match_all_not_assigned),
+ KUNIT_CASE(match_all_ptr_tag),
+ KUNIT_CASE(match_all_mem_tag),
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 368ab3878fa6e0..75b858bd6aa58f 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1099,7 +1099,7 @@ static void migrate_folio_done(struct folio *src,
+ * not accounted to NR_ISOLATED_*. They can be recognized
+ * as __folio_test_movable
+ */
+- if (likely(!__folio_test_movable(src)))
++ if (likely(!__folio_test_movable(src)) && reason != MR_DEMOTION)
+ mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
+ folio_is_file_lru(src), -folio_nr_pages(src));
+
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 18fddcce03b851..8a04f29aa4230d 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1952,7 +1952,8 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+
+ if (get_area) {
+ addr = get_area(file, addr, len, pgoff, flags);
+- } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
++ } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
++ && IS_ALIGNED(len, PMD_SIZE)) {
+ /* Ensures that larger anonymous mappings are THP aligned. */
+ addr = thp_get_unmapped_area_vmflags(file, addr, len,
+ pgoff, flags, vm_flags);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 91ace8ca97e21f..ec459522c29349 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2874,12 +2874,12 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
+ page = __rmqueue(zone, order, migratetype, alloc_flags);
+
+ /*
+- * If the allocation fails, allow OOM handling access
+- * to HIGHATOMIC reserves as failing now is worse than
+- * failing a high-order atomic allocation in the
+- * future.
++ * If the allocation fails, allow OOM handling and
++ * order-0 (atomic) allocs access to HIGHATOMIC
++ * reserves as failing now is worse than failing a
++ * high-order atomic allocation in the future.
+ */
+- if (!page && (alloc_flags & ALLOC_OOM))
++ if (!page && (alloc_flags & (ALLOC_OOM|ALLOC_NON_BLOCK)))
+ page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+
+ if (!page) {
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 2490e727e2dcbc..3d89847f01dadb 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -75,6 +75,7 @@
+ #include <linux/memremap.h>
+ #include <linux/userfaultfd_k.h>
+ #include <linux/mm_inline.h>
++#include <linux/oom.h>
+
+ #include <asm/tlbflush.h>
+
+@@ -870,13 +871,24 @@ static bool folio_referenced_one(struct folio *folio,
+ continue;
+ }
+
+- if (pvmw.pte) {
+- if (lru_gen_enabled() &&
+- pte_young(ptep_get(pvmw.pte))) {
+- lru_gen_look_around(&pvmw);
+- referenced++;
+- }
++ /*
++ * Skip the non-shared swapbacked folio mapped solely by
++ * the exiting or OOM-reaped process. This avoids redundant
++ * swap-out followed by an immediate unmap.
++ */
++ if ((!atomic_read(&vma->vm_mm->mm_users) ||
++ check_stable_address_space(vma->vm_mm)) &&
++ folio_test_anon(folio) && folio_test_swapbacked(folio) &&
++ !folio_likely_mapped_shared(folio)) {
++ pra->referenced = -1;
++ page_vma_mapped_walk_done(&pvmw);
++ return false;
++ }
+
++ if (lru_gen_enabled() && pvmw.pte) {
++ if (lru_gen_look_around(&pvmw))
++ referenced++;
++ } else if (pvmw.pte) {
+ if (ptep_clear_flush_young_notify(vma, address,
+ pvmw.pte))
+ referenced++;
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 27f496d6e43eb6..941d1271395202 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1163,7 +1163,9 @@ static int shmem_getattr(struct mnt_idmap *idmap,
+ stat->attributes_mask |= (STATX_ATTR_APPEND |
+ STATX_ATTR_IMMUTABLE |
+ STATX_ATTR_NODUMP);
++ inode_lock_shared(inode);
+ generic_fillattr(idmap, request_mask, inode, stat);
++ inode_unlock_shared(inode);
+
+ if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
+ stat->blksize = HPAGE_PMD_SIZE;
+diff --git a/mm/shrinker.c b/mm/shrinker.c
+index dc5d2a6fcfc414..4a93fd433689a0 100644
+--- a/mm/shrinker.c
++++ b/mm/shrinker.c
+@@ -76,19 +76,21 @@ void free_shrinker_info(struct mem_cgroup *memcg)
+
+ int alloc_shrinker_info(struct mem_cgroup *memcg)
+ {
+- struct shrinker_info *info;
+ int nid, ret = 0;
+ int array_size = 0;
+
+ mutex_lock(&shrinker_mutex);
+ array_size = shrinker_unit_size(shrinker_nr_max);
+ for_each_node(nid) {
+- info = kvzalloc_node(sizeof(*info) + array_size, GFP_KERNEL, nid);
++ struct shrinker_info *info = kvzalloc_node(sizeof(*info) + array_size,
++ GFP_KERNEL, nid);
+ if (!info)
+ goto err;
+ info->map_nr_max = shrinker_nr_max;
+- if (shrinker_unit_alloc(info, NULL, nid))
++ if (shrinker_unit_alloc(info, NULL, nid)) {
++ kvfree(info);
+ goto err;
++ }
+ rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
+ }
+ mutex_unlock(&shrinker_mutex);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 128f307da6eeac..f5bcd08527ae0f 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -56,6 +56,7 @@
+ #include <linux/khugepaged.h>
+ #include <linux/rculist_nulls.h>
+ #include <linux/random.h>
++#include <linux/mmu_notifier.h>
+
+ #include <asm/tlbflush.h>
+ #include <asm/div64.h>
+@@ -863,7 +864,12 @@ static enum folio_references folio_check_references(struct folio *folio,
+ if (vm_flags & VM_LOCKED)
+ return FOLIOREF_ACTIVATE;
+
+- /* rmap lock contention: rotate */
++ /*
++ * There are two cases to consider.
++ * 1) Rmap lock contention: rotate.
++ * 2) Skip the non-shared swapbacked folio mapped solely by
++ * the exiting or OOM-reaped process.
++ */
+ if (referenced_ptes == -1)
+ return FOLIOREF_KEEP;
+
+@@ -3271,7 +3277,8 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
+ return false;
+ }
+
+-static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
++static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr,
++ struct pglist_data *pgdat)
+ {
+ unsigned long pfn = pte_pfn(pte);
+
+@@ -3283,13 +3290,20 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
+ if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+ return -1;
+
++ if (!pte_young(pte) && !mm_has_notifiers(vma->vm_mm))
++ return -1;
++
+ if (WARN_ON_ONCE(!pfn_valid(pfn)))
+ return -1;
+
++ if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
++ return -1;
++
+ return pfn;
+ }
+
+-static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr)
++static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr,
++ struct pglist_data *pgdat)
+ {
+ unsigned long pfn = pmd_pfn(pmd);
+
+@@ -3301,9 +3315,15 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
+ if (WARN_ON_ONCE(pmd_devmap(pmd)))
+ return -1;
+
++ if (!pmd_young(pmd) && !mm_has_notifiers(vma->vm_mm))
++ return -1;
++
+ if (WARN_ON_ONCE(!pfn_valid(pfn)))
+ return -1;
+
++ if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
++ return -1;
++
+ return pfn;
+ }
+
+@@ -3312,10 +3332,6 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
+ {
+ struct folio *folio;
+
+- /* try to avoid unnecessary memory loads */
+- if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+- return NULL;
+-
+ folio = pfn_folio(pfn);
+ if (folio_nid(folio) != pgdat->node_id)
+ return NULL;
+@@ -3371,21 +3387,16 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
+ total++;
+ walk->mm_stats[MM_LEAF_TOTAL]++;
+
+- pfn = get_pte_pfn(ptent, args->vma, addr);
++ pfn = get_pte_pfn(ptent, args->vma, addr, pgdat);
+ if (pfn == -1)
+ continue;
+
+- if (!pte_young(ptent)) {
+- walk->mm_stats[MM_LEAF_OLD]++;
+- continue;
+- }
+-
+ folio = get_pfn_folio(pfn, memcg, pgdat, walk->can_swap);
+ if (!folio)
+ continue;
+
+- if (!ptep_test_and_clear_young(args->vma, addr, pte + i))
+- VM_WARN_ON_ONCE(true);
++ if (!ptep_clear_young_notify(args->vma, addr, pte + i))
++ continue;
+
+ young++;
+ walk->mm_stats[MM_LEAF_YOUNG]++;
+@@ -3451,21 +3462,25 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
+ /* don't round down the first address */
+ addr = i ? (*first & PMD_MASK) + i * PMD_SIZE : *first;
+
+- pfn = get_pmd_pfn(pmd[i], vma, addr);
+- if (pfn == -1)
++ if (!pmd_present(pmd[i]))
+ goto next;
+
+ if (!pmd_trans_huge(pmd[i])) {
+- if (should_clear_pmd_young())
++ if (!walk->force_scan && should_clear_pmd_young() &&
++ !mm_has_notifiers(args->mm))
+ pmdp_test_and_clear_young(vma, addr, pmd + i);
+ goto next;
+ }
+
++ pfn = get_pmd_pfn(pmd[i], vma, addr, pgdat);
++ if (pfn == -1)
++ goto next;
++
+ folio = get_pfn_folio(pfn, memcg, pgdat, walk->can_swap);
+ if (!folio)
+ goto next;
+
+- if (!pmdp_test_and_clear_young(vma, addr, pmd + i))
++ if (!pmdp_clear_young_notify(vma, addr, pmd + i))
+ goto next;
+
+ walk->mm_stats[MM_LEAF_YOUNG]++;
+@@ -3523,27 +3538,18 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
+ }
+
+ if (pmd_trans_huge(val)) {
+- unsigned long pfn = pmd_pfn(val);
+ struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
++ unsigned long pfn = get_pmd_pfn(val, vma, addr, pgdat);
+
+ walk->mm_stats[MM_LEAF_TOTAL]++;
+
+- if (!pmd_young(val)) {
+- walk->mm_stats[MM_LEAF_OLD]++;
+- continue;
+- }
+-
+- /* try to avoid unnecessary memory loads */
+- if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+- continue;
+-
+- walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
++ if (pfn != -1)
++ walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+ continue;
+ }
+
+- walk->mm_stats[MM_NONLEAF_TOTAL]++;
+-
+- if (should_clear_pmd_young()) {
++ if (!walk->force_scan && should_clear_pmd_young() &&
++ !mm_has_notifiers(args->mm)) {
+ if (!pmd_young(val))
+ continue;
+
+@@ -4017,13 +4023,13 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
+ * the PTE table to the Bloom filter. This forms a feedback loop between the
+ * eviction and the aging.
+ */
+-void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
++bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+ {
+ int i;
+ unsigned long start;
+ unsigned long end;
+ struct lru_gen_mm_walk *walk;
+- int young = 0;
++ int young = 1;
+ pte_t *pte = pvmw->pte;
+ unsigned long addr = pvmw->address;
+ struct vm_area_struct *vma = pvmw->vma;
+@@ -4039,12 +4045,15 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+ lockdep_assert_held(pvmw->ptl);
+ VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
+
++ if (!ptep_clear_young_notify(vma, addr, pte))
++ return false;
++
+ if (spin_is_contended(pvmw->ptl))
+- return;
++ return true;
+
+ /* exclude special VMAs containing anon pages from COW */
+ if (vma->vm_flags & VM_SPECIAL)
+- return;
++ return true;
+
+ /* avoid taking the LRU lock under the PTL when possible */
+ walk = current->reclaim_state ? current->reclaim_state->mm_walk : NULL;
+@@ -4052,6 +4061,9 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+ start = max(addr & PMD_MASK, vma->vm_start);
+ end = min(addr | ~PMD_MASK, vma->vm_end - 1) + 1;
+
++ if (end - start == PAGE_SIZE)
++ return true;
++
+ if (end - start > MIN_LRU_BATCH * PAGE_SIZE) {
+ if (addr - start < MIN_LRU_BATCH * PAGE_SIZE / 2)
+ end = start + MIN_LRU_BATCH * PAGE_SIZE;
+@@ -4065,7 +4077,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+
+ /* folio_update_gen() requires stable folio_memcg() */
+ if (!mem_cgroup_trylock_pages(memcg))
+- return;
++ return true;
+
+ arch_enter_lazy_mmu_mode();
+
+@@ -4075,19 +4087,16 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+ unsigned long pfn;
+ pte_t ptent = ptep_get(pte + i);
+
+- pfn = get_pte_pfn(ptent, vma, addr);
++ pfn = get_pte_pfn(ptent, vma, addr, pgdat);
+ if (pfn == -1)
+ continue;
+
+- if (!pte_young(ptent))
+- continue;
+-
+ folio = get_pfn_folio(pfn, memcg, pgdat, can_swap);
+ if (!folio)
+ continue;
+
+- if (!ptep_test_and_clear_young(vma, addr, pte + i))
+- VM_WARN_ON_ONCE(true);
++ if (!ptep_clear_young_notify(vma, addr, pte + i))
++ continue;
+
+ young++;
+
+@@ -4117,6 +4126,8 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+ /* feedback from rmap walkers to page table walkers */
+ if (mm_state && suitable_to_scan(i, young))
+ update_bloom_filter(mm_state, max_seq, pvmw->pmd);
++
++ return true;
+ }
+
+ /******************************************************************************
+@@ -5231,11 +5242,11 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
+ for (tier = 0; tier < MAX_NR_TIERS; tier++) {
+ seq_printf(m, " %10d", tier);
+ for (type = 0; type < ANON_AND_FILE; type++) {
+- const char *s = " ";
++ const char *s = "xxx";
+ unsigned long n[3] = {};
+
+ if (seq == max_seq) {
+- s = "RT ";
++ s = "RTx";
+ n[0] = READ_ONCE(lrugen->avg_refaulted[type][tier]);
+ n[1] = READ_ONCE(lrugen->avg_total[type][tier]);
+ } else if (seq == min_seq[type] || NR_HIST_GENS > 1) {
+@@ -5257,14 +5268,14 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
+
+ seq_puts(m, " ");
+ for (i = 0; i < NR_MM_STATS; i++) {
+- const char *s = " ";
++ const char *s = "xxxx";
+ unsigned long n = 0;
+
+ if (seq == max_seq && NR_HIST_GENS == 1) {
+- s = "LOYNFA";
++ s = "TYFA";
+ n = READ_ONCE(mm_state->stats[hist][i]);
+ } else if (seq != max_seq && NR_HIST_GENS > 1) {
+- s = "loynfa";
++ s = "tyfa";
+ n = READ_ONCE(mm_state->stats[hist][i]);
+ }
+
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index ae7a5817883aad..c0203a2b510756 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -206,6 +206,12 @@ struct sk_buff *__hci_cmd_sync_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
+ return ERR_PTR(err);
+ }
+
++	/* If the command returns a status event, skb will be set to NULL, as
++	 * there are no parameters.
++	 */
++ if (!skb)
++ return ERR_PTR(-ENODATA);
++
+ return skb;
+ }
+ EXPORT_SYMBOL(__hci_cmd_sync_sk);
+@@ -255,6 +261,11 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
+ u8 status;
+
+ skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk);
++
++	/* If the command returned a status event, skb is ERR_PTR(-ENODATA) */
++ if (skb == ERR_PTR(-ENODATA))
++ return 0;
++
+ if (IS_ERR(skb)) {
+ if (!event)
+ bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld", opcode,
+@@ -262,13 +273,6 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
+ return PTR_ERR(skb);
+ }
+
+- /* If command return a status event skb will be set to NULL as there are
+- * no parameters, in case of failure IS_ERR(skb) would have be set to
+- * the actual error would be found with PTR_ERR(skb).
+- */
+- if (!skb)
+- return 0;
+-
+ status = skb->data[0];
+
+ kfree_skb(skb);
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 6d7a442ceb89be..501ec4249fedc3 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -246,6 +246,7 @@ static void reset_ctx(struct xdp_page_head *head)
+ head->ctx.data_meta = head->orig_ctx.data_meta;
+ head->ctx.data_end = head->orig_ctx.data_end;
+ xdp_update_frame_from_buff(&head->ctx, head->frame);
++ head->frame->mem = head->orig_ctx.rxq->mem;
+ }
+
+ static int xdp_recv_frames(struct xdp_frame **frames, int nframes,
+diff --git a/net/core/dev.c b/net/core/dev.c
+index dd87f5fb2f3a7d..25f20c5cc8f55f 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3631,6 +3631,9 @@ int skb_csum_hwoffload_help(struct sk_buff *skb,
+ return 0;
+
+ if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
++ if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) &&
++ skb_network_header_len(skb) != sizeof(struct ipv6hdr))
++ goto sw_checksum;
+ switch (skb->csum_offset) {
+ case offsetof(struct tcphdr, check):
+ case offsetof(struct udphdr, check):
+@@ -3638,6 +3641,7 @@ int skb_csum_hwoffload_help(struct sk_buff *skb,
+ }
+ }
+
++sw_checksum:
+ return skb_checksum_help(skb);
+ }
+ EXPORT_SYMBOL(skb_csum_hwoffload_help);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 97a38a7e1b2cc3..3c5dead0c71ce3 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2032,7 +2032,7 @@ static const struct nla_policy ifla_policy[IFLA_MAX+1] = {
+ [IFLA_NUM_TX_QUEUES] = { .type = NLA_U32 },
+ [IFLA_NUM_RX_QUEUES] = { .type = NLA_U32 },
+ [IFLA_GSO_MAX_SEGS] = { .type = NLA_U32 },
+- [IFLA_GSO_MAX_SIZE] = { .type = NLA_U32 },
++ [IFLA_GSO_MAX_SIZE] = NLA_POLICY_MIN(NLA_U32, MAX_TCP_HEADER + 1),
+ [IFLA_PHYS_PORT_ID] = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN },
+ [IFLA_CARRIER_CHANGES] = { .type = NLA_U32 }, /* ignored */
+ [IFLA_PHYS_SWITCH_ID] = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN },
+@@ -2057,7 +2057,7 @@ static const struct nla_policy ifla_policy[IFLA_MAX+1] = {
+ [IFLA_TSO_MAX_SIZE] = { .type = NLA_REJECT },
+ [IFLA_TSO_MAX_SEGS] = { .type = NLA_REJECT },
+ [IFLA_ALLMULTI] = { .type = NLA_REJECT },
+- [IFLA_GSO_IPV4_MAX_SIZE] = { .type = NLA_U32 },
++ [IFLA_GSO_IPV4_MAX_SIZE] = NLA_POLICY_MIN(NLA_U32, MAX_TCP_HEADER + 1),
+ [IFLA_GRO_IPV4_MAX_SIZE] = { .type = NLA_U32 },
+ };
+
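NLA_POLICY_MIN() moves the range check into netlink attribute validation,
so a too-small GSO size is rejected before any rtnetlink handler runs. The
construct in general, with hypothetical attribute names:

	/* Sketch: a u32 attribute that must be at least 64. */
	static const struct nla_policy my_policy[MY_ATTR_MAX + 1] = {
		[MY_ATTR_MTU] = NLA_POLICY_MIN(NLA_U32, 64),
	};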
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 219fd8f1ca2a42..0550837775d5e1 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1771,6 +1771,10 @@ static int sock_map_link_update_prog(struct bpf_link *link,
+ ret = -EINVAL;
+ goto out;
+ }
++ if (!sockmap_link->map) {
++ ret = -ENOLINK;
++ goto out;
++ }
+
+ ret = sock_map_prog_link_lookup(sockmap_link->map, &pprog, &plink,
+ sockmap_link->attach_type);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 5cffad42fe8ca6..49937878d5e8ad 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -217,7 +217,7 @@ static struct ip_tunnel *ip_tunnel_find(struct ip_tunnel_net *itn,
+
+ ip_tunnel_flags_copy(flags, parms->i_flags);
+
+- hlist_for_each_entry_rcu(t, head, hash_node) {
++ hlist_for_each_entry_rcu(t, head, hash_node, lockdep_rtnl_is_held()) {
+ if (local == t->parms.iph.saddr &&
+ remote == t->parms.iph.daddr &&
+ link == READ_ONCE(t->parms.link) &&
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index 7db0437140bf22..9ae2b2725bf99a 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -268,12 +268,12 @@ static int nf_reject6_fill_skb_dst(struct sk_buff *skb_in)
+ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ int hook)
+ {
+- struct sk_buff *nskb;
+- struct tcphdr _otcph;
+- const struct tcphdr *otcph;
+- unsigned int otcplen, hh_len;
+ const struct ipv6hdr *oip6h = ipv6_hdr(oldskb);
+ struct dst_entry *dst = NULL;
++ const struct tcphdr *otcph;
++ struct sk_buff *nskb;
++ struct tcphdr _otcph;
++ unsigned int otcplen;
+ struct flowi6 fl6;
+
+ if ((!(ipv6_addr_type(&oip6h->saddr) & IPV6_ADDR_UNICAST)) ||
+@@ -312,9 +312,8 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ if (IS_ERR(dst))
+ return;
+
+- hh_len = (dst->dev->hard_header_len + 15)&~15;
+- nskb = alloc_skb(hh_len + 15 + dst->header_len + sizeof(struct ipv6hdr)
+- + sizeof(struct tcphdr) + dst->trailer_len,
++ nskb = alloc_skb(LL_MAX_HEADER + sizeof(struct ipv6hdr) +
++ sizeof(struct tcphdr) + dst->trailer_len,
+ GFP_ATOMIC);
+
+ if (!nskb) {
+@@ -327,7 +326,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+
+ nskb->mark = fl6.flowi6_mark;
+
+- skb_reserve(nskb, hh_len + dst->header_len);
++ skb_reserve(nskb, LL_MAX_HEADER);
+ nf_reject_ip6hdr_put(nskb, oldskb, IPPROTO_TCP, ip6_dst_hoplimit(dst));
+ nf_reject_ip6_tcphdr_put(nskb, oldskb, otcph, otcplen);
+
+diff --git a/net/mac80211/Kconfig b/net/mac80211/Kconfig
+index 13438cc0a6b139..cf0f7780fb109e 100644
+--- a/net/mac80211/Kconfig
++++ b/net/mac80211/Kconfig
+@@ -96,7 +96,7 @@ config MAC80211_DEBUGFS
+
+ config MAC80211_MESSAGE_TRACING
+ bool "Trace all mac80211 debug messages"
+- depends on MAC80211
++ depends on MAC80211 && TRACING
+ help
+ Select this option to have mac80211 register the
+ mac80211_msg trace subsystem with tracepoints to
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index b02b84ce213076..f2b5c18417ef71 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -3138,7 +3138,8 @@ static int ieee80211_get_tx_power(struct wiphy *wiphy,
+ struct ieee80211_local *local = wiphy_priv(wiphy);
+ struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
+
+- if (local->ops->get_txpower)
++ if (local->ops->get_txpower &&
++ (sdata->flags & IEEE80211_SDATA_IN_DRIVER))
+ return drv_get_txpower(local, sdata, dbm);
+
+ if (local->emulate_chanctx)
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index eecdd2265eaa63..e45b5f56c4055a 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -987,6 +987,26 @@ void ieee80211_reenable_keys(struct ieee80211_sub_if_data *sdata)
+ }
+ }
+
++static void
++ieee80211_key_iter(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_key *key,
++ void (*iter)(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ struct ieee80211_key_conf *key,
++ void *data),
++ void *iter_data)
++{
++ /* skip keys of station in removal process */
++ if (key->sta && key->sta->removed)
++ return;
++ if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE))
++ return;
++ iter(hw, vif, key->sta ? &key->sta->sta : NULL,
++ &key->conf, iter_data);
++}
++
+ void ieee80211_iter_keys(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ void (*iter)(struct ieee80211_hw *hw,
+@@ -1005,16 +1025,13 @@ void ieee80211_iter_keys(struct ieee80211_hw *hw,
+ if (vif) {
+ sdata = vif_to_sdata(vif);
+ list_for_each_entry_safe(key, tmp, &sdata->key_list, list)
+- iter(hw, &sdata->vif,
+- key->sta ? &key->sta->sta : NULL,
+- &key->conf, iter_data);
++ ieee80211_key_iter(hw, vif, key, iter, iter_data);
+ } else {
+ list_for_each_entry(sdata, &local->interfaces, list)
+ list_for_each_entry_safe(key, tmp,
+ &sdata->key_list, list)
+- iter(hw, &sdata->vif,
+- key->sta ? &key->sta->sta : NULL,
+- &key->conf, iter_data);
++ ieee80211_key_iter(hw, &sdata->vif, key,
++ iter, iter_data);
+ }
+ }
+ EXPORT_SYMBOL(ieee80211_iter_keys);
+@@ -1031,17 +1048,8 @@ _ieee80211_iter_keys_rcu(struct ieee80211_hw *hw,
+ {
+ struct ieee80211_key *key;
+
+- list_for_each_entry_rcu(key, &sdata->key_list, list) {
+- /* skip keys of station in removal process */
+- if (key->sta && key->sta->removed)
+- continue;
+- if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE))
+- continue;
+-
+- iter(hw, &sdata->vif,
+- key->sta ? &key->sta->sta : NULL,
+- &key->conf, iter_data);
+- }
++ list_for_each_entry_rcu(key, &sdata->key_list, list)
++ ieee80211_key_iter(hw, &sdata->vif, key, iter, iter_data);
+ }
+
+ void ieee80211_iter_keys_rcu(struct ieee80211_hw *hw,
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index d4b3bc46cdaaf5..ec87b36f0d451a 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2864,8 +2864,10 @@ static int mptcp_init_sock(struct sock *sk)
+ if (unlikely(!net->mib.mptcp_statistics) && !mptcp_mib_alloc(net))
+ return -ENOMEM;
+
++ rcu_read_lock();
+ ret = mptcp_init_sched(mptcp_sk(sk),
+ mptcp_sched_find(mptcp_get_scheduler(net)));
++ rcu_read_unlock();
+ if (ret)
+ return ret;
+
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 50429cbd42da42..2db38c06bedebf 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -904,6 +904,9 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
+ ((priv->base != NFT_PAYLOAD_TRANSPORT_HEADER &&
+ priv->base != NFT_PAYLOAD_INNER_HEADER) ||
+ skb->ip_summed != CHECKSUM_PARTIAL)) {
++ if (offset + priv->len > skb->len)
++ goto err;
++
+ fsum = skb_checksum(skb, offset, priv->len, 0);
+ tsum = csum_partial(src, priv->len, 0);
+
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index da5d929c7c85bf..709840612f0dfd 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -1269,7 +1269,7 @@ struct xt_table *xt_find_table_lock(struct net *net, u_int8_t af,
+
+ /* and once again: */
+ list_for_each_entry(t, &xt_net->tables[af], list)
+- if (strcmp(t->name, name) == 0)
++ if (strcmp(t->name, name) == 0 && owner == t->me)
+ return t;
+
+ module_put(owner);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 17d97bbe890fd5..bbc778c233c892 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1518,6 +1518,7 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
+ return 0;
+
+ err_dev_insert:
++ tcf_block_offload_unbind(block, q, ei);
+ err_block_offload_bind:
+ tcf_chain0_head_change_cb_del(block, ei);
+ err_chain0_head_change_cb_add:
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 2eefa478387997..a1d27bc039a364 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -791,7 +791,7 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ drops = max_t(int, n, 0);
+ rcu_read_lock();
+ while ((parentid = sch->parent)) {
+- if (TC_H_MAJ(parentid) == TC_H_MAJ(TC_H_INGRESS))
++ if (parentid == TC_H_ROOT)
+ break;
+
+ if (sch->flags & TCQ_F_NOPARENT)
+diff --git a/net/sunrpc/xprtrdma/ib_client.c b/net/sunrpc/xprtrdma/ib_client.c
+index 8507cd4d892170..28c68b5f682382 100644
+--- a/net/sunrpc/xprtrdma/ib_client.c
++++ b/net/sunrpc/xprtrdma/ib_client.c
+@@ -153,6 +153,7 @@ static void rpcrdma_remove_one(struct ib_device *device,
+ }
+
+ trace_rpcrdma_client_remove_one_done(device);
++ xa_destroy(&rd->rd_xa);
+ kfree(rd);
+ }
+
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 4d5d351bd0b51e..c9ebf9449fcc33 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -1236,6 +1236,7 @@ static void _cfg80211_unregister_wdev(struct wireless_dev *wdev,
+ /* deleted from the list, so can't be found from nl80211 any more */
+ cqm_config = rcu_access_pointer(wdev->cqm_config);
+ kfree_rcu(cqm_config, rcu_head);
++ RCU_INIT_POINTER(wdev->cqm_config, NULL);
+
+ /*
+ * Ensure that all events have been processed and
+diff --git a/rust/kernel/device.rs b/rust/kernel/device.rs
+index 851018eef885e7..c8199ee079eff1 100644
+--- a/rust/kernel/device.rs
++++ b/rust/kernel/device.rs
+@@ -51,18 +51,9 @@ impl Device {
+ ///
+ /// It must also be ensured that `bindings::device::release` can be called from any thread.
+ /// While not officially documented, this should be the case for any `struct device`.
+- pub unsafe fn from_raw(ptr: *mut bindings::device) -> ARef<Self> {
+- // SAFETY: By the safety requirements, ptr is valid.
+- // Initially increase the reference count by one to compensate for the final decrement once
+- // this newly created `ARef<Device>` instance is dropped.
+- unsafe { bindings::get_device(ptr) };
+-
+- // CAST: `Self` is a `repr(transparent)` wrapper around `bindings::device`.
+- let ptr = ptr.cast::<Self>();
+-
+- // SAFETY: `ptr` is valid by the safety requirements of this function. By the above call to
+- // `bindings::get_device` we also own a reference to the underlying `struct device`.
+- unsafe { ARef::from_raw(ptr::NonNull::new_unchecked(ptr)) }
++ pub unsafe fn get_device(ptr: *mut bindings::device) -> ARef<Self> {
++ // SAFETY: By the safety requirements ptr is valid
++ unsafe { Self::as_ref(ptr) }.into()
+ }
+
+ /// Obtain the raw `struct device *`.
+diff --git a/rust/kernel/firmware.rs b/rust/kernel/firmware.rs
+index dee5b4b18aec40..13a374a5cdb743 100644
+--- a/rust/kernel/firmware.rs
++++ b/rust/kernel/firmware.rs
+@@ -44,7 +44,7 @@ fn request_nowarn() -> Self {
+ ///
+ /// # fn no_run() -> Result<(), Error> {
+ /// # // SAFETY: *NOT* safe, just for the example to get an `ARef<Device>` instance
+-/// # let dev = unsafe { Device::from_raw(core::ptr::null_mut()) };
++/// # let dev = unsafe { Device::get_device(core::ptr::null_mut()) };
+ ///
+ /// let fw = Firmware::request(c_str!("path/to/firmware.bin"), &dev)?;
+ /// let blob = fw.data();
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2583081c0a3a54..660fd984a92859 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7507,6 +7507,7 @@ enum {
+ ALC286_FIXUP_SONY_MIC_NO_PRESENCE,
+ ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT,
+ ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
++ ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST,
+ ALC269_FIXUP_DELL2_MIC_NO_PRESENCE,
+ ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
+ ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+@@ -7541,6 +7542,7 @@ enum {
+ ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ ALC255_FIXUP_ASUS_MIC_NO_PRESENCE,
+ ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++ ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST,
+ ALC255_FIXUP_DELL2_MIC_NO_PRESENCE,
+ ALC255_FIXUP_HEADSET_MODE,
+ ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
+@@ -8102,6 +8104,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE
+ },
++ [ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc269_fixup_limit_int_mic_boost,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE
++ },
+ [ALC269_FIXUP_DELL2_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -8382,6 +8390,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC255_FIXUP_HEADSET_MODE
+ },
++ [ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc269_fixup_limit_int_mic_boost,
++ .chained = true,
++ .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
++ },
+ [ALC255_FIXUP_DELL2_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -10715,6 +10729,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x1404, "Clevo N150CU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x14a1, "Clevo L141MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x2624, "Clevo L240TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x28c1, "Clevo V370VND", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -10956,6 +10971,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP),
+ SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP),
+ SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -11050,6 +11066,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, .name = "dell-headset-dock"},
+ {.id = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, .name = "dell-headset3"},
+ {.id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, .name = "dell-headset4"},
++ {.id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET, .name = "dell-headset4-quiet"},
+ {.id = ALC283_FIXUP_CHROME_BOOK, .name = "alc283-dac-wcaps"},
+ {.id = ALC283_FIXUP_SENSE_COMBO_JACK, .name = "alc283-sense-combo"},
+ {.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"},
+@@ -11604,16 +11621,16 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
+ SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+ {0x19, 0x40000000},
+ {0x1b, 0x40000000}),
+- SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
++ SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET,
+ {0x19, 0x40000000},
+ {0x1b, 0x40000000}),
+ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ {0x19, 0x40000000},
+ {0x1a, 0x40000000}),
+- SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++ SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST,
+ {0x19, 0x40000000},
+ {0x1a, 0x40000000}),
+- SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
++ SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST,
+ {0x19, 0x40000000},
+ {0x1a, 0x40000000}),
+ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC2XX_FIXUP_HEADSET_MIC,
+diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c
+index e4827b8c2bde45..6e51954bdb1ecc 100644
+--- a/sound/soc/codecs/cs42l51.c
++++ b/sound/soc/codecs/cs42l51.c
+@@ -747,8 +747,10 @@ int cs42l51_probe(struct device *dev, struct regmap *regmap)
+
+ cs42l51->reset_gpio = devm_gpiod_get_optional(dev, "reset",
+ GPIOD_OUT_LOW);
+- if (IS_ERR(cs42l51->reset_gpio))
+- return PTR_ERR(cs42l51->reset_gpio);
++ if (IS_ERR(cs42l51->reset_gpio)) {
++ ret = PTR_ERR(cs42l51->reset_gpio);
++ goto error;
++ }
+
+ if (cs42l51->reset_gpio) {
+ dev_dbg(dev, "Release reset gpio\n");
+@@ -780,6 +782,7 @@ int cs42l51_probe(struct device *dev, struct regmap *regmap)
+ return 0;
+
+ error:
++ gpiod_set_value_cansleep(cs42l51->reset_gpio, 1);
+ regulator_bulk_disable(ARRAY_SIZE(cs42l51->supplies),
+ cs42l51->supplies);
+ return ret;
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index e39df5d10b07df..1647b24ca34d7a 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -1147,6 +1147,8 @@ static int dapm_widget_list_create(struct snd_soc_dapm_widget_list **list,
+ if (*list == NULL)
+ return -ENOMEM;
+
++ (*list)->num_widgets = size;
++
+ list_for_each_entry(w, widgets, work_list)
+ (*list)->widgets[i++] = w;
+
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 5f09f9f205cea0..d1cf32837be449 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3880,6 +3880,9 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ break;
+ err = dell_dock_mixer_init(mixer);
+ break;
++ case USB_ID(0x0bda, 0x402e): /* Dell WD19 dock */
++ err = dell_dock_mixer_create(mixer);
++ break;
+
+ case USB_ID(0x2a39, 0x3fd2): /* RME ADI-2 Pro */
+ case USB_ID(0x2a39, 0x3fd3): /* RME ADI-2 DAC */
+diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
+index 8d5595b6c59f84..2a4ca4dd2da80a 100644
+--- a/tools/mm/page-types.c
++++ b/tools/mm/page-types.c
+@@ -22,6 +22,7 @@
+ #include <time.h>
+ #include <setjmp.h>
+ #include <signal.h>
++#include <inttypes.h>
+ #include <sys/types.h>
+ #include <sys/errno.h>
+ #include <sys/fcntl.h>
+@@ -392,9 +393,9 @@ static void show_page_range(unsigned long voffset, unsigned long offset,
+ if (opt_file)
+ printf("%lx\t", voff);
+ if (opt_list_cgroup)
+- printf("@%llu\t", (unsigned long long)cgroup0);
++ printf("@%" PRIu64 "\t", cgroup0);
+ if (opt_list_mapcnt)
+- printf("%lu\t", mapcnt0);
++ printf("%" PRIu64 "\t", mapcnt0);
+ printf("%lx\t%lx\t%s\n",
+ index, count, page_flag_name(flags0));
+ }
+@@ -420,9 +421,9 @@ static void show_page(unsigned long voffset, unsigned long offset,
+ if (opt_file)
+ printf("%lx\t", voffset);
+ if (opt_list_cgroup)
+- printf("@%llu\t", (unsigned long long)cgroup);
++ printf("@%" PRIu64 "\t", cgroup)
+ if (opt_list_mapcnt)
+- printf("%lu\t", mapcnt);
++ printf("%" PRIu64 "\t", mapcnt);
+
+ printf("%lx\t%s\n", offset, page_flag_name(flags));
+ }
+diff --git a/tools/mm/slabinfo.c b/tools/mm/slabinfo.c
+index cfaeaea71042e5..04e9e6ba86ead5 100644
+--- a/tools/mm/slabinfo.c
++++ b/tools/mm/slabinfo.c
+@@ -1297,7 +1297,9 @@ static void read_slab_dir(void)
+ slab->cpu_partial_free = get_obj("cpu_partial_free");
+ slab->alloc_node_mismatch = get_obj("alloc_node_mismatch");
+ slab->deactivate_bypass = get_obj("deactivate_bypass");
+- chdir("..");
++ if (chdir(".."))
++ fatal("Unable to chdir from slab ../%s\n",
++ slab->name);
+ if (slab->name[0] == ':')
+ alias_targets++;
+ slab++;
+diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
+index 31a223eaf8e65f..ee3d43a7ba4570 100644
+--- a/tools/perf/util/python.c
++++ b/tools/perf/util/python.c
+@@ -19,6 +19,7 @@
+ #include "util/bpf-filter.h"
+ #include "util/env.h"
+ #include "util/kvm-stat.h"
++#include "util/stat.h"
+ #include "util/kwork.h"
+ #include "util/sample.h"
+ #include "util/lock-contention.h"
+@@ -1355,6 +1356,7 @@ PyMODINIT_FUNC PyInit_perf(void)
+
+ unsigned int scripting_max_stack = PERF_MAX_STACK_DEPTH;
+
++#ifdef HAVE_KVM_STAT_SUPPORT
+ bool kvm_entry_event(struct evsel *evsel __maybe_unused)
+ {
+ return false;
+@@ -1384,6 +1386,7 @@ void exit_event_decode_key(struct perf_kvm_stat *kvm __maybe_unused,
+ char *decode __maybe_unused)
+ {
+ }
++#endif // HAVE_KVM_STAT_SUPPORT
+
+ int find_scripts(char **scripts_array __maybe_unused, char **scripts_path_array __maybe_unused,
+ int num __maybe_unused, int pathlen __maybe_unused)
+diff --git a/tools/perf/util/syscalltbl.c b/tools/perf/util/syscalltbl.c
+index 0dd26b991b3fb5..351da249f1cc61 100644
+--- a/tools/perf/util/syscalltbl.c
++++ b/tools/perf/util/syscalltbl.c
+@@ -42,6 +42,11 @@ static const char *const *syscalltbl_native = syscalltbl_mips_n64;
+ #include <asm/syscalls.c>
+ const int syscalltbl_native_max_id = SYSCALLTBL_LOONGARCH_MAX_ID;
+ static const char *const *syscalltbl_native = syscalltbl_loongarch;
++#else
++const int syscalltbl_native_max_id = 0;
++static const char *const syscalltbl_native[] = {
++ [0] = "unknown",
++};
+ #endif
+
+ struct syscall {
+@@ -178,6 +183,11 @@ int syscalltbl__id(struct syscalltbl *tbl, const char *name)
+ return audit_name_to_syscall(name, tbl->audit_machine);
+ }
+
++int syscalltbl__id_at_idx(struct syscalltbl *tbl __maybe_unused, int idx)
++{
++ return idx;
++}
++
+ int syscalltbl__strglobmatch_next(struct syscalltbl *tbl __maybe_unused,
+ const char *syscall_glob __maybe_unused, int *idx __maybe_unused)
+ {
+diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
+index 90d5afd52dd06b..c5bbd89b319209 100644
+--- a/tools/testing/cxl/test/cxl.c
++++ b/tools/testing/cxl/test/cxl.c
+@@ -693,26 +693,22 @@ static int mock_decoder_commit(struct cxl_decoder *cxld)
+ return 0;
+ }
+
+-static int mock_decoder_reset(struct cxl_decoder *cxld)
++static void mock_decoder_reset(struct cxl_decoder *cxld)
+ {
+ struct cxl_port *port = to_cxl_port(cxld->dev.parent);
+ int id = cxld->id;
+
+ if ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)
+- return 0;
++ return;
+
+ dev_dbg(&port->dev, "%s reset\n", dev_name(&cxld->dev));
+- if (port->commit_end != id) {
++ if (port->commit_end == id)
++ cxl_port_commit_reap(cxld);
++ else
+ dev_dbg(&port->dev,
+ "%s: out of order reset, expected decoder%d.%d\n",
+ dev_name(&cxld->dev), port->id, port->commit_end);
+- return -EBUSY;
+- }
+-
+- port->commit_end--;
+ cxld->flags &= ~CXL_DECODER_F_ENABLE;
+-
+- return 0;
+ }
+
+ static void default_mock_decoder(struct cxl_decoder *cxld)
+diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
+index 852e7281026ee4..717539eddf9875 100644
+--- a/tools/testing/selftests/mm/uffd-common.c
++++ b/tools/testing/selftests/mm/uffd-common.c
+@@ -18,7 +18,7 @@ bool test_uffdio_wp = true;
+ unsigned long long *count_verify;
+ uffd_test_ops_t *uffd_test_ops;
+ uffd_test_case_ops_t *uffd_test_case_ops;
+-pthread_barrier_t ready_for_fork;
++atomic_bool ready_for_fork;
+
+ static int uffd_mem_fd_create(off_t mem_size, bool hugetlb)
+ {
+@@ -519,8 +519,7 @@ void *uffd_poll_thread(void *arg)
+ pollfd[1].fd = pipefd[cpu*2];
+ pollfd[1].events = POLLIN;
+
+- /* Ready for parent thread to fork */
+- pthread_barrier_wait(&ready_for_fork);
++ ready_for_fork = true;
+
+ for (;;) {
+ ret = poll(pollfd, 2, -1);
+diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
+index 3e6228d8e0dcc7..a70ae10b5f6206 100644
+--- a/tools/testing/selftests/mm/uffd-common.h
++++ b/tools/testing/selftests/mm/uffd-common.h
+@@ -33,6 +33,7 @@
+ #include <inttypes.h>
+ #include <stdint.h>
+ #include <sys/random.h>
++#include <stdatomic.h>
+
+ #include "../kselftest.h"
+ #include "vm_util.h"
+@@ -104,7 +105,7 @@ extern bool map_shared;
+ extern bool test_uffdio_wp;
+ extern unsigned long long *count_verify;
+ extern volatile bool test_uffdio_copy_eexist;
+-extern pthread_barrier_t ready_for_fork;
++extern atomic_bool ready_for_fork;
+
+ extern uffd_test_ops_t anon_uffd_test_ops;
+ extern uffd_test_ops_t shmem_uffd_test_ops;
+diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
+index c8a3b1c7edffbd..b3d21eed203dc2 100644
+--- a/tools/testing/selftests/mm/uffd-unit-tests.c
++++ b/tools/testing/selftests/mm/uffd-unit-tests.c
+@@ -241,9 +241,6 @@ static void *fork_event_consumer(void *data)
+ fork_event_args *args = data;
+ struct uffd_msg msg = { 0 };
+
+- /* Ready for parent thread to fork */
+- pthread_barrier_wait(&ready_for_fork);
+-
+ /* Read until a full msg received */
+ while (uffd_read_msg(args->parent_uffd, &msg));
+
+@@ -311,12 +308,8 @@ static int pagemap_test_fork(int uffd, bool with_event, bool test_pin)
+
+ /* Prepare a thread to resolve EVENT_FORK */
+ if (with_event) {
+- pthread_barrier_init(&ready_for_fork, NULL, 2);
+ if (pthread_create(&thread, NULL, fork_event_consumer, &args))
+ err("pthread_create()");
+- /* Wait for child thread to start before forking */
+- pthread_barrier_wait(&ready_for_fork);
+- pthread_barrier_destroy(&ready_for_fork);
+ }
+
+ child = fork();
+@@ -781,7 +774,7 @@ static void uffd_sigbus_test_common(bool wp)
+ char c;
+ struct uffd_args args = { 0 };
+
+- pthread_barrier_init(&ready_for_fork, NULL, 2);
++ ready_for_fork = false;
+
+ fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
+
+@@ -798,9 +791,8 @@ static void uffd_sigbus_test_common(bool wp)
+ if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ err("uffd_poll_thread create");
+
+- /* Wait for child thread to start before forking */
+- pthread_barrier_wait(&ready_for_fork);
+- pthread_barrier_destroy(&ready_for_fork);
++ while (!ready_for_fork)
++ ; /* Wait for the poll_thread to start executing before forking */
+
+ pid = fork();
+ if (pid < 0)
+@@ -841,7 +833,7 @@ static void uffd_events_test_common(bool wp)
+ char c;
+ struct uffd_args args = { 0 };
+
+- pthread_barrier_init(&ready_for_fork, NULL, 2);
++ ready_for_fork = false;
+
+ fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
+ if (uffd_register(uffd, area_dst, nr_pages * page_size,
+@@ -852,9 +844,8 @@ static void uffd_events_test_common(bool wp)
+ if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ err("uffd_poll_thread create");
+
+- /* Wait for child thread to start before forking */
+- pthread_barrier_wait(&ready_for_fork);
+- pthread_barrier_destroy(&ready_for_fork);
++ while (!ready_for_fork)
++ ; /* Wait for the poll_thread to start executing before forking */
+
+ pid = fork();
+ if (pid < 0)
+diff --git a/tools/usb/usbip/src/usbip_detach.c b/tools/usb/usbip/src/usbip_detach.c
+index b29101986b5a62..6b78d4a81e95b2 100644
+--- a/tools/usb/usbip/src/usbip_detach.c
++++ b/tools/usb/usbip/src/usbip_detach.c
+@@ -68,6 +68,7 @@ static int detach_port(char *port)
+ }
+
+ if (!found) {
++ ret = -1;
+ err("Invalid port %s > maxports %d",
+ port, vhci_driver->nports);
+ goto call_driver_close;
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-13 19:54 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-13 19:54 UTC (permalink / raw
To: gentoo-commits
commit: 0ba37cb313cc167fec3d5cbd8f9cb818538174d9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 13 19:53:24 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 13 19:53:24 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ba37cb3
Add the BMQ(BitMap Queue) Scheduler. USE=experimental
A new CPU scheduler developed from PDS (included).
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
5020_BMQ-and-PDS-io-scheduler-v6.11-r1.patch | 11164 +++++++++++++++++++++++++
5021_BMQ-and-PDS-gentoo-defaults.patch | 13 +
3 files changed, 11185 insertions(+)
diff --git a/0000_README b/0000_README
index 89bcfe71..9d412a8c 100644
--- a/0000_README
+++ b/0000_README
@@ -134,3 +134,11 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch: 5020_BMQ-and-PDS-io-scheduler-v6.11-r1.patch
+From: https://gitlab.com/alfredchen/projectc
+Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
+From: https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc: Set defaults for BMQ. Default to n.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.11-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v6.11-r1.patch
new file mode 100644
index 00000000..0bf20d4b
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.11-r1.patch
@@ -0,0 +1,11164 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index f8bc1630eba0..1b90768a0916 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1673,3 +1673,12 @@ is 10 seconds.
+
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield a call
++to sched_yield() will perform.
++
++ 0 - No yield.
++ 1 - Requeue task. (default)
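A minimal userspace sketch (not part of the patch, assuming a kernel built
with CONFIG_SCHED_ALT) of flipping this tunable; it is equivalent to
`sysctl -w kernel.yield_type=0`:

  #include <stdio.h>

  int main(void)
  {
          /* kernel.yield_type is exposed at this procfs path */
          FILE *f = fopen("/proc/sys/kernel/yield_type", "w");

          if (!f) {
                  perror("yield_type");
                  return 1;
          }
          fputs("0\n", f);        /* 0: sched_yield() becomes a no-op */
          fclose(f);
          return 0;
  }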
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++ BitMap queue CPU Scheduler
++ --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++ Overview
++ Task policy
++ Priority management
++ BitMap Queue
++ CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the earlier Priority and Deadline based Skiplist multiple queue
++scheduler (PDS), and is inspired by the Zircon scheduler. Its goal is to keep
++the scheduler code simple while staying efficient and scalable for
++interactive tasks such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue, and each CPU is responsible for scheduling the tasks that are put into
++its run queue.
++
++The run queue is a set of priority queues. Note that, as data structures,
++these queues are FIFO queues for non-rt tasks and priority queues for rt
++tasks. See BitMap Queue below for details. BMQ is optimized for non-rt tasks
++because most applications are non-rt tasks. Whether a queue is FIFO or
++priority, each queue is an ordered list of runnable tasks awaiting execution,
++and the data structures are the same. When it is time for a new task to run,
++the scheduler simply looks for the lowest numbered queue that contains a
++task, and runs the first task from the head of that queue. The per-CPU idle
++task is also in the run queue, so the scheduler can always find a task to run
++from its run queue.
++
++Each task is assigned the same timeslice (default 4 ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires the scheduler will
++stop execution of that task, select another task and start over again.
++
++If a task blocks waiting for a shared resource then it is taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. But BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details of each policy.
++
++DEADLINE
++ It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++ All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that run mostly rt policy tasks.
++
++NORMAL/BATCH/IDLE
++ BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they never get a priority boost. To control
++the priority of NORMAL/BATCH/IDLE tasks, simply use the nice level.
++
++ISO
++ ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, two different factors
++are used to determine the effective priority of a task. The effective
++priority is what determines which queue the task will be in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level: [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded within
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ], used to offset the base priority; it
++is modified in the following cases:
++
++*When a thread has used up its entire timeslice, its boost is always
++deboosted by increasing it by one.
++*When a thread gives up cpu control (voluntarily or involuntarily) to
++reschedule, and its switch-in time (the time since it last switched in and
++ran) is below the threshold based on its priority boost, its boost is boosted
++by decreasing it by one, but it is capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
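The pick path described in the Overview above can be sketched as follows
(hypothetical names for illustration only; the patch's real implementation
is in kernel/sched/alt_core.c further down): find the lowest numbered
non-empty queue via the bitmap and run the task at its head. Because the
per-CPU idle task always sits in the last level, the lookup always succeeds.

  #include <linux/bitmap.h>
  #include <linux/list.h>

  #define SKETCH_LEVELS 64

  struct sketch_queue {
          DECLARE_BITMAP(bitmap, SKETCH_LEVELS);  /* bit set => non-empty */
          struct list_head heads[SKETCH_LEVELS];  /* FIFO per priority level */
  };

  struct sketch_task {
          struct list_head sq_node;
  };

  static struct sketch_task *sketch_pick_next(struct sketch_queue *q)
  {
          /* lowest set bit == highest-priority non-empty queue */
          unsigned long prio = find_first_bit(q->bitmap, SKETCH_LEVELS);

          /* the idle task occupies the last level, so prio < SKETCH_LEVELS */
          return list_first_entry(&q->heads[prio], struct sketch_task, sq_node);
  }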
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 72a1acd03675..e69ab1acbdbd 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -481,7 +481,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ seq_puts(m, "0 0 0\n");
+ else
+ seq_printf(m, "%llu %llu %lu\n",
+- (unsigned long long)task->se.sum_exec_runtime,
++ (unsigned long long)tsk_seruntime(task),
+ (unsigned long long)task->sched_info.run_delay,
+ task->sched_info.pcount);
+
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ [RLIMIT_LOCKS] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ [RLIMIT_SIGPENDING] = { 0, 0 }, \
+ [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
+- [RLIMIT_NICE] = { 0, 0 }, \
++ [RLIMIT_NICE] = { 30, 30 }, \
+ [RLIMIT_RTPRIO] = { 0, 0 }, \
+ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f8d150343d42..a09af9f2dcc9 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -782,9 +782,13 @@ struct task_struct {
+ struct alloc_tag *alloc_tag;
+ #endif
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ int on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ struct __call_single_node wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ unsigned int wakee_flips;
+ unsigned long wakee_flip_decay_ts;
+ struct task_struct *last_wakee;
+@@ -798,6 +802,7 @@ struct task_struct {
+ */
+ int recent_used_cpu;
+ int wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ int on_rq;
+
+@@ -806,6 +811,19 @@ struct task_struct {
+ int normal_prio;
+ unsigned int rt_priority;
+
++#ifdef CONFIG_SCHED_ALT
++ u64 last_ran;
++ s64 time_slice;
++ struct list_head sq_node;
++#ifdef CONFIG_SCHED_BMQ
++ int boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++ u64 deadline;
++#endif /* CONFIG_SCHED_PDS */
++ /* sched_clock time spent running */
++ u64 sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ struct sched_entity se;
+ struct sched_rt_entity rt;
+ struct sched_dl_entity dl;
+@@ -817,6 +835,7 @@ struct task_struct {
+ unsigned long core_cookie;
+ unsigned int core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_CGROUP_SCHED
+ struct task_group *sched_task_group;
+@@ -1585,6 +1604,15 @@ struct task_struct {
+ */
+ };
+
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t) ((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t) (0UL)
++#else /* CFS */
++#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t) ((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE (TASK_REPORT + 1)
+ #define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
+
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index df3aca89d4f5..1df1f7635188 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++ return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p) (0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p) ((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p) ((p)->dl.deadline)
++
+ /*
+ * SCHED_DEADLINE tasks has negative priorities, reflecting
+ * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..e66dfb553bc5 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,28 @@
+ #define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2)
+
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ (12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ (0)
++#endif
++
++#define MIN_NORMAL_PRIO (128)
++#define NORMAL_PRIO_NUM (64)
++#define MAX_PRIO (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO (MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
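Working the new constants through (an illustration, not part of the patch):
under BMQ, MAX_PRIO = MIN_NORMAL_PRIO + NORMAL_PRIO_NUM = 128 + 64 = 192,
and with MAX_PRIORITY_ADJ = 12 and NICE_WIDTH = 40, DEFAULT_PRIO works out
to 192 - 12 - 20 = 160. A standalone compile-time check:

  #include <assert.h>

  #define NICE_WIDTH        40    /* nice levels -20..19 */
  #define MAX_PRIORITY_ADJ  12    /* CONFIG_SCHED_BMQ */
  #define MIN_NORMAL_PRIO   128
  #define NORMAL_PRIO_NUM   64
  #define MAX_PRIO          (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
  #define DEFAULT_PRIO      (MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)

  static_assert(MAX_PRIO == 192, "BMQ MAX_PRIO");
  static_assert(DEFAULT_PRIO == 160, "BMQ DEFAULT_PRIO");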
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index b2b9e6eb9683..09bd4d8758b2 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+
+ if (policy == SCHED_FIFO || policy == SCHED_RR)
+ return true;
++#ifndef CONFIG_SCHED_ALT
+ if (policy == SCHED_DEADLINE)
+ return true;
++#endif
+ return false;
+ }
+
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 4237daa5ac7a..3cebd93c49c8 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+
+ #endif /* !CONFIG_SMP */
+
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++ !defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index 5783a0b87517..a2600f5d64fb 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -640,6 +640,7 @@ config TASK_IO_ACCOUNTING
+
+ config PSI
+ bool "Pressure stall information tracking"
++ depends on !SCHED_ALT
+ select KERNFS
+ help
+ Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -805,6 +806,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ bool "Enable utilization clamping for RT/FAIR tasks"
+ depends on CPU_FREQ_GOV_SCHEDUTIL
++ depends on !SCHED_ALT
+ help
+ This feature enables the scheduler to track the clamped utilization
+ of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -851,6 +853,35 @@ config UCLAMP_BUCKETS_COUNT
+
+ If in doubt, use the default value.
+
++menuconfig SCHED_ALT
++ bool "Alternative CPU Schedulers"
++ default y
++ help
++ This feature enables alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++ prompt "Alternative CPU Scheduler"
++ default SCHED_BMQ
++
++config SCHED_BMQ
++ bool "BMQ CPU scheduler"
++ help
++ The BitMap Queue CPU scheduler for excellent interactivity and
++ responsiveness on the desktop and solid scalability on normal
++ hardware and commodity servers.
++
++config SCHED_PDS
++ bool "PDS CPU scheduler"
++ help
++ The Priority and Deadline based Skip list multiple queue CPU
++ Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+
+ #
+@@ -916,6 +947,7 @@ config NUMA_BALANCING
+ depends on ARCH_SUPPORTS_NUMA_BALANCING
+ depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++ depends on !SCHED_ALT
+ help
+ This option adds support for automatic NUMA aware memory/task placement.
+ The mechanism is quite primitive and is based on migrating memory when
+@@ -1029,6 +1061,7 @@ config FAIR_GROUP_SCHED
+ depends on CGROUP_SCHED
+ default CGROUP_SCHED
+
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ depends on FAIR_GROUP_SCHED
+@@ -1051,6 +1084,7 @@ config RT_GROUP_SCHED
+ realtime bandwidth for them.
+ See Documentation/scheduler/sched-rt-group.rst for more information.
+
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+
+ config SCHED_MM_CID
+@@ -1299,6 +1333,7 @@ config CHECKPOINT_RESTORE
+
+ config SCHED_AUTOGROUP
+ bool "Automatic process group scheduling"
++ depends on !SCHED_ALT
+ select CGROUPS
+ select CGROUP_SCHED
+ select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index eeb110c65fe2..9d5ac5c3af07 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -70,9 +70,15 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .stack = init_stack,
+ .usage = REFCOUNT_INIT(2),
+ .flags = PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++ .prio = DEFAULT_PRIO,
++ .static_prio = DEFAULT_PRIO,
++ .normal_prio = DEFAULT_PRIO,
++#else
+ .prio = MAX_PRIO - 20,
+ .static_prio = MAX_PRIO - 20,
+ .normal_prio = MAX_PRIO - 20,
++#endif
+ .policy = SCHED_NORMAL,
+ .cpus_ptr = &init_task.cpus_mask,
+ .user_cpus_ptr = NULL,
+@@ -85,6 +91,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .restart_block = {
+ .fn = do_no_restart_syscall,
+ },
++#ifdef CONFIG_SCHED_ALT
++ .sq_node = LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++ .boost_prio = 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++ .deadline = 0,
++#endif
++ .time_slice = HZ,
++#else
+ .se = {
+ .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+ },
+@@ -92,6 +108,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .run_list = LIST_HEAD_INIT(init_task.rt.run_list),
+ .time_slice = RR_TIMESLICE,
+ },
++#endif
+ .tasks = LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ .pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index c2f1fd95a821..41654679b1b2 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+
+ config SCHED_CORE
+ bool "Core Scheduling for SMT"
+- depends on SCHED_SMT
++ depends on SCHED_SMT && !SCHED_ALT
+ help
+ This option permits Core Scheduling, a means of coordinated task
+ selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 4bd9e50bcc8e..7cecc01ea422 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -911,7 +911,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ return ret;
+ }
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * Helper routine for generate_sched_domains().
+ * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1347,7 +1347,7 @@ static void rebuild_sched_domains_locked(void)
+ /* Have scheduler rebuild the domains */
+ partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3400,12 +3400,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ goto out_unlock;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(task)) {
+ cs->nr_migrate_dl_tasks++;
+ cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ }
++#endif
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (!cs->nr_migrate_dl_tasks)
+ goto out_success;
+
+@@ -3426,6 +3429,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ }
+
+ out_success:
++#endif
+ /*
+ * Mark attach is in progress. This makes validate_change() fail
+ * changes which zero cpus/mems_allowed.
+@@ -3449,12 +3453,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ if (!cs->attach_in_progress)
+ wake_up(&cpuset_attach_wq);
+
++#ifndef CONFIG_SCHED_ALT
+ if (cs->nr_migrate_dl_tasks) {
+ int cpu = cpumask_any(cs->effective_cpus);
+
+ dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ reset_migrate_dl_data(cs);
+ }
++#endif
+
+ mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index dead51de8eb5..8edef9676ab3 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ */
+ t1 = tsk->sched_info.pcount;
+ t2 = tsk->sched_info.run_delay;
+- t3 = tsk->se.sum_exec_runtime;
++ t3 = tsk_seruntime(tsk);
+
+ d->cpu_count += t1;
+
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 7430852a8571..4ab0058dc1e7 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -175,7 +175,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->curr_target = next_thread(tsk);
+ }
+
+- add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++ add_device_randomness((const void*) &tsk_seruntime(tsk),
+ sizeof(unsigned long long));
+
+ /*
+@@ -196,7 +196,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->inblock += task_io_get_inblock(tsk);
+ sig->oublock += task_io_get_oublock(tsk);
+ task_io_accounting_add(&sig->ioac, &tsk->ioac);
+- sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++ sig->sum_sched_runtime += tsk_seruntime(tsk);
+ sig->nr_threads--;
+ __unhash_process(tsk, group_dead);
+ write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index fba1229f1de6..3b838532700a 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+
+ waiter->tree.prio = __waiter_prio(task);
+- waiter->tree.deadline = task->dl.deadline;
++ waiter->tree.deadline = __tsk_deadline(task);
+ }
+
+ /*
+@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ * Only use with rt_waiter_node_{less,equal}()
+ */
+ #define task_to_waiter_node(p) \
+- &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++ &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p) \
+ &(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline < right->deadline);
++#else
+ if (left->prio < right->prio)
+ return 1;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return dl_time_before(left->deadline, right->deadline);
++#endif
+
+ return 0;
++#endif
+ }
+
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline == right->deadline);
++#else
+ if (left->prio != right->prio)
+ return 0;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return left->deadline == right->deadline;
++#endif
+
+ return 1;
++#endif
+ }
+
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..a69587be7cde
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7489 @@
++/*
++ * kernel/sched/alt_core.c
++ *
++ * Core alternative kernel scheduler code and related syscalls
++ *
++ * Copyright (C) 1991-2002 Linus Torvalds
++ *
++ * 2009-08-13 Brainfuck deadline scheduling policy by Con Kolivas deletes
++ * a whole lot of those previous things.
++ * 2017-09-06 Priority and Deadline based Skip list multiple queue kernel
++ * scheduler by Alfred Chen.
++ * 2019-02-20 BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x) (1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x) (0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.11-r1"
++
++#define STOP_PRIO (MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly = (4 << 20);
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++/* Reschedule if less than this many μs left */
++#define RESCHED_NS (100 << 10)
++
++/**
++ * sched_yield_type - The type of yield sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++ if (!p->user_cpus_ptr)
++ return cpu_possible_mask; /* &init_task.cpus_mask */
++ return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++ int i;
++
++ bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++ for(i = 0; i < SCHED_LEVELS; i++)
++ INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++ struct task_struct *idle)
++{
++ INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++ list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++ idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++ int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++ int last_prio = rq->prio;
++ int cpu, pr;
++
++ if (prio == last_prio)
++ return;
++
++ rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++ cpu = cpu_of(rq);
++ pr = atomic_read(&sched_prio_record);
++
++ if (prio < last_prio) {
++ if (IDLE_TASK_SCHED_PRIO == last_prio) {
++ rq->clear_idle_mask_func(cpu, sched_idle_mask);
++ last_prio -= 2;
++ }
++ CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++ return;
++ }
++ /* last_prio < prio */
++ if (IDLE_TASK_SCHED_PRIO == prio) {
++ rq->set_idle_mask_func(cpu, sched_idle_mask);
++ prio -= 2;
++ }
++ SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ * p->pi_lock
++ * rq->lock
++ * hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ * rq1->lock
++ * rq2->lock where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ * - sched_setaffinity()/
++ * set_cpus_allowed_ptr(): p->cpus_ptr, p->nr_cpus_allowed
++ * - set_user_nice(): p->se.load, p->*prio
++ * - __sched_setscheduler(): p->sched_class, p->policy, p->*prio,
++ * p->se.load, p->rt_priority,
++ * p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ * - sched_setnuma(): p->numa_preferred_nid
++ * - sched_move_task(): p->sched_task_group
++ * - uclamp_update_active() p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ * is changed locklessly using set_current_state(), __set_current_state() or
++ * set_special_state(), see their respective comments, or by
++ * try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ * concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ * is set by activate_task() and cleared by deactivate_task(), under
++ * rq->lock. Non-zero indicates the task is runnable, the special
++ * ON_RQ_MIGRATING state is used for migration without holding both
++ * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ * is set by prepare_task() and cleared by finish_task() such that it will be
++ * set before p is scheduled-in and cleared after p is scheduled-out, both
++ * under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ * [ The astute reader will observe that it is possible for two tasks on one
++ * CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ * - Don't call set_task_cpu() on a blocked task:
++ *
++ * We don't care what CPU we're not running on, this simplifies hotplug,
++ * the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ * - for try_to_wake_up(), called under p->pi_lock:
++ *
++ * This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ * - for migration called under rq->lock:
++ * [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ * o move_queued_task()
++ * o detach_task()
++ *
++ * - for migration called under double_rq_lock():
++ *
++ * o __migrate_swap_task()
++ * o push_rt_task() / pull_rt_task()
++ * o push_dl_task() / pull_dl_task()
++ * o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock_irqsave(&rq->lock, *flags);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, *flags);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ raw_spin_lock_irqsave(&p->pi_lock, *flags);
++ if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++ *plock = &p->pi_lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++ }
++ }
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++ raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ lockdep_assert_held(&p->pi_lock);
++
++ for (;;) {
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++ return rq;
++ raw_spin_unlock(&rq->lock);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ for (;;) {
++ raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ /*
++ * move_queued_task() task_rq_lock()
++ *
++ * ACQUIRE (rq->lock)
++ * [S] ->on_rq = MIGRATING [L] rq = task_rq()
++ * WMB (__set_task_cpu()) ACQUIRE (rq->lock);
++ * [S] ->cpu = new_cpu [L] task_rq()
++ * [L] ->on_rq
++ * RELEASE (rq->lock)
++ *
++ * If we observe the old CPU in task_rq_lock(), the acquire of
++ * the old rq->lock will fully serialize against the stores.
++ *
++ * If we observe the new CPU in task_rq_lock(), the address
++ * dependency headed by '[L] rq = task_rq()' and the acquire
++ * will pair with the WMB to ensure we then also see migrating.
++ */
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++ rq_lock_irqsave(_T->lock, &_T->rf),
++ rq_unlock_irqrestore(_T->lock, &_T->rf),
++ struct rq_flags rf)
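++
++/*
++ * Illustrative guard usage (a sketch, assuming the cleanup.h guard
++ * infrastructure already used elsewhere in this file):
++ *
++ *	scoped_guard (rq_lock_irqsave, rq) {
++ *		// rq->lock held with IRQs disabled for this block
++ *	}
++ */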
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++ raw_spinlock_t *lock;
++
++ /* Matches synchronize_rcu() in __sched_core_enable() */
++ preempt_disable();
++
++ for (;;) {
++ lock = __rq_lockp(rq);
++ raw_spin_lock_nested(lock, subclass);
++ if (likely(lock == __rq_lockp(rq))) {
++ /* preempt_count *MUST* be > 1 */
++ preempt_enable_no_resched();
++ return;
++ }
++ raw_spin_unlock(lock);
++ }
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++ raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++ s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++ /*
++ * Since irq_time is only updated on {soft,}irq_exit, we might run into
++ * this case when a previous update_rq_clock() happened inside a
++ * {soft,}IRQ region.
++ *
++ * When this happens, we stop ->clock_task and only update the
++ * prev_irq_time stamp to account for the part that fit, so that a next
++ * update will consume the rest. This ensures ->clock_task is
++ * monotonic.
++ *
++ * It does, however, cause some slight misattribution of {soft,}IRQ
++ * time, a more accurate solution would be to update the irq_time using
++ * the current rq->clock timestamp, except that would require using
++ * atomic ops.
++ */
++ if (irq_delta > delta)
++ irq_delta = delta;
++
++ rq->prev_irq_time += irq_delta;
++ delta -= irq_delta;
++ delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++ steal = paravirt_steal_clock(cpu_of(rq));
++ steal -= rq->prev_steal_time_rq;
++
++ if (unlikely(steal > delta))
++ steal = delta;
++
++ rq->prev_steal_time_rq += steal;
++ delta -= steal;
++ }
++#endif
++
++ rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ if ((irq_delta + steal))
++ update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++ s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++ if (unlikely(delta <= 0))
++ return;
++ rq->clock += delta;
++ sched_update_rq_clock(rq);
++ update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS (sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT (8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l) (((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t) ((t) >> 17)
++#define LOAD_HALF_BLOCK(t) ((t) >> 16)
++#define BLOCK_MASK(t) ((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b) (1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++ u64 time = rq->clock;
++ u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++ u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++ u64 curr = !!rq->nr_running;
++
++ if (delta) {
++ rq->load_history = rq->load_history >> delta;
++
++ if (delta < RQ_UTIL_SHIFT) {
++ rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++ if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++ rq->load_history ^= LOAD_BLOCK_BIT(delta);
++ }
++
++ rq->load_block = BLOCK_MASK(time) * prev;
++ } else {
++ rq->load_block += (time - rq->load_stamp) * prev;
++ }
++ if (prev ^ curr)
++ rq->load_history ^= CURRENT_LOAD_BIT;
++ rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++ return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
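++
++/*
++ * Worked example (a sketch): RQ_LOAD_HISTORY_TO_UTIL() yields an 8-bit
++ * slice in [0, 255], so for max == 1024 the returned utilization is
++ * slice * (1024 >> RQ_UTIL_SHIFT) == slice * 4, i.e. a value in [0, 1020].
++ */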
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++ return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because those updates may stop coming in when only RT tasks
++ * are active.
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid. Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++ struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++ data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++ if (data)
++ data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, let's send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++ int cpu = cpu_of(rq);
++
++ if (!tick_nohz_full_cpu(cpu))
++ return;
++
++ if (rq->nr_running < 2)
++ tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++ else
++ tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++ return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++ unsigned long ip = 0;
++ unsigned int state;
++
++ if (!p || p == current)
++ return 0;
++
++ /* Only get wchan if task is blocked and we can keep it that way. */
++ raw_spin_lock_irq(&p->pi_lock);
++ state = READ_ONCE(p->__state);
++ smp_rmb(); /* see try_to_wake_up() */
++ if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++ ip = __get_wchan(p);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func) \
++ sched_info_dequeue(rq, p); \
++ \
++ __list_del_entry(&p->sq_node); \
++ if (p->sq_node.prev == p->sq_node.next) { \
++ clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq), \
++ rq->queue.bitmap); \
++ func; \
++ }
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func) \
++ sched_info_enqueue(rq, p); \
++ { \
++ int idx, prio; \
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio); \
++ list_add_tail(&p->sq_node, &rq->queue.heads[idx]); \
++ if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) { \
++ set_bit(prio, rq->queue.bitmap); \
++ func; \
++ } \
++ }
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ --rq->nr_running;
++#ifdef CONFIG_SMP
++ if (1 == rq->nr_running)
++ cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ ++rq->nr_running;
++#ifdef CONFIG_SMP
++ if (2 == rq->nr_running)
++ cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
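++
++/*
++ * Taken together, the two helpers keep sched_rq_pending_mask equal to
++ * "CPUs with more than one queued task": dequeue_task() clears the bit
++ * when nr_running drops to 1, enqueue_task() sets it when nr_running
++ * reaches 2.
++ */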
++
++void requeue_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *node = &p->sq_node;
++ int deq_idx, idx, prio;
++
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++ /*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++ cpu_of(rq), task_cpu(p));
++#endif
++ if (list_is_last(node, &rq->queue.heads[idx]))
++ return;
++
++ __list_del_entry(node);
++ if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++ clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++ list_add_tail(node, &rq->queue.heads[idx]);
++ if (list_is_first(node, &rq->queue.heads[idx]))
++ set_bit(prio, rq->queue.bitmap);
++ update_sched_preempt_mask(rq);
++}
++
++/*
++ * try_cmpxchg based fetch_or() macro so it works for different integer types:
++ */
++#define fetch_or(ptr, mask) \
++ ({ \
++ typeof(ptr) _ptr = (ptr); \
++ typeof(mask) _mask = (mask); \
++ typeof(*_ptr) _val = *_ptr; \
++ \
++ do { \
++ } while (!try_cmpxchg(_ptr, &_val, _val | _mask)); \
++ _val; \
++})
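++
++/*
++ * Example expansion (a sketch): fetch_or(&ti->flags, _TIF_NEED_RESCHED)
++ * atomically ORs the flag in and evaluates to the *previous* flags value,
++ * which is how set_nr_and_not_polling() below learns whether
++ * _TIF_POLLING_NRFLAG was already set.
++ */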
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++ do {
++ if (!(val & _TIF_POLLING_NRFLAG))
++ return false;
++ if (val & _TIF_NEED_RESCHED)
++ return true;
++ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++ return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++ set_tsk_need_resched(p);
++ return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++ return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ struct wake_q_node *node = &task->wake_q;
++
++ /*
++ * Atomically grab the task, if ->wake_q is !nil already it means
++ * it's already queued (either by us or someone else) and will get the
++ * wakeup due to that.
++ *
++ * In order to ensure that a pending wakeup will observe our pending
++ * state, even in the failed case, an explicit smp_mb() must be used.
++ */
++ smp_mb__before_atomic();
++ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++ return false;
++
++ /*
++ * The head is context local, there can be no concurrency.
++ */
++ *head->lastp = node;
++ head->lastp = &node->next;
++ return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ if (__wake_q_add(head, task))
++ get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++ if (!__wake_q_add(head, task))
++ put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++ struct wake_q_node *node = head->first;
++
++ while (node != WAKE_Q_TAIL) {
++ struct task_struct *task;
++
++ task = container_of(node, struct task_struct, wake_q);
++ /* task can safely be re-inserted now: */
++ node = node->next;
++ task->wake_q.next = NULL;
++
++ /*
++ * wake_up_process() executes a full barrier, which pairs with
++ * the queueing in wake_q_add() so as not to miss wakeups.
++ */
++ wake_up_process(task);
++ put_task_struct(task);
++ }
++}
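++
++/*
++ * Illustrative usage (a sketch; 'some_lock' is a hypothetical lock):
++ * batch wakeups while holding a lock, then issue them after dropping it:
++ *
++ *	DEFINE_WAKE_Q(wake_q);
++ *
++ *	raw_spin_lock(&some_lock);
++ *	wake_q_add(&wake_q, p);
++ *	raw_spin_unlock(&some_lock);
++ *	wake_up_q(&wake_q);
++ */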
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void resched_curr(struct rq *rq)
++{
++ struct task_struct *curr = rq->curr;
++ int cpu;
++
++ lockdep_assert_held(&rq->lock);
++
++ if (test_tsk_need_resched(curr))
++ return;
++
++ cpu = cpu_of(rq);
++ if (cpu == smp_processor_id()) {
++ set_tsk_need_resched(curr);
++ set_preempt_need_resched();
++ return;
++ }
++
++ if (set_nr_and_not_polling(curr))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(cpu_rq(cpu));
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU. This is good for power-savings.
++ *
++ * We don't do a similar optimization for a completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++ int i, cpu = smp_processor_id(), default_cpu = -1;
++ struct cpumask *mask;
++ const struct cpumask *hk_mask;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++ if (!idle_cpu(cpu))
++ return cpu;
++ default_cpu = cpu;
++ }
++
++ hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++ for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++ mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++ for_each_cpu_and(i, mask, hk_mask)
++ if (!idle_cpu(i))
++ return i;
++
++ if (default_cpu == -1)
++ default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++ cpu = default_cpu;
++
++ return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (cpu == smp_processor_id())
++ return;
++
++ /*
++ * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++ * part of the idle loop. This forces an exit from the idle loop
++ * and a round trip to schedule(). Now this could be optimized
++ * because a simple new idle loop iteration is enough to
++	 * re-evaluate the next tick, provided some reordering of the tick
++	 * nohz functions that would need to follow TIF_POLLING_NRFLAG
++	 * clearing:
++ *
++ * - On most architectures, a simple fetch_or on ti::flags with a
++ * "0" value would be enough to know if an IPI needs to be sent.
++ *
++ * - x86 needs to perform a last need_resched() check between
++ * monitor and mwait which doesn't take timers into account.
++ * There a dedicated TIF_TIMER flag would be required to
++ * fetch_or here and be checked along with TIF_NEED_RESCHED
++ * before mwait().
++ *
++ * However, remote timer enqueue is not such a frequent event
++	 * and testing of the above solutions didn't appear to yield
++	 * much benefit.
++ */
++ if (set_nr_and_not_polling(rq->idle))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++ /*
++ * We just need the target to call irq_exit() and re-evaluate
++ * the next tick. The nohz full kick at least implies that.
++ * If needed we can still optimize that later with an
++ * empty IRQ.
++ */
++ if (cpu_is_offline(cpu))
++ return true; /* Don't try to wake offline CPUs. */
++ if (tick_nohz_full_cpu(cpu)) {
++ if (cpu != smp_processor_id() ||
++ tick_nohz_tick_stopped())
++ tick_nohz_full_kick_cpu(cpu);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++ if (!wake_up_full_nohz_cpu(cpu))
++ wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++ struct rq *rq = info;
++ int cpu = cpu_of(rq);
++ unsigned int flags;
++
++ /*
++ * Release the rq::nohz_csd.
++ */
++ flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++ WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++ rq->idle_balance = idle_cpu(cpu);
++ if (rq->idle_balance && !need_resched()) {
++ rq->nohz_idle_balance = flags;
++ raise_softirq_irqoff(SCHED_SOFTIRQ);
++ }
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++ if (sched_rq_first_task(rq) != rq->curr)
++ resched_curr(rq);
++}
++
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++ if (READ_ONCE(p->__state) & state)
++ return 1;
++
++ if (READ_ONCE(p->saved_state) & state)
++ return -1;
++
++ return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++ /*
++ * Serialize against current_save_and_set_rtlock_wait_state(),
++ * current_restore_rtlock_saved_state(), and __refrigerator().
++ */
++ guard(raw_spinlock_irq)(&p->pi_lock);
++
++ return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero. When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count). If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++ unsigned long flags;
++ int running, queued, match;
++ unsigned long ncsw;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ for (;;) {
++ rq = task_rq(p);
++
++ /*
++ * If the task is actively running on another CPU
++ * still, just relax and busy-wait without holding
++ * any locks.
++ *
++ * NOTE! Since we don't hold any locks, it's not
++ * even sure that "rq" stays as the right runqueue!
++ * But we don't care, since this will return false
++ * if the runqueue has changed and p is actually now
++ * running somewhere else!
++ */
++ while (task_on_cpu(p)) {
++ if (!task_state_match(p, match_state))
++ return 0;
++ cpu_relax();
++ }
++
++ /*
++ * Ok, time to look more closely! We need the rq
++ * lock now, to be *sure*. If we're wrong, we'll
++ * just go back and repeat.
++ */
++ task_access_lock_irqsave(p, &lock, &flags);
++ trace_sched_wait_task(p);
++ running = task_on_cpu(p);
++ queued = p->on_rq;
++ ncsw = 0;
++ if ((match = __task_state_match(p, match_state))) {
++ /*
++ * When matching on p->saved_state, consider this task
++ * still queued so it will wait.
++ */
++ if (match < 0)
++ queued = 1;
++ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++ }
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ /*
++ * If it changed from the expected state, bail out now.
++ */
++ if (unlikely(!ncsw))
++ break;
++
++ /*
++ * Was it really running after all now that we
++ * checked with the proper locks actually held?
++ *
++ * Oops. Go back and try again..
++ */
++ if (unlikely(running)) {
++ cpu_relax();
++ continue;
++ }
++
++ /*
++ * It's not enough that it's not actively running,
++ * it must be off the runqueue _entirely_, and not
++ * preempted!
++ *
++ * So if it was still runnable (but just not actively
++ * running right now), it's preempted, and we should
++ * yield - it could be a while.
++ */
++ if (unlikely(queued)) {
++ ktime_t to = NSEC_PER_SEC / HZ;
++
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++ continue;
++ }
++
++ /*
++ * Ahh, all good. It wasn't running, and it wasn't
++ * runnable, which means that it will never become
++ * running in the future either. We're all done!
++ */
++ break;
++ }
++
++ return ncsw;
++}
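++
++/*
++ * Illustrative caller pattern (a sketch of the two-call idiom described
++ * above):
++ *
++ *	unsigned long ncsw = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
++ *	if (ncsw && ncsw == wait_task_inactive(p, TASK_UNINTERRUPTIBLE))
++ *		;	// @p remained unscheduled between the two calls
++ */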
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++ if (hrtimer_active(&rq->hrtick_timer))
++ hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++ struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++ WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++ raw_spin_lock(&rq->lock);
++ resched_curr(rq);
++ raw_spin_unlock(&rq->lock);
++
++ return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ * - enabled by features
++ * - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ /**
++ * Alt schedule FW doesn't support sched_feat yet
++ if (!sched_feat(HRTICK))
++ return 0;
++ */
++ if (!cpu_active(cpu_of(rq)))
++ return 0;
++ return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ ktime_t time = rq->hrtick_time;
++
++ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++ struct rq *rq = arg;
++
++ raw_spin_lock(&rq->lock);
++ __hrtick_restart(rq);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ s64 delta;
++
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense and can cause timer DoS.
++ */
++ delta = max_t(s64, delay, 10000LL);
++
++ rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++ if (rq == this_rq())
++ __hrtick_restart(rq);
++ else
++ smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
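++
++/*
++ * Example (a sketch): a requested slice of 1000ns is clamped to the
++ * 10000ns minimum above; for a remote rq the hrtimer_start() is deferred
++ * to the target CPU via the rq->hrtick_csd call single data.
++ */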
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense. Rely on vruntime for fairness.
++ */
++ delay = max_t(u64, delay, 10000LL);
++ hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++ HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++ rq->hrtick_timer.function = hrtick;
++}
++#else /* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif /* CONFIG_SCHED_HRTICK */
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++ enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++ /*
++ * If in_iowait is set, the code below may not trigger any cpufreq
++ * utilization updates, so do it here explicitly with the IOWAIT flag
++ * passed.
++ */
++ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++ WRITE_ONCE(p->on_rq, 0);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++ dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++ cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++ /*
++ * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++ * successfully executed on another CPU. We must ensure that updates of
++ * per-task data have been completed by this moment.
++ */
++ smp_wmb();
++
++ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * We should never call set_task_cpu() on a blocked task,
++ * ttwu() will sort out the placement.
++ */
++ WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++ /*
++ * The caller should hold either p->pi_lock or rq->lock, when changing
++ * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++ *
++ * sched_move_task() holds both and thus holding either pins the cgroup,
++ * see task_group().
++ */
++ WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++ lockdep_is_held(&task_rq(p)->lock)));
++#endif
++ /*
++ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++ */
++ WARN_ON_ONCE(!cpu_online(new_cpu));
++
++ WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++ trace_sched_migrate_task(p, new_cpu);
++
++ if (task_cpu(p) != new_cpu)
++ {
++ rseq_migrate(p);
++ perf_event_task_migrate(p);
++ }
++
++ __set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED 0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ /*
++ * This here violates the locking rules for affinity, since we're only
++ * supposed to change these variables while holding both rq->lock and
++ * p->pi_lock.
++ *
++ * HOWEVER, it magically works, because ttwu() is the only code that
++ * accesses these variables under p->pi_lock and only does so after
++ * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++ * before finish_task().
++ *
++ * XXX do further audits, this smells like something putrid.
++ */
++ SCHED_WARN_ON(!p->on_cpu);
++ p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++ int cpu;
++
++ if (p->migration_disabled) {
++ p->migration_disabled++;
++ return;
++ }
++
++ guard(preempt)();
++ cpu = smp_processor_id();
++ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++ cpu_rq(cpu)->nr_pinned++;
++ p->migration_disabled = 1;
++ p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++ /*
++ * Violates locking rules! see comment in __do_set_cpus_ptr().
++ */
++ if (p->cpus_ptr == &p->cpus_mask)
++ __do_set_cpus_ptr(p, cpumask_of(cpu));
++ }
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++ struct task_struct *p = current;
++
++ if (0 == p->migration_disabled)
++ return;
++
++ if (p->migration_disabled > 1) {
++ p->migration_disabled--;
++ return;
++ }
++
++ if (WARN_ON_ONCE(!p->migration_disabled))
++ return;
++
++ /*
++ * Ensure stop_task runs either before or after this, and that
++ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++ */
++ guard(preempt)();
++ /*
++ * Assumption: current should be running on allowed cpu
++ */
++ WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++ if (p->cpus_ptr != &p->cpus_mask)
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ /*
++ * Mustn't clear migration_disabled() until cpus_ptr points back at the
++ * regular cpus_mask, otherwise things that race (eg.
++ * select_fallback_rq) get confused.
++ */
++ barrier();
++ p->migration_disabled = 0;
++ this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
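++
++/*
++ * Illustrative usage (a sketch): the pair nests like preempt_disable();
++ * inside the region the task may still be preempted but cannot change
++ * CPUs:
++ *
++ *	migrate_disable();
++ *	cpu = smp_processor_id();	// stable until migrate_enable()
++ *	// ...
++ *	migrate_enable();
++ */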
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++ /* When not in the task's cpumask, no point in looking further. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /* migrate_disabled() must be allowed to finish. */
++ if (is_migration_disabled(p))
++ return cpu_online(cpu);
++
++	/* Non-kernel threads are not allowed during either online or offline transitions. */
++ if (!(p->flags & PF_KTHREAD))
++ return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++ /* KTHREAD_IS_PER_CPU is always allowed. */
++ if (kthread_is_per_cpu(p))
++ return cpu_online(cpu);
++
++ /* Regular kernel threads don't get to stay during offline. */
++ if (cpu_dying(cpu))
++ return false;
++
++ /* But are allowed during online. */
++ return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ * stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ * off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ * it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ * is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++ int src_cpu;
++
++ lockdep_assert_held(&rq->lock);
++
++ src_cpu = cpu_of(rq);
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++ dequeue_task(p, rq, 0);
++ set_task_cpu(p, new_cpu);
++ raw_spin_unlock(&rq->lock);
++
++ rq = cpu_rq(new_cpu);
++
++ raw_spin_lock(&rq->lock);
++ WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++ sched_mm_cid_migrate_to(rq, p, src_cpu);
++
++ sched_task_sanity_check(p, rq);
++ enqueue_task(p, rq, 0);
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ wakeup_preempt(rq);
++
++ return rq;
++}
++
++struct migration_arg {
++ struct task_struct *task;
++ int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++ /* Affinity changed (again). */
++ if (!is_cpu_allowed(p, dest_cpu))
++ return rq;
++
++ return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a high-prio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++ struct migration_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++
++ /*
++ * The original target CPU might have gone down and we might
++ * be on another CPU but it doesn't matter.
++ */
++ local_irq_save(flags);
++ /*
++ * We need to explicitly wake pending tasks before running
++ * __migrate_task() such that we will not miss enforcing cpus_ptr
++ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++ */
++ flush_smp_call_function_queue();
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++ /*
++ * If task_rq(p) != rq, it cannot be migrated here, because we're
++ * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++ * we're holding p->pi_lock.
++ */
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ rq = __migrate_task(rq, p, arg->dest_cpu);
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++ cpumask_copy(&p->cpus_mask, ctx->new_mask);
++ p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++ /*
++ * Swap in a new user_cpus_ptr if SCA_USER flag set
++ */
++ if (ctx->flags & SCA_USER)
++ swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++ lockdep_assert_held(&p->pi_lock);
++ set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .user_mask = NULL,
++ .flags = SCA_USER, /* clear the user requested mask */
++ };
++ union cpumask_rcuhead {
++ cpumask_t cpumask;
++ struct rcu_head rcu;
++ };
++
++ __do_set_cpus_allowed(p, &ac);
++
++ /*
++ * Because this is called with p->pi_lock held, it is not possible
++ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++ * kfree_rcu().
++ */
++ kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++ int node)
++{
++ cpumask_t *user_mask;
++ unsigned long flags;
++
++ /*
++	 * Always clear dst->user_cpus_ptr first as the two tasks'
++	 * user_cpus_ptr may differ by now due to racing.
++ */
++ dst->user_cpus_ptr = NULL;
++
++ /*
++ * This check is racy and losing the race is a valid situation.
++ * It is not worth the extra overhead of taking the pi_lock on
++ * every fork/clone.
++ */
++ if (data_race(!src->user_cpus_ptr))
++ return 0;
++
++ user_mask = alloc_user_cpus_ptr(node);
++ if (!user_mask)
++ return -ENOMEM;
++
++ /*
++ * Use pi_lock to protect content of user_cpus_ptr
++ *
++ * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++ * do_set_cpus_allowed().
++ */
++ raw_spin_lock_irqsave(&src->pi_lock, flags);
++ if (src->user_cpus_ptr) {
++ swap(dst->user_cpus_ptr, user_mask);
++ cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++ }
++ raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++ if (unlikely(user_mask))
++ kfree(user_mask);
++
++ return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++ struct cpumask *user_mask = NULL;
++
++ swap(p->user_cpus_ptr, user_mask);
++
++ return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++ kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++ return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++ guard(preempt)();
++ int cpu = task_cpu(p);
++
++ if ((cpu != smp_processor_id()) && task_curr(p))
++ smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ * - cpu_active must be a subset of cpu_online
++ *
++ * - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ * see __set_cpus_allowed_ptr(). At this point the newly online
++ * CPU isn't yet part of the sched domains, and balancing will not
++ * see it.
++ *
++ * - on cpu-down we clear cpu_active() to mask the sched domains and
++ *   prevent the load balancer from placing new tasks on the
++ *   to-be-removed CPU. Existing tasks will remain running there and
++ *   will be taken off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++ int nid = cpu_to_node(cpu);
++ const struct cpumask *nodemask = NULL;
++ enum { cpuset, possible, fail } state = cpuset;
++ int dest_cpu;
++
++ /*
++ * If the node that the CPU is on has been offlined, cpu_to_node()
++ * will return -1. There is no CPU on the node, and we should
++ * select the CPU on the other node.
++ */
++ if (nid != -1) {
++ nodemask = cpumask_of_node(nid);
++
++ /* Look for allowed, online CPU in same node. */
++ for_each_cpu(dest_cpu, nodemask) {
++ if (is_cpu_allowed(p, dest_cpu))
++ return dest_cpu;
++ }
++ }
++
++ for (;;) {
++ /* Any allowed, online CPU? */
++ for_each_cpu(dest_cpu, p->cpus_ptr) {
++ if (!is_cpu_allowed(p, dest_cpu))
++ continue;
++ goto out;
++ }
++
++ /* No more Mr. Nice Guy. */
++ switch (state) {
++ case cpuset:
++ if (cpuset_cpus_allowed_fallback(p)) {
++ state = possible;
++ break;
++ }
++ fallthrough;
++ case possible:
++ /*
++ * XXX When called from select_task_rq() we only
++ * hold p->pi_lock and again violate locking order.
++ *
++ * More yuck to audit.
++ */
++ do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++ state = fail;
++ break;
++
++ case fail:
++ BUG();
++ break;
++ }
++ }
++
++out:
++ if (state != cpuset) {
++ /*
++ * Don't tell them about moving exiting tasks or
++ * kernel threads (both mm NULL), since they never
++ * leave kernel.
++ */
++ if (p->mm && printk_ratelimit()) {
++ printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++ task_pid_nr(p), p->comm, cpu);
++ }
++ }
++
++ return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++ int cpu;
++
++ cpumask_copy(mask, sched_preempt_mask + ref);
++ if (prio < ref) {
++ for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++ if (prio < cpu_rq(cpu)->prio)
++ cpumask_set_cpu(cpu, mask);
++ }
++ } else {
++ for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++ if (prio >= cpu_rq(cpu)->prio)
++ cpumask_clear_cpu(cpu, mask);
++ }
++ }
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++ cpumask_t *mask = sched_preempt_mask + prio;
++ int pr = atomic_read(&sched_prio_record);
++
++ if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++ sched_preempt_mask_flush(mask, prio, pr);
++ atomic_set(&sched_prio_record, prio);
++ }
++
++ return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ cpumask_t allow_mask, mask;
++
++ if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++ return select_fallback_rq(task_cpu(p), p);
++
++ if (idle_select_func(&mask, &allow_mask, sched_idle_mask) ||
++ preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++ return best_mask_cpu(task_cpu(p), &mask);
++
++ return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++ static struct lock_class_key stop_pi_lock;
++ struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++ struct sched_param start_param = { .sched_priority = 0 };
++ struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++ if (stop) {
++ /*
++		 * Make it appear like a SCHED_FIFO task, it's something
++		 * userspace knows about and won't get confused by.
++ *
++ * Also, it will make PI more or less work without too
++ * much confusion -- but then, stop work should not
++ * rely on PI working anyway.
++ */
++ sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++ /*
++ * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++ * adjust the effective priority of a task. As a result,
++ * rt_mutex_setprio() can trigger (RT) balancing operations,
++ * which can then trigger wakeups of the stop thread to push
++ * around the current task.
++ *
++ * The stop task itself will never be part of the PI-chain, it
++ * never blocks, therefore that ->pi_lock recursion is safe.
++ * Tell lockdep about this by placing the stop->pi_lock in its
++ * own class.
++ */
++ lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++ }
++
++ cpu_rq(cpu)->stop = stop;
++
++ if (old_stop) {
++ /*
++ * Reset it back to a normal scheduling policy so that
++ * it can die in pieces.
++ */
++ sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++ }
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++ raw_spinlock_t *lock, unsigned long irq_flags)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ /* Can the task run on the task's current CPU? If so, we're done */
++ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++ if (p->migration_disabled) {
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ p->migration_flags |= MDF_FORCE_ENABLED;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++ }
++
++ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++ struct migration_arg arg = { p, dest_cpu };
++
++ /* Need help from migration thread: drop lock and wait. */
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ return 0;
++ }
++ if (task_on_rq_queued(p)) {
++ /*
++ * OK, since we're going to drop the lock immediately
++ * afterwards anyway.
++ */
++ update_rq_clock(rq);
++ rq = move_queued_task(rq, p, dest_cpu);
++ lock = &rq->lock;
++ }
++ }
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++ struct affinity_context *ctx,
++ struct rq *rq,
++ raw_spinlock_t *lock,
++ unsigned long irq_flags)
++{
++ const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++ const struct cpumask *cpu_valid_mask = cpu_active_mask;
++ bool kthread = p->flags & PF_KTHREAD;
++ int dest_cpu;
++ int ret = 0;
++
++ if (kthread || is_migration_disabled(p)) {
++ /*
++ * Kernel threads are allowed on online && !active CPUs,
++ * however, during cpu-hot-unplug, even these might get pushed
++ * away if not KTHREAD_IS_PER_CPU.
++ *
++ * Specifically, migration_disabled() tasks must not fail the
++ * cpumask_any_and_distribute() pick below, esp. so on
++ * SCA_MIGRATE_ENABLE, otherwise we'll not call
++ * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++ */
++ cpu_valid_mask = cpu_online_mask;
++ }
++
++ if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ /*
++ * Must re-check here, to close a race against __kthread_bind(),
++ * sched_setaffinity() is not guaranteed to observe the flag.
++ */
++ if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++ goto out;
++
++ dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++ if (dest_cpu >= nr_cpu_ids) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __do_set_cpus_allowed(p, ctx);
++
++ return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++ return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ unsigned long irq_flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++ * flags are set.
++ */
++ if (p->user_cpus_ptr &&
++ !(ctx->flags & SCA_USER) &&
++ cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++ ctx->new_mask = rq->scratch_mask;
++
++ return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++
++ return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
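++
++/*
++ * Illustrative usage (a sketch; 'target_cpu' is a hypothetical variable):
++ *
++ *	set_cpus_allowed_ptr(p, cpumask_of(target_cpu));
++ */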
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++ struct cpumask *new_mask,
++ const struct cpumask *subset_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++ unsigned long irq_flags;
++ raw_spinlock_t *lock;
++ struct rq *rq;
++ int err;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++
++ if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++ err = -EINVAL;
++ goto err_unlock;
++ }
++
++ return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ cpumask_var_t new_mask;
++ const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++ alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++ /*
++ * __migrate_task() can fail silently in the face of concurrent
++ * offlining of the chosen destination CPU, so take the hotplug
++ * lock to ensure that the migration succeeds.
++ */
++ cpus_read_lock();
++ if (!cpumask_available(new_mask))
++ goto out_set_mask;
++
++ if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++ goto out_free_mask;
++
++ /*
++ * We failed to find a valid subset of the affinity mask for the
++ * task, so override it based on its cpuset hierarchy.
++ */
++ cpuset_cpus_allowed(p, new_mask);
++ override_mask = new_mask;
++
++out_set_mask:
++ if (printk_ratelimit()) {
++ printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++ task_pid_nr(p), p->comm,
++ cpumask_pr_args(override_mask));
++ }
++
++ WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++ cpus_read_unlock();
++ free_cpumask_var(new_mask);
++}
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ struct affinity_context ac = {
++ .new_mask = task_user_cpus(p),
++ .flags = 0,
++ };
++ int ret;
++
++ /*
++ * Try to restore the old affinity mask with __sched_setaffinity().
++ * Cpuset masking will be done there too.
++ */
++ ret = __sched_setaffinity(p, &ac);
++ WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ return 0;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq;
++
++ if (!schedstat_enabled())
++ return;
++
++ rq = this_rq();
++
++#ifdef CONFIG_SMP
++ if (cpu == rq->cpu) {
++ __schedstat_inc(rq->ttwu_local);
++ __schedstat_inc(p->stats.nr_wakeups_local);
++ } else {
++ /** Alt schedule FW ToDo:
++ * How to do ttwu_wake_remote
++ */
++ }
++#endif /* CONFIG_SMP */
++
++ __schedstat_inc(rq->ttwu_count);
++ __schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible--;
++
++ if (
++#ifdef CONFIG_SMP
++ !(wake_flags & WF_MIGRATED) &&
++#endif
++ p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ activate_task(p, rq);
++ wakeup_preempt(rq);
++
++ ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ * for (;;) {
++ * set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ * if (CONDITION)
++ * break;
++ *
++ * schedule();
++ * }
++ * __set_current_state(TASK_RUNNING);
++ *
++ * A wakeup (setting @p->state to TASK_RUNNING) can race with this loop
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ * %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ int ret = 0;
++
++ rq = __task_access_lock(p, &lock);
++ if (task_on_rq_queued(p)) {
++ if (!task_on_cpu(p)) {
++ /*
++ * When on_rq && !on_cpu the task is preempted, see if
++ * it should preempt the task that is current now.
++ */
++ update_rq_clock(rq);
++ wakeup_preempt(rq);
++ }
++ ttwu_do_wakeup(p);
++ ret = 1;
++ }
++ __task_access_unlock(p, lock);
++
++ return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++ struct llist_node *llist = arg;
++ struct rq *rq = this_rq();
++ struct task_struct *p, *t;
++ struct rq_flags rf;
++
++ if (!llist)
++ return;
++
++ rq_lock_irqsave(rq, &rf);
++ update_rq_clock(rq);
++
++ llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++ if (WARN_ON_ONCE(p->on_cpu))
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++ set_task_cpu(p, cpu_of(rq));
++
++ ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++ }
++
++ /*
++	 * Must be after enqueueing at least one task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending while another task is pending:
++	 * we will receive an IPI once local IRQs are enabled and then
++	 * enqueue it. Since nr_running > 0 by now, idle_cpu() will always
++	 * return the correct result.
++ */
++ WRITE_ONCE(rq->ttwu_pending, 0);
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++ if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++ trace_sched_wake_idle_without_ipi(cpu);
++ return false;
++ }
++
++ return true;
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU, on receipt of the IPI, will queue the task
++ * via sched_ttwu_pending() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++ WRITE_ONCE(rq->ttwu_pending, 1);
++ __smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++ /*
++ * Do not complicate things with the async wake_list while the CPU is
++ * in hotplug state.
++ */
++ if (!cpu_active(cpu))
++ return false;
++
++ /* Ensure the task will still be allowed to run on the CPU. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /*
++ * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++ */
++ if (!cpus_share_cache(smp_processor_id(), cpu))
++ return true;
++
++ if (cpu == smp_processor_id())
++ return false;
++
++ /*
++ * If the wakee cpu is idle, or the task is descheduling and the
++ * only running task on the CPU, then use the wakelist to offload
++ * the task activation to the idle (or soon-to-be-idle) CPU as
++ * the current CPU is likely busy. nr_running is checked to
++ * avoid unnecessary task stacking.
++ *
++ * Note that we can only get here with (wakee) p->on_rq=0,
++ * p->on_cpu can be whatever, we've done the dequeue, so
++ * the wakee has been accounted out of ->nr_running.
++ */
++ if (!cpu_rq(cpu)->nr_running)
++ return true;
++
++ return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++ __ttwu_queue_wakelist(p, cpu, wake_flags);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ guard(rcu)();
++ if (is_idle_task(rcu_dereference(rq->curr))) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ if (is_idle_task(rq->curr))
++ resched_curr(rq);
++ }
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++ return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++ if (!sched_asym_cpucap_active())
++ return true;
++
++ if (this_cpu == that_cpu)
++ return true;
++
++ return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++ if (this_cpu == that_cpu)
++ return true;
++
++ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
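++
++/*
++ * Illustrative example on a hypothetical topology: if CPUs 0-15 share one
++ * LLC and CPUs 16-31 another, cpus_share_cache(0, 8) is true while
++ * cpus_share_cache(0, 16) is false, so a wakeup from CPU 0 towards CPU 16
++ * takes the wakelist path in ttwu_queue_cond() above.
++ */
++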
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (ttwu_queue_wakelist(p, cpu, wake_flags))
++ return;
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++ ttwu_do_activate(rq, p, wake_flags);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ * The related locking code always holds p::pi_lock when updating
++ * p::saved_state, which means the code is fully serialized in both cases.
++ *
++ * For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ * No other bits set. This allows us to distinguish all wakeup scenarios.
++ *
++ * For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ * allows us to prevent early wakeup of tasks before they can be run on
++ * asymmetric ISA architectures (e.g. ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++ int match;
++
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++ state != TASK_RTLOCK_WAIT);
++ }
++
++ *success = !!(match = __task_state_match(p, state));
++
++ /*
++ * Saved state preserves the task state across blocking on
++ * an RT lock or TASK_FREEZABLE tasks. If the state matches,
++ * set p::saved_state to TASK_RUNNING, but do not wake the task
++ * because it waits for a lock wakeup or __thaw_task(). Also
++ * indicate success because from the regular waker's point of
++ * view this has succeeded.
++ *
++ * After acquiring the lock the task will restore p::__state
++ * from p::saved_state which ensures that the regular
++ * wakeup is not lost. The restore will also set
++ * p::saved_state to TASK_RUNNING so any further tests will
++ * not result in false positives vs. @success
++ */
++ if (match < 0)
++ p->saved_state = TASK_RUNNING;
++
++ return match > 0;
++}
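++
++/*
++ * A sketch of the PREEMPT_RT saved_state handling above, with
++ * rtlock_slowlock() standing in for the RT lock slow path:
++ *
++ *	p->__state = TASK_UNINTERRUPTIBLE;	// regular sleep state
++ *	rtlock_slowlock()
++ *	  p->saved_state = p->__state;
++ *	  p->__state = TASK_RTLOCK_WAIT;
++ *	try_to_wake_up(p, TASK_UNINTERRUPTIBLE, ...)	// regular waker
++ *	  __task_state_match() < 0		// matched p->saved_state
++ *	  p->saved_state = TASK_RUNNING;	// wakeup recorded, not lost
++ *	// the lock wakeup later restores p->__state from p->saved_state
++ */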
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ * MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ * A) UNLOCK of the rq(c0)->lock scheduling out task t
++ * B) migration for t is required to synchronize *both* rq(c0)->lock and
++ * rq(c1)->lock (if not at the same time, then in that order).
++ * C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ * CPU0 CPU1 CPU2
++ *
++ * LOCK rq(0)->lock
++ * sched-out X
++ * sched-in Y
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(0)->lock // orders against CPU0
++ * dequeue X
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(1)->lock
++ * enqueue X
++ * UNLOCK rq(1)->lock
++ *
++ * LOCK rq(1)->lock // orders against CPU2
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(1)->lock
++ *
++ *
++ * BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ * 1) smp_store_release(X->on_cpu, 0) -- finish_task()
++ * 2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ * LOCK rq(0)->lock LOCK X->pi_lock
++ * dequeue X
++ * sched-out X
++ * smp_store_release(X->on_cpu, 0);
++ *
++ * smp_cond_load_acquire(&X->on_cpu, !VAL);
++ * X->state = WAKING
++ * set_task_cpu(X,2)
++ *
++ * LOCK rq(2)->lock
++ * enqueue X
++ * X->state = RUNNING
++ * UNLOCK rq(2)->lock
++ *
++ * LOCK rq(2)->lock // orders against CPU1
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(2)->lock
++ *
++ * UNLOCK X->pi_lock
++ * UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ * If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ * - p->sched_class
++ * - p->cpus_ptr
++ * - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ * - ttwu_runnable() -- old rq, unavoidable, see comment there;
++ * - ttwu_queue() -- new rq, for enqueue of the task;
++ * - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ * %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++ guard(preempt)();
++ int cpu, success = 0;
++
++ if (p == current) {
++ /*
++ * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++ * == smp_processor_id()'. Together this means we can special
++ * case the whole 'p->on_rq && ttwu_runnable()' case below
++ * without taking any locks.
++ *
++ * In particular:
++ * - we rely on Program-Order guarantees for all the ordering,
++ * - we're serialized against set_special_state() by virtue of
++ * it disabling IRQs (this allows not taking ->pi_lock).
++ */
++ if (!ttwu_state_match(p, state, &success))
++ goto out;
++
++ trace_sched_waking(p);
++ ttwu_do_wakeup(p);
++ goto out;
++ }
++
++ /*
++ * If we are going to wake up a thread waiting for CONDITION we
++ * need to ensure that CONDITION=1 done by the caller can not be
++ * reordered with p->state check below. This pairs with smp_store_mb()
++ * in set_current_state() that the waiting thread does.
++ */
++ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++ smp_mb__after_spinlock();
++ if (!ttwu_state_match(p, state, &success))
++ break;
++
++ trace_sched_waking(p);
++
++ /*
++ * Ensure we load p->on_rq _after_ p->state, otherwise it would
++ * be possible to, falsely, observe p->on_rq == 0 and get stuck
++ * in smp_cond_load_acquire() below.
++ *
++ * sched_ttwu_pending() try_to_wake_up()
++ * STORE p->on_rq = 1 LOAD p->state
++ * UNLOCK rq->lock
++ *
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * UNLOCK rq->lock
++ *
++ * [task p]
++ * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * A similar smp_rmb() lives in __task_needs_rq_lock().
++ */
++ smp_rmb();
++ if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++ break;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++ * possible to, falsely, observe p->on_cpu == 0.
++ *
++ * One must be running (->on_cpu == 1) in order to remove oneself
++ * from the runqueue.
++ *
++ * __schedule() (switch to task 'p') try_to_wake_up()
++ * STORE p->on_cpu = 1 LOAD p->on_rq
++ * UNLOCK rq->lock
++ *
++ * __schedule() (put 'p' to sleep)
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * STORE p->on_rq = 0 LOAD p->on_cpu
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++ * schedule()'s deactivate_task() has 'happened' and p will no longer
++	 * care about its own p->state. See the comment in __schedule().
++ */
++ smp_acquire__after_ctrl_dep();
++
++ /*
++ * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++ * == 0), which means we need to do an enqueue, change p->state to
++ * TASK_WAKING such that we can unlock p->pi_lock before doing the
++	 * enqueue, as in ttwu_queue_wakelist().
++ */
++ WRITE_ONCE(p->__state, TASK_WAKING);
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, consider queueing p on the remote CPU's wake_list
++ * which potentially sends an IPI instead of spinning on p->on_cpu to
++ * let the waker make forward progress. This is safe because IRQs are
++ * disabled and the IPI will deliver after on_cpu is cleared.
++ *
++ * Ensure we load task_cpu(p) after p->on_cpu:
++ *
++ * set_task_cpu(p, cpu);
++ * STORE p->cpu = @cpu
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock
++ * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu)
++ * STORE p->on_cpu = 1 LOAD p->cpu
++ *
++ * to ensure we observe the correct CPU on which the task is currently
++ * scheduling.
++ */
++ if (smp_load_acquire(&p->on_cpu) &&
++ ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++ break;
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, wait until it's done referencing the task.
++ *
++ * Pairs with the smp_store_release() in finish_task().
++ *
++ * This ensures that tasks getting woken will be fully ordered against
++ * their previous state and preserve Program Order.
++ */
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ sched_task_ttwu(p);
++
++ if ((wake_flags & WF_CURRENT_CPU) &&
++ cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++ cpu = smp_processor_id();
++ else
++ cpu = select_task_rq(p);
++
++ if (cpu != task_cpu(p)) {
++ if (p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ wake_flags |= WF_MIGRATED;
++ set_task_cpu(p, cpu);
++ }
++#else
++ sched_task_ttwu(p);
++
++ cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++ ttwu_queue(p, cpu, wake_flags);
++ }
++out:
++ if (success)
++ ttwu_stat(p, task_cpu(p), wake_flags);
++
++ return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++ * the task is blocked. Make sure to check @state since ttwu() can drop
++ * locks at the end, see ttwu_queue_wakelist().
++ */
++ if (state == TASK_RUNNING || state == TASK_WAKING)
++ return true;
++
++ /*
++ * Ensure we load p->on_rq after p->__state, otherwise it would be
++ * possible to, falsely, observe p->on_rq == 0.
++ *
++ * See try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ if (p->on_rq)
++ return true;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure the task has finished __schedule() and will not be referenced
++ * anymore. Again, see try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++ return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it. This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required. Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ * Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++ struct rq *rq = NULL;
++ struct rq_flags rf;
++ int ret;
++
++ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++ if (__task_needs_rq_lock(p))
++ rq = __task_rq_lock(p, &rf);
++
++ /*
++ * At this point the task is pinned; either:
++ * - blocked and we're holding off wakeups (pi->lock)
++ * - woken, and we're holding off enqueue (rq->lock)
++ * - queued, and we're holding off schedule (rq->lock)
++ * - running, and we're holding off de-schedule (rq->lock)
++ *
++ * The called function (@func) can use: task_curr(), p->on_rq and
++ * p->__state to differentiate between these states.
++ */
++ ret = func(p, arg);
++
++ if (rq)
++ __task_rq_unlock(rq, &rf);
++
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++ return ret;
++}
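++
++/*
++ * Minimal usage sketch; the callback name is illustrative:
++ *
++ *	static int task_state_func(struct task_struct *p, void *arg)
++ *	{
++ *		return READ_ONCE(p->__state);	// stable, p is pinned
++ *	}
++ *
++ *	unsigned int state = task_call_func(p, task_state_func, NULL);
++ */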
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU. If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee. Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++ struct task_struct *t;
++
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ t = rcu_dereference(cpu_curr(cpu));
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++ return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++ return try_to_wake_up(p, state, 0);
++}
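++
++/*
++ * Both are the waker half of the classic pattern (illustrative):
++ *
++ *	// waiter				// waker
++ *	set_current_state(TASK_INTERRUPTIBLE);	CONDITION = 1;
++ *	if (!CONDITION)				wake_up_process(p);
++ *		schedule();
++ *	__set_current_state(TASK_RUNNING);
++ *
++ * The smp_store_mb() in set_current_state() pairs with the full barrier
++ * in try_to_wake_up(), so the waiter cannot miss CONDITION = 1 and sleep
++ * forever.
++ */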
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ p->on_rq = 0;
++ p->on_cpu = 0;
++ p->utime = 0;
++ p->stime = 0;
++ p->sched_time = 0;
++
++#ifdef CONFIG_SCHEDSTATS
++ /* Even if schedstat is disabled, there should not be garbage */
++ memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++ INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++ p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++ p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++ init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ __sched_fork(clone_flags, p);
++ /*
++ * We mark the process as NEW here. This guarantees that
++ * nobody will actually run it, and a signal or other external
++ * event cannot wake it up and insert it on the runqueue either.
++ */
++ p->__state = TASK_NEW;
++
++ /*
++ * Make sure we do not leak PI boosting priority to the child.
++ */
++ p->prio = current->normal_prio;
++
++ /*
++ * Revert to default priority/policy on fork if requested.
++ */
++ if (unlikely(p->sched_reset_on_fork)) {
++ if (task_has_rt_policy(p)) {
++ p->policy = SCHED_NORMAL;
++ p->static_prio = NICE_TO_PRIO(0);
++ p->rt_priority = 0;
++ } else if (PRIO_TO_NICE(p->static_prio) < 0)
++ p->static_prio = NICE_TO_PRIO(0);
++
++ p->prio = p->normal_prio = p->static_prio;
++
++ /*
++ * We don't need the reset flag anymore after the fork. It has
++ * fulfilled its duty:
++ */
++ p->sched_reset_on_fork = 0;
++ }
++
++#ifdef CONFIG_SCHED_INFO
++ if (unlikely(sched_info_on()))
++ memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++ init_task_preempt_count(p);
++
++ return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ /*
++ * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++ * required yet, but lockdep gets upset if rules are violated.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ /*
++	 * Share the timeslice between parent and child, so that the total
++	 * amount of pending timeslices in the system doesn't change, which
++	 * preserves scheduling fairness.
++ */
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ rq->curr->time_slice /= 2;
++ p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++ hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++ if (p->time_slice < RESCHED_NS) {
++ p->time_slice = sysctl_sched_base_slice;
++ resched_curr(rq);
++ }
++ sched_task_fork(p, rq);
++ raw_spin_unlock(&rq->lock);
++
++ rseq_migrate(p);
++ /*
++ * We're setting the CPU for the first time, we don't migrate,
++ * so use __set_task_cpu().
++ */
++ __set_task_cpu(p, smp_processor_id());
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
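++
++/*
++ * Example with illustrative numbers: a parent holding 3ms of slice at
++ * fork() keeps 1.5ms and the child inherits 1.5ms, so fork() does not
++ * mint extra slice. If the halved slice falls below RESCHED_NS, the
++ * child is refilled to sysctl_sched_base_slice and the parent is
++ * rescheduled.
++ */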
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++ if (enabled)
++ static_branch_enable(&sched_schedstats);
++ else
++ static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++ if (!schedstat_enabled()) {
++ pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++ static_branch_enable(&sched_schedstats);
++ }
++}
++
++static int __init setup_schedstats(char *str)
++{
++ int ret = 0;
++ if (!str)
++ goto out;
++
++ if (!strcmp(str, "enable")) {
++ set_schedstats(true);
++ ret = 1;
++ } else if (!strcmp(str, "disable")) {
++ set_schedstats(false);
++ ret = 1;
++ }
++out:
++ if (!ret)
++ pr_warn("Unable to parse schedstats=\n");
++
++ return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
++ size_t *lenp, loff_t *ppos)
++{
++ struct ctl_table t;
++ int err;
++ int state = static_branch_likely(&sched_schedstats);
++
++ if (write && !capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ t = *table;
++ t.data = &state;
++ err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++ if (err < 0)
++ return err;
++ if (write)
++ set_schedstats(state);
++ return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++ {
++ .procname = "sched_schedstats",
++ .data = NULL,
++ .maxlen = sizeof(unsigned int),
++ .mode = 0644,
++ .proc_handler = sysctl_schedstats,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_ONE,
++ },
++};
++static int __init sched_core_sysctl_init(void)
++{
++ register_sysctl_init("kernel", sched_core_sysctls);
++ return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
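++
++/*
++ * Schedstats can therefore be toggled at boot ("schedstats=enable" on
++ * the kernel command line) or at runtime, e.g.:
++ *
++ *	echo 1 > /proc/sys/kernel/sched_schedstats
++ */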
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++ rseq_migrate(p);
++ /*
++ * Fork balancing, do it here and not earlier because:
++ * - cpus_ptr can change in the fork path
++ * - any previously selected CPU might disappear through hotplug
++ *
++ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++ * as we're not fully set-up yet.
++ */
++ __set_task_cpu(p, cpu_of(rq));
++#endif
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ activate_task(p, rq);
++ trace_sched_wakeup_new(p);
++ wakeup_preempt(rq);
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++ static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++ static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++ if (!static_branch_unlikely(&preempt_notifier_key))
++ WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++ /*
++ * Claim the task as running, we do this before switching to it
++ * such that any running task will have this set.
++ *
++ * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++ * its ordering comment.
++ */
++ WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++ /*
++ * This must be the very last reference to @prev from this CPU. After
++ * p->on_cpu is cleared, the task can be moved to a different CPU. We
++ * must ensure this doesn't happen until the switch is completely
++ * finished.
++ *
++ * In particular, the load of prev->state in finish_task_switch() must
++ * happen before this.
++ *
++ * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++ */
++ smp_store_release(&prev->on_cpu, 0);
++#else
++ prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ void (*func)(struct rq *rq);
++ struct balance_callback *next;
++
++ lockdep_assert_held(&rq->lock);
++
++ while (head) {
++ func = (void (*)(struct rq *))head->func;
++ next = head->next;
++ head->next = NULL;
++ head = next;
++
++ func(rq);
++ }
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++ .next = NULL,
++ .func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++ struct balance_callback *head = rq->balance_callback;
++
++ if (likely(!head))
++ return NULL;
++
++ lockdep_assert_rq_held(rq);
++ /*
++ * Must not take balance_push_callback off the list when
++ * splice_balance_callbacks() and balance_callbacks() are not
++ * in the same rq->lock section.
++ *
++ * In that case it would be possible for __schedule() to interleave
++ * and observe the list empty.
++ */
++ if (split && head == &balance_push_callback)
++ head = NULL;
++ else
++ rq->balance_callback = NULL;
++
++ return head;
++}
++
++struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++ do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ unsigned long flags;
++
++ if (unlikely(head)) {
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ do_balance_callbacks(rq, head);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++ }
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++ /*
++	 * The runqueue lock will be released by the next task (which is
++	 * an invalid locking op, but in the case of the scheduler it's an
++	 * obvious special-case), so we do an early lockdep release here:
++ */
++ spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++ /* this is a valid case when another task releases the spinlock */
++ rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++ /*
++ * If we are tracking spinlock dependencies then we have to
++ * fix up the runqueue lock - which gets 'carried over' from
++ * prev into current:
++ */
++ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ kcov_prepare_switch(prev);
++ sched_info_switch(rq, prev, next);
++ perf_event_task_sched_out(prev, next);
++ rseq_preempt(prev);
++ fire_sched_out_preempt_notifiers(prev, next);
++ kmap_local_sched_out();
++ prepare_task(next);
++ prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock. (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. 'prev == current' is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ struct rq *rq = this_rq();
++ struct mm_struct *mm = rq->prev_mm;
++ unsigned int prev_state;
++
++ /*
++ * The previous task will have left us with a preempt_count of 2
++ * because it left us after:
++ *
++ * schedule()
++ * preempt_disable(); // 1
++ * __schedule()
++ * raw_spin_lock_irq(&rq->lock) // 2
++ *
++ * Also, see FORK_PREEMPT_COUNT.
++ */
++ if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++ "corrupted preempt_count: %s/%d/0x%x\n",
++ current->comm, current->pid, preempt_count()))
++ preempt_count_set(FORK_PREEMPT_COUNT);
++
++ rq->prev_mm = NULL;
++
++ /*
++ * A task struct has one reference for the use as "current".
++ * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++ * schedule one last time. The schedule call will never return, and
++ * the scheduled task must drop that reference.
++ *
++ * We must observe prev->state before clearing prev->on_cpu (in
++ * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++ * transition, resulting in a double drop.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ vtime_task_switch(prev);
++ perf_event_task_sched_in(prev, current);
++ finish_task(prev);
++ tick_nohz_task_switch();
++ finish_lock_switch(rq);
++ finish_arch_post_lock_switch();
++ kcov_finish_switch(current);
++ /*
++ * kmap_local_sched_out() is invoked with rq::lock held and
++ * interrupts disabled. There is no requirement for that, but the
++ * sched out code does not have an interrupt enabled section.
++ * Restoring the maps on sched in does not require interrupts being
++ * disabled either.
++ */
++ kmap_local_sched_in();
++
++ fire_sched_in_preempt_notifiers(current);
++ /*
++ * When switching through a kernel thread, the loop in
++ * membarrier_{private,global}_expedited() may have observed that
++ * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++ * switch_mm(). Membarrier requires a barrier after storing to
++ * rq->curr, before returning to userspace, so provide them here:
++ *
++ * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++ * provided by mmdrop(),
++ * - a sync_core for SYNC_CORE.
++ */
++ if (mm) {
++ membarrier_mm_sync_core_before_usermode(mm);
++ mmdrop_sched(mm);
++ }
++ if (unlikely(prev_state == TASK_DEAD)) {
++ /* Task is done with its stack. */
++ put_task_stack(prev);
++
++ put_task_struct_rcu_user(prev);
++ }
++
++ return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ /*
++ * New tasks start with FORK_PREEMPT_COUNT, see there and
++ * finish_task_switch() for details.
++ *
++	 * finish_task_switch() will drop rq->lock and lower preempt_count
++ * and the preempt_enable() will end up enabling preemption (on
++ * PREEMPT_COUNT kernels).
++ */
++
++ finish_task_switch(prev);
++ preempt_enable();
++
++ if (current->set_child_tid)
++ put_user(task_pid_vnr(current), current->set_child_tid);
++
++ calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ prepare_task_switch(rq, prev, next);
++
++ /*
++ * For paravirt, this is coupled with an exit in switch_to to
++ * combine the page table reload and the switch backend into
++ * one hypercall.
++ */
++ arch_start_context_switch(prev);
++
++ /*
++ * kernel -> kernel lazy + transfer active
++ * user -> kernel lazy + mmgrab() active
++ *
++ * kernel -> user switch + mmdrop() active
++ * user -> user switch
++ *
++ * switch_mm_cid() needs to be updated if the barriers provided
++ * by context_switch() are modified.
++ */
++ if (!next->mm) { // to kernel
++ enter_lazy_tlb(prev->active_mm, next);
++
++ next->active_mm = prev->active_mm;
++ if (prev->mm) // from user
++ mmgrab(prev->active_mm);
++ else
++ prev->active_mm = NULL;
++ } else { // to user
++ membarrier_switch_mm(rq, prev->active_mm, next->mm);
++ /*
++ * sys_membarrier() requires an smp_mb() between setting
++ * rq->curr / membarrier_switch_mm() and returning to userspace.
++ *
++ * The below provides this either through switch_mm(), or in
++ * case 'prev->active_mm == next->mm' through
++ * finish_task_switch()'s mmdrop().
++ */
++ switch_mm_irqs_off(prev->active_mm, next->mm, next);
++ lru_gen_use_mm(next->mm);
++
++ if (!prev->mm) { // from kernel
++ /* will mmdrop() in finish_task_switch(). */
++ rq->prev_mm = prev->active_mm;
++ prev->active_mm = NULL;
++ }
++ }
++
++ /* switch_mm_cid() requires the memory barriers above. */
++ switch_mm_cid(rq, prev, next);
++
++ prepare_lock_switch(rq, next);
++
++ /* Here we just switch the register state and the stack. */
++ switch_to(prev, next, prev);
++ barrier();
++
++ return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_online_cpu(i)
++ sum += cpu_rq(i)->nr_running;
++
++ return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race. The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++ return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++ return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++ int i;
++ unsigned long long sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += cpu_rq(i)->nr_switches;
++
++ return sum;
++}
++
++/*
++ * Consumers of these two interfaces, such as the cpuidle menu governor,
++ * are using nonsensical data: they prefer shallow idle state selection
++ * for a CPU with pending IO-wait, even though that CPU might not even
++ * end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++ return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait account is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += nr_iowait_cpu(i);
++
++ return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++ s64 ns = rq->clock_task - p->last_ran;
++
++ p->sched_time += ns;
++ cgroup_account_cputime(p, ns);
++ account_group_exec_runtime(p, ns);
++
++ p->time_slice -= ns;
++ p->last_ran = rq->clock_task;
++}
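++
++/*
++ * Example (illustrative): if rq->clock_task advanced by 1,200,000ns since
++ * p->last_ran, sched_time grows and time_slice shrinks by 1.2ms; once
++ * time_slice drops below RESCHED_NS the task is due for rescheduling,
++ * see scheduler_task_tick() and check_curr().
++ */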
++
++/*
++ * Return accounted runtime for the task.
++ * Separately return the current task's pending runtime that has not
++ * been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++ /*
++ * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++ * Reading ->on_cpu is racy, but this is OK.
++ *
++ * If we race with it leaving CPU, we'll take a lock. So we're correct.
++ * If we race with it entering CPU, unaccounted time is 0. This is
++ * indistinguishable from the read occurring a few cycles earlier.
++ * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++ * been accounted, so we're correct here as well.
++ */
++ if (!p->on_cpu || !task_on_rq_queued(p))
++ return tsk_seruntime(p);
++#endif
++
++ rq = task_access_lock_irqsave(p, &lock, &flags);
++ /*
++ * Must be ->curr _and_ ->on_rq. If dequeued, we would
++ * project cycles that may never be accounted to this
++ * thread, breaking clock_gettime().
++ */
++ if (p == rq->curr && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ update_curr(rq, p);
++ }
++ ns = tsk_seruntime(p);
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++ struct task_struct *p = rq->curr;
++
++ if (is_idle_task(p))
++ return;
++
++ update_curr(rq, p);
++ cpufreq_update_util(rq, 0);
++
++ /*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++ */
++ if (p->time_slice >= RESCHED_NS)
++ return;
++ set_tsk_need_resched(p);
++ set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++ int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++ u64 resched_latency, now = rq_clock(rq);
++ static bool warned_once;
++
++ if (sysctl_resched_latency_warn_once && warned_once)
++ return 0;
++
++ if (!need_resched() || !latency_warn_ms)
++ return 0;
++
++ if (system_state == SYSTEM_BOOTING)
++ return 0;
++
++ if (!rq->last_seen_need_resched_ns) {
++ rq->last_seen_need_resched_ns = now;
++ rq->ticks_without_resched = 0;
++ return 0;
++ }
++
++ rq->ticks_without_resched++;
++ resched_latency = now - rq->last_seen_need_resched_ns;
++ if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++ return 0;
++
++ warned_once = true;
++
++ return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++ long val;
++
++ if ((kstrtol(str, 0, &val))) {
++ pr_warn("Unable to set resched_latency_warn_ms\n");
++ return 1;
++ }
++
++ sysctl_resched_latency_warn_ms = val;
++ return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
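++
++/*
++ * The threshold can thus be set at boot, e.g. "resched_latency_warn_ms=50"
++ * on the kernel command line; a value of 0 disables the warning, and the
++ * LATENCY_WARN scheduler feature must be enabled for it to fire.
++ */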
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++ int cpu __maybe_unused = smp_processor_id();
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *curr = rq->curr;
++ u64 resched_latency;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ arch_scale_freq_tick();
++
++ sched_clock_tick();
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ scheduler_task_tick(rq);
++ if (sched_feat(LATENCY_WARN))
++ resched_latency = cpu_resched_latency(rq);
++ calc_global_load_tick(rq);
++
++ task_tick_mm_cid(rq, rq->curr);
++
++ raw_spin_unlock(&rq->lock);
++
++ if (sched_feat(LATENCY_WARN) && resched_latency)
++ resched_latency_warn(cpu, resched_latency);
++
++ perf_event_task_tick();
++
++ if (curr->flags & PF_WQ_WORKER)
++ wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++ int cpu;
++ atomic_t state;
++ struct delayed_work work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE 0
++#define TICK_SCHED_REMOTE_OFFLINING 1
++#define TICK_SCHED_REMOTE_RUNNING 2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ * TICK_SCHED_REMOTE_OFFLINE
++ * | ^
++ * | |
++ * | | sched_tick_remote()
++ * | |
++ * | |
++ * +--TICK_SCHED_REMOTE_OFFLINING
++ * | ^
++ * | |
++ * sched_tick_start() | | sched_tick_stop()
++ * | |
++ * V |
++ * TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++ struct delayed_work *dwork = to_delayed_work(work);
++ struct tick_work *twork = container_of(dwork, struct tick_work, work);
++ int cpu = twork->cpu;
++ struct rq *rq = cpu_rq(cpu);
++ int os;
++
++ /*
++ * Handle the tick only if it appears the remote CPU is running in full
++ * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++ * statistics and checks timeslices in a time-independent way, regardless
++ * of when exactly it is running.
++ */
++ if (tick_nohz_tick_stopped_cpu(cpu)) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ struct task_struct *curr = rq->curr;
++
++ if (cpu_online(cpu)) {
++ update_rq_clock(rq);
++
++ if (!is_idle_task(curr)) {
++ /*
++ * Make sure the next tick runs within a
++ * reasonable amount of time.
++ */
++ u64 delta = rq_clock_task(rq) - curr->last_ran;
++ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++ }
++ scheduler_task_tick(rq);
++
++ calc_load_nohz_remote(rq);
++ }
++ }
++
++ /*
++ * Run the remote tick once per second (1Hz). This arbitrary
++ * frequency is large enough to avoid overload but short enough
++ * to keep scheduler internal stats reasonably up to date. But
++ * first update state to reflect hotplug activity if required.
++ */
++ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++ if (os == TICK_SCHED_REMOTE_RUNNING)
++ queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++ int os;
++ struct tick_work *twork;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++ if (os == TICK_SCHED_REMOTE_OFFLINE) {
++ twork->cpu = cpu;
++ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ }
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++ struct tick_work *twork;
++ int os;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ /* There cannot be competing actions, but don't rely on stop-machine. */
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++ WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++ /* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++ tick_work_cpu = alloc_percpu(struct tick_work);
++ BUG_ON(!tick_work_cpu);
++ return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++ defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++ if (preempt_count() == val) {
++ unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++ current->preempt_disable_ip = ip;
++#endif
++ trace_preempt_off(CALLER_ADDR0, ip);
++ }
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++ return;
++#endif
++ __preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Spinlock count overflowing soon?
++ */
++ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++ PREEMPT_MASK - 10);
++#endif
++ preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++ if (preempt_count() == val)
++ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++ return;
++ /*
++ * Is the spinlock portion underflowing?
++ */
++ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++ !(preempt_count() & PREEMPT_MASK)))
++ return;
++#endif
++
++ preempt_latency_stop(val);
++ __preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ return p->preempt_disable_ip;
++#else
++ return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++ /* Save this before calling printk(), since that will clobber it */
++ unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++ if (oops_in_progress)
++ return;
++
++ printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++ prev->comm, prev->pid, preempt_count());
++
++ debug_show_held_locks(prev);
++ print_modules();
++ if (irqs_disabled())
++ print_irqtrace_events(prev);
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, preempt_disable_ip);
++ }
++ check_panic_on_warn("scheduling while atomic");
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++ if (task_stack_end_corrupted(prev))
++ panic("corrupted stack end detected inside scheduler\n");
++
++ if (task_scs_end_corrupted(prev))
++ panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++ if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++ printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++ prev->comm, prev->pid, prev->non_block_count);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++ }
++#endif
++
++ if (unlikely(in_atomic_preempt_off())) {
++ __schedule_bug(prev);
++ preempt_count_set(PREEMPT_DISABLED);
++ }
++ rcu_sleep_check();
++ SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++ profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++ schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++ printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++ " ecore_idle: 0x%04lx\n",
++ sched_rq_pending_mask.bits[0],
++ sched_idle_mask->bits[0],
++ sched_pcore_idle_mask->bits[0],
++ sched_ecore_idle_mask->bits[0]);
++}
++#endif
++
++#ifdef CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++ struct task_struct *p, *skip = rq->curr;
++ int nr_migrated = 0;
++ int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check that rq->curr is still on the rq */
++ if (!task_on_rq_queued(skip))
++ return 0;
++
++ while (skip != rq->idle && nr_tries &&
++ (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++ skip = sched_rq_next_task(p, rq);
++ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++ __SCHED_DEQUEUE_TASK(p, rq, 0, );
++ set_task_cpu(p, dest_cpu);
++ sched_task_sanity_check(p, dest_rq);
++ sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
++ __SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++ nr_migrated++;
++ }
++ nr_tries--;
++ }
++
++ return nr_migrated;
++}
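++
++/*
++ * Example (illustrative): a source rq with 10 queued tasks yields
++ * nr_tries = min(10 / 2, sysctl_sched_nr_migrate) = 5, so at most five
++ * candidates after rq->curr are examined and only those whose cpus_ptr
++ * allows @dest_cpu are moved.
++ */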
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++ cpumask_t *topo_mask, *end_mask, chk;
++
++ if (unlikely(!rq->online))
++ return 0;
++
++ if (cpumask_empty(&sched_rq_pending_mask))
++ return 0;
++
++ topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++ end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++ do {
++ int i;
++
++ if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++ continue;
++
++ for_each_cpu_wrap(i, &chk, cpu) {
++ int nr_migrated;
++ struct rq *src_rq;
++
++ src_rq = cpu_rq(i);
++ if (!do_raw_spin_trylock(&src_rq->lock))
++ continue;
++ spin_acquire(&src_rq->lock.dep_map,
++ SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++ if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++ src_rq->nr_running -= nr_migrated;
++ if (src_rq->nr_running < 2)
++ cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++
++ rq->nr_running += nr_migrated;
++ if (rq->nr_running > 1)
++ cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++ update_sched_preempt_mask(rq);
++ cpufreq_update_util(rq, 0);
++
++ return 1;
++ }
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++ }
++ } while (++topo_mask < end_mask);
++
++ return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++
++ sched_task_renew(p, rq);
++
++ if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++ requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired, as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++ if (unlikely(rq->idle == p))
++ return;
++
++ update_curr(rq, p);
++
++ if (p->time_slice < RESCHED_NS)
++ time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++ struct task_struct *next = sched_rq_first_task(rq);
++
++ if (next == rq->idle) {
++#ifdef CONFIG_SMP
++ if (!take_other_rq_tasks(rq, cpu)) {
++ if (likely(rq->balance_func && rq->online))
++ rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++ schedstat_inc(rq->sched_goidle);
++ /*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++ return next;
++#ifdef CONFIG_SMP
++ }
++ next = sched_rq_first_task(rq);
++#endif
++ }
++#ifdef CONFIG_HIGH_RES_TIMERS
++ hrtick_start(rq, next->time_slice);
++#endif
++ /*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++ return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE 0x0
++#define SM_PREEMPT 0x1
++#define SM_RTLOCK_WAIT 0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT (~0U)
++#else
++# define SM_MASK_PREEMPT SM_PREEMPT
++#endif
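++
++/*
++ * For example, on !PREEMPT_RT the test in __schedule() below,
++ *
++ *	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state)
++ *
++ * reduces to a plain "if (!sched_mode && prev_state)" zero check, since
++ * SM_MASK_PREEMPT there has all bits set.
++ */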
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ * 1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ * 2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ * paths. For example, see arch/x86/entry_64.S.
++ *
++ * To drive preemption between tasks, the scheduler sets the flag in timer
++ * interrupt handler sched_tick().
++ *
++ * 3. Wakeups don't really cause entry into schedule(). They add a
++ * task to the run-queue and that's it.
++ *
++ * Now, if the new task added to the run-queue preempts the current
++ * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ * called on the nearest possible occasion:
++ *
++ * - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *       - in syscall or exception context, at the next outermost
++ * preempt_enable(). (this might be as soon as the wake_up()'s
++ * spin_unlock()!)
++ *
++ * - in IRQ context, return from interrupt-handler to
++ * preemptible context
++ *
++ * - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ * then at the next:
++ *
++ * - cond_resched() call
++ * - explicit schedule() call
++ * - return from syscall or exception to user-space
++ * - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++ struct task_struct *prev, *next;
++ unsigned long *switch_count;
++ unsigned long prev_state;
++ struct rq *rq;
++ int cpu;
++
++ cpu = smp_processor_id();
++ rq = cpu_rq(cpu);
++ prev = rq->curr;
++
++ schedule_debug(prev, !!sched_mode);
++
++	/* bypass the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
++ hrtick_clear(rq);
++
++ local_irq_disable();
++ rcu_note_context_switch(!!sched_mode);
++
++ /*
++ * Make sure that signal_pending_state()->signal_pending() below
++ * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++ * done by the caller to avoid the race with signal_wake_up():
++ *
++ * __set_current_state(@state) signal_wake_up()
++ * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
++ * wake_up_state(p, state)
++ * LOCK rq->lock LOCK p->pi_state
++ * smp_mb__after_spinlock() smp_mb__after_spinlock()
++ * if (signal_pending_state()) if (p->state & @state)
++ *
++ * Also, the membarrier system call requires a full memory barrier
++ * after coming from user-space, before storing to rq->curr; this
++ * barrier matches a full barrier in the proximity of the membarrier
++ * system call exit.
++ */
++ raw_spin_lock(&rq->lock);
++ smp_mb__after_spinlock();
++
++ update_rq_clock(rq);
++
++ switch_count = &prev->nivcsw;
++ /*
++ * We must load prev->state once (task_struct::state is volatile), such
++ * that we form a control dependency vs deactivate_task() below.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++ if (signal_pending_state(prev_state, prev)) {
++ WRITE_ONCE(prev->__state, TASK_RUNNING);
++ } else {
++ prev->sched_contributes_to_load =
++ (prev_state & TASK_UNINTERRUPTIBLE) &&
++ !(prev_state & TASK_NOLOAD) &&
++ !(prev_state & TASK_FROZEN);
++
++ if (prev->sched_contributes_to_load)
++ rq->nr_uninterruptible++;
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ sched_task_deactivate(prev, rq);
++ deactivate_task(prev, rq);
++
++ if (prev->in_iowait) {
++ atomic_inc(&rq->nr_iowait);
++ delayacct_blkio_start();
++ }
++ }
++ switch_count = &prev->nvcsw;
++ }
++
++ check_curr(prev, rq);
++
++ next = choose_next_task(rq, cpu);
++ clear_tsk_need_resched(prev);
++ clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++ rq->last_seen_need_resched_ns = 0;
++#endif
++
++ if (likely(prev != next)) {
++ next->last_ran = rq->clock_task;
++
++ /*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++ rq->nr_switches++;
++ /*
++ * RCU users of rcu_dereference(rq->curr) may not see
++ * changes to task_struct made by pick_next_task().
++ */
++ RCU_INIT_POINTER(rq->curr, next);
++ /*
++ * The membarrier system call requires each architecture
++ * to have a full memory barrier after updating
++ * rq->curr, before returning to user-space.
++ *
++ * Here are the schemes providing that barrier on the
++ * various architectures:
++ * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++ * RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
++ * on PowerPC and on RISC-V.
++ * - finish_lock_switch() for weakly-ordered
++ * architectures where spin_unlock is a full barrier,
++ * - switch_to() for arm64 (weakly-ordered, spin_unlock
++ * is a RELEASE barrier),
++ *
++ * The barrier matches a full barrier in the proximity of
++ * the membarrier system call entry.
++ *
++ * On RISC-V, this barrier pairing is also needed for the
++ * SYNC_CORE command when switching between processes, cf.
++ * the inline comments in membarrier_arch_switch_mm().
++ */
++ ++*switch_count;
++
++ trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++ /* Also unlocks the rq: */
++ rq = context_switch(rq, prev, next);
++
++ cpu = cpu_of(rq);
++ } else {
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++ }
++}
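++
++/*
++ * For illustration, a canonical (hypothetical) wait loop on the other side
++ * of the race diagrammed in __schedule() above:
++ *
++ *	for (;;) {
++ *		set_current_state(TASK_INTERRUPTIBLE);
++ *		if (condition)
++ *			break;
++ *		schedule();
++ *	}
++ *	__set_current_state(TASK_RUNNING);
++ *
++ * The store to ->__state and the test of 'condition' must not be reordered
++ * against the waker; on the schedule() side that ordering is supplied by
++ * smp_mb__after_spinlock() after taking rq->lock.
++ */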
++
++void __noreturn do_task_dead(void)
++{
++ /* Causes final put_task_struct in finish_task_switch(): */
++ set_special_state(TASK_DEAD);
++
++ /* Tell freezer to ignore us: */
++ current->flags |= PF_NOFREEZE;
++
++ __schedule(SM_NONE);
++ BUG();
++
++ /* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++ for (;;)
++ cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++ static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++ unsigned int task_flags;
++
++ /*
++ * Establish LD_WAIT_CONFIG context to ensure none of the code called
++ * will use a blocking primitive -- which would lead to recursion.
++ */
++ lock_map_acquire_try(&sched_map);
++
++ task_flags = tsk->flags;
++ /*
++ * If a worker goes to sleep, notify and ask workqueue whether it
++ * wants to wake up a task to maintain concurrency.
++ */
++ if (task_flags & PF_WQ_WORKER)
++ wq_worker_sleeping(tsk);
++ else if (task_flags & PF_IO_WORKER)
++ io_wq_worker_sleeping(tsk);
++
++ /*
++ * spinlock and rwlock must not flush block requests. This will
++ * deadlock if the callback attempts to acquire a lock which is
++ * already acquired.
++ */
++ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++ /*
++ * If we are going to sleep and we have plugged IO queued,
++ * make sure to submit it to avoid deadlocks.
++ */
++ blk_flush_plug(tsk->plug, true);
++
++ lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++ if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++ if (tsk->flags & PF_BLOCK_TS)
++ blk_plug_invalidate_ts(tsk);
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_running(tsk);
++ else if (tsk->flags & PF_IO_WORKER)
++ io_wq_worker_running(tsk);
++ }
++}
++
++static __always_inline void __schedule_loop(unsigned int sched_mode)
++{
++ do {
++ preempt_disable();
++ __schedule(sched_mode);
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++ struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++ lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++ if (!task_is_running(tsk))
++ sched_submit_work(tsk);
++ __schedule_loop(SM_NONE);
++ sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (has scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++ /*
++ * As this skips calling sched_submit_work(), which the idle task does
++ * regardless because that function is a NOP when the task is in a
++ * TASK_RUNNING state, make sure this isn't used someplace that the
++ * current task can be in any other state. Note, idle is always in the
++ * TASK_RUNNING state.
++ */
++ WARN_ON_ONCE(current->__state);
++ do {
++ __schedule(SM_NONE);
++ } while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++ /*
++ * If we come here after a random call to set_need_resched(),
++ * or we have been woken up remotely but the IPI has not yet arrived,
++ * we haven't yet exited the RCU idle mode. Do it here manually until
++ * we find a better solution.
++ *
++ * NB: There are buggy callers of this function. Ideally we
++ * should warn if prev_state != CONTEXT_USER, but that will trigger
++ * too frequently to make sense yet.
++ */
++ enum ctx_state prev_state = exception_enter();
++ schedule();
++ exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++ sched_preempt_enable_no_resched();
++ schedule();
++ preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++ __schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ __schedule(SM_PREEMPT);
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++
++ /*
++ * Check again in case we missed a preemption opportunity
++ * between schedule and now.
++ */
++ } while (need_resched());
++}
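++
++/*
++ * In other words (simplified sketch): the pair
++ *
++ *	preempt_disable_notrace();
++ *	preempt_latency_start(1);
++ *
++ * decomposes a plain preempt_disable() into a raw, never-traced counter
++ * increment followed by the traceable latency bookkeeping, so the
++ * function tracer cannot recurse into this path via the increment.
++ */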
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++ /*
++ * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return.
++ */
++ if (likely(!preemptible()))
++ return;
++
++ preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled preempt_schedule
++#define preempt_schedule_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++ return;
++ preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++ enum ctx_state prev_ctx;
++
++ if (likely(!preemptible()))
++ return;
++
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ /*
++ * Needs preempt disabled in case user_exit() is traced
++ * and the tracer calls preempt_enable_notrace() causing
++ * an infinite recursion.
++ */
++ prev_ctx = exception_enter();
++ __schedule(SM_PREEMPT);
++ exception_exit(prev_ctx);
++
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++ } while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++ return;
++ preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of IRQ context.
++ * Note that this is called and returns with IRQs disabled. This
++ * protects us against recursive calls from IRQ contexts.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++ enum ctx_state prev_state;
++
++ /* Catch callers which need to be fixed */
++ BUG_ON(preempt_count() || !irqs_disabled());
++
++ prev_state = exception_enter();
++
++ do {
++ preempt_disable();
++ local_irq_enable();
++ __schedule(SM_PREEMPT);
++ local_irq_disable();
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++
++ exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++ void *key)
++{
++ WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++ return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++ /* Trigger resched if task sched_prio has been modified. */
++ if (task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ requeue_task(p, rq);
++ wakeup_preempt(rq);
++ }
++}
++
++void __setscheduler_prio(struct task_struct *p, int prio)
++{
++ p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
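++
++/*
++ * e.g. fetch_and_set(current->sched_rt_mutex, 1) evaluates to
++ *
++ *	({ int _x = current->sched_rt_mutex;
++ *	   current->sched_rt_mutex = 1;
++ *	   _x; })
++ *
++ * i.e. it returns the old value, so the lockdep_assert()s below verify
++ * that the flag transitions 0 -> 1 -> 0 across pre/post schedule.
++ */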
++
++void rt_mutex_pre_schedule(void)
++{
++ lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++ sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++ lockdep_assert(current->sched_rt_mutex);
++ __schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++ sched_update_worker(current);
++ lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++ int prio;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ /* XXX used to be waiter->prio, not waiter->task->prio */
++ prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++ /*
++ * If nothing changed; bail early.
++ */
++ if (p->pi_top_task == pi_task && prio == p->prio)
++ return;
++
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Set under pi_lock && rq->lock, such that the value can be used under
++ * either lock.
++ *
++	 * Note that there is plenty of trickiness involved in making this
++	 * pointer cache work right. rt_mutex_slowunlock()+rt_mutex_postunlock()
++	 * work together to
++ * ensure a task is de-boosted (pi_task is set to NULL) before the
++ * task is allowed to run again (and can exit). This ensures the pointer
++ * points to a blocked task -- which guarantees the task is present.
++ */
++ p->pi_top_task = pi_task;
++
++ /*
++ * For FIFO/RR we only need to set prio, if that matches we're done.
++ */
++ if (prio == p->prio)
++ goto out_unlock;
++
++ /*
++ * Idle task boosting is a no-no in general. There is one
++ * exception, when PREEMPT_RT and NOHZ is active:
++ *
++ * The idle task calls get_next_timer_interrupt() and holds
++ * the timer wheel base->lock on the CPU and another CPU wants
++ * to access the timer (probably to cancel it). We can safely
++ * ignore the boosting request, as the idle CPU runs this code
++ * with interrupts disabled and will complete the lock
++ * protected section without being interrupted. So there is no
++ * real need to boost.
++ */
++ if (unlikely(p == rq->idle)) {
++ WARN_ON(p != rq->curr);
++ WARN_ON(p->pi_blocked_on);
++ goto out_unlock;
++ }
++
++ trace_sched_pi_setprio(p, pi_task);
++
++ __setscheduler_prio(p, prio);
++
++ check_task_changed(p, rq);
++out_unlock:
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++
++ if (task_on_rq_queued(p))
++ __balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++
++ preempt_enable();
++}
++#endif
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++ if (should_resched(0)) {
++ preempt_schedule_common();
++ return 1;
++ }
++ /*
++ * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++ * whether the current CPU is in an RCU read-side critical section,
++ * so the tick can report quiescent states even for CPUs looping
++ * in kernel context. In contrast, in non-preemptible kernels,
++ * RCU readers leave no in-memory hints, which means that CPU-bound
++ * processes executing in kernel context might never report an
++ * RCU quiescent state. Therefore, the following code causes
++ * cond_resched() to report a quiescent state, but only when RCU
++ * is in urgent need of one.
++ */
++#ifndef CONFIG_PREEMPT_RCU
++ rcu_all_qs();
++#endif
++ return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
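++
++/*
++ * Typical (hypothetical) use: long-running kernel loops sprinkle
++ * cond_resched() so that, on non-preemptible kernels, other tasks and
++ * RCU still make progress:
++ *
++ *	list_for_each_entry(obj, &head, node) {
++ *		process(obj);
++ *		cond_resched();
++ *	}
++ */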
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled __cond_resched
++#define cond_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled __cond_resched
++#define might_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++ klp_sched_try_switch();
++ if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_might_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION. We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held(lock);
++
++ if (spin_needbreak(lock) || resched) {
++ spin_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ spin_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
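++
++/*
++ * Illustrative (hypothetical) caller: scanning a long list under a
++ * spinlock while letting lock waiters and pending reschedules through.
++ * Whenever cond_resched_lock() returns 1 the lock was dropped and
++ * re-taken, so the caller must revalidate anything the lock protected:
++ *
++ *	spin_lock(&lock);
++ *	list_for_each_entry(obj, &head, node) {
++ *		process(obj);
++ *		cond_resched_lock(&lock);
++ *	}
++ *	spin_unlock(&lock);
++ */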
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_read(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ read_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ read_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_write(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ write_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ write_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ * cond_resched <- __cond_resched
++ * might_resched <- RET0
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ * cond_resched <- __cond_resched
++ * might_resched <- __cond_resched
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ * cond_resched <- RET0
++ * might_resched <- RET0
++ * preempt_schedule <- preempt_schedule
++ * preempt_schedule_notrace <- preempt_schedule_notrace
++ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++ preempt_dynamic_undefined = -1,
++ preempt_dynamic_none,
++ preempt_dynamic_voluntary,
++ preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++ if (!strcmp(str, "none"))
++ return preempt_dynamic_none;
++
++ if (!strcmp(str, "voluntary"))
++ return preempt_dynamic_voluntary;
++
++ if (!strcmp(str, "full"))
++ return preempt_dynamic_full;
++
++ return -EINVAL;
++}
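++
++/*
++ * e.g. booting with "preempt=voluntary" reaches setup_preempt_mode()
++ * below, which feeds the string through sched_dynamic_mode() and then
++ * installs preempt_dynamic_voluntary via sched_dynamic_update(), flipping
++ * the static calls/keys exactly as in the VOLUNTARY row of the table above.
++ */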
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f) static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f) static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++ /*
++ * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++ * the ZERO state, which is invalid.
++ */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++ switch (mode) {
++ case preempt_dynamic_none:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: none\n");
++ break;
++
++ case preempt_dynamic_voluntary:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: voluntary\n");
++ break;
++
++ case preempt_dynamic_full:
++ if (!klp_override)
++			preempt_dynamic_disable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: full\n");
++ break;
++ }
++
++ preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++ mutex_lock(&sched_dynamic_mutex);
++ __sched_dynamic_update(mode);
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++ __klp_sched_try_switch();
++ return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = true;
++ static_call_update(cond_resched, klp_cond_resched);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = false;
++ __sched_dynamic_update(preempt_dynamic_mode);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++static int __init setup_preempt_mode(char *str)
++{
++ int mode = sched_dynamic_mode(str);
++ if (mode < 0) {
++ pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++ return 0;
++ }
++
++ sched_dynamic_update(mode);
++ return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++ if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++ if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++ sched_dynamic_update(preempt_dynamic_none);
++ } else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++ sched_dynamic_update(preempt_dynamic_voluntary);
++ } else {
++ /* Default static call setting, nothing to do */
++ WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++ preempt_dynamic_mode = preempt_dynamic_full;
++ pr_info("Dynamic Preempt: full\n");
++ }
++ }
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++ bool preempt_model_##mode(void) \
++ { \
++ WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++ return preempt_dynamic_mode == preempt_dynamic_##mode; \
++ } \
++ EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
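++
++/*
++ * For reference, PREEMPT_MODEL_ACCESSOR(none) expands to (roughly):
++ *
++ *	bool preempt_model_none(void)
++ *	{
++ *		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined);
++ *		return preempt_dynamic_mode == preempt_dynamic_none;
++ *	}
++ *	EXPORT_SYMBOL_GPL(preempt_model_none);
++ */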
++
++#else /* !CONFIG_PREEMPT_DYNAMIC: */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++int io_schedule_prepare(void)
++{
++ int old_iowait = current->in_iowait;
++
++ current->in_iowait = 1;
++ blk_flush_plug(current->plug, true);
++ return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++ current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO. Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++ int token;
++ long ret;
++
++ token = io_schedule_prepare();
++ ret = schedule_timeout(timeout);
++ io_schedule_finish(token);
++
++ return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++ int token;
++
++ token = io_schedule_prepare();
++ schedule();
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
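++
++/*
++ * The prepare/finish pair nests safely because the old in_iowait value is
++ * restored rather than cleared; any sleep in between is accounted as
++ * iowait, e.g. (hypothetical):
++ *
++ *	int tok = io_schedule_prepare();
++ *	wait_for_completion(&done);
++ *	io_schedule_finish(tok);
++ */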
++
++void sched_show_task(struct task_struct *p)
++{
++ unsigned long free = 0;
++ int ppid;
++
++ if (!try_get_task_stack(p))
++ return;
++
++ pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++ if (task_is_running(p))
++ pr_cont(" running task ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++ free = stack_not_used(p);
++#endif
++ ppid = 0;
++ rcu_read_lock();
++ if (pid_alive(p))
++ ppid = task_pid_nr(rcu_dereference(p->real_parent));
++ rcu_read_unlock();
++ pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++ free, task_pid_nr(p), task_tgid_nr(p),
++ ppid, read_task_thread_flags(p));
++
++ print_worker_info(KERN_INFO, p);
++ print_stop_info(KERN_INFO, p);
++ show_stack(p, NULL, KERN_INFO);
++ put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /* no filter, everything matches */
++ if (!state_filter)
++ return true;
++
++ /* filter, but doesn't match */
++ if (!(state & state_filter))
++ return false;
++
++ /*
++ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++ * TASK_KILLABLE).
++ */
++ if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++ return false;
++
++ return true;
++}
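++
++/*
++ * e.g. sysrq-w reaches show_state_filter() below with
++ * state_filter == TASK_UNINTERRUPTIBLE: D-state and TASK_KILLABLE
++ * sleepers are reported, while kthreads idling in TASK_IDLE are skipped
++ * by the TASK_NOLOAD check above.
++ */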
++
++void show_state_filter(unsigned int state_filter)
++{
++ struct task_struct *g, *p;
++
++ rcu_read_lock();
++ for_each_process_thread(g, p) {
++ /*
++		 * Reset the NMI-timeout; listing all tasks on a slow
++		 * console might take a lot of time.
++ * Also, reset softlockup watchdogs on all CPUs, because
++ * another CPU might be blocked waiting for us to process
++ * an IPI.
++ */
++ touch_nmi_watchdog();
++ touch_all_softlockup_watchdogs();
++ if (state_filter_match(state_filter, p))
++ sched_show_task(p);
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ /* TODO: Alt schedule FW should support this
++ if (!state_filter)
++ sysrq_sched_debug_show();
++ */
++#endif
++ rcu_read_unlock();
++ /*
++ * Only show locks if all tasks are dumped:
++ */
++ if (!state_filter)
++ debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++ if (cpu == smp_processor_id() && in_hardirq()) {
++ struct pt_regs *regs;
++
++ regs = get_irq_regs();
++ if (regs) {
++ show_regs(regs);
++ return;
++ }
++ }
++
++ if (trigger_single_cpu_backtrace(cpu))
++ return;
++
++ pr_info("Task dump for CPU %d:\n", cpu);
++ sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++ struct affinity_context ac = (struct affinity_context) {
++ .new_mask = cpumask_of(cpu),
++ .flags = 0,
++ };
++#endif
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ __sched_fork(0, idle);
++
++ raw_spin_lock_irqsave(&idle->pi_lock, flags);
++ raw_spin_lock(&rq->lock);
++
++ idle->last_ran = rq->clock_task;
++ idle->__state = TASK_RUNNING;
++ /*
++ * PF_KTHREAD should already be set at this point; regardless, make it
++ * look like a proper per-CPU kthread.
++ */
++ idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++ kthread_set_per_cpu(idle, cpu);
++
++ sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++ /*
++ * It's possible that init_idle() gets called multiple times on a task,
++ * in that case do_set_cpus_allowed() will not do the right thing.
++ *
++ * And since this is boot we can forgo the serialisation.
++ */
++ set_cpus_allowed_common(idle, &ac);
++#endif
++
++ /* Silence PROVE_RCU */
++ rcu_read_lock();
++ __set_task_cpu(idle, cpu);
++ rcu_read_unlock();
++
++ rq->idle = idle;
++ rcu_assign_pointer(rq->curr, idle);
++ idle->on_cpu = 1;
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++ /* Set the preempt count _outside_ the spinlocks! */
++ init_idle_preempt_count(idle, cpu);
++
++ ftrace_graph_init_idle_task(idle, cpu);
++ vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++ sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++ const struct cpumask __maybe_unused *trial)
++{
++ return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++ int ret = 0;
++
++ /*
++ * Kthreads which disallow setaffinity shouldn't be moved
++ * to a new cpuset; we don't want to change their CPU
++ * affinity and isolating such threads by their set of
++ * allowed nodes is unnecessary. Thus, cpusets are not
++ * applicable for such threads. This prevents checking for
++ * success of set_cpus_allowed_ptr() on all attached tasks
++ * before cpus_mask may be changed.
++ */
++ if (p->flags & PF_NO_SETAFFINITY)
++ ret = -EINVAL;
++
++ return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++ struct mm_struct *mm = current->active_mm;
++
++ BUG_ON(current != this_rq()->idle);
++
++ if (mm != &init_mm) {
++ switch_mm(mm, &init_mm, current);
++ finish_arch_post_lock_switch();
++ }
++
++ /* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++ struct task_struct *p = arg;
++ struct rq *rq = this_rq();
++ struct rq_flags rf;
++ int cpu;
++
++ raw_spin_lock_irq(&p->pi_lock);
++ rq_lock(rq, &rf);
++
++ update_rq_clock(rq);
++
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ cpu = select_fallback_rq(rq->cpu, p);
++ rq = __migrate_task(rq, p, cpu);
++ }
++
++ rq_unlock(rq, &rf);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ put_task_struct(p);
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
++ * only takes effect while the CPU is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++ struct task_struct *push_task = rq->curr;
++
++ lockdep_assert_held(&rq->lock);
++
++ /*
++ * Ensure the thing is persistent until balance_push_set(.on = false);
++ */
++ rq->balance_callback = &balance_push_callback;
++
++ /*
++ * Only active while going offline and when invoked on the outgoing
++ * CPU.
++ */
++ if (!cpu_dying(rq->cpu) || rq != this_rq())
++ return;
++
++ /*
++ * Both the cpu-hotplug and stop task are in this case and are
++ * required to complete the hotplug process.
++ */
++ if (kthread_is_per_cpu(push_task) ||
++ is_migration_disabled(push_task)) {
++
++ /*
++ * If this is the idle task on the outgoing CPU try to wake
++ * up the hotplug control thread which might wait for the
++ * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and obviously can't be running in parallel.
++ *
++ * On RT kernels this also has to check whether there are
++ * pinned and scheduled out tasks on the runqueue. They
++ * need to leave the migrate disabled section first.
++ */
++ if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++ rcuwait_active(&rq->hotplug_wait)) {
++ raw_spin_unlock(&rq->lock);
++ rcuwait_wake_up(&rq->hotplug_wait);
++ raw_spin_lock(&rq->lock);
++ }
++ return;
++ }
++
++ get_task_struct(push_task);
++ /*
++ * Temporarily drop rq->lock such that we can wake-up the stop task.
++ * Both preemption and IRQs are still disabled.
++ */
++ preempt_disable();
++ raw_spin_unlock(&rq->lock);
++ stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++ this_cpu_ptr(&push_work));
++ preempt_enable();
++ /*
++ * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task,
++	 * which is a per-CPU kthread and will push this task away.
++ */
++ raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct rq_flags rf;
++
++ rq_lock_irqsave(rq, &rf);
++ if (on) {
++ WARN_ON_ONCE(rq->balance_callback);
++ rq->balance_callback = &balance_push_callback;
++ } else if (rq->balance_callback == &balance_push_callback) {
++ rq->balance_callback = NULL;
++ }
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++ struct rq *rq = this_rq();
++
++ rcuwait_wait_event(&rq->hotplug_wait,
++ rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++ TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++ if (rq->online) {
++ update_rq_clock(rq);
++ rq->online = false;
++ }
++}
++
++static void set_rq_online(struct rq *rq)
++{
++ if (!rq->online)
++ rq->online = true;
++}
++
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_online(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_offline(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask. If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++ if (cpuhp_tasks_frozen) {
++ /*
++ * num_cpus_frozen tracks how many CPUs are involved in suspend
++ * resume sequence. As long as this is not the last online
++ * operation in the resume sequence, just build a single sched
++ * domain, ignoring cpusets.
++ */
++ partition_sched_domains(1, NULL, NULL);
++ if (--num_cpus_frozen)
++ return;
++ /*
++ * This is the last CPU online operation. So fall through and
++ * restore the original sched domains by considering the
++ * cpuset configurations.
++ */
++ cpuset_force_rebuild();
++ }
++
++ cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++ if (!cpuhp_tasks_frozen) {
++ cpuset_update_active_cpus();
++ } else {
++ num_cpus_frozen++;
++ partition_sched_domains(1, NULL, NULL);
++ }
++ return 0;
++}
++
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_inc_cpuslocked(&sched_smt_present);
++ cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_dec_cpuslocked(&sched_smt_present);
++ if (!static_branch_likely(&sched_smt_present))
++ cpumask_clear(sched_pcore_idle_mask);
++ cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ /*
++ * Clear the balance_push callback and prepare to schedule
++ * regular tasks.
++ */
++ balance_push_set(cpu, false);
++
++ set_cpu_active(cpu, true);
++
++ if (sched_smp_initialized)
++ cpuset_cpu_active();
++
++ /*
++ * Put the rq online, if not already. This happens:
++ *
++ * 1) In the early boot process, because we build the real domains
++ * after all cpus have been brought up.
++ *
++ * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++ * domains.
++ */
++ sched_set_rq_online(rq, cpu);
++
++ /*
++ * When going up, increment the number of cores with SMT present.
++ */
++ sched_smt_present_inc(cpu);
++
++ return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ int ret;
++
++ set_cpu_active(cpu, false);
++
++ /*
++ * From this point forward, this CPU will refuse to run any task that
++ * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++ * push those tasks away until this gets cleared, see
++ * sched_cpu_dying().
++ */
++ balance_push_set(cpu, true);
++
++ /*
++ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++ * users of this state to go away such that all new such users will
++ * observe it.
++ *
++ * Specifically, we rely on ttwu to no longer target this CPU, see
++ * ttwu_queue_cond() and is_cpu_allowed().
++ *
++ * Do sync before park smpboot threads to take care the RCU boost case.
++ */
++ synchronize_rcu();
++
++ sched_set_rq_offline(rq, cpu);
++
++ /*
++ * When going down, decrement the number of cores with SMT present.
++ */
++ sched_smt_present_dec(cpu);
++
++ if (!sched_smp_initialized)
++ return 0;
++
++ ret = cpuset_cpu_inactive(cpu);
++ if (ret) {
++ sched_smt_present_inc(cpu);
++ sched_set_rq_online(rq, cpu);
++ balance_push_set(cpu, false);
++ set_cpu_active(cpu, true);
++ return ret;
++ }
++
++ return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++ sched_rq_cpu_starting(cpu);
++ sched_tick_start(cpu);
++ return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++ balance_hotplug_wait();
++ return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the tear-down thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++ long delta = calc_load_fold_active(rq, 1);
++
++ if (delta)
++ atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++ struct task_struct *g, *p;
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_held(&rq->lock);
++
++ printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++ for_each_process_thread(g, p) {
++ if (task_cpu(p) != cpu)
++ continue;
++
++ if (!task_on_rq_queued(p))
++ continue;
++
++ printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++ }
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ /* Handle pending wakeups and then migrate everything off */
++ sched_tick_stop(cpu);
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++ WARN(true, "Dying CPU not properly vacated!");
++ dump_rq_tasks(rq, KERN_WARNING);
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ calc_load_migrate(rq);
++ hrtick_clear(rq);
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++ int cpu;
++ cpumask_t *tmp;
++
++ for_each_possible_cpu(cpu) {
++ /* init topo masks */
++ tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++ cpumask_copy(tmp, cpu_possible_mask);
++ per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++ per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++ }
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++ if (cpumask_and(topo, topo, mask)) { \
++ cpumask_copy(topo, mask); \
++ printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name, \
++ cpu, (topo++)->bits[0]); \
++ } \
++ if (!last) \
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(mask), \
++ nr_cpumask_bits);
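++
++/*
++ * Illustrative result (hypothetical 8-CPU package, 2 SMT siblings per
++ * core, two 4-CPU LLC groups, seen from cpu#0): each successful step
++ * stores one cumulative level mask and advances topo, e.g.
++ *
++ *	topo[0] = 0x03	(cpu0 + its SMT sibling)
++ *	topo[1] = 0x0f	(the LLC / core group)
++ *	topo[2] = 0xff	(the whole package)
++ *
++ * take_other_rq_tasks() then walks these levels from nearest to farthest.
++ */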
++
++static void sched_init_topology_cpumask(void)
++{
++ int cpu;
++ cpumask_t *topo;
++
++ for_each_online_cpu(cpu) {
++ topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++ nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++ TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++ TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++ per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++ per_cpu(sched_cpu_llc_mask, cpu) = topo;
++ TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++ TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++ TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++ per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++ printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++ cpu, per_cpu(sd_llc_id, cpu),
++ (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++ per_cpu(sched_cpu_topo_masks, cpu)));
++ }
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++ /* Move init over to a non-isolated CPU */
++ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++ BUG();
++ current->flags &= ~PF_NO_SETAFFINITY;
++
++ sched_init_topology();
++ sched_init_topology_cpumask();
++
++ sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++ sched_cpu_starting(smp_processor_id());
++ return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++ cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++ return in_lock_functions(addr) ||
++ (addr >= (unsigned long)__sched_text_start
++ && addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++ int i;
++ struct rq *rq;
++
++ printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++ " by Alfred Chen.\n");
++
++ wait_bit_init();
++
++#ifdef CONFIG_SMP
++ for (i = 0; i < SCHED_QUEUE_BITS; i++)
++ cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++ task_group_cache = KMEM_CACHE(task_group, 0);
++
++ list_add(&root_task_group.list, &task_groups);
++ INIT_LIST_HEAD(&root_task_group.children);
++ INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++ for_each_possible_cpu(i) {
++ rq = cpu_rq(i);
++
++ sched_queue_init(&rq->queue);
++ rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = rq->prio;
++#endif
++
++ raw_spin_lock_init(&rq->lock);
++ rq->nr_running = rq->nr_uninterruptible = 0;
++ rq->calc_load_active = 0;
++ rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++ rq->online = false;
++ rq->cpu = i;
++
++ rq->clear_idle_mask_func = cpumask_clear_cpu;
++ rq->set_idle_mask_func = cpumask_set_cpu;
++ rq->balance_func = NULL;
++ rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++ INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++ rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++ rq->nr_switches = 0;
++
++ hrtick_rq_init(rq);
++ atomic_set(&rq->nr_iowait, 0);
++
++ zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++ }
++#ifdef CONFIG_SMP
++ /* Set rq->online for cpu 0 */
++ cpu_rq(0)->online = true;
++#endif
++ /*
++ * The boot idle thread does lazy MMU switching as well:
++ */
++ mmgrab(&init_mm);
++ enter_lazy_tlb(&init_mm, current);
++
++ /*
++ * The idle task doesn't need the kthread struct to function, but it
++ * is dressed up as a per-CPU kthread and thus needs to play the part
++ * if we want to avoid special-casing it in code that deals with per-CPU
++ * kthreads.
++ */
++ WARN_ON(!set_kthread_struct(current));
++
++ /*
++ * Make us the idle thread. Technically, schedule() should not be
++ * called from this thread, however somewhere below it might be,
++ * but because we are the idle thread, we just pick up running again
++ * when this runqueue becomes "idle".
++ */
++ init_idle(current, smp_processor_id());
++
++ calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++ idle_thread_set_boot_cpu();
++ balance_push_set(smp_processor_id(), false);
++
++ sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++ preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++ unsigned int state = get_current_state();
++ /*
++	 * Blocking primitives will set (and therefore destroy) current->state;
++	 * since we will exit with TASK_RUNNING, make sure we enter with it,
++	 * otherwise we will destroy that state.
++ */
++ WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++ "do not call blocking ops when !TASK_RUNNING; "
++ "state=%x set at [<%p>] %pS\n", state,
++ (void *)current->task_state_change,
++ (void *)current->task_state_change);
++
++ __might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++ if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++ return;
++
++ if (preempt_count() == preempt_offset)
++ return;
++
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++ unsigned int nested = preempt_count();
++
++ nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++ return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++ /* Ratelimiting timestamp: */
++ static unsigned long prev_jiffy;
++
++ unsigned long preempt_disable_ip;
++
++ /* WARN_ON_ONCE() by default, no rate limit required: */
++ rcu_sleep_check();
++
++ if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++ !is_idle_task(current) && !current->non_block_count) ||
++ system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++ oops_in_progress)
++ return;
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ /* Save this before calling printk(), since that will clobber it: */
++ preempt_disable_ip = get_preempt_disable_ip(current);
++
++ pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++ file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), current->non_block_count,
++ current->pid, current->comm);
++ pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++ offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++ pr_err("RCU nest depth: %d, expected: %u\n",
++ rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++ }
++
++ if (task_stack_end_corrupted(current))
++ pr_emerg("Thread overran stack, or stack corrupted\n");
++
++ debug_show_held_locks(current);
++ if (irqs_disabled())
++ print_irqtrace_events(current);
++
++ print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++ preempt_disable_ip);
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > preempt_offset)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++ printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (is_migration_disabled(current))
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > 0)
++ return;
++
++ if (current->migration_flags & MDF_FORCE_ENABLED)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), is_migration_disabled(current),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++ struct task_struct *g, *p;
++ struct sched_attr attr = {
++ .sched_policy = SCHED_NORMAL,
++ };
++
++ read_lock(&tasklist_lock);
++ for_each_process_thread(g, p) {
++ /*
++ * Only normalize user tasks:
++ */
++ if (p->flags & PF_KTHREAD)
++ continue;
++
++ schedstat_set(p->stats.wait_start, 0);
++ schedstat_set(p->stats.sleep_start, 0);
++ schedstat_set(p->stats.block_start, 0);
++
++ if (!rt_task(p)) {
++ /*
++ * Renice negative nice level userspace
++ * tasks back to 0:
++ */
++ if (task_nice(p) < 0)
++ set_user_nice(p, 0);
++ continue;
++ }
++
++ __sched_setscheduler(p, &attr, false, false);
++ }
++ read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for KDB.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++ return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++ kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++ sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++ /*
++ * We have to wait for yet another RCU grace period to expire, as
++ * print_cfs_stats() might run concurrently.
++ */
++ call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++ struct task_group *tg;
++
++ tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++ if (!tg)
++ return ERR_PTR(-ENOMEM);
++
++ return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* RCU callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++ /* Now it should be safe to free those cfs_rqs: */
++ sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++ /* Wait for possible concurrent references to cfs_rqs complete: */
++ call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++ return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++ struct task_group *parent = css_tg(parent_css);
++ struct task_group *tg;
++
++ if (!parent) {
++ /* This is early initialization for the top cgroup */
++ return &root_task_group.css;
++ }
++
++ tg = sched_create_group(parent);
++ if (IS_ERR(tg))
++ return ERR_PTR(-ENOMEM);
++ return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++ struct task_group *parent = css_tg(css->parent);
++
++ if (parent)
++ sched_online_group(tg, parent);
++ return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ /*
++ * Relies on the RCU grace period between css_released() and this.
++ */
++ sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++ return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++ /*
++ * We can't change the weight of the root cgroup.
++ */
++ if (&root_task_group == tg)
++ return -EINVAL;
++
++ shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++ mutex_lock(&shares_mutex);
++ if (tg->shares == shares)
++ goto done;
++
++ tg->shares = shares;
++done:
++ mutex_unlock(&shares_mutex);
++ return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 shareval)
++{
++ if (shareval > scale_load_down(ULONG_MAX))
++ shareval = MAX_SHARES;
++ return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ struct task_group *tg = css_tg(css);
++
++ return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, s64 cfs_quota_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_burst_us)
++{
++ return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 val)
++{
++ return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 rt_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++ {
++ .name = "shares",
++ .read_u64 = cpu_shares_read_u64,
++ .write_u64 = cpu_shares_write_u64,
++ },
++#endif
++ {
++ .name = "cfs_quota_us",
++ .read_s64 = cpu_cfs_quota_read_s64,
++ .write_s64 = cpu_cfs_quota_write_s64,
++ },
++ {
++ .name = "cfs_period_us",
++ .read_u64 = cpu_cfs_period_read_u64,
++ .write_u64 = cpu_cfs_period_write_u64,
++ },
++ {
++ .name = "cfs_burst_us",
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "stat",
++ .seq_show = cpu_cfs_stat_show,
++ },
++ {
++ .name = "stat.local",
++ .seq_show = cpu_cfs_local_stat_show,
++ },
++ {
++ .name = "rt_runtime_us",
++ .read_s64 = cpu_rt_runtime_read,
++ .write_s64 = cpu_rt_runtime_write,
++ },
++ {
++ .name = "rt_period_us",
++ .read_u64 = cpu_rt_period_read_uint,
++ .write_u64 = cpu_rt_period_write_uint,
++ },
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++ { } /* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft, u64 weight)
++{
++ return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 nice)
++{
++ return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 idle)
++{
++ return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes, loff_t off)
++{
++ return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++ {
++ .name = "weight",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_weight_read_u64,
++ .write_u64 = cpu_weight_write_u64,
++ },
++ {
++ .name = "weight.nice",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_weight_nice_read_s64,
++ .write_s64 = cpu_weight_nice_write_s64,
++ },
++ {
++ .name = "idle",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++ {
++ .name = "max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_max_show,
++ .write = cpu_max_write,
++ },
++ {
++ .name = "max.burst",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++ { } /* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++ .css_alloc = cpu_cgroup_css_alloc,
++ .css_online = cpu_cgroup_css_online,
++ .css_released = cpu_cgroup_css_released,
++ .css_free = cpu_cgroup_css_free,
++ .css_extra_stat_show = cpu_extra_stat_show,
++ .css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++ .can_attach = cpu_cgroup_can_attach,
++#endif
++ .attach = cpu_cgroup_attach,
++ .legacy_cftypes = cpu_legacy_files,
++ .dfl_cftypes = cpu_files,
++ .early_init = true,
++ .threaded = true,
++};
++#endif /* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ * X = Y = 0
++ *
++ * w[X]=1 w[Y]=1
++ * MB MB
++ * r[Y]=y r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0 CPU1
++ *
++ * Context switch CS-1 Remote-clear
++ * - store to rq->curr: (N)->(Y) (TSA) - cmpxchg to *pcpu_cid to LAZY (TMA)
++ * (implied barrier after cmpxchg)
++ * - switch_mm_cid()
++ * - memory barrier (see switch_mm_cid()
++ * comment explaining how this barrier
++ * is combined with other scheduler
++ * barriers)
++ * - mm_cid_get (next)
++ * - READ_ONCE(*pcpu_cid) - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
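++/*
++ * Mapping sketch (illustrative only, restating the scenario above): in
++ * Dekker terms, w[X] is the rq->curr store in context_switch(), w[Y] is
++ * the cmpxchg setting the LAZY flag, r[Y] is the READ_ONCE(*pcpu_cid) in
++ * mm_cid_get(), and r[X] is the rcu_dereference(src_rq->curr) on the
++ * remote-clear side; the two full barriers are the scheduler barrier in
++ * switch_mm_cid() and the barrier implied by the cmpxchg.
++ */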
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++ t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid)
++{
++ struct mm_struct *mm = t->mm;
++ struct task_struct *src_task;
++ int src_cid, last_mm_cid;
++
++ if (!mm)
++ return -1;
++
++ last_mm_cid = t->last_mm_cid;
++ /*
++ * If the migrated task has no last cid, or if the current
++ * task on src rq uses the cid, it means the source cid does not need
++ * to be moved to the destination cpu.
++ */
++ if (last_mm_cid == -1)
++ return -1;
++ src_cid = READ_ONCE(src_pcpu_cid->cid);
++ if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++ return -1;
++
++ /*
++ * If we observe an active task using the mm on this rq, it means we
++ * are not the last task to be migrated from this cpu for this mm, so
++ * there is no need to move src_cid to the destination cpu.
++ */
++ rcu_read_lock();
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ rcu_read_unlock();
++ t->last_mm_cid = -1;
++ return -1;
++ }
++ rcu_read_unlock();
++
++ return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid,
++ int src_cid)
++{
++ struct task_struct *src_task;
++ struct mm_struct *mm = t->mm;
++ int lazy_cid;
++
++ if (src_cid == -1)
++ return -1;
++
++ /*
++ * Attempt to clear the source cpu cid to move it to the destination
++ * cpu.
++ */
++ lazy_cid = mm_cid_set_lazy_put(src_cid);
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++ return -1;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, this task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ /*
++ * We observed an active task for this mm, there is therefore
++ * no point in moving this cid to the destination cpu.
++ */
++ t->last_mm_cid = -1;
++ return -1;
++ }
++ }
++
++ /*
++ * The src_cid is unused, so it can be unset.
++ */
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ return -1;
++ return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
++{
++ struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++ struct mm_struct *mm = t->mm;
++ int src_cid, dst_cid;
++ struct rq *src_rq;
++
++ lockdep_assert_rq_held(dst_rq);
++
++ if (!mm)
++ return;
++ if (src_cpu == -1) {
++ t->last_mm_cid = -1;
++ return;
++ }
++ /*
++ * Move the src cid if the dst cid is unset. This keeps id
++ * allocation closest to 0 in cases where few threads migrate around
++ * many CPUs.
++ *
++ * If destination cid is already set, we may have to just clear
++ * the src cid to ensure compactness in frequent migrations
++ * scenarios.
++ *
++ * It is not useful to clear the src cid when the number of threads is
++ * greater or equal to the number of allowed CPUs, because user-space
++ * can expect that the number of allowed cids can reach the number of
++ * allowed CPUs.
++ */
++ dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++ dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++ if (!mm_cid_is_unset(dst_cid) &&
++ atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++ return;
++ src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++ src_rq = cpu_rq(src_cpu);
++ src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++ if (src_cid == -1)
++ return;
++ src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++ src_cid);
++ if (src_cid == -1)
++ return;
++ if (!mm_cid_is_unset(dst_cid)) {
++ __mm_cid_put(mm, src_cid);
++ return;
++ }
++ /* Move src_cid to dst cpu. */
++ mm_cid_snapshot_time(dst_rq, mm);
++ WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
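++/*
++ * Outcome summary (illustrative, mirroring the code above): the stolen
++ * src_cid is either installed into dst_pcpu_cid when the destination was
++ * unset, returned to the mm cidmask via __mm_cid_put() when the
++ * destination already holds a cid, or left untouched when the fetch or
++ * steal step bails out with -1.
++ */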
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++ int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *t;
++ int cid, lazy_cid;
++
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid))
++ return;
++
++ /*
++ * Clear the cpu cid if it is set to keep cid allocation compact. If
++ * there happens to be other tasks left on the source cpu using this
++ * mm, the next task using this mm will reallocate its cid on context
++ * switch.
++ */
++ lazy_cid = mm_cid_set_lazy_put(cid);
++ if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++ return;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, that task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ t = rcu_dereference(rq->curr);
++ if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++ return;
++ }
++
++ /*
++ * The cid is unused, so it can be unset.
++ * Disable interrupts to keep the window of cid ownership without rq
++ * lock small.
++ */
++ scoped_guard (irqsave) {
++ if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ __mm_cid_put(mm, cid);
++ }
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct mm_cid *pcpu_cid;
++ struct task_struct *curr;
++ u64 rq_clock;
++
++ /*
++ * rq->clock load is racy on 32-bit but one spurious clear once in a
++ * while is irrelevant.
++ */
++ rq_clock = READ_ONCE(rq->clock);
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++ /*
++ * In order to take care of infrequently scheduled tasks, bump the time
++ * snapshot associated with this cid if an active task using the mm is
++ * observed on this rq.
++ */
++ scoped_guard (rcu) {
++ curr = rcu_dereference(rq->curr);
++ if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++ WRITE_ONCE(pcpu_cid->time, rq_clock);
++ return;
++ }
++ }
++
++ if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++ int weight)
++{
++ struct mm_cid *pcpu_cid;
++ int cid;
++
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid) || cid < weight)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++ unsigned long now = jiffies, old_scan, next_scan;
++ struct task_struct *t = current;
++ struct cpumask *cidmask;
++ struct mm_struct *mm;
++ int weight, cpu;
++
++ SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++ work->next = work; /* Prevent double-add */
++ if (t->flags & PF_EXITING)
++ return;
++ mm = t->mm;
++ if (!mm)
++ return;
++ old_scan = READ_ONCE(mm->mm_cid_next_scan);
++ next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ if (!old_scan) {
++ unsigned long res;
++
++ res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++ if (res != old_scan)
++ old_scan = res;
++ else
++ old_scan = next_scan;
++ }
++ if (time_before(now, old_scan))
++ return;
++ if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++ return;
++ cidmask = mm_cidmask(mm);
++ /* Clear cids that were not recently used. */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_old(mm, cpu);
++ weight = cpumask_weight(cidmask);
++ /*
++ * Clear cids that are greater or equal to the cidmask weight to
++ * recompact it.
++ */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ int mm_users = 0;
++
++ if (mm) {
++ mm_users = atomic_read(&mm->mm_users);
++ if (mm_users == 1)
++ mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ }
++ t->cid_work.next = &t->cid_work; /* Protect against double add */
++ init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++ struct callback_head *work = &curr->cid_work;
++ unsigned long now = jiffies;
++
++ if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++ work->next != work)
++ return;
++ if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++ return;
++ task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ scoped_guard (rq_lock_irqsave, rq) {
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 1);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++ }
++ rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++ WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++ t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..6f7fcaaf0ef4
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,210 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++ return p->migration_disabled;
++#else
++ return false;
++#endif
++}
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p) rt_prio((p)->prio)
++#define rt_policy(policy) ((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p) (rt_policy((p)->policy))
++
++struct affinity_context {
++ const struct cpumask *new_mask;
++ struct cpumask *user_mask;
++ unsigned int flags;
++};
++
++#define SCA_CHECK 0x01
++#define SCA_MIGRATE_DISABLE 0x02
++#define SCA_MIGRATE_ENABLE 0x04
++#define SCA_USER 0x08
++
++#ifdef CONFIG_SMP
++
++extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ /*
++	 * See do_set_cpus_allowed() for the rcu_head usage.
++ */
++ int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++ return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++#else /* !CONFIG_SMP: */
++
++static inline int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++ if (pi_task)
++ prio = min(prio, pi_task->prio);
++
++ return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++ return __rt_effective_prio(pi_task, prio);
++}
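++/*
++ * Worked example (illustrative): a task at prio 120 whose rt_mutex has a
++ * top waiter at prio 98 is boosted to min(120, 98) == 98; when
++ * rt_mutex_get_top_task() returns NULL, the prio is left unchanged.
++ */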
++
++#else /* !CONFIG_RT_MUTEXES: */
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ return prio;
++}
++
++#endif /* !CONFIG_RT_MUTEXES */
++
++extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
++extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++extern void __setscheduler_prio(struct task_struct *p, int prio);
++
++/*
++ * Context API
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock(&rq->lock);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ *plock = NULL;
++ return rq;
++ }
++ }
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++ if (NULL != lock)
++ raw_spin_unlock(lock);
++}
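++/*
++ * Usage sketch (hypothetical caller, not part of this patch):
++ *
++ *	raw_spinlock_t *lock;
++ *	struct rq *rq = __task_access_lock(p, &lock);
++ *
++ *	... p cannot change runqueues here ...
++ *
++ *	__task_access_unlock(p, lock);
++ *
++ * The unlock side tolerates the NULL lock returned for a task that is
++ * neither running nor queued.
++ */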
++
++void check_task_changed(struct task_struct *p, struct rq *rq);
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++ const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *next = p->sq_node.next;
++
++ if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++ struct list_head *head;
++ unsigned long idx = next - &rq->queue.heads[0];
++
++ idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++ sched_idx2prio(idx, rq) + 1);
++ head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++ }
++
++ return list_next_entry(p, sq_node);
++}
++
++extern void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef ALT_SCHED_DEBUG
++extern void alt_sched_debug(void);
++#else
++static inline void alt_sched_debug(void) {}
++#endif
++
++extern int sched_yield_type;
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++/* balance callback */
++#ifdef CONFIG_SMP
++extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
++extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
++#else
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date : 2020
++ */
++#include "sched.h"
++#include <linux/sched/debug.h>
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...) \
++ do { \
++ if (m) \
++ seq_printf(m, x); \
++ else \
++ pr_cont(x); \
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++ struct seq_file *m)
++{
++ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++ get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..e8e61a59eae8
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,989 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++ struct cgroup_subsys_state css;
++
++ struct rcu_head rcu;
++ struct list_head list;
++
++ struct task_group *parent;
++ struct list_head siblings;
++ struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++ unsigned long shares;
++#endif
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++ struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO (32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS (64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO (SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x) ({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++ unsigned long __w = (w); \
++ if (__w) \
++ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++ __w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) (w)
++# define scale_load_down(w) (w)
++#endif
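++/*
++ * Worked example (illustrative): on 64-bit, with SCHED_FIXEDPOINT_SHIFT
++ * being 10, a nice-0 weight of 1024 maps to scale_load(1024) == 1048576,
++ * and scale_load_down(1048576) recovers 1024; scale_load_down() clamps
++ * any non-zero input to at least 2 so small weights never truncate to 0.
++ */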
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so neither an entity's weight nor a task group's
++ * shares value should be too large.
++ * (The default weight is 1024 - so there's no practical
++ * limitation from this.)
++ */
++#define MIN_SHARES (1UL << 1)
++#define MAX_SHARES (1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED 1
++#define TASK_ON_RQ_MIGRATING 2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++ return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC 0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK 0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU 0x08 /* Wakeup; maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC 0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED 0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU 0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS (SCHED_LEVELS - 1)
++
++struct sched_queue {
++ DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++ struct list_head heads[SCHED_LEVELS];
++};
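++/*
++ * Illustrative sketch (assumed intent, see sched_rq_first_task() in
++ * alt_core.h): each set bit in @bitmap marks a non-empty list in @heads,
++ * so finding the highest-priority runnable level is a single bit search:
++ *
++ *	idx = find_first_bit(queue->bitmap, SCHED_QUEUE_BITS);
++ *	p = list_first_entry(&queue->heads[idx], struct task_struct, sq_node);
++ *
++ * The real lookups go through sched_prio2idx()/sched_idx2prio() to
++ * account for the BMQ/PDS index rotation.
++ */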
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++ struct balance_callback *next;
++ void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++ struct task_struct *task;
++ int active;
++ cpumask_t *cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++ /* runqueue lock: */
++ raw_spinlock_t lock;
++
++ struct task_struct __rcu *curr;
++ struct task_struct *idle;
++ struct task_struct *stop;
++ struct mm_struct *prev_mm;
++
++ struct sched_queue queue ____cacheline_aligned;
++
++ int prio;
++#ifdef CONFIG_SCHED_PDS
++ int prio_idx;
++ u64 time_edge;
++#endif
++
++ /* switch count */
++ u64 nr_switches;
++
++ atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++ u64 last_seen_need_resched_ns;
++ int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++ int membarrier_state;
++#endif
++
++ set_idle_mask_func_t set_idle_mask_func;
++ clear_idle_mask_func_t clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++ int cpu; /* cpu of this runqueue */
++ bool online;
++
++ unsigned int ttwu_pending;
++ unsigned char nohz_idle_balance;
++ unsigned char idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ struct sched_avg avg_irq;
++#endif
++
++ balance_func_t balance_func;
++ struct balance_arg active_balance_arg ____cacheline_aligned;
++ struct cpu_stop_work active_balance_work;
++
++ struct balance_callback *balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ struct rcuwait hotplug_wait;
++#endif
++ unsigned int nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++ u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++ s32 load_history;
++ u64 load_block;
++ u64 load_stamp;
++
++ /* calc_load related fields */
++ unsigned long calc_load_update;
++ long calc_load_active;
++
++ /* Ensure that all clocks are in the same cache line */
++ u64 clock ____cacheline_aligned;
++ u64 clock_task;
++
++ unsigned int nr_running;
++ unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++ call_single_data_t hrtick_csd;
++#endif
++ struct hrtimer hrtick_timer;
++ ktime_t hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++ /* latency stats */
++ struct sched_info rq_sched_info;
++ unsigned long long rq_cpu_time;
++ /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++ /* sys_sched_yield() stats */
++ unsigned int yld_count;
++
++ /* schedule() stats */
++ unsigned int sched_switch;
++ unsigned int sched_count;
++ unsigned int sched_goidle;
++
++ /* try_to_wake_up() stats */
++ unsigned int ttwu_count;
++ unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within an rcu lock section */
++ struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++ call_single_data_t nohz_csd;
++#endif
++ atomic_t nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++ /* Scratch cpumask to be temporarily used under rq_lock */
++ cpumask_var_t scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
++#define this_rq() this_cpu_ptr(&runqueues)
++#define task_rq(p) cpu_rq(task_cpu(p))
++#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
++#define raw_rq() raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++ SMT_LEVEL_SPACE_HOLDER,
++#endif
++ COREGROUP_LEVEL_SPACE_HOLDER,
++ CORE_LEVEL_SPACE_HOLDER,
++ OTHER_LEVEL_SPACE_HOLDER,
++ NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++ int cpu;
++
++ while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++ mask++;
++
++ return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++ return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
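++/*
++ * Illustrative note (assumption based on the affinity levels above):
++ * per_cpu(sched_cpu_topo_masks, cpu) is ordered nearest-first (SMT,
++ * coregroup, core, others), so best_mask_cpu() returns a CPU from the
++ * closest topology level that intersects @mask; e.g. an idle SMT
++ * sibling is preferred over a CPU that merely shares the LLC.
++ */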
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++ return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++ return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ: a call to
++	 * sched_info_xxxx() may not hold rq->lock.
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ: a call to
++	 * sched_info_xxxx() may not hold rq->lock.
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP 0x01
++
++#define ENQUEUE_WAKEUP 0x01
++
++
++/*
++ * Below are the scheduler APIs used by other kernel code.
++ * They take a dummy rq_flags argument.
++ * TODO: BMQ needs to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++ unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ local_irq_disable();
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++ return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++ return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++ lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++ raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++ local_irq_disable();
++ raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++ raw_spin_rq_unlock(rq);
++ local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++ return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++ return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++ rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ WARN_ON(!rcu_read_lock_held());
++ return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ return rq->cpu;
++#else
++ return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT 0
++#define NOHZ_STATS_KICK_BIT 1
++
++#define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++ u64 total;
++ u64 tick_delta;
++ u64 irq_start_time;
++ struct u64_stats_sync sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++ unsigned int seq;
++ u64 total;
++
++ do {
++ seq = __u64_stats_fetch_begin(&irqtime->sync);
++ total = irqtime->total;
++ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++ return total;
++}
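++/*
++ * Note (illustrative): the __u64_stats_fetch_begin()/_retry() loop is
++ * the usual 32-bit-safe snapshot pattern: the read of @total is redone
++ * until the seqcount is stable. On 64-bit builds this reduces to a
++ * plain load.
++ */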
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant() (true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant() (false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++ unsigned long min,
++ unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV 0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++ int membarrier_state;
++
++ if (prev_mm == next_mm)
++ return;
++
++ membarrier_state = atomic_read(&next_mm->membarrier_state);
++ if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++ return;
++
++ WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++ struct task_struct *p)
++{
++ return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
++#define MM_CID_SCAN_DELAY 100 /* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++ if (cid < 0)
++ return;
++ cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (!mm_cid_is_lazy_put(cid) ||
++ !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid, res;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ for (;;) {
++ if (mm_cid_is_unset(cid))
++ return MM_CID_UNSET;
++ /*
++ * Attempt transition from valid or lazy-put to unset.
++ */
++ res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++ if (res == cid)
++ break;
++ cid = res;
++ }
++ return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = mm_cid_pcpu_unset(mm);
++ if (cid == MM_CID_UNSET)
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++ struct cpumask *cpumask;
++ int cid;
++
++ cpumask = mm_cidmask(mm);
++ /*
++ * Retry finding first zero bit if the mask is temporarily
++ * filled. This only happens during concurrent remote-clear
++ * which owns a cid without holding a rq lock.
++ */
++ for (;;) {
++ cid = cpumask_first_zero(cpumask);
++ if (cid < nr_cpu_ids)
++ break;
++ cpu_relax();
++ }
++ if (cpumask_test_and_set_cpu(cid, cpumask))
++ return -1;
++ return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, allowing to estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++ lockdep_assert_rq_held(rq);
++ WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++ int cid;
++
++ /*
++ * All allocations (even those using the cid_lock) are lock-free. If
++ * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++ * guarantee forward progress.
++ */
++ if (!READ_ONCE(use_cid_lock)) {
++ cid = __mm_cid_try_get(mm);
++ if (cid >= 0)
++ goto end;
++ raw_spin_lock(&cid_lock);
++ } else {
++ raw_spin_lock(&cid_lock);
++ cid = __mm_cid_try_get(mm);
++ if (cid >= 0)
++ goto unlock;
++ }
++
++ /*
++	 * The cid was concurrently allocated. Retry while forcing subsequent
++	 * allocations to take the cid_lock, which ensures forward progress.
++ */
++ WRITE_ONCE(use_cid_lock, 1);
++ /*
++ * Set use_cid_lock before allocation. Only care about program order
++ * because this is only required for forward progress.
++ */
++ barrier();
++ /*
++ * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all incoming allocations observe the use_cid_lock flag set.
++ */
++ do {
++ cid = __mm_cid_try_get(mm);
++ cpu_relax();
++ } while (cid < 0);
++ /*
++ * Allocate before clearing use_cid_lock. Only care about
++ * program order because this is for forward progress.
++ */
++ barrier();
++ WRITE_ONCE(use_cid_lock, 0);
++unlock:
++ raw_spin_unlock(&cid_lock);
++end:
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++}
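++/*
++ * Shape of the slow path above (illustrative summary):
++ *
++ *	if (lock-free attempt succeeds)		// common case
++ *		return cid;
++ *	raw_spin_lock(&cid_lock);
++ *	WRITE_ONCE(use_cid_lock, 1);		// serialize newcomers
++ *	while (lock-free attempt fails)		// drains as holders put cids
++ *		cpu_relax();
++ *	WRITE_ONCE(use_cid_lock, 0);
++ *	raw_spin_unlock(&cid_lock);
++ */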
++
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ struct cpumask *cpumask;
++ int cid;
++
++ lockdep_assert_rq_held(rq);
++ cpumask = mm_cidmask(mm);
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (mm_cid_is_valid(cid)) {
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++ }
++ if (mm_cid_is_lazy_put(cid)) {
++ if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++ }
++ cid = __mm_cid_get(rq, mm);
++ __this_cpu_write(pcpu_cid->cid, cid);
++ return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++ struct task_struct *prev,
++ struct task_struct *next)
++{
++ /*
++ * Provide a memory barrier between rq->curr store and load of
++ * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++ *
++ * Should be adapted if context_switch() is modified.
++ */
++ if (!next->mm) { // to kernel
++ /*
++ * user -> kernel transition does not guarantee a barrier, but
++ * we can use the fact that it performs an atomic operation in
++ * mmgrab().
++ */
++ if (prev->mm) // from user
++ smp_mb__after_mmgrab();
++ /*
++ * kernel -> kernel transition does not change rq->curr->mm
++ * state. It stays NULL.
++ */
++ } else { // to user
++ /*
++ * kernel -> user transition does not provide a barrier
++ * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++ * Provide it here.
++ */
++ if (!prev->mm) // from kernel
++ smp_mb();
++ /*
++ * user -> user transition guarantees a memory barrier through
++ * switch_mm() when current->mm changes. If current->mm is
++ * unchanged, no barrier is needed.
++ */
++ }
++ if (prev->mm_cid_active) {
++ mm_cid_snapshot_time(rq, prev->mm);
++ mm_cid_put_lazy(prev);
++ prev->mm_cid = -1;
++ }
++ if (next->mm_cid_active)
++ next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++ struct balance_callback *head,
++ void (*func)(struct rq *rq))
++{
++ lockdep_assert_rq_held(rq);
++
++ /*
++ * Don't (re)queue an already queued item; nor queue anything when
++ * balance_push() is active, see the comment with
++ * balance_push_callback.
++ */
++ if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++ return;
++
++ head->func = func;
++ head->next = rq->balance_callback;
++ rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
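The __mm_cid_get() path above is easier to follow in isolation: allocations are
attempted lock-free first, and only when an allocator loses a race does it take
cid_lock and raise use_cid_lock, which herds every later allocator through the
lock until the stalled allocation succeeds. Below is a minimal standalone
userspace sketch of the same pattern; the names and sizes are hypothetical and
the memory ordering is simplified, so treat it as an illustration of the
technique, not the kernel code:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NR_CIDS 8

    static atomic_int cids[NR_CIDS];        /* 0 = free, 1 = taken */
    static atomic_int use_cid_lock;
    static pthread_mutex_t cid_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Lock-free attempt, cf. __mm_cid_try_get() */
    static int try_get_cid(void)
    {
        for (int i = 0; i < NR_CIDS; i++) {
            int expected = 0;
            if (atomic_compare_exchange_strong(&cids[i], &expected, 1))
                return i;
        }
        return -1;
    }

    /* Two-phase allocation, cf. __mm_cid_get() */
    static int get_cid(void)
    {
        int cid;

        if (!atomic_load(&use_cid_lock)) {
            cid = try_get_cid();
            if (cid >= 0)
                return cid;
            pthread_mutex_lock(&cid_lock);
        } else {
            pthread_mutex_lock(&cid_lock);
            cid = try_get_cid();
            if (cid >= 0)
                goto unlock;
        }
        /* Force later allocators through the lock until we succeed;
         * assumes some other thread eventually frees a cid. */
        atomic_store(&use_cid_lock, 1);
        do {
            cid = try_get_cid();
        } while (cid < 0);
        atomic_store(&use_cid_lock, 0);
    unlock:
        pthread_mutex_unlock(&cid_lock);
        return cid;
    }

    int main(void)
    {
        printf("got cid %d\n", get_cid());
        return 0;
    }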
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++ if (cpulist_parse(str, &sched_pcore_mask))
++ pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++ return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++ cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p + 2) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++
++/* common balance functions */
++static int active_balance_cpu_stop(void *data)
++{
++ struct balance_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++ cpumask_t tmp;
++
++ local_irq_save(flags);
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++
++ arg->active = 0;
++
++ if (task_on_rq_queued(p) && task_rq(p) == rq &&
++ cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++ !is_migration_disabled(p)) {
++ int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++ rq = move_queued_task(rq, p, dcpu);
++ }
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++ struct balance_arg *arg;
++ unsigned long flags;
++ struct task_struct *p;
++ int res;
++
++ if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++ return 0;
++
++ arg = &rq->active_balance_arg;
++ res = (1 == rq->nr_running) && \
++ !is_migration_disabled((p = sched_rq_first_task(rq))) && \
++ cpumask_intersects(p->cpus_ptr, target_mask) && \
++ !arg->active;
++ if (res) {
++ arg->task = p;
++ arg->cpumask = target_mask;
++
++ arg->active = 1;
++ }
++
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ if (res) {
++ preempt_disable();
++ raw_spin_unlock(&src_rq->lock);
++
++ stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++ &rq->active_balance_work);
++
++ preempt_enable();
++ raw_spin_lock(&src_rq->lock);
++ }
++
++ return res;
++}
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, single_task_mask, cpu)
++ if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ cpumask_t smt_single_mask;
++
++ if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++ if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++ trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++ }
++
++ return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ (/* smt core group balance */
++ (static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++ ) ||
++ /* e core to idle smt core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++ return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++ return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* smt occupied p core to idle e core balance */
++ smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++ return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* idle e core to p core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++ return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...) printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...) do { } while(0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func) \
++{ \
++ idle_select_func = func; \
++ printk(KERN_INFO "sched: "#func"\n"); \
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func) \
++{ \
++ rq->balance_func = func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func"\n", cpu); \
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func) \
++{ \
++ rq->set_idle_mask_func = set_func; \
++ rq->clear_idle_mask_func = clear_func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func"\n", cpu); \
++}
++
++void sched_init_topology(void)
++{
++ int cpu;
++ struct rq *rq;
++ cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++ int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++ if (!cpumask_empty(&sched_smt_mask))
++ printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++ printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++ sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++ ecore_present = !cpumask_empty(&sched_ecore_mask);
++ }
++
++#ifdef CONFIG_SCHED_SMT
++ /* idle select function */
++ if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++ SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++ } else
++#endif
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++ }
++
++ for_each_online_cpu(cpu) {
++ rq = cpu_rq(cpu);
++ /* take the chance to reset the time slice for idle tasks */
++ rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++ !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++ } else {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++ }
++
++ continue;
++ }
++#endif
++ /* !SMT or only one cpu in sg */
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++ if (ecore_present)
++ SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++ continue;
++ }
++ if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++ SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++ }
++ }
++}
++#endif /* CONFIG_SMP */
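Two things in alt_topology.c are worth calling out. First, the hybrid-core
support is driven entirely by the new pcore_cpus= boot parameter parsed with
cpulist_parse(); on a machine whose performance cores happen to be CPUs 0-7
(the numbering is machine-specific, check /sys/devices/system/cpu) that would
be, for example:

    pcore_cpus=0-7

Second, p1_idle_select_func() and p1p2_idle_select_func() read src2p + 1 and
src2p + 2, i.e. they assume the caller passes sched_idle_mask as the first of
several consecutively allocated idle masks (presumably followed by the
pcore/ecore idle masks set above); that layout has to be established in
alt_core.c, which is not part of this hunk.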
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++ int limit;
++
++ switch (p->policy) {
++ case SCHED_NORMAL:
++ limit = -MAX_PRIORITY_ADJ;
++ break;
++ case SCHED_BATCH:
++ limit = 0;
++ break;
++ default:
++ return;
++ }
++
++ p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++ if (p->boost_prio < MAX_PRIORITY_ADJ)
++ p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); the return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ prio = task_sched_prio(p); \
++ idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++ return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++ return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++ s64 delta = this_rq()->clock_task - p->last_ran;
++
++ if (likely(delta > 0))
++ boost_task(p, delta >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++ boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
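The boost machinery above is driven from two directions: sched_task_ttwu()
converts the time a task just spent sleeping into boost steps, one per 2^22 ns
(about 4.2 ms, matching the default time slice), while sched_task_renew()
deboosts by one step each time a task burns through its slice; the resulting
boost_prio then shifts the queue index computed by task_sched_prio(). A
standalone sketch of the arithmetic follows. MAX_PRIORITY_ADJ is defined
elsewhere in the patch; the value 7 below is an assumption taken from the
"-7 ... 7" range quoted in the task_prio() comment later in this patch and may
not be the real value:

    #include <stdio.h>

    #define MAX_PRIORITY_ADJ 7      /* assumed, see lead-in */

    static int boost_prio;          /* per-task field in the real code */

    static void boost(int n)        /* cf. boost_task(), SCHED_NORMAL case */
    {
        int b = boost_prio - n;
        boost_prio = b < -MAX_PRIORITY_ADJ ? -MAX_PRIORITY_ADJ : b;
    }

    static void deboost(void)       /* cf. deboost_task() */
    {
        if (boost_prio < MAX_PRIORITY_ADJ)
            boost_prio++;
    }

    int main(void)
    {
        long long slept_ns = 25LL * 1000 * 1000;    /* task slept 25 ms */

        boost(slept_ns >> 22);  /* 25 ms / ~4.2 ms -> 5 boost steps */
        printf("boost_prio after wakeup: %d\n", boost_prio);   /* -5 */

        deboost();              /* one slice renewed */
        printf("boost_prio after renew:  %d\n", boost_prio);   /* -4 */
        return 0;
    }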
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index 39c315182b35..0fc447d9b863 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -42,14 +42,20 @@
+
+ #include "idle.c"
+
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+
+ #include "cputime.c"
++#ifndef CONFIG_SCHED_ALT
+ #include "deadline.c"
++#endif
+
+ #include "syscalls.c"
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..58d04aa73634 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+
+ #include "clock.c"
+
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -84,7 +88,9 @@
+
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index eece6244f9d2..3075127f9e95 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -197,12 +197,17 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
+
+ util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+ util = max(util, boost);
+ sg_cpu->bw_min = min;
+ sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++ sg_cpu->bw_min = 0;
++ sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -343,8 +348,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+ */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -676,6 +683,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ }
+
+ ret = sched_setattr_nocheck(thread, &attr);
++
+ if (ret) {
+ kthread_stop(thread);
+ pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 0bed0fa1acd9..031affa09446 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ p->utime += cputime;
+ account_group_user_time(p, cputime);
+
+- index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++ index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+
+ /* Add user time to cpustat. */
+ task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ p->gtime += cputime;
+
+ /* Add guest time to cpustat. */
+- if (task_nice(p) > 0) {
++ if (task_running_nice(p)) {
+ task_group_account_field(p, CPUTIME_NICE, cputime);
+ cpustat[CPUTIME_GUEST_NICE] += cputime;
+ } else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+- return t->se.sum_exec_runtime;
++ return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ struct rq *rq;
+
+ rq = task_rq_lock(t, &rf);
+- ns = t->se.sum_exec_runtime;
++ ns = tsk_seruntime(t);
+ task_rq_unlock(rq, t, &rf);
+
+ return ns;
+@@ -623,7 +623,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ struct task_cputime cputime = {
+- .sum_exec_runtime = p->se.sum_exec_runtime,
++ .sum_exec_runtime = tsk_seruntime(p),
+ };
+
+ if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index c1eb9a1afd13..d3aec989797d 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+ * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * This allows printing both to /sys/kernel/debug/sched/debug and
+ * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+
+ #ifdef CONFIG_SMP
+@@ -332,6 +335,7 @@ static const struct file_operations sched_debug_fops = {
+ .llseek = seq_lseek,
+ .release = seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+
+ static struct dentry *debugfs_sched;
+
+@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
+
+ debugfs_sched = debugfs_create_dir("sched", NULL);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+
+ debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+
+@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
+ #endif
+
+ debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+
+ return 0;
+ }
+ late_initcall(sched_init_debug);
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+
+ static cpumask_var_t sd_sysctl_cpus;
+@@ -1111,6 +1120,7 @@ void proc_sched_set_task(struct task_struct *p)
+ memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 6e78d071beb5..5fe0955ea1a6 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -424,6 +424,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ do_idle();
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * idle-task scheduling class.
+ */
+@@ -545,3 +546,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ .switched_to = switched_to_idle,
+ .update_curr = update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM (32)
++#define SCHED_EDGE_DELTA (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++ if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++ "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++ return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++ return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ if (p->prio < MIN_NORMAL_PRIO) { \
++ prio = p->prio >> 2; \
++ idx = prio; \
++ } else { \
++ u64 sched_dl = max(p->deadline, rq->time_edge); \
++ prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge; \
++ idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl); \
++ }
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++ return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++ sched_prio :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++ return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++ sched_idx :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++ struct list_head head;
++ u64 old = rq->time_edge;
++ u64 now = rq->clock >> sched_timeslice_shift;
++ u64 prio, delta;
++ DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++ if (now == old)
++ return;
++
++ rq->time_edge = now;
++ delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++ INIT_LIST_HEAD(&head);
++
++ prio = MIN_SCHED_NORMAL_PRIO;
++ for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++ list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++ SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++ bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++ if (!list_empty(&head)) {
++ u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++ __list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++ set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++ }
++ bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++ (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++ if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++ return;
++
++ rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ if (p->prio >= MIN_NORMAL_PRIO)
++ p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++ (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++ u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++ if (unlikely(p->deadline > max_dl))
++ p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++ sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++ sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
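The constants at the top of pds.h deserve a quick sanity check. With
sched_timeslice_shift = 23, rq->time_edge ticks once every 2^23 ns, roughly
8.4 ms, i.e. every two default 4 ms slices, and sched_update_rq_clock() uses
that edge to rotate the 32 SCHED_NORMAL levels as a ring buffer, with
SCHED_NORMAL_PRIO_MOD() as the ring index. Checking the numbers in plain C:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long clock_ns = 1000000000ULL;  /* 1 s of rq clock */
        int shift = 23;           /* two 4 ms (2^22 ns) slice slots */

        /* 1e9 / 2^23 -> time_edge has advanced ~119 times */
        printf("time_edge = %llu\n", clock_ns >> shift);

        /* ring slot for a deadline, cf. SCHED_NORMAL_PRIO_MOD() */
        unsigned long long deadline = (clock_ns >> shift) + 20;
        printf("queue slot = %llu\n", deadline & 31);  /* (119+20)%32 = 11 */
        return 0;
    }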
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index fa52906a4478..4aee1f6a00ca 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * sched_entity:
+ *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+
+ return 0;
+ }
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * hardware:
+ *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 2150062949d4..a82bff3231a4 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+
+ #else
+
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ return 0;
+ }
++#endif
+
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 4c36cc680361..de56b7d1650c 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3629,4 +3633,9 @@ static inline void balance_callbacks(struct rq *rq, struct balance_callback *hea
+
+ #endif
+
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index eb0cdcd4d921..72224ecb5cbf 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ } else {
+ struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct sched_domain *sd;
+ int dcount = 0;
++#endif
+ #endif
+ cpu = (unsigned long)(v - 2);
+ rq = cpu_rq(cpu);
+@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ /* domain-specific stats */
+ rcu_read_lock();
+ for_each_domain(cpu, sd) {
+@@ -160,6 +163,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ sd->ttwu_move_balance);
+ }
+ rcu_read_unlock();
++#endif
+ #endif
+ }
+ return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 237780aa3c53..7980ec09bf3b 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
+
+ #endif /* CONFIG_SCHEDSTATS */
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ struct sched_entity se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index ae1b42775ef9..7cf9073e1dc6 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -16,6 +16,14 @@
+ #include "sched.h"
+ #include "autogroup.h"
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_core.h"
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++ return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++#else /* !CONFIG_SCHED_ALT */
+ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+ int prio;
+@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+
+ return prio;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Calculate the expected normal priority: i.e. priority
+@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ */
+ static inline int normal_prio(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++#else /* !CONFIG_SCHED_ALT */
+ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
+
+ void set_user_nice(struct task_struct *p, long nice)
+ {
++#ifdef CONFIG_SCHED_ALT
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++ return;
++ /*
++ * We have to be careful, if called from sys_setpriority(),
++ * the task might be in the middle of scheduling on another CPU.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ rq = __task_access_lock(p, &lock);
++
++ p->static_prio = NICE_TO_PRIO(nice);
++ /*
++ * The RT priorities are set via sched_setscheduler(), but we still
++ * allow the 'normal' nice value to be set - but as expected
++ * it won't have any effect on scheduling while the task's policy
++ * is not SCHED_NORMAL/SCHED_BATCH:
++ */
++ if (task_has_rt_policy(p))
++ goto out_unlock;
++
++ p->prio = effective_prio(p);
++
++ check_task_changed(p, rq);
++out_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++#else
+ bool queued, running;
+ struct rq *rq;
+ int old_prio;
+@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ * lowered its priority, then reschedule its CPU:
+ */
+ p->sched_class->prio_changed(rq, p, old_prio);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL(set_user_nice);
+
+@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
+ */
+ int task_prio(const struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++/*
++ * sched policy              return value    kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle  [0 ... 39]      100            0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]     [0 ... 99]
++ */
++ return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++ task_sched_prio_normal(p, task_rq(p));
++#else
+ return p->prio - MAX_RT_PRIO;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -259,6 +316,7 @@ int sched_core_idle_cpu(int cpu)
+ #endif
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * This function computes an effective utilization for the given CPU, to be
+ * used for frequency selection given the linear relation: f = u * f_max.
+@@ -357,6 +415,7 @@ unsigned long sched_cpu_util(int cpu)
+ {
+ return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* CONFIG_SMP */
+
+ /**
+@@ -401,9 +460,11 @@ static void __setscheduler_params(struct task_struct *p,
+
+ p->policy = policy;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_policy(policy))
+ __setparam_dl(p, attr);
+ else if (fair_policy(policy))
++#endif /* CONFIG_SCHED_ALT */
+ p->static_prio = NICE_TO_PRIO(attr->sched_nice);
+
+ /*
+@@ -413,7 +474,9 @@ static void __setscheduler_params(struct task_struct *p,
+ */
+ p->rt_priority = attr->sched_priority;
+ p->normal_prio = normal_prio(p);
++#ifndef CONFIG_SCHED_ALT
+ set_load_weight(p, true);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -429,6 +492,8 @@ static bool check_same_owner(struct task_struct *p)
+ uid_eq(cred->euid, pcred->uid));
+ }
+
++#ifndef CONFIG_SCHED_ALT
++
+ #ifdef CONFIG_UCLAMP_TASK
+
+ static int uclamp_validate(struct task_struct *p,
+@@ -542,6 +607,7 @@ static inline int uclamp_validate(struct task_struct *p,
+ static void __setscheduler_uclamp(struct task_struct *p,
+ const struct sched_attr *attr) { }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Allow unprivileged RT tasks to decrease priority.
+@@ -552,11 +618,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ int policy, int reset_on_fork)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (fair_policy(policy)) {
+ if (attr->sched_nice < task_nice(p) &&
+ !is_nice_reduction(p, attr->sched_nice))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ if (rt_policy(policy)) {
+ unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
+@@ -571,6 +639,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ goto req_priv;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Can't set/change SCHED_DEADLINE policy at all for now
+ * (safest behavior); in the future we would like to allow
+@@ -588,6 +657,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ if (!is_nice_reduction(p, task_nice(p)))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Can't change other user's priorities: */
+ if (!check_same_owner(p))
+@@ -610,6 +680,158 @@ int __sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ bool user, bool pi)
+ {
++#ifdef CONFIG_SCHED_ALT
++ const struct sched_attr dl_squash_attr = {
++ .size = sizeof(struct sched_attr),
++ .sched_policy = SCHED_FIFO,
++ .sched_nice = 0,
++ .sched_priority = 99,
++ };
++ int oldpolicy = -1, policy = attr->sched_policy;
++ int retval, newprio;
++ struct balance_callback *head;
++ unsigned long flags;
++ struct rq *rq;
++ int reset_on_fork;
++ raw_spinlock_t *lock;
++
++ /* The pi code expects interrupts enabled */
++ BUG_ON(pi && in_interrupt());
++
++ /*
++ * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio-0
++ * (i.e. user priority 99) SCHED_FIFO
++ */
++ if (unlikely(SCHED_DEADLINE == policy)) {
++ attr = &dl_squash_attr;
++ policy = attr->sched_policy;
++ }
++recheck:
++ /* Double check policy once rq lock held */
++ if (policy < 0) {
++ reset_on_fork = p->sched_reset_on_fork;
++ policy = oldpolicy = p->policy;
++ } else {
++ reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++ if (policy > SCHED_IDLE)
++ return -EINVAL;
++ }
++
++ if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++ return -EINVAL;
++
++ /*
++ * Valid priorities for SCHED_FIFO and SCHED_RR are
++ * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++ * SCHED_BATCH and SCHED_IDLE is 0.
++ */
++ if (attr->sched_priority < 0 ||
++ (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++ (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++ return -EINVAL;
++ if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++ (attr->sched_priority != 0))
++ return -EINVAL;
++
++ if (user) {
++ retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++ if (retval)
++ return retval;
++
++ retval = security_task_setscheduler(p);
++ if (retval)
++ return retval;
++ }
++
++ /*
++ * Make sure no PI-waiters arrive (or leave) while we are
++ * changing the priority of the task:
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++ /*
++ * To be able to change p->policy safely, task_access_lock()
++ * must be called.
++ * IF use task_access_lock() here:
++ * For the task p which is not running, reading rq->stop is
++ * racy but acceptable as ->stop doesn't change much.
++ * An enhancement could be made to read rq->stop safely.
++ */
++ rq = __task_access_lock(p, &lock);
++
++ /*
++ * Changing the policy of the stop thread is a very bad idea
++ */
++ if (p == rq->stop) {
++ retval = -EINVAL;
++ goto unlock;
++ }
++
++ /*
++ * If not changing anything there's no need to proceed further:
++ */
++ if (unlikely(policy == p->policy)) {
++ if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++ goto change;
++ if (!rt_policy(policy) &&
++ NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++ goto change;
++
++ p->sched_reset_on_fork = reset_on_fork;
++ retval = 0;
++ goto unlock;
++ }
++change:
++
++ /* Re-check policy now with rq lock held */
++ if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++ policy = oldpolicy = -1;
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ goto recheck;
++ }
++
++ p->sched_reset_on_fork = reset_on_fork;
++
++ newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++ if (pi) {
++ /*
++ * Take priority boosted tasks into account. If the new
++ * effective priority is unchanged, we just store the new
++ * normal parameters and do not touch the scheduler class and
++ * the runqueue. This will be done when the task deboost
++ * itself.
++ */
++ newprio = rt_effective_prio(p, newprio);
++ }
++
++ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++ __setscheduler_params(p, attr);
++ __setscheduler_prio(p, newprio);
++ }
++
++ check_task_changed(p, rq);
++
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++ head = splice_balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ if (pi)
++ rt_mutex_adjust_pi(p);
++
++ /* Run balance callbacks after we've adjusted the PI chain: */
++ balance_callbacks(rq, head);
++ preempt_enable();
++
++ return 0;
++
++unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ return retval;
++#else /* !CONFIG_SCHED_ALT */
+ int oldpolicy = -1, policy = attr->sched_policy;
+ int retval, oldprio, newprio, queued, running;
+ const struct sched_class *prev_class;
+@@ -835,6 +1057,7 @@ int __sched_setscheduler(struct task_struct *p,
+ if (cpuset_locked)
+ cpuset_unlock();
+ return retval;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ static int _sched_setscheduler(struct task_struct *p, int policy,
+@@ -1012,9 +1235,12 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
+
+ static void get_params(struct task_struct *p, struct sched_attr *attr)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (task_has_dl_policy(p))
+ __getparam_dl(p, attr);
+- else if (task_has_rt_policy(p))
++ else
++#endif
++ if (task_has_rt_policy(p))
+ attr->sched_priority = p->rt_priority;
+ else
+ attr->sched_nice = task_nice(p);
+@@ -1236,6 +1462,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
+ #ifdef CONFIG_SMP
+ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ {
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * If the task isn't a deadline task or admission control is
+ * disabled then we don't care about affinity changes.
+@@ -1252,6 +1479,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ guard(rcu)();
+ if (!cpumask_subset(task_rq(p)->rd->span, mask))
+ return -EBUSY;
++#endif
+
+ return 0;
+ }
+@@ -1276,9 +1504,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ ctx->new_mask = new_mask;
+ ctx->flags |= SCA_CHECK;
+
++#ifndef CONFIG_SCHED_ALT
+ retval = dl_task_check_affinity(p, new_mask);
+ if (retval)
+ goto out_free_new_mask;
++#endif
+
+ retval = __set_cpus_allowed_ptr(p, ctx);
+ if (retval)
+@@ -1458,13 +1688,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+
+ static void do_sched_yield(void)
+ {
+- struct rq_flags rf;
+ struct rq *rq;
++ struct rq_flags rf;
++
++#ifdef CONFIG_SCHED_ALT
++ struct task_struct *p;
++
++ if (!sched_yield_type)
++ return;
+
+ rq = this_rq_lock_irq(&rf);
+
++ schedstat_inc(rq->yld_count);
++
++ p = current;
++ if (rt_task(p)) {
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ } else if (rq->nr_running > 1) {
++ do_sched_yield_type_1(p, rq);
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ }
++#else /* !CONFIG_SCHED_ALT */
++ rq = this_rq_lock_irq(&rf);
++
+ schedstat_inc(rq->yld_count);
+ current->sched_class->yield_task(rq);
++#endif /* !CONFIG_SCHED_ALT */
+
+ preempt_disable();
+ rq_unlock_irq(rq, &rf);
+@@ -1533,6 +1784,9 @@ EXPORT_SYMBOL(yield);
+ */
+ int __sched yield_to(struct task_struct *p, bool preempt)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else /* !CONFIG_SCHED_ALT */
+ struct task_struct *curr = current;
+ struct rq *rq, *p_rq;
+ int yielded = 0;
+@@ -1578,6 +1832,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ schedule();
+
+ return yielded;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL_GPL(yield_to);
+
+@@ -1598,7 +1853,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
+ case SCHED_RR:
+ ret = MAX_RT_PRIO-1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1625,7 +1882,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ case SCHED_RR:
+ ret = 1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1636,7 +1895,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+
+ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned int time_slice = 0;
++#endif
+ int retval;
+
+ if (pid < 0)
+@@ -1651,6 +1912,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ if (retval)
+ return retval;
+
++#ifndef CONFIG_SCHED_ALT
+ scoped_guard (task_rq_lock, p) {
+ struct rq *rq = scope.rq;
+ if (p->sched_class->get_rr_interval)
+@@ -1659,6 +1921,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ }
+
+ jiffies_to_timespec64(time_slice, t);
++#else
++ }
++
++ alt_sched_debug();
++
++ *t = ns_to_timespec64(sysctl_sched_base_slice);
++#endif /* !CONFIG_SCHED_ALT */
+ return 0;
+ }
+
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 76504b776d03..de7d3db0c67b 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+ * Scheduler topology setup/handling methods
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1451,8 +1452,10 @@ static void asym_cpu_capacity_scan(void)
+ */
+
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1687,6 +1690,7 @@ sd_init(struct sched_domain_topology_level *tl,
+
+ return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ /*
+ * Topology list, bottom-up.
+@@ -1723,6 +1727,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ sched_domain_topology_saved = NULL;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2789,3 +2794,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++ struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++ return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 79e6cb1d5c48..61bc0352e233 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+
+ /* Constants used for minimum and maximum */
+
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1907,6 +1911,17 @@ static struct ctl_table kern_table[] = {
+ .proc_handler = proc_dointvec,
+ },
+ #endif
++#ifdef CONFIG_SCHED_ALT
++ {
++ .procname = "yield_type",
++ .data = &sched_yield_type,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_TWO,
++ },
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ {
+ .procname = "spin_retry",
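The table entry added here surfaces Project C's sched_yield_type as
/proc/sys/kernel/yield_type, clamped to the range 0-2 by the
SYSCTL_ZERO/SYSCTL_TWO bounds. From the do_sched_yield() hunk earlier in this
patch, 0 turns sched_yield() into a no-op and any non-zero value takes the
requeue path; how 1 and 2 differ (the do_sched_yield_type_1() hook hints at
per-type behaviour) is decided in alt_core.c, which is not part of this
excerpt. Runtime example:

    sysctl kernel.yield_type=0    # make sched_yield() a no-op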
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index b8ee320208d4..087b252383cb 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2074,8 +2074,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ int ret = 0;
+ u64 slack;
+
++#ifndef CONFIG_SCHED_ALT
+ slack = current->timer_slack_ns;
+- if (rt_task(current))
++ if (dl_task(current) || rt_task(current))
++#endif
+ slack = 0;
+
+ hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index e9c6f9d0e42c..43ee0a94abdd 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ u64 stime, utime;
+
+ task_cputime(p, &utime, &stime);
+- store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++ store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -867,6 +867,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ }
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ if (tsk->dl.dl_overrun) {
+@@ -874,6 +875,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ }
+ }
++#endif
+
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -901,8 +903,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ u64 samples[CPUCLOCK_MAX];
+ unsigned long soft;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk))
+ check_dl_overrun(tsk);
++#endif
+
+ if (expiry_cache_is_inactive(pct))
+ return;
+@@ -916,7 +920,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ if (soft != RLIM_INFINITY) {
+ /* Task RT timeout is accounted in jiffies. RTTIME is usec */
+- unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++ unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+
+ /* At the hard limit, send SIGKILL. No further action. */
+@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ return true;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk) && tsk->dl.dl_overrun)
+ return true;
++#endif
+
+ return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index c4ad7cd7e778..e7202ffad129 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1419,10 +1419,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ /* Make this a -deadline thread */
+ static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++ /* No deadline on BMQ/PDS, use RR */
++ .sched_policy = SCHED_RR,
++#else
+ .sched_policy = SCHED_DEADLINE,
+ .sched_runtime = 100000ULL,
+ .sched_deadline = 10000000ULL,
+ .sched_period = 10000000ULL
++#endif
+ };
+ struct wakeup_test_data *x = data;
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 6f2545037e57..da5e353533a5 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1249,6 +1249,7 @@ static bool kick_pool(struct worker_pool *pool)
+
+ p = worker->task;
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ /*
+ * Idle @worker is about to execute @work and waking up provides an
+@@ -1278,6 +1279,8 @@ static bool kick_pool(struct worker_pool *pool)
+ }
+ }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ wake_up_process(p);
+ return true;
+ }
+@@ -1406,7 +1409,11 @@ void wq_worker_running(struct task_struct *task)
+ * CPU intensive auto-detection cares about how long a work item hogged
+ * CPU without sleeping. Reset the starting timestamp on wakeup.
+ */
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+
+ WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1491,7 +1498,11 @@ void wq_worker_tick(struct task_struct *task)
+ * We probably want to make this prettier in the future.
+ */
+ if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++ worker->task->sched_time - worker->current_at <
++#else
+ worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ return;
+
+@@ -3159,7 +3170,11 @@ __acquires(&pool->lock)
+ worker->current_func = work->func;
+ worker->current_pwq = pwq;
+ if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ work_data = *work_data_bits(work);
+ worker->current_color = get_work_color(work_data);
+
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..7748d78c
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig 2024-11-13 14:45:36.566335895 -0500
++++ b/init/Kconfig 2024-11-13 14:47:02.670787774 -0500
+@@ -860,8 +860,9 @@ config UCLAMP_BUCKETS_COUNT
+ If in doubt, use the default value.
+
+ menuconfig SCHED_ALT
++ depends on X86_64
+ bool "Alternative CPU Schedulers"
+- default y
++ default n
+ help
+ This feature enables the alternative CPU schedulers
+
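In practice this means Gentoo users on anything other than x86_64 no longer see
the option at all, and on x86_64 it must be switched on by hand. A kernel
config enabling the BMQ flavour would carry something like the lines below;
CONFIG_SCHED_BMQ and CONFIG_SCHED_PDS are the per-flavour symbols implied by
the alt_sched.h includes in the 5020 patch, and the exact prompts in menuconfig
may differ:

    CONFIG_SCHED_ALT=y
    CONFIG_SCHED_BMQ=y
    # CONFIG_SCHED_PDS is not set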
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-14 13:49 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-14 13:49 UTC (permalink / raw
To: gentoo-commits
commit: 3e3358ec19faa4465e8fd12d2e884f345ed5a2fb
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 14 13:49:11 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 14 13:49:11 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3e3358ec
Bluetooth: hci_core: Fix calling mgmt_device_connected
Bug: https://bugs.gentoo.org/942925
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +++
2400_bluetooth-mgmt-device-connected-fix.patch | 34 ++++++++++++++++++++++++++
2 files changed, 38 insertions(+)
diff --git a/0000_README b/0000_README
index 9d412a8c..39ba643a 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2400_bluetooth-mgmt-device-connected-fix.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
+Desc: Bluetooth: hci_core: Fix calling mgmt_device_connected
+
Patch: 2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
From: https://bugs.gentoo.org/942797
Desc: Revert: HID: multitouch: Add support for lenovo Y9000P Touchpad
diff --git a/2400_bluetooth-mgmt-device-connected-fix.patch b/2400_bluetooth-mgmt-device-connected-fix.patch
new file mode 100644
index 00000000..86cf10e9
--- /dev/null
+++ b/2400_bluetooth-mgmt-device-connected-fix.patch
@@ -0,0 +1,34 @@
+From 48adce305dc6d6c444fd00e40ad07d4a41acdfbf Mon Sep 17 00:00:00 2001
+From: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
+Date: Fri, 8 Nov 2024 11:19:54 -0500
+Subject: Bluetooth: hci_core: Fix calling mgmt_device_connected
+
+Since 61a939c68ee0 ("Bluetooth: Queue incoming ACL data until
+BT_CONNECTED state is reached") there is no longer any need to call
+mgmt_device_connected as ACL data will be queued until the BT_CONNECTED
+state is reached.
+
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=219458
+Link: https://github.com/bluez/bluez/issues/1014
+Fixes: 333b4fd11e89 ("Bluetooth: L2CAP: Fix uaf in l2cap_connect")
+Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
+---
+ net/bluetooth/hci_core.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index f6cff34a85421c..f9e19f9cb5a386 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3792,8 +3792,6 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+
+ hci_dev_lock(hdev);
+ conn = hci_conn_hash_lookup_handle(hdev, handle);
+- if (conn && hci_dev_test_flag(hdev, HCI_MGMT))
+- mgmt_device_connected(hdev, conn, NULL, 0);
+ hci_dev_unlock(hdev);
+
+ if (conn) {
+--
+cgit 1.2.3-korg
+
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-14 14:53 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-14 14:53 UTC (permalink / raw
To: gentoo-commits
commit: b0dedd957309335454022dc09e5fa80c209df009
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 14 14:53:17 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 14 14:53:17 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b0dedd95
Linux patch 6.11.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-6.11.8.patch | 6580 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6584 insertions(+)
diff --git a/0000_README b/0000_README
index 39ba643a..240867d2 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-6.11.7.patch
From: https://www.kernel.org
Desc: Linux 6.11.7
+Patch: 1007_linux-6.11.8.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.8
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1007_linux-6.11.8.patch b/1007_linux-6.11.8.patch
new file mode 100644
index 00000000..33226822
--- /dev/null
+++ b/1007_linux-6.11.8.patch
@@ -0,0 +1,6580 @@
+diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+index e95c216282818e..fb02e579463c98 100644
+--- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
++++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+@@ -61,7 +61,7 @@ properties:
+ - gmii
+ - rgmii
+ - sgmii
+- - 1000BaseX
++ - 1000base-x
+
+ xlnx,phy-type:
+ description:
+diff --git a/Documentation/netlink/specs/mptcp_pm.yaml b/Documentation/netlink/specs/mptcp_pm.yaml
+index 30d8342cacc870..dc190bf838fec6 100644
+--- a/Documentation/netlink/specs/mptcp_pm.yaml
++++ b/Documentation/netlink/specs/mptcp_pm.yaml
+@@ -293,7 +293,6 @@ operations:
+ doc: Get endpoint information
+ attribute-set: attr
+ dont-validate: [ strict ]
+- flags: [ uns-admin-perm ]
+ do: &get-addr-attrs
+ request:
+ attributes:
+diff --git a/Makefile b/Makefile
+index 692bbdf40fb5f2..b8641dde171ff9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/rockchip/rk3036-kylin.dts b/arch/arm/boot/dts/rockchip/rk3036-kylin.dts
+index e32c73d32f0aaf..2f84e28057121b 100644
+--- a/arch/arm/boot/dts/rockchip/rk3036-kylin.dts
++++ b/arch/arm/boot/dts/rockchip/rk3036-kylin.dts
+@@ -325,8 +325,8 @@ regulator-state-mem {
+ &i2c2 {
+ status = "okay";
+
+- rt5616: rt5616@1b {
+- compatible = "rt5616";
++ rt5616: audio-codec@1b {
++ compatible = "realtek,rt5616";
+ reg = <0x1b>;
+ clocks = <&cru SCLK_I2S_OUT>;
+ clock-names = "mclk";
+diff --git a/arch/arm/boot/dts/rockchip/rk3036.dtsi b/arch/arm/boot/dts/rockchip/rk3036.dtsi
+index 96279d1e02fec5..63b9912be06a7c 100644
+--- a/arch/arm/boot/dts/rockchip/rk3036.dtsi
++++ b/arch/arm/boot/dts/rockchip/rk3036.dtsi
+@@ -384,12 +384,13 @@ reboot-mode {
+ };
+ };
+
+- acodec: acodec-ana@20030000 {
+- compatible = "rk3036-codec";
++ acodec: audio-codec@20030000 {
++ compatible = "rockchip,rk3036-codec";
+ reg = <0x20030000 0x4000>;
+- rockchip,grf = <&grf>;
+ clock-names = "acodec_pclk";
+ clocks = <&cru PCLK_ACODEC>;
++ rockchip,grf = <&grf>;
++ #sound-dai-cells = <0>;
+ status = "disabled";
+ };
+
+@@ -399,7 +400,6 @@ hdmi: hdmi@20034000 {
+ interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru PCLK_HDMI>;
+ clock-names = "pclk";
+- rockchip,grf = <&grf>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_ctl>;
+ #sound-dai-cells = <0>;
+@@ -553,11 +553,11 @@ i2c0: i2c@20072000 {
+ };
+
+ spi: spi@20074000 {
+- compatible = "rockchip,rockchip-spi";
++ compatible = "rockchip,rk3036-spi";
+ reg = <0x20074000 0x1000>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&cru PCLK_SPI>, <&cru SCLK_SPI>;
+- clock-names = "apb-pclk","spi_pclk";
++ clocks = <&cru SCLK_SPI>, <&cru PCLK_SPI>;
++ clock-names = "spiclk", "apb_pclk";
+ dmas = <&pdma 8>, <&pdma 9>;
+ dma-names = "tx", "rx";
+ pinctrl-names = "default";
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 89b331575ed493..402ae0297993c2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -2173,6 +2173,7 @@ config ARM64_SME
+ bool "ARM Scalable Matrix Extension support"
+ default y
+ depends on ARM64_SVE
++ depends on BROKEN
+ help
+ The Scalable Matrix Extension (SME) is an extension to the AArch64
+ execution state which utilises a substantial subset of the SVE
+diff --git a/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi b/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi
+index c6540768bdb926..87211c18d65a95 100644
+--- a/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi
+@@ -15,7 +15,7 @@ vpu: vpu@2c000000 {
+ mu_m0: mailbox@2d000000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d000000 0x20000>;
+- interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 472 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_0>;
+ status = "disabled";
+@@ -24,7 +24,7 @@ mu_m0: mailbox@2d000000 {
+ mu1_m0: mailbox@2d020000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d020000 0x20000>;
+- interrupts = <GIC_SPI 470 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 473 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_1>;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts b/arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
+index 00a240484c254e..b6fd292a3b91dc 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
+@@ -191,6 +191,18 @@ ldb_lvds_ch1: endpoint {
+ };
+ };
+
++&media_blk_ctrl {
++ /*
++ * The LVDS panel on this device uses 72.4 MHz pixel clock,
++ * set IMX8MP_VIDEO_PLL1 to 72.4 * 7 = 506.8 MHz so the LDB
++ * serializer and LCDIFv3 scanout engine can reach an accurate
++ * pixel clock of exactly 72.4 MHz.
++ */
++ assigned-clock-rates = <500000000>, <200000000>,
++ <0>, <0>, <500000000>,
++ <506800000>;
++};
++
+ &snvs_pwrkey {
+ status = "okay";
+ };
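The clock comment in the hunk above is fixed arithmetic: the LDB serializer runs at seven times the pixel clock, so the video PLL has to be programmed to pixel_clock * 7 for the scanout engine to divide back to an exact rate. A small sketch of that calculation (variable names are illustrative, not a driver API):

/* LVDS clock arithmetic from the comment above: the 7x factor is
 * the LVDS serializer ratio. */
#include <stdio.h>

int main(void)
{
	unsigned long pixel_clk_hz = 72400000UL;	/* 72.4 MHz panel */
	unsigned long pll_rate_hz = pixel_clk_hz * 7;	/* LDB runs at 7x */

	printf("video PLL: %lu Hz\n", pll_rate_hz);	/* 506800000 */
	printf("pixel clk: %lu Hz\n", pll_rate_hz / 7);	/* exactly 72.4 MHz */
	return 0;
}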
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 603dfe80216f88..6113ea3a284ce5 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -1261,7 +1261,7 @@ usdhc1: mmc@30b40000 {
+ compatible = "fsl,imx8mp-usdhc", "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b40000 0x10000>;
+ interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MP_CLK_DUMMY>,
++ clocks = <&clk IMX8MP_CLK_IPG_ROOT>,
+ <&clk IMX8MP_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MP_CLK_USDHC1_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -1275,7 +1275,7 @@ usdhc2: mmc@30b50000 {
+ compatible = "fsl,imx8mp-usdhc", "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b50000 0x10000>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MP_CLK_DUMMY>,
++ clocks = <&clk IMX8MP_CLK_IPG_ROOT>,
+ <&clk IMX8MP_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MP_CLK_USDHC2_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -1289,7 +1289,7 @@ usdhc3: mmc@30b60000 {
+ compatible = "fsl,imx8mp-usdhc", "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b60000 0x10000>;
+ interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MP_CLK_DUMMY>,
++ clocks = <&clk IMX8MP_CLK_IPG_ROOT>,
+ <&clk IMX8MP_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MP_CLK_USDHC3_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+diff --git a/arch/arm64/boot/dts/freescale/imx8qxp-ss-vpu.dtsi b/arch/arm64/boot/dts/freescale/imx8qxp-ss-vpu.dtsi
+index 7894a3ab26d6bc..f81937b5fb720d 100644
+--- a/arch/arm64/boot/dts/freescale/imx8qxp-ss-vpu.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8qxp-ss-vpu.dtsi
+@@ -5,6 +5,14 @@
+ * Author: Alexander Stein
+ */
+
++&mu_m0 {
++ interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
++};
++
++&mu1_m0 {
++ interrupts = <GIC_SPI 470 IRQ_TYPE_LEVEL_HIGH>;
++};
++
+ &vpu_core0 {
+ reg = <0x2d040000 0x10000>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 9bafb3b350ff62..38cb524cc56893 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -1973,7 +1973,7 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+
+ clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
+ <&gcc GCC_PCIE_1_PIPE_CLK_SRC>,
+- <&pcie1_phy>,
++ <&pcie1_phy QMP_PCIE_PIPE_CLK>,
+ <&rpmhcc RPMH_CXO_CLK>,
+ <&gcc GCC_PCIE_1_AUX_CLK>,
+ <&gcc GCC_PCIE_1_CFG_AHB_CLK>,
+diff --git a/arch/arm64/boot/dts/rockchip/Makefile b/arch/arm64/boot/dts/rockchip/Makefile
+index fda1b980eb4bc9..36258dc8dafd5e 100644
+--- a/arch/arm64/boot/dts/rockchip/Makefile
++++ b/arch/arm64/boot/dts/rockchip/Makefile
+@@ -20,6 +20,7 @@ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-evb.dtb
+ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-nanopi-r2c.dtb
+ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-nanopi-r2c-plus.dtb
+ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-nanopi-r2s.dtb
++dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-nanopi-r2s-plus.dtb
+ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-orangepi-r1-plus.dtb
+ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-orangepi-r1-plus-lts.dtb
+ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3328-rock64.dtb
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+index bb1aea82e666ed..b7163ed74232d7 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+@@ -66,7 +66,6 @@ &emmc {
+ bus-width = <8>;
+ cap-mmc-highspeed;
+ mmc-hs200-1_8v;
+- supports-emmc;
+ mmc-pwrseq = <&emmc_pwrseq>;
+ non-removable;
+ vmmc-supply = <&vcc_3v3>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index 9232357f4fec9c..d9e191ad1d77e0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -36,14 +36,14 @@ leds {
+
+ power_led: led-0 {
+ label = "firefly:red:power";
+- linux,default-trigger = "ir-power-click";
++ linux,default-trigger = "default-on";
+ default-state = "on";
+ gpios = <&gpio0 RK_PA6 GPIO_ACTIVE_HIGH>;
+ };
+
+ user_led: led-1 {
+ label = "firefly:blue:user";
+- linux,default-trigger = "ir-user-click";
++ linux,default-trigger = "rc-feedback";
+ default-state = "off";
+ gpios = <&gpio0 RK_PB2 GPIO_ACTIVE_HIGH>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s-plus.dts b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s-plus.dts
+new file mode 100644
+index 00000000000000..4b9ced67742d26
+--- /dev/null
++++ b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s-plus.dts
+@@ -0,0 +1,30 @@
++// SPDX-License-Identifier: (GPL-2.0+ OR MIT)
++/*
++ * (C) Copyright 2018 FriendlyElec Computer Tech. Co., Ltd.
++ * (http://www.friendlyarm.com)
++ *
++ * (C) Copyright 2016 Rockchip Electronics Co., Ltd
++ */
++
++/dts-v1/;
++#include "rk3328-nanopi-r2s.dts"
++
++/ {
++ compatible = "friendlyarm,nanopi-r2s-plus", "rockchip,rk3328";
++ model = "FriendlyElec NanoPi R2S Plus";
++
++ aliases {
++ mmc1 = &emmc;
++ };
++};
++
++&emmc {
++ bus-width = <8>;
++ cap-mmc-highspeed;
++ disable-wp;
++ mmc-hs200-1_8v;
++ non-removable;
++ pinctrl-names = "default";
++ pinctrl-0 = <&emmc_clk &emmc_cmd &emmc_bus8>;
++ status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index b01efd6d042c8e..a60259ae8a5323 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -754,8 +754,7 @@ hdmi: hdmi@ff3c0000 {
+ compatible = "rockchip,rk3328-dw-hdmi";
+ reg = <0x0 0xff3c0000 0x0 0x20000>;
+ reg-io-width = <4>;
+- interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 71 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru PCLK_HDMI>,
+ <&cru SCLK_HDMI_SFC>,
+ <&cru SCLK_RTC32K>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi b/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
+index 8ac8acf4082df4..ab3fda69a1fb7b 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
+@@ -61,7 +61,6 @@ i2c_lvds_blc: i2c@0 {
+ fan: fan@18 {
+ compatible = "ti,amc6821";
+ reg = <0x18>;
+- #cooling-cells = <2>;
+ };
+
+ rtc_twi: rtc@6f {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-eaidk-610.dts b/arch/arm64/boot/dts/rockchip/rk3399-eaidk-610.dts
+index 173da81fc23117..ea11d6b86e506b 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-eaidk-610.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-eaidk-610.dts
+@@ -542,7 +542,7 @@ &i2c1 {
+ status = "okay";
+
+ rt5651: audio-codec@1a {
+- compatible = "rockchip,rt5651";
++ compatible = "realtek,rt5651";
+ reg = <0x1a>;
+ clocks = <&cru SCLK_I2S_8CH_OUT>;
+ clock-names = "mclk";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts
+index ef754ea30a940a..855e0ca92270b5 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts
+@@ -167,7 +167,6 @@ vcc1v8_lcd: vcc1v8-lcd {
+ regulator-max-microvolt = <1800000>;
+ vin-supply = <&vcc3v3_sys>;
+ gpio = <&gpio3 RK_PA5 GPIO_ACTIVE_HIGH>;
+- pinctrl-names = "default";
+ };
+
+ /* MIPI DSI panel 2.8v supply */
+@@ -179,7 +178,6 @@ vcc2v8_lcd: vcc2v8-lcd {
+ regulator-max-microvolt = <2800000>;
+ vin-supply = <&vcc3v3_sys>;
+ gpio = <&gpio3 RK_PA1 GPIO_ACTIVE_HIGH>;
+- pinctrl-names = "default";
+ };
+
+ vibrator {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock960.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock960.dtsi
+index c920ddf44bafd0..55ac7145c08508 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock960.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock960.dtsi
+@@ -577,7 +577,7 @@ &uart0 {
+ bluetooth {
+ compatible = "brcm,bcm43438-bt";
+ clocks = <&rk808 1>;
+- clock-names = "ext_clock";
++ clock-names = "txco";
+ device-wakeup-gpios = <&gpio2 RK_PD3 GPIO_ACTIVE_HIGH>;
+ host-wakeup-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_HIGH>;
+ shutdown-gpios = <&gpio0 RK_PB1 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-sapphire-excavator.dts b/arch/arm64/boot/dts/rockchip/rk3399-sapphire-excavator.dts
+index dbec2b7173a0b6..31ea3d0182c062 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-sapphire-excavator.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-sapphire-excavator.dts
+@@ -163,7 +163,7 @@ &i2c1 {
+ status = "okay";
+
+ rt5651: rt5651@1a {
+- compatible = "rockchip,rt5651";
++ compatible = "realtek,rt5651";
+ reg = <0x1a>;
+ clocks = <&cru SCLK_I2S_8CH_OUT>;
+ clock-names = "mclk";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353p.dts b/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353p.dts
+index a73cf30801ec7f..9816a4ed4599e8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353p.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353p.dts
+@@ -92,7 +92,7 @@ button-r2 {
+ };
+
+ &i2c2 {
+- pintctrl-names = "default";
++ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2m1_xfer>;
+ status = "okay";
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353v.dts b/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353v.dts
+index e9954a33e8cd31..a79a5614bcc885 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353v.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353v.dts
+@@ -79,7 +79,7 @@ button-r2 {
+ };
+
+ &i2c2 {
+- pintctrl-names = "default";
++ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2m1_xfer>;
+ status = "okay";
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-box-demo.dts b/arch/arm64/boot/dts/rockchip/rk3566-box-demo.dts
+index 0c18406e4c5973..7d468093382393 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-box-demo.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-box-demo.dts
+@@ -449,9 +449,9 @@ &uart1 {
+ bluetooth {
+ compatible = "brcm,bcm43438-bt";
+ clocks = <&pmucru CLK_RTC_32K>;
+- clock-names = "ext_clock";
+- device-wake-gpios = <&gpio2 RK_PC1 GPIO_ACTIVE_HIGH>;
+- host-wake-gpios = <&gpio2 RK_PC0 GPIO_ACTIVE_HIGH>;
++ clock-names = "txco";
++ device-wakeup-gpios = <&gpio2 RK_PC1 GPIO_ACTIVE_HIGH>;
++ host-wakeup-gpios = <&gpio2 RK_PC0 GPIO_ACTIVE_HIGH>;
+ shutdown-gpios = <&gpio2 RK_PB7 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&bt_host_wake_l &bt_wake_l &bt_enable_h>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts b/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
+index c1194d1e438d0d..9a2f59a351dee5 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
+@@ -507,7 +507,6 @@ &sdhci {
+ non-removable;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_bus8 &emmc_clk &emmc_cmd>;
+- supports-emmc;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-pinenote.dtsi b/arch/arm64/boot/dts/rockchip/rk3566-pinenote.dtsi
+index ae2536c65a8300..0131f2cdd312f3 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-pinenote.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3566-pinenote.dtsi
+@@ -684,11 +684,11 @@ bluetooth {
+ compatible = "brcm,bcm43438-bt";
+ clocks = <&rk817 1>;
+ clock-names = "lpo";
+- device-wake-gpios = <&gpio0 RK_PC2 GPIO_ACTIVE_HIGH>;
+- host-wake-gpios = <&gpio0 RK_PC3 GPIO_ACTIVE_HIGH>;
+- reset-gpios = <&gpio0 RK_PC4 GPIO_ACTIVE_LOW>;
++ device-wakeup-gpios = <&gpio0 RK_PC2 GPIO_ACTIVE_HIGH>;
++ host-wakeup-gpios = <&gpio0 RK_PC3 GPIO_ACTIVE_HIGH>;
+ pinctrl-0 = <&bt_enable_h>, <&bt_host_wake_l>, <&bt_wake_h>;
+ pinctrl-names = "default";
++ shutdown-gpios = <&gpio0 RK_PC4 GPIO_ACTIVE_HIGH>;
+ vbat-supply = <&vcc_wl>;
+ vddio-supply = <&vcca_1v8_pmu>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-radxa-cm3.dtsi b/arch/arm64/boot/dts/rockchip/rk3566-radxa-cm3.dtsi
+index 45de2630bb503a..1e36f73840dad2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-radxa-cm3.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3566-radxa-cm3.dtsi
+@@ -402,9 +402,9 @@ bluetooth {
+ clock-names = "lpo";
+ device-wakeup-gpios = <&gpio2 RK_PB2 GPIO_ACTIVE_HIGH>;
+ host-wakeup-gpios = <&gpio2 RK_PB1 GPIO_ACTIVE_HIGH>;
+- reset-gpios = <&gpio2 RK_PC0 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&bt_host_wake_h &bt_reg_on_h &bt_wake_host_h>;
++ shutdown-gpios = <&gpio2 RK_PC0 GPIO_ACTIVE_HIGH>;
+ vbat-supply = <&vcc_3v3>;
+ vddio-supply = <&vcc_1v8>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-lubancat-2.dts b/arch/arm64/boot/dts/rockchip/rk3568-lubancat-2.dts
+index a3112d5df2008d..b505a4537ee8ca 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-lubancat-2.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-lubancat-2.dts
+@@ -589,7 +589,6 @@ &sdhci {
+ non-removable;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_bus8 &emmc_clk &emmc_cmd>;
+- supports-emmc;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-roc-pc.dts b/arch/arm64/boot/dts/rockchip/rk3568-roc-pc.dts
+index e333449ead045e..2fa89a0eeafcda 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-roc-pc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-roc-pc.dts
+@@ -272,7 +272,6 @@ vdd_logic: DCDC_REG1 {
+ regulator-name = "vdd_logic";
+ regulator-always-on;
+ regulator-boot-on;
+- regulator-init-microvolt = <900000>;
+ regulator-initial-mode = <0x2>;
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <1350000>;
+@@ -285,7 +284,6 @@ regulator-state-mem {
+
+ vdd_gpu: DCDC_REG2 {
+ regulator-name = "vdd_gpu";
+- regulator-init-microvolt = <900000>;
+ regulator-initial-mode = <0x2>;
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <1350000>;
+@@ -309,7 +307,6 @@ regulator-state-mem {
+
+ vdd_npu: DCDC_REG4 {
+ regulator-name = "vdd_npu";
+- regulator-init-microvolt = <900000>;
+ regulator-initial-mode = <0x2>;
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <1350000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+index ee99166ebd46f9..f695c5d5f91445 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+@@ -337,15 +337,19 @@ l2_cache_b3: l2-cache-b3 {
+ cache-unified;
+ next-level-cache = <&l3_cache>;
+ };
++ };
+
+- l3_cache: l3-cache {
+- compatible = "cache";
+- cache-size = <3145728>;
+- cache-line-size = <64>;
+- cache-sets = <4096>;
+- cache-level = <3>;
+- cache-unified;
+- };
++ /*
++ * The L3 cache belongs to the DynamIQ Shared Unit (DSU),
++ * so it's represented here, outside the "cpus" node
++ */
++ l3_cache: l3-cache {
++ compatible = "cache";
++ cache-size = <3145728>;
++ cache-line-size = <64>;
++ cache-sets = <4096>;
++ cache-level = <3>;
++ cache-unified;
+ };
+
+ display_subsystem: display-subsystem {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts b/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
+index 966bbc582d89b8..6bd06e46a101d0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
+@@ -304,12 +304,12 @@ package_fan1: package-fan1 {
+ };
+
+ cooling-maps {
+- map1 {
++ map0 {
+ trip = <&package_fan0>;
+ cooling-device = <&fan THERMAL_NO_LIMIT 1>;
+ };
+
+- map2 {
++ map1 {
+ trip = <&package_fan1>;
+ cooling-device = <&fan 2 THERMAL_NO_LIMIT>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-toybrick-x0.dts b/arch/arm64/boot/dts/rockchip/rk3588-toybrick-x0.dts
+index d0021524e7f958..328dcb894ccb2d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-toybrick-x0.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-toybrick-x0.dts
+@@ -428,7 +428,6 @@ vdd_vdenc_s0: vdd_vdenc_mem_s0: dcdc-reg4 {
+ regulator-boot-on;
+ regulator-min-microvolt = <550000>;
+ regulator-max-microvolt = <950000>;
+- regulator-init-microvolt = <750000>;
+ regulator-ramp-delay = <12500>;
+
+ regulator-state-mem {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
+index dbaa94ca69f476..432133251e318b 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
+@@ -296,6 +296,7 @@ pmic@0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pmic_pins>, <&rk806_dvs1_null>,
+ <&rk806_dvs2_null>, <&rk806_dvs3_null>;
++ system-power-controller;
+
+ vcc1-supply = <&vcc5v0_sys>;
+ vcc2-supply = <&vcc5v0_sys>;
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 77006df20a75ae..6d21971ae5594f 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1367,6 +1367,7 @@ static void sve_init_regs(void)
+ } else {
+ fpsimd_to_sve(current);
+ current->thread.fp_type = FP_STATE_SVE;
++ fpsimd_flush_task_state(current);
+ }
+ }
+
+diff --git a/arch/arm64/kernel/smccc-call.S b/arch/arm64/kernel/smccc-call.S
+index 487381164ff6b6..2def9d0dd3ddba 100644
+--- a/arch/arm64/kernel/smccc-call.S
++++ b/arch/arm64/kernel/smccc-call.S
+@@ -7,48 +7,19 @@
+
+ #include <asm/asm-offsets.h>
+ #include <asm/assembler.h>
+-#include <asm/thread_info.h>
+-
+-/*
+- * If we have SMCCC v1.3 and (as is likely) no SVE state in
+- * the registers then set the SMCCC hint bit to say there's no
+- * need to preserve it. Do this by directly adjusting the SMCCC
+- * function value which is already stored in x0 ready to be called.
+- */
+-SYM_FUNC_START(__arm_smccc_sve_check)
+-
+- ldr_l x16, smccc_has_sve_hint
+- cbz x16, 2f
+-
+- get_current_task x16
+- ldr x16, [x16, #TSK_TI_FLAGS]
+- tbnz x16, #TIF_FOREIGN_FPSTATE, 1f // Any live FP state?
+- tbnz x16, #TIF_SVE, 2f // Does that state include SVE?
+-
+-1: orr x0, x0, ARM_SMCCC_1_3_SVE_HINT
+-
+-2: ret
+-SYM_FUNC_END(__arm_smccc_sve_check)
+-EXPORT_SYMBOL(__arm_smccc_sve_check)
+
+ .macro SMCCC instr
+- stp x29, x30, [sp, #-16]!
+- mov x29, sp
+-alternative_if ARM64_SVE
+- bl __arm_smccc_sve_check
+-alternative_else_nop_endif
+ \instr #0
+- ldr x4, [sp, #16]
++ ldr x4, [sp]
+ stp x0, x1, [x4, #ARM_SMCCC_RES_X0_OFFS]
+ stp x2, x3, [x4, #ARM_SMCCC_RES_X2_OFFS]
+- ldr x4, [sp, #24]
++ ldr x4, [sp, #8]
+ cbz x4, 1f /* no quirk structure */
+ ldr x9, [x4, #ARM_SMCCC_QUIRK_ID_OFFS]
+ cmp x9, #ARM_SMCCC_QUIRK_QCOM_A6
+ b.ne 1f
+ str x6, [x4, ARM_SMCCC_QUIRK_STATE_OFFS]
+-1: ldp x29, x30, [sp], #16
+- ret
++1: ret
+ .endm
+
+ /*
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 8f7d7e37bc8c60..0ed5c5c7a350d8 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4892,6 +4892,18 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ BOOK3S_INTERRUPT_EXTERNAL, 0);
+ else
+ lpcr |= LPCR_MER;
++ } else {
++ /*
++ * L1's copy of L2's LPCR (vcpu->arch.vcore->lpcr) can get its MER bit
++ * unexpectedly set - e.g. during NMI handling, when all register
++ * states are synchronized from L0 to L1. L1 needs to inform L0 about
++ * MER=1 only when there are pending external interrupts.
++ * In the above if check, the MER bit is set if there are pending
++ * external interrupts. Hence, explicitly mask off the MER bit
++ * here, as otherwise it may generate spurious interrupts in L2 KVM,
++ * causing an endless loop that results in the L2 guest hanging.
++ */
++ lpcr &= ~LPCR_MER;
+ }
+ } else if (vcpu->arch.pending_exceptions ||
+ vcpu->arch.doorbell_request ||
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index f200a4ec044e64..d3db28f2f81108 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -14,6 +14,7 @@ config XTENSA
+ select ARCH_HAS_DMA_SET_UNCACHED if MMU
+ select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
+ select ARCH_HAS_STRNLEN_USER
++ select ARCH_NEED_CMPXCHG_1_EMU
+ select ARCH_USE_MEMTEST
+ select ARCH_USE_QUEUED_RWLOCKS
+ select ARCH_USE_QUEUED_SPINLOCKS
+diff --git a/arch/xtensa/include/asm/cmpxchg.h b/arch/xtensa/include/asm/cmpxchg.h
+index 675a11ea8de76b..95e33a913962d8 100644
+--- a/arch/xtensa/include/asm/cmpxchg.h
++++ b/arch/xtensa/include/asm/cmpxchg.h
+@@ -15,6 +15,7 @@
+
+ #include <linux/bits.h>
+ #include <linux/stringify.h>
++#include <linux/cmpxchg-emu.h>
+
+ /*
+ * cmpxchg
+@@ -74,6 +75,7 @@ static __inline__ unsigned long
+ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
+ {
+ switch (size) {
++ case 1: return cmpxchg_emu_u8(ptr, old, new);
+ case 4: return __cmpxchg_u32(ptr, old, new);
+ default: __cmpxchg_called_with_bad_pointer();
+ return old;
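The new case 1 above delegates to the generic cmpxchg_emu_u8() helper, which emulates a one-byte compare-and-exchange on hardware that only provides a word-sized primitive by operating on the containing 32-bit word. A freestanding sketch of that technique using C11 atomics; the real helper lives in lib/cmpxchg-emu.c and differs in detail:

/* Emulate a 1-byte cmpxchg with a 4-byte one, in the spirit of
 * cmpxchg_emu_u8(). Assumes little-endian byte order for the shift. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t cmpxchg_u8_emulated(uint8_t *ptr, uint8_t old, uint8_t new)
{
	/* Locate the aligned 32-bit word containing *ptr and the
	 * byte's position within it. */
	uintptr_t addr = (uintptr_t)ptr;
	_Atomic uint32_t *word = (_Atomic uint32_t *)(addr & ~(uintptr_t)3);
	unsigned int shift = (addr & 3) * 8;
	uint32_t mask = 0xffu << shift;
	uint32_t cur = atomic_load(word);

	for (;;) {
		uint8_t byte = (cur & mask) >> shift;
		uint32_t next;

		if (byte != old)
			return byte;	/* mismatch: fail, report current value */
		next = (cur & ~mask) | ((uint32_t)new << shift);
		/* Retry if a neighbouring byte changed under us. */
		if (atomic_compare_exchange_weak(word, &cur, next))
			return old;	/* success */
	}
}

int main(void)
{
	_Alignas(4) uint8_t buf[4] = { 1, 2, 3, 4 };
	uint8_t ret = cmpxchg_u8_emulated(&buf[1], 2, 9);

	printf("ret=%u buf[1]=%u\n", ret, buf[1]);	/* ret=2 buf[1]=9 */
	return 0;
}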
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 6ef2ec1f7d78bb..b5fd1d8574615e 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -561,55 +561,33 @@ EXPORT_SYMBOL(blk_rq_append_bio);
+ /* Prepare bio for passthrough IO given ITER_BVEC iter */
+ static int blk_rq_map_user_bvec(struct request *rq, const struct iov_iter *iter)
+ {
+- struct request_queue *q = rq->q;
+- size_t nr_iter = iov_iter_count(iter);
+- size_t nr_segs = iter->nr_segs;
+- struct bio_vec *bvecs, *bvprvp = NULL;
+- const struct queue_limits *lim = &q->limits;
+- unsigned int nsegs = 0, bytes = 0;
++ const struct queue_limits *lim = &rq->q->limits;
++ unsigned int max_bytes = lim->max_hw_sectors << SECTOR_SHIFT;
++ unsigned int nsegs;
+ struct bio *bio;
+- size_t i;
++ int ret;
+
+- if (!nr_iter || (nr_iter >> SECTOR_SHIFT) > queue_max_hw_sectors(q))
+- return -EINVAL;
+- if (nr_segs > queue_max_segments(q))
++ if (!iov_iter_count(iter) || iov_iter_count(iter) > max_bytes)
+ return -EINVAL;
+
+- /* no iovecs to alloc, as we already have a BVEC iterator */
++ /* reuse the bvecs from the iterator instead of allocating new ones */
+ bio = blk_rq_map_bio_alloc(rq, 0, GFP_KERNEL);
+- if (bio == NULL)
++ if (!bio)
+ return -ENOMEM;
+-
+ bio_iov_bvec_set(bio, (struct iov_iter *)iter);
+- blk_rq_bio_prep(rq, bio, nr_segs);
+-
+- /* loop to perform a bunch of sanity checks */
+- bvecs = (struct bio_vec *)iter->bvec;
+- for (i = 0; i < nr_segs; i++) {
+- struct bio_vec *bv = &bvecs[i];
+-
+- /*
+- * If the queue doesn't support SG gaps and adding this
+- * offset would create a gap, fallback to copy.
+- */
+- if (bvprvp && bvec_gap_to_prev(lim, bvprvp, bv->bv_offset)) {
+- blk_mq_map_bio_put(bio);
+- return -EREMOTEIO;
+- }
+- /* check full condition */
+- if (nsegs >= nr_segs || bytes > UINT_MAX - bv->bv_len)
+- goto put_bio;
+- if (bytes + bv->bv_len > nr_iter)
+- break;
+
+- nsegs++;
+- bytes += bv->bv_len;
+- bvprvp = bv;
++ /* check that the data layout matches the hardware restrictions */
++ ret = bio_split_rw_at(bio, lim, &nsegs, max_bytes);
++ if (ret) {
++ /* if we would have to split the bio, copy instead */
++ if (ret > 0)
++ ret = -EREMOTEIO;
++ blk_mq_map_bio_put(bio);
++ return ret;
+ }
++
++ blk_rq_bio_prep(rq, bio, nsegs);
+ return 0;
+-put_bio:
+- blk_mq_map_bio_put(bio);
+- return -EINVAL;
+ }
+
+ /**
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index de5281bcadc538..c7222c4685e060 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -105,9 +105,33 @@ static unsigned int bio_allowed_max_sectors(const struct queue_limits *lim)
+ return round_down(UINT_MAX, lim->logical_block_size) >> SECTOR_SHIFT;
+ }
+
+-static struct bio *bio_split_discard(struct bio *bio,
+- const struct queue_limits *lim,
+- unsigned *nsegs, struct bio_set *bs)
++static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
++{
++ if (unlikely(split_sectors < 0)) {
++ bio->bi_status = errno_to_blk_status(split_sectors);
++ bio_endio(bio);
++ return NULL;
++ }
++
++ if (split_sectors) {
++ struct bio *split;
++
++ split = bio_split(bio, split_sectors, GFP_NOIO,
++ &bio->bi_bdev->bd_disk->bio_split);
++ split->bi_opf |= REQ_NOMERGE;
++ blkcg_bio_issue_init(split);
++ bio_chain(split, bio);
++ trace_block_split(split, bio->bi_iter.bi_sector);
++ WARN_ON_ONCE(bio_zone_write_plugging(bio));
++ submit_bio_noacct(bio);
++ return split;
++ }
++
++ return bio;
++}
++
++struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
++ unsigned *nsegs)
+ {
+ unsigned int max_discard_sectors, granularity;
+ sector_t tmp;
+@@ -121,10 +145,10 @@ static struct bio *bio_split_discard(struct bio *bio,
+ min(lim->max_discard_sectors, bio_allowed_max_sectors(lim));
+ max_discard_sectors -= max_discard_sectors % granularity;
+ if (unlikely(!max_discard_sectors))
+- return NULL;
++ return bio;
+
+ if (bio_sectors(bio) <= max_discard_sectors)
+- return NULL;
++ return bio;
+
+ split_sectors = max_discard_sectors;
+
+@@ -139,19 +163,18 @@ static struct bio *bio_split_discard(struct bio *bio,
+ if (split_sectors > tmp)
+ split_sectors -= tmp;
+
+- return bio_split(bio, split_sectors, GFP_NOIO, bs);
++ return bio_submit_split(bio, split_sectors);
+ }
+
+-static struct bio *bio_split_write_zeroes(struct bio *bio,
+- const struct queue_limits *lim,
+- unsigned *nsegs, struct bio_set *bs)
++struct bio *bio_split_write_zeroes(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nsegs)
+ {
+ *nsegs = 0;
+ if (!lim->max_write_zeroes_sectors)
+- return NULL;
++ return bio;
+ if (bio_sectors(bio) <= lim->max_write_zeroes_sectors)
+- return NULL;
+- return bio_split(bio, lim->max_write_zeroes_sectors, GFP_NOIO, bs);
++ return bio;
++ return bio_submit_split(bio, lim->max_write_zeroes_sectors);
+ }
+
+ static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
+@@ -274,27 +297,19 @@ static bool bvec_split_segs(const struct queue_limits *lim,
+ }
+
+ /**
+- * bio_split_rw - split a bio in two bios
++ * bio_split_rw_at - check if and where to split a read/write bio
+ * @bio: [in] bio to be split
+ * @lim: [in] queue limits to split based on
+ * @segs: [out] number of segments in the bio with the first half of the sectors
+- * @bs: [in] bio set to allocate the clone from
+ * @max_bytes: [in] maximum number of bytes per bio
+ *
+- * Clone @bio, update the bi_iter of the clone to represent the first sectors
+- * of @bio and update @bio->bi_iter to represent the remaining sectors. The
+- * following is guaranteed for the cloned bio:
+- * - That it has at most @max_bytes worth of data
+- * - That it has at most queue_max_segments(@q) segments.
+- *
+- * Except for discard requests the cloned bio will point at the bi_io_vec of
+- * the original bio. It is the responsibility of the caller to ensure that the
+- * original bio is not freed before the cloned bio. The caller is also
+- * responsible for ensuring that @bs is only destroyed after processing of the
+- * split bio has finished.
++ * Find out if @bio needs to be split to fit the queue limits in @lim and a
++ * maximum size of @max_bytes. Returns a negative error number if @bio can't be
++ * split, 0 if the bio doesn't have to be split, or a positive sector offset if
++ * @bio needs to be split.
+ */
+-struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
+- unsigned *segs, struct bio_set *bs, unsigned max_bytes)
++int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim,
++ unsigned *segs, unsigned max_bytes)
+ {
+ struct bio_vec bv, bvprv, *bvprvp = NULL;
+ struct bvec_iter iter;
+@@ -324,22 +339,17 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
+ }
+
+ *segs = nsegs;
+- return NULL;
++ return 0;
+ split:
+- if (bio->bi_opf & REQ_ATOMIC) {
+- bio->bi_status = BLK_STS_INVAL;
+- bio_endio(bio);
+- return ERR_PTR(-EINVAL);
+- }
++ if (bio->bi_opf & REQ_ATOMIC)
++ return -EINVAL;
++
+ /*
+ * We can't sanely support splitting for a REQ_NOWAIT bio. End it
+ * with EAGAIN if splitting is required and return an error pointer.
+ */
+- if (bio->bi_opf & REQ_NOWAIT) {
+- bio->bi_status = BLK_STS_AGAIN;
+- bio_endio(bio);
+- return ERR_PTR(-EAGAIN);
+- }
++ if (bio->bi_opf & REQ_NOWAIT)
++ return -EAGAIN;
+
+ *segs = nsegs;
+
+@@ -356,58 +366,16 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
+ * big IO can be trival, disable iopoll when split needed.
+ */
+ bio_clear_polled(bio);
+- return bio_split(bio, bytes >> SECTOR_SHIFT, GFP_NOIO, bs);
++ return bytes >> SECTOR_SHIFT;
+ }
+-EXPORT_SYMBOL_GPL(bio_split_rw);
++EXPORT_SYMBOL_GPL(bio_split_rw_at);
+
+-/**
+- * __bio_split_to_limits - split a bio to fit the queue limits
+- * @bio: bio to be split
+- * @lim: queue limits to split based on
+- * @nr_segs: returns the number of segments in the returned bio
+- *
+- * Check if @bio needs splitting based on the queue limits, and if so split off
+- * a bio fitting the limits from the beginning of @bio and return it. @bio is
+- * shortened to the remainder and re-submitted.
+- *
+- * The split bio is allocated from @q->bio_split, which is provided by the
+- * block layer.
+- */
+-struct bio *__bio_split_to_limits(struct bio *bio,
+- const struct queue_limits *lim,
+- unsigned int *nr_segs)
++struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
++ unsigned *nr_segs)
+ {
+- struct bio_set *bs = &bio->bi_bdev->bd_disk->bio_split;
+- struct bio *split;
+-
+- switch (bio_op(bio)) {
+- case REQ_OP_DISCARD:
+- case REQ_OP_SECURE_ERASE:
+- split = bio_split_discard(bio, lim, nr_segs, bs);
+- break;
+- case REQ_OP_WRITE_ZEROES:
+- split = bio_split_write_zeroes(bio, lim, nr_segs, bs);
+- break;
+- default:
+- split = bio_split_rw(bio, lim, nr_segs, bs,
+- get_max_io_size(bio, lim) << SECTOR_SHIFT);
+- if (IS_ERR(split))
+- return NULL;
+- break;
+- }
+-
+- if (split) {
+- /* there isn't chance to merge the split bio */
+- split->bi_opf |= REQ_NOMERGE;
+-
+- blkcg_bio_issue_init(split);
+- bio_chain(split, bio);
+- trace_block_split(split, bio->bi_iter.bi_sector);
+- WARN_ON_ONCE(bio_zone_write_plugging(bio));
+- submit_bio_noacct(bio);
+- return split;
+- }
+- return bio;
++ return bio_submit_split(bio,
++ bio_split_rw_at(bio, lim, nr_segs,
++ get_max_io_size(bio, lim) << SECTOR_SHIFT));
+ }
+
+ /**
+@@ -426,9 +394,7 @@ struct bio *bio_split_to_limits(struct bio *bio)
+ const struct queue_limits *lim = &bdev_get_queue(bio->bi_bdev)->limits;
+ unsigned int nr_segs;
+
+- if (bio_may_exceed_limits(bio, lim))
+- return __bio_split_to_limits(bio, lim, &nr_segs);
+- return bio;
++ return __bio_split_to_limits(bio, lim, &nr_segs);
+ }
+ EXPORT_SYMBOL(bio_split_to_limits);
+
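The bio_split_rw_at() interface introduced above replaces an allocate-and-maybe-fail call with a pure query: a negative errno when splitting is impossible, zero when no split is needed, and a positive sector offset when one is, leaving the action (bio_submit_split()) to the caller. A sketch of that three-way return convention, with hypothetical names:

/* "negative errno / 0 / positive offset" convention: the query never
 * allocates; the caller decides what to do with the answer. */
#include <errno.h>
#include <stdio.h>

/* Returns <0 on error, 0 for "no split needed", >0 for the offset
 * at which to split. */
static int split_at(unsigned int req_sectors, unsigned int max_sectors,
		    int no_split_allowed)
{
	if (req_sectors <= max_sectors)
		return 0;
	if (no_split_allowed)
		return -EINVAL;	/* would need a split, but caller forbids it */
	return (int)max_sectors;
}

int main(void)
{
	printf("%d\n", split_at(8, 16, 0));	/* 0: fits as-is */
	printf("%d\n", split_at(32, 16, 0));	/* 16: split here */
	printf("%d\n", split_at(32, 16, 1));	/* -EINVAL: cannot split */
	return 0;
}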
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index b56a1c0dd13878..a2401e4d8c974b 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2939,7 +2939,7 @@ void blk_mq_submit_bio(struct bio *bio)
+ struct blk_plug *plug = current->plug;
+ const int is_sync = op_is_sync(bio->bi_opf);
+ struct blk_mq_hw_ctx *hctx;
+- unsigned int nr_segs = 1;
++ unsigned int nr_segs;
+ struct request *rq;
+ blk_status_t ret;
+
+@@ -2981,11 +2981,10 @@ void blk_mq_submit_bio(struct bio *bio)
+ goto queue_exit;
+ }
+
+- if (unlikely(bio_may_exceed_limits(bio, &q->limits))) {
+- bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
+- if (!bio)
+- goto queue_exit;
+- }
++ bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
++ if (!bio)
++ goto queue_exit;
++
+ if (!bio_integrity_prep(bio))
+ goto queue_exit;
+
+diff --git a/block/blk.h b/block/blk.h
+index e180863f918b15..0d8cd64c126064 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -331,33 +331,58 @@ ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
+ ssize_t part_timeout_store(struct device *, struct device_attribute *,
+ const char *, size_t);
+
+-static inline bool bio_may_exceed_limits(struct bio *bio,
+- const struct queue_limits *lim)
++struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
++ unsigned *nsegs);
++struct bio *bio_split_write_zeroes(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nsegs);
++struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
++ unsigned *nr_segs);
++
++/*
++ * All drivers must accept single-segment bios that are smaller than PAGE_SIZE.
++ *
++ * This is a quick and dirty check that relies on the fact that bi_io_vec[0] is
++ * always valid if a bio has data. The check might lead to occasional false
++ * positives when bios are cloned, but compared to the performance impact of
++ * cloned bios themselves the loop below doesn't matter anyway.
++ */
++static inline bool bio_may_need_split(struct bio *bio,
++ const struct queue_limits *lim)
++{
++ return lim->chunk_sectors || bio->bi_vcnt != 1 ||
++ bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > PAGE_SIZE;
++}
++
++/**
++ * __bio_split_to_limits - split a bio to fit the queue limits
++ * @bio: bio to be split
++ * @lim: queue limits to split based on
++ * @nr_segs: returns the number of segments in the returned bio
++ *
++ * Check if @bio needs splitting based on the queue limits, and if so split off
++ * a bio fitting the limits from the beginning of @bio and return it. @bio is
++ * shortened to the remainder and re-submitted.
++ *
++ * The split bio is allocated from @q->bio_split, which is provided by the
++ * block layer.
++ */
++static inline struct bio *__bio_split_to_limits(struct bio *bio,
++ const struct queue_limits *lim, unsigned int *nr_segs)
+ {
+ switch (bio_op(bio)) {
++ default:
++ if (bio_may_need_split(bio, lim))
++ return bio_split_rw(bio, lim, nr_segs);
++ *nr_segs = 1;
++ return bio;
+ case REQ_OP_DISCARD:
+ case REQ_OP_SECURE_ERASE:
++ return bio_split_discard(bio, lim, nr_segs);
+ case REQ_OP_WRITE_ZEROES:
+- return true; /* non-trivial splitting decisions */
+- default:
+- break;
++ return bio_split_write_zeroes(bio, lim, nr_segs);
+ }
+-
+- /*
+- * All drivers must accept single-segments bios that are <= PAGE_SIZE.
+- * This is a quick and dirty check that relies on the fact that
+- * bi_io_vec[0] is always valid if a bio has data. The check might
+- * lead to occasional false negatives when bios are cloned, but compared
+- * to the performance impact of cloned bios themselves the loop below
+- * doesn't matter anyway.
+- */
+- return lim->chunk_sectors || bio->bi_vcnt != 1 ||
+- bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > PAGE_SIZE;
+ }
+
+-struct bio *__bio_split_to_limits(struct bio *bio,
+- const struct queue_limits *lim,
+- unsigned int *nr_segs);
+ int ll_back_merge_fn(struct request *req, struct bio *bio,
+ unsigned int nr_segs);
+ bool blk_attempt_req_merge(struct request_queue *q, struct request *rq,
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 1ff99a7091bbba..7df7abaf3e526b 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -525,10 +525,6 @@ static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ {
+ struct tpm_chip *chip = container_of(rng, struct tpm_chip, hwrng);
+
+- /* Give back zero bytes, as TPM chip has not yet fully resumed: */
+- if (chip->flags & TPM_CHIP_FLAG_SUSPENDED)
+- return 0;
+-
+ return tpm_get_random(chip, data, max);
+ }
+
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 8134f002b121f8..b1daa0d7b341b1 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -370,6 +370,13 @@ int tpm_pm_suspend(struct device *dev)
+ if (!chip)
+ return -ENODEV;
+
++ rc = tpm_try_get_ops(chip);
++ if (rc) {
++		/* Can be safely set outside of locks, as no action can race with it: */
++ chip->flags |= TPM_CHIP_FLAG_SUSPENDED;
++ goto out;
++ }
++
+ if (chip->flags & TPM_CHIP_FLAG_ALWAYS_POWERED)
+ goto suspended;
+
+@@ -377,21 +384,19 @@ int tpm_pm_suspend(struct device *dev)
+ !pm_suspend_via_firmware())
+ goto suspended;
+
+- rc = tpm_try_get_ops(chip);
+- if (!rc) {
+- if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+- tpm2_end_auth_session(chip);
+- tpm2_shutdown(chip, TPM2_SU_STATE);
+- } else {
+- rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
+- }
+-
+- tpm_put_ops(chip);
++ if (chip->flags & TPM_CHIP_FLAG_TPM2) {
++ tpm2_end_auth_session(chip);
++ tpm2_shutdown(chip, TPM2_SU_STATE);
++ goto suspended;
+ }
+
++ rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
++
+ suspended:
+ chip->flags |= TPM_CHIP_FLAG_SUSPENDED;
++ tpm_put_ops(chip);
+
++out:
+ if (rc)
+ dev_err(dev, "Ignoring error %d while suspending\n", rc);
+ return 0;
+@@ -440,11 +445,18 @@ int tpm_get_random(struct tpm_chip *chip, u8 *out, size_t max)
+ if (!chip)
+ return -ENODEV;
+
++ /* Give back zero bytes, as TPM chip has not yet fully resumed: */
++ if (chip->flags & TPM_CHIP_FLAG_SUSPENDED) {
++ rc = 0;
++ goto out;
++ }
++
+ if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ rc = tpm2_get_random(chip, out, max);
+ else
+ rc = tpm1_get_random(chip, out, max);
+
++out:
+ tpm_put_ops(chip);
+ return rc;
+ }
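The restructured suspend path above takes the ops reference once at the top and funnels every exit through labels, so the SUSPENDED flag gets set and the reference dropped on all paths. A sketch of that acquire-once, label-based shape; try_get_ops(), put_ops() and do_suspend_work() are stand-ins, not the real chip API:

/* Acquire once, mark suspended on every path, release via labels. */
#include <stdio.h>

static int try_get_ops(void) { return 0; }	/* 0 = success */
static void put_ops(void) { }
static int do_suspend_work(void) { return 0; }

static int suspended_flag;

static int pm_suspend(void)
{
	int rc;

	rc = try_get_ops();
	if (rc) {
		/* No ops available: still mark suspended, skip the work. */
		suspended_flag = 1;
		goto out;
	}

	rc = do_suspend_work();

	suspended_flag = 1;	/* set on every path that held ops */
	put_ops();
out:
	if (rc)
		fprintf(stderr, "ignoring error %d while suspending\n", rc);
	return 0;	/* suspend errors are logged, not propagated */
}

int main(void)
{
	pm_suspend();
	printf("suspended=%d\n", suspended_flag);
	return 0;
}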
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 931550b9ea699d..8478bccd2533eb 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -40,7 +40,7 @@
+
+ #define PLL_USER_CTL(p) ((p)->offset + (p)->regs[PLL_OFF_USER_CTL])
+ # define PLL_POST_DIV_SHIFT 8
+-# define PLL_POST_DIV_MASK(p) GENMASK((p)->width - 1, 0)
++# define PLL_POST_DIV_MASK(p) GENMASK((p)->width ? (p)->width - 1 : 3, 0)
+ # define PLL_ALPHA_MSB BIT(15)
+ # define PLL_ALPHA_EN BIT(24)
+ # define PLL_ALPHA_MODE BIT(25)
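The one-line change above guards PLL_POST_DIV_MASK against a zero .width: without the ternary, GENMASK((p)->width - 1, 0) would be evaluated with a high bit of -1, so a zero width now falls back to a 4-bit mask. A hedged sketch of what the guarded macro computes; GENMASK_LOCAL mirrors the kernel's GENMASK(h, l) for a 32-bit demo:

/* Width 0 falls back to 4 bits instead of GENMASK(-1, 0). */
#include <stdio.h>

#define GENMASK_LOCAL(h, l) \
	(((~0u) - (1u << (l)) + 1) & (~0u >> (31 - (h))))

#define POST_DIV_MASK(width) \
	GENMASK_LOCAL((width) ? (width) - 1 : 3, 0)

int main(void)
{
	printf("width 0 -> 0x%x\n", POST_DIV_MASK(0));	/* 0xf */
	printf("width 2 -> 0x%x\n", POST_DIV_MASK(2));	/* 0x3 */
	printf("width 8 -> 0x%x\n", POST_DIV_MASK(8));	/* 0xff */
	return 0;
}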
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 0f578771071fad..8ea25aa25dff04 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -3123,7 +3123,7 @@ static struct clk_branch gcc_pcie_3_pipe_clk = {
+
+ static struct clk_branch gcc_pcie_3_pipediv2_clk = {
+ .halt_reg = 0x58060,
+- .halt_check = BRANCH_HALT_VOTED,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52020,
+ .enable_mask = BIT(5),
+@@ -3248,7 +3248,7 @@ static struct clk_branch gcc_pcie_4_pipe_clk = {
+
+ static struct clk_branch gcc_pcie_4_pipediv2_clk = {
+ .halt_reg = 0x6b054,
+- .halt_check = BRANCH_HALT_VOTED,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52010,
+ .enable_mask = BIT(27),
+@@ -3373,7 +3373,7 @@ static struct clk_branch gcc_pcie_5_pipe_clk = {
+
+ static struct clk_branch gcc_pcie_5_pipediv2_clk = {
+ .halt_reg = 0x2f054,
+- .halt_check = BRANCH_HALT_VOTED,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52018,
+ .enable_mask = BIT(19),
+@@ -3511,7 +3511,7 @@ static struct clk_branch gcc_pcie_6a_pipe_clk = {
+
+ static struct clk_branch gcc_pcie_6a_pipediv2_clk = {
+ .halt_reg = 0x31060,
+- .halt_check = BRANCH_HALT_VOTED,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52018,
+ .enable_mask = BIT(28),
+@@ -3649,7 +3649,7 @@ static struct clk_branch gcc_pcie_6b_pipe_clk = {
+
+ static struct clk_branch gcc_pcie_6b_pipediv2_clk = {
+ .halt_reg = 0x8d060,
+- .halt_check = BRANCH_HALT_VOTED,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52010,
+ .enable_mask = BIT(28),
+@@ -6155,7 +6155,7 @@ static struct gdsc gcc_usb3_mp_ss1_phy_gdsc = {
+ .pd = {
+ .name = "gcc_usb3_mp_ss1_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/qcom/videocc-sm8350.c b/drivers/clk/qcom/videocc-sm8350.c
+index 5bd6fe3e129886..874d4da95ff8db 100644
+--- a/drivers/clk/qcom/videocc-sm8350.c
++++ b/drivers/clk/qcom/videocc-sm8350.c
+@@ -452,7 +452,7 @@ static struct gdsc mvs0_gdsc = {
+ .pd = {
+ .name = "mvs0_gdsc",
+ },
+- .flags = HW_CTRL | RETAIN_FF_ENABLE,
++ .flags = HW_CTRL_TRIGGER | RETAIN_FF_ENABLE,
+ .pwrsts = PWRSTS_OFF_ON,
+ };
+
+@@ -461,7 +461,7 @@ static struct gdsc mvs1_gdsc = {
+ .pd = {
+ .name = "mvs1_gdsc",
+ },
+- .flags = HW_CTRL | RETAIN_FF_ENABLE,
++ .flags = HW_CTRL_TRIGGER | RETAIN_FF_ENABLE,
+ .pwrsts = PWRSTS_OFF_ON,
+ };
+
+diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
+index d3cd4cc54ace9d..a9a8ba067007a9 100644
+--- a/drivers/edac/qcom_edac.c
++++ b/drivers/edac/qcom_edac.c
+@@ -342,9 +342,11 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev)
+ int ecc_irq;
+ int rc;
+
+- rc = qcom_llcc_core_setup(llcc_driv_data, llcc_driv_data->bcast_regmap);
+- if (rc)
+- return rc;
++ if (!llcc_driv_data->ecc_irq_configured) {
++ rc = qcom_llcc_core_setup(llcc_driv_data, llcc_driv_data->bcast_regmap);
++ if (rc)
++ return rc;
++ }
+
+ /* Allocate edac control info */
+ edev_ctl = edac_device_alloc_ctl_info(0, "qcom-llcc", 1, "bank",
+diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
+index 96b2e5f9a8ef03..157172a5f2b577 100644
+--- a/drivers/firmware/arm_scmi/bus.c
++++ b/drivers/firmware/arm_scmi/bus.c
+@@ -325,7 +325,10 @@ EXPORT_SYMBOL_GPL(scmi_driver_unregister);
+
+ static void scmi_device_release(struct device *dev)
+ {
+- kfree(to_scmi_dev(dev));
++ struct scmi_device *scmi_dev = to_scmi_dev(dev);
++
++ kfree_const(scmi_dev->name);
++ kfree(scmi_dev);
+ }
+
+ static void __scmi_device_destroy(struct scmi_device *scmi_dev)
+@@ -338,7 +341,6 @@ static void __scmi_device_destroy(struct scmi_device *scmi_dev)
+ if (scmi_dev->protocol_id == SCMI_PROTOCOL_SYSTEM)
+ atomic_set(&scmi_syspower_registered, 0);
+
+- kfree_const(scmi_dev->name);
+ ida_free(&scmi_bus_id, scmi_dev->id);
+ device_unregister(&scmi_dev->dev);
+ }
+@@ -410,7 +412,6 @@ __scmi_device_create(struct device_node *np, struct device *parent,
+
+ return scmi_dev;
+ put_dev:
+- kfree_const(scmi_dev->name);
+ put_device(&scmi_dev->dev);
+ ida_free(&scmi_bus_id, id);
+ return NULL;
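The SCMI change above moves freeing of the name string into the release callback, so it lives exactly as long as the device object and cannot be freed while a reference is still outstanding. A sketch of that ownership pattern with a hypothetical refcounted object (struct obj, obj_put() and friends are illustrative):

/* Free owned strings in the release callback, never alongside an
 * early put, so the last reference holder never sees a dangling name. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct obj {
	int refs;
	char *name;	/* owned by the object */
};

static void obj_release(struct obj *o)
{
	free(o->name);	/* freed here, and only here */
	free(o);
}

static void obj_put(struct obj *o)
{
	if (--o->refs == 0)
		obj_release(o);
}

static struct obj *obj_create(const char *name)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (!o)
		return NULL;
	o->name = strdup(name);
	if (!o->name) {
		free(o);
		return NULL;
	}
	o->refs = 1;
	return o;
}

int main(void)
{
	struct obj *o = obj_create("scmi-demo");

	if (!o)
		return 1;
	printf("name=%s\n", o->name);
	obj_put(o);	/* last put: release frees name and object together */
	return 0;
}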
+diff --git a/drivers/firmware/qcom/Kconfig b/drivers/firmware/qcom/Kconfig
+index 73a1a41bf92ddd..b477d54b495a62 100644
+--- a/drivers/firmware/qcom/Kconfig
++++ b/drivers/firmware/qcom/Kconfig
+@@ -41,17 +41,6 @@ config QCOM_TZMEM_MODE_SHMBRIDGE
+
+ endchoice
+
+-config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
+- bool "Qualcomm download mode enabled by default"
+- depends on QCOM_SCM
+- help
+- A device with "download mode" enabled will upon an unexpected
+- warm-restart enter a special debug mode that allows the user to
+- "download" memory content over USB for offline postmortem analysis.
+- The feature can be enabled/disabled on the kernel command line.
+-
+- Say Y here to enable "download mode" by default.
+-
+ config QCOM_QSEECOM
+ bool "Qualcomm QSEECOM interface driver"
+ depends on QCOM_SCM=y
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 0f5ac346bda434..e10500cd4658f5 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -18,6 +18,7 @@
+ #include <linux/init.h>
+ #include <linux/interconnect.h>
+ #include <linux/interrupt.h>
++#include <linux/kstrtox.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+@@ -32,8 +33,7 @@
+ #include "qcom_scm.h"
+ #include "qcom_tzmem.h"
+
+-static bool download_mode = IS_ENABLED(CONFIG_QCOM_SCM_DOWNLOAD_MODE_DEFAULT);
+-module_param(download_mode, bool, 0);
++static u32 download_mode;
+
+ struct qcom_scm {
+ struct device *dev;
+@@ -112,6 +112,7 @@ enum qcom_scm_qseecom_tz_cmd_info {
+ };
+
+ #define QSEECOM_MAX_APP_NAME_SIZE 64
++#define SHMBRIDGE_RESULT_NOTSUPP 4
+
+ /* Each bit configures cold/warm boot address for one of the 4 CPUs */
+ static const u8 qcom_scm_cpu_cold_bits[QCOM_SCM_BOOT_MAX_CPUS] = {
+@@ -134,6 +135,11 @@ static const char * const qcom_scm_convention_names[] = {
+ [SMC_CONVENTION_LEGACY] = "smc legacy",
+ };
+
++static const char * const download_mode_name[] = {
++ [QCOM_DLOAD_NODUMP] = "off",
++ [QCOM_DLOAD_FULLDUMP] = "full",
++};
++
+ static struct qcom_scm *__scm;
+
+ static int qcom_scm_clk_enable(void)
+@@ -207,7 +213,7 @@ static DEFINE_SPINLOCK(scm_query_lock);
+
+ struct qcom_tzmem_pool *qcom_scm_get_tzmem_pool(void)
+ {
+- return __scm->mempool;
++ return __scm ? __scm->mempool : NULL;
+ }
+
+ static enum qcom_scm_convention __get_convention(void)
+@@ -526,18 +532,17 @@ static int qcom_scm_io_rmw(phys_addr_t addr, unsigned int mask, unsigned int val
+ return qcom_scm_io_writel(addr, new);
+ }
+
+-static void qcom_scm_set_download_mode(bool enable)
++static void qcom_scm_set_download_mode(u32 dload_mode)
+ {
+- u32 val = enable ? QCOM_DLOAD_FULLDUMP : QCOM_DLOAD_NODUMP;
+ int ret = 0;
+
+ if (__scm->dload_mode_addr) {
+ ret = qcom_scm_io_rmw(__scm->dload_mode_addr, QCOM_DLOAD_MASK,
+- FIELD_PREP(QCOM_DLOAD_MASK, val));
++ FIELD_PREP(QCOM_DLOAD_MASK, dload_mode));
+ } else if (__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_BOOT,
+ QCOM_SCM_BOOT_SET_DLOAD_MODE)) {
+- ret = __qcom_scm_set_dload_mode(__scm->dev, enable);
+- } else {
++ ret = __qcom_scm_set_dload_mode(__scm->dev, !!dload_mode);
++ } else if (dload_mode) {
+ dev_err(__scm->dev,
+ "No available mechanism for setting download mode\n");
+ }
+@@ -1353,6 +1358,8 @@ EXPORT_SYMBOL_GPL(qcom_scm_lmh_dcvsh_available);
+
+ int qcom_scm_shm_bridge_enable(void)
+ {
++ int ret;
++
+ struct qcom_scm_desc desc = {
+ .svc = QCOM_SCM_SVC_MP,
+ .cmd = QCOM_SCM_MP_SHM_BRIDGE_ENABLE,
+@@ -1365,7 +1372,15 @@ int qcom_scm_shm_bridge_enable(void)
+ QCOM_SCM_MP_SHM_BRIDGE_ENABLE))
+ return -EOPNOTSUPP;
+
+- return qcom_scm_call(__scm->dev, &desc, &res) ?: res.result[0];
++ ret = qcom_scm_call(__scm->dev, &desc, &res);
++
++ if (ret)
++ return ret;
++
++ if (res.result[0] == SHMBRIDGE_RESULT_NOTSUPP)
++ return -EOPNOTSUPP;
++
++ return res.result[0];
+ }
+ EXPORT_SYMBOL_GPL(qcom_scm_shm_bridge_enable);
+
+@@ -1886,6 +1901,46 @@ static irqreturn_t qcom_scm_irq_handler(int irq, void *data)
+ return IRQ_HANDLED;
+ }
+
++static int get_download_mode(char *buffer, const struct kernel_param *kp)
++{
++ if (download_mode >= ARRAY_SIZE(download_mode_name))
++ return sysfs_emit(buffer, "unknown mode\n");
++
++ return sysfs_emit(buffer, "%s\n", download_mode_name[download_mode]);
++}
++
++static int set_download_mode(const char *val, const struct kernel_param *kp)
++{
++ bool tmp;
++ int ret;
++
++ ret = sysfs_match_string(download_mode_name, val);
++ if (ret < 0) {
++ ret = kstrtobool(val, &tmp);
++ if (ret < 0) {
++ pr_err("qcom_scm: err: %d\n", ret);
++ return ret;
++ }
++
++ ret = tmp ? 1 : 0;
++ }
++
++ download_mode = ret;
++ if (__scm)
++ qcom_scm_set_download_mode(download_mode);
++
++ return 0;
++}
++
++static const struct kernel_param_ops download_mode_param_ops = {
++ .get = get_download_mode,
++ .set = set_download_mode,
++};
++
++module_param_cb(download_mode, &download_mode_param_ops, NULL, 0644);
++MODULE_PARM_DESC(download_mode,
++ "download mode: off/0/N for no dump mode, full/on/1/Y for full dump mode");
++
+ static int qcom_scm_probe(struct platform_device *pdev)
+ {
+ struct qcom_tzmem_pool_config pool_config;
+@@ -1950,7 +2005,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ __get_convention();
+
+ /*
+- * If requested enable "download mode", from this point on warmboot
++ * If "download mode" is requested, from this point on warmboot
+ * will cause the boot stages to enter download mode, unless
+ * disabled below by a clean shutdown/reboot.
+ */
+@@ -2001,7 +2056,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ static void qcom_scm_shutdown(struct platform_device *pdev)
+ {
+ /* Clean shutdown, disable download mode to allow normal restart */
+- qcom_scm_set_download_mode(false);
++ qcom_scm_set_download_mode(QCOM_DLOAD_NODUMP);
+ }
+
+ static const struct of_device_id qcom_scm_dt_match[] = {
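The reworked download_mode parameter above accepts either a named mode ("off", "full") or the usual boolean spellings, trying the string table first and falling back to boolean parsing. A userspace sketch of that two-stage parse; the table and names here are illustrative, not the driver's:

/* Match a name table first, fall back to boolean spellings. */
#include <stdio.h>
#include <string.h>

static const char * const mode_names[] = { "off", "full" };

static int parse_mode(const char *val)
{
	for (size_t i = 0; i < sizeof(mode_names) / sizeof(mode_names[0]); i++)
		if (!strcmp(val, mode_names[i]))
			return (int)i;

	/* Fallback: accept the usual boolean spellings. */
	if (!strcmp(val, "0") || !strcmp(val, "n") || !strcmp(val, "N"))
		return 0;
	if (!strcmp(val, "1") || !strcmp(val, "y") || !strcmp(val, "Y"))
		return 1;
	return -1;	/* unrecognized */
}

int main(void)
{
	printf("full -> %d\n", parse_mode("full"));	/* 1 */
	printf("off  -> %d\n", parse_mode("off"));	/* 0 */
	printf("Y    -> %d\n", parse_mode("Y"));	/* 1 */
	printf("???  -> %d\n", parse_mode("???"));	/* -1 */
	return 0;
}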
+diff --git a/drivers/firmware/smccc/smccc.c b/drivers/firmware/smccc/smccc.c
+index d670635914ecb6..a74600d9f2d72a 100644
+--- a/drivers/firmware/smccc/smccc.c
++++ b/drivers/firmware/smccc/smccc.c
+@@ -16,7 +16,6 @@ static u32 smccc_version = ARM_SMCCC_VERSION_1_0;
+ static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE;
+
+ bool __ro_after_init smccc_trng_available = false;
+-u64 __ro_after_init smccc_has_sve_hint = false;
+ s32 __ro_after_init smccc_soc_id_version = SMCCC_RET_NOT_SUPPORTED;
+ s32 __ro_after_init smccc_soc_id_revision = SMCCC_RET_NOT_SUPPORTED;
+
+@@ -28,9 +27,6 @@ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
+ smccc_conduit = conduit;
+
+ smccc_trng_available = smccc_probe_trng();
+- if (IS_ENABLED(CONFIG_ARM64_SVE) &&
+- smccc_version >= ARM_SMCCC_VERSION_1_3)
+- smccc_has_sve_hint = true;
+
+ if ((smccc_version >= ARM_SMCCC_VERSION_1_2) &&
+ (smccc_conduit != SMCCC_CONDUIT_NONE)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 1f5a296f5ed2f4..7dd55ed57c1d97 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -172,8 +172,8 @@ static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif,
+ &buffer);
+ obj = (union acpi_object *)buffer.pointer;
+
+- /* Fail if calling the method fails and ATIF is supported */
+- if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
++ /* Fail if calling the method fails */
++ if (ACPI_FAILURE(status)) {
+ DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n",
+ acpi_format_exception(status));
+ kfree(obj);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 0e1a11b6b989d7..ac9f2820279a95 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -402,7 +402,7 @@ static ssize_t amdgpu_debugfs_gprwave_read(struct file *f, char __user *buf, siz
+ int r;
+ uint32_t *data, x;
+
+- if (size & 0x3 || *pos & 0x3)
++ if (size > 4096 || size & 0x3 || *pos & 0x3)
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
+@@ -1648,7 +1648,7 @@ int amdgpu_debugfs_regs_init(struct amdgpu_device *adev)
+
+ for (i = 0; i < ARRAY_SIZE(debugfs_regs); i++) {
+ ent = debugfs_create_file(debugfs_regs_names[i],
+- S_IFREG | 0444, root,
++ S_IFREG | 0400, root,
+ adev, debugfs_regs[i]);
+ if (!i && !IS_ERR_OR_NULL(ent))
+ i_size_write(ent->d_inode, adev->rmmio_size);
+@@ -2194,11 +2194,11 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
+ amdgpu_securedisplay_debugfs_init(adev);
+ amdgpu_fw_attestation_debugfs_init(adev);
+
+- debugfs_create_file("amdgpu_evict_vram", 0444, root, adev,
++ debugfs_create_file("amdgpu_evict_vram", 0400, root, adev,
+ &amdgpu_evict_vram_fops);
+- debugfs_create_file("amdgpu_evict_gtt", 0444, root, adev,
++ debugfs_create_file("amdgpu_evict_gtt", 0400, root, adev,
+ &amdgpu_evict_gtt_fops);
+- debugfs_create_file("amdgpu_test_ib", 0444, root, adev,
++ debugfs_create_file("amdgpu_test_ib", 0400, root, adev,
+ &amdgpu_debugfs_test_ib_fops);
+ debugfs_create_file("amdgpu_vm_info", 0444, root, adev,
+ &amdgpu_debugfs_vm_info_fops);
+diff --git a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+index 228fd4dd32f139..b51bef39f8f8c5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
++++ b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+@@ -484,7 +484,7 @@ static bool __aqua_vanjaram_is_valid_mode(struct amdgpu_xcp_mgr *xcp_mgr,
+ case AMDGPU_SPX_PARTITION_MODE:
+ return adev->gmc.num_mem_partitions == 1 && num_xcc > 0;
+ case AMDGPU_DPX_PARTITION_MODE:
+- return adev->gmc.num_mem_partitions != 8 && (num_xcc % 4) == 0;
++ return adev->gmc.num_mem_partitions <= 2 && (num_xcc % 4) == 0;
+ case AMDGPU_TPX_PARTITION_MODE:
+ return (adev->gmc.num_mem_partitions == 1 ||
+ adev->gmc.num_mem_partitions == 3) &&
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 4f19e9736a676b..245a26cdfc5222 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -9384,6 +9384,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ bool mode_set_reset_required = false;
+ u32 i;
+ struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
++ bool set_backlight_level = false;
+
+ /* Disable writeback */
+ for_each_old_connector_in_state(state, connector, old_con_state, i) {
+@@ -9503,6 +9504,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ acrtc->hw_mode = new_crtc_state->mode;
+ crtc->hwmode = new_crtc_state->mode;
+ mode_set_reset_required = true;
++ set_backlight_level = true;
+ } else if (modereset_required(new_crtc_state)) {
+ drm_dbg_atomic(dev,
+ "Atomic commit: RESET. crtc id %d:[%p]\n",
+@@ -9554,6 +9556,19 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ acrtc->otg_inst = status->primary_otg_inst;
+ }
+ }
++
++ /* During boot up and resume, the DC layer will reset the panel brightness
++ * to fix a flicker issue.
++ * This causes dm->actual_brightness to differ from the current panel
++ * brightness level (dm->brightness holds the correct panel level).
++ * So we set the backlight level with the dm->brightness value after a mode set.
++ */
++ if (set_backlight_level) {
++ for (i = 0; i < dm->num_of_edps; i++) {
++ if (dm->backlight_dev[i])
++ amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
++ }
++ }
+ }
+
+ static void dm_set_writeback(struct amdgpu_display_manager *dm,
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index 0d8498ab9b235e..be8fbb04ad98f8 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -3127,7 +3127,9 @@ static enum bp_result bios_parser_get_vram_info(
+ struct atom_data_revision revision;
+
+ // vram info moved to umc_info for DCN4x
+- if (info && DATA_TABLES(umc_info)) {
++ if (dcb->ctx->dce_version >= DCN_VERSION_4_01 &&
++ dcb->ctx->dce_version < DCN_VERSION_MAX &&
++ info && DATA_TABLES(umc_info)) {
+ header = GET_IMAGE(struct atom_common_table_header,
+ DATA_TABLES(umc_info));
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 80e60ea2d11e3c..ee1bcfaae3e3db 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1259,26 +1259,33 @@ static int smu_sw_init(void *handle)
+ smu->watermarks_bitmap = 0;
+ smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
++ smu->user_dpm_profile.user_workload_mask = 0;
+
+ atomic_set(&smu->smu_power.power_gate.vcn_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.jpeg_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
+
+- smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_VR] = 4;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
++ smu->workload_priority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
+
+ if (smu->is_apu ||
+- !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D))
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
+- else
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
++ !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D)) {
++ smu->driver_workload_mask =
++ 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
++ } else {
++ smu->driver_workload_mask =
++ 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
++ smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
++ }
+
++ smu->workload_mask = smu->driver_workload_mask |
++ smu->user_dpm_profile.user_workload_mask;
+ smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+ smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
+@@ -2348,17 +2355,20 @@ static int smu_switch_power_profile(void *handle,
+ return -EINVAL;
+
+ if (!en) {
+- smu->workload_mask &= ~(1 << smu->workload_prority[type]);
++ smu->driver_workload_mask &= ~(1 << smu->workload_priority[type]);
+ index = fls(smu->workload_mask);
+ index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ workload[0] = smu->workload_setting[index];
+ } else {
+- smu->workload_mask |= (1 << smu->workload_prority[type]);
++ smu->driver_workload_mask |= (1 << smu->workload_priority[type]);
+ index = fls(smu->workload_mask);
+ index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ workload[0] = smu->workload_setting[index];
+ }
+
++ smu->workload_mask = smu->driver_workload_mask |
++ smu->user_dpm_profile.user_workload_mask;
++
+ if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+ smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+ smu_bump_power_profile_mode(smu, workload, 0);
+@@ -3049,12 +3059,23 @@ static int smu_set_power_profile_mode(void *handle,
+ uint32_t param_size)
+ {
+ struct smu_context *smu = handle;
++ int ret;
+
+ if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled ||
+ !smu->ppt_funcs->set_power_profile_mode)
+ return -EOPNOTSUPP;
+
+- return smu_bump_power_profile_mode(smu, param, param_size);
++ if (smu->user_dpm_profile.user_workload_mask &
++ (1 << smu->workload_priority[param[param_size]]))
++ return 0;
++
++ smu->user_dpm_profile.user_workload_mask =
++ (1 << smu->workload_priority[param[param_size]]);
++ smu->workload_mask = smu->user_dpm_profile.user_workload_mask |
++ smu->driver_workload_mask;
++ ret = smu_bump_power_profile_mode(smu, param, param_size);
++
++ return ret;
+ }
+
+ static int smu_get_fan_control_mode(void *handle, u32 *fan_mode)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+index b44a185d07e84c..d60d9a12a47ef7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+@@ -240,6 +240,7 @@ struct smu_user_dpm_profile {
+ /* user clock state information */
+ uint32_t clk_mask[SMU_CLK_COUNT];
+ uint32_t clk_dependency;
++ uint32_t user_workload_mask;
+ };
+
+ #define SMU_TABLE_INIT(tables, table_id, s, a, d) \
+@@ -557,7 +558,8 @@ struct smu_context {
+ bool disable_uclk_switch;
+
+ uint32_t workload_mask;
+- uint32_t workload_prority[WORKLOAD_POLICY_MAX];
++ uint32_t driver_workload_mask;
++ uint32_t workload_priority[WORKLOAD_POLICY_MAX];
+ uint32_t workload_setting[WORKLOAD_POLICY_MAX];
+ uint32_t power_profile_mode;
+ uint32_t default_power_profile_mode;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index c0f6b59369b7c4..31fe512028f460 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1455,7 +1455,6 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
+ return -EINVAL;
+ }
+
+-
+ if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
+ (smu->smc_fw_version >= 0x360d00)) {
+ if (size != 10)
+@@ -1523,14 +1522,14 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_SetWorkloadMask,
+- 1 << workload_type,
++ smu->workload_mask,
+ NULL);
+ if (ret) {
+ dev_err(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
++ smu_cmn_assign_power_profile(smu);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 076620fa3ef5a8..bb4ae529ae20ef 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2081,10 +2081,13 @@ static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, u
+ smu->power_profile_mode);
+ if (workload_type < 0)
+ return -EINVAL;
++
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
++ smu->workload_mask, NULL);
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ else
++ smu_cmn_assign_power_profile(smu);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 0d3e1a121b670a..ca94c52663c071 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1786,10 +1786,13 @@ static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *
+ smu->power_profile_mode);
+ if (workload_type < 0)
+ return -EINVAL;
++
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
++ smu->workload_mask, NULL);
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ else
++ smu_cmn_assign_power_profile(smu);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 1fe020f1f4dbe2..952ee22cbc90e0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -1079,7 +1079,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- 1 << workload_type,
++ smu->workload_mask,
+ NULL);
+ if (ret) {
+ dev_err_once(smu->adev->dev, "Fail to set workload type %d\n",
+@@ -1087,7 +1087,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
++ smu_cmn_assign_power_profile(smu);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index cc0504b063fa3a..62316a6707ef2f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -890,14 +890,14 @@ static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, u
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- 1 << workload_type,
++ smu->workload_mask,
+ NULL);
+ if (ret) {
+ dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
++ smu_cmn_assign_power_profile(smu);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index d53e162dcd8de2..5dd7ceca64feed 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2485,7 +2485,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+ int workload_type, ret = 0;
+- u32 workload_mask, selected_workload_mask;
++ u32 workload_mask;
+
+ smu->power_profile_mode = input[size];
+
+@@ -2552,7 +2552,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ if (workload_type < 0)
+ return -EINVAL;
+
+- selected_workload_mask = workload_mask = 1 << workload_type;
++ workload_mask = 1 << workload_type;
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+ if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+@@ -2567,12 +2567,22 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ workload_mask |= 1 << workload_type;
+ }
+
++ smu->workload_mask |= workload_mask;
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_SetWorkloadMask,
+- workload_mask,
++ smu->workload_mask,
+ NULL);
+- if (!ret)
+- smu->workload_mask = selected_workload_mask;
++ if (!ret) {
++ smu_cmn_assign_power_profile(smu);
++ if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING) {
++ workload_type = smu_cmn_to_asic_specific_index(smu,
++ CMN2ASIC_MAPPING_WORKLOAD,
++ PP_SMC_POWER_PROFILE_FULLSCREEN3D);
++ smu->power_profile_mode = smu->workload_mask & (1 << workload_type)
++ ? PP_SMC_POWER_PROFILE_FULLSCREEN3D
++ : PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
++ }
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index b891a5e0a3969a..9d0b19419de0ff 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2499,13 +2499,14 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
+ smu->power_profile_mode);
+ if (workload_type < 0)
+ return -EINVAL;
++
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
++ smu->workload_mask, NULL);
+
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
+ else
+- smu->workload_mask = (1 << workload_type);
++ smu_cmn_assign_power_profile(smu);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index ba17d01e64396a..d9f0e7f81ed788 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -354,54 +354,6 @@ static int smu_v14_0_2_store_powerplay_table(struct smu_context *smu)
+ return 0;
+ }
+
+-#ifndef atom_smc_dpm_info_table_14_0_0
+-struct atom_smc_dpm_info_table_14_0_0 {
+- struct atom_common_table_header table_header;
+- BoardTable_t BoardTable;
+-};
+-#endif
+-
+-static int smu_v14_0_2_append_powerplay_table(struct smu_context *smu)
+-{
+- struct smu_table_context *table_context = &smu->smu_table;
+- PPTable_t *smc_pptable = table_context->driver_pptable;
+- struct atom_smc_dpm_info_table_14_0_0 *smc_dpm_table;
+- BoardTable_t *BoardTable = &smc_pptable->BoardTable;
+- int index, ret;
+-
+- index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+- smc_dpm_info);
+-
+- ret = amdgpu_atombios_get_data_table(smu->adev, index, NULL, NULL, NULL,
+- (uint8_t **)&smc_dpm_table);
+- if (ret)
+- return ret;
+-
+- memcpy(BoardTable, &smc_dpm_table->BoardTable, sizeof(BoardTable_t));
+-
+- return 0;
+-}
+-
+-#if 0
+-static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu,
+- void **table,
+- uint32_t *size)
+-{
+- struct smu_table_context *smu_table = &smu->smu_table;
+- void *combo_pptable = smu_table->combo_pptable;
+- int ret = 0;
+-
+- ret = smu_cmn_get_combo_pptable(smu);
+- if (ret)
+- return ret;
+-
+- *table = combo_pptable;
+- *size = sizeof(struct smu_14_0_powerplay_table);
+-
+- return 0;
+-}
+-#endif
+-
+ static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu,
+ void **table,
+ uint32_t *size)
+@@ -423,16 +375,12 @@ static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu,
+ static int smu_v14_0_2_setup_pptable(struct smu_context *smu)
+ {
+ struct smu_table_context *smu_table = &smu->smu_table;
+- struct amdgpu_device *adev = smu->adev;
+ int ret = 0;
+
+ if (amdgpu_sriov_vf(smu->adev))
+ return 0;
+
+- if (!adev->scpm_enabled)
+- ret = smu_v14_0_setup_pptable(smu);
+- else
+- ret = smu_v14_0_2_get_pptable_from_pmfw(smu,
++ ret = smu_v14_0_2_get_pptable_from_pmfw(smu,
+ &smu_table->power_play_table,
+ &smu_table->power_play_table_size);
+ if (ret)
+@@ -442,16 +390,6 @@ static int smu_v14_0_2_setup_pptable(struct smu_context *smu)
+ if (ret)
+ return ret;
+
+- /*
+- * With SCPM enabled, the operation below will be handled
+- * by PSP. Driver involvment is unnecessary and useless.
+- */
+- if (!adev->scpm_enabled) {
+- ret = smu_v14_0_2_append_powerplay_table(smu);
+- if (ret)
+- return ret;
+- }
+-
+ ret = smu_v14_0_2_check_powerplay_table(smu);
+ if (ret)
+ return ret;
+@@ -1570,12 +1508,11 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+ if (workload_type < 0)
+ return -EINVAL;
+
+- ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- 1 << workload_type,
+- NULL);
++ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
++ smu->workload_mask, NULL);
++
+ if (!ret)
+- smu->workload_mask = 1 << workload_type;
++ smu_cmn_assign_power_profile(smu);
+
+ return ret;
+ }
+@@ -1938,7 +1875,6 @@ static const struct pptable_funcs smu_v14_0_2_ppt_funcs = {
+ .check_fw_status = smu_v14_0_check_fw_status,
+ .setup_pptable = smu_v14_0_2_setup_pptable,
+ .check_fw_version = smu_v14_0_check_fw_version,
+- .write_pptable = smu_cmn_write_pptable,
+ .set_driver_table_location = smu_v14_0_set_driver_table_location,
+ .system_features_control = smu_v14_0_system_features_control,
+ .set_allowed_mask = smu_v14_0_set_allowed_mask,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index 91ad434bcdaeb4..bdfc5e617333df 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -1138,6 +1138,14 @@ int smu_cmn_set_mp1_state(struct smu_context *smu,
+ return ret;
+ }
+
++void smu_cmn_assign_power_profile(struct smu_context *smu)
++{
++ uint32_t index;
++ index = fls(smu->workload_mask);
++ index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
++ smu->power_profile_mode = smu->workload_setting[index];
++}
++
+ bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev)
+ {
+ struct pci_dev *p = NULL;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 1de685defe85b1..8a801e389659d1 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -130,6 +130,8 @@ void smu_cmn_init_soft_gpu_metrics(void *table, uint8_t frev, uint8_t crev);
+ int smu_cmn_set_mp1_state(struct smu_context *smu,
+ enum pp_mp1_state mp1_state);
+
++void smu_cmn_assign_power_profile(struct smu_context *smu);
++
+ /*
+ * Helper function to make sysfs_emit_at() happy. Align buf to
+ * the current page boundary and record the offset.
+diff --git a/drivers/gpu/drm/imagination/pvr_context.c b/drivers/gpu/drm/imagination/pvr_context.c
+index eded5e955cc0ac..4cb3494c0bb2c6 100644
+--- a/drivers/gpu/drm/imagination/pvr_context.c
++++ b/drivers/gpu/drm/imagination/pvr_context.c
+@@ -17,10 +17,14 @@
+
+ #include <drm/drm_auth.h>
+ #include <drm/drm_managed.h>
++
++#include <linux/bug.h>
+ #include <linux/errno.h>
+ #include <linux/kernel.h>
++#include <linux/list.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
++#include <linux/spinlock.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/xarray.h>
+@@ -354,6 +358,10 @@ int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_co
+ return err;
+ }
+
++ spin_lock(&pvr_dev->ctx_list_lock);
++ list_add_tail(&ctx->file_link, &pvr_file->contexts);
++ spin_unlock(&pvr_dev->ctx_list_lock);
++
+ return 0;
+
+ err_destroy_fw_obj:
+@@ -380,6 +388,11 @@ pvr_context_release(struct kref *ref_count)
+ container_of(ref_count, struct pvr_context, ref_count);
+ struct pvr_device *pvr_dev = ctx->pvr_dev;
+
++ WARN_ON(in_interrupt());
++ spin_lock(&pvr_dev->ctx_list_lock);
++ list_del(&ctx->file_link);
++ spin_unlock(&pvr_dev->ctx_list_lock);
++
+ xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+ pvr_context_destroy_queues(ctx);
+ pvr_fw_object_destroy(ctx->fw_obj);
+@@ -437,11 +450,30 @@ pvr_context_destroy(struct pvr_file *pvr_file, u32 handle)
+ */
+ void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file)
+ {
++ struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+ struct pvr_context *ctx;
+ unsigned long handle;
+
+ xa_for_each(&pvr_file->ctx_handles, handle, ctx)
+ pvr_context_destroy(pvr_file, handle);
++
++ spin_lock(&pvr_dev->ctx_list_lock);
++ ctx = list_first_entry(&pvr_file->contexts, struct pvr_context, file_link);
++
++ while (!list_entry_is_head(ctx, &pvr_file->contexts, file_link)) {
++ list_del_init(&ctx->file_link);
++
++ if (pvr_context_get_if_referenced(ctx)) {
++ spin_unlock(&pvr_dev->ctx_list_lock);
++
++ pvr_vm_unmap_all(ctx->vm_ctx);
++
++ pvr_context_put(ctx);
++ spin_lock(&pvr_dev->ctx_list_lock);
++ }
++ ctx = list_first_entry(&pvr_file->contexts, struct pvr_context, file_link);
++ }
++ spin_unlock(&pvr_dev->ctx_list_lock);
+ }
+
+ /**
+@@ -451,6 +483,7 @@ void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file)
+ void pvr_context_device_init(struct pvr_device *pvr_dev)
+ {
+ xa_init_flags(&pvr_dev->ctx_ids, XA_FLAGS_ALLOC1);
++ spin_lock_init(&pvr_dev->ctx_list_lock);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/imagination/pvr_context.h b/drivers/gpu/drm/imagination/pvr_context.h
+index 0c7b97dfa6bafd..07afa179cdf421 100644
+--- a/drivers/gpu/drm/imagination/pvr_context.h
++++ b/drivers/gpu/drm/imagination/pvr_context.h
+@@ -85,6 +85,9 @@ struct pvr_context {
+ /** @compute: Transfer queue. */
+ struct pvr_queue *transfer;
+ } queues;
++
++ /** @file_link: pvr_file PVR context list link. */
++ struct list_head file_link;
+ };
+
+ static __always_inline struct pvr_queue *
+@@ -123,6 +126,24 @@ pvr_context_get(struct pvr_context *ctx)
+ return ctx;
+ }
+
++/**
++ * pvr_context_get_if_referenced() - Take an additional reference on a still
++ * referenced context.
++ * @ctx: Context pointer.
++ *
++ * Call pvr_context_put() to release.
++ *
++ * Returns:
++ * * True on success, or
++ * * false if no context pointer passed, or the context wasn't still
++ * referenced.
++ */
++static __always_inline bool
++pvr_context_get_if_referenced(struct pvr_context *ctx)
++{
++ return ctx != NULL && kref_get_unless_zero(&ctx->ref_count) != 0;
++}
++
+ /**
+ * pvr_context_lookup() - Lookup context pointer from handle and file.
+ * @pvr_file: Pointer to pvr_file structure.
+diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
+index ecdd5767d8ef53..b1fbf9ccf19f75 100644
+--- a/drivers/gpu/drm/imagination/pvr_device.h
++++ b/drivers/gpu/drm/imagination/pvr_device.h
+@@ -23,6 +23,7 @@
+ #include <linux/kernel.h>
+ #include <linux/math.h>
+ #include <linux/mutex.h>
++#include <linux/spinlock_types.h>
+ #include <linux/timer.h>
+ #include <linux/types.h>
+ #include <linux/wait.h>
+@@ -293,6 +294,12 @@ struct pvr_device {
+
+ /** @sched_wq: Workqueue for schedulers. */
+ struct workqueue_struct *sched_wq;
++
++ /**
++ * @ctx_list_lock: Lock to be held when accessing the context list in
++ * struct pvr_file.
++ */
++ spinlock_t ctx_list_lock;
+ };
+
+ /**
+@@ -344,6 +351,9 @@ struct pvr_file {
+ * This array is used to allocate handles returned to userspace.
+ */
+ struct xarray vm_ctx_handles;
++
++ /** @contexts: PVR context list. */
++ struct list_head contexts;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
+index 1a0cb7aa9cea4c..fb17196e05f498 100644
+--- a/drivers/gpu/drm/imagination/pvr_drv.c
++++ b/drivers/gpu/drm/imagination/pvr_drv.c
+@@ -28,6 +28,7 @@
+ #include <linux/export.h>
+ #include <linux/fs.h>
+ #include <linux/kernel.h>
++#include <linux/list.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+@@ -1326,6 +1327,8 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
+ */
+ pvr_file->pvr_dev = pvr_dev;
+
++ INIT_LIST_HEAD(&pvr_file->contexts);
++
+ xa_init_flags(&pvr_file->ctx_handles, XA_FLAGS_ALLOC1);
+ xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1);
+ xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1);
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
+index 97c0f772ed65f2..7bd6ba4c6e8ab6 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.c
++++ b/drivers/gpu/drm/imagination/pvr_vm.c
+@@ -14,6 +14,7 @@
+ #include <drm/drm_gem.h>
+ #include <drm/drm_gpuvm.h>
+
++#include <linux/bug.h>
+ #include <linux/container_of.h>
+ #include <linux/err.h>
+ #include <linux/errno.h>
+@@ -597,12 +598,26 @@ pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
+ }
+
+ /**
+- * pvr_vm_context_release() - Teardown a VM context.
+- * @ref_count: Pointer to reference counter of the VM context.
++ * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.
++ * @vm_ctx: Target VM context.
+ *
+ * This function ensures that no mappings are left dangling by unmapping them
+ * all in order of ascending device-virtual address.
+ */
++void
++pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)
++{
++ WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start,
++ vm_ctx->gpuvm_mgr.mm_range));
++}
++
++/**
++ * pvr_vm_context_release() - Teardown a VM context.
++ * @ref_count: Pointer to reference counter of the VM context.
++ *
++ * This function also ensures that no mappings are left dangling by calling
++ * pvr_vm_unmap_all.
++ */
+ static void
+ pvr_vm_context_release(struct kref *ref_count)
+ {
+@@ -612,8 +627,7 @@ pvr_vm_context_release(struct kref *ref_count)
+ if (vm_ctx->fw_mem_ctx_obj)
+ pvr_fw_object_destroy(vm_ctx->fw_mem_ctx_obj);
+
+- WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start,
+- vm_ctx->gpuvm_mgr.mm_range));
++ pvr_vm_unmap_all(vm_ctx);
+
+ pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
+ drm_gem_private_object_fini(&vm_ctx->dummy_gem);
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
+index f2a6463f2b059e..79406243617c1f 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.h
++++ b/drivers/gpu/drm/imagination/pvr_vm.h
+@@ -39,6 +39,7 @@ int pvr_vm_map(struct pvr_vm_context *vm_ctx,
+ struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
+ u64 device_addr, u64 size);
+ int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
++void pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx);
+
+ dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
+ struct dma_resv *pvr_vm_get_dma_resv(struct pvr_vm_context *vm_ctx);
+diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
+index 4082c8f2951dfd..6fbff516c1c1f0 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.c
++++ b/drivers/gpu/drm/panthor/panthor_device.c
+@@ -390,11 +390,15 @@ int panthor_device_mmap_io(struct panthor_device *ptdev, struct vm_area_struct *
+ {
+ u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
+
++ if ((vma->vm_flags & VM_SHARED) == 0)
++ return -EINVAL;
++
+ switch (offset) {
+ case DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET:
+ if (vma->vm_end - vma->vm_start != PAGE_SIZE ||
+ (vma->vm_flags & (VM_WRITE | VM_EXEC)))
+ return -EINVAL;
++ vm_flags_clear(vma, VM_MAYWRITE);
+
+ break;
+
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index 837ba312f3a8b4..d18f32640a79fb 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -1580,7 +1580,9 @@ panthor_vm_pool_get_vm(struct panthor_vm_pool *pool, u32 handle)
+ {
+ struct panthor_vm *vm;
+
++ xa_lock(&pool->xa);
+ vm = panthor_vm_get(xa_load(&pool->xa, handle));
++ xa_unlock(&pool->xa);
+
+ return vm;
+ }
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index 660ff42e45a6f4..4f0027d93efcbe 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -509,7 +509,7 @@
+ * [4-6] RSVD
+ * [7] Disabled
+ */
+-#define CCS_MODE XE_REG(0x14804)
++#define CCS_MODE XE_REG(0x14804, XE_REG_OPTION_MASKED)
+ #define CCS_MODE_CSLICE_0_3_MASK REG_GENMASK(11, 0) /* 3 bits per cslice */
+ #define CCS_MODE_CSLICE_MASK 0x7 /* CCS0-3 + rsvd */
+ #define CCS_MODE_CSLICE_WIDTH ilog2(CCS_MODE_CSLICE_MASK + 1)
+diff --git a/drivers/gpu/drm/xe/xe_device.h b/drivers/gpu/drm/xe/xe_device.h
+index 533ccfb2567a2c..41d6ca3cce96af 100644
+--- a/drivers/gpu/drm/xe/xe_device.h
++++ b/drivers/gpu/drm/xe/xe_device.h
+@@ -174,4 +174,18 @@ void xe_device_declare_wedged(struct xe_device *xe);
+ struct xe_file *xe_file_get(struct xe_file *xef);
+ void xe_file_put(struct xe_file *xef);
+
++/*
++ * Occasionally the G2H worker is seen to start running more than a second
++ * after being queued and activated by the Linux workqueue subsystem, which
++ * leads to a G2H timeout error. The root cause lies in the scheduling latency
++ * of the Lunarlake Hybrid CPU. The issue disappears if the Lunarlake atom
++ * cores are disabled in the BIOS, and that is beyond the xe kmd.
++ *
++ * TODO: Drop this change once workqueue scheduling delay issue is fixed on LNL Hybrid CPU.
++ */
++#define LNL_FLUSH_WORKQUEUE(wq__) \
++ flush_workqueue(wq__)
++#define LNL_FLUSH_WORK(wrk__) \
++ flush_work(wrk__)
++
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index 6e5ba381eadedd..6623287fd47307 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -129,12 +129,16 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ if (XE_IOCTL_DBG(xe, !q))
+ return -ENOENT;
+
+- if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_VM))
+- return -EINVAL;
++ if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_VM)) {
++ err = -EINVAL;
++ goto err_exec_queue;
++ }
+
+ if (XE_IOCTL_DBG(xe, args->num_batch_buffer &&
+- q->width != args->num_batch_buffer))
+- return -EINVAL;
++ q->width != args->num_batch_buffer)) {
++ err = -EINVAL;
++ goto err_exec_queue;
++ }
+
+ if (XE_IOCTL_DBG(xe, q->ops->reset_status(q))) {
+ err = -ECANCELED;
+@@ -208,6 +212,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm);
+ if (IS_ERR(fence)) {
+ err = PTR_ERR(fence);
++ xe_vm_unlock(vm);
+ goto err_unlock_list;
+ }
+ for (i = 0; i < num_syncs; i++)
+diff --git a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
+index d2e4dc3aaf613a..b8d832c8f9078d 100644
+--- a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
++++ b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
+@@ -68,6 +68,12 @@ static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines)
+ }
+ }
+
++ /*
++ * Mask bits need to be set for the register. Though only Xe2+
++ * platforms require setting the mask bits, it does no harm on older
++ * platforms, where these bits are unused.
++ */
++ mode |= CCS_MODE_CSLICE_0_3_MASK << 16;
+ xe_mmio_write32(gt, CCS_MODE, mode);
+
+ xe_gt_dbg(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n",
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 82795133e129ec..836c15253ce7ec 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -71,6 +71,8 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ struct xe_device *xe = gt_to_xe(gt);
+ struct xe_gt_tlb_invalidation_fence *fence, *next;
+
++ LNL_FLUSH_WORK(&gt->uc.guc.ct.g2h_worker);
++
+ spin_lock_irq(&gt->tlb_invalidation.pending_lock);
+ list_for_each_entry_safe(fence, next,
+ &gt->tlb_invalidation.pending_fences, link) {
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
+index cd9918e3896c09..12e1fe6a8da285 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.c
++++ b/drivers/gpu/drm/xe/xe_guc_ct.c
+@@ -888,6 +888,15 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+
+ ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
+
++ if (!ret) {
++ LNL_FLUSH_WORK(&ct->g2h_worker);
++ if (g2h_fence.done) {
++ xe_gt_warn(gt, "G2H fence %u, action %04x, done\n",
++ g2h_fence.seqno, action[0]);
++ ret = 1;
++ }
++ }
++
+ /*
+ * Ensure we serialize with completion side to prevent UAF with fence going out of scope on
+ * the stack, since we have no clue if it will fire after the timeout before we can erase
+diff --git a/drivers/gpu/drm/xe/xe_wait_user_fence.c b/drivers/gpu/drm/xe/xe_wait_user_fence.c
+index 92f65b9c528015..2bff43c5962e0c 100644
+--- a/drivers/gpu/drm/xe/xe_wait_user_fence.c
++++ b/drivers/gpu/drm/xe/xe_wait_user_fence.c
+@@ -155,6 +155,13 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
+ }
+
+ if (!timeout) {
++ LNL_FLUSH_WORKQUEUE(xe->ordered_wq);
++ err = do_compare(addr, args->value, args->mask,
++ args->op);
++ if (err <= 0) {
++ drm_dbg(&xe->drm, "LNL_FLUSH_WORKQUEUE resolved ufence timeout\n");
++ break;
++ }
+ err = -ETIME;
+ break;
+ }
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 988d0acbdf04dd..3fcf098f4f5696 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1875,7 +1875,7 @@ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags)
+
+ u32 len = hid_report_len(report) + 7;
+
+- return kmalloc(len, flags);
++ return kzalloc(len, flags);
+ }
+ EXPORT_SYMBOL_GPL(hid_alloc_report_buf);
+
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index edda6a70907b43..821f59e7ec3a65 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -442,7 +442,7 @@ int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
+ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ {
+ struct i2c_timings *t = &dev->timings;
+- unsigned int raw_intr_stats;
++ unsigned int raw_intr_stats, ic_stats;
+ unsigned int enable;
+ int timeout = 100;
+ bool abort_needed;
+@@ -450,9 +450,11 @@ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ int ret;
+
+ regmap_read(dev->map, DW_IC_RAW_INTR_STAT, &raw_intr_stats);
++ regmap_read(dev->map, DW_IC_STATUS, &ic_stats);
+ regmap_read(dev->map, DW_IC_ENABLE, &enable);
+
+- abort_needed = raw_intr_stats & DW_IC_INTR_MST_ON_HOLD;
++ abort_needed = (raw_intr_stats & DW_IC_INTR_MST_ON_HOLD) ||
++ (ic_stats & DW_IC_STATUS_MASTER_HOLD_TX_FIFO_EMPTY);
+ if (abort_needed) {
+ if (!(enable & DW_IC_ENABLE_ENABLE)) {
+ regmap_write(dev->map, DW_IC_ENABLE, DW_IC_ENABLE_ENABLE);
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index e45daedad96724..82eade5b811cd8 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -117,6 +117,7 @@
+ #define DW_IC_STATUS_RFNE BIT(3)
+ #define DW_IC_STATUS_MASTER_ACTIVITY BIT(5)
+ #define DW_IC_STATUS_SLAVE_ACTIVITY BIT(6)
++#define DW_IC_STATUS_MASTER_HOLD_TX_FIFO_EMPTY BIT(7)
+
+ #define DW_IC_SDA_HOLD_RX_SHIFT 16
+ #define DW_IC_SDA_HOLD_RX_MASK GENMASK(23, 16)
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 74f21e03d4a374..be559cef47777c 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -524,6 +524,13 @@ static int gic_irq_set_irqchip_state(struct irq_data *d,
+ }
+
+ gic_poke_irq(d, reg);
++
++ /*
++ * Force read-back to guarantee that the active state has taken
++ * effect, and won't race with a guest-driven deactivation.
++ */
++ if (reg == GICD_ISACTIVER)
++ gic_peek_irq(d, reg);
+ return 0;
+ }
+
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 17f0fab1e25496..8c0ede33af6fb5 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -1905,16 +1905,13 @@ static void check_migrations(struct work_struct *ws)
+ * This function gets called on the error paths of the constructor, so we
+ * have to cope with a partially initialised struct.
+ */
+-static void destroy(struct cache *cache)
++static void __destroy(struct cache *cache)
+ {
+- unsigned int i;
+-
+ mempool_exit(&cache->migration_pool);
+
+ if (cache->prison)
+ dm_bio_prison_destroy_v2(cache->prison);
+
+- cancel_delayed_work_sync(&cache->waker);
+ if (cache->wq)
+ destroy_workqueue(cache->wq);
+
+@@ -1942,13 +1939,22 @@ static void destroy(struct cache *cache)
+ if (cache->policy)
+ dm_cache_policy_destroy(cache->policy);
+
++ bioset_exit(&cache->bs);
++
++ kfree(cache);
++}
++
++static void destroy(struct cache *cache)
++{
++ unsigned int i;
++
++ cancel_delayed_work_sync(&cache->waker);
++
+ for (i = 0; i < cache->nr_ctr_args ; i++)
+ kfree(cache->ctr_args[i]);
+ kfree(cache->ctr_args);
+
+- bioset_exit(&cache->bs);
+-
+- kfree(cache);
++ __destroy(cache);
+ }
+
+ static void cache_dtr(struct dm_target *ti)
+@@ -2003,7 +2009,6 @@ struct cache_args {
+ sector_t cache_sectors;
+
+ struct dm_dev *origin_dev;
+- sector_t origin_sectors;
+
+ uint32_t block_size;
+
+@@ -2084,6 +2089,7 @@ static int parse_cache_dev(struct cache_args *ca, struct dm_arg_set *as,
+ static int parse_origin_dev(struct cache_args *ca, struct dm_arg_set *as,
+ char **error)
+ {
++ sector_t origin_sectors;
+ int r;
+
+ if (!at_least_one_arg(as, error))
+@@ -2096,8 +2102,8 @@ static int parse_origin_dev(struct cache_args *ca, struct dm_arg_set *as,
+ return r;
+ }
+
+- ca->origin_sectors = get_dev_size(ca->origin_dev);
+- if (ca->ti->len > ca->origin_sectors) {
++ origin_sectors = get_dev_size(ca->origin_dev);
++ if (ca->ti->len > origin_sectors) {
+ *error = "Device size larger than cached device";
+ return -EINVAL;
+ }
+@@ -2407,7 +2413,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
+
+ ca->metadata_dev = ca->origin_dev = ca->cache_dev = NULL;
+
+- origin_blocks = cache->origin_sectors = ca->origin_sectors;
++ origin_blocks = cache->origin_sectors = ti->len;
+ origin_blocks = block_div(origin_blocks, ca->block_size);
+ cache->origin_blocks = to_oblock(origin_blocks);
+
+@@ -2561,7 +2567,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
+ *result = cache;
+ return 0;
+ bad:
+- destroy(cache);
++ __destroy(cache);
+ return r;
+ }
+
+@@ -2612,7 +2618,7 @@ static int cache_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+
+ r = copy_ctr_args(cache, argc - 3, (const char **)argv + 3);
+ if (r) {
+- destroy(cache);
++ __destroy(cache);
+ goto out;
+ }
+
+@@ -2895,19 +2901,19 @@ static dm_cblock_t get_cache_dev_size(struct cache *cache)
+ static bool can_resize(struct cache *cache, dm_cblock_t new_size)
+ {
+ if (from_cblock(new_size) > from_cblock(cache->cache_size)) {
+- if (cache->sized) {
+- DMERR("%s: unable to extend cache due to missing cache table reload",
+- cache_device_name(cache));
+- return false;
+- }
++ DMERR("%s: unable to extend cache due to missing cache table reload",
++ cache_device_name(cache));
++ return false;
+ }
+
+ /*
+ * We can't drop a dirty block when shrinking the cache.
+ */
+- while (from_cblock(new_size) < from_cblock(cache->cache_size)) {
+- new_size = to_cblock(from_cblock(new_size) + 1);
+- if (is_dirty(cache, new_size)) {
++ if (cache->loaded_mappings) {
++ new_size = to_cblock(find_next_bit(cache->dirty_bitset,
++ from_cblock(cache->cache_size),
++ from_cblock(new_size)));
++ if (new_size != cache->cache_size) {
+ DMERR("%s: unable to shrink cache; cache block %llu is dirty",
+ cache_device_name(cache),
+ (unsigned long long) from_cblock(new_size));
+@@ -2943,20 +2949,15 @@ static int cache_preresume(struct dm_target *ti)
+ /*
+ * Check to see if the cache has resized.
+ */
+- if (!cache->sized) {
+- r = resize_cache_dev(cache, csize);
+- if (r)
+- return r;
+-
+- cache->sized = true;
+-
+- } else if (csize != cache->cache_size) {
++ if (!cache->sized || csize != cache->cache_size) {
+ if (!can_resize(cache, csize))
+ return -EINVAL;
+
+ r = resize_cache_dev(cache, csize);
+ if (r)
+ return r;
++
++ cache->sized = true;
+ }
+
+ if (!cache->loaded_mappings) {
+diff --git a/drivers/md/dm-unstripe.c b/drivers/md/dm-unstripe.c
+index 48587c16c44570..e8a9432057dce1 100644
+--- a/drivers/md/dm-unstripe.c
++++ b/drivers/md/dm-unstripe.c
+@@ -85,8 +85,8 @@ static int unstripe_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ }
+ uc->physical_start = start;
+
+- uc->unstripe_offset = uc->unstripe * uc->chunk_size;
+- uc->unstripe_width = (uc->stripes - 1) * uc->chunk_size;
++ uc->unstripe_offset = (sector_t)uc->unstripe * uc->chunk_size;
++ uc->unstripe_width = (sector_t)(uc->stripes - 1) * uc->chunk_size;
+ uc->chunk_shift = is_power_of_2(uc->chunk_size) ? fls(uc->chunk_size) - 1 : 0;
+
+ tmp_len = ti->len;
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index ff4a6b570b7644..19230404d8c2bd 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2290,8 +2290,10 @@ static struct mapped_device *alloc_dev(int minor)
+ * override accordingly.
+ */
+ md->disk = blk_alloc_disk(NULL, md->numa_node_id);
+- if (IS_ERR(md->disk))
++ if (IS_ERR(md->disk)) {
++ md->disk = NULL;
+ goto bad;
++ }
+ md->queue = md->disk->queue;
+
+ init_waitqueue_head(&md->wait);
+diff --git a/drivers/media/cec/usb/pulse8/pulse8-cec.c b/drivers/media/cec/usb/pulse8/pulse8-cec.c
+index ba67587bd43ec0..171366fe35443b 100644
+--- a/drivers/media/cec/usb/pulse8/pulse8-cec.c
++++ b/drivers/media/cec/usb/pulse8/pulse8-cec.c
+@@ -685,7 +685,7 @@ static int pulse8_setup(struct pulse8 *pulse8, struct serio *serio,
+ err = pulse8_send_and_wait(pulse8, cmd, 1, cmd[0], 4);
+ if (err)
+ return err;
+- date = (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | data[3];
++ date = ((unsigned)data[0] << 24) | (data[1] << 16) | (data[2] << 8) | data[3];
+ dev_info(pulse8->dev, "Firmware build date %ptT\n", &date);
+
+ dev_dbg(pulse8->dev, "Persistent config:\n");
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index 642c48e8c1f584..ded11cd8dbf7c6 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -1795,6 +1795,9 @@ static void tpg_precalculate_line(struct tpg_data *tpg)
+ unsigned p;
+ unsigned x;
+
++ if (WARN_ON_ONCE(!tpg->src_width || !tpg->scaled_width))
++ return;
++
+ switch (tpg->pattern) {
+ case TPG_PAT_GREEN:
+ contrast = TPG_COLOR_100_RED;
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index 4f78f30b3646e4..a05aa271a1baa7 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -443,8 +443,8 @@ static int dvb_frontend_swzigzag_autotune(struct dvb_frontend *fe, int check_wra
+
+ default:
+ fepriv->auto_step++;
+- fepriv->auto_sub_step = -1; /* it'll be incremented to 0 in a moment */
+- break;
++ fepriv->auto_sub_step = 0;
++ continue;
+ }
+
+ if (!ready) fepriv->auto_sub_step++;
+diff --git a/drivers/media/dvb-core/dvb_vb2.c b/drivers/media/dvb-core/dvb_vb2.c
+index 192a8230c4aa96..29edaaff7a5c9d 100644
+--- a/drivers/media/dvb-core/dvb_vb2.c
++++ b/drivers/media/dvb-core/dvb_vb2.c
+@@ -366,9 +366,15 @@ int dvb_vb2_querybuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
+ int dvb_vb2_expbuf(struct dvb_vb2_ctx *ctx, struct dmx_exportbuffer *exp)
+ {
+ struct vb2_queue *q = &ctx->vb_q;
++ struct vb2_buffer *vb2 = vb2_get_buffer(q, exp->index);
+ int ret;
+
+- ret = vb2_core_expbuf(&ctx->vb_q, &exp->fd, q->type, q->bufs[exp->index],
++ if (!vb2) {
++ dprintk(1, "[%s] invalid buffer index\n", ctx->name);
++ return -EINVAL;
++ }
++
++ ret = vb2_core_expbuf(&ctx->vb_q, &exp->fd, q->type, vb2,
+ 0, exp->flags);
+ if (ret) {
+ dprintk(1, "[%s] index=%d errno=%d\n", ctx->name,
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index b43695bc51e754..14f323fbada719 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -86,10 +86,15 @@ static DECLARE_RWSEM(minor_rwsem);
+ static int dvb_device_open(struct inode *inode, struct file *file)
+ {
+ struct dvb_device *dvbdev;
++ unsigned int minor = iminor(inode);
++
++ if (minor >= MAX_DVB_MINORS)
++ return -ENODEV;
+
+ mutex_lock(&dvbdev_mutex);
+ down_read(&minor_rwsem);
+- dvbdev = dvb_minors[iminor(inode)];
++
++ dvbdev = dvb_minors[minor];
+
+ if (dvbdev && dvbdev->fops) {
+ int err = 0;
+@@ -525,7 +530,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ for (minor = 0; minor < MAX_DVB_MINORS; minor++)
+ if (!dvb_minors[minor])
+ break;
+- if (minor == MAX_DVB_MINORS) {
++ if (minor >= MAX_DVB_MINORS) {
+ if (new_node) {
+ list_del(&new_node->list_head);
+ kfree(dvbdevfops);
+@@ -540,6 +545,14 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ }
+ #else
+ minor = nums2minor(adap->num, type, id);
++ if (minor >= MAX_DVB_MINORS) {
++ dvb_media_device_free(dvbdev);
++ list_del(&dvbdev->list_head);
++ kfree(dvbdev);
++ *pdvbdev = NULL;
++ mutex_unlock(&dvbdev_register_lock);
++ return ret;
++ }
+ #endif
+ dvbdev->minor = minor;
+ dvb_minors[minor] = dvb_device_get(dvbdev);
+diff --git a/drivers/media/dvb-frontends/cx24116.c b/drivers/media/dvb-frontends/cx24116.c
+index 8b978a9f74a4e5..f5dd3a81725a72 100644
+--- a/drivers/media/dvb-frontends/cx24116.c
++++ b/drivers/media/dvb-frontends/cx24116.c
+@@ -741,6 +741,7 @@ static int cx24116_read_snr_pct(struct dvb_frontend *fe, u16 *snr)
+ {
+ struct cx24116_state *state = fe->demodulator_priv;
+ u8 snr_reading;
++ int ret;
+ static const u32 snr_tab[] = { /* 10 x Table (rounded up) */
+ 0x00000, 0x0199A, 0x03333, 0x04ccD, 0x06667,
+ 0x08000, 0x0999A, 0x0b333, 0x0cccD, 0x0e667,
+@@ -749,7 +750,11 @@ static int cx24116_read_snr_pct(struct dvb_frontend *fe, u16 *snr)
+
+ dprintk("%s()\n", __func__);
+
+- snr_reading = cx24116_readreg(state, CX24116_REG_QUALITY0);
++ ret = cx24116_readreg(state, CX24116_REG_QUALITY0);
++ if (ret < 0)
++ return ret;
++
++ snr_reading = ret;
+
+ if (snr_reading >= 0xa0 /* 100% */)
+ *snr = 0xffff;
+diff --git a/drivers/media/dvb-frontends/stb0899_algo.c b/drivers/media/dvb-frontends/stb0899_algo.c
+index df89c33dac23c5..40537c4ccb0d75 100644
+--- a/drivers/media/dvb-frontends/stb0899_algo.c
++++ b/drivers/media/dvb-frontends/stb0899_algo.c
+@@ -269,7 +269,7 @@ static enum stb0899_status stb0899_search_carrier(struct stb0899_state *state)
+
+ short int derot_freq = 0, last_derot_freq = 0, derot_limit, next_loop = 3;
+ int index = 0;
+- u8 cfr[2];
++ u8 cfr[2] = {0};
+ u8 reg;
+
+ internal->status = NOCARRIER;
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index 48230d5109f054..272945a878b3ce 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2519,10 +2519,10 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
+ const struct adv76xx_chip_info *info = state->info;
+ struct v4l2_dv_timings timings;
+ struct stdi_readback stdi;
+- u8 reg_io_0x02 = io_read(sd, 0x02);
++ int ret;
++ u8 reg_io_0x02;
+ u8 edid_enabled;
+ u8 cable_det;
+-
+ static const char * const csc_coeff_sel_rb[16] = {
+ "bypassed", "YPbPr601 -> RGB", "reserved", "YPbPr709 -> RGB",
+ "reserved", "RGB -> YPbPr601", "reserved", "RGB -> YPbPr709",
+@@ -2621,13 +2621,21 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
+ v4l2_info(sd, "-----Color space-----\n");
+ v4l2_info(sd, "RGB quantization range ctrl: %s\n",
+ rgb_quantization_range_txt[state->rgb_quantization_range]);
+- v4l2_info(sd, "Input color space: %s\n",
+- input_color_space_txt[reg_io_0x02 >> 4]);
+- v4l2_info(sd, "Output color space: %s %s, alt-gamma %s\n",
+- (reg_io_0x02 & 0x02) ? "RGB" : "YCbCr",
+- (((reg_io_0x02 >> 2) & 0x01) ^ (reg_io_0x02 & 0x01)) ?
+- "(16-235)" : "(0-255)",
+- (reg_io_0x02 & 0x08) ? "enabled" : "disabled");
++
++ ret = io_read(sd, 0x02);
++ if (ret < 0) {
++ v4l2_info(sd, "Can't read Input/Output color space\n");
++ } else {
++ reg_io_0x02 = ret;
++
++ v4l2_info(sd, "Input color space: %s\n",
++ input_color_space_txt[reg_io_0x02 >> 4]);
++ v4l2_info(sd, "Output color space: %s %s, alt-gamma %s\n",
++ (reg_io_0x02 & 0x02) ? "RGB" : "YCbCr",
++ (((reg_io_0x02 >> 2) & 0x01) ^ (reg_io_0x02 & 0x01)) ?
++ "(16-235)" : "(0-255)",
++ (reg_io_0x02 & 0x08) ? "enabled" : "disabled");
++ }
+ v4l2_info(sd, "Color space conversion: %s\n",
+ csc_coeff_sel_rb[cp_read(sd, info->cp_csc) >> 4]);
+
+diff --git a/drivers/media/i2c/ar0521.c b/drivers/media/i2c/ar0521.c
+index d557f3b3de3d33..21f7b76513e78c 100644
+--- a/drivers/media/i2c/ar0521.c
++++ b/drivers/media/i2c/ar0521.c
+@@ -255,10 +255,10 @@ static u32 calc_pll(struct ar0521_dev *sensor, u32 freq, u16 *pre_ptr, u16 *mult
+ continue; /* Minimum value */
+ if (new_mult > 254)
+ break; /* Maximum, larger pre won't work either */
+- if (sensor->extclk_freq * (u64)new_mult < AR0521_PLL_MIN *
++ if (sensor->extclk_freq * (u64)new_mult < (u64)AR0521_PLL_MIN *
+ new_pre)
+ continue;
+- if (sensor->extclk_freq * (u64)new_mult > AR0521_PLL_MAX *
++ if (sensor->extclk_freq * (u64)new_mult > (u64)AR0521_PLL_MAX *
+ new_pre)
+ break; /* Larger pre won't work either */
+ new_pll = div64_round_up(sensor->extclk_freq * (u64)new_mult,
+diff --git a/drivers/media/pci/mgb4/mgb4_cmt.c b/drivers/media/pci/mgb4/mgb4_cmt.c
+index 70dc78ef193c73..a25b68403bc608 100644
+--- a/drivers/media/pci/mgb4/mgb4_cmt.c
++++ b/drivers/media/pci/mgb4/mgb4_cmt.c
+@@ -227,6 +227,8 @@ void mgb4_cmt_set_vin_freq_range(struct mgb4_vin_dev *vindev,
+ u32 config;
+ size_t i;
+
++ freq_range = array_index_nospec(freq_range, ARRAY_SIZE(cmt_vals_in));
++
+ addr = cmt_addrs_in[vindev->config->id];
+ reg_set = cmt_vals_in[freq_range];
+
+diff --git a/drivers/media/platform/samsung/s5p-jpeg/jpeg-core.c b/drivers/media/platform/samsung/s5p-jpeg/jpeg-core.c
+index d2c4a0178b3c5c..1db4609b35574f 100644
+--- a/drivers/media/platform/samsung/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/samsung/s5p-jpeg/jpeg-core.c
+@@ -775,11 +775,14 @@ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2;
+ jpeg_buffer.curr = 0;
+
+- word = 0;
+-
+ if (get_word_be(&jpeg_buffer, &word))
+ return;
+- jpeg_buffer.size = (long)word - 2;
++
++ if (word < 2)
++ jpeg_buffer.size = 0;
++ else
++ jpeg_buffer.size = (long)word - 2;
++
+ jpeg_buffer.data += 2;
+ jpeg_buffer.curr = 0;
+
+@@ -1058,6 +1061,7 @@ static int get_word_be(struct s5p_jpeg_buffer *buf, unsigned int *word)
+ if (byte == -1)
+ return -1;
+ *word = (unsigned int)byte | temp;
++
+ return 0;
+ }
+
+@@ -1145,7 +1149,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ if (get_word_be(&jpeg_buffer, &word))
+ break;
+ length = (long)word - 2;
+- if (!length)
++ if (length <= 0)
+ return false;
+ sof = jpeg_buffer.curr; /* after 0xffc0 */
+ sof_len = length;
+@@ -1176,7 +1180,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ if (get_word_be(&jpeg_buffer, &word))
+ break;
+ length = (long)word - 2;
+- if (!length)
++ if (length <= 0)
+ return false;
+ if (n_dqt >= S5P_JPEG_MAX_MARKER)
+ return false;
+@@ -1189,7 +1193,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ if (get_word_be(&jpeg_buffer, &word))
+ break;
+ length = (long)word - 2;
+- if (!length)
++ if (length <= 0)
+ return false;
+ if (n_dht >= S5P_JPEG_MAX_MARKER)
+ return false;
+@@ -1214,6 +1218,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ if (get_word_be(&jpeg_buffer, &word))
+ break;
+ length = (long)word - 2;
++ /* No need to check underflows as skip() does it */
+ skip(&jpeg_buffer, length);
+ break;
+ }
+diff --git a/drivers/media/test-drivers/vivid/vivid-core.c b/drivers/media/test-drivers/vivid/vivid-core.c
+index 00e0d08af3573b..4f330f4fc6be9e 100644
+--- a/drivers/media/test-drivers/vivid/vivid-core.c
++++ b/drivers/media/test-drivers/vivid/vivid-core.c
+@@ -910,7 +910,7 @@ static int vivid_create_queue(struct vivid_dev *dev,
+ * videobuf2-core.c to MAX_BUFFER_INDEX.
+ */
+ if (buf_type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+- q->max_num_buffers = 64;
++ q->max_num_buffers = MAX_VID_CAP_BUFFERS;
+ if (buf_type == V4L2_BUF_TYPE_SDR_CAPTURE)
+ q->max_num_buffers = 1024;
+ if (buf_type == V4L2_BUF_TYPE_VBI_CAPTURE)
+diff --git a/drivers/media/test-drivers/vivid/vivid-core.h b/drivers/media/test-drivers/vivid/vivid-core.h
+index cc18a3bc6dc0b2..d2d52763b11977 100644
+--- a/drivers/media/test-drivers/vivid/vivid-core.h
++++ b/drivers/media/test-drivers/vivid/vivid-core.h
+@@ -26,6 +26,8 @@
+ #define MAX_INPUTS 16
+ /* The maximum number of outputs */
+ #define MAX_OUTPUTS 16
++/* The maximum number of video capture buffers */
++#define MAX_VID_CAP_BUFFERS 64
+ /* The maximum up or down scaling factor is 4 */
+ #define MAX_ZOOM 4
+ /* The maximum image width/height are set to 4K DMT */
+@@ -481,7 +483,7 @@ struct vivid_dev {
+ /* video capture */
+ struct tpg_data tpg;
+ unsigned ms_vid_cap;
+- bool must_blank[VIDEO_MAX_FRAME];
++ bool must_blank[MAX_VID_CAP_BUFFERS];
+
+ const struct vivid_fmt *fmt_cap;
+ struct v4l2_fract timeperframe_vid_cap;
+diff --git a/drivers/media/test-drivers/vivid/vivid-ctrls.c b/drivers/media/test-drivers/vivid/vivid-ctrls.c
+index 8bb38bc7b8cc27..2b5c8fbcd0a278 100644
+--- a/drivers/media/test-drivers/vivid/vivid-ctrls.c
++++ b/drivers/media/test-drivers/vivid/vivid-ctrls.c
+@@ -553,7 +553,7 @@ static int vivid_vid_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+ break;
+ case VIVID_CID_PERCENTAGE_FILL:
+ tpg_s_perc_fill(&dev->tpg, ctrl->val);
+- for (i = 0; i < VIDEO_MAX_FRAME; i++)
++ for (i = 0; i < MAX_VID_CAP_BUFFERS; i++)
+ dev->must_blank[i] = ctrl->val < 100;
+ break;
+ case VIVID_CID_INSERT_SAV:
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 69620e0a35a02f..6a790ac8cbe689 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -213,7 +213,7 @@ static int vid_cap_start_streaming(struct vb2_queue *vq, unsigned count)
+
+ dev->vid_cap_seq_count = 0;
+ dprintk(dev, 1, "%s\n", __func__);
+- for (i = 0; i < VIDEO_MAX_FRAME; i++)
++ for (i = 0; i < MAX_VID_CAP_BUFFERS; i++)
+ dev->must_blank[i] = tpg_g_perc_fill(&dev->tpg) < 100;
+ if (dev->start_streaming_error) {
+ dev->start_streaming_error = false;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index f0febdc08c2d65..2bba7123ea5e98 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -371,7 +371,7 @@ static int uvc_parse_format(struct uvc_device *dev,
+ * Parse the frame descriptors. Only uncompressed, MJPEG and frame
+ * based formats have frame descriptors.
+ */
+- while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
++ while (ftype && buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
+ buffer[2] == ftype) {
+ unsigned int maxIntervalIndex;
+
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-api.c b/drivers/media/v4l2-core/v4l2-ctrls-api.c
+index e5a364efd5e668..95a2202879d8c1 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-api.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-api.c
+@@ -753,9 +753,10 @@ static int get_ctrl(struct v4l2_ctrl *ctrl, struct v4l2_ext_control *c)
+ for (i = 0; i < master->ncontrols; i++)
+ cur_to_new(master->cluster[i]);
+ ret = call_op(master, g_volatile_ctrl);
+- new_to_user(c, ctrl);
++ if (!ret)
++ ret = new_to_user(c, ctrl);
+ } else {
+- cur_to_user(c, ctrl);
++ ret = cur_to_user(c, ctrl);
+ }
+ v4l2_ctrl_unlock(master);
+ return ret;
+@@ -770,7 +771,10 @@ int v4l2_g_ctrl(struct v4l2_ctrl_handler *hdl, struct v4l2_control *control)
+ if (!ctrl || !ctrl->is_int)
+ return -EINVAL;
+ ret = get_ctrl(ctrl, &c);
+- control->value = c.value;
++
++ if (!ret)
++ control->value = c.value;
++
+ return ret;
+ }
+ EXPORT_SYMBOL(v4l2_g_ctrl);
+@@ -811,10 +815,11 @@ static int set_ctrl_lock(struct v4l2_fh *fh, struct v4l2_ctrl *ctrl,
+ int ret;
+
+ v4l2_ctrl_lock(ctrl);
+- user_to_new(c, ctrl);
+- ret = set_ctrl(fh, ctrl, 0);
++ ret = user_to_new(c, ctrl);
++ if (!ret)
++ ret = set_ctrl(fh, ctrl, 0);
+ if (!ret)
+- cur_to_user(c, ctrl);
++ ret = cur_to_user(c, ctrl);
+ v4l2_ctrl_unlock(ctrl);
+ return ret;
+ }
+diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
+index c63f7fc1e69177..511615dc334196 100644
+--- a/drivers/net/can/c_can/c_can_main.c
++++ b/drivers/net/can/c_can/c_can_main.c
+@@ -1011,7 +1011,6 @@ static int c_can_handle_bus_err(struct net_device *dev,
+
+ /* common for all type of bus errors */
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ /* propagate the error condition to the CAN stack */
+ skb = alloc_can_err_skb(dev, &cf);
+@@ -1027,26 +1026,32 @@ static int c_can_handle_bus_err(struct net_device *dev,
+ case LEC_STUFF_ERROR:
+ netdev_dbg(dev, "stuff error\n");
+ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ stats->rx_errors++;
+ break;
+ case LEC_FORM_ERROR:
+ netdev_dbg(dev, "form error\n");
+ cf->data[2] |= CAN_ERR_PROT_FORM;
++ stats->rx_errors++;
+ break;
+ case LEC_ACK_ERROR:
+ netdev_dbg(dev, "ack error\n");
+ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ stats->tx_errors++;
+ break;
+ case LEC_BIT1_ERROR:
+ netdev_dbg(dev, "bit1 error\n");
+ cf->data[2] |= CAN_ERR_PROT_BIT1;
++ stats->tx_errors++;
+ break;
+ case LEC_BIT0_ERROR:
+ netdev_dbg(dev, "bit0 error\n");
+ cf->data[2] |= CAN_ERR_PROT_BIT0;
++ stats->tx_errors++;
+ break;
+ case LEC_CRC_ERROR:
+ netdev_dbg(dev, "CRC error\n");
+ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ stats->rx_errors++;
+ break;
+ default:
+ break;
+diff --git a/drivers/net/can/cc770/Kconfig b/drivers/net/can/cc770/Kconfig
+index 467ef19de1c183..aae25c2f849e45 100644
+--- a/drivers/net/can/cc770/Kconfig
++++ b/drivers/net/can/cc770/Kconfig
+@@ -7,7 +7,7 @@ if CAN_CC770
+
+ config CAN_CC770_ISA
+ tristate "ISA Bus based legacy CC770 driver"
+- depends on ISA
++ depends on HAS_IOPORT
+ help
+ This driver adds legacy support for CC770 and AN82527 chips
+ connected to the ISA bus using I/O port, memory mapped or
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 7fec04b024d5b8..39333466d6d276 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1764,7 +1764,8 @@ static int m_can_close(struct net_device *dev)
+ netif_stop_queue(dev);
+
+ m_can_stop(dev);
+- free_irq(dev->irq, dev);
++ if (dev->irq)
++ free_irq(dev->irq, dev);
+
+ m_can_clean(dev);
+
+diff --git a/drivers/net/can/sja1000/Kconfig b/drivers/net/can/sja1000/Kconfig
+index 01168db4c10653..2f516cc6d22c40 100644
+--- a/drivers/net/can/sja1000/Kconfig
++++ b/drivers/net/can/sja1000/Kconfig
+@@ -87,7 +87,7 @@ config CAN_PLX_PCI
+
+ config CAN_SJA1000_ISA
+ tristate "ISA Bus based legacy SJA1000 driver"
+- depends on ISA
++ depends on HAS_IOPORT
+ help
+ This driver adds legacy support for SJA1000 chips connected to
+ the ISA bus using I/O port, memory mapped or indirect access.
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+index 83c18035b2a24d..4ea01d3d36d56a 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+@@ -2,7 +2,7 @@
+ //
+ // mcp251xfd - Microchip MCP251xFD Family CAN controller driver
+ //
+-// Copyright (c) 2019, 2020, 2021 Pengutronix,
++// Copyright (c) 2019, 2020, 2021, 2024 Pengutronix,
+ // Marc Kleine-Budde <kernel@pengutronix.de>
+ //
+ // Based on:
+@@ -483,9 +483,11 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv)
+ };
+ const struct ethtool_coalesce ec = {
+ .rx_coalesce_usecs_irq = priv->rx_coalesce_usecs_irq,
+- .rx_max_coalesced_frames_irq = priv->rx_obj_num_coalesce_irq,
++ .rx_max_coalesced_frames_irq = priv->rx_obj_num_coalesce_irq == 0 ?
++ 1 : priv->rx_obj_num_coalesce_irq,
+ .tx_coalesce_usecs_irq = priv->tx_coalesce_usecs_irq,
+- .tx_max_coalesced_frames_irq = priv->tx_obj_num_coalesce_irq,
++ .tx_max_coalesced_frames_irq = priv->tx_obj_num_coalesce_irq == 0 ?
++ 1 : priv->tx_obj_num_coalesce_irq,
+ };
+ struct can_ram_layout layout;
+
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+index f732556d233a7b..d3ac865933fdf6 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+@@ -16,9 +16,9 @@
+
+ #include "mcp251xfd.h"
+
+-static inline bool mcp251xfd_tx_fifo_sta_full(u32 fifo_sta)
++static inline bool mcp251xfd_tx_fifo_sta_empty(u32 fifo_sta)
+ {
+- return !(fifo_sta & MCP251XFD_REG_FIFOSTA_TFNRFNIF);
++ return fifo_sta & MCP251XFD_REG_FIFOSTA_TFERFFIF;
+ }
+
+ static inline int
+@@ -122,7 +122,11 @@ mcp251xfd_get_tef_len(struct mcp251xfd_priv *priv, u8 *len_p)
+ if (err)
+ return err;
+
+- if (mcp251xfd_tx_fifo_sta_full(fifo_sta)) {
++ /* If the chip says the TX-FIFO is empty, but there are no TX
++ * buffers free in the ring, we assume all have been sent.
++ */
++ if (mcp251xfd_tx_fifo_sta_empty(fifo_sta) &&
++ mcp251xfd_get_tx_free(tx_ring) == 0) {
+ *len_p = tx_ring->obj_num;
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/arc/emac_main.c b/drivers/net/ethernet/arc/emac_main.c
+index 31ee477dd131e8..8283aeee35fb6d 100644
+--- a/drivers/net/ethernet/arc/emac_main.c
++++ b/drivers/net/ethernet/arc/emac_main.c
+@@ -111,6 +111,7 @@ static void arc_emac_tx_clean(struct net_device *ndev)
+ {
+ struct arc_emac_priv *priv = netdev_priv(ndev);
+ struct net_device_stats *stats = &ndev->stats;
++ struct device *dev = ndev->dev.parent;
+ unsigned int i;
+
+ for (i = 0; i < TX_BD_NUM; i++) {
+@@ -140,7 +141,7 @@ static void arc_emac_tx_clean(struct net_device *ndev)
+ stats->tx_bytes += skb->len;
+ }
+
+- dma_unmap_single(&ndev->dev, dma_unmap_addr(tx_buff, addr),
++ dma_unmap_single(dev, dma_unmap_addr(tx_buff, addr),
+ dma_unmap_len(tx_buff, len), DMA_TO_DEVICE);
+
+ /* return the sk_buff to system */
+@@ -174,6 +175,7 @@ static void arc_emac_tx_clean(struct net_device *ndev)
+ static int arc_emac_rx(struct net_device *ndev, int budget)
+ {
+ struct arc_emac_priv *priv = netdev_priv(ndev);
++ struct device *dev = ndev->dev.parent;
+ unsigned int work_done;
+
+ for (work_done = 0; work_done < budget; work_done++) {
+@@ -223,9 +225,9 @@ static int arc_emac_rx(struct net_device *ndev, int budget)
+ continue;
+ }
+
+- addr = dma_map_single(&ndev->dev, (void *)skb->data,
++ addr = dma_map_single(dev, (void *)skb->data,
+ EMAC_BUFFER_SIZE, DMA_FROM_DEVICE);
+- if (dma_mapping_error(&ndev->dev, addr)) {
++ if (dma_mapping_error(dev, addr)) {
+ if (net_ratelimit())
+ netdev_err(ndev, "cannot map dma buffer\n");
+ dev_kfree_skb(skb);
+@@ -237,7 +239,7 @@ static int arc_emac_rx(struct net_device *ndev, int budget)
+ }
+
+ 	/* unmap previously mapped skb */
+- dma_unmap_single(&ndev->dev, dma_unmap_addr(rx_buff, addr),
++ dma_unmap_single(dev, dma_unmap_addr(rx_buff, addr),
+ dma_unmap_len(rx_buff, len), DMA_FROM_DEVICE);
+
+ pktlen = info & LEN_MASK;
+@@ -423,6 +425,7 @@ static int arc_emac_open(struct net_device *ndev)
+ {
+ struct arc_emac_priv *priv = netdev_priv(ndev);
+ struct phy_device *phy_dev = ndev->phydev;
++ struct device *dev = ndev->dev.parent;
+ int i;
+
+ phy_dev->autoneg = AUTONEG_ENABLE;
+@@ -445,9 +448,9 @@ static int arc_emac_open(struct net_device *ndev)
+ if (unlikely(!rx_buff->skb))
+ return -ENOMEM;
+
+- addr = dma_map_single(&ndev->dev, (void *)rx_buff->skb->data,
++ addr = dma_map_single(dev, (void *)rx_buff->skb->data,
+ EMAC_BUFFER_SIZE, DMA_FROM_DEVICE);
+- if (dma_mapping_error(&ndev->dev, addr)) {
++ if (dma_mapping_error(dev, addr)) {
+ netdev_err(ndev, "cannot dma map\n");
+ dev_kfree_skb(rx_buff->skb);
+ return -ENOMEM;
+@@ -548,6 +551,7 @@ static void arc_emac_set_rx_mode(struct net_device *ndev)
+ static void arc_free_tx_queue(struct net_device *ndev)
+ {
+ struct arc_emac_priv *priv = netdev_priv(ndev);
++ struct device *dev = ndev->dev.parent;
+ unsigned int i;
+
+ for (i = 0; i < TX_BD_NUM; i++) {
+@@ -555,7 +559,7 @@ static void arc_free_tx_queue(struct net_device *ndev)
+ struct buffer_state *tx_buff = &priv->tx_buff[i];
+
+ if (tx_buff->skb) {
+- dma_unmap_single(&ndev->dev,
++ dma_unmap_single(dev,
+ dma_unmap_addr(tx_buff, addr),
+ dma_unmap_len(tx_buff, len),
+ DMA_TO_DEVICE);
+@@ -579,6 +583,7 @@ static void arc_free_tx_queue(struct net_device *ndev)
+ static void arc_free_rx_queue(struct net_device *ndev)
+ {
+ struct arc_emac_priv *priv = netdev_priv(ndev);
++ struct device *dev = ndev->dev.parent;
+ unsigned int i;
+
+ for (i = 0; i < RX_BD_NUM; i++) {
+@@ -586,7 +591,7 @@ static void arc_free_rx_queue(struct net_device *ndev)
+ struct buffer_state *rx_buff = &priv->rx_buff[i];
+
+ if (rx_buff->skb) {
+- dma_unmap_single(&ndev->dev,
++ dma_unmap_single(dev,
+ dma_unmap_addr(rx_buff, addr),
+ dma_unmap_len(rx_buff, len),
+ DMA_FROM_DEVICE);
+@@ -679,6 +684,7 @@ static netdev_tx_t arc_emac_tx(struct sk_buff *skb, struct net_device *ndev)
+ unsigned int len, *txbd_curr = &priv->txbd_curr;
+ struct net_device_stats *stats = &ndev->stats;
+ __le32 *info = &priv->txbd[*txbd_curr].info;
++ struct device *dev = ndev->dev.parent;
+ dma_addr_t addr;
+
+ if (skb_padto(skb, ETH_ZLEN))
+@@ -692,10 +698,9 @@ static netdev_tx_t arc_emac_tx(struct sk_buff *skb, struct net_device *ndev)
+ return NETDEV_TX_BUSY;
+ }
+
+- addr = dma_map_single(&ndev->dev, (void *)skb->data, len,
+- DMA_TO_DEVICE);
++ addr = dma_map_single(dev, (void *)skb->data, len, DMA_TO_DEVICE);
+
+- if (unlikely(dma_mapping_error(&ndev->dev, addr))) {
++ if (unlikely(dma_mapping_error(dev, addr))) {
+ stats->tx_dropped++;
+ stats->tx_errors++;
+ dev_kfree_skb_any(skb);
+diff --git a/drivers/net/ethernet/arc/emac_mdio.c b/drivers/net/ethernet/arc/emac_mdio.c
+index 87f40c2ba90404..078b1a72c16135 100644
+--- a/drivers/net/ethernet/arc/emac_mdio.c
++++ b/drivers/net/ethernet/arc/emac_mdio.c
+@@ -133,6 +133,7 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
+ struct arc_emac_mdio_bus_data *data = &priv->bus_data;
+ struct device_node *np = priv->dev->of_node;
+ const char *name = "Synopsys MII Bus";
++ struct device_node *mdio_node;
+ struct mii_bus *bus;
+ int error;
+
+@@ -164,7 +165,13 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
+
+ snprintf(bus->id, MII_BUS_ID_SIZE, "%s", bus->name);
+
+- error = of_mdiobus_register(bus, priv->dev->of_node);
++ /* Backwards compatibility for EMAC nodes without MDIO subnode. */
++ mdio_node = of_get_child_by_name(np, "mdio");
++ if (!mdio_node)
++ mdio_node = of_node_get(np);
++
++ error = of_mdiobus_register(bus, mdio_node);
++ of_node_put(mdio_node);
+ if (error) {
+ mdiobus_free(bus);
+ return dev_err_probe(priv->dev, error,
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
+index 6f0e58a2a58ad4..9e1d44ae92ccee 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
+@@ -56,7 +56,7 @@ DECLARE_EVENT_CLASS(dpaa_eth_fd,
+ __entry->fd_format = qm_fd_get_format(fd);
+ __entry->fd_offset = qm_fd_get_offset(fd);
+ __entry->fd_length = qm_fd_get_length(fd);
+- __entry->fd_status = fd->status;
++ __entry->fd_status = __be32_to_cpu(fd->status);
+ __assign_str(name);
+ ),
+
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 11b14555802c9a..d3fbeaa6ed9f27 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -665,19 +665,11 @@ static int enetc_sriov_configure(struct pci_dev *pdev, int num_vfs)
+
+ if (!num_vfs) {
+ enetc_msg_psi_free(pf);
+- kfree(pf->vf_state);
+ pf->num_vfs = 0;
+ pci_disable_sriov(pdev);
+ } else {
+ pf->num_vfs = num_vfs;
+
+- pf->vf_state = kcalloc(num_vfs, sizeof(struct enetc_vf_state),
+- GFP_KERNEL);
+- if (!pf->vf_state) {
+- pf->num_vfs = 0;
+- return -ENOMEM;
+- }
+-
+ err = enetc_msg_psi_init(pf);
+ if (err) {
+ dev_err(&pdev->dev, "enetc_msg_psi_init (%d)\n", err);
+@@ -696,7 +688,6 @@ static int enetc_sriov_configure(struct pci_dev *pdev, int num_vfs)
+ err_en_sriov:
+ enetc_msg_psi_free(pf);
+ err_msg_psi:
+- kfree(pf->vf_state);
+ pf->num_vfs = 0;
+
+ return err;
+@@ -1286,6 +1277,12 @@ static int enetc_pf_probe(struct pci_dev *pdev,
+ pf = enetc_si_priv(si);
+ pf->si = si;
+ pf->total_vfs = pci_sriov_get_totalvfs(pdev);
++ if (pf->total_vfs) {
++ pf->vf_state = kcalloc(pf->total_vfs, sizeof(struct enetc_vf_state),
++ GFP_KERNEL);
++ if (!pf->vf_state)
++ goto err_alloc_vf_state;
++ }
+
+ err = enetc_setup_mac_addresses(node, pf);
+ if (err)
+@@ -1363,6 +1360,8 @@ static int enetc_pf_probe(struct pci_dev *pdev,
+ free_netdev(ndev);
+ err_alloc_netdev:
+ err_setup_mac_addresses:
++ kfree(pf->vf_state);
++err_alloc_vf_state:
+ enetc_psi_destroy(pdev);
+ err_psi_create:
+ return err;
+@@ -1389,6 +1388,7 @@ static void enetc_pf_remove(struct pci_dev *pdev)
+ enetc_free_si_resources(priv);
+
+ free_netdev(si->ndev);
++ kfree(pf->vf_state);
+
+ enetc_psi_destroy(pdev);
+ }
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+index dfcaac302e2451..b15db70769e5ee 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+@@ -78,11 +78,18 @@ static int enetc_vf_set_mac_addr(struct net_device *ndev, void *addr)
+ {
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct sockaddr *saddr = addr;
++ int err;
+
+ if (!is_valid_ether_addr(saddr->sa_data))
+ return -EADDRNOTAVAIL;
+
+- return enetc_msg_vsi_set_primary_mac_addr(priv, saddr);
++ err = enetc_msg_vsi_set_primary_mac_addr(priv, saddr);
++ if (err)
++ return err;
++
++ eth_hw_addr_set(ndev, saddr->sa_data);
++
++ return 0;
+ }
+
+ static int enetc_vf_set_features(struct net_device *ndev,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index 67b0bf310daaaf..9a63fbc6940831 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -25,8 +25,11 @@ void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo)
+ pci_id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
+ if (!pci_id)
+ continue;
+- if (IS_ENABLED(CONFIG_PCI_IOV))
++ if (IS_ENABLED(CONFIG_PCI_IOV)) {
++ device_lock(&ae_dev->pdev->dev);
+ pci_disable_sriov(ae_dev->pdev);
++ device_unlock(&ae_dev->pdev->dev);
++ }
+ }
+ }
+ EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index ce227b56cf7243..2f9655cf5dd9ee 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1205,12 +1205,10 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx)
+ if (ret_val)
+ goto out;
+
+- if (hw->mac.type != e1000_pch_mtp) {
+- ret_val = e1000e_force_smbus(hw);
+- if (ret_val) {
+- e_dbg("Failed to force SMBUS: %d\n", ret_val);
+- goto release;
+- }
++ ret_val = e1000e_force_smbus(hw);
++ if (ret_val) {
++ e_dbg("Failed to force SMBUS: %d\n", ret_val);
++ goto release;
+ }
+
+ /* Si workaround for ULP entry flow on i127/rev6 h/w. Enable
+@@ -1273,13 +1271,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx)
+ }
+
+ release:
+- if (hw->mac.type == e1000_pch_mtp) {
+- ret_val = e1000e_force_smbus(hw);
+- if (ret_val)
+- e_dbg("Failed to force SMBUS over MTL system: %d\n",
+- ret_val);
+- }
+-
+ hw->phy.ops.release(hw);
+ out:
+ if (ret_val)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index d546567e0286e4..b292f656d18b08 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -754,6 +754,7 @@ enum i40e_filter_state {
+ I40E_FILTER_ACTIVE, /* Added to switch by FW */
+ I40E_FILTER_FAILED, /* Rejected by FW */
+ I40E_FILTER_REMOVE, /* To be removed */
++ I40E_FILTER_NEW_SYNC, /* New, not sent yet, is in i40e_sync_vsi_filters() */
+ /* There is no 'removed' state; the filter struct is freed */
+ };
+ struct i40e_mac_filter {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index abf624d770e670..208c2f0857b61c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -89,6 +89,7 @@ static char *i40e_filter_state_string[] = {
+ "ACTIVE",
+ "FAILED",
+ "REMOVE",
++ "NEW_SYNC",
+ };
+
+ /**
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f7d4b5f79422b1..02c2a04740cd76 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1255,6 +1255,7 @@ int i40e_count_filters(struct i40e_vsi *vsi)
+
+ hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) {
+ if (f->state == I40E_FILTER_NEW ||
++ f->state == I40E_FILTER_NEW_SYNC ||
+ f->state == I40E_FILTER_ACTIVE)
+ ++cnt;
+ }
+@@ -1441,6 +1442,8 @@ static int i40e_correct_mac_vlan_filters(struct i40e_vsi *vsi,
+
+ new->f = add_head;
+ new->state = add_head->state;
++ if (add_head->state == I40E_FILTER_NEW)
++ add_head->state = I40E_FILTER_NEW_SYNC;
+
+ /* Add the new filter to the tmp list */
+ hlist_add_head(&new->hlist, tmp_add_list);
+@@ -1550,6 +1553,8 @@ static int i40e_correct_vf_mac_vlan_filters(struct i40e_vsi *vsi,
+ return -ENOMEM;
+ new_mac->f = add_head;
+ new_mac->state = add_head->state;
++ if (add_head->state == I40E_FILTER_NEW)
++ add_head->state = I40E_FILTER_NEW_SYNC;
+
+ /* Add the new filter to the tmp list */
+ hlist_add_head(&new_mac->hlist, tmp_add_list);
+@@ -2437,7 +2442,8 @@ static int
+ i40e_aqc_broadcast_filter(struct i40e_vsi *vsi, const char *vsi_name,
+ struct i40e_mac_filter *f)
+ {
+- bool enable = f->state == I40E_FILTER_NEW;
++ bool enable = f->state == I40E_FILTER_NEW ||
++ f->state == I40E_FILTER_NEW_SYNC;
+ struct i40e_hw *hw = &vsi->back->hw;
+ int aq_ret;
+
+@@ -2611,6 +2617,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
+
+ /* Add it to the hash list */
+ hlist_add_head(&new->hlist, &tmp_add_list);
++ f->state = I40E_FILTER_NEW_SYNC;
+ }
+
+ /* Count the number of active (current and new) VLAN
+@@ -2762,7 +2769,8 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
+ spin_lock_bh(&vsi->mac_filter_hash_lock);
+ hlist_for_each_entry_safe(new, h, &tmp_add_list, hlist) {
+ /* Only update the state if we're still NEW */
+- if (new->f->state == I40E_FILTER_NEW)
++ if (new->f->state == I40E_FILTER_NEW ||
++ new->f->state == I40E_FILTER_NEW_SYNC)
+ new->f->state = new->state;
+ hlist_del(&new->hlist);
+ netdev_hw_addr_refcnt(new->f, vsi->netdev, -1);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+index 5412eff8ef233f..ee9862ddfe15e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+@@ -1830,11 +1830,12 @@ static int
+ ice_set_fdir_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
+ struct ice_fdir_fltr *input)
+ {
+- u16 dest_vsi, q_index = 0;
++ s16 q_index = ICE_FDIR_NO_QUEUE_IDX;
+ u16 orig_q_index = 0;
+ struct ice_pf *pf;
+ struct ice_hw *hw;
+ int flow_type;
++ u16 dest_vsi;
+ u8 dest_ctl;
+
+ if (!vsi || !fsp || !input)
+diff --git a/drivers/net/ethernet/intel/ice/ice_fdir.h b/drivers/net/ethernet/intel/ice/ice_fdir.h
+index ab5b118daa2da6..820023c0271fd5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_fdir.h
++++ b/drivers/net/ethernet/intel/ice/ice_fdir.h
+@@ -53,6 +53,8 @@
+ */
+ #define ICE_FDIR_IPV4_PKT_FLAG_MF 0x20
+
++#define ICE_FDIR_NO_QUEUE_IDX -1
++
+ enum ice_fltr_prgm_desc_dest {
+ ICE_FLTR_PRGM_DESC_DEST_DROP_PKT,
+ ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX,
+@@ -186,7 +188,7 @@ struct ice_fdir_fltr {
+ u16 flex_fltr;
+
+ /* filter control */
+- u16 q_index;
++ s16 q_index;
+ u16 orig_q_index;
+ u16 dest_vsi;
+ u8 dest_ctl;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
+index 2c31ad87587a4a..66544faab710aa 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -141,6 +141,7 @@ enum idpf_vport_state {
+ * @adapter: Adapter back pointer
+ * @vport: Vport back pointer
+ * @vport_id: Vport identifier
++ * @link_speed_mbps: Link speed in mbps
+ * @vport_idx: Relative vport index
+ * @state: See enum idpf_vport_state
+ * @netstats: Packet and byte stats
+@@ -150,6 +151,7 @@ struct idpf_netdev_priv {
+ struct idpf_adapter *adapter;
+ struct idpf_vport *vport;
+ u32 vport_id;
++ u32 link_speed_mbps;
+ u16 vport_idx;
+ enum idpf_vport_state state;
+ struct rtnl_link_stats64 netstats;
+@@ -287,7 +289,6 @@ struct idpf_port_stats {
+ * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
+ * @port_stats: per port csum, header split, and other offload stats
+ * @link_up: True if link is up
+- * @link_speed_mbps: Link speed in mbps
+ * @sw_marker_wq: workqueue for marker packets
+ */
+ struct idpf_vport {
+@@ -331,7 +332,6 @@ struct idpf_vport {
+ struct idpf_port_stats port_stats;
+
+ bool link_up;
+- u32 link_speed_mbps;
+
+ wait_queue_head_t sw_marker_wq;
+ };
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+index 3806ddd3ce4ab9..59b1a1a099967f 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+@@ -1296,24 +1296,19 @@ static void idpf_set_msglevel(struct net_device *netdev, u32 data)
+ static int idpf_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *cmd)
+ {
+- struct idpf_vport *vport;
+-
+- idpf_vport_ctrl_lock(netdev);
+- vport = idpf_netdev_to_vport(netdev);
++ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ ethtool_link_ksettings_zero_link_mode(cmd, supported);
+ cmd->base.autoneg = AUTONEG_DISABLE;
+ cmd->base.port = PORT_NONE;
+- if (vport->link_up) {
++ if (netif_carrier_ok(netdev)) {
+ cmd->base.duplex = DUPLEX_FULL;
+- cmd->base.speed = vport->link_speed_mbps;
++ cmd->base.speed = np->link_speed_mbps;
+ } else {
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.speed = SPEED_UNKNOWN;
+ }
+
+- idpf_vport_ctrl_unlock(netdev);
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 0b6c8fd5bc90f7..e46b1a60f1f443 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1799,6 +1799,7 @@ static int idpf_init_hard_reset(struct idpf_adapter *adapter)
+ */
+ err = idpf_vc_core_init(adapter);
+ if (err) {
++ cancel_delayed_work_sync(&adapter->mbx_task);
+ idpf_deinit_dflt_mbx(adapter);
+ goto unlock_mutex;
+ }
+@@ -1873,7 +1874,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ * mess with. Nothing below should use those variables from new_vport
+ * and should instead always refer to them in vport if they need to.
+ */
+- memcpy(new_vport, vport, offsetof(struct idpf_vport, link_speed_mbps));
++ memcpy(new_vport, vport, offsetof(struct idpf_vport, link_up));
+
+ /* Adjust resource parameters prior to reallocating resources */
+ switch (reset_cause) {
+@@ -1919,7 +1920,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ /* Same comment as above regarding avoiding copying the wait_queues and
+ * mutexes applies here. We do not want to mess with those if possible.
+ */
+- memcpy(vport, new_vport, offsetof(struct idpf_vport, link_speed_mbps));
++ memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up));
+
+ if (reset_cause == IDPF_SR_Q_CHANGE)
+ idpf_vport_alloc_vec_indexes(vport);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index 3c0f97650d72fd..c477d4453e9057 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -141,7 +141,7 @@ static void idpf_handle_event_link(struct idpf_adapter *adapter,
+ }
+ np = netdev_priv(vport->netdev);
+
+- vport->link_speed_mbps = le32_to_cpu(v2e->link_speed);
++ np->link_speed_mbps = le32_to_cpu(v2e->link_speed);
+
+ if (vport->link_up == v2e->link_status)
+ return;
+@@ -3063,7 +3063,6 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)
+ adapter->state = __IDPF_VER_CHECK;
+ if (adapter->vcxn_mngr)
+ idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+- idpf_deinit_dflt_mbx(adapter);
+ set_bit(IDPF_HR_DRV_LOAD, adapter->flags);
+ queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
+ msecs_to_jiffies(task_delay));
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index b93791d6b59332..f5dc876eb50099 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -394,6 +394,7 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ err_out_pci:
+ ionic_dev_teardown(ionic);
+ ionic_clear_pci(ionic);
++ ionic_debugfs_del_dev(ionic);
+ err_out:
+ mutex_destroy(&ionic->dev_cmd_lock);
+ ionic_devlink_free(ionic);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 02368917efb4ad..afb8a5a079fa0f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3794,6 +3794,7 @@ static int stmmac_request_irq_single(struct net_device *dev)
+ /* Request the Wake IRQ in case of another line
+ * is used for WoL
+ */
++ priv->wol_irq_disabled = true;
+ if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
+ ret = request_irq(priv->wol_irq, stmmac_interrupt,
+ IRQF_SHARED, dev->name, dev);
+diff --git a/drivers/net/ethernet/vertexcom/mse102x.c b/drivers/net/ethernet/vertexcom/mse102x.c
+index edd8b59680e538..33ef3a49de8ee8 100644
+--- a/drivers/net/ethernet/vertexcom/mse102x.c
++++ b/drivers/net/ethernet/vertexcom/mse102x.c
+@@ -222,7 +222,7 @@ static int mse102x_tx_frame_spi(struct mse102x_net *mse, struct sk_buff *txp,
+ struct mse102x_net_spi *mses = to_mse102x_spi(mse);
+ struct spi_transfer *xfer = &mses->spi_xfer;
+ struct spi_message *msg = &mses->spi_msg;
+- struct sk_buff *tskb;
++ struct sk_buff *tskb = NULL;
+ int ret;
+
+ netif_dbg(mse, tx_queued, mse->ndev, "%s: skb %p, %d@%p\n",
+@@ -235,7 +235,6 @@ static int mse102x_tx_frame_spi(struct mse102x_net *mse, struct sk_buff *txp,
+ if (!tskb)
+ return -ENOMEM;
+
+- dev_kfree_skb(txp);
+ txp = tskb;
+ }
+
+@@ -257,6 +256,8 @@ static int mse102x_tx_frame_spi(struct mse102x_net *mse, struct sk_buff *txp,
+ mse->stats.xfer_err++;
+ }
+
++ dev_kfree_skb(tskb);
++
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 0c4c57e7fddc2c..877f190e3af4e6 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -862,13 +862,13 @@ axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+ skbuf_dma->sg_len = sg_len;
+ dma_tx_desc->callback_param = lp;
+ dma_tx_desc->callback_result = axienet_dma_tx_cb;
+- dmaengine_submit(dma_tx_desc);
+- dma_async_issue_pending(lp->tx_chan);
+ txq = skb_get_tx_queue(lp->ndev, skb);
+ netdev_tx_sent_queue(txq, skb->len);
+ netif_txq_maybe_stop(txq, CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX),
+ MAX_SKB_FRAGS + 1, 2 * MAX_SKB_FRAGS);
+
++ dmaengine_submit(dma_tx_desc);
++ dma_async_issue_pending(lp->tx_chan);
+ return NETDEV_TX_OK;
+
+ xmit_error_unmap_sg:
+diff --git a/drivers/net/phy/dp83848.c b/drivers/net/phy/dp83848.c
+index 937061acfc613a..351411f0aa6f47 100644
+--- a/drivers/net/phy/dp83848.c
++++ b/drivers/net/phy/dp83848.c
+@@ -147,6 +147,8 @@ MODULE_DEVICE_TABLE(mdio, dp83848_tbl);
+ /* IRQ related */ \
+ .config_intr = dp83848_config_intr, \
+ .handle_interrupt = dp83848_handle_interrupt, \
++ \
++ .flags = PHY_RST_AFTER_CLK_EN, \
+ }
+
+ static struct phy_driver dp83848_driver[] = {
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 792e9eadbfc3dc..53a038fcbe991d 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -368,15 +368,16 @@ struct receive_queue {
+  * because table sizes may differ according to the device configuration.
+ */
+ #define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
+-#define VIRTIO_NET_RSS_MAX_TABLE_LEN 128
+ struct virtio_net_ctrl_rss {
+ u32 hash_types;
+ u16 indirection_table_mask;
+ u16 unclassified_queue;
+- u16 indirection_table[VIRTIO_NET_RSS_MAX_TABLE_LEN];
++ u16 hash_cfg_reserved; /* for HASH_CONFIG (see virtio_net_hash_config for details) */
+ u16 max_tx_vq;
+ u8 hash_key_length;
+ u8 key[VIRTIO_NET_RSS_MAX_KEY_SIZE];
++
++ u16 *indirection_table;
+ };
+
+ /* Control VQ buffers: protected by the rtnl lock */
+@@ -512,6 +513,25 @@ static struct sk_buff *virtnet_skb_append_frag(struct sk_buff *head_skb,
+ struct page *page, void *buf,
+ int len, int truesize);
+
++static int rss_indirection_table_alloc(struct virtio_net_ctrl_rss *rss, u16 indir_table_size)
++{
++ if (!indir_table_size) {
++ rss->indirection_table = NULL;
++ return 0;
++ }
++
++ rss->indirection_table = kmalloc_array(indir_table_size, sizeof(u16), GFP_KERNEL);
++ if (!rss->indirection_table)
++ return -ENOMEM;
++
++ return 0;
++}
++
++static void rss_indirection_table_free(struct virtio_net_ctrl_rss *rss)
++{
++ kfree(rss->indirection_table);
++}
++
+ static bool is_xdp_frame(void *ptr)
+ {
+ return (unsigned long)ptr & VIRTIO_XDP_FLAG;
+@@ -3374,15 +3394,59 @@ static void virtnet_ack_link_announce(struct virtnet_info *vi)
+ dev_warn(&vi->dev->dev, "Failed to ack link announce.\n");
+ }
+
++static bool virtnet_commit_rss_command(struct virtnet_info *vi);
++
++static void virtnet_rss_update_by_qpairs(struct virtnet_info *vi, u16 queue_pairs)
++{
++ u32 indir_val = 0;
++ int i = 0;
++
++ for (; i < vi->rss_indir_table_size; ++i) {
++ indir_val = ethtool_rxfh_indir_default(i, queue_pairs);
++ vi->rss.indirection_table[i] = indir_val;
++ }
++ vi->rss.max_tx_vq = queue_pairs;
++}
++
+ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
+ {
+ struct virtio_net_ctrl_mq *mq __free(kfree) = NULL;
+- struct scatterlist sg;
++ struct virtio_net_ctrl_rss old_rss;
+ struct net_device *dev = vi->dev;
++ struct scatterlist sg;
+
+ if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ))
+ return 0;
+
++	/* First check whether we need to update RSS. Update it only if both
++	 * (1) RSS is enabled and (2) there is no user configuration.
++	 *
++	 * During rss command processing, the device updates queue_pairs using rss.max_tx_vq. That is,
++	 * the device updates queue_pairs together with rss, so we can skip the separate queue_pairs
++	 * update (VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET below) and return directly.
++ */
++ if (vi->has_rss && !netif_is_rxfh_configured(dev)) {
++ memcpy(&old_rss, &vi->rss, sizeof(old_rss));
++ if (rss_indirection_table_alloc(&vi->rss, vi->rss_indir_table_size)) {
++ vi->rss.indirection_table = old_rss.indirection_table;
++ return -ENOMEM;
++ }
++
++ virtnet_rss_update_by_qpairs(vi, queue_pairs);
++
++ if (!virtnet_commit_rss_command(vi)) {
++ /* restore ctrl_rss if commit_rss_command failed */
++ rss_indirection_table_free(&vi->rss);
++ memcpy(&vi->rss, &old_rss, sizeof(old_rss));
++
++ dev_warn(&dev->dev, "Fail to set num of queue pairs to %d, because committing RSS failed\n",
++ queue_pairs);
++ return -EINVAL;
++ }
++ rss_indirection_table_free(&old_rss);
++ goto succ;
++ }
++
+ mq = kzalloc(sizeof(*mq), GFP_KERNEL);
+ if (!mq)
+ return -ENOMEM;
+@@ -3395,12 +3459,12 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
+ dev_warn(&dev->dev, "Fail to set num of queue pairs to %d\n",
+ queue_pairs);
+ return -EINVAL;
+- } else {
+- vi->curr_queue_pairs = queue_pairs;
+- /* virtnet_open() will refill when device is going to up. */
+- if (dev->flags & IFF_UP)
+- schedule_delayed_work(&vi->refill, 0);
+ }
++succ:
++ vi->curr_queue_pairs = queue_pairs;
++	/* virtnet_open() will refill when the device is brought up. */
++ if (dev->flags & IFF_UP)
++ schedule_delayed_work(&vi->refill, 0);
+
+ return 0;
+ }
+@@ -3828,11 +3892,15 @@ static bool virtnet_commit_rss_command(struct virtnet_info *vi)
+ /* prepare sgs */
+ sg_init_table(sgs, 4);
+
+- sg_buf_size = offsetof(struct virtio_net_ctrl_rss, indirection_table);
++ sg_buf_size = offsetof(struct virtio_net_ctrl_rss, hash_cfg_reserved);
+ sg_set_buf(&sgs[0], &vi->rss, sg_buf_size);
+
+- sg_buf_size = sizeof(uint16_t) * (vi->rss.indirection_table_mask + 1);
+- sg_set_buf(&sgs[1], vi->rss.indirection_table, sg_buf_size);
++ if (vi->has_rss) {
++ sg_buf_size = sizeof(uint16_t) * vi->rss_indir_table_size;
++ sg_set_buf(&sgs[1], vi->rss.indirection_table, sg_buf_size);
++ } else {
++ sg_set_buf(&sgs[1], &vi->rss.hash_cfg_reserved, sizeof(uint16_t));
++ }
+
+ sg_buf_size = offsetof(struct virtio_net_ctrl_rss, key)
+ - offsetof(struct virtio_net_ctrl_rss, max_tx_vq);
+@@ -3856,21 +3924,14 @@ static bool virtnet_commit_rss_command(struct virtnet_info *vi)
+
+ static void virtnet_init_default_rss(struct virtnet_info *vi)
+ {
+- u32 indir_val = 0;
+- int i = 0;
+-
+ vi->rss.hash_types = vi->rss_hash_types_supported;
+ vi->rss_hash_types_saved = vi->rss_hash_types_supported;
+ vi->rss.indirection_table_mask = vi->rss_indir_table_size
+ ? vi->rss_indir_table_size - 1 : 0;
+ vi->rss.unclassified_queue = 0;
+
+- for (; i < vi->rss_indir_table_size; ++i) {
+- indir_val = ethtool_rxfh_indir_default(i, vi->curr_queue_pairs);
+- vi->rss.indirection_table[i] = indir_val;
+- }
++ virtnet_rss_update_by_qpairs(vi, vi->curr_queue_pairs);
+
+- vi->rss.max_tx_vq = vi->has_rss ? vi->curr_queue_pairs : 0;
+ vi->rss.hash_key_length = vi->rss_key_size;
+
+ netdev_rss_key_fill(vi->rss.key, vi->rss_key_size);
+@@ -6420,10 +6481,19 @@ static int virtnet_probe(struct virtio_device *vdev)
+ virtio_cread16(vdev, offsetof(struct virtio_net_config,
+ rss_max_indirection_table_length));
+ }
++ err = rss_indirection_table_alloc(&vi->rss, vi->rss_indir_table_size);
++ if (err)
++ goto free;
+
+ if (vi->has_rss || vi->has_rss_hash_report) {
+ vi->rss_key_size =
+ virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size));
++ if (vi->rss_key_size > VIRTIO_NET_RSS_MAX_KEY_SIZE) {
++ dev_err(&vdev->dev, "rss_max_key_size=%u exceeds the limit %u.\n",
++ vi->rss_key_size, VIRTIO_NET_RSS_MAX_KEY_SIZE);
++ err = -EINVAL;
++ goto free;
++ }
+
+ vi->rss_hash_types_supported =
+ virtio_cread32(vdev, offsetof(struct virtio_net_config, supported_hash_types));
+@@ -6551,6 +6621,15 @@ static int virtnet_probe(struct virtio_device *vdev)
+
+ virtio_device_ready(vdev);
+
++ if (vi->has_rss || vi->has_rss_hash_report) {
++ if (!virtnet_commit_rss_command(vi)) {
++ dev_warn(&vdev->dev, "RSS disabled because committing failed.\n");
++ dev->hw_features &= ~NETIF_F_RXHASH;
++ vi->has_rss_hash_report = false;
++ vi->has_rss = false;
++ }
++ }
++
+ virtnet_set_queues(vi, vi->curr_queue_pairs);
+
+ /* a random MAC address has been assigned, notify the device.
+@@ -6674,6 +6753,8 @@ static void virtnet_remove(struct virtio_device *vdev)
+
+ remove_vq_common(vi);
+
++ rss_indirection_table_free(&vi->rss);
++
+ free_netdev(vi->dev);
+ }
+
+diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+index 210d84c67ef9df..7a9c09cd4fdcfe 100644
+--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
++++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+@@ -226,7 +226,7 @@ int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+ return 0;
+
+ err_unmap_skbs:
+- while (--i > 0)
++ while (i--)
+ t7xx_unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+
+ return ret;
+diff --git a/drivers/platform/x86/amd/pmc/pmc.c b/drivers/platform/x86/amd/pmc/pmc.c
+index bbb8edb62e009f..5669f94c3d06bf 100644
+--- a/drivers/platform/x86/amd/pmc/pmc.c
++++ b/drivers/platform/x86/amd/pmc/pmc.c
+@@ -998,6 +998,11 @@ static int amd_pmc_s2d_init(struct amd_pmc_dev *dev)
+ amd_pmc_send_cmd(dev, S2D_PHYS_ADDR_LOW, &phys_addr_low, dev->s2d_msg_id, true);
+ amd_pmc_send_cmd(dev, S2D_PHYS_ADDR_HIGH, &phys_addr_hi, dev->s2d_msg_id, true);
+
++ if (!phys_addr_hi && !phys_addr_low) {
++ dev_err(dev->dev, "STB is not enabled on the system; disable enable_stb or contact system vendor\n");
++ return -EINVAL;
++ }
++
+ stb_phys_addr = ((u64)phys_addr_hi << 32 | phys_addr_low);
+
+ /* Clear msg_port for other SMU operation */
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index 8f1f719befa3e7..347bb43a5f2b75 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -37,12 +37,6 @@
+ #define AMD_PMF_RESULT_CMD_UNKNOWN 0xFE
+ #define AMD_PMF_RESULT_FAILED 0xFF
+
+-/* List of supported CPU ids */
+-#define AMD_CPU_ID_RMB 0x14b5
+-#define AMD_CPU_ID_PS 0x14e8
+-#define PCI_DEVICE_ID_AMD_1AH_M20H_ROOT 0x1507
+-#define PCI_DEVICE_ID_AMD_1AH_M60H_ROOT 0x1122
+-
+ #define PMF_MSG_DELAY_MIN_US 50
+ #define RESPONSE_REGISTER_LOOP_MAX 20000
+
+@@ -261,7 +255,20 @@ int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer)
+
+ /* Get Metrics Table Address */
+ if (alloc_buffer) {
+- dev->buf = kzalloc(sizeof(dev->m_table), GFP_KERNEL);
++ switch (dev->cpu_id) {
++ case AMD_CPU_ID_PS:
++ case AMD_CPU_ID_RMB:
++ dev->mtable_size = sizeof(dev->m_table);
++ break;
++ case PCI_DEVICE_ID_AMD_1AH_M20H_ROOT:
++ case PCI_DEVICE_ID_AMD_1AH_M60H_ROOT:
++ dev->mtable_size = sizeof(dev->m_table_v2);
++ break;
++ default:
++ dev_err(dev->dev, "Invalid CPU id: 0x%x", dev->cpu_id);
++ }
++
++ dev->buf = kzalloc(dev->mtable_size, GFP_KERNEL);
+ if (!dev->buf)
+ return -ENOMEM;
+ }
+diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h
+index 753d5662c08019..1ec11b588be4fd 100644
+--- a/drivers/platform/x86/amd/pmf/pmf.h
++++ b/drivers/platform/x86/amd/pmf/pmf.h
+@@ -19,6 +19,12 @@
+ #define POLICY_SIGN_COOKIE 0x31535024
+ #define POLICY_COOKIE_OFFSET 0x10
+
++/* List of supported CPU ids */
++#define AMD_CPU_ID_RMB 0x14b5
++#define AMD_CPU_ID_PS 0x14e8
++#define PCI_DEVICE_ID_AMD_1AH_M20H_ROOT 0x1507
++#define PCI_DEVICE_ID_AMD_1AH_M60H_ROOT 0x1122
++
+ struct cookie_header {
+ u32 sign;
+ u32 length;
+@@ -181,6 +187,53 @@ struct apmf_fan_idx {
+ u32 fan_ctl_idx;
+ } __packed;
+
++struct smu_pmf_metrics_v2 {
++ u16 core_frequency[16]; /* MHz */
++ u16 core_power[16]; /* mW */
++ u16 core_temp[16]; /* centi-C */
++ u16 gfx_temp; /* centi-C */
++ u16 soc_temp; /* centi-C */
++ u16 stapm_opn_limit; /* mW */
++ u16 stapm_cur_limit; /* mW */
++ u16 infra_cpu_maxfreq; /* MHz */
++ u16 infra_gfx_maxfreq; /* MHz */
++ u16 skin_temp; /* centi-C */
++ u16 gfxclk_freq; /* MHz */
++ u16 fclk_freq; /* MHz */
++ u16 gfx_activity; /* GFX busy % [0-100] */
++ u16 socclk_freq; /* MHz */
++ u16 vclk_freq; /* MHz */
++ u16 vcn_activity; /* VCN busy % [0-100] */
++ u16 vpeclk_freq; /* MHz */
++ u16 ipuclk_freq; /* MHz */
++ u16 ipu_busy[8]; /* NPU busy % [0-100] */
++ u16 dram_reads; /* MB/sec */
++ u16 dram_writes; /* MB/sec */
++ u16 core_c0residency[16]; /* C0 residency % [0-100] */
++ u16 ipu_power; /* mW */
++ u32 apu_power; /* mW */
++ u32 gfx_power; /* mW */
++ u32 dgpu_power; /* mW */
++ u32 socket_power; /* mW */
++ u32 all_core_power; /* mW */
++ u32 filter_alpha_value; /* time constant [us] */
++ u32 metrics_counter;
++ u16 memclk_freq; /* MHz */
++ u16 mpipuclk_freq; /* MHz */
++ u16 ipu_reads; /* MB/sec */
++ u16 ipu_writes; /* MB/sec */
++ u32 throttle_residency_prochot;
++ u32 throttle_residency_spl;
++ u32 throttle_residency_fppt;
++ u32 throttle_residency_sppt;
++ u32 throttle_residency_thm_core;
++ u32 throttle_residency_thm_gfx;
++ u32 throttle_residency_thm_soc;
++ u16 psys;
++ u16 spare1;
++ u32 spare[6];
++} __packed;
++
+ struct smu_pmf_metrics {
+ u16 gfxclk_freq; /* in MHz */
+ u16 socclk_freq; /* in MHz */
+@@ -278,6 +331,7 @@ struct amd_pmf_dev {
+ int hb_interval; /* SBIOS heartbeat interval */
+ struct delayed_work heart_beat;
+ struct smu_pmf_metrics m_table;
++ struct smu_pmf_metrics_v2 m_table_v2;
+ struct delayed_work work_buffer;
+ ktime_t start_time;
+ int socket_power_history[AVG_SAMPLE_SIZE];
+@@ -302,6 +356,7 @@ struct amd_pmf_dev {
+ bool smart_pc_enabled;
+ u16 pmf_if_version;
+ struct input_dev *pmf_idev;
++ size_t mtable_size;
+ };
+
+ struct apmf_sps_prop_granular_v2 {
+diff --git a/drivers/platform/x86/amd/pmf/spc.c b/drivers/platform/x86/amd/pmf/spc.c
+index 3c153fb1425e9f..06226eb0eab33f 100644
+--- a/drivers/platform/x86/amd/pmf/spc.c
++++ b/drivers/platform/x86/amd/pmf/spc.c
+@@ -53,30 +53,50 @@ void amd_pmf_dump_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *
+ void amd_pmf_dump_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in) {}
+ #endif
+
+-static void amd_pmf_get_smu_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
++static void amd_pmf_get_c0_residency(u16 *core_res, size_t size, struct ta_pmf_enact_table *in)
+ {
+ u16 max, avg = 0;
+ int i;
+
+- memset(dev->buf, 0, sizeof(dev->m_table));
+- amd_pmf_send_cmd(dev, SET_TRANSFER_TABLE, 0, 7, NULL);
+- memcpy(&dev->m_table, dev->buf, sizeof(dev->m_table));
+-
+- in->ev_info.socket_power = dev->m_table.apu_power + dev->m_table.dgpu_power;
+- in->ev_info.skin_temperature = dev->m_table.skin_temp;
+-
+ /* Get the avg and max C0 residency of all the cores */
+- max = dev->m_table.avg_core_c0residency[0];
+- for (i = 0; i < ARRAY_SIZE(dev->m_table.avg_core_c0residency); i++) {
+- avg += dev->m_table.avg_core_c0residency[i];
+- if (dev->m_table.avg_core_c0residency[i] > max)
+- max = dev->m_table.avg_core_c0residency[i];
++ max = *core_res;
++ for (i = 0; i < size; i++) {
++ avg += core_res[i];
++ if (core_res[i] > max)
++ max = core_res[i];
+ }
+-
+- avg = DIV_ROUND_CLOSEST(avg, ARRAY_SIZE(dev->m_table.avg_core_c0residency));
++ avg = DIV_ROUND_CLOSEST(avg, size);
+ in->ev_info.avg_c0residency = avg;
+ in->ev_info.max_c0residency = max;
+- in->ev_info.gfx_busy = dev->m_table.avg_gfx_activity;
++}
++
++static void amd_pmf_get_smu_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
++{
++ /* Get the updated metrics table data */
++ memset(dev->buf, 0, dev->mtable_size);
++ amd_pmf_send_cmd(dev, SET_TRANSFER_TABLE, 0, 7, NULL);
++
++ switch (dev->cpu_id) {
++ case AMD_CPU_ID_PS:
++ memcpy(&dev->m_table, dev->buf, dev->mtable_size);
++ in->ev_info.socket_power = dev->m_table.apu_power + dev->m_table.dgpu_power;
++ in->ev_info.skin_temperature = dev->m_table.skin_temp;
++ in->ev_info.gfx_busy = dev->m_table.avg_gfx_activity;
++ amd_pmf_get_c0_residency(dev->m_table.avg_core_c0residency,
++ ARRAY_SIZE(dev->m_table.avg_core_c0residency), in);
++ break;
++ case PCI_DEVICE_ID_AMD_1AH_M20H_ROOT:
++ case PCI_DEVICE_ID_AMD_1AH_M60H_ROOT:
++ memcpy(&dev->m_table_v2, dev->buf, dev->mtable_size);
++ in->ev_info.socket_power = dev->m_table_v2.apu_power + dev->m_table_v2.dgpu_power;
++ in->ev_info.skin_temperature = dev->m_table_v2.skin_temp;
++ in->ev_info.gfx_busy = dev->m_table_v2.gfx_activity;
++ amd_pmf_get_c0_residency(dev->m_table_v2.core_c0residency,
++ ARRAY_SIZE(dev->m_table_v2.core_c0residency), in);
++ break;
++ default:
++ dev_err(dev->dev, "Unsupported CPU id: 0x%x", dev->cpu_id);
++ }
+ }
+
+ static const char * const pmf_battery_supply_name[] = {
+diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c
+index 96ea343856f0c3..7ee7b65b9b90c5 100644
+--- a/drivers/pwm/pwm-imx-tpm.c
++++ b/drivers/pwm/pwm-imx-tpm.c
+@@ -106,7 +106,9 @@ static int pwm_imx_tpm_round_state(struct pwm_chip *chip,
+ p->prescale = prescale;
+
+ period_count = (clock_unit + ((1 << prescale) >> 1)) >> prescale;
+- p->mod = period_count;
++ if (period_count == 0)
++ return -EINVAL;
++ p->mod = period_count - 1;
+
+ /* calculate real period HW can support */
+ tmp = (u64)period_count << prescale;
+diff --git a/drivers/regulator/rtq2208-regulator.c b/drivers/regulator/rtq2208-regulator.c
+index a5c126afc648c5..5925fa7a9a06f0 100644
+--- a/drivers/regulator/rtq2208-regulator.c
++++ b/drivers/regulator/rtq2208-regulator.c
+@@ -568,7 +568,7 @@ static int rtq2208_probe(struct i2c_client *i2c)
+ struct regmap *regmap;
+ struct rtq2208_regulator_desc *rdesc[RTQ2208_LDO_MAX];
+ struct regulator_dev *rdev;
+- struct regulator_config cfg;
++ struct regulator_config cfg = {};
+ struct rtq2208_rdev_map *rdev_map;
+ int i, ret = 0, idx, n_regulator = 0;
+ unsigned int regulator_idx_table[RTQ2208_LDO_MAX],
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 82d460ff477718..d877a1a1aeb4bf 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1354,14 +1354,18 @@ static int qcom_glink_request_intent(struct qcom_glink *glink,
+ goto unlock;
+
+ ret = wait_event_timeout(channel->intent_req_wq,
+- READ_ONCE(channel->intent_req_result) >= 0 &&
+- READ_ONCE(channel->intent_received),
++ READ_ONCE(channel->intent_req_result) == 0 ||
++ (READ_ONCE(channel->intent_req_result) > 0 &&
++ READ_ONCE(channel->intent_received)) ||
++ glink->abort_tx,
+ 10 * HZ);
+ if (!ret) {
+ dev_err(glink->dev, "intent request timed out\n");
+ ret = -ETIMEDOUT;
++ } else if (glink->abort_tx) {
++ ret = -ECANCELED;
+ } else {
+- ret = READ_ONCE(channel->intent_req_result) ? 0 : -ECANCELED;
++ ret = READ_ONCE(channel->intent_req_result) ? 0 : -EAGAIN;
+ }
+
+ unlock:
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index c8b9654d30f0c3..a4d17f3da25d0f 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -188,8 +188,7 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT);
+
+ while (bufsize >= SECTOR_SIZE) {
+- buf = __vmalloc(bufsize,
+- GFP_KERNEL | __GFP_ZERO | __GFP_NORETRY);
++ buf = kvzalloc(bufsize, GFP_KERNEL | __GFP_NORETRY);
+ if (buf) {
+ *buflen = bufsize;
+ return buf;
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 37e11e50172859..9ff3b42cb1955e 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -139,6 +139,7 @@ struct qcom_llcc_config {
+ int size;
+ bool need_llcc_cfg;
+ bool no_edac;
++ bool irq_configured;
+ };
+
+ struct qcom_sct_config {
+@@ -720,6 +721,7 @@ static const struct qcom_llcc_config x1e80100_cfg[] = {
+ .need_llcc_cfg = true,
+ .reg_offset = llcc_v2_1_reg_offset,
+ .edac_reg_offset = &llcc_v2_1_edac_reg_offset,
++ .irq_configured = true,
+ },
+ };
+
+@@ -1347,6 +1349,7 @@ static int qcom_llcc_probe(struct platform_device *pdev)
+ drv_data->cfg = llcc_cfg;
+ drv_data->cfg_size = sz;
+ drv_data->edac_reg_offset = cfg->edac_reg_offset;
++ drv_data->ecc_irq_configured = cfg->irq_configured;
+ mutex_init(&drv_data->lock);
+ platform_set_drvdata(pdev, drv_data);
+
+diff --git a/drivers/staging/media/av7110/av7110.h b/drivers/staging/media/av7110/av7110.h
+index ec461fd187af47..b584754f4be0ba 100644
+--- a/drivers/staging/media/av7110/av7110.h
++++ b/drivers/staging/media/av7110/av7110.h
+@@ -88,6 +88,8 @@ struct infrared {
+ u32 ir_config;
+ };
+
++#define MAX_CI_SLOTS 2
++
+ /* place to store all the necessary device information */
+ struct av7110 {
+ /* devices */
+@@ -163,7 +165,7 @@ struct av7110 {
+
+ /* CA */
+
+- struct ca_slot_info ci_slot[2];
++ struct ca_slot_info ci_slot[MAX_CI_SLOTS];
+
+ enum av7110_video_mode vidmode;
+ struct dmxdev dmxdev;
+diff --git a/drivers/staging/media/av7110/av7110_ca.c b/drivers/staging/media/av7110/av7110_ca.c
+index 6ce212c64e5da3..fce4023c9dea22 100644
+--- a/drivers/staging/media/av7110/av7110_ca.c
++++ b/drivers/staging/media/av7110/av7110_ca.c
+@@ -26,23 +26,28 @@
+
+ void CI_handle(struct av7110 *av7110, u8 *data, u16 len)
+ {
++ unsigned slot_num;
++
+ dprintk(8, "av7110:%p\n", av7110);
+
+ if (len < 3)
+ return;
+ switch (data[0]) {
+ case CI_MSG_CI_INFO:
+- if (data[2] != 1 && data[2] != 2)
++ if (data[2] != 1 && data[2] != MAX_CI_SLOTS)
+ break;
++
++ slot_num = array_index_nospec(data[2] - 1, MAX_CI_SLOTS);
++
+ switch (data[1]) {
+ case 0:
+- av7110->ci_slot[data[2] - 1].flags = 0;
++ av7110->ci_slot[slot_num].flags = 0;
+ break;
+ case 1:
+- av7110->ci_slot[data[2] - 1].flags |= CA_CI_MODULE_PRESENT;
++ av7110->ci_slot[slot_num].flags |= CA_CI_MODULE_PRESENT;
+ break;
+ case 2:
+- av7110->ci_slot[data[2] - 1].flags |= CA_CI_MODULE_READY;
++ av7110->ci_slot[slot_num].flags |= CA_CI_MODULE_READY;
+ break;
+ }
+ break;
+@@ -262,15 +267,19 @@ static int dvb_ca_ioctl(struct file *file, unsigned int cmd, void *parg)
+ case CA_GET_SLOT_INFO:
+ {
+ struct ca_slot_info *info = (struct ca_slot_info *)parg;
++ unsigned int slot_num;
+
+ if (info->num < 0 || info->num > 1) {
+ mutex_unlock(&av7110->ioctl_mutex);
+ return -EINVAL;
+ }
+- av7110->ci_slot[info->num].num = info->num;
+- av7110->ci_slot[info->num].type = FW_CI_LL_SUPPORT(av7110->arm_app) ?
+- CA_CI_LINK : CA_CI;
+- memcpy(info, &av7110->ci_slot[info->num], sizeof(struct ca_slot_info));
++ slot_num = array_index_nospec(info->num, MAX_CI_SLOTS);
++
++ av7110->ci_slot[slot_num].num = info->num;
++ av7110->ci_slot[slot_num].type = FW_CI_LL_SUPPORT(av7110->arm_app) ?
++ CA_CI_LINK : CA_CI;
++ memcpy(info, &av7110->ci_slot[slot_num],
++ sizeof(struct ca_slot_info));
+ break;
+ }
+
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index c4d97dbf6ba836..0ea1019b9edf6a 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -593,7 +593,7 @@ vchiq_platform_init_state(struct vchiq_state *state)
+ {
+ struct vchiq_arm_state *platform_state;
+
+- platform_state = kzalloc(sizeof(*platform_state), GFP_KERNEL);
++ platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL);
+ if (!platform_state)
+ return -ENOMEM;
+
+@@ -1731,7 +1731,7 @@ static int vchiq_probe(struct platform_device *pdev)
+ return -ENOENT;
+ }
+
+- mgmt = kzalloc(sizeof(*mgmt), GFP_KERNEL);
++ mgmt = devm_kzalloc(&pdev->dev, sizeof(*mgmt), GFP_KERNEL);
+ if (!mgmt)
+ return -ENOMEM;
+
+@@ -1789,8 +1789,6 @@ static void vchiq_remove(struct platform_device *pdev)
+
+ arm_state = vchiq_platform_get_arm_state(&mgmt->state);
+ kthread_stop(arm_state->ka_thread);
+-
+- kfree(mgmt);
+ }
+
+ static struct platform_driver vchiq_driver = {
+diff --git a/drivers/thermal/qcom/lmh.c b/drivers/thermal/qcom/lmh.c
+index 5225b3621a56c4..d2d49264cf83a4 100644
+--- a/drivers/thermal/qcom/lmh.c
++++ b/drivers/thermal/qcom/lmh.c
+@@ -73,7 +73,14 @@ static struct irq_chip lmh_irq_chip = {
+ static int lmh_irq_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
+ {
+ struct lmh_hw_data *lmh_data = d->host_data;
++ static struct lock_class_key lmh_lock_key;
++ static struct lock_class_key lmh_request_key;
+
++ /*
++ * This lock class tells lockdep that GPIO irqs are in a different
++ * category than their parents, so it won't report false recursion.
++ */
++ irq_set_lockdep_class(irq, &lmh_lock_key, &lmh_request_key);
+ irq_set_chip_and_handler(irq, &lmh_irq_chip, handle_simple_irq);
+ irq_set_chip_data(irq, lmh_data);
+
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 1f252692815a18..0cbdc35d450076 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -128,18 +128,15 @@ static struct thermal_trip *thermal_of_trips_init(struct device_node *np, int *n
+ struct device_node *trips;
+ int ret, count;
+
++ *ntrips = 0;
++
+ trips = of_get_child_by_name(np, "trips");
+- if (!trips) {
+- pr_err("Failed to find 'trips' node\n");
+- return ERR_PTR(-EINVAL);
+- }
++ if (!trips)
++ return NULL;
+
+ count = of_get_child_count(trips);
+- if (!count) {
+- pr_err("No trip point defined\n");
+- ret = -EINVAL;
+- goto out_of_node_put;
+- }
++ if (!count)
++ return NULL;
+
+ tt = kzalloc(sizeof(*tt) * count, GFP_KERNEL);
+ if (!tt) {
+@@ -162,7 +159,6 @@ static struct thermal_trip *thermal_of_trips_init(struct device_node *np, int *n
+
+ out_kfree:
+ kfree(tt);
+- *ntrips = 0;
+ out_of_node_put:
+ of_node_put(trips);
+
+@@ -491,11 +487,14 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node *
+
+ trips = thermal_of_trips_init(np, &ntrips);
+ if (IS_ERR(trips)) {
+- pr_err("Failed to find trip points for %pOFn id=%d\n", sensor, id);
++ pr_err("Failed to parse trip points for %pOFn id=%d\n", sensor, id);
+ ret = PTR_ERR(trips);
+ goto out_of_node_put;
+ }
+
++ if (!trips)
++ pr_info("No trip points found for %pOFn id=%d\n", sensor, id);
++
+ ret = thermal_of_monitor_init(np, &delay, &pdelay);
+ if (ret) {
+ pr_err("Failed to initialize monitoring delays from %pOFn\n", np);
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 7db9869a9f3fe7..89d2919d0193e8 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -532,6 +532,8 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ }
+
+ ret = 0;
++ if (!IS_ENABLED(CONFIG_USB4_DEBUGFS_MARGINING))
++ max = min(last_idx, max);
+
+ /* Add retimers if they do not exist already */
+ for (i = 1; i <= max; i++) {
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index 4d83b65afb5b69..f6478b693072a4 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -48,7 +48,7 @@ enum usb4_ba_index {
+
+ /* Delays in us used with usb4_port_wait_for_bit() */
+ #define USB4_PORT_DELAY 50
+-#define USB4_PORT_SB_DELAY 5000
++#define USB4_PORT_SB_DELAY 1000
+
+ static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode,
+ u32 *metadata, u8 *status,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 83567388a7b58e..03490f062d63aa 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8641,6 +8641,14 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
+ ufshcd_init_clk_scaling_sysfs(hba);
+ }
+
++ /*
++ * The RTC update code accesses the hba->ufs_device_wlun->sdev_gendev
++ * pointer and hence must only be started after the WLUN pointer has
++ * been initialized by ufshcd_scsi_add_wlus().
++ */
++ schedule_delayed_work(&hba->ufs_rtc_update_work,
++ msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
++
+ ufs_bsg_probe(hba);
+ scsi_scan_host(hba->host);
+
+@@ -8800,8 +8808,6 @@ static int ufshcd_device_init(struct ufs_hba *hba, bool init_dev_params)
+ ufshcd_force_reset_auto_bkops(hba);
+
+ ufshcd_set_timestamp_attr(hba);
+- schedule_delayed_work(&hba->ufs_rtc_update_work,
+- msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
+
+ /* Gear up to HS gear if supported */
+ if (hba->max_pwr_info.is_valid) {
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 427e5660f87c24..98114c2827c098 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -2342,10 +2342,18 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ u32 reg;
+ int i;
+
+- dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) &
+- DWC3_GUSB2PHYCFG_SUSPHY) ||
+- (dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)) &
+- DWC3_GUSB3PIPECTL_SUSPHY);
++ if (!pm_runtime_suspended(dwc->dev) && !PMSG_IS_AUTO(msg)) {
++ dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) &
++ DWC3_GUSB2PHYCFG_SUSPHY) ||
++ (dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)) &
++ DWC3_GUSB3PIPECTL_SUSPHY);
++ /*
++ * TI AM62 platform requires SUSPHY to be
++ * enabled for system suspend to work.
++ */
++ if (!dwc->susphy_state)
++ dwc3_enable_susphy(dwc, true);
++ }
+
+ switch (dwc->current_dr_role) {
+ case DWC3_GCTL_PRTCAP_DEVICE:
+@@ -2398,15 +2406,6 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ break;
+ }
+
+- if (!PMSG_IS_AUTO(msg)) {
+- /*
+- * TI AM62 platform requires SUSPHY to be
+- * enabled for system suspend to work.
+- */
+- if (!dwc->susphy_state)
+- dwc3_enable_susphy(dwc, true);
+- }
+-
+ return 0;
+ }
+
+diff --git a/drivers/usb/musb/sunxi.c b/drivers/usb/musb/sunxi.c
+index d54283fd026b22..05b6e7e52e0275 100644
+--- a/drivers/usb/musb/sunxi.c
++++ b/drivers/usb/musb/sunxi.c
+@@ -293,8 +293,6 @@ static int sunxi_musb_exit(struct musb *musb)
+ if (test_bit(SUNXI_MUSB_FL_HAS_SRAM, &glue->flags))
+ sunxi_sram_release(musb->controller->parent);
+
+- devm_usb_put_phy(glue->dev, glue->xceiv);
+-
+ return 0;
+ }
+
+diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
+index abe4bbb0ac654f..477c0927dc1b9d 100644
+--- a/drivers/usb/serial/io_edgeport.c
++++ b/drivers/usb/serial/io_edgeport.c
+@@ -770,11 +770,12 @@ static void edge_bulk_out_data_callback(struct urb *urb)
+ static void edge_bulk_out_cmd_callback(struct urb *urb)
+ {
+ struct edgeport_port *edge_port = urb->context;
++ struct device *dev = &urb->dev->dev;
+ int status = urb->status;
+
+ atomic_dec(&CmdUrbs);
+- dev_dbg(&urb->dev->dev, "%s - FREE URB %p (outstanding %d)\n",
+- __func__, urb, atomic_read(&CmdUrbs));
++ dev_dbg(dev, "%s - FREE URB %p (outstanding %d)\n", __func__, urb,
++ atomic_read(&CmdUrbs));
+
+
+ /* clean up the transfer buffer */
+@@ -784,8 +785,7 @@ static void edge_bulk_out_cmd_callback(struct urb *urb)
+ usb_free_urb(urb);
+
+ if (status) {
+- dev_dbg(&urb->dev->dev,
+- "%s - nonzero write bulk status received: %d\n",
++ dev_dbg(dev, "%s - nonzero write bulk status received: %d\n",
+ __func__, status);
+ return;
+ }
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 55886b64cadd83..04f511adc00256 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -251,6 +251,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_VENDOR_ID 0x2c7c
+ /* These Quectel products use Quectel's vendor ID */
+ #define QUECTEL_PRODUCT_EC21 0x0121
++#define QUECTEL_PRODUCT_RG650V 0x0122
+ #define QUECTEL_PRODUCT_EM061K_LTA 0x0123
+ #define QUECTEL_PRODUCT_EM061K_LMS 0x0124
+ #define QUECTEL_PRODUCT_EC25 0x0125
+@@ -1273,6 +1274,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG912Y, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG916Q, 0xff, 0x00, 0x00) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RG650V, 0xff, 0xff, 0x30) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RG650V, 0xff, 0, 0) },
+
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+@@ -2320,6 +2323,9 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) }, /* Fibocom FG150 Diag */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) }, /* Fibocom FG150 AT */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) }, /* Fibocom FM160 (MBIM mode) */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x0112, 0xff, 0xff, 0x30) }, /* Fibocom FG132 Diag */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x0112, 0xff, 0xff, 0x40) }, /* Fibocom FG132 AT */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x0112, 0xff, 0, 0) }, /* Fibocom FG132 NMEA */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0115, 0xff), /* Fibocom FM135 (laptop MBIM) */
+ .driver_info = RSVD(5) },
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 703a9c56355731..061ff754b307bc 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -166,6 +166,8 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x1199, 0x9090)}, /* Sierra Wireless EM7565 QDL */
+ {DEVICE_SWI(0x1199, 0x9091)}, /* Sierra Wireless EM7565 */
+ {DEVICE_SWI(0x1199, 0x90d2)}, /* Sierra Wireless EM9191 QDL */
++ {DEVICE_SWI(0x1199, 0x90e4)}, /* Sierra Wireless EM86xx QDL */
++ {DEVICE_SWI(0x1199, 0x90e5)}, /* Sierra Wireless EM86xx */
+ {DEVICE_SWI(0x1199, 0xc080)}, /* Sierra Wireless EM7590 QDL */
+ {DEVICE_SWI(0x1199, 0xc081)}, /* Sierra Wireless EM7590 */
+ {DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+diff --git a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c
+index 5b7f52b74a40aa..726423684bae0a 100644
+--- a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c
++++ b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c
+@@ -227,6 +227,10 @@ qcom_pmic_typec_pdphy_pd_transmit_payload(struct pmic_typec_pdphy *pmic_typec_pd
+
+ spin_lock_irqsave(&pmic_typec_pdphy->lock, flags);
+
++ hdr_len = sizeof(msg->header);
++ txbuf_len = pd_header_cnt_le(msg->header) * 4;
++ txsize_len = hdr_len + txbuf_len - 1;
++
+ ret = regmap_read(pmic_typec_pdphy->regmap,
+ pmic_typec_pdphy->base + USB_PDPHY_RX_ACKNOWLEDGE_REG,
+ &val);
+@@ -244,10 +248,6 @@ qcom_pmic_typec_pdphy_pd_transmit_payload(struct pmic_typec_pdphy *pmic_typec_pd
+ if (ret)
+ goto done;
+
+- hdr_len = sizeof(msg->header);
+- txbuf_len = pd_header_cnt_le(msg->header) * 4;
+- txsize_len = hdr_len + txbuf_len - 1;
+-
+ /* Write message header sizeof(u16) to USB_PDPHY_TX_BUFFER_HDR_REG */
+ ret = regmap_bulk_write(pmic_typec_pdphy->regmap,
+ pmic_typec_pdphy->base + USB_PDPHY_TX_BUFFER_HDR_REG,
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index b3ec799fc87337..763ff99f000d94 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -482,6 +482,8 @@ static void ucsi_ccg_update_set_new_cam_cmd(struct ucsi_ccg *uc,
+
+ port = uc->orig;
+ new_cam = UCSI_SET_NEW_CAM_GET_AM(*cmd);
++ if (new_cam >= ARRAY_SIZE(uc->updated))
++ return;
+ new_port = &uc->updated[new_cam];
+ cam = new_port->linked_idx;
+ enter_new_mode = UCSI_SET_NEW_CAM_ENTER(*cmd);
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index 31e437d94869de..a98fa0ccae601d 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -74,20 +74,13 @@ struct btrfs_bio *btrfs_bio_alloc(unsigned int nr_vecs, blk_opf_t opf,
+
+ static struct btrfs_bio *btrfs_split_bio(struct btrfs_fs_info *fs_info,
+ struct btrfs_bio *orig_bbio,
+- u64 map_length, bool use_append)
++ u64 map_length)
+ {
+ struct btrfs_bio *bbio;
+ struct bio *bio;
+
+- if (use_append) {
+- unsigned int nr_segs;
+-
+- bio = bio_split_rw(&orig_bbio->bio, &fs_info->limits, &nr_segs,
+- &btrfs_clone_bioset, map_length);
+- } else {
+- bio = bio_split(&orig_bbio->bio, map_length >> SECTOR_SHIFT,
+- GFP_NOFS, &btrfs_clone_bioset);
+- }
++ bio = bio_split(&orig_bbio->bio, map_length >> SECTOR_SHIFT, GFP_NOFS,
++ &btrfs_clone_bioset);
+ bbio = btrfs_bio(bio);
+ btrfs_bio_init(bbio, fs_info, NULL, orig_bbio);
+ bbio->inode = orig_bbio->inode;
+@@ -648,6 +641,19 @@ static bool btrfs_wq_submit_bio(struct btrfs_bio *bbio,
+ return true;
+ }
+
++static u64 btrfs_append_map_length(struct btrfs_bio *bbio, u64 map_length)
++{
++ unsigned int nr_segs;
++ int sector_offset;
++
++ map_length = min(map_length, bbio->fs_info->max_zone_append_size);
++ sector_offset = bio_split_rw_at(&bbio->bio, &bbio->fs_info->limits,
++ &nr_segs, map_length);
++ if (sector_offset)
++ return sector_offset << SECTOR_SHIFT;
++ return map_length;
++}
++
+ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ {
+ struct btrfs_inode *inode = bbio->inode;
+@@ -674,10 +680,10 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+
+ map_length = min(map_length, length);
+ if (use_append)
+- map_length = min(map_length, fs_info->max_zone_append_size);
++ map_length = btrfs_append_map_length(bbio, map_length);
+
+ if (map_length < length) {
+- bbio = btrfs_split_bio(fs_info, bbio, map_length, use_append);
++ bbio = btrfs_split_bio(fs_info, bbio, map_length);
+ bio = &bbio->bio;
+ }
+
+diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
+index 06a9e0542d708b..7a2a4fdc07f1ff 100644
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -649,7 +649,7 @@ static bool insert_delayed_ref(struct btrfs_trans_handle *trans,
+ &href->ref_add_list);
+ else if (ref->action == BTRFS_DROP_DELAYED_REF) {
+ ASSERT(!list_empty(&exist->add_list));
+- list_del(&exist->add_list);
++ list_del_init(&exist->add_list);
+ } else {
+ ASSERT(0);
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 13675e128af6e0..c7b123cb282e72 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1599,7 +1599,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ clear_bits |= EXTENT_CLEAR_DATA_RESV;
+ extent_clear_unlock_delalloc(inode, start, end, locked_page,
+ &cached, clear_bits, page_ops);
+- btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
++ btrfs_qgroup_free_data(inode, NULL, start, end - start + 1, NULL);
+ }
+ return ret;
+ }
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 926d7a9ed99df0..c64d0713412231 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1979,25 +1979,10 @@ static int btrfs_get_tree_super(struct fs_context *fc)
+ * fsconfig(FSCONFIG_SET_FLAG, "ro"). This option is seen by the filesystem
+ * in fc->sb_flags.
+ *
+- * This disambiguation has rather positive consequences. Mounting a subvolume
+- * ro will not also turn the superblock ro. Only the mount for the subvolume
+- * will become ro.
+- *
+- * So, if the superblock creation request comes from the new mount API the
+- * caller must have explicitly done:
+- *
+- * fsconfig(FSCONFIG_SET_FLAG, "ro")
+- * fsmount/mount_setattr(MOUNT_ATTR_RDONLY)
+- *
+- * IOW, at some point the caller must have explicitly turned the whole
+- * superblock ro and we shouldn't just undo it like we did for the old mount
+- * API. In any case, it lets us avoid the hack in the new mount API.
+- *
+- * Consequently, the remounting hack must only be used for requests originating
+- * from the old mount API and should be marked for full deprecation so it can be
+- * turned off in a couple of years.
+- *
+- * The new mount API has no reason to support this hack.
++ * But currently the util-linux mount command already uses the new mount
++ * API and still sets fsconfig(FSCONFIG_SET_FLAG, "ro") no matter whether
++ * it's btrfs or not, setting the whole superblock RO. To make per-subvolume
++ * mounting with different options work, we need to keep backward compatibility.
+ */
+ static struct vfsmount *btrfs_reconfigure_for_mount(struct fs_context *fc)
+ {
+@@ -2019,7 +2004,7 @@ static struct vfsmount *btrfs_reconfigure_for_mount(struct fs_context *fc)
+ if (IS_ERR(mnt))
+ return mnt;
+
+- if (!fc->oldapi || !ro2rw)
++ if (!ro2rw)
+ return mnt;
+
+ /* We need to convert to rw, call reconfigure. */
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index b4914a11c3c25a..4accaed91bd292 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -205,12 +205,15 @@ void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
+ nfs_fscache_invalidate(inode, 0);
+ flags &= ~NFS_INO_REVAL_FORCED;
+
+- nfsi->cache_validity |= flags;
++ flags |= nfsi->cache_validity;
++ if (inode->i_mapping->nrpages == 0)
++ flags &= ~NFS_INO_INVALID_DATA;
+
+- if (inode->i_mapping->nrpages == 0) {
+- nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
+- nfs_ooo_clear(nfsi);
+- } else if (nfsi->cache_validity & NFS_INO_INVALID_DATA) {
++ /* pairs with nfs_clear_invalid_mapping()'s smp_load_acquire() */
++ smp_store_release(&nfsi->cache_validity, flags);
++
++ if (inode->i_mapping->nrpages == 0 ||
++ nfsi->cache_validity & NFS_INO_INVALID_DATA) {
+ nfs_ooo_clear(nfsi);
+ }
+ trace_nfs_set_cache_invalid(inode, 0);
+@@ -628,23 +631,35 @@ nfs_fattr_fixup_delegated(struct inode *inode, struct nfs_fattr *fattr)
+ }
+ }
+
++static void nfs_update_timestamps(struct inode *inode, unsigned int ia_valid)
++{
++ enum file_time_flags time_flags = 0;
++ unsigned int cache_flags = 0;
++
++ if (ia_valid & ATTR_MTIME) {
++ time_flags |= S_MTIME | S_CTIME;
++ cache_flags |= NFS_INO_INVALID_CTIME | NFS_INO_INVALID_MTIME;
++ }
++ if (ia_valid & ATTR_ATIME) {
++ time_flags |= S_ATIME;
++ cache_flags |= NFS_INO_INVALID_ATIME;
++ }
++ inode_update_timestamps(inode, time_flags);
++ NFS_I(inode)->cache_validity &= ~cache_flags;
++}
++
+ void nfs_update_delegated_atime(struct inode *inode)
+ {
+ spin_lock(&inode->i_lock);
+- if (nfs_have_delegated_atime(inode)) {
+- inode_update_timestamps(inode, S_ATIME);
+- NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_ATIME;
+- }
++ if (nfs_have_delegated_atime(inode))
++ nfs_update_timestamps(inode, ATTR_ATIME);
+ spin_unlock(&inode->i_lock);
+ }
+
+ void nfs_update_delegated_mtime_locked(struct inode *inode)
+ {
+- if (nfs_have_delegated_mtime(inode)) {
+- inode_update_timestamps(inode, S_CTIME | S_MTIME);
+- NFS_I(inode)->cache_validity &= ~(NFS_INO_INVALID_CTIME |
+- NFS_INO_INVALID_MTIME);
+- }
++ if (nfs_have_delegated_mtime(inode))
++ nfs_update_timestamps(inode, ATTR_MTIME);
+ }
+
+ void nfs_update_delegated_mtime(struct inode *inode)
+@@ -682,15 +697,16 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ attr->ia_valid &= ~ATTR_SIZE;
+ }
+
+- if (nfs_have_delegated_mtime(inode)) {
+- if (attr->ia_valid & ATTR_MTIME) {
+- nfs_update_delegated_mtime(inode);
+- attr->ia_valid &= ~ATTR_MTIME;
+- }
+- if (attr->ia_valid & ATTR_ATIME) {
+- nfs_update_delegated_atime(inode);
+- attr->ia_valid &= ~ATTR_ATIME;
+- }
++ if (nfs_have_delegated_mtime(inode) && attr->ia_valid & ATTR_MTIME) {
++ spin_lock(&inode->i_lock);
++ nfs_update_timestamps(inode, attr->ia_valid);
++ spin_unlock(&inode->i_lock);
++ attr->ia_valid &= ~(ATTR_MTIME | ATTR_ATIME);
++ } else if (nfs_have_delegated_atime(inode) &&
++ attr->ia_valid & ATTR_ATIME &&
++ !(attr->ia_valid & ATTR_MTIME)) {
++ nfs_update_delegated_atime(inode);
++ attr->ia_valid &= ~ATTR_ATIME;
+ }
+
+ /* Optimization: if the end result is no change, don't RPC */
+@@ -1408,6 +1424,13 @@ int nfs_clear_invalid_mapping(struct address_space *mapping)
+ TASK_KILLABLE|TASK_FREEZABLE_UNSAFE);
+ if (ret)
+ goto out;
++ smp_rmb(); /* pairs with smp_wmb() below */
++ if (test_bit(NFS_INO_INVALIDATING, bitlock))
++ continue;
++ /* pairs with nfs_set_cache_invalid()'s smp_store_release() */
++ if (!(smp_load_acquire(&nfsi->cache_validity) & NFS_INO_INVALID_DATA))
++ goto out;
++ /* Slow-path that double-checks with spinlock held */
+ spin_lock(&inode->i_lock);
+ if (test_bit(NFS_INO_INVALIDATING, bitlock)) {
+ spin_unlock(&inode->i_lock);
+@@ -1633,6 +1656,7 @@ void nfs_fattr_init(struct nfs_fattr *fattr)
+ fattr->gencount = nfs_inc_attr_generation_counter();
+ fattr->owner_name = NULL;
+ fattr->group_name = NULL;
++ fattr->mdsthreshold = NULL;
+ }
+ EXPORT_SYMBOL_GPL(nfs_fattr_init);
+
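
The smp_store_release()/smp_load_acquire() pairing introduced above follows the standard publish/consume pattern: the writer updates its data first and publishes the flags with a release store, so a reader whose acquire load observes the new flags is guaranteed to also observe the earlier writes. A minimal standalone C11 sketch of the same idea (names hypothetical, not kernel code):

#include <stdatomic.h>

struct inode_state {
    unsigned long payload;          /* data the flags guard */
    _Atomic unsigned long validity; /* published flag word */
};

/* Writer: update the payload first, then publish with release semantics. */
static void publish(struct inode_state *s, unsigned long flags)
{
    s->payload = 42;
    atomic_store_explicit(&s->validity, flags, memory_order_release);
}

/* Reader: the acquire load pairs with the release store above, so if it
 * observes the new flags it also observes the payload written before them. */
static unsigned long consume(struct inode_state *s)
{
    unsigned long v = atomic_load_explicit(&s->validity, memory_order_acquire);

    return v ? s->payload : 0;
}
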
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index cd2fbde2e6d72e..9d40319e063dea 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3452,6 +3452,10 @@ static int nfs4_do_setattr(struct inode *inode, const struct cred *cred,
+ adjust_flags |= NFS_INO_INVALID_MODE;
+ if (sattr->ia_valid & (ATTR_UID | ATTR_GID))
+ adjust_flags |= NFS_INO_INVALID_OTHER;
++ if (sattr->ia_valid & ATTR_ATIME)
++ adjust_flags |= NFS_INO_INVALID_ATIME;
++ if (sattr->ia_valid & ATTR_MTIME)
++ adjust_flags |= NFS_INO_INVALID_MTIME;
+
+ do {
+ nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, fattr->label),
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 97b386032b717a..e17d80876cf07e 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -882,7 +882,15 @@ static int nfs_request_mount(struct fs_context *fc,
+ * Now ask the mount server to map our export path
+ * to a file handle.
+ */
+- status = nfs_mount(&request, ctx->timeo, ctx->retrans);
++ if ((request.protocol == XPRT_TRANSPORT_UDP) ==
++ !(ctx->flags & NFS_MOUNT_TCP))
++ /*
++ * The NFS protocol and the mount protocol are either both UDP or neither
++ * UDP, so the timeouts are compatible. Use the NFS timeouts for MOUNT.
++ */
++ status = nfs_mount(&request, ctx->timeo, ctx->retrans);
++ else
++ status = nfs_mount(&request, NFS_UNSPEC_TIMEO, NFS_UNSPEC_RETRANS);
+ if (status != 0) {
+ dfprintk(MOUNT, "NFS: unable to mount server %s, error %d\n",
+ request.hostname, status);
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index fb1140c52f07cb..f8b7770d1fbba8 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -2036,8 +2036,7 @@ static int ocfs2_xa_remove(struct ocfs2_xa_loc *loc,
+ rc = 0;
+ ocfs2_xa_cleanup_value_truncate(loc, "removing",
+ orig_clusters);
+- if (rc)
+- goto out;
++ goto out;
+ }
+ }
+
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index b52d85f8ad5920..b4521b09605881 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -457,10 +457,6 @@ static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
+ #endif
+ }
+
+-static const struct vm_operations_struct vmcore_mmap_ops = {
+- .fault = mmap_vmcore_fault,
+-};
+-
+ /**
+ * vmcore_alloc_buf - allocate buffer in vmalloc memory
+ * @size: size of buffer
+@@ -488,6 +484,11 @@ static inline char *vmcore_alloc_buf(size_t size)
+ * virtually contiguous user-space in ELF layout.
+ */
+ #ifdef CONFIG_MMU
++
++static const struct vm_operations_struct vmcore_mmap_ops = {
++ .fault = mmap_vmcore_fault,
++};
++
+ /*
+ * remap_oldmem_pfn_checked - do remap_oldmem_pfn_range replacing all pages
+ * reported as not being ram with the zero page.
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index cac80e7bfefc74..a751793c4512af 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -70,6 +70,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ atomic_set(&conn->req_running, 0);
+ atomic_set(&conn->r_count, 0);
+ atomic_set(&conn->refcnt, 1);
++ atomic_set(&conn->mux_smb_requests, 0);
+ conn->total_credits = 1;
+ conn->outstanding_credits = 0;
+
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index b379ae4fdcdffa..8ddd5a3c7bafb6 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -107,6 +107,7 @@ struct ksmbd_conn {
+ __le16 signing_algorithm;
+ bool binding;
+ atomic_t refcnt;
++ atomic_t mux_smb_requests;
+ };
+
+ struct ksmbd_conn_ops {
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index 1e4624e9d434ab..ad02fe555fda7e 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -90,7 +90,7 @@ static int __rpc_method(char *rpc_name)
+
+ int ksmbd_session_rpc_open(struct ksmbd_session *sess, char *rpc_name)
+ {
+- struct ksmbd_session_rpc *entry;
++ struct ksmbd_session_rpc *entry, *old;
+ struct ksmbd_rpc_command *resp;
+ int method;
+
+@@ -106,16 +106,19 @@ int ksmbd_session_rpc_open(struct ksmbd_session *sess, char *rpc_name)
+ entry->id = ksmbd_ipc_id_alloc();
+ if (entry->id < 0)
+ goto free_entry;
+- xa_store(&sess->rpc_handle_list, entry->id, entry, GFP_KERNEL);
++ old = xa_store(&sess->rpc_handle_list, entry->id, entry, GFP_KERNEL);
++ if (xa_is_err(old))
++ goto free_id;
+
+ resp = ksmbd_rpc_open(sess, entry->id);
+ if (!resp)
+- goto free_id;
++ goto erase_xa;
+
+ kvfree(resp);
+ return entry->id;
+-free_id:
++erase_xa:
+ xa_erase(&sess->rpc_handle_list, entry->id);
++free_id:
+ ksmbd_rpc_id_free(entry->id);
+ free_entry:
+ kfree(entry);
+@@ -175,6 +178,7 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn)
+ unsigned long id;
+ struct ksmbd_session *sess;
+
++ down_write(&sessions_table_lock);
+ down_write(&conn->session_lock);
+ xa_for_each(&conn->sessions, id, sess) {
+ if (atomic_read(&sess->refcnt) == 0 &&
+@@ -188,6 +192,7 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn)
+ }
+ }
+ up_write(&conn->session_lock);
++ up_write(&sessions_table_lock);
+ }
+
+ int ksmbd_session_register(struct ksmbd_conn *conn,
+@@ -229,7 +234,6 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ }
+ }
+ }
+- up_write(&sessions_table_lock);
+
+ down_write(&conn->session_lock);
+ xa_for_each(&conn->sessions, id, sess) {
+@@ -249,6 +253,7 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ }
+ }
+ up_write(&conn->session_lock);
++ up_write(&sessions_table_lock);
+ }
+
+ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index bb3e7b09201a88..6223908f9c5642 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -238,11 +238,11 @@ static void __handle_ksmbd_work(struct ksmbd_work *work,
+ } while (is_chained == true);
+
+ send:
+- if (work->sess)
+- ksmbd_user_session_put(work->sess);
+ if (work->tcon)
+ ksmbd_tree_connect_put(work->tcon);
+ smb3_preauth_hash_rsp(work);
++ if (work->sess)
++ ksmbd_user_session_put(work->sess);
+ if (work->sess && work->sess->enc && work->encrypted &&
+ conn->ops->encrypt_resp) {
+ rc = conn->ops->encrypt_resp(work);
+@@ -270,6 +270,7 @@ static void handle_ksmbd_work(struct work_struct *wk)
+
+ ksmbd_conn_try_dequeue_request(work);
+ ksmbd_free_work_struct(work);
++ atomic_dec(&conn->mux_smb_requests);
+ /*
+ * Check the waitqueue to drop pending requests on
+ * disconnection. waitqueue_active is safe because it
+@@ -291,6 +292,15 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn)
+ struct ksmbd_work *work;
+ int err;
+
++ err = ksmbd_init_smb_server(conn);
++ if (err)
++ return 0;
++
++ if (atomic_inc_return(&conn->mux_smb_requests) >= conn->vals->max_credits) {
++ atomic_dec_return(&conn->mux_smb_requests);
++ return -ENOSPC;
++ }
++
+ work = ksmbd_alloc_work_struct();
+ if (!work) {
+ pr_err("allocation for work failed\n");
+@@ -301,12 +311,6 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn)
+ work->request_buf = conn->request_buf;
+ conn->request_buf = NULL;
+
+- err = ksmbd_init_smb_server(work);
+- if (err) {
+- ksmbd_free_work_struct(work);
+- return 0;
+- }
+-
+ ksmbd_conn_enqueue_request(work);
+ atomic_inc(&conn->r_count);
+ /* update activity on connection */
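
The mux_smb_requests accounting added above is a small admission-control pattern: atomically increment on entry, reject and roll back once the limit is reached, and decrement when the work item completes. A hedged userspace C sketch of the same guard (the limit constant and names are illustrative):

#include <errno.h>
#include <stdatomic.h>

#define MAX_IN_FLIGHT 8192 /* stand-in for conn->vals->max_credits */

static _Atomic int in_flight;

/* Equivalent of atomic_inc_return(...) >= max: bump first, then test. */
static int request_enter(void)
{
    if (atomic_fetch_add(&in_flight, 1) + 1 >= MAX_IN_FLIGHT) {
        atomic_fetch_sub(&in_flight, 1); /* roll the increment back */
        return -ENOSPC;
    }
    return 0;
}

/* Called when the request is done, mirroring handle_ksmbd_work(). */
static void request_exit(void)
{
    atomic_fetch_sub(&in_flight, 1);
}
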
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 13818ecb6e1b2f..663b014b9d1886 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -388,6 +388,10 @@ static struct smb_version_ops smb1_server_ops = {
+ .set_rsp_status = set_smb1_rsp_status,
+ };
+
++static struct smb_version_values smb1_server_values = {
++ .max_credits = SMB2_MAX_CREDITS,
++};
++
+ static int smb1_negotiate(struct ksmbd_work *work)
+ {
+ return ksmbd_smb_negotiate_common(work, SMB_COM_NEGOTIATE);
+@@ -399,18 +403,18 @@ static struct smb_version_cmds smb1_server_cmds[1] = {
+
+ static int init_smb1_server(struct ksmbd_conn *conn)
+ {
++ conn->vals = &smb1_server_values;
+ conn->ops = &smb1_server_ops;
+ conn->cmds = smb1_server_cmds;
+ conn->max_cmds = ARRAY_SIZE(smb1_server_cmds);
+ return 0;
+ }
+
+-int ksmbd_init_smb_server(struct ksmbd_work *work)
++int ksmbd_init_smb_server(struct ksmbd_conn *conn)
+ {
+- struct ksmbd_conn *conn = work->conn;
+ __le32 proto;
+
+- proto = *(__le32 *)((struct smb_hdr *)work->request_buf)->Protocol;
++ proto = *(__le32 *)((struct smb_hdr *)conn->request_buf)->Protocol;
+ if (conn->need_neg == false) {
+ if (proto == SMB1_PROTO_NUMBER)
+ return -EINVAL;
+diff --git a/fs/smb/server/smb_common.h b/fs/smb/server/smb_common.h
+index cc1d6dfe29d565..a3d8a905b07e07 100644
+--- a/fs/smb/server/smb_common.h
++++ b/fs/smb/server/smb_common.h
+@@ -427,7 +427,7 @@ bool ksmbd_smb_request(struct ksmbd_conn *conn);
+
+ int ksmbd_lookup_dialect_by_id(__le16 *cli_dialects, __le16 dialects_count);
+
+-int ksmbd_init_smb_server(struct ksmbd_work *work);
++int ksmbd_init_smb_server(struct ksmbd_conn *conn);
+
+ struct ksmbd_kstat;
+ int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work,
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 1748dff58c3bc9..cfc614c638daf6 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -392,6 +392,9 @@ static int tracefs_reconfigure(struct fs_context *fc)
+ struct tracefs_fs_info *sb_opts = sb->s_fs_info;
+ struct tracefs_fs_info *new_opts = fc->s_fs_info;
+
++ if (!new_opts)
++ return 0;
++
+ sync_filesystem(sb);
+ /* structure copy of new mount options to sb */
+ *sb_opts = *new_opts;
+@@ -478,14 +481,17 @@ static int tracefs_fill_super(struct super_block *sb, struct fs_context *fc)
+ sb->s_op = &tracefs_super_operations;
+ sb->s_d_op = &tracefs_dentry_operations;
+
+- tracefs_apply_options(sb, false);
+-
+ return 0;
+ }
+
+ static int tracefs_get_tree(struct fs_context *fc)
+ {
+- return get_tree_single(fc, tracefs_fill_super);
++ int err = get_tree_single(fc, tracefs_fill_super);
++
++ if (err)
++ return err;
++
++ return tracefs_reconfigure(fc);
+ }
+
+ static void tracefs_free_fc(struct fs_context *fc)
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index 083f8565371616..374ff338755ca2 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -227,8 +227,6 @@ u32 arm_smccc_get_version(void);
+
+ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit);
+
+-extern u64 smccc_has_sve_hint;
+-
+ /**
+ * arm_smccc_get_soc_id_version()
+ *
+@@ -326,15 +324,6 @@ struct arm_smccc_quirk {
+ } state;
+ };
+
+-/**
+- * __arm_smccc_sve_check() - Set the SVE hint bit when doing SMC calls
+- *
+- * Sets the SMCCC hint bit to indicate if there is live state in the SVE
+- * registers, this modifies x0 in place and should never be called from C
+- * code.
+- */
+-asmlinkage unsigned long __arm_smccc_sve_check(unsigned long x0);
+-
+ /**
+ * __arm_smccc_smc() - make SMC calls
+ * @a0-a7: arguments passed in registers 0 to 7
+@@ -402,20 +391,6 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+
+ #endif
+
+-/* nVHE hypervisor doesn't have a current thread so needs separate checks */
+-#if defined(CONFIG_ARM64_SVE) && !defined(__KVM_NVHE_HYPERVISOR__)
+-
+-#define SMCCC_SVE_CHECK ALTERNATIVE("nop \n", "bl __arm_smccc_sve_check \n", \
+- ARM64_SVE)
+-#define smccc_sve_clobbers "x16", "x30", "cc",
+-
+-#else
+-
+-#define SMCCC_SVE_CHECK
+-#define smccc_sve_clobbers
+-
+-#endif
+-
+ #define __constraint_read_2 "r" (arg0)
+ #define __constraint_read_3 __constraint_read_2, "r" (arg1)
+ #define __constraint_read_4 __constraint_read_3, "r" (arg2)
+@@ -486,12 +461,11 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ register unsigned long r3 asm("r3"); \
+ CONCATENATE(__declare_arg_, \
+ COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__); \
+- asm volatile(SMCCC_SVE_CHECK \
+- inst "\n" : \
++ asm volatile(inst "\n" : \
+ "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3) \
+ : CONCATENATE(__constraint_read_, \
+ COUNT_ARGS(__VA_ARGS__)) \
+- : smccc_sve_clobbers "memory"); \
++ : "memory"); \
+ if (___res) \
+ *___res = (typeof(*___res)){r0, r1, r2, r3}; \
+ } while (0)
+@@ -540,7 +514,7 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ asm ("" : \
+ : CONCATENATE(__constraint_read_, \
+ COUNT_ARGS(__VA_ARGS__)) \
+- : smccc_sve_clobbers "memory"); \
++ : "memory"); \
+ if (___res) \
+ ___res->a0 = SMCCC_RET_NOT_SUPPORTED; \
+ } while (0)
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index a46e2047bea4d2..faceadb040f9ac 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -324,8 +324,8 @@ static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio)
+ void bio_trim(struct bio *bio, sector_t offset, sector_t size);
+ extern struct bio *bio_split(struct bio *bio, int sectors,
+ gfp_t gfp, struct bio_set *bs);
+-struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
+- unsigned *segs, struct bio_set *bs, unsigned max_bytes);
++int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim,
++ unsigned *segs, unsigned max_bytes);
+
+ /**
+ * bio_next_split - get next @sectors from a bio, splitting if necessary
+diff --git a/include/linux/soc/qcom/llcc-qcom.h b/include/linux/soc/qcom/llcc-qcom.h
+index 9e9f528b13701f..2f20281d4ad435 100644
+--- a/include/linux/soc/qcom/llcc-qcom.h
++++ b/include/linux/soc/qcom/llcc-qcom.h
+@@ -125,6 +125,7 @@ struct llcc_edac_reg_offset {
+ * @num_banks: Number of llcc banks
+ * @bitmap: Bit map to track the active slice ids
+ * @ecc_irq: interrupt for llcc cache error detection and reporting
++ * @ecc_irq_configured: 'True' if firmware has already configured the irq propagation
+ * @version: Indicates the LLCC version
+ */
+ struct llcc_drv_data {
+@@ -139,6 +140,7 @@ struct llcc_drv_data {
+ u32 num_banks;
+ unsigned long *bitmap;
+ int ecc_irq;
++ bool ecc_irq_configured;
+ u32 version;
+ };
+
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index 6030a823561735..0327031865c6a9 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -139,7 +139,8 @@ static inline long get_rlimit_value(struct ucounts *ucounts, enum rlimit_type ty
+
+ long inc_rlimit_ucounts(struct ucounts *ucounts, enum rlimit_type type, long v);
+ bool dec_rlimit_ucounts(struct ucounts *ucounts, enum rlimit_type type, long v);
+-long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type);
++long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type,
++ bool override_rlimit);
+ void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum rlimit_type type);
+ bool is_rlimit_overlimit(struct ucounts *ucounts, enum rlimit_type type, unsigned long max);
+
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 2be4738eae1cc1..0d01e0310e5fe7 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1099,6 +1099,7 @@ struct nft_rule_blob {
+ * @name: name of the chain
+ * @udlen: user data length
+ * @udata: user data in the chain
++ * @rcu_head: rcu head for deferred release
+ * @blob_next: rule blob pointer to the next in the chain
+ */
+ struct nft_chain {
+@@ -1116,6 +1117,7 @@ struct nft_chain {
+ char *name;
+ u16 udlen;
+ u8 *udata;
++ struct rcu_head rcu_head;
+
+ /* Only used during control plane commit phase: */
+ struct nft_rule_blob *blob_next;
+@@ -1259,6 +1261,7 @@ static inline void nft_use_inc_restore(u32 *use)
+ * @sets: sets in the table
+ * @objects: stateful objects in the table
+ * @flowtables: flow tables in the table
++ * @net: net namespace this table belongs to
+ * @hgenerator: handle generator state
+ * @handle: table handle
+ * @use: number of chain references to this table
+@@ -1278,6 +1281,7 @@ struct nft_table {
+ struct list_head sets;
+ struct list_head objects;
+ struct list_head flowtables;
++ possible_net_t net;
+ u64 hgenerator;
+ u64 handle;
+ u32 use;
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index a1b126a6b0d72d..cc22596c7250cf 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -287,6 +287,7 @@
+ EM(rxrpc_call_see_input, "SEE input ") \
+ EM(rxrpc_call_see_release, "SEE release ") \
+ EM(rxrpc_call_see_userid_exists, "SEE u-exists") \
++ EM(rxrpc_call_see_waiting_call, "SEE q-conn ") \
+ E_(rxrpc_call_see_zap, "SEE zap ")
+
+ #define rxrpc_txqueue_traces \
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 60c737e423a188..7a60ce49250663 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -419,7 +419,8 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t gfp_flags,
+ */
+ rcu_read_lock();
+ ucounts = task_ucounts(t);
+- sigpending = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
++ sigpending = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING,
++ override_rlimit);
+ rcu_read_unlock();
+ if (!sigpending)
+ return NULL;
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 8c07714ff27d42..696406939be554 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -307,7 +307,8 @@ void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum rlimit_type type)
+ do_dec_rlimit_put_ucounts(ucounts, NULL, type);
+ }
+
+-long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type)
++long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type,
++ bool override_rlimit)
+ {
+ /* Caller must hold a reference to ucounts */
+ struct ucounts *iter;
+@@ -317,10 +318,11 @@ long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type)
+ for (iter = ucounts; iter; iter = iter->ns->ucounts) {
+ long new = atomic_long_add_return(1, &iter->rlimit[type]);
+ if (new < 0 || new > max)
+- goto unwind;
++ goto dec_unwind;
+ if (iter == ucounts)
+ ret = new;
+- max = get_userns_rlimit_max(iter->ns, type);
++ if (!override_rlimit)
++ max = get_userns_rlimit_max(iter->ns, type);
+ /*
+ * Grab an extra ucount reference for the caller when
+ * the rlimit count was previously 0.
+@@ -334,7 +336,6 @@ long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum rlimit_type type)
+ dec_unwind:
+ dec = atomic_long_sub_return(1, &iter->rlimit[type]);
+ WARN_ON_ONCE(dec < 0);
+-unwind:
+ do_dec_rlimit_put_ucounts(ucounts, iter, type);
+ return 0;
+ }
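
The goto change above is the whole fix: on overflow the counter at the failing level has already been incremented, so the rollback must first undo that increment and then walk the chain back down. A small illustrative C sketch of the same inc-then-unwind shape over a parent chain (types and names hypothetical):

#include <stdatomic.h>

struct counter {
    _Atomic long val;
    long max;
    struct counter *parent;
};

/* Increment every level up to the root; on overflow undo everything,
 * including the increment at the level that just failed. */
static int inc_hierarchy(struct counter *c)
{
    struct counter *iter, *failed;

    for (iter = c; iter; iter = iter->parent) {
        if (atomic_fetch_add(&iter->val, 1) + 1 > iter->max)
            goto dec_unwind;
    }
    return 0;

dec_unwind:
    failed = iter;
    atomic_fetch_sub(&failed->val, 1); /* the previously leaked increment */
    for (iter = c; iter != failed; iter = iter->parent)
        atomic_fetch_sub(&iter->val, 1);
    return -1;
}
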
+diff --git a/lib/objpool.c b/lib/objpool.c
+index fd108fe0d095a7..b998b720c7329d 100644
+--- a/lib/objpool.c
++++ b/lib/objpool.c
+@@ -74,15 +74,21 @@ objpool_init_percpu_slots(struct objpool_head *pool, int nr_objs,
+ * warm caches and TLB hits. by default vmalloc is used to
+ * reduce the pressure on the kernel slab system. as we know,
+ * the minimal size of vmalloc is one page since vmalloc would
+- * always align the requested size to page size
++ * always align the requested size to page size.
++ * but if vmalloc fails or is not available (e.g. GFP_ATOMIC),
++ * allocate the percpu slot with kmalloc.
+ */
+- if ((pool->gfp & GFP_ATOMIC) == GFP_ATOMIC)
+- slot = kmalloc_node(size, pool->gfp, cpu_to_node(i));
+- else
++ slot = NULL;
++
++ if ((pool->gfp & (GFP_ATOMIC | GFP_KERNEL)) != GFP_ATOMIC)
+ slot = __vmalloc_node(size, sizeof(void *), pool->gfp,
+ cpu_to_node(i), __builtin_return_address(0));
+- if (!slot)
+- return -ENOMEM;
++
++ if (!slot) {
++ slot = kmalloc_node(size, pool->gfp, cpu_to_node(i));
++ if (!slot)
++ return -ENOMEM;
++ }
+ memset(slot, 0, size);
+ pool->cpu_slots[i] = slot;
+
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 7a87628b76ab70..828ed4a977f384 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1406,7 +1406,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
+ damon_for_each_scheme(s, c) {
+ struct damos_quota *quota = &s->quota;
+
+- if (c->passed_sample_intervals != s->next_apply_sis)
++ if (c->passed_sample_intervals < s->next_apply_sis)
+ continue;
+
+ if (!s->wmarks.activated)
+@@ -1450,17 +1450,31 @@ static unsigned long damon_feed_loop_next_input(unsigned long last_input,
+ unsigned long score)
+ {
+ const unsigned long goal = 10000;
+- unsigned long score_goal_diff = max(goal, score) - min(goal, score);
+- unsigned long score_goal_diff_bp = score_goal_diff * 10000 / goal;
+- unsigned long compensation = last_input * score_goal_diff_bp / 10000;
+ /* Set the minimum input to 10000 to keep the compensation from being zero */
+ const unsigned long min_input = 10000;
++ unsigned long score_goal_diff, compensation;
++ bool over_achieving = score > goal;
+
+- if (goal > score)
++ if (score == goal)
++ return last_input;
++ if (score >= goal * 2)
++ return min_input;
++
++ if (over_achieving)
++ score_goal_diff = score - goal;
++ else
++ score_goal_diff = goal - score;
++
++ if (last_input < ULONG_MAX / score_goal_diff)
++ compensation = last_input * score_goal_diff / goal;
++ else
++ compensation = last_input / goal * score_goal_diff;
++
++ if (over_achieving)
++ return max(last_input - compensation, min_input);
++ if (last_input < ULONG_MAX - compensation)
+ return last_input + compensation;
+- if (last_input > compensation + min_input)
+- return last_input - compensation;
+- return min_input;
++ return ULONG_MAX;
+ }
+
+ #ifdef CONFIG_PSI
+@@ -1613,7 +1627,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
+ bool has_schemes_to_apply = false;
+
+ damon_for_each_scheme(s, c) {
+- if (c->passed_sample_intervals != s->next_apply_sis)
++ if (c->passed_sample_intervals < s->next_apply_sis)
+ continue;
+
+ if (!s->wmarks.activated)
+@@ -1633,9 +1647,9 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
+ }
+
+ damon_for_each_scheme(s, c) {
+- if (c->passed_sample_intervals != s->next_apply_sis)
++ if (c->passed_sample_intervals < s->next_apply_sis)
+ continue;
+- s->next_apply_sis +=
++ s->next_apply_sis = c->passed_sample_intervals +
+ (s->apply_interval_us ? s->apply_interval_us :
+ c->attrs.aggr_interval) / sample_interval;
+ }
+@@ -1987,7 +2001,7 @@ static int kdamond_fn(void *data)
+ if (ctx->ops.check_accesses)
+ max_nr_accesses = ctx->ops.check_accesses(ctx);
+
+- if (ctx->passed_sample_intervals == next_aggregation_sis) {
++ if (ctx->passed_sample_intervals >= next_aggregation_sis) {
+ kdamond_merge_regions(ctx,
+ max_nr_accesses / 10,
+ sz_limit);
+@@ -2005,7 +2019,7 @@ static int kdamond_fn(void *data)
+
+ sample_interval = ctx->attrs.sample_interval ?
+ ctx->attrs.sample_interval : 1;
+- if (ctx->passed_sample_intervals == next_aggregation_sis) {
++ if (ctx->passed_sample_intervals >= next_aggregation_sis) {
+ ctx->next_aggregation_sis = next_aggregation_sis +
+ ctx->attrs.aggr_interval / sample_interval;
+
+@@ -2015,7 +2029,7 @@ static int kdamond_fn(void *data)
+ ctx->ops.reset_aggregated(ctx);
+ }
+
+- if (ctx->passed_sample_intervals == next_ops_update_sis) {
++ if (ctx->passed_sample_intervals >= next_ops_update_sis) {
+ ctx->next_ops_update_sis = next_ops_update_sis +
+ ctx->attrs.ops_update_interval /
+ sample_interval;
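
To make the reworked feedback step in damon_feed_loop_next_input() concrete, a worked example: with goal = 10000, score = 5000 and last_input = 200000, the shortfall is 10000 - 5000 = 5000, the compensation is 200000 * 5000 / 10000 = 100000, and since the score under-achieves the next input becomes 200000 + 100000 = 300000. An over-achieving score of 15000 yields the same compensation but subtracts it instead, giving max(200000 - 100000, 10000) = 100000, and any score at or beyond 2 * goal drops straight to min_input. Multiplying before dividing, with the ULONG_MAX guard choosing the order, is what keeps the arithmetic exact without overflowing.
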
+diff --git a/mm/filemap.c b/mm/filemap.c
+index a6bc35830a34c3..ece82bda1c1e28 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2609,7 +2609,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
+ if (unlikely(!iov_iter_count(iter)))
+ return 0;
+
+- iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
++ iov_iter_truncate(iter, inode->i_sb->s_maxbytes - iocb->ki_pos);
+ folio_batch_init(&fbatch);
+
+ do {
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index e44508e46e8979..a4d0dbb04ea764 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -3268,18 +3268,38 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+ return ret;
+ }
+
+-void __folio_undo_large_rmappable(struct folio *folio)
++/*
++ * __folio_unqueue_deferred_split() is not to be called directly:
++ * the folio_unqueue_deferred_split() inline wrapper in mm/internal.h
++ * limits its calls to those folios which may have a _deferred_list for
++ * queueing THP splits, and that list is (racily observed to be) non-empty.
++ *
++ * It is unsafe to call folio_unqueue_deferred_split() until folio refcount is
++ * zero: because even when split_queue_lock is held, a non-empty _deferred_list
++ * might be in use on deferred_split_scan()'s unlocked on-stack list.
++ *
++ * If memory cgroups are enabled, split_queue_lock is in the mem_cgroup: it is
++ * therefore important to unqueue deferred split before changing folio memcg.
++ */
++bool __folio_unqueue_deferred_split(struct folio *folio)
+ {
+ struct deferred_split *ds_queue;
+ unsigned long flags;
++ bool unqueued = false;
++
++ WARN_ON_ONCE(folio_ref_count(folio));
++ WARN_ON_ONCE(!mem_cgroup_disabled() && !folio_memcg(folio));
+
+ ds_queue = get_deferred_split_queue(folio);
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ if (!list_empty(&folio->_deferred_list)) {
+ ds_queue->split_queue_len--;
+ list_del_init(&folio->_deferred_list);
++ unqueued = true;
+ }
+ spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
++
++ return unqueued; /* useful for debug warnings */
+ }
+
+ void deferred_split_folio(struct folio *folio)
+@@ -3298,14 +3318,11 @@ void deferred_split_folio(struct folio *folio)
+ return;
+
+ /*
+- * The try_to_unmap() in page reclaim path might reach here too,
+- * this may cause a race condition to corrupt deferred split queue.
+- * And, if page reclaim is already handling the same folio, it is
+- * unnecessary to handle it again in shrinker.
+- *
+- * Check the swapcache flag to determine if the folio is being
+- * handled by page reclaim since THP swap would add the folio into
+- * swap cache before calling try_to_unmap().
++ * Exclude swapcache: originally to avoid a corrupt deferred split
++ * queue. Nowadays that is fully prevented by mem_cgroup_swapout();
++ * but if page reclaim is already handling the same folio, it is
++ * unnecessary to handle it again in the shrinker, so excluding
++ * swapcache here may still be a useful optimization.
+ */
+ if (folio_test_swapcache(folio))
+ return;
+diff --git a/mm/internal.h b/mm/internal.h
+index a963f67d3452ad..7da580dfae6c5a 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -631,11 +631,11 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
+ #endif
+ }
+
+-void __folio_undo_large_rmappable(struct folio *folio);
+-static inline void folio_undo_large_rmappable(struct folio *folio)
++bool __folio_unqueue_deferred_split(struct folio *folio);
++static inline bool folio_unqueue_deferred_split(struct folio *folio)
+ {
+ if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+- return;
++ return false;
+
+ /*
+ * At this point, there is no one trying to add the folio to
+@@ -643,9 +643,9 @@ static inline void folio_undo_large_rmappable(struct folio *folio)
+ * to check without acquiring the split_queue_lock.
+ */
+ if (data_race(list_empty(&folio->_deferred_list)))
+- return;
++ return false;
+
+- __folio_undo_large_rmappable(folio);
++ return __folio_unqueue_deferred_split(folio);
+ }
+
+ static inline struct folio *page_rmappable_folio(struct page *page)
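
The folio_unqueue_deferred_split() wrapper above keeps the common case cheap with a racy, lock-free emptiness check, leaving the authoritative re-check to __folio_unqueue_deferred_split() under split_queue_lock. A minimal pthread sketch of that check/lock/re-check shape (structure and names hypothetical):

#include <pthread.h>
#include <stdbool.h>

struct dsq {
    pthread_mutex_t lock;
    int len; /* guarded by lock */
};

static bool unqueue(struct dsq *q)
{
    bool removed = false;

    /* Racy fast path: a stale read only skips work that is almost
     * certainly unnecessary; anything else is re-checked under the lock. */
    if (__atomic_load_n(&q->len, __ATOMIC_RELAXED) == 0)
        return false;

    pthread_mutex_lock(&q->lock);
    if (q->len > 0) { /* authoritative re-check */
        q->len--;
        removed = true;
    }
    pthread_mutex_unlock(&q->lock);
    return removed;
}
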
+diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
+index 417c96f2da28e8..103c5fe41c68e3 100644
+--- a/mm/memcontrol-v1.c
++++ b/mm/memcontrol-v1.c
+@@ -845,6 +845,8 @@ static int mem_cgroup_move_account(struct folio *folio,
+ css_get(&to->css);
+ css_put(&from->css);
+
++ /* The warning should never trigger, so don't worry about a non-zero refcount */
++ WARN_ON_ONCE(folio_unqueue_deferred_split(folio));
+ folio->memcg_data = (unsigned long)to;
+
+ __folio_memcg_unlock(from);
+@@ -1214,7 +1216,9 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
+ enum mc_target_type target_type;
+ union mc_target target;
+ struct folio *folio;
++ bool tried_split_before = false;
+
++retry_pmd:
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ if (mc.precharge < HPAGE_PMD_NR) {
+@@ -1224,6 +1228,27 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
+ target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
+ if (target_type == MC_TARGET_PAGE) {
+ folio = target.folio;
++ /*
++ * Deferred split queue locking depends on memcg,
++ * and unqueue is unsafe unless folio refcount is 0:
++ * split or skip if on the queue? first try to split.
++ */
++ if (!list_empty(&folio->_deferred_list)) {
++ spin_unlock(ptl);
++ if (!tried_split_before)
++ split_folio(folio);
++ folio_unlock(folio);
++ folio_put(folio);
++ if (tried_split_before)
++ return 0;
++ tried_split_before = true;
++ goto retry_pmd;
++ }
++ /*
++ * So long as that pmd lock is held, the folio cannot
++ * be racily added to the _deferred_list, because
++ * __folio_remove_rmap() will find !partially_mapped.
++ */
+ if (folio_isolate_lru(folio)) {
+ if (!mem_cgroup_move_account(folio, true,
+ mc.from, mc.to)) {
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index d563fb515766bc..9b0a6a77a7b219 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4604,9 +4604,6 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
+ struct obj_cgroup *objcg;
+
+ VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
+- VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
+- !folio_test_hugetlb(folio) &&
+- !list_empty(&folio->_deferred_list), folio);
+
+ /*
+ * Nobody should be changing or seriously looking at
+@@ -4653,6 +4650,7 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
+ ug->nr_memory += nr_pages;
+ ug->pgpgout++;
+
++ WARN_ON_ONCE(folio_unqueue_deferred_split(folio));
+ folio->memcg_data = 0;
+ }
+
+@@ -4769,6 +4767,9 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
+
+ /* Transfer the charge and the css ref */
+ commit_charge(new, memcg);
++
++ /* The warning should never trigger, so don't worry about a non-zero refcount */
++ WARN_ON_ONCE(folio_unqueue_deferred_split(old));
+ old->memcg_data = 0;
+ }
+
+@@ -4955,6 +4956,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
+ VM_BUG_ON_FOLIO(oldid, folio);
+ mod_memcg_state(swap_memcg, MEMCG_SWAP, nr_entries);
+
++ folio_unqueue_deferred_split(folio);
+ folio->memcg_data = 0;
+
+ if (!mem_cgroup_is_root(memcg))
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 75b858bd6aa58f..5028f3788b67ad 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -415,7 +415,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
+ folio_test_large_rmappable(folio)) {
+ if (!folio_ref_freeze(folio, expected_count))
+ return -EAGAIN;
+- folio_undo_large_rmappable(folio);
++ folio_unqueue_deferred_split(folio);
+ folio_ref_unfreeze(folio, expected_count);
+ }
+
+@@ -438,7 +438,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
+ }
+
+ /* Take off deferred split queue while frozen and memcg set */
+- folio_undo_large_rmappable(folio);
++ folio_unqueue_deferred_split(folio);
+
+ /*
+ * Now we know that no one else is looking at the folio:
+diff --git a/mm/mlock.c b/mm/mlock.c
+index e3e3dc2b295639..cde076fa7d5e5a 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -725,14 +725,17 @@ static int apply_mlockall_flags(int flags)
+ }
+
+ for_each_vma(vmi, vma) {
++ int error;
+ vm_flags_t newflags;
+
+ newflags = vma->vm_flags & ~VM_LOCKED_MASK;
+ newflags |= to_add;
+
+- /* Ignore errors */
+- mlock_fixup(&vmi, vma, &prev, vma->vm_start, vma->vm_end,
+- newflags);
++ error = mlock_fixup(&vmi, vma, &prev, vma->vm_start, vma->vm_end,
++ newflags);
++ /* Ignore errors, but prev needs fixing up. */
++ if (error)
++ prev = vma;
+ cond_resched();
+ }
+ out:
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index ec459522c29349..f9111356d1047b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2663,7 +2663,6 @@ void free_unref_folios(struct folio_batch *folios)
+ unsigned long pfn = folio_pfn(folio);
+ unsigned int order = folio_order(folio);
+
+- folio_undo_large_rmappable(folio);
+ if (!free_pages_prepare(&folio->page, order))
+ continue;
+ /*
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index cff602cedf8e63..cf70eda8895650 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -418,8 +418,11 @@ kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
+ unsigned int usersize,
+ void (*ctor)(void *))
+ {
++ unsigned long mask = 0;
++ unsigned int idx;
+ kmem_buckets *b;
+- int idx;
++
++ BUILD_BUG_ON(ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]) > BITS_PER_LONG);
+
+ /*
+ * When the separate buckets API is not built in, just return
+@@ -441,7 +444,7 @@ kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
+ for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
+ char *short_size, *cache_name;
+ unsigned int cache_useroffset, cache_usersize;
+- unsigned int size;
++ unsigned int size, aligned_idx;
+
+ if (!kmalloc_caches[KMALLOC_NORMAL][idx])
+ continue;
+@@ -454,10 +457,6 @@ kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
+ if (WARN_ON(!short_size))
+ goto fail;
+
+- cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
+- if (WARN_ON(!cache_name))
+- goto fail;
+-
+ if (useroffset >= size) {
+ cache_useroffset = 0;
+ cache_usersize = 0;
+@@ -465,18 +464,28 @@ kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
+ cache_useroffset = useroffset;
+ cache_usersize = min(size - cache_useroffset, usersize);
+ }
+- (*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
++
++ aligned_idx = __kmalloc_index(size, false);
++ if (!(*b)[aligned_idx]) {
++ cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
++ if (WARN_ON(!cache_name))
++ goto fail;
++ (*b)[aligned_idx] = kmem_cache_create_usercopy(cache_name, size,
+ 0, flags, cache_useroffset,
+ cache_usersize, ctor);
+- kfree(cache_name);
+- if (WARN_ON(!(*b)[idx]))
+- goto fail;
++ kfree(cache_name);
++ if (WARN_ON(!(*b)[aligned_idx]))
++ goto fail;
++ set_bit(aligned_idx, &mask);
++ }
++ if (idx != aligned_idx)
++ (*b)[idx] = (*b)[aligned_idx];
+ }
+
+ return b;
+
+ fail:
+- for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++)
++ for_each_set_bit(idx, &mask, ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]))
+ kmem_cache_destroy((*b)[idx]);
+ kfree(b);
+
+diff --git a/mm/swap.c b/mm/swap.c
+index 9caf6b017cf0ab..1e734a5a6453e2 100644
+--- a/mm/swap.c
++++ b/mm/swap.c
+@@ -123,7 +123,7 @@ void __folio_put(struct folio *folio)
+ }
+
+ page_cache_release(folio);
+- folio_undo_large_rmappable(folio);
++ folio_unqueue_deferred_split(folio);
+ mem_cgroup_uncharge(folio);
+ free_unref_page(&folio->page, folio_order(folio));
+ }
+@@ -1020,7 +1020,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
+ free_huge_folio(folio);
+ continue;
+ }
+- folio_undo_large_rmappable(folio);
++ folio_unqueue_deferred_split(folio);
+ __page_cache_release(folio, &lruvec, &flags);
+
+ if (j != i)
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index f5bcd08527ae0f..b7f326f87363a2 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1462,7 +1462,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ */
+ nr_reclaimed += nr_pages;
+
+- folio_undo_large_rmappable(folio);
++ folio_unqueue_deferred_split(folio);
+ if (folio_batch_add(&free_folios, folio) == 0) {
+ mem_cgroup_uncharge_folios(&free_folios);
+ try_to_unmap_flush();
+@@ -1849,7 +1849,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
+ if (unlikely(folio_put_testzero(folio))) {
+ __folio_clear_lru_flags(folio);
+
+- folio_undo_large_rmappable(folio);
++ folio_unqueue_deferred_split(folio);
+ if (folio_batch_add(&free_folios, folio) == 0) {
+ spin_unlock_irq(&lruvec->lru_lock);
+ mem_cgroup_uncharge_folios(&free_folios);
+diff --git a/net/mptcp/mptcp_pm_gen.c b/net/mptcp/mptcp_pm_gen.c
+index c30a2a90a19252..bfb37c5a88c4ef 100644
+--- a/net/mptcp/mptcp_pm_gen.c
++++ b/net/mptcp/mptcp_pm_gen.c
+@@ -112,7 +112,6 @@ const struct genl_ops mptcp_pm_nl_ops[11] = {
+ .dumpit = mptcp_pm_nl_get_addr_dumpit,
+ .policy = mptcp_pm_get_addr_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_TOKEN,
+- .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_FLUSH_ADDRS,
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 8eaa9fbe3e343b..8f3b01d46d243f 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -91,6 +91,7 @@ static int mptcp_userspace_pm_delete_local_addr(struct mptcp_sock *msk,
+ struct mptcp_pm_addr_entry *addr)
+ {
+ struct mptcp_pm_addr_entry *entry, *tmp;
++ struct sock *sk = (struct sock *)msk;
+
+ list_for_each_entry_safe(entry, tmp, &msk->pm.userspace_pm_local_addr_list, list) {
+ if (mptcp_addresses_equal(&entry->addr, &addr->addr, false)) {
+@@ -98,7 +99,7 @@ static int mptcp_userspace_pm_delete_local_addr(struct mptcp_sock *msk,
+ * be used multiple times (e.g. fullmesh mode).
+ */
+ list_del_rcu(&entry->list);
+- kfree(entry);
++ sock_kfree_s(sk, entry, sizeof(*entry));
+ msk->pm.local_addr_used--;
+ return 0;
+ }
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index e792f153f9587b..58503348ed3a3e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1493,6 +1493,7 @@ static int nf_tables_newtable(struct sk_buff *skb, const struct nfnl_info *info,
+ INIT_LIST_HEAD(&table->sets);
+ INIT_LIST_HEAD(&table->objects);
+ INIT_LIST_HEAD(&table->flowtables);
++ write_pnet(&table->net, net);
+ table->family = family;
+ table->flags = flags;
+ table->handle = ++nft_net->table_handle;
+@@ -11363,22 +11364,48 @@ int nft_data_dump(struct sk_buff *skb, int attr, const struct nft_data *data,
+ }
+ EXPORT_SYMBOL_GPL(nft_data_dump);
+
+-int __nft_release_basechain(struct nft_ctx *ctx)
++static void __nft_release_basechain_now(struct nft_ctx *ctx)
+ {
+ struct nft_rule *rule, *nr;
+
+- if (WARN_ON(!nft_is_base_chain(ctx->chain)))
+- return 0;
+-
+- nf_tables_unregister_hook(ctx->net, ctx->chain->table, ctx->chain);
+ list_for_each_entry_safe(rule, nr, &ctx->chain->rules, list) {
+ list_del(&rule->list);
+- nft_use_dec(&ctx->chain->use);
+ nf_tables_rule_release(ctx, rule);
+ }
++ nf_tables_chain_destroy(ctx->chain);
++}
++
++static void nft_release_basechain_rcu(struct rcu_head *head)
++{
++ struct nft_chain *chain = container_of(head, struct nft_chain, rcu_head);
++ struct nft_ctx ctx = {
++ .family = chain->table->family,
++ .chain = chain,
++ .net = read_pnet(&chain->table->net),
++ };
++
++ __nft_release_basechain_now(&ctx);
++ put_net(ctx.net);
++}
++
++int __nft_release_basechain(struct nft_ctx *ctx)
++{
++ struct nft_rule *rule;
++
++ if (WARN_ON_ONCE(!nft_is_base_chain(ctx->chain)))
++ return 0;
++
++ nf_tables_unregister_hook(ctx->net, ctx->chain->table, ctx->chain);
++ list_for_each_entry(rule, &ctx->chain->rules, list)
++ nft_use_dec(&ctx->chain->use);
++
+ nft_chain_del(ctx->chain);
+ nft_use_dec(&ctx->table->use);
+- nf_tables_chain_destroy(ctx->chain);
++
++ if (maybe_get_net(ctx->net))
++ call_rcu(&ctx->chain->rcu_head, nft_release_basechain_rcu);
++ else
++ __nft_release_basechain_now(ctx);
+
+ return 0;
+ }
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index d25bf1cf36700d..bb11e8289d6dcf 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -516,6 +516,7 @@ void rxrpc_connect_client_calls(struct rxrpc_local *local)
+
+ spin_lock(&local->client_call_lock);
+ list_move_tail(&call->wait_link, &bundle->waiting_calls);
++ rxrpc_see_call(call, rxrpc_call_see_waiting_call);
+ spin_unlock(&local->client_call_lock);
+
+ if (rxrpc_bundle_has_space(bundle))
+@@ -586,7 +587,10 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call
+ _debug("call is waiting");
+ ASSERTCMP(call->call_id, ==, 0);
+ ASSERT(!test_bit(RXRPC_CALL_EXPOSED, &call->flags));
++ /* May still be on ->new_client_calls. */
++ spin_lock(&local->client_call_lock);
+ list_del_init(&call->wait_link);
++ spin_unlock(&local->client_call_lock);
+ return;
+ }
+
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 7d315a18612ba5..a0524ba8d78781 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -3751,7 +3751,7 @@ enum sctp_disposition sctp_sf_ootb(struct net *net,
+ }
+
+ ch = (struct sctp_chunkhdr *)ch_end;
+- } while (ch_end < skb_tail_pointer(skb));
++ } while (ch_end + sizeof(*ch) < skb_tail_pointer(skb));
+
+ if (ootb_shut_ack)
+ return sctp_sf_shut_8_4_5(net, ep, asoc, type, arg, commands);
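
The tightened loop condition above stops the walk as soon as a whole chunk header no longer fits in the buffer, instead of dereferencing a truncated header on the next pass. A generic TLV walker with the same guard might look like this sketch (struct layout hypothetical, host byte order assumed for brevity):

#include <stddef.h>
#include <stdint.h>

struct tlv_hdr {
    uint8_t type;
    uint8_t flags;
    uint16_t length; /* total length, including this header */
};

static void walk_tlvs(const uint8_t *buf, size_t len)
{
    const uint8_t *end = buf + len;
    const struct tlv_hdr *h = (const void *)buf;

    /* Continue only while a complete header fits before 'end', the
     * analogue of "ch_end + sizeof(*ch) < skb_tail_pointer(skb)". */
    while ((const uint8_t *)h + sizeof(*h) <= end) {
        size_t l = h->length;

        if (l < sizeof(*h) || (const uint8_t *)h + l > end)
            break; /* malformed or truncated chunk */
        /* ... inspect *h here ... */
        h = (const void *)((const uint8_t *)h + l);
    }
}
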
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 8e3093938cd226..c61a02aba319af 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -3367,8 +3367,10 @@ static int __smc_create(struct net *net, struct socket *sock, int protocol,
+ else
+ rc = smc_create_clcsk(net, sk, family);
+
+- if (rc)
++ if (rc) {
+ sk_common_release(sk);
++ sock->sk = NULL;
++ }
+ out:
+ return rc;
+ }
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 0e1691316f4234..1326fbf45a3479 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2459,6 +2459,7 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ case -EHOSTUNREACH:
+ case -EADDRINUSE:
+ case -ENOBUFS:
++ case -ENOTCONN:
+ break;
+ default:
+ printk("%s: connect returned unhandled error %d\n",
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index e2157e38721770..56c232cf5b0f4f 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -549,6 +549,7 @@ static void hvs_destruct(struct vsock_sock *vsk)
+ vmbus_hvsock_device_unregister(chan);
+
+ kfree(hvs);
++ vsk->trans = NULL;
+ }
+
+ static int hvs_dgram_bind(struct vsock_sock *vsk, struct sockaddr_vm *addr)
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 645222ac84e3fb..01b6b1ed5acfb8 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1087,6 +1087,7 @@ void virtio_transport_destruct(struct vsock_sock *vsk)
+ struct virtio_vsock_sock *vvs = vsk->trans;
+
+ kfree(vvs);
++ vsk->trans = NULL;
+ }
+ EXPORT_SYMBOL_GPL(virtio_transport_destruct);
+
+diff --git a/security/keys/keyring.c b/security/keys/keyring.c
+index 4448758f643a57..f331725d5a370d 100644
+--- a/security/keys/keyring.c
++++ b/security/keys/keyring.c
+@@ -772,8 +772,11 @@ static bool search_nested_keyrings(struct key *keyring,
+ for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = READ_ONCE(node->slots[slot]);
+
+- if (assoc_array_ptr_is_meta(ptr) && node->back_pointer)
+- goto descend_to_node;
++ if (assoc_array_ptr_is_meta(ptr)) {
++ if (node->back_pointer ||
++ assoc_array_ptr_is_shortcut(ptr))
++ goto descend_to_node;
++ }
+
+ if (!keyring_ptr_is_keyring(ptr))
+ continue;
+diff --git a/security/keys/trusted-keys/trusted_dcp.c b/security/keys/trusted-keys/trusted_dcp.c
+index 4edc5bbbcda3c9..e908c53a803c4b 100644
+--- a/security/keys/trusted-keys/trusted_dcp.c
++++ b/security/keys/trusted-keys/trusted_dcp.c
+@@ -133,6 +133,7 @@ static int do_aead_crypto(u8 *in, u8 *out, size_t len, u8 *key, u8 *nonce,
+ struct scatterlist src_sg, dst_sg;
+ struct crypto_aead *aead;
+ int ret;
++ DECLARE_CRYPTO_WAIT(wait);
+
+ aead = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(aead)) {
+@@ -163,8 +164,8 @@ static int do_aead_crypto(u8 *in, u8 *out, size_t len, u8 *key, u8 *nonce,
+ }
+
+ aead_request_set_crypt(aead_req, &src_sg, &dst_sg, len, nonce);
+- aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL,
+- NULL);
++ aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP,
++ crypto_req_done, &wait);
+ aead_request_set_ad(aead_req, 0);
+
+ if (crypto_aead_setkey(aead, key, AES_KEYSIZE_128)) {
+@@ -174,9 +175,9 @@ static int do_aead_crypto(u8 *in, u8 *out, size_t len, u8 *key, u8 *nonce,
+ }
+
+ if (do_encrypt)
+- ret = crypto_aead_encrypt(aead_req);
++ ret = crypto_wait_req(crypto_aead_encrypt(aead_req), &wait);
+ else
+- ret = crypto_aead_decrypt(aead_req);
++ ret = crypto_wait_req(crypto_aead_decrypt(aead_req), &wait);
+
+ free_req:
+ aead_request_free(aead_req);
+diff --git a/sound/firewire/tascam/amdtp-tascam.c b/sound/firewire/tascam/amdtp-tascam.c
+index 0b42d65590081a..079afa4bd3811b 100644
+--- a/sound/firewire/tascam/amdtp-tascam.c
++++ b/sound/firewire/tascam/amdtp-tascam.c
+@@ -238,7 +238,7 @@ int amdtp_tscm_init(struct amdtp_stream *s, struct fw_unit *unit,
+ err = amdtp_stream_init(s, unit, dir, flags, fmt,
+ process_ctx_payloads, sizeof(struct amdtp_tscm));
+ if (err < 0)
+- return 0;
++ return err;
+
+ if (dir == AMDTP_OUT_STREAM) {
+ // Use fixed value for FDF field.
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 843cc1ed75c3e5..3a63749ec17d1b 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -205,8 +205,6 @@ static void cx_auto_shutdown(struct hda_codec *codec)
+ {
+ struct conexant_spec *spec = codec->spec;
+
+- snd_hda_gen_shutup_speakers(codec);
+-
+ /* Turn the problematic codec into D3 to avoid spurious noises
+ from the internal speaker during (and after) reboot */
+ cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index ace6328e91e31c..601785ee2f0b84 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -381,6 +381,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Redmi Book Pro 15 2022"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "TIMI"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Xiaomi Book Pro 14 2022"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/sof/sof-client-probes-ipc4.c b/sound/soc/sof/sof-client-probes-ipc4.c
+index 796eac0a2e74f7..603aed222480ff 100644
+--- a/sound/soc/sof/sof-client-probes-ipc4.c
++++ b/sound/soc/sof/sof-client-probes-ipc4.c
+@@ -125,6 +125,7 @@ static int ipc4_probes_init(struct sof_client_dev *cdev, u32 stream_tag,
+ msg.primary |= SOF_IPC4_MSG_TARGET(SOF_IPC4_MODULE_MSG);
+ msg.extension = SOF_IPC4_MOD_EXT_DST_MOD_INSTANCE(INVALID_PIPELINE_ID);
+ msg.extension |= SOF_IPC4_MOD_EXT_CORE_ID(0);
++ msg.extension |= SOF_IPC4_MOD_EXT_PARAM_SIZE(sizeof(cfg) / sizeof(uint32_t));
+
+ msg.data_size = sizeof(cfg);
+ msg.data_ptr = &cfg;
+diff --git a/sound/soc/stm/stm32_spdifrx.c b/sound/soc/stm/stm32_spdifrx.c
+index 9eed3c57e3f11c..a438df468571f5 100644
+--- a/sound/soc/stm/stm32_spdifrx.c
++++ b/sound/soc/stm/stm32_spdifrx.c
+@@ -939,7 +939,7 @@ static void stm32_spdifrx_remove(struct platform_device *pdev)
+ {
+ struct stm32_spdifrx_data *spdifrx = platform_get_drvdata(pdev);
+
+- if (spdifrx->ctrl_chan)
++ if (!IS_ERR(spdifrx->ctrl_chan))
+ dma_release_channel(spdifrx->ctrl_chan);
+
+ if (spdifrx->dmab)
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 2d27d729c3bea8..25b3c045847329 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1205,6 +1205,7 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ }
+ break;
+ case USB_ID(0x1bcf, 0x2283): /* NexiGo N930AF FHD Webcam */
++ case USB_ID(0x03f0, 0x654a): /* HP 320 FHD Webcam */
+ if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+ usb_audio_info(chip,
+ "set resolution quirk: cval->res = 16\n");
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index f4c68eb7e07a12..cee49341dabc16 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2114,6 +2114,8 @@ struct usb_audio_quirk_flags_table {
+
+ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ /* Device matches */
++ DEVICE_FLG(0x03f0, 0x654a, /* HP 320 FHD Webcam */
++ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x041e, 0x3000, /* Creative SB Extigy */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x041e, 0x4080, /* Creative Live Cam VF0610 */
+diff --git a/tools/lib/thermal/sampling.c b/tools/lib/thermal/sampling.c
+index 70577423a9f0c2..f67c1f9ea1d785 100644
+--- a/tools/lib/thermal/sampling.c
++++ b/tools/lib/thermal/sampling.c
+@@ -16,6 +16,8 @@ static int handle_thermal_sample(struct nl_msg *n, void *arg)
+ struct thermal_handler_param *thp = arg;
+ struct thermal_handler *th = thp->th;
+
++ arg = thp->arg;
++
+ genlmsg_parse(nlh, 0, attrs, THERMAL_GENL_ATTR_MAX, NULL);
+
+ switch (genlhdr->cmd) {
+diff --git a/tools/testing/selftests/mm/hugetlb_dio.c b/tools/testing/selftests/mm/hugetlb_dio.c
+index f9ac20c657ec6e..60001c142ce998 100644
+--- a/tools/testing/selftests/mm/hugetlb_dio.c
++++ b/tools/testing/selftests/mm/hugetlb_dio.c
+@@ -44,13 +44,6 @@ void run_dio_using_hugetlb(unsigned int start_off, unsigned int end_off)
+ if (fd < 0)
+ ksft_exit_fail_perror("Error opening file\n");
+
+- /* Get the free huge pages before allocation */
+- free_hpage_b = get_free_hugepages();
+- if (free_hpage_b == 0) {
+- close(fd);
+- ksft_exit_skip("No free hugepage, exiting!\n");
+- }
+-
+ /* Allocate a hugetlb page */
+ orig_buffer = mmap(NULL, h_pagesize, mmap_prot, mmap_flags, -1, 0);
+ if (orig_buffer == MAP_FAILED) {
+@@ -94,8 +87,20 @@ void run_dio_using_hugetlb(unsigned int start_off, unsigned int end_off)
+ int main(void)
+ {
+ size_t pagesize = 0;
++ int fd;
+
+ ksft_print_header();
++
++ /* Open the file for DIO */
++ fd = open("/tmp", O_TMPFILE | O_RDWR | O_DIRECT, 0664);
++ if (fd < 0)
++ ksft_exit_skip("Unable to allocate file: %s\n", strerror(errno));
++ close(fd);
++
++ /* Check if huge pages are free */
++ if (!get_free_hugepages())
++ ksft_exit_skip("No free hugepage, exiting\n");
++
+ ksft_set_plan(4);
+
+ /* Get base page size */
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-17 18:15 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-17 18:15 UTC (permalink / raw
To: gentoo-commits
commit: ca9cd6a3e840e11208e4a90d5496c25af4e9ac3d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 17 18:14:58 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 17 18:14:58 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ca9cd6a3
Linux patch 6.11.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-6.11.9.patch | 2356 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2360 insertions(+)
diff --git a/0000_README b/0000_README
index 240867d2..5e0fcd5f 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-6.11.8.patch
From: https://www.kernel.org
Desc: Linux 6.11.8
+Patch: 1008_linux-6.11.9.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.9
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1008_linux-6.11.9.patch b/1008_linux-6.11.9.patch
new file mode 100644
index 00000000..90ec0dd6
--- /dev/null
+++ b/1008_linux-6.11.9.patch
@@ -0,0 +1,2356 @@
+diff --git a/Makefile b/Makefile
+index b8641dde171ff9..3e48c8d84540bc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
+index 04a78010fc725e..ab6985d9e49f00 100644
+--- a/arch/loongarch/include/asm/loongarch.h
++++ b/arch/loongarch/include/asm/loongarch.h
+@@ -256,7 +256,7 @@
+ #define CSR_ESTAT_IS_WIDTH 14
+ #define CSR_ESTAT_IS (_ULCAST_(0x3fff) << CSR_ESTAT_IS_SHIFT)
+
+-#define LOONGARCH_CSR_ERA 0x6 /* ERA */
++#define LOONGARCH_CSR_ERA 0x6 /* Exception return address */
+
+ #define LOONGARCH_CSR_BADV 0x7 /* Bad virtual address */
+
+diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c
+index 74a4b5c272d60e..32dc213374beac 100644
+--- a/arch/loongarch/kvm/timer.c
++++ b/arch/loongarch/kvm/timer.c
+@@ -161,10 +161,11 @@ static void _kvm_save_timer(struct kvm_vcpu *vcpu)
+ if (kvm_vcpu_is_blocking(vcpu)) {
+
+ /*
+- * HRTIMER_MODE_PINNED is suggested since vcpu may run in
+- * the same physical cpu in next time
++ * HRTIMER_MODE_PINNED_HARD is suggested since vcpu may run in
++ * the same physical cpu next time, and the timer should run
++ * in hardirq context even in the PREEMPT_RT case.
+ */
+- hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
++ hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED_HARD);
+ }
+ }
+
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 6905283f535b96..9218fc521c22dc 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -1144,7 +1144,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ vcpu->arch.vpid = 0;
+ vcpu->arch.flush_gpa = INVALID_GPA;
+
+- hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
++ hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
+ vcpu->arch.swtimer.function = kvm_swtimer_wakeup;
+
+ vcpu->arch.handle_exit = kvm_handle_exit;
+diff --git a/arch/powerpc/platforms/powernv/opal-irqchip.c b/arch/powerpc/platforms/powernv/opal-irqchip.c
+index 56a1f7ce78d2c7..d92759c21fae94 100644
+--- a/arch/powerpc/platforms/powernv/opal-irqchip.c
++++ b/arch/powerpc/platforms/powernv/opal-irqchip.c
+@@ -282,6 +282,7 @@ int __init opal_event_init(void)
+ name, NULL);
+ if (rc) {
+ pr_warn("Error %d requesting OPAL irq %d\n", rc, (int)r->start);
++ kfree(name);
+ continue;
+ }
+ }
+diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c
+index 0a1e859323b457..a8085cd8215e35 100644
+--- a/arch/riscv/kvm/aia_imsic.c
++++ b/arch/riscv/kvm/aia_imsic.c
+@@ -55,7 +55,7 @@ struct imsic {
+ /* IMSIC SW-file */
+ struct imsic_mrif *swfile;
+ phys_addr_t swfile_pa;
+- spinlock_t swfile_extirq_lock;
++ raw_spinlock_t swfile_extirq_lock;
+ };
+
+ #define imsic_vs_csr_read(__c) \
+@@ -622,7 +622,7 @@ static void imsic_swfile_extirq_update(struct kvm_vcpu *vcpu)
+ * interruptions between reading topei and updating pending status.
+ */
+
+- spin_lock_irqsave(&imsic->swfile_extirq_lock, flags);
++ raw_spin_lock_irqsave(&imsic->swfile_extirq_lock, flags);
+
+ if (imsic_mrif_atomic_read(mrif, &mrif->eidelivery) &&
+ imsic_mrif_topei(mrif, imsic->nr_eix, imsic->nr_msis))
+@@ -630,7 +630,7 @@ static void imsic_swfile_extirq_update(struct kvm_vcpu *vcpu)
+ else
+ kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_EXT);
+
+- spin_unlock_irqrestore(&imsic->swfile_extirq_lock, flags);
++ raw_spin_unlock_irqrestore(&imsic->swfile_extirq_lock, flags);
+ }
+
+ static void imsic_swfile_read(struct kvm_vcpu *vcpu, bool clear,
+@@ -1051,7 +1051,7 @@ int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu)
+ }
+ imsic->swfile = page_to_virt(swfile_page);
+ imsic->swfile_pa = page_to_phys(swfile_page);
+- spin_lock_init(&imsic->swfile_extirq_lock);
++ raw_spin_lock_init(&imsic->swfile_extirq_lock);
+
+ /* Setup IO device */
+ kvm_iodevice_init(&imsic->iodev, &imsic_iodoev_ops);
+diff --git a/block/elevator.c b/block/elevator.c
+index 640fcc891b0d2b..9430cde13d1a41 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -550,7 +550,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
+ static inline bool elv_support_iosched(struct request_queue *q)
+ {
+ if (!queue_is_mq(q) ||
+- (q->tag_set && (q->tag_set->flags & BLK_MQ_F_NO_SCHED)))
++ (q->tag_set->flags & BLK_MQ_F_NO_SCHED))
+ return false;
+ return true;
+ }
+@@ -561,7 +561,7 @@ static inline bool elv_support_iosched(struct request_queue *q)
+ */
+ static struct elevator_type *elevator_get_default(struct request_queue *q)
+ {
+- if (q->tag_set && q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
++ if (q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
+ return NULL;
+
+ if (q->nr_hw_queues != 1 &&
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 122cd910c4e1c1..192ea14d64ce62 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -396,7 +396,7 @@ void crypto_alg_tested(const char *name, int err)
+ q->cra_flags |= CRYPTO_ALG_DEAD;
+ alg = test->adult;
+
+- if (list_empty(&alg->cra_list))
++ if (crypto_is_dead(alg))
+ goto complete;
+
+ if (err == -ECANCELED)
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index 8d84ad45571c7f..f150861ceaf695 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -947,7 +947,7 @@ struct ahash_alg mv_md5_alg = {
+ .base = {
+ .cra_name = "md5",
+ .cra_driver_name = "mv-md5",
+- .cra_priority = 300,
++ .cra_priority = 0,
+ .cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY,
+@@ -1018,7 +1018,7 @@ struct ahash_alg mv_sha1_alg = {
+ .base = {
+ .cra_name = "sha1",
+ .cra_driver_name = "mv-sha1",
+- .cra_priority = 300,
++ .cra_priority = 0,
+ .cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY,
+@@ -1092,7 +1092,7 @@ struct ahash_alg mv_sha256_alg = {
+ .base = {
+ .cra_name = "sha256",
+ .cra_driver_name = "mv-sha256",
+- .cra_priority = 300,
++ .cra_priority = 0,
+ .cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY,
+@@ -1302,7 +1302,7 @@ struct ahash_alg mv_ahmac_md5_alg = {
+ .base = {
+ .cra_name = "hmac(md5)",
+ .cra_driver_name = "mv-hmac-md5",
+- .cra_priority = 300,
++ .cra_priority = 0,
+ .cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY,
+@@ -1373,7 +1373,7 @@ struct ahash_alg mv_ahmac_sha1_alg = {
+ .base = {
+ .cra_name = "hmac(sha1)",
+ .cra_driver_name = "mv-hmac-sha1",
+- .cra_priority = 300,
++ .cra_priority = 0,
+ .cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY,
+@@ -1444,7 +1444,7 @@ struct ahash_alg mv_ahmac_sha256_alg = {
+ .base = {
+ .cra_name = "hmac(sha256)",
+ .cra_driver_name = "mv-hmac-sha256",
+- .cra_priority = 300,
++ .cra_priority = 0,
+ .cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 546b02f2241a67..5953bc5f31192a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -1170,7 +1170,7 @@ static int kfd_ioctl_alloc_memory_of_gpu(struct file *filep,
+
+ if (flags & KFD_IOC_ALLOC_MEM_FLAGS_AQL_QUEUE_MEM)
+ size >>= 1;
+- WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + PAGE_ALIGN(size));
++ atomic64_add(PAGE_ALIGN(size), &pdd->vram_usage);
+ }
+
+ mutex_unlock(&p->mutex);
+@@ -1241,7 +1241,7 @@ static int kfd_ioctl_free_memory_of_gpu(struct file *filep,
+ kfd_process_device_remove_obj_handle(
+ pdd, GET_IDR_HANDLE(args->handle));
+
+- WRITE_ONCE(pdd->vram_usage, pdd->vram_usage - size);
++ atomic64_sub(size, &pdd->vram_usage);
+
+ err_unlock:
+ err_pdd:
+@@ -2346,7 +2346,7 @@ static int criu_restore_memory_of_gpu(struct kfd_process_device *pdd,
+ } else if (bo_bucket->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM) {
+ bo_bucket->restored_offset = offset;
+ /* Update the VRAM usage count */
+- WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + bo_bucket->size);
++ atomic64_add(bo_bucket->size, &pdd->vram_usage);
+ }
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 2b3ec92981e8f9..f35741fade9111 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -766,7 +766,7 @@ struct kfd_process_device {
+ enum kfd_pdd_bound bound;
+
+ /* VRAM usage */
+- uint64_t vram_usage;
++ atomic64_t vram_usage;
+ struct attribute attr_vram;
+ char vram_filename[MAX_SYSFS_FILENAME_LEN];
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index e44892109f71b0..8343b3e4de7b58 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -306,7 +306,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ } else if (strncmp(attr->name, "vram_", 5) == 0) {
+ struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device,
+ attr_vram);
+- return snprintf(buffer, PAGE_SIZE, "%llu\n", READ_ONCE(pdd->vram_usage));
++ return snprintf(buffer, PAGE_SIZE, "%llu\n", atomic64_read(&pdd->vram_usage));
+ } else if (strncmp(attr->name, "sdma_", 5) == 0) {
+ struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device,
+ attr_sdma);
+@@ -1599,7 +1599,7 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ pdd->bound = PDD_UNBOUND;
+ pdd->already_dequeued = false;
+ pdd->runtime_inuse = false;
+- pdd->vram_usage = 0;
++ atomic64_set(&pdd->vram_usage, 0);
+ pdd->sdma_past_activity_counter = 0;
+ pdd->user_gpu_id = dev->id;
+ atomic64_set(&pdd->evict_duration_counter, 0);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index bd9c2921e0dccc..7d00d89586a10c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -404,6 +404,27 @@ static void svm_range_bo_release(struct kref *kref)
+ spin_lock(&svm_bo->list_lock);
+ }
+ spin_unlock(&svm_bo->list_lock);
++
++ if (mmget_not_zero(svm_bo->eviction_fence->mm)) {
++ struct kfd_process_device *pdd;
++ struct kfd_process *p;
++ struct mm_struct *mm;
++
++ mm = svm_bo->eviction_fence->mm;
++ /*
++ * The forked child process takes a ref on the svm_bo device pages, so svm_bo
++ * could be released after the parent process is gone.
++ */
++ p = kfd_lookup_process_by_mm(mm);
++ if (p) {
++ pdd = kfd_get_process_device_data(svm_bo->node, p);
++ if (pdd)
++ atomic64_sub(amdgpu_bo_size(svm_bo->bo), &pdd->vram_usage);
++ kfd_unref_process(p);
++ }
++ mmput(mm);
++ }
++
+ if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base))
+ /* We're not in the eviction worker. Signal the fence. */
+ dma_fence_signal(&svm_bo->eviction_fence->base);
+@@ -531,6 +552,7 @@ int
+ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
+ bool clear)
+ {
++ struct kfd_process_device *pdd;
+ struct amdgpu_bo_param bp;
+ struct svm_range_bo *svm_bo;
+ struct amdgpu_bo_user *ubo;
+@@ -622,6 +644,10 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
+ list_add(&prange->svm_bo_list, &svm_bo->range_list);
+ spin_unlock(&svm_bo->list_lock);
+
++ pdd = svm_range_get_pdd_by_node(prange, node);
++ if (pdd)
++ atomic64_add(amdgpu_bo_size(bo), &pdd->vram_usage);
++
+ return 0;
+
+ reserve_bo_failed:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 3f4719b3c26818..4e2807f5f94cf3 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -62,7 +62,7 @@
+ #define VMWGFX_DRIVER_MINOR 20
+ #define VMWGFX_DRIVER_PATCHLEVEL 0
+ #define VMWGFX_FIFO_STATIC_SIZE (1024*1024)
+-#define VMWGFX_MAX_DISPLAYS 16
++#define VMWGFX_NUM_DISPLAY_UNITS 8
+ #define VMWGFX_CMD_BOUNCE_INIT_SIZE 32768
+
+ #define VMWGFX_MIN_INITIAL_WIDTH 1280
+@@ -82,7 +82,7 @@
+ #define VMWGFX_NUM_GB_CONTEXT 256
+ #define VMWGFX_NUM_GB_SHADER 20000
+ #define VMWGFX_NUM_GB_SURFACE 32768
+-#define VMWGFX_NUM_GB_SCREEN_TARGET VMWGFX_MAX_DISPLAYS
++#define VMWGFX_NUM_GB_SCREEN_TARGET VMWGFX_NUM_DISPLAY_UNITS
+ #define VMWGFX_NUM_DXCONTEXT 256
+ #define VMWGFX_NUM_DXQUERY 512
+ #define VMWGFX_NUM_MOB (VMWGFX_NUM_GB_CONTEXT +\
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index aec624196d6ea7..63b8d7591253cd 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -2197,7 +2197,7 @@ int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data,
+ struct drm_mode_config *mode_config = &dev->mode_config;
+ struct drm_vmw_update_layout_arg *arg =
+ (struct drm_vmw_update_layout_arg *)data;
+- void __user *user_rects;
++ const void __user *user_rects;
+ struct drm_vmw_rect *rects;
+ struct drm_rect *drm_rects;
+ unsigned rects_size;
+@@ -2209,6 +2209,8 @@ int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data,
+ VMWGFX_MIN_INITIAL_HEIGHT};
+ vmw_du_update_layout(dev_priv, 1, &def_rect);
+ return 0;
++ } else if (arg->num_outputs > VMWGFX_NUM_DISPLAY_UNITS) {
++ return -E2BIG;
+ }
+
+ rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+index 6141fadf81efeb..2a6c6d6581e02b 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+@@ -199,9 +199,6 @@ struct vmw_kms_dirty {
+ s32 unit_y2;
+ };
+
+-#define VMWGFX_NUM_DISPLAY_UNITS 8
+-
+-
+ #define vmw_framebuffer_to_vfb(x) \
+ container_of(x, struct vmw_framebuffer, base)
+ #define vmw_framebuffer_to_vfbs(x) \
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index fb394189d9e233..13213f39e52f99 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -870,7 +870,7 @@ void xe_device_l2_flush(struct xe_device *xe)
+ spin_lock(>->global_invl_lock);
+ xe_mmio_write32(gt, XE2_GLOBAL_INVAL, 0x1);
+
+- if (xe_mmio_wait32(gt, XE2_GLOBAL_INVAL, 0x1, 0x0, 150, NULL, true))
++ if (xe_mmio_wait32(gt, XE2_GLOBAL_INVAL, 0x1, 0x0, 500, NULL, true))
+ xe_gt_err_once(gt, "Global invalidation timeout\n");
+ spin_unlock(>->global_invl_lock);
+
+diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
+index b263fff1527377..7d9fc489dcb81e 100644
+--- a/drivers/gpu/drm/xe/xe_force_wake.c
++++ b/drivers/gpu/drm/xe/xe_force_wake.c
+@@ -115,9 +115,15 @@ static int __domain_wait(struct xe_gt *gt, struct xe_force_wake_domain *domain,
+ XE_FORCE_WAKE_ACK_TIMEOUT_MS * USEC_PER_MSEC,
+ &value, true);
+ if (ret)
+- xe_gt_notice(gt, "Force wake domain %d failed to ack %s (%pe) reg[%#x] = %#x\n",
+- domain->id, str_wake_sleep(wake), ERR_PTR(ret),
+- domain->reg_ack.addr, value);
++ xe_gt_err(gt, "Force wake domain %d failed to ack %s (%pe) reg[%#x] = %#x\n",
++ domain->id, str_wake_sleep(wake), ERR_PTR(ret),
++ domain->reg_ack.addr, value);
++ if (value == ~0) {
++ xe_gt_err(gt,
++ "Force wake domain %d: %s. MMIO unreliable (forcewake register returns 0xFFFFFFFF)!\n",
++ domain->id, str_wake_sleep(wake));
++ ret = -EIO;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
+index 12e1fe6a8da285..1e8bb8b28a23ec 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.c
++++ b/drivers/gpu/drm/xe/xe_guc_ct.c
+@@ -897,6 +897,24 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+ }
+ }
+
++ /*
++ * Occasionally it is seen that the G2H worker starts running after a delay of more than
++ * a second even after being queued and activated by the Linux workqueue subsystem. This
++ * leads to a G2H timeout error. The root cause of the issue lies with the scheduling
++ * latency of the Lunarlake Hybrid CPU. The issue disappears if we disable the Lunarlake
++ * atom cores in the BIOS, but that is beyond the xe kmd.
++ *
++ * TODO: Drop this change once workqueue scheduling delay issue is fixed on LNL Hybrid CPU.
++ */
++ if (!ret) {
++ flush_work(&ct->g2h_worker);
++ if (g2h_fence.done) {
++ xe_gt_warn(gt, "G2H fence %u, action %04x, done\n",
++ g2h_fence.seqno, action[0]);
++ ret = 1;
++ }
++ }
++
+ /*
+ * Ensure we serialize with completion side to prevent UAF with fence going out of scope on
+ * the stack, since we have no clue if it will fire after the timeout before we can erase
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index cbdd44567d1072..792024c28da86f 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1771,8 +1771,13 @@ void xe_guc_submit_stop(struct xe_guc *guc)
+
+ mutex_lock(&guc->submission_state.lock);
+
+- xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
++ xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) {
++ /* Prevent redundant attempts to stop parallel queues */
++ if (q->guc->id != index)
++ continue;
++
+ guc_exec_queue_stop(guc, q);
++ }
+
+ mutex_unlock(&guc->submission_state.lock);
+
+@@ -1810,8 +1815,13 @@ int xe_guc_submit_start(struct xe_guc *guc)
+
+ mutex_lock(&guc->submission_state.lock);
+ atomic_dec(&guc->submission_state.stopped);
+- xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
++ xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) {
++ /* Prevent redundant attempts to start parallel queues */
++ if (q->guc->id != index)
++ continue;
++
+ guc_exec_queue_start(q);
++ }
+ mutex_unlock(&guc->submission_state.lock);
+
+ wake_up_all(&guc->ct.wq);
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 4e01df6b1b7a1d..3a30b12e22521b 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -161,7 +161,11 @@ query_engine_cycles(struct xe_device *xe,
+ cpu_clock);
+
+ xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+- resp.width = 36;
++
++ if (GRAPHICS_VER(xe) >= 20)
++ resp.width = 64;
++ else
++ resp.width = 36;
+
+ /* Only write to the output fields of user query */
+ if (put_user(resp.cpu_timestamp, &query_ptr->cpu_timestamp))
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index de80c8b7c8913c..9d77f2d4096f59 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -54,8 +54,9 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
+ {
+ struct xe_user_fence *ufence;
+ u64 __user *ptr = u64_to_user_ptr(addr);
++ u64 __maybe_unused prefetch_val;
+
+- if (!access_ok(ptr, sizeof(*ptr)))
++ if (get_user(prefetch_val, ptr))
+ return ERR_PTR(-EFAULT);
+
+ ufence = kzalloc(sizeof(*ufence), GFP_KERNEL);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 8a991b30e3c6d2..92cff3f2658cf5 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -509,6 +509,7 @@
+ #define I2C_DEVICE_ID_GOODIX_01E8 0x01e8
+ #define I2C_DEVICE_ID_GOODIX_01E9 0x01e9
+ #define I2C_DEVICE_ID_GOODIX_01F0 0x01f0
++#define I2C_DEVICE_ID_GOODIX_0D42 0x0d42
+
+ #define USB_VENDOR_ID_GOODTOUCH 0x1aad
+ #define USB_DEVICE_ID_GOODTOUCH_000f 0x000f
+@@ -868,6 +869,7 @@
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1 0xc539
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1 0xc53f
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_POWERPLAY 0xc53a
++#define USB_DEVICE_ID_LOGITECH_BOLT_RECEIVER 0xc548
+ #define USB_DEVICE_ID_SPACETRAVELLER 0xc623
+ #define USB_DEVICE_ID_SPACENAVIGATOR 0xc626
+ #define USB_DEVICE_ID_DINOVO_DESKTOP 0xc704
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index e5e72aa5260a91..24654c7ecb04ea 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -473,6 +473,7 @@ static int lenovo_input_mapping(struct hid_device *hdev,
+ return lenovo_input_mapping_tp10_ultrabook_kbd(hdev, hi, field,
+ usage, bit, max);
+ case USB_DEVICE_ID_LENOVO_X1_TAB:
++ case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max);
+ default:
+ return 0;
+@@ -583,6 +584,7 @@ static ssize_t attr_fn_lock_store(struct device *dev,
+ break;
+ case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ case USB_DEVICE_ID_LENOVO_X1_TAB:
++ case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
+ if (ret)
+ return ret;
+@@ -776,6 +778,7 @@ static int lenovo_event(struct hid_device *hdev, struct hid_field *field,
+ return lenovo_event_cptkbd(hdev, field, usage, value);
+ case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ case USB_DEVICE_ID_LENOVO_X1_TAB:
++ case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ return lenovo_event_tp10ubkbd(hdev, field, usage, value);
+ default:
+ return 0;
+@@ -1056,6 +1059,7 @@ static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ break;
+ case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ case USB_DEVICE_ID_LENOVO_X1_TAB:
++ case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
+ break;
+ }
+@@ -1286,6 +1290,7 @@ static int lenovo_probe(struct hid_device *hdev,
+ break;
+ case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ case USB_DEVICE_ID_LENOVO_X1_TAB:
++ case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ ret = lenovo_probe_tp10ubkbd(hdev);
+ break;
+ default:
+@@ -1372,6 +1377,7 @@ static void lenovo_remove(struct hid_device *hdev)
+ break;
+ case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+ case USB_DEVICE_ID_LENOVO_X1_TAB:
++ case USB_DEVICE_ID_LENOVO_X1_TAB3:
+ lenovo_remove_tp10ubkbd(hdev);
+ break;
+ }
+@@ -1421,6 +1427,8 @@ static const struct hid_device_id lenovo_devices[] = {
+ */
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) },
++ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++ USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) },
+ { }
+ };
+
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 847462650549e9..24da739647635e 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2017,6 +2017,10 @@ static const struct hid_device_id mt_devices[] = {
+ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ USB_VENDOR_ID_ELAN, 0x3148) },
+
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_ELAN, 0x32ae) },
++
+ /* Elitegroup panel */
+ { .driver_data = MT_CLS_SERIAL,
+ MT_USB_DEVICE(USB_VENDOR_ID_ELITEGROUP,
+@@ -2086,6 +2090,11 @@ static const struct hid_device_id mt_devices[] = {
+ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 0x347d, 0x7853) },
+
++ /* HONOR MagicBook Art 14 touchpad */
++ { .driver_data = MT_CLS_VTL,
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++ 0x35cc, 0x0104) },
++
+ /* Ilitek dual touch panel */
+ { .driver_data = MT_CLS_NSMU,
+ MT_USB_DEVICE(USB_VENDOR_ID_ILITEK,
+@@ -2128,6 +2137,10 @@ static const struct hid_device_id mt_devices[] = {
+ HID_DEVICE(BUS_BLUETOOTH, HID_GROUP_MULTITOUCH_WIN_8,
+ USB_VENDOR_ID_LOGITECH,
+ USB_DEVICE_ID_LOGITECH_CASA_TOUCHPAD) },
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_LOGITECH,
++ USB_DEVICE_ID_LOGITECH_BOLT_RECEIVER) },
+
+ /* MosArt panels */
+ { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 2f8a9d3f1e861e..8914c7db94718f 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -50,6 +50,7 @@
+ #define I2C_HID_QUIRK_BAD_INPUT_SIZE BIT(3)
+ #define I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET BIT(4)
+ #define I2C_HID_QUIRK_NO_SLEEP_ON_SUSPEND BIT(5)
++#define I2C_HID_QUIRK_DELAY_WAKEUP_AFTER_RESUME BIT(6)
+
+ /* Command opcodes */
+ #define I2C_HID_OPCODE_RESET 0x01
+@@ -140,6 +141,8 @@ static const struct i2c_hid_quirks {
+ { USB_VENDOR_ID_ELAN, HID_ANY_ID,
+ I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET |
+ I2C_HID_QUIRK_BOGUS_IRQ },
++ { I2C_VENDOR_ID_GOODIX, I2C_DEVICE_ID_GOODIX_0D42,
++ I2C_HID_QUIRK_DELAY_WAKEUP_AFTER_RESUME },
+ { 0, 0 }
+ };
+
+@@ -981,6 +984,13 @@ static int i2c_hid_core_resume(struct i2c_hid *ihid)
+ return -ENXIO;
+ }
+
++ /* On Goodix 27c6:0d42, wait extra time before device wakeup.
++ * It's not clear why, but if we send the wakeup too early, the device will
++ * never trigger input interrupts.
++ */
++ if (ihid->quirks & I2C_HID_QUIRK_DELAY_WAKEUP_AFTER_RESUME)
++ msleep(1500);
++
+ /* Instead of resetting device, simply powers the device on. This
+ * solves "incomplete reports" on Raydium devices 2386:3118 and
+ * 2386:4B33 and fixes various SIS touchscreens no longer sending
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 64ad9e0895bd0f..a034264c566986 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -331,6 +331,8 @@ static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
+ msg.msg_flags &= ~MSG_MORE;
+
+ tcp_rate_check_app_limited(sk);
++ if (!sendpage_ok(page[i]))
++ msg.msg_flags &= ~MSG_SPLICE_PAGES;
+ bvec_set_page(&bvec, page[i], bytes, offset);
+ iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
+
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
+index 9dc772f2cbb27c..99030e6b16e7aa 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
+@@ -130,7 +130,7 @@ int arm_mmu500_reset(struct arm_smmu_device *smmu)
+
+ /*
+ * Disable MMU-500's not-particularly-beneficial next-page
+- * prefetcher for the sake of errata #841119 and #826419.
++ * prefetcher for the sake of at least 5 known errata.
+ */
+ for (i = 0; i < smmu->num_context_banks; ++i) {
+ reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);
+@@ -138,7 +138,7 @@ int arm_mmu500_reset(struct arm_smmu_device *smmu)
+ arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg);
+ reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);
+ if (reg & ARM_MMU500_ACTLR_CPRE)
+- dev_warn_once(smmu->dev, "Failed to disable prefetcher [errata #841119 and #826419], check ACR.CACHE_LOCK\n");
++ dev_warn_once(smmu->dev, "Failed to disable prefetcher for errata workarounds, check SACR.CACHE_LOCK\n");
+ }
+
+ return 0;
+diff --git a/drivers/irqchip/irq-mscc-ocelot.c b/drivers/irqchip/irq-mscc-ocelot.c
+index 4d0c3532dbe735..c19ab379e8c5ea 100644
+--- a/drivers/irqchip/irq-mscc-ocelot.c
++++ b/drivers/irqchip/irq-mscc-ocelot.c
+@@ -37,7 +37,7 @@ static struct chip_props ocelot_props = {
+ .reg_off_ena_clr = 0x1c,
+ .reg_off_ena_set = 0x20,
+ .reg_off_ident = 0x38,
+- .reg_off_trigger = 0x5c,
++ .reg_off_trigger = 0x4,
+ .n_irq = 24,
+ };
+
+@@ -70,7 +70,7 @@ static struct chip_props jaguar2_props = {
+ .reg_off_ena_clr = 0x1c,
+ .reg_off_ena_set = 0x20,
+ .reg_off_ident = 0x38,
+- .reg_off_trigger = 0x5c,
++ .reg_off_trigger = 0x4,
+ .n_irq = 29,
+ };
+
+diff --git a/drivers/net/mdio/mdio-bcm-unimac.c b/drivers/net/mdio/mdio-bcm-unimac.c
+index f40eb50bb978d8..b7bc70586ee0a4 100644
+--- a/drivers/net/mdio/mdio-bcm-unimac.c
++++ b/drivers/net/mdio/mdio-bcm-unimac.c
+@@ -337,6 +337,7 @@ static const struct of_device_id unimac_mdio_ids[] = {
+ { .compatible = "brcm,asp-v2.2-mdio", },
+ { .compatible = "brcm,asp-v2.1-mdio", },
+ { .compatible = "brcm,asp-v2.0-mdio", },
++ { .compatible = "brcm,bcm6846-mdio", },
+ { .compatible = "brcm,genet-mdio-v5", },
+ { .compatible = "brcm,genet-mdio-v4", },
+ { .compatible = "brcm,genet-mdio-v3", },
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 4823dbdf54656f..f137c82f1c0f7f 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1426,6 +1426,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)}, /* Quectel EM05GV2 */
+ {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */
++ {QMI_QUIRK_SET_DTR(0x2cb7, 0x0112, 0)}, /* Fibocom FG132 */
+ {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
+ {QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/
+ {QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 89ad4217f86068..128932c849a1a4 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1302,10 +1302,9 @@ static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
+ queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
+ }
+
+-static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+- blk_status_t status)
++static void nvme_keep_alive_finish(struct request *rq,
++ blk_status_t status, struct nvme_ctrl *ctrl)
+ {
+- struct nvme_ctrl *ctrl = rq->end_io_data;
+ unsigned long flags;
+ bool startka = false;
+ unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
+@@ -1323,13 +1322,11 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+ delay = 0;
+ }
+
+- blk_mq_free_request(rq);
+-
+ if (status) {
+ dev_err(ctrl->device,
+ "failed nvme_keep_alive_end_io error=%d\n",
+ status);
+- return RQ_END_IO_NONE;
++ return;
+ }
+
+ ctrl->ka_last_check_time = jiffies;
+@@ -1341,7 +1338,6 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
+ spin_unlock_irqrestore(&ctrl->lock, flags);
+ if (startka)
+ queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
+- return RQ_END_IO_NONE;
+ }
+
+ static void nvme_keep_alive_work(struct work_struct *work)
+@@ -1350,6 +1346,7 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ struct nvme_ctrl, ka_work);
+ bool comp_seen = ctrl->comp_seen;
+ struct request *rq;
++ blk_status_t status;
+
+ ctrl->ka_last_check_time = jiffies;
+
+@@ -1372,9 +1369,9 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ nvme_init_request(rq, &ctrl->ka_cmd);
+
+ rq->timeout = ctrl->kato * HZ;
+- rq->end_io = nvme_keep_alive_end_io;
+- rq->end_io_data = ctrl;
+- blk_execute_rq_nowait(rq, false);
++ status = blk_execute_rq(rq, false);
++ nvme_keep_alive_finish(rq, status, ctrl);
++ blk_mq_free_request(rq);
+ }
+
+ static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)
+@@ -2472,8 +2469,13 @@ int nvme_enable_ctrl(struct nvme_ctrl *ctrl)
+ else
+ ctrl->ctrl_config = NVME_CC_CSS_NVM;
+
+- if (ctrl->cap & NVME_CAP_CRMS_CRWMS && ctrl->cap & NVME_CAP_CRMS_CRIMS)
+- ctrl->ctrl_config |= NVME_CC_CRIME;
++ /*
++ * Setting CRIME results in CSTS.RDY before the media is ready. This
++ * makes it possible for media related commands to return the error
++ * NVME_SC_ADMIN_COMMAND_MEDIA_NOT_READY. Until the driver is
++ * restructured to handle retries, disable CC.CRIME.
++ */
++ ctrl->ctrl_config &= ~NVME_CC_CRIME;
+
+ ctrl->ctrl_config |= (NVME_CTRL_PAGE_SHIFT - 12) << NVME_CC_MPS_SHIFT;
+ ctrl->ctrl_config |= NVME_CC_AMS_RR | NVME_CC_SHN_NONE;
+@@ -2508,10 +2510,7 @@ int nvme_enable_ctrl(struct nvme_ctrl *ctrl)
+ * devices are known to get this wrong. Use the larger of the
+ * two values.
+ */
+- if (ctrl->ctrl_config & NVME_CC_CRIME)
+- ready_timeout = NVME_CRTO_CRIMT(crto);
+- else
+- ready_timeout = NVME_CRTO_CRWMT(crto);
++ ready_timeout = NVME_CRTO_CRWMT(crto);
+
+ if (ready_timeout < timeout)
+ dev_warn_once(ctrl->device, "bad crto:%x cap:%llx\n",
+@@ -3793,7 +3792,8 @@ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ if (ns->head->ns_id == nsid) {
+ if (!nvme_get_ns(ns))
+ continue;
+@@ -4840,7 +4840,8 @@ void nvme_mark_namespaces_dead(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu))
+ blk_mark_disk_dead(ns->disk);
+ srcu_read_unlock(&ctrl->srcu, srcu_idx);
+ }
+@@ -4852,7 +4853,8 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu))
+ blk_mq_unfreeze_queue(ns->queue);
+ srcu_read_unlock(&ctrl->srcu, srcu_idx);
+ clear_bit(NVME_CTRL_FROZEN, &ctrl->flags);
+@@ -4865,7 +4867,8 @@ int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ timeout = blk_mq_freeze_queue_wait_timeout(ns->queue, timeout);
+ if (timeout <= 0)
+ break;
+@@ -4881,7 +4884,8 @@ void nvme_wait_freeze(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu))
+ blk_mq_freeze_queue_wait(ns->queue);
+ srcu_read_unlock(&ctrl->srcu, srcu_idx);
+ }
+@@ -4894,7 +4898,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl)
+
+ set_bit(NVME_CTRL_FROZEN, &ctrl->flags);
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu))
+ blk_freeze_queue_start(ns->queue);
+ srcu_read_unlock(&ctrl->srcu, srcu_idx);
+ }
+@@ -4942,7 +4947,8 @@ void nvme_sync_io_queues(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu))
+ blk_sync_queue(ns->queue);
+ srcu_read_unlock(&ctrl->srcu, srcu_idx);
+ }
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 6d97058cde7a11..a43982aaa40d74 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -577,6 +577,20 @@ static int nvme_add_ns_head_cdev(struct nvme_ns_head *head)
+ return ret;
+ }
+
++static void nvme_partition_scan_work(struct work_struct *work)
++{
++ struct nvme_ns_head *head =
++ container_of(work, struct nvme_ns_head, partition_scan_work);
++
++ if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN,
++ &head->disk->state)))
++ return;
++
++ mutex_lock(&head->disk->open_mutex);
++ bdev_disk_changed(head->disk, false);
++ mutex_unlock(&head->disk->open_mutex);
++}
++
+ static void nvme_requeue_work(struct work_struct *work)
+ {
+ struct nvme_ns_head *head =
+@@ -603,6 +617,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
+ bio_list_init(&head->requeue_list);
+ spin_lock_init(&head->requeue_lock);
+ INIT_WORK(&head->requeue_work, nvme_requeue_work);
++ INIT_WORK(&head->partition_scan_work, nvme_partition_scan_work);
+
+ /*
+ * Add a multipath node if the subsystems supports multiple controllers.
+@@ -626,6 +641,16 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
+ return PTR_ERR(head->disk);
+ head->disk->fops = &nvme_ns_head_ops;
+ head->disk->private_data = head;
++
++ /*
++ * We need to suppress the partition scan from occurring within the
++ * controller's scan_work context. If a path error occurs here, the IO
++ * will wait until a path becomes available or all paths are torn down,
++ * but that action also occurs within scan_work, so it would deadlock.
++ * Defer the partition scan to a different context that does not block
++ * scan_work.
++ */
++ set_bit(GD_SUPPRESS_PART_SCAN, &head->disk->state);
+ sprintf(head->disk->disk_name, "nvme%dn%d",
+ ctrl->subsys->instance, head->instance);
+ return 0;
+@@ -652,6 +677,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ return;
+ }
+ nvme_add_ns_head_cdev(head);
++ kblockd_schedule_work(&head->partition_scan_work);
+ }
+
+ mutex_lock(&head->lock);
+@@ -972,6 +998,12 @@ void nvme_mpath_shutdown_disk(struct nvme_ns_head *head)
+ kblockd_schedule_work(&head->requeue_work);
+ if (test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
+ nvme_cdev_del(&head->cdev, &head->cdev_device);
++ /*
++ * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared
++ * to allow multipath to fail all I/O.
++ */
++ synchronize_srcu(&head->srcu);
++ kblockd_schedule_work(&head->requeue_work);
+ del_gendisk(head->disk);
+ }
+ }
+@@ -983,6 +1015,7 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ /* make sure all pending bios are cleaned up */
+ kblockd_schedule_work(&head->requeue_work);
+ flush_work(&head->requeue_work);
++ flush_work(&head->partition_scan_work);
+ put_disk(head->disk);
+ }
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 313a4f978a2cf3..093cb423f536be 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -494,6 +494,7 @@ struct nvme_ns_head {
+ struct bio_list requeue_list;
+ spinlock_t requeue_lock;
+ struct work_struct requeue_work;
++ struct work_struct partition_scan_work;
+ struct mutex lock;
+ unsigned long flags;
+ #define NVME_NSHEAD_DISK_LIVE 0
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index e3d82e91151afa..c4d776c0ec2060 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2644,10 +2644,11 @@ static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
+
+ len = nvmf_get_address(ctrl, buf, size);
+
++ if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
++ return len;
++
+ mutex_lock(&queue->queue_lock);
+
+- if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
+- goto done;
+ ret = kernel_getsockname(queue->sock, (struct sockaddr *)&src_addr);
+ if (ret > 0) {
+ if (len > 0)
+@@ -2655,7 +2656,7 @@ static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
+ len += scnprintf(buf + len, size - len, "%ssrc_addr=%pISc\n",
+ (len) ? "," : "", &src_addr);
+ }
+-done:
++
+ mutex_unlock(&queue->queue_lock);
+
+ return len;
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index e32790d8fc260c..a9d112d34d4f43 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -265,6 +265,13 @@ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
+ {
+ if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))
+ return;
++ /*
++ * It's possible that some requests might have been added
++ * after the admin queue is stopped/quiesced. So now start the
++ * queue to flush these requests to completion.
++ */
++ nvme_unquiesce_admin_queue(&ctrl->ctrl);
++
+ nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
+ nvme_remove_admin_tag_set(&ctrl->ctrl);
+ }
+@@ -297,6 +304,12 @@ static void nvme_loop_destroy_io_queues(struct nvme_loop_ctrl *ctrl)
+ nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);
+ }
+ ctrl->ctrl.queue_count = 1;
++ /*
++ * It's possible that some requests might have been added
++ * after the io queue is stopped/quiesced. So now start the
++ * queue to flush these requests to completion.
++ */
++ nvme_unquiesce_io_queues(&ctrl->ctrl);
+ }
+
+ static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index 24d0e2418d2e6b..0f9b280c438d98 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -535,10 +535,6 @@ u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+ break;
+ case nvme_admin_identify:
+ switch (req->cmd->identify.cns) {
+- case NVME_ID_CNS_CTRL:
+- req->execute = nvmet_passthru_execute_cmd;
+- req->p.use_workqueue = true;
+- return NVME_SC_SUCCESS;
+ case NVME_ID_CNS_CS_CTRL:
+ switch (req->cmd->identify.csi) {
+ case NVME_CSI_ZNS:
+@@ -547,7 +543,9 @@ u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+ return NVME_SC_SUCCESS;
+ }
+ return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
++ case NVME_ID_CNS_CTRL:
+ case NVME_ID_CNS_NS:
++ case NVME_ID_CNS_NS_DESC_LIST:
+ req->execute = nvmet_passthru_execute_cmd;
+ req->p.use_workqueue = true;
+ return NVME_SC_SUCCESS;
+diff --git a/drivers/pinctrl/intel/Kconfig b/drivers/pinctrl/intel/Kconfig
+index 2101d30bd66c15..14c26c023590e6 100644
+--- a/drivers/pinctrl/intel/Kconfig
++++ b/drivers/pinctrl/intel/Kconfig
+@@ -46,6 +46,7 @@ config PINCTRL_INTEL_PLATFORM
+ of Intel PCH pins and using them as GPIOs. Currently the following
+ Intel SoCs / platforms require this to be functional:
+ - Lunar Lake
++ - Panther Lake
+
+ config PINCTRL_ALDERLAKE
+ tristate "Intel Alder Lake pinctrl and GPIO driver"
+diff --git a/drivers/pinctrl/pinctrl-aw9523.c b/drivers/pinctrl/pinctrl-aw9523.c
+index b5e1c467625ba0..1374f30166bc3b 100644
+--- a/drivers/pinctrl/pinctrl-aw9523.c
++++ b/drivers/pinctrl/pinctrl-aw9523.c
+@@ -987,8 +987,10 @@ static int aw9523_probe(struct i2c_client *client)
+ lockdep_set_subclass(&awi->i2c_lock, i2c_adapter_depth(client->adapter));
+
+ pdesc = devm_kzalloc(dev, sizeof(*pdesc), GFP_KERNEL);
+- if (!pdesc)
+- return -ENOMEM;
++ if (!pdesc) {
++ ret = -ENOMEM;
++ goto err_disable_vregs;
++ }
+
+ ret = aw9523_hw_init(awi);
+ if (ret)
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 3ba4e1c5e15dfe..57aefccbb8556d 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -1865,13 +1865,12 @@ static inline void ap_scan_domains(struct ap_card *ac)
+ }
+ /* if no queue device exists, create a new one */
+ if (!aq) {
+- aq = ap_queue_create(qid, ac->ap_dev.device_type);
++ aq = ap_queue_create(qid, ac);
+ if (!aq) {
+ AP_DBF_WARN("%s(%d,%d) ap_queue_create() failed\n",
+ __func__, ac->id, dom);
+ continue;
+ }
+- aq->card = ac;
+ aq->config = !decfg;
+ aq->chkstop = chkstop;
+ aq->se_bstate = hwinfo.bs;
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 0b275c71931964..f4622ee4d89473 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -272,7 +272,7 @@ int ap_test_config_usage_domain(unsigned int domain);
+ int ap_test_config_ctrl_domain(unsigned int domain);
+
+ void ap_queue_init_reply(struct ap_queue *aq, struct ap_message *ap_msg);
+-struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type);
++struct ap_queue *ap_queue_create(ap_qid_t qid, struct ap_card *ac);
+ void ap_queue_prepare_remove(struct ap_queue *aq);
+ void ap_queue_remove(struct ap_queue *aq);
+ void ap_queue_init_state(struct ap_queue *aq);
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 1f647ffd6f4db9..dcd1590c0f81f4 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -22,6 +22,11 @@ static void __ap_flush_queue(struct ap_queue *aq);
+ * some AP queue helper functions
+ */
+
++static inline bool ap_q_supported_in_se(struct ap_queue *aq)
++{
++ return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel;
++}
++
+ static inline bool ap_q_supports_bind(struct ap_queue *aq)
+ {
+ return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel;
+@@ -1104,18 +1109,19 @@ static void ap_queue_device_release(struct device *dev)
+ kfree(aq);
+ }
+
+-struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type)
++struct ap_queue *ap_queue_create(ap_qid_t qid, struct ap_card *ac)
+ {
+ struct ap_queue *aq;
+
+ aq = kzalloc(sizeof(*aq), GFP_KERNEL);
+ if (!aq)
+ return NULL;
++ aq->card = ac;
+ aq->ap_dev.device.release = ap_queue_device_release;
+ aq->ap_dev.device.type = &ap_queue_type;
+- aq->ap_dev.device_type = device_type;
+- // add optional SE secure binding attributes group
+- if (ap_sb_available() && is_prot_virt_guest())
++ aq->ap_dev.device_type = ac->ap_dev.device_type;
++ /* in SE environment add bind/associate attributes group */
++ if (ap_is_se_guest() && ap_q_supported_in_se(aq))
+ aq->ap_dev.device.groups = ap_queue_dev_sb_attr_groups;
+ aq->qid = qid;
+ spin_lock_init(&aq->lock);
+@@ -1196,10 +1202,16 @@ bool ap_queue_usable(struct ap_queue *aq)
+ }
+
+ /* SE guest's queues additionally need to be bound */
+- if (ap_q_needs_bind(aq) &&
+- !(aq->se_bstate == AP_BS_Q_USABLE ||
+- aq->se_bstate == AP_BS_Q_USABLE_NO_SECURE_KEY))
+- rc = false;
++ if (ap_is_se_guest()) {
++ if (!ap_q_supported_in_se(aq)) {
++ rc = false;
++ goto unlock_and_out;
++ }
++ if (ap_q_needs_bind(aq) &&
++ !(aq->se_bstate == AP_BS_Q_USABLE ||
++ aq->se_bstate == AP_BS_Q_USABLE_NO_SECURE_KEY))
++ rc = false;
++ }
+
+ unlock_and_out:
+ spin_unlock_bh(&aq->lock);
+diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
+index 472daa588a9d21..d5507b63b6cdb3 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_base.c
++++ b/drivers/vdpa/ifcvf/ifcvf_base.c
+@@ -108,7 +108,7 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev)
+ u32 i;
+
+ ret = pci_read_config_byte(pdev, PCI_CAPABILITY_LIST, &pos);
+- if (ret < 0) {
++ if (ret) {
+ IFCVF_ERR(pdev, "Failed to read PCI capability list\n");
+ return -EIO;
+ }
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index c44d8ba00c02c3..88074451dd6151 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -24,6 +24,16 @@ MODULE_PARM_DESC(force_legacy,
+ "Force legacy mode for transitional virtio 1 devices");
+ #endif
+
++bool vp_is_avq(struct virtio_device *vdev, unsigned int index)
++{
++ struct virtio_pci_device *vp_dev = to_vp_device(vdev);
++
++ if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
++ return false;
++
++ return index == vp_dev->admin_vq.vq_index;
++}
++
+ /* wait for pending irq handlers */
+ void vp_synchronize_vectors(struct virtio_device *vdev)
+ {
+@@ -234,10 +244,9 @@ static struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned int in
+ return vq;
+ }
+
+-static void vp_del_vq(struct virtqueue *vq)
++static void vp_del_vq(struct virtqueue *vq, struct virtio_pci_vq_info *info)
+ {
+ struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+- struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index];
+ unsigned long flags;
+
+ /*
+@@ -258,13 +267,16 @@ static void vp_del_vq(struct virtqueue *vq)
+ void vp_del_vqs(struct virtio_device *vdev)
+ {
+ struct virtio_pci_device *vp_dev = to_vp_device(vdev);
++ struct virtio_pci_vq_info *info;
+ struct virtqueue *vq, *n;
+ int i;
+
+ list_for_each_entry_safe(vq, n, &vdev->vqs, list) {
+- if (vp_dev->per_vq_vectors) {
+- int v = vp_dev->vqs[vq->index]->msix_vector;
++ info = vp_is_avq(vdev, vq->index) ? vp_dev->admin_vq.info :
++ vp_dev->vqs[vq->index];
+
++ if (vp_dev->per_vq_vectors) {
++ int v = info->msix_vector;
+ if (v != VIRTIO_MSI_NO_VECTOR &&
+ !vp_is_slow_path_vector(v)) {
+ int irq = pci_irq_vector(vp_dev->pci_dev, v);
+@@ -273,7 +285,7 @@ void vp_del_vqs(struct virtio_device *vdev)
+ free_irq(irq, vq);
+ }
+ }
+- vp_del_vq(vq);
++ vp_del_vq(vq, info);
+ }
+ vp_dev->per_vq_vectors = false;
+
+@@ -354,7 +366,7 @@ vp_find_one_vq_msix(struct virtio_device *vdev, int queue_idx,
+ vring_interrupt, 0,
+ vp_dev->msix_names[msix_vec], vq);
+ if (err) {
+- vp_del_vq(vq);
++ vp_del_vq(vq, *p_info);
+ return ERR_PTR(err);
+ }
+
+diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
+index 1d9c49947f52d1..8beecf23ec85e0 100644
+--- a/drivers/virtio/virtio_pci_common.h
++++ b/drivers/virtio/virtio_pci_common.h
+@@ -178,6 +178,7 @@ struct virtio_device *virtio_pci_vf_get_pf_dev(struct pci_dev *pdev);
+ #define VIRTIO_ADMIN_CMD_BITMAP 0
+ #endif
+
++bool vp_is_avq(struct virtio_device *vdev, unsigned int index);
+ void vp_modern_avq_done(struct virtqueue *vq);
+ int vp_modern_admin_cmd_exec(struct virtio_device *vdev,
+ struct virtio_admin_cmd *cmd);
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index 9193c30d640aeb..4fbcbc7a9ae1cc 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -43,16 +43,6 @@ static int vp_avq_index(struct virtio_device *vdev, u16 *index, u16 *num)
+ return 0;
+ }
+
+-static bool vp_is_avq(struct virtio_device *vdev, unsigned int index)
+-{
+- struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+-
+- if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+- return false;
+-
+- return index == vp_dev->admin_vq.vq_index;
+-}
+-
+ void vp_modern_avq_done(struct virtqueue *vq)
+ {
+ struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+@@ -245,7 +235,7 @@ static void vp_modern_avq_cleanup(struct virtio_device *vdev)
+ if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+ return;
+
+- vq = vp_dev->vqs[vp_dev->admin_vq.vq_index]->vq;
++ vq = vp_dev->admin_vq.info->vq;
+ if (!vq)
+ return;
+
+diff --git a/fs/9p/fid.c b/fs/9p/fid.c
+index de009a33e0e26d..f84412290a30cf 100644
+--- a/fs/9p/fid.c
++++ b/fs/9p/fid.c
+@@ -131,10 +131,9 @@ static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
+ }
+ }
+ spin_unlock(&dentry->d_lock);
+- } else {
+- if (dentry->d_inode)
+- ret = v9fs_fid_find_inode(dentry->d_inode, false, uid, any);
+ }
++ if (!ret && dentry->d_inode)
++ ret = v9fs_fid_find_inode(dentry->d_inode, false, uid, any);
+
+ return ret;
+ }
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index b306c09808706b..c9d620175e80ca 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -130,6 +130,7 @@ struct afs_call {
+ wait_queue_head_t waitq; /* processes awaiting completion */
+ struct work_struct async_work; /* async I/O processor */
+ struct work_struct work; /* actual work processor */
++ struct work_struct free_work; /* Deferred free processor */
+ struct rxrpc_call *rxcall; /* RxRPC call handle */
+ struct rxrpc_peer *peer; /* Remote endpoint */
+ struct key *key; /* security for this call */
+@@ -1333,6 +1334,7 @@ extern int __net_init afs_open_socket(struct afs_net *);
+ extern void __net_exit afs_close_socket(struct afs_net *);
+ extern void afs_charge_preallocation(struct work_struct *);
+ extern void afs_put_call(struct afs_call *);
++void afs_deferred_put_call(struct afs_call *call);
+ void afs_make_call(struct afs_call *call, gfp_t gfp);
+ void afs_wait_for_call_to_complete(struct afs_call *call);
+ extern struct afs_call *afs_alloc_flat_call(struct afs_net *,
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index c453428f3c8ba9..9f2a3bb56ec69e 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -18,6 +18,7 @@
+
+ struct workqueue_struct *afs_async_calls;
+
++static void afs_deferred_free_worker(struct work_struct *work);
+ static void afs_wake_up_call_waiter(struct sock *, struct rxrpc_call *, unsigned long);
+ static void afs_wake_up_async_call(struct sock *, struct rxrpc_call *, unsigned long);
+ static void afs_process_async_call(struct work_struct *);
+@@ -149,6 +150,7 @@ static struct afs_call *afs_alloc_call(struct afs_net *net,
+ call->debug_id = atomic_inc_return(&rxrpc_debug_id);
+ refcount_set(&call->ref, 1);
+ INIT_WORK(&call->async_work, afs_process_async_call);
++ INIT_WORK(&call->free_work, afs_deferred_free_worker);
+ init_waitqueue_head(&call->waitq);
+ spin_lock_init(&call->state_lock);
+ call->iter = &call->def_iter;
+@@ -159,6 +161,36 @@ static struct afs_call *afs_alloc_call(struct afs_net *net,
+ return call;
+ }
+
++static void afs_free_call(struct afs_call *call)
++{
++ struct afs_net *net = call->net;
++ int o;
++
++ ASSERT(!work_pending(&call->async_work));
++
++ rxrpc_kernel_put_peer(call->peer);
++
++ if (call->rxcall) {
++ rxrpc_kernel_shutdown_call(net->socket, call->rxcall);
++ rxrpc_kernel_put_call(net->socket, call->rxcall);
++ call->rxcall = NULL;
++ }
++ if (call->type->destructor)
++ call->type->destructor(call);
++
++ afs_unuse_server_notime(call->net, call->server, afs_server_trace_put_call);
++ kfree(call->request);
++
++ o = atomic_read(&net->nr_outstanding_calls);
++ trace_afs_call(call->debug_id, afs_call_trace_free, 0, o,
++ __builtin_return_address(0));
++ kfree(call);
++
++ o = atomic_dec_return(&net->nr_outstanding_calls);
++ if (o == 0)
++ wake_up_var(&net->nr_outstanding_calls);
++}
++
+ /*
+ * Dispose of a reference on a call.
+ */
+@@ -173,32 +205,34 @@ void afs_put_call(struct afs_call *call)
+ o = atomic_read(&net->nr_outstanding_calls);
+ trace_afs_call(debug_id, afs_call_trace_put, r - 1, o,
+ __builtin_return_address(0));
++ if (zero)
++ afs_free_call(call);
++}
+
+- if (zero) {
+- ASSERT(!work_pending(&call->async_work));
+- ASSERT(call->type->name != NULL);
+-
+- rxrpc_kernel_put_peer(call->peer);
+-
+- if (call->rxcall) {
+- rxrpc_kernel_shutdown_call(net->socket, call->rxcall);
+- rxrpc_kernel_put_call(net->socket, call->rxcall);
+- call->rxcall = NULL;
+- }
+- if (call->type->destructor)
+- call->type->destructor(call);
++static void afs_deferred_free_worker(struct work_struct *work)
++{
++ struct afs_call *call = container_of(work, struct afs_call, free_work);
+
+- afs_unuse_server_notime(call->net, call->server, afs_server_trace_put_call);
+- kfree(call->request);
++ afs_free_call(call);
++}
+
+- trace_afs_call(call->debug_id, afs_call_trace_free, 0, o,
+- __builtin_return_address(0));
+- kfree(call);
++/*
++ * Dispose of a reference on a call, deferring the cleanup to a workqueue
++ * to avoid lock recursion.
++ */
++void afs_deferred_put_call(struct afs_call *call)
++{
++ struct afs_net *net = call->net;
++ unsigned int debug_id = call->debug_id;
++ bool zero;
++ int r, o;
+
+- o = atomic_dec_return(&net->nr_outstanding_calls);
+- if (o == 0)
+- wake_up_var(&net->nr_outstanding_calls);
+- }
++ zero = __refcount_dec_and_test(&call->ref, &r);
++ o = atomic_read(&net->nr_outstanding_calls);
++ trace_afs_call(debug_id, afs_call_trace_put, r - 1, o,
++ __builtin_return_address(0));
++ if (zero)
++ schedule_work(&call->free_work);
+ }
+
+ static struct afs_call *afs_get_call(struct afs_call *call,
+@@ -640,7 +674,8 @@ static void afs_wake_up_call_waiter(struct sock *sk, struct rxrpc_call *rxcall,
+ }
+
+ /*
+- * wake up an asynchronous call
++ * Wake up an asynchronous call. The caller is holding the call notify
++ * spinlock around this, so we can't call afs_put_call().
+ */
+ static void afs_wake_up_async_call(struct sock *sk, struct rxrpc_call *rxcall,
+ unsigned long call_user_ID)
+@@ -657,7 +692,7 @@ static void afs_wake_up_async_call(struct sock *sk, struct rxrpc_call *rxcall,
+ __builtin_return_address(0));
+
+ if (!queue_work(afs_async_calls, &call->async_work))
+- afs_put_call(call);
++ afs_deferred_put_call(call);
+ }
+ }
+
+diff --git a/fs/netfs/locking.c b/fs/netfs/locking.c
+index 75dc52a49b3a46..709a6aa101028f 100644
+--- a/fs/netfs/locking.c
++++ b/fs/netfs/locking.c
+@@ -121,6 +121,7 @@ int netfs_start_io_write(struct inode *inode)
+ up_write(&inode->i_rwsem);
+ return -ERESTARTSYS;
+ }
++ downgrade_write(&inode->i_rwsem);
+ return 0;
+ }
+ EXPORT_SYMBOL(netfs_start_io_write);
+@@ -135,7 +136,7 @@ EXPORT_SYMBOL(netfs_start_io_write);
+ void netfs_end_io_write(struct inode *inode)
+ __releases(inode->i_rwsem)
+ {
+- up_write(&inode->i_rwsem);
++ up_read(&inode->i_rwsem);
+ }
+ EXPORT_SYMBOL(netfs_end_io_write);
+
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 02d2beb7ddb953..c6d0d17a759c1e 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1128,9 +1128,12 @@ int ocfs2_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ trace_ocfs2_setattr(inode, dentry,
+ (unsigned long long)OCFS2_I(inode)->ip_blkno,
+ dentry->d_name.len, dentry->d_name.name,
+- attr->ia_valid, attr->ia_mode,
+- from_kuid(&init_user_ns, attr->ia_uid),
+- from_kgid(&init_user_ns, attr->ia_gid));
++ attr->ia_valid,
++ attr->ia_valid & ATTR_MODE ? attr->ia_mode : 0,
++ attr->ia_valid & ATTR_UID ?
++ from_kuid(&init_user_ns, attr->ia_uid) : 0,
++ attr->ia_valid & ATTR_GID ?
++ from_kgid(&init_user_ns, attr->ia_gid) : 0);
+
+ /* ensuring we don't even attempt to truncate a symlink */
+ if (S_ISLNK(inode->i_mode))
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 5375b0c1dfb99d..9af28ed4cca46d 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1054,6 +1054,7 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ */
+ }
+
++ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
+ kfree(server);
+
+@@ -1649,8 +1650,6 @@ cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ /* srv_count can never go negative */
+ WARN_ON(server->srv_count < 0);
+
+- put_net(cifs_net_ns(server));
+-
+ list_del_init(&server->tcp_ses_list);
+ spin_unlock(&cifs_tcp_ses_lock);
+
+@@ -3077,13 +3076,22 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (server->ssocket) {
+ socket = server->ssocket;
+ } else {
+- rc = __sock_create(cifs_net_ns(server), sfamily, SOCK_STREAM,
++ struct net *net = cifs_net_ns(server);
++ struct sock *sk;
++
++ rc = __sock_create(net, sfamily, SOCK_STREAM,
+ IPPROTO_TCP, &server->ssocket, 1);
+ if (rc < 0) {
+ cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ return rc;
+ }
+
++ sk = server->ssocket->sk;
++ __netns_tracker_free(net, &sk->ns_tracker, false);
++ sk->sk_net_refcnt = 1;
++ get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
++
+ /* BB other socket options to set KEEPALIVE, NODELAY? */
+ cifs_dbg(FYI, "Socket created\n");
+ socket = server->ssocket;
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 3a33924db2bc78..61fef28801140b 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -390,8 +390,12 @@ tls_offload_ctx_tx(const struct tls_context *tls_ctx)
+
+ static inline bool tls_sw_has_ctx_tx(const struct sock *sk)
+ {
+- struct tls_context *ctx = tls_get_ctx(sk);
++ struct tls_context *ctx;
++
++ if (!sk_is_inet(sk) || !inet_test_bit(IS_ICSK, sk))
++ return false;
+
++ ctx = tls_get_ctx(sk);
+ if (!ctx)
+ return false;
+ return !!tls_sw_ctx_tx(ctx);
+@@ -399,8 +403,12 @@ static inline bool tls_sw_has_ctx_tx(const struct sock *sk)
+
+ static inline bool tls_sw_has_ctx_rx(const struct sock *sk)
+ {
+- struct tls_context *ctx = tls_get_ctx(sk);
++ struct tls_context *ctx;
++
++ if (!sk_is_inet(sk) || !inet_test_bit(IS_ICSK, sk))
++ return false;
+
++ ctx = tls_get_ctx(sk);
+ if (!ctx)
+ return false;
+ return !!tls_sw_ctx_rx(ctx);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 19b590b5aaec93..b282ed12503581 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -3136,13 +3136,17 @@ static void bpf_link_show_fdinfo(struct seq_file *m, struct file *filp)
+ {
+ const struct bpf_link *link = filp->private_data;
+ const struct bpf_prog *prog = link->prog;
++ enum bpf_link_type type = link->type;
+ char prog_tag[sizeof(prog->tag) * 2 + 1] = { };
+
+- seq_printf(m,
+- "link_type:\t%s\n"
+- "link_id:\t%u\n",
+- bpf_link_type_strs[link->type],
+- link->id);
++ if (type < ARRAY_SIZE(bpf_link_type_strs) && bpf_link_type_strs[type]) {
++ seq_printf(m, "link_type:\t%s\n", bpf_link_type_strs[type]);
++ } else {
++ WARN_ONCE(1, "missing BPF_LINK_TYPE(...) for link type %u\n", type);
++ seq_printf(m, "link_type:\t<%u>\n", type);
++ }
++ seq_printf(m, "link_id:\t%u\n", link->id);
++
+ if (prog) {
+ bin2hex(prog_tag, prog->tag, sizeof(prog->tag));
+ seq_printf(m,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 626c5284ca5a84..a5a9b4e418a685 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -21718,7 +21718,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
+ /* 'struct bpf_verifier_env' can be global, but since it's not small,
+ * allocate/free it every time bpf_check() is called
+ */
+- env = kzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL);
++ env = kvzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL);
+ if (!env)
+ return -ENOMEM;
+
+@@ -21944,6 +21944,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
+ mutex_unlock(&bpf_verifier_lock);
+ vfree(env->insn_aux_data);
+ err_free_env:
+- kfree(env);
++ kvfree(env);
+ return ret;
+ }
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index cf70eda8895650..f320b0da8a5848 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -1285,7 +1285,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
+ /* Zero out spare memory. */
+ if (want_init_on_alloc(flags)) {
+ kasan_disable_current();
+- memset((void *)p + new_size, 0, ks - new_size);
++ memset(kasan_reset_tag(p) + new_size, 0, ks - new_size);
+ kasan_enable_current();
+ }
+
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 5cd94721d974f4..09f8ced9f8bb7f 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -977,8 +977,10 @@ static int p9_client_version(struct p9_client *c)
+ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ {
+ int err;
++ static atomic_t seqno = ATOMIC_INIT(0);
+ struct p9_client *clnt;
+ char *client_id;
++ char *cache_name;
+
+ clnt = kmalloc(sizeof(*clnt), GFP_KERNEL);
+ if (!clnt)
+@@ -1035,15 +1037,23 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ if (err)
+ goto close_trans;
+
++ cache_name = kasprintf(GFP_KERNEL,
++ "9p-fcall-cache-%u", atomic_inc_return(&seqno));
++ if (!cache_name) {
++ err = -ENOMEM;
++ goto close_trans;
++ }
++
+ /* P9_HDRSZ + 4 is the smallest packet header we can have that is
+ * followed by data accessed from userspace by read
+ */
+ clnt->fcall_cache =
+- kmem_cache_create_usercopy("9p-fcall-cache", clnt->msize,
++ kmem_cache_create_usercopy(cache_name, clnt->msize,
+ 0, 0, P9_HDRSZ + 4,
+ clnt->msize - (P9_HDRSZ + 4),
+ NULL);
+
++ kfree(cache_name);
+ return clnt;
+
+ close_trans:
+diff --git a/net/core/filter.c b/net/core/filter.c
+index b2b551401bc298..fe5ac8da5022f8 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2248,7 +2248,7 @@ static int bpf_out_neigh_v6(struct net *net, struct sk_buff *skb,
+ rcu_read_unlock();
+ return ret;
+ }
+- rcu_read_unlock_bh();
++ rcu_read_unlock();
+ if (dst)
+ IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
+ out_drop:
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index e8223c3e781ab8..5d8a9df5273f5b 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -57,6 +57,25 @@ static inline int landlock_restrict_self(const int ruleset_fd,
+ #define ENV_TCP_CONNECT_NAME "LL_TCP_CONNECT"
+ #define ENV_DELIMITER ":"
+
++static int str2num(const char *numstr, __u64 *num_dst)
++{
++ char *endptr = NULL;
++ int err = 0;
++ __u64 num;
++
++ errno = 0;
++ num = strtoull(numstr, &endptr, 10);
++ if (errno != 0)
++ err = errno;
++ /* Was the string empty, or not entirely parsed successfully? */
++ else if ((*numstr == '\0') || (*endptr != '\0'))
++ err = EINVAL;
++ else
++ *num_dst = num;
++
++ return err;
++}
++
+ static int parse_path(char *env_path, const char ***const path_list)
+ {
+ int i, num_paths = 0;
+@@ -157,7 +176,6 @@ static int populate_ruleset_net(const char *const env_var, const int ruleset_fd,
+ char *env_port_name, *env_port_name_next, *strport;
+ struct landlock_net_port_attr net_port = {
+ .allowed_access = allowed_access,
+- .port = 0,
+ };
+
+ env_port_name = getenv(env_var);
+@@ -168,7 +186,17 @@ static int populate_ruleset_net(const char *const env_var, const int ruleset_fd,
+
+ env_port_name_next = env_port_name;
+ while ((strport = strsep(&env_port_name_next, ENV_DELIMITER))) {
+- net_port.port = atoi(strport);
++ __u64 port;
++
++ if (strcmp(strport, "") == 0)
++ continue;
++
++ if (str2num(strport, &port)) {
++ fprintf(stderr, "Failed to parse port at \"%s\"\n",
++ strport);
++ goto out_free_name;
++ }
++ net_port.port = port;
+ if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+ &net_port, 0)) {
+ fprintf(stderr,
+diff --git a/sound/Kconfig b/sound/Kconfig
+index 4c036a9a420ab5..8b40205394fe00 100644
+--- a/sound/Kconfig
++++ b/sound/Kconfig
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ menuconfig SOUND
+ tristate "Sound card support"
+- depends on HAS_IOMEM || UML
++ depends on HAS_IOMEM || INDIRECT_IOMEM
+ help
+ If you have a sound card in your computer, i.e. if it can say more
+ than an occasional beep, say Y.
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 660fd984a92859..d1d39f4cc94256 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10258,6 +10258,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
+ SND_PCI_QUIRK(0x1028, 0x0c28, "Dell Inspiron 16 Plus 7630", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
+ SND_PCI_QUIRK(0x1028, 0x0c4d, "Dell", ALC287_FIXUP_CS35L41_I2C_4),
++ SND_PCI_QUIRK(0x1028, 0x0c94, "Dell Polaris 3 metal", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1028, 0x0c96, "Dell Polaris 2in1", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1028, 0x0cbd, "Dell Oasis 13 CS MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1028, 0x0cbe, "Dell Oasis 13 2-IN-1 MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1028, 0x0cbf, "Dell Oasis 13 Low Weight MTU-L", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+@@ -10572,11 +10574,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x1043, 0x10a1, "ASUS UX391UA", ALC294_FIXUP_ASUS_SPK),
++ SND_PCI_QUIRK(0x1043, 0x10a4, "ASUS TP3407SA", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x10c0, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x10d3, "ASUS K6500ZC", ALC294_FIXUP_ASUS_SPK),
++ SND_PCI_QUIRK(0x1043, 0x1154, "ASUS TP3607SH", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1043, 0x1204, "ASUS Strix G615JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x1214, "ASUS Strix G615LH_LM_LP", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+@@ -10656,6 +10662,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_CS35L56_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1e83, "ASUS GA605W", ALC285_FIXUP_CS35L56_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
++ SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1ee2, "ASUS UM6702RA/RC", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
+@@ -10670,6 +10677,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
++ SND_PCI_QUIRK(0x1043, 0x3e30, "ASUS TP3607SA", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3ee0, "ASUS Strix G815_JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3ef0, "ASUS Strix G635LR_LW_LX", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3f00, "ASUS Strix G815LH_LM_LP", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3f10, "ASUS Strix G835LR_LW_LX", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3f20, "ASUS Strix G615LR_LW", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3f30, "ASUS Strix G815LR_LW", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+@@ -10891,11 +10905,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3878, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x387d, "Yoga S780-16 pro Quad AAC", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x387e, "Yoga S780-16 pro Quad YC", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x387f, "Yoga S780-16 pro dual LX", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x3880, "Yoga S780-16 pro dual YC", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3881, "YB9 dual power mode2 YC", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3882, "Lenovo Yoga Pro 7 14APH8", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x3884, "Y780 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3886, "Y780 VECO DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3891, "Lenovo Yoga Pro 7 14AHP9", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
++ SND_PCI_QUIRK(0x17aa, 0x38a5, "Y580P AMD dual", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38a7, "Y780P AMD YG dual", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38a8, "Y780P AMD VECO dual", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38a9, "Thinkbook 16P", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
+@@ -10904,6 +10921,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x38b5, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x38b6, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x38b7, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x17aa, 0x38b8, "Yoga S780-14.5 proX AMD YC Dual", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x38b9, "Yoga S780-14.5 proX AMD LX Dual", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38ba, "Yoga S780-14.5 Air AMD quad YC", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38bb, "Yoga S780-14.5 Air AMD quad AAC", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38be, "Yoga S980-14.5 proX YC Dual", ALC287_FIXUP_TAS2781_I2C),
+@@ -10914,12 +10933,22 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x38cb, "Y790 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38cd, "Y790 VECO DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38d2, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN),
++ SND_PCI_QUIRK(0x17aa, 0x38d3, "Yoga S990-16 Pro IMH YC Dual", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x38d4, "Yoga S990-16 Pro IMH VECO Dual", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x38d5, "Yoga S990-16 Pro IMH YC Quad", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x38d6, "Yoga S990-16 Pro IMH VECO Quad", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38d7, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN),
++ SND_PCI_QUIRK(0x17aa, 0x38df, "Yoga Y990 Intel YC Dual", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x38e0, "Yoga Y990 Intel VECO Dual", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x38f8, "Yoga Book 9i", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC),
++ SND_PCI_QUIRK(0x17aa, 0x391f, "Yoga S990-16 pro Quad YC Quad", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x3920, "Yoga S990-16 pro Quad VECO Quad", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 601785ee2f0b84..dc476bfb6da40f 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -325,6 +325,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "M6500RC"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "E1404FA"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -339,6 +346,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "M7600RE"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "M3502RA"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/codecs/aw88399.c b/sound/soc/codecs/aw88399.c
+index 8dc2b8aa6832d5..bba59885242d0c 100644
+--- a/sound/soc/codecs/aw88399.c
++++ b/sound/soc/codecs/aw88399.c
+@@ -656,7 +656,7 @@ static int aw_dev_get_dsp_status(struct aw_device *aw_dev)
+ if (ret)
+ return ret;
+ if (!(reg_val & (~AW88399_WDT_CNT_MASK)))
+- ret = -EPERM;
++ return -EPERM;
+
+ return 0;
+ }
+diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
+index ac759f4a880d09..3be83c74d40993 100644
+--- a/sound/soc/codecs/lpass-rx-macro.c
++++ b/sound/soc/codecs/lpass-rx-macro.c
+@@ -202,12 +202,14 @@
+ #define CDC_RX_RXn_RX_PATH_SEC3(rx, n) (0x042c + rx->rxn_reg_stride * n)
+ #define CDC_RX_RX0_RX_PATH_SEC4 (0x0430)
+ #define CDC_RX_RX0_RX_PATH_SEC7 (0x0434)
+-#define CDC_RX_RXn_RX_PATH_SEC7(rx, n) (0x0434 + rx->rxn_reg_stride * n)
++#define CDC_RX_RXn_RX_PATH_SEC7(rx, n) \
++ (0x0434 + (rx->rxn_reg_stride * n) + ((n > 1) ? rx->rxn_reg_stride2 : 0))
+ #define CDC_RX_DSM_OUT_DELAY_SEL_MASK GENMASK(2, 0)
+ #define CDC_RX_DSM_OUT_DELAY_TWO_SAMPLE 0x2
+ #define CDC_RX_RX0_RX_PATH_MIX_SEC0 (0x0438)
+ #define CDC_RX_RX0_RX_PATH_MIX_SEC1 (0x043C)
+-#define CDC_RX_RXn_RX_PATH_DSM_CTL(rx, n) (0x0440 + rx->rxn_reg_stride * n)
++#define CDC_RX_RXn_RX_PATH_DSM_CTL(rx, n) \
++ (0x0440 + (rx->rxn_reg_stride * n) + ((n > 1) ? rx->rxn_reg_stride2 : 0))
+ #define CDC_RX_RXn_DSM_CLK_EN_MASK BIT(0)
+ #define CDC_RX_RX0_RX_PATH_DSM_CTL (0x0440)
+ #define CDC_RX_RX0_RX_PATH_DSM_DATA1 (0x0444)
+@@ -645,6 +647,7 @@ struct rx_macro {
+ int rx_mclk_cnt;
+ enum lpass_codec_version codec_version;
+ int rxn_reg_stride;
++ int rxn_reg_stride2;
+ bool is_ear_mode_on;
+ bool hph_pwr_mode;
+ bool hph_hd2_mode;
+@@ -1929,9 +1932,6 @@ static int rx_macro_digital_mute(struct snd_soc_dai *dai, int mute, int stream)
+ CDC_RX_PATH_PGA_MUTE_MASK, 0x0);
+ }
+
+- if (j == INTERP_AUX)
+- dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, 2);
+-
+ int_mux_cfg0 = CDC_RX_INP_MUX_RX_INT0_CFG0 + j * 8;
+ int_mux_cfg1 = int_mux_cfg0 + 4;
+ int_mux_cfg0_val = snd_soc_component_read(component, int_mux_cfg0);
+@@ -2702,9 +2702,6 @@ static int rx_macro_enable_interp_clk(struct snd_soc_component *component,
+
+ main_reg = CDC_RX_RXn_RX_PATH_CTL(rx, interp_idx);
+ dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, interp_idx);
+- if (interp_idx == INTERP_AUX)
+- dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, 2);
+-
+ rx_cfg2_reg = CDC_RX_RXn_RX_PATH_CFG2(rx, interp_idx);
+
+ if (SND_SOC_DAPM_EVENT_ON(event)) {
+@@ -3821,6 +3818,7 @@ static int rx_macro_probe(struct platform_device *pdev)
+ case LPASS_CODEC_VERSION_2_0:
+ case LPASS_CODEC_VERSION_2_1:
+ rx->rxn_reg_stride = 0x80;
++ rx->rxn_reg_stride2 = 0xc;
+ def_count = ARRAY_SIZE(rx_defaults) + ARRAY_SIZE(rx_pre_2_5_defaults);
+ reg_defaults = kmalloc_array(def_count, sizeof(struct reg_default), GFP_KERNEL);
+ if (!reg_defaults)
+@@ -3834,6 +3832,7 @@ static int rx_macro_probe(struct platform_device *pdev)
+ case LPASS_CODEC_VERSION_2_7:
+ case LPASS_CODEC_VERSION_2_8:
+ rx->rxn_reg_stride = 0xc0;
++ rx->rxn_reg_stride2 = 0x0;
+ def_count = ARRAY_SIZE(rx_defaults) + ARRAY_SIZE(rx_2_5_defaults);
+ reg_defaults = kmalloc_array(def_count, sizeof(struct reg_default), GFP_KERNEL);
+ if (!reg_defaults)
+diff --git a/sound/soc/codecs/rt722-sdca-sdw.c b/sound/soc/codecs/rt722-sdca-sdw.c
+index 87354bb1564e8d..d5c985ff5ac553 100644
+--- a/sound/soc/codecs/rt722-sdca-sdw.c
++++ b/sound/soc/codecs/rt722-sdca-sdw.c
+@@ -253,7 +253,7 @@ static int rt722_sdca_read_prop(struct sdw_slave *slave)
+ }
+
+ /* set the timeout values */
+- prop->clk_stop_timeout = 200;
++ prop->clk_stop_timeout = 900;
+
+ /* wake-up event */
+ prop->wake_capable = 1;
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index a3d580b2bbf463..1ecfa1184adac1 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -28,6 +28,13 @@
+
+ #define MICFIL_OSR_DEFAULT 16
+
++#define MICFIL_NUM_RATES 7
++#define MICFIL_CLK_SRC_NUM 3
++/* clock source ids */
++#define MICFIL_AUDIO_PLL1 0
++#define MICFIL_AUDIO_PLL2 1
++#define MICFIL_CLK_EXT3 2
++
+ enum quality {
+ QUALITY_HIGH,
+ QUALITY_MEDIUM,
+@@ -45,9 +52,12 @@ struct fsl_micfil {
+ struct clk *mclk;
+ struct clk *pll8k_clk;
+ struct clk *pll11k_clk;
++ struct clk *clk_src[MICFIL_CLK_SRC_NUM];
+ struct snd_dmaengine_dai_dma_data dma_params_rx;
+ struct sdma_peripheral_config sdmacfg;
+ struct snd_soc_card *card;
++ struct snd_pcm_hw_constraint_list constraint_rates;
++ unsigned int constraint_rates_list[MICFIL_NUM_RATES];
+ unsigned int dataline;
+ char name[32];
+ int irq[MICFIL_IRQ_LINES];
+@@ -475,12 +485,34 @@ static int fsl_micfil_startup(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+ struct fsl_micfil *micfil = snd_soc_dai_get_drvdata(dai);
++ unsigned int rates[MICFIL_NUM_RATES] = {8000, 11025, 16000, 22050, 32000, 44100, 48000};
++ int i, j, k = 0;
++ u64 clk_rate;
+
+ if (!micfil) {
+ dev_err(dai->dev, "micfil dai priv_data not set\n");
+ return -EINVAL;
+ }
+
++ micfil->constraint_rates.list = micfil->constraint_rates_list;
++ micfil->constraint_rates.count = 0;
++
++ for (j = 0; j < MICFIL_NUM_RATES; j++) {
++ for (i = 0; i < MICFIL_CLK_SRC_NUM; i++) {
++ clk_rate = clk_get_rate(micfil->clk_src[i]);
++ if (clk_rate != 0 && do_div(clk_rate, rates[j]) == 0) {
++ micfil->constraint_rates_list[k++] = rates[j];
++ micfil->constraint_rates.count++;
++ break;
++ }
++ }
++ }
++
++ if (micfil->constraint_rates.count > 0)
++ snd_pcm_hw_constraint_list(substream->runtime, 0,
++ SNDRV_PCM_HW_PARAM_RATE,
++ &micfil->constraint_rates);
++
+ return 0;
+ }
+
+@@ -1175,6 +1207,12 @@ static int fsl_micfil_probe(struct platform_device *pdev)
+ fsl_asoc_get_pll_clocks(&pdev->dev, &micfil->pll8k_clk,
+ &micfil->pll11k_clk);
+
++ micfil->clk_src[MICFIL_AUDIO_PLL1] = micfil->pll8k_clk;
++ micfil->clk_src[MICFIL_AUDIO_PLL2] = micfil->pll11k_clk;
++ micfil->clk_src[MICFIL_CLK_EXT3] = devm_clk_get(&pdev->dev, "clkext3");
++ if (IS_ERR(micfil->clk_src[MICFIL_CLK_EXT3]))
++ micfil->clk_src[MICFIL_CLK_EXT3] = NULL;
++
+ /* init regmap */
+ regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(regs))
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index f2dc82a2abc717..4d1e6c84918c66 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -28,6 +28,7 @@
+ #include "avs.h"
+ #include "cldma.h"
+ #include "messages.h"
++#include "pcm.h"
+
+ static u32 pgctl_mask = AZX_PGCTL_LSRMD_MASK;
+ module_param(pgctl_mask, uint, 0444);
+@@ -247,7 +248,7 @@ static void hdac_stream_update_pos(struct hdac_stream *stream, u64 buffer_size)
+ static void hdac_update_stream(struct hdac_bus *bus, struct hdac_stream *stream)
+ {
+ if (stream->substream) {
+- snd_pcm_period_elapsed(stream->substream);
++ avs_period_elapsed(stream->substream);
+ } else if (stream->cstream) {
+ u64 buffer_size = stream->cstream->runtime->buffer_size;
+
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index c76b86254a8b4d..37b1880c81141d 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -16,6 +16,7 @@
+ #include <sound/soc-component.h>
+ #include "avs.h"
+ #include "path.h"
++#include "pcm.h"
+ #include "topology.h"
+ #include "../../codecs/hda.h"
+
+@@ -30,6 +31,7 @@ struct avs_dma_data {
+ struct hdac_ext_stream *host_stream;
+ };
+
++ struct work_struct period_elapsed_work;
+ struct snd_pcm_substream *substream;
+ };
+
+@@ -56,6 +58,22 @@ avs_dai_find_path_template(struct snd_soc_dai *dai, bool is_fe, int direction)
+ return dw->priv;
+ }
+
++static void avs_period_elapsed_work(struct work_struct *work)
++{
++ struct avs_dma_data *data = container_of(work, struct avs_dma_data, period_elapsed_work);
++
++ snd_pcm_period_elapsed(data->substream);
++}
++
++void avs_period_elapsed(struct snd_pcm_substream *substream)
++{
++ struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
++ struct snd_soc_dai *dai = snd_soc_rtd_to_cpu(rtd, 0);
++ struct avs_dma_data *data = snd_soc_dai_get_dma_data(dai, substream);
++
++ schedule_work(&data->period_elapsed_work);
++}
++
+ static int avs_dai_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+ struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+@@ -77,6 +95,7 @@ static int avs_dai_startup(struct snd_pcm_substream *substream, struct snd_soc_d
+ data->substream = substream;
+ data->template = template;
+ data->adev = adev;
++ INIT_WORK(&data->period_elapsed_work, avs_period_elapsed_work);
+ snd_soc_dai_set_dma_data(dai, substream, data);
+
+ if (rtd->dai_link->ignore_suspend)
+diff --git a/sound/soc/intel/avs/pcm.h b/sound/soc/intel/avs/pcm.h
+new file mode 100644
+index 00000000000000..0f3615c9039822
+--- /dev/null
++++ b/sound/soc/intel/avs/pcm.h
+@@ -0,0 +1,16 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright(c) 2024 Intel Corporation
++ *
++ * Authors: Cezary Rojewski <cezary.rojewski@intel.com>
++ * Amadeusz Slawinski <amadeuszx.slawinski@linux.intel.com>
++ */
++
++#ifndef __SOUND_SOC_INTEL_AVS_PCM_H
++#define __SOUND_SOC_INTEL_AVS_PCM_H
++
++#include <sound/pcm.h>
++
++void avs_period_elapsed(struct snd_pcm_substream *substream);
++
++#endif
+diff --git a/sound/soc/intel/common/soc-acpi-intel-lnl-match.c b/sound/soc/intel/common/soc-acpi-intel-lnl-match.c
+index edfb668d0580d3..8452b66149119b 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-lnl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-lnl-match.c
+@@ -166,6 +166,15 @@ static const struct snd_soc_acpi_adr_device rt1316_3_group1_adr[] = {
+ }
+ };
+
++static const struct snd_soc_acpi_adr_device rt1318_1_adr[] = {
++ {
++ .adr = 0x000133025D131801ull,
++ .num_endpoints = 1,
++ .endpoints = &single_endpoint,
++ .name_prefix = "rt1318-1"
++ }
++};
++
+ static const struct snd_soc_acpi_adr_device rt1318_1_group1_adr[] = {
+ {
+ .adr = 0x000130025D131801ull,
+@@ -184,6 +193,15 @@ static const struct snd_soc_acpi_adr_device rt1318_2_group1_adr[] = {
+ }
+ };
+
++static const struct snd_soc_acpi_adr_device rt713_0_adr[] = {
++ {
++ .adr = 0x000031025D071301ull,
++ .num_endpoints = 1,
++ .endpoints = &single_endpoint,
++ .name_prefix = "rt713"
++ }
++};
++
+ static const struct snd_soc_acpi_adr_device rt714_0_adr[] = {
+ {
+ .adr = 0x000030025D071401ull,
+@@ -286,6 +304,20 @@ static const struct snd_soc_acpi_link_adr lnl_sdw_rt1318_l12_rt714_l0[] = {
+ {}
+ };
+
++static const struct snd_soc_acpi_link_adr lnl_sdw_rt713_l0_rt1318_l1[] = {
++ {
++ .mask = BIT(0),
++ .num_adr = ARRAY_SIZE(rt713_0_adr),
++ .adr_d = rt713_0_adr,
++ },
++ {
++ .mask = BIT(1),
++ .num_adr = ARRAY_SIZE(rt1318_1_adr),
++ .adr_d = rt1318_1_adr,
++ },
++ {}
++};
++
+ /* this table is used when there is no I2S codec present */
+ struct snd_soc_acpi_mach snd_soc_acpi_intel_lnl_sdw_machines[] = {
+ /* mockup tests need to be first */
+@@ -343,6 +375,12 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_lnl_sdw_machines[] = {
+ .drv_name = "sof_sdw",
+ .sof_tplg_filename = "sof-lnl-rt1318-l12-rt714-l0.tplg"
+ },
++ {
++ .link_mask = BIT(0) | BIT(1),
++ .links = lnl_sdw_rt713_l0_rt1318_l1,
++ .drv_name = "sof_sdw",
++ .sof_tplg_filename = "sof-lnl-rt713-l0-rt1318-l1.tplg"
++ },
+ {},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_lnl_sdw_machines);
+diff --git a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+index 745c5ada4c4bfd..d50cbd8040d45f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
++++ b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+@@ -420,6 +420,15 @@ verify_umulti_link_info(int fd, bool retprobe, __u64 *offsets,
+ if (!ASSERT_NEQ(err, -1, "readlink"))
+ return -1;
+
++ memset(&info, 0, sizeof(info));
++ err = bpf_link_get_info_by_fd(fd, &info, &len);
++ if (!ASSERT_OK(err, "bpf_link_get_info_by_fd"))
++ return -1;
++
++ ASSERT_EQ(info.uprobe_multi.count, 3, "info.uprobe_multi.count");
++ ASSERT_EQ(info.uprobe_multi.path_size, strlen(path) + 1,
++ "info.uprobe_multi.path_size");
++
+ for (bit = 0; bit < 8; bit++) {
+ memset(&info, 0, sizeof(info));
+ info.uprobe_multi.path = ptr_to_u64(path_buf);
+diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
+index 13b29a7faa71a1..d24d3a36ec1448 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
++++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
+@@ -656,4 +656,71 @@ __naked void two_old_ids_one_cur_id(void)
+ : __clobber_all);
+ }
+
++SEC("socket")
++/* Note the flag, see verifier.c:opt_subreg_zext_lo32_rnd_hi32() */
++__flag(BPF_F_TEST_RND_HI32)
++__success
++/* This test was added because of a bug in verifier.c:sync_linked_regs(),
++ * upon range propagation it destroyed subreg_def marks for registers.
++ * The subreg_def mark is used to decide whether zero extension instructions
++ * are needed when register is read. When BPF_F_TEST_RND_HI32 is set it
++ * also causes generation of statements to randomize upper halves of
++ * read registers.
++ *
++ * The test is written in a way to return an upper half of a register
++ * that is affected by range propagation and must have its subreg_def
++ * preserved. This gives a return value of 0, and leads to an undefined
++ * return value if the subreg_def mark is not preserved.
++ */
++__retval(0)
++/* Check that verifier believes r1/r0 are zero at exit */
++__log_level(2)
++__msg("4: (77) r1 >>= 32 ; R1_w=0")
++__msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0")
++__msg("6: (95) exit")
++__msg("from 3 to 4")
++__msg("4: (77) r1 >>= 32 ; R1_w=0")
++__msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0")
++__msg("6: (95) exit")
++/* Verify that statements to randomize upper half of r1 had not been
++ * generated.
++ */
++__xlated("call unknown")
++__xlated("r0 &= 2147483647")
++__xlated("w1 = w0")
++/* This is how disasm.c prints BPF_ZEXT_REG at the moment; x86 and arm64
++ * are the only CI archs that do not need zero extension for subregs.
++ */
++#if !defined(__TARGET_ARCH_x86) && !defined(__TARGET_ARCH_arm64)
++__xlated("w1 = w1")
++#endif
++__xlated("if w0 < 0xa goto pc+0")
++__xlated("r1 >>= 32")
++__xlated("r0 = r1")
++__xlated("exit")
++__naked void linked_regs_and_subreg_def(void)
++{
++ asm volatile (
++ "call %[bpf_ktime_get_ns];"
++ /* make sure r0 is in 32-bit range, otherwise w1 = w0 won't
++ * assign same IDs to registers.
++ */
++ "r0 &= 0x7fffffff;"
++ /* link w1 and w0 via ID */
++ "w1 = w0;"
++ /* 'if' statement propagates range info from w0 to w1,
++ * but should not affect w1->subreg_def property.
++ */
++ "if w0 < 10 goto +0;"
++ /* r1 is read here, on archs that require subreg zero
++ * extension this would cause zext patch generation.
++ */
++ "r1 >>= 32;"
++ "r0 = r1;"
++ "exit;"
++ :
++ : __imm(bpf_ktime_get_ns)
++ : __clobber_all);
++}
++
+ char _license[] SEC("license") = "GPL";
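A minimal userspace C sketch of what the subreg_def logic above guards (illustrative only, not part of the patch): a 32-bit move such as "w1 = w0" is defined to clear the upper half of the 64-bit destination register, and BPF_F_TEST_RND_HI32 fills that upper half with noise wherever the verifier believes no zero extension is needed, so a lost subreg_def mark turns "r1 >>= 32" from a guaranteed 0 into garbage:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t r0 = 0x7fffffffULL;               /* value known to fit in 32 bits */
	uint64_t r1 = (uint32_t)r0;                /* "w1 = w0": upper half cleared */
	uint64_t hi32_noise = 0xdeadbeefULL << 32; /* what BPF_F_TEST_RND_HI32 injects */

	printf("%llu\n", (unsigned long long)(r1 >> 32));                /* always 0 */
	printf("%llu\n", (unsigned long long)((r1 | hi32_noise) >> 32)); /* garbage */
	return 0;
}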
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-19 18:42 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-19 18:42 UTC (permalink / raw
To: gentoo-commits
commit: f3d6553ba359b2a6f77661f0f4c52e51e24671e0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 19 18:36:54 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Nov 19 18:36:54 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f3d6553b
GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2980_GCC15-gnu23-to-gnu11-fix.patch | 105 ++++++++++++++++++++++++++++++++++++
2 files changed, 109 insertions(+)
diff --git a/0000_README b/0000_README
index 5e0fcd5f..069847dc 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch: 2952_resolve-btfids-Fix-compiler-warnings.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=2c3d022abe6c3165109393b75a127b06c2c70063
Desc: resolve_btfids: Fix compiler warnings
+Patch: 2980_GCC15-gnu23-to-gnu11-fix.patch
+From: https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+Desc: GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere.
+
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_GCC15-gnu23-to-gnu11-fix.patch b/2980_GCC15-gnu23-to-gnu11-fix.patch
new file mode 100644
index 00000000..c74b6180
--- /dev/null
+++ b/2980_GCC15-gnu23-to-gnu11-fix.patch
@@ -0,0 +1,105 @@
+GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
+some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
+everywhere.
+
+https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+--- a/Makefile
++++ b/Makefile
+@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
+ # SHELL used by kbuild
+ CONFIG_SHELL := sh
+
++CSTD_FLAG := -std=gnu11
++
+ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+ HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+ HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
+ HOSTPKG_CONFIG = pkg-config
+
+ KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
+- -O2 -fomit-frame-pointer -std=gnu11
++ -O2 -fomit-frame-pointer $(CSTD_FLAG)
+ KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
+ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
+
+@@ -545,7 +547,7 @@ LINUXINCLUDE := \
+ KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
+
+ KBUILD_CFLAGS :=
+-KBUILD_CFLAGS += -std=gnu11
++KBUILD_CFLAGS += $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fshort-wchar
+ KBUILD_CFLAGS += -funsigned-char
+ KBUILD_CFLAGS += -fno-common
+@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
+ export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+ export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+-export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
++export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
+ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -fno-common \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+- -std=gnu11
++ $(CSTD_FLAG)
+ VDSO_CFLAGS += -O2
+ # Some useful compiler-dependent flags from top-level Makefile
+ VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ endif
+
+ # How to compile the 16-bit code. Note we always compile for -march=i386;
+ # that way we can complain to the user if the CPU is insufficient.
+-REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
++REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
+ -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -7,7 +7,7 @@
+ #
+
+ # non-x86 reuses KBUILD_CFLAGS, x86 does not
+-cflags-y := $(KBUILD_CFLAGS)
++cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
+ $(call cc-disable-warning, address-of-packed-member) \
+ $(call cc-disable-warning, gnu) \
+ -fno-asynchronous-unwind-tables \
+- $(CLANG_FLAGS)
++ $(CLANG_FLAGS) $(CSTD_FLAG)
+
+ # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
+ # disable the stackleak plugin
+@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
+ -ffreestanding \
+ -fno-stack-protector \
+ $(call cc-option,-fno-addrsig) \
+- -D__DISABLE_EXPORTS
++ -D__DISABLE_EXPORTS $(CSTD_FLAG)
+
+ #
+ # struct randomization only makes sense for Linux internal types, which the EFI
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -24,7 +24,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ # case of cross compiling, as it has the '--target=' flag, which is needed to
+ # avoid errors with '-march=i386', and future flags may depend on the target to
+ # be valid.
+-KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
++KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS) $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
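To check which C standard a compiler actually defaults to before applying this hack, a small probe works; the version constants below are the values GCC is expected to report for gnu11 and gnu23 (an assumption, verify against your toolchain):

#include <stdio.h>

int main(void)
{
	/* -std=gnu11 reports 201112L; GCC 15's gnu23 default reports 202311L */
	printf("__STDC_VERSION__ = %ldL\n", (long)__STDC_VERSION__);
	return 0;
}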
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-21 18:02 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-21 18:02 UTC (permalink / raw
To: gentoo-commits
commit: 0f40783765322f593867fe66021be5d411d17648
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 21 18:01:52 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 21 18:01:52 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0f407837
kbuild,bpf: Pass make jobs' value to pahole
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2996_kbuild-bpf-jobs-val-pahole-fix.patch | 64 +++++++++++++++++++++++++++++++
2 files changed, 68 insertions(+)
diff --git a/0000_README b/0000_README
index 069847dc..3c22b167 100644
--- a/0000_README
+++ b/0000_README
@@ -139,6 +139,10 @@ Patch: 2995_dtrace-6.11_p1.patch
From: https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.11-flat
Desc: dtrace patch for 6.11.X (CTF, modules.builtin.objs)
+Patch: 2996_kbuild-bpf-jobs-val-pahole-fix.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc: kbuild,bpf: Pass make jobs' value to pahole
+
Patch: 3000_Support-printing-firmware-info.patch
From: https://bugs.gentoo.org/732852
Desc: Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
diff --git a/2996_kbuild-bpf-jobs-val-pahole-fix.patch b/2996_kbuild-bpf-jobs-val-pahole-fix.patch
new file mode 100644
index 00000000..e69cde35
--- /dev/null
+++ b/2996_kbuild-bpf-jobs-val-pahole-fix.patch
@@ -0,0 +1,64 @@
+From 09048d22b7825a5025cf7e135f7e3134daff4df2 Mon Sep 17 00:00:00 2001
+From: Florian Schmaus <flo@geekplace.eu>
+Date: Sat, 2 Nov 2024 11:04:51 +0100
+Subject: kbuild,bpf: Pass make jobs' value to pahole
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Pass the value of make's -j/--jobs argument to pahole, to avoid out of
+memory errors and make pahole respect the "jobs" value of make.
+
+On systems with little memory but many cores, invoking pahole using -j
+without argument potentially creates too many pahole instances,
+causing an out-of-memory situation. Instead, we should pass make's
+"jobs" value as an argument to pahole's -j, which is likely configured
+to be (much) lower than the actual core count on such systems.
+
+If make was invoked without -j, either via cmdline or MAKEFLAGS, then
+JOBS will be simply empty, resulting in the existing behavior, as
+expected.
+
+Signed-off-by: Florian Schmaus <flo@geekplace.eu>
+Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
+Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
+Link: https://lore.kernel.org/bpf/20241102100452.793970-1-flo@geekplace.eu
+---
+ scripts/Makefile.btf | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+(limited to 'scripts/Makefile.btf')
+
+diff --git a/scripts/Makefile.btf b/scripts/Makefile.btf
+index b75f09f3f4248d..c3cbeb13de5035 100644
+--- a/scripts/Makefile.btf
++++ b/scripts/Makefile.btf
+@@ -3,6 +3,8 @@
+ pahole-ver := $(CONFIG_PAHOLE_VERSION)
+ pahole-flags-y :=
+
++JOBS := $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS)))
++
+ ifeq ($(call test-le, $(pahole-ver), 125),y)
+
+ # pahole 1.18 through 1.21 can't handle zero-sized per-CPU vars
+@@ -12,14 +14,14 @@ endif
+
+ pahole-flags-$(call test-ge, $(pahole-ver), 121) += --btf_gen_floats
+
+-pahole-flags-$(call test-ge, $(pahole-ver), 122) += -j
++pahole-flags-$(call test-ge, $(pahole-ver), 122) += -j$(JOBS)
+
+ pahole-flags-$(call test-ge, $(pahole-ver), 125) += --skip_encoding_btf_inconsistent_proto --btf_gen_optimized
+
+ else
+
+ # Switch to using --btf_features for v1.26 and later.
+-pahole-flags-$(call test-ge, $(pahole-ver), 126) = -j --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func,decl_tag_kfuncs
++pahole-flags-$(call test-ge, $(pahole-ver), 126) = -j$(JOBS) --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func,decl_tag_kfuncs
+
+ ifneq ($(KBUILD_EXTMOD),)
+ module-pahole-flags-$(call test-ge, $(pahole-ver), 126) += --btf_features=distilled_base
+--
+cgit 1.2.3-korg
+
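As a rough C equivalent (illustrative only, not how make implements it) of what $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS))) extracts for JOBS:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char makeflags[] = "-rR -j8";  /* example MAKEFLAGS value, an assumption */
	const char *jobs = "";         /* stays empty when no -j word is present */

	for (char *w = strtok(makeflags, " "); w; w = strtok(NULL, " "))
		if (strncmp(w, "-j", 2) == 0)
			jobs = w + 2;  /* "-j8" -> "8"; a bare "-j" -> "" */

	printf("JOBS=%s\n", jobs);  /* pahole is then invoked with -j$(JOBS) */
	return 0;
}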
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-22 17:46 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-22 17:46 UTC (permalink / raw
To: gentoo-commits
commit: 80fc1f5354a1d26cad5e58fd20b089999eef1cc7
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 22 17:46:18 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 22 17:46:18 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=80fc1f53
Linux patch 6.11.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-6.11.10.patch | 4469 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4473 insertions(+)
diff --git a/0000_README b/0000_README
index 3c22b167..ac3dcb33 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-6.11.9.patch
From: https://www.kernel.org
Desc: Linux 6.11.9
+Patch: 1009_linux-6.11.10.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.10
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1009_linux-6.11.10.patch b/1009_linux-6.11.10.patch
new file mode 100644
index 00000000..08a334ab
--- /dev/null
+++ b/1009_linux-6.11.10.patch
@@ -0,0 +1,4469 @@
+diff --git a/Makefile b/Makefile
+index 3e48c8d84540bc..4b075e003b2d61 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 173159e93c99c4..a1e5cdc1b336ae 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -1597,6 +1597,9 @@ config ATAGS_PROC
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool y
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config AUTO_ZRELADDR
+ bool "Auto calculation of the decompressed kernel image address" if !ARCH_MULTIPLATFORM
+ default !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
+diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
+index 1ec35f065617e4..28873cda464f51 100644
+--- a/arch/arm/kernel/head.S
++++ b/arch/arm/kernel/head.S
+@@ -252,11 +252,15 @@ __create_page_tables:
+ */
+ add r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
+ ldr r6, =(_end - 1)
++
++ /* For XIP, kernel_sec_start/kernel_sec_end are currently in RO memory */
++#ifndef CONFIG_XIP_KERNEL
+ adr_l r5, kernel_sec_start @ _pa(kernel_sec_start)
+ #if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
+ str r8, [r5, #4] @ Save physical start of kernel (BE)
+ #else
+ str r8, [r5] @ Save physical start of kernel (LE)
++#endif
+ #endif
+ orr r3, r8, r7 @ Add the MMU flags
+ add r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ENTRY_ORDER)
+@@ -264,6 +268,7 @@ __create_page_tables:
+ add r3, r3, #1 << SECTION_SHIFT
+ cmp r0, r6
+ bls 1b
++#ifndef CONFIG_XIP_KERNEL
+ eor r3, r3, r7 @ Remove the MMU flags
+ adr_l r5, kernel_sec_end @ _pa(kernel_sec_end)
+ #if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
+@@ -271,8 +276,7 @@ __create_page_tables:
+ #else
+ str r3, [r5] @ Save physical end of kernel (LE)
+ #endif
+-
+-#ifdef CONFIG_XIP_KERNEL
++#else
+ /*
+ * Map the kernel image separately as it is not located in RAM.
+ */
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index 480e307501bb4b..6ea645939573fb 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -570,6 +570,7 @@ static int bad_syscall(int n, struct pt_regs *regs)
+ static inline int
+ __do_cache_op(unsigned long start, unsigned long end)
+ {
++ unsigned int ua_flags;
+ int ret;
+
+ do {
+@@ -578,7 +579,9 @@ __do_cache_op(unsigned long start, unsigned long end)
+ if (fatal_signal_pending(current))
+ return 0;
+
++ ua_flags = uaccess_save_and_enable();
+ ret = flush_icache_user_range(start, start + chunk);
++ uaccess_restore(ua_flags);
+ if (ret)
+ return ret;
+
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 3f774856ca6763..b6ad0a8810e427 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -1402,18 +1402,6 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
+ create_mapping(&map);
+ }
+
+- /*
+- * Map the kernel if it is XIP.
+- * It is always first in the modulearea.
+- */
+-#ifdef CONFIG_XIP_KERNEL
+- map.pfn = __phys_to_pfn(CONFIG_XIP_PHYS_ADDR & SECTION_MASK);
+- map.virtual = MODULES_VADDR;
+- map.length = ((unsigned long)_exiprom - map.virtual + ~SECTION_MASK) & SECTION_MASK;
+- map.type = MT_ROM;
+- create_mapping(&map);
+-#endif
+-
+ /*
+ * Map the cache flushing regions.
+ */
+@@ -1603,12 +1591,27 @@ static void __init map_kernel(void)
+ * This will only persist until we turn on proper memory management later on
+ * and we remap the whole kernel with page granularity.
+ */
++#ifdef CONFIG_XIP_KERNEL
++ phys_addr_t kernel_nx_start = kernel_sec_start;
++#else
+ phys_addr_t kernel_x_start = kernel_sec_start;
+ phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+ phys_addr_t kernel_nx_start = kernel_x_end;
++#endif
+ phys_addr_t kernel_nx_end = kernel_sec_end;
+ struct map_desc map;
+
++ /*
++ * Map the kernel if it is XIP.
++ * It is always first in the modulearea.
++ */
++#ifdef CONFIG_XIP_KERNEL
++ map.pfn = __phys_to_pfn(CONFIG_XIP_PHYS_ADDR & SECTION_MASK);
++ map.virtual = MODULES_VADDR;
++ map.length = ((unsigned long)_exiprom - map.virtual + ~SECTION_MASK) & SECTION_MASK;
++ map.type = MT_ROM;
++ create_mapping(&map);
++#else
+ map.pfn = __phys_to_pfn(kernel_x_start);
+ map.virtual = __phys_to_virt(kernel_x_start);
+ map.length = kernel_x_end - kernel_x_start;
+@@ -1618,7 +1621,7 @@ static void __init map_kernel(void)
+ /* If the nx part is small it may end up covered by the tail of the RWX section */
+ if (kernel_x_end == kernel_nx_end)
+ return;
+-
++#endif
+ map.pfn = __phys_to_pfn(kernel_nx_start);
+ map.virtual = __phys_to_virt(kernel_nx_start);
+ map.length = kernel_nx_end - kernel_nx_start;
+@@ -1762,6 +1765,11 @@ void __init paging_init(const struct machine_desc *mdesc)
+ {
+ void *zero_page;
+
++#ifdef CONFIG_XIP_KERNEL
++ /* Store the kernel RW RAM region start/end in these variables */
++ kernel_sec_start = CONFIG_PHYS_OFFSET & SECTION_MASK;
++ kernel_sec_end = round_up(__pa(_end), SECTION_SIZE);
++#endif
+ pr_debug("physical kernel sections: 0x%08llx-0x%08llx\n",
+ kernel_sec_start, kernel_sec_end);
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 402ae0297993c2..7d92c0a9c43d42 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1559,6 +1559,9 @@ config ARCH_DEFAULT_KEXEC_IMAGE_VERIFY_SIG
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool y
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
+ def_bool CRASH_RESERVE
+
+diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
+index 5966ee4a61542e..ef35c52aabd66d 100644
+--- a/arch/arm64/include/asm/mman.h
++++ b/arch/arm64/include/asm/mman.h
+@@ -3,6 +3,8 @@
+ #define __ASM_MMAN_H__
+
+ #include <linux/compiler.h>
++#include <linux/fs.h>
++#include <linux/shmem_fs.h>
+ #include <linux/types.h>
+ #include <uapi/asm/mman.h>
+
+@@ -21,19 +23,21 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+ }
+ #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
+
+-static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
++static inline unsigned long arch_calc_vm_flag_bits(struct file *file,
++ unsigned long flags)
+ {
+ /*
+ * Only allow MTE on anonymous mappings as these are guaranteed to be
+ * backed by tags-capable memory. The vm_flags may be overridden by a
+ * filesystem supporting MTE (RAM-based).
+ */
+- if (system_supports_mte() && (flags & MAP_ANONYMOUS))
++ if (system_supports_mte() &&
++ ((flags & MAP_ANONYMOUS) || shmem_file(file)))
+ return VM_MTE_ALLOWED;
+
+ return 0;
+ }
+-#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
++#define arch_calc_vm_flag_bits(file, flags) arch_calc_vm_flag_bits(file, flags)
+
+ static inline bool arch_validate_prot(unsigned long prot,
+ unsigned long addr __always_unused)
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index 70f169210b523f..ce232ddcd27d70 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -599,6 +599,9 @@ config ARCH_SUPPORTS_KEXEC
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool y
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config ARCH_SELECTS_CRASH_DUMP
+ def_bool y
+ depends on CRASH_DUMP
+diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
+index c6bce5fbff57b0..7f52bd31b9d4f5 100644
+--- a/arch/loongarch/include/asm/kasan.h
++++ b/arch/loongarch/include/asm/kasan.h
+@@ -25,6 +25,7 @@
+ /* 64-bit segment value. */
+ #define XKPRANGE_UC_SEG (0x8000)
+ #define XKPRANGE_CC_SEG (0x9000)
++#define XKPRANGE_WC_SEG (0xa000)
+ #define XKVRANGE_VC_SEG (0xffff)
+
+ /* Cached */
+@@ -41,20 +42,28 @@
+ #define XKPRANGE_UC_SHADOW_SIZE (XKPRANGE_UC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+ #define XKPRANGE_UC_SHADOW_END (XKPRANGE_UC_KASAN_OFFSET + XKPRANGE_UC_SHADOW_SIZE)
+
++/* WriteCombine */
++#define XKPRANGE_WC_START WRITECOMBINE_BASE
++#define XKPRANGE_WC_SIZE XRANGE_SIZE
++#define XKPRANGE_WC_KASAN_OFFSET XKPRANGE_UC_SHADOW_END
++#define XKPRANGE_WC_SHADOW_SIZE (XKPRANGE_WC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
++#define XKPRANGE_WC_SHADOW_END (XKPRANGE_WC_KASAN_OFFSET + XKPRANGE_WC_SHADOW_SIZE)
++
+ /* VMALLOC (Cached or UnCached) */
+ #define XKVRANGE_VC_START MODULES_VADDR
+ #define XKVRANGE_VC_SIZE round_up(KFENCE_AREA_END - MODULES_VADDR + 1, PGDIR_SIZE)
+-#define XKVRANGE_VC_KASAN_OFFSET XKPRANGE_UC_SHADOW_END
++#define XKVRANGE_VC_KASAN_OFFSET XKPRANGE_WC_SHADOW_END
+ #define XKVRANGE_VC_SHADOW_SIZE (XKVRANGE_VC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+ #define XKVRANGE_VC_SHADOW_END (XKVRANGE_VC_KASAN_OFFSET + XKVRANGE_VC_SHADOW_SIZE)
+
+ /* KAsan shadow memory start right after vmalloc. */
+ #define KASAN_SHADOW_START round_up(KFENCE_AREA_END, PGDIR_SIZE)
+ #define KASAN_SHADOW_SIZE (XKVRANGE_VC_SHADOW_END - XKPRANGE_CC_KASAN_OFFSET)
+-#define KASAN_SHADOW_END round_up(KASAN_SHADOW_START + KASAN_SHADOW_SIZE, PGDIR_SIZE)
++#define KASAN_SHADOW_END (round_up(KASAN_SHADOW_START + KASAN_SHADOW_SIZE, PGDIR_SIZE) - 1)
+
+ #define XKPRANGE_CC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_CC_KASAN_OFFSET)
+ #define XKPRANGE_UC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_UC_KASAN_OFFSET)
++#define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
+ #define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)
+
+ extern bool kasan_early_stage;
+diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
+index 9c9b75b76f62f2..867fa816726cf3 100644
+--- a/arch/loongarch/kernel/paravirt.c
++++ b/arch/loongarch/kernel/paravirt.c
+@@ -50,11 +50,18 @@ static u64 paravt_steal_clock(int cpu)
+ }
+
+ #ifdef CONFIG_SMP
++static struct smp_ops native_ops;
++
+ static void pv_send_ipi_single(int cpu, unsigned int action)
+ {
+ int min, old;
+ irq_cpustat_t *info = &per_cpu(irq_stat, cpu);
+
++ if (unlikely(action == ACTION_BOOT_CPU)) {
++ native_ops.send_ipi_single(cpu, action);
++ return;
++ }
++
+ old = atomic_fetch_or(BIT(action), &info->message);
+ if (old)
+ return;
+@@ -74,6 +81,11 @@ static void pv_send_ipi_mask(const struct cpumask *mask, unsigned int action)
+ if (cpumask_empty(mask))
+ return;
+
++ if (unlikely(action == ACTION_BOOT_CPU)) {
++ native_ops.send_ipi_mask(mask, action);
++ return;
++ }
++
+ action = BIT(action);
+ for_each_cpu(i, mask) {
+ info = &per_cpu(irq_stat, i);
+@@ -141,6 +153,8 @@ static void pv_init_ipi(void)
+ {
+ int r, swi;
+
++ /* Init native ipi irq for ACTION_BOOT_CPU */
++ native_ops.init_ipi();
+ swi = get_percpu_irq(INT_SWI0);
+ if (swi < 0)
+ panic("SWI0 IRQ mapping failed\n");
+@@ -179,6 +193,7 @@ int __init pv_ipi_init(void)
+ return 0;
+
+ #ifdef CONFIG_SMP
++ native_ops = mp_ops;
+ mp_ops.init_ipi = pv_init_ipi;
+ mp_ops.send_ipi_single = pv_send_ipi_single;
+ mp_ops.send_ipi_mask = pv_send_ipi_mask;
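
The LoongArch paravirt fix follows a save-and-delegate pattern: snapshot the native ops table before overriding it, then route ACTION_BOOT_CPU back through the snapshot. A minimal userspace sketch, where smp_ops_t and both action values are simplified stand-ins rather than kernel API:

#include <stdio.h>

#define ACTION_BOOT_CPU 1
#define ACTION_RESCHED  2

typedef struct {
	void (*send_ipi_single)(int cpu, unsigned int action);
} smp_ops_t;

static smp_ops_t native_ops;   /* snapshot of the original ops */
static smp_ops_t mp_ops;       /* live ops table, may be overridden */

static void native_send(int cpu, unsigned int action)
{
	printf("native IPI to cpu %d, action %u\n", cpu, action);
}

static void pv_send(int cpu, unsigned int action)
{
	/* Boot IPIs must bypass the paravirt mailbox: delegate to native. */
	if (action == ACTION_BOOT_CPU) {
		native_ops.send_ipi_single(cpu, action);
		return;
	}
	printf("paravirt IPI to cpu %d, action %u\n", cpu, action);
}

int main(void)
{
	mp_ops.send_ipi_single = native_send;

	native_ops = mp_ops;                 /* save before overriding */
	mp_ops.send_ipi_single = pv_send;

	mp_ops.send_ipi_single(1, ACTION_BOOT_CPU); /* falls back to native */
	mp_ops.send_ipi_single(1, ACTION_RESCHED);  /* stays paravirt */
	return 0;
}
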
+diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
+index ca405ab86aaef6..b1329fe01fae90 100644
+--- a/arch/loongarch/kernel/smp.c
++++ b/arch/loongarch/kernel/smp.c
+@@ -296,7 +296,7 @@ static void __init fdt_smp_setup(void)
+ __cpu_number_map[cpuid] = cpu;
+ __cpu_logical_map[cpu] = cpuid;
+
+- early_numa_add_cpu(cpu, 0);
++ early_numa_add_cpu(cpuid, 0);
+ set_cpuid_to_node(cpuid, 0);
+ }
+
+diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
+index 427d6b1aec09e7..d2681272d8f0f3 100644
+--- a/arch/loongarch/mm/kasan_init.c
++++ b/arch/loongarch/mm/kasan_init.c
+@@ -13,6 +13,13 @@
+
+ static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+
++#ifdef __PAGETABLE_P4D_FOLDED
++#define __pgd_none(early, pgd) (0)
++#else
++#define __pgd_none(early, pgd) (early ? (pgd_val(pgd) == 0) : \
++(__pa(pgd_val(pgd)) == (unsigned long)__pa(kasan_early_shadow_p4d)))
++#endif
++
+ #ifdef __PAGETABLE_PUD_FOLDED
+ #define __p4d_none(early, p4d) (0)
+ #else
+@@ -55,6 +62,9 @@ void *kasan_mem_to_shadow(const void *addr)
+ case XKPRANGE_UC_SEG:
+ offset = XKPRANGE_UC_SHADOW_OFFSET;
+ break;
++ case XKPRANGE_WC_SEG:
++ offset = XKPRANGE_WC_SHADOW_OFFSET;
++ break;
+ case XKVRANGE_VC_SEG:
+ offset = XKVRANGE_VC_SHADOW_OFFSET;
+ break;
+@@ -79,6 +89,8 @@ const void *kasan_shadow_to_mem(const void *shadow_addr)
+
+ if (addr >= XKVRANGE_VC_SHADOW_OFFSET)
+ return (void *)(((addr - XKVRANGE_VC_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKVRANGE_VC_START);
++ else if (addr >= XKPRANGE_WC_SHADOW_OFFSET)
++ return (void *)(((addr - XKPRANGE_WC_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKPRANGE_WC_START);
+ else if (addr >= XKPRANGE_UC_SHADOW_OFFSET)
+ return (void *)(((addr - XKPRANGE_UC_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKPRANGE_UC_START);
+ else if (addr >= XKPRANGE_CC_SHADOW_OFFSET)
+@@ -142,6 +154,19 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node,
+ return pud_offset(p4dp, addr);
+ }
+
++static p4d_t *__init kasan_p4d_offset(pgd_t *pgdp, unsigned long addr, int node, bool early)
++{
++ if (__pgd_none(early, pgdp_get(pgdp))) {
++ phys_addr_t p4d_phys = early ?
++ __pa_symbol(kasan_early_shadow_p4d) : kasan_alloc_zeroed_page(node);
++ if (!early)
++ memcpy(__va(p4d_phys), kasan_early_shadow_p4d, sizeof(kasan_early_shadow_p4d));
++ pgd_populate(&init_mm, pgdp, (p4d_t *)__va(p4d_phys));
++ }
++
++ return p4d_offset(pgdp, addr);
++}
++
+ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
+ unsigned long end, int node, bool early)
+ {
+@@ -178,19 +203,19 @@ static void __init kasan_pud_populate(p4d_t *p4dp, unsigned long addr,
+ do {
+ next = pud_addr_end(addr, end);
+ kasan_pmd_populate(pudp, addr, next, node, early);
+- } while (pudp++, addr = next, addr != end);
++ } while (pudp++, addr = next, addr != end && __pud_none(early, READ_ONCE(*pudp)));
+ }
+
+ static void __init kasan_p4d_populate(pgd_t *pgdp, unsigned long addr,
+ unsigned long end, int node, bool early)
+ {
+ unsigned long next;
+- p4d_t *p4dp = p4d_offset(pgdp, addr);
++ p4d_t *p4dp = kasan_p4d_offset(pgdp, addr, node, early);
+
+ do {
+ next = p4d_addr_end(addr, end);
+ kasan_pud_populate(p4dp, addr, next, node, early);
+- } while (p4dp++, addr = next, addr != end);
++ } while (p4dp++, addr = next, addr != end && __p4d_none(early, READ_ONCE(*p4dp)));
+ }
+
+ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
+@@ -218,7 +243,7 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end,
+ asmlinkage void __init kasan_early_init(void)
+ {
+ BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+- BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
++ BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END + 1, PGDIR_SIZE));
+ }
+
+ static inline void kasan_set_pgd(pgd_t *pgdp, pgd_t pgdval)
+@@ -233,7 +258,7 @@ static void __init clear_pgds(unsigned long start, unsigned long end)
+ * swapper_pg_dir. pgd_clear() can't be used
+ * here because it's nop on 2,3-level pagetable setups
+ */
+- for (; start < end; start += PGDIR_SIZE)
++ for (; start < end; start = pgd_addr_end(start, end))
+ kasan_set_pgd((pgd_t *)pgd_offset_k(start), __pgd(0));
+ }
+
+@@ -242,6 +267,17 @@ void __init kasan_init(void)
+ u64 i;
+ phys_addr_t pa_start, pa_end;
+
++ /*
++ * If PGDIR_SIZE is too large for cpu_vabits, KASAN_SHADOW_END will
++ * overflow UINTPTR_MAX and then looks like a user space address.
++ * For example, PGDIR_SIZE of CONFIG_4KB_4LEVEL is 2^39, which is too
++ * large for Loongson-2K series whose cpu_vabits = 39.
++ */
++ if (KASAN_SHADOW_END < vm_map_base) {
++ pr_warn("PGDIR_SIZE too large for cpu_vabits, KernelAddressSanitizer disabled.\n");
++ return;
++ }
++
+ /*
+ * PGD was populated as invalid_pmd_table or invalid_pud_table
+ * in pagetable_init() which depends on how many levels of page
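
The guard added in kasan_init() exists because the shadow-end computation can wrap. A demonstration of the unsigned wrap-around with a 2^39 PGDIR_SIZE; the start and size values here are hypothetical, chosen only to trigger the overflow:

#include <stdint.h>
#include <stdio.h>

#define ROUND_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

int main(void)
{
	uint64_t pgdir_size   = 1ULL << 39;             /* CONFIG_4KB_4LEVEL */
	uint64_t shadow_start = 0xffffffffc0000000ULL;  /* hypothetical */
	uint64_t shadow_size  = 1ULL << 36;             /* hypothetical */
	uint64_t shadow_end   =
		ROUND_UP(shadow_start + shadow_size, pgdir_size) - 1;

	/* The unsigned arithmetic wraps past 2^64, so "end" lands far below
	 * any kernel virtual address -- exactly what the vm_map_base check
	 * in kasan_init() rejects. */
	printf("KASAN_SHADOW_END = 0x%016llx\n", (unsigned long long)shadow_end);
	return 0;
}
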
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 60077e57693563..b547f4304d0c3d 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -2881,6 +2881,9 @@ config ARCH_SUPPORTS_KEXEC
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool y
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config PHYSICAL_START
+ hex "Physical address where the kernel is loaded"
+ default "0xffffffff84000000"
+diff --git a/arch/parisc/include/asm/mman.h b/arch/parisc/include/asm/mman.h
+index 89b6beeda0b869..663f587dc78965 100644
+--- a/arch/parisc/include/asm/mman.h
++++ b/arch/parisc/include/asm/mman.h
+@@ -2,6 +2,7 @@
+ #ifndef __ASM_MMAN_H__
+ #define __ASM_MMAN_H__
+
++#include <linux/fs.h>
+ #include <uapi/asm/mman.h>
+
+ /* PARISC cannot allow mdwe as it needs writable stacks */
+@@ -11,7 +12,7 @@ static inline bool arch_memory_deny_write_exec_supported(void)
+ }
+ #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+
+-static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
++static inline unsigned long arch_calc_vm_flag_bits(struct file *file, unsigned long flags)
+ {
+ /*
+ * The stack on parisc grows upwards, so if userspace requests memory
+@@ -23,6 +24,6 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
+
+ return 0;
+ }
+-#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
++#define arch_calc_vm_flag_bits(file, flags) arch_calc_vm_flag_bits(file, flags)
+
+ #endif /* __ASM_MMAN_H__ */
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index d7b09b064a8ac5..0f3c1f958eace0 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -682,6 +682,10 @@ config RELOCATABLE_TEST
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool PPC64 || PPC_BOOK3S_32 || PPC_85xx || (44x && !SMP)
+
++config ARCH_DEFAULT_CRASH_DUMP
++ bool
++ default y if !PPC_BOOK3S_32
++
+ config ARCH_SELECTS_CRASH_DUMP
+ def_bool y
+ depends on CRASH_DUMP
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 6651a5cbdc2717..e513eb642557a3 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -884,6 +884,9 @@ config ARCH_SUPPORTS_KEXEC_PURGATORY
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool y
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
+ def_bool CRASH_RESERVE
+
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index c60e699e99f5b3..fff371b89e4181 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -275,6 +275,9 @@ config ARCH_SUPPORTS_CRASH_DUMP
+ This option also enables s390 zfcpdump.
+ See also <file:Documentation/arch/s390/zfcpdump.rst>
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ menu "Processor type and features"
+
+ config HAVE_MARCH_Z10_FEATURES
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index 1aa3c4a0c5b276..3a6338962636ae 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -549,6 +549,9 @@ config ARCH_SUPPORTS_KEXEC
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool BROKEN_ON_SMP
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config ARCH_SUPPORTS_KEXEC_JUMP
+ def_bool y
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 4e0b05a6bb5c36..ca70367ac54407 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2087,6 +2087,9 @@ config ARCH_SUPPORTS_KEXEC_JUMP
+ config ARCH_SUPPORTS_CRASH_DUMP
+ def_bool X86_64 || (X86_32 && HIGHMEM)
+
++config ARCH_DEFAULT_CRASH_DUMP
++ def_bool y
++
+ config ARCH_SUPPORTS_CRASH_HOTPLUG
+ def_bool y
+
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 801fd85c3ef693..5fb0cc6c16420e 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -138,9 +138,10 @@ ifeq ($(CONFIG_X86_32),y)
+
+ ifeq ($(CONFIG_STACKPROTECTOR),y)
+ ifeq ($(CONFIG_SMP),y)
+- KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard
++ KBUILD_CFLAGS += -mstack-protector-guard-reg=fs \
++ -mstack-protector-guard-symbol=__ref_stack_chk_guard
+ else
+- KBUILD_CFLAGS += -mstack-protector-guard=global
++ KBUILD_CFLAGS += -mstack-protector-guard=global
+ endif
+ endif
+ else
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index 324686bca36813..b7ea3e8e9eccd5 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -51,3 +51,19 @@ EXPORT_SYMBOL_GPL(mds_verw_sel);
+ .popsection
+
+ THUNK warn_thunk_thunk, __warn_thunk
++
++#ifndef CONFIG_X86_64
++/*
++ * Clang's implementation of TLS stack cookies requires the variable in
++ * question to be a TLS variable. If the variable happens to be defined as an
++ * ordinary variable with external linkage in the same compilation unit (which
++ * amounts to the whole of vmlinux with LTO enabled), Clang will drop the
++ * segment register prefix from the references, resulting in broken code. Work
++ * around this by avoiding the symbol used in -mstack-protector-guard-symbol=
++ * entirely in the C code, and use an alias emitted by the linker script
++ * instead.
++ */
++#ifdef CONFIG_STACKPROTECTOR
++EXPORT_SYMBOL(__ref_stack_chk_guard);
++#endif
++#endif
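
The workaround described in the comment above hinges on the linker, not the compiler, resolving the guard symbol. A userspace sketch of the same aliasing trick, assuming a GNU toolchain and the hypothetical build line gcc tls_alias.c -Wl,--defsym=__ref_guard=guard, which plays the role of the PROVIDE() added to vmlinux.lds.S further down:

#include <stdio.h>

unsigned long guard = 0xdeadbeef;   /* the real object */
extern unsigned long __ref_guard;   /* resolved only by the linker alias */

int main(void)
{
	/* The compiler sees an ordinary extern and has no opportunity to
	 * rewrite the access the way Clang rewrites TLS cookie loads. */
	printf("guard via alias: 0x%lx\n", __ref_guard);
	return 0;
}
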
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index 25466c4d213481..3674006e39744e 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -20,3 +20,6 @@
+ extern void cmpxchg8b_emu(void);
+ #endif
+
++#if defined(__GENKSYMS__) && defined(CONFIG_STACKPROTECTOR)
++extern unsigned long __ref_stack_chk_guard;
++#endif
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index f01b72052f7908..2b8ca66793fb03 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -924,6 +924,17 @@ static void init_amd_zen4(struct cpuinfo_x86 *c)
+ {
+ if (!cpu_has(c, X86_FEATURE_HYPERVISOR))
+ msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT);
++
++ /*
++ * These Zen4 SoCs advertise support for virtualized VMLOAD/VMSAVE
++ * in some BIOS versions but they can lead to random host reboots.
++ */
++ switch (c->x86_model) {
++ case 0x18 ... 0x1f:
++ case 0x60 ... 0x7f:
++ clear_cpu_cap(c, X86_FEATURE_V_VMSAVE_VMLOAD);
++ break;
++ }
+ }
+
+ static void init_amd_zen5(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b7d97f97cd9822..c472282ab40fd2 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2084,8 +2084,10 @@ void syscall_init(void)
+
+ #ifdef CONFIG_STACKPROTECTOR
+ DEFINE_PER_CPU(unsigned long, __stack_chk_guard);
++#ifndef CONFIG_SMP
+ EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
+ #endif
++#endif
+
+ #endif /* CONFIG_X86_64 */
+
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 6fb8d7ba9b50aa..37b13394df7ab0 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -492,6 +492,9 @@ SECTIONS
+ . = ASSERT((_end - LOAD_OFFSET <= KERNEL_IMAGE_SIZE),
+ "kernel image bigger than KERNEL_IMAGE_SIZE");
+
++/* needed for Clang - see arch/x86/entry/entry.S */
++PROVIDE(__ref_stack_chk_guard = __stack_chk_guard);
++
+ #ifdef CONFIG_X86_64
+ /*
+ * Per-cpu symbols which need to be offset from __per_cpu_load
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index b1269e2bb76161..ed18109ef220c6 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2629,19 +2629,26 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
+ {
+ struct kvm_lapic *apic = vcpu->arch.apic;
+
+- if (apic->apicv_active) {
+- /* irr_pending is always true when apicv is activated. */
+- apic->irr_pending = true;
++ /*
++ * When APICv is enabled, KVM must always search the IRR for a pending
++ * IRQ, as other vCPUs and devices can set IRR bits even if the vCPU
++ * isn't running. If APICv is disabled, KVM _should_ search the IRR
++ * for a pending IRQ. But KVM currently doesn't ensure *all* hardware,
++ * e.g. CPUs and IOMMUs, has seen the change in state, i.e. searching
++ * the IRR at this time could race with IRQ delivery from hardware that
++ * still sees APICv as being enabled.
++ *
++ * FIXME: Ensure other vCPUs and devices observe the change in APICv
++ * state prior to updating KVM's metadata caches, so that KVM
++ * can safely search the IRR and set irr_pending accordingly.
++ */
++ apic->irr_pending = true;
++
++ if (apic->apicv_active)
+ apic->isr_count = 1;
+- } else {
+- /*
+- * Don't clear irr_pending, searching the IRR can race with
+- * updates from the CPU as APICv is still active from hardware's
+- * perspective. The flag will be cleared as appropriate when
+- * KVM injects the interrupt.
+- */
++ else
+ apic->isr_count = count_vectors(apic->regs + APIC_ISR);
+- }
++
+ apic->highest_isr_cache = -1;
+ }
+
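
A minimal sketch of the "assume pending until proven otherwise" rule the comment above spells out; the struct and names are illustrative, not KVM's:

#include <stdbool.h>
#include <stdio.h>

struct lapic_state {
	bool apicv_active;
	bool irr_pending;
};

static void update_apicv(struct lapic_state *apic, bool enable)
{
	apic->apicv_active = enable;
	/*
	 * Whether enabling or disabling, other agents (vCPUs, IOMMUs) may
	 * still be injecting under the old state, so the only safe value
	 * is the pessimistic one: assume an IRQ may be pending.
	 */
	apic->irr_pending = true;
}

int main(void)
{
	struct lapic_state apic = { .apicv_active = false, .irr_pending = false };
	update_apicv(&apic, true);
	printf("irr_pending=%d\n", apic.irr_pending);
	return 0;
}
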
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 2392a7ef254df2..52bb847372fb4d 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -1197,11 +1197,14 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
+ kvm_hv_nested_transtion_tlb_flush(vcpu, enable_ept);
+
+ /*
+- * If vmcs12 doesn't use VPID, L1 expects linear and combined mappings
+- * for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a
+- * full TLB flush from the guest's perspective. This is required even
+- * if VPID is disabled in the host as KVM may need to synchronize the
+- * MMU in response to the guest TLB flush.
++ * If VPID is disabled, then guest TLB accesses use VPID=0, i.e. the
++ * same VPID as the host, and so architecturally, linear and combined
++ * mappings for VPID=0 must be flushed at VM-Enter and VM-Exit. KVM
++ * emulates L2 sharing L1's VPID=0 by using vpid01 while running L2,
++ * and so KVM must also emulate TLB flush of VPID=0, i.e. vpid01. This
++ * is required if VPID is disabled in KVM, as a TLB flush (there are no
++ * VPIDs) still occurs from L1's perspective, and KVM may need to
++ * synchronize the MMU in response to the guest TLB flush.
+ *
+ * Note, using TLB_FLUSH_GUEST is correct even if nested EPT is in use.
+ * EPT is a special snowflake, as guest-physical mappings aren't
+@@ -2291,6 +2294,17 @@ static void prepare_vmcs02_early_rare(struct vcpu_vmx *vmx,
+
+ vmcs_write64(VMCS_LINK_POINTER, INVALID_GPA);
+
++ /*
++ * If VPID is disabled, then guest TLB accesses use VPID=0, i.e. the
++ * same VPID as the host. Emulate this behavior by using vpid01 for L2
++ * if VPID is disabled in vmcs12. Note, if VPID is disabled, VM-Enter
++ * and VM-Exit are architecturally required to flush VPID=0, but *only*
++ * VPID=0. I.e. using vpid02 would be ok (so long as KVM emulates the
++ * required flushes), but doing so would cause KVM to over-flush. E.g.
++ * if L1 runs L2 X with VPID12=1, then runs L2 Y with VPID12 disabled,
++ * and then runs L2 X again, then KVM can and should retain TLB entries
++ * for VPID12=1.
++ */
+ if (enable_vpid) {
+ if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
+ vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
+@@ -5890,6 +5904,12 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
+ return nested_vmx_fail(vcpu,
+ VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
+
++ /*
++ * Always flush the effective vpid02, i.e. never flush the current VPID
++ * and never explicitly flush vpid01. INVVPID targets a VPID, not a
++ * VMCS, and so whether or not the current vmcs12 has VPID enabled is
++ * irrelevant (and there may not be a loaded vmcs12).
++ */
+ vpid02 = nested_get_vpid02(vcpu);
+ switch (type) {
+ case VMX_VPID_EXTENT_INDIVIDUAL_ADDR:
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 733a0c45d1a612..5455106b6d9c59 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -217,9 +217,11 @@ module_param(ple_window_shrink, uint, 0444);
+ static unsigned int ple_window_max = KVM_VMX_DEFAULT_PLE_WINDOW_MAX;
+ module_param(ple_window_max, uint, 0444);
+
+-/* Default is SYSTEM mode, 1 for host-guest mode */
++/* Default is SYSTEM mode, 1 for host-guest mode (which is BROKEN) */
+ int __read_mostly pt_mode = PT_MODE_SYSTEM;
++#ifdef CONFIG_BROKEN
+ module_param(pt_mode, int, S_IRUGO);
++#endif
+
+ struct x86_pmu_lbr __ro_after_init vmx_lbr_caps;
+
+@@ -3220,7 +3222,7 @@ void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
+
+ static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
+ {
+- if (is_guest_mode(vcpu))
++ if (is_guest_mode(vcpu) && nested_cpu_has_vpid(get_vmcs12(vcpu)))
+ return nested_get_vpid02(vcpu);
+ return to_vmx(vcpu)->vpid;
+ }
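
The vpid01/vpid02 reasoning in the nested.c comments above reduces to a small selection rule. A toy tagged-TLB model (all names and the struct made up) of the fixed vmx_get_current_vpid(): L2 only gets its own tag when vmcs12 actually enables VPID, otherwise it shares vpid01 with the host view of L1:

#include <stdbool.h>
#include <stdio.h>

struct vcpu { bool guest_mode; bool vmcs12_has_vpid; int vpid01, vpid02; };

static int current_vpid(const struct vcpu *v)
{
	if (v->guest_mode && v->vmcs12_has_vpid)
		return v->vpid02;
	return v->vpid01;
}

int main(void)
{
	struct vcpu v = { .guest_mode = true, .vmcs12_has_vpid = false,
			  .vpid01 = 1, .vpid02 = 2 };
	printf("L2 without VPID12 uses tag %d (vpid01)\n", current_vpid(&v));
	v.vmcs12_has_vpid = true;
	printf("L2 with VPID12 uses tag %d (vpid02)\n", current_vpid(&v));
	return 0;
}
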
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index aa7d279321ea0c..2c102dc164e197 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -655,7 +655,8 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
+ paddr_next = data->next;
+ len = data->len;
+
+- if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
++ if ((phys_addr > paddr) &&
++ (phys_addr < (paddr + sizeof(struct setup_data) + len))) {
+ memunmap(data);
+ return true;
+ }
+@@ -717,7 +718,8 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
+ paddr_next = data->next;
+ len = data->len;
+
+- if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
++ if ((phys_addr > paddr) &&
++ (phys_addr < (paddr + sizeof(struct setup_data) + len))) {
+ early_memunmap(data, sizeof(*data));
+ return true;
+ }
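
The ioremap fix widens the range check by sizeof(struct setup_data), since data->len counts only the payload while the region spans header plus payload. A worked example with arbitrary numbers, showing the final header-sized span the old check missed:

#include <stdint.h>
#include <stdio.h>

struct setup_data_hdr { uint64_t next; uint32_t type; uint32_t len; };

int main(void)
{
	uint64_t paddr = 0x1000;                       /* where the header lives */
	uint32_t len = 0x100;                          /* payload length only */
	uint64_t hdr = sizeof(struct setup_data_hdr);  /* 16 bytes */

	uint64_t probe = paddr + hdr + len - 1;        /* last byte of the blob */

	/* Old check: misses the last sizeof(header) bytes of the region. */
	printf("old check hits: %d\n", probe > paddr && probe < paddr + len);
	/* Fixed check covers header + payload. */
	printf("new check hits: %d\n", probe > paddr && probe < paddr + hdr + len);
	return 0;
}
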
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 1ccbb515751533..24d2f4f37d0fd5 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -3288,13 +3288,12 @@ static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ case INTEL_TLV_TEST_EXCEPTION:
+ /* Generate devcoredump from exception */
+ if (!hci_devcd_init(hdev, skb->len)) {
+- hci_devcd_append(hdev, skb);
++ hci_devcd_append(hdev, skb_clone(skb, GFP_ATOMIC));
+ hci_devcd_complete(hdev);
+ } else {
+ bt_dev_err(hdev, "Failed to generate devcoredump");
+- kfree_skb(skb);
+ }
+- return 0;
++ break;
+ default:
+ bt_dev_err(hdev, "Invalid exception type %02X", tlv->val[0]);
+ }
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index c8fdfe901dfb7c..adbf47967707d7 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -948,10 +948,13 @@ static int tpm2_load_null(struct tpm_chip *chip, u32 *null_key)
+ /* Deduce from the name change TPM interference: */
+ dev_err(&chip->dev, "null key integrity check failed\n");
+ tpm2_flush_context(chip, tmp_null_key);
+- chip->flags |= TPM_CHIP_FLAG_DISABLE;
+
+ err:
+- return rc ? -ENODEV : 0;
++ if (rc) {
++ chip->flags |= TPM_CHIP_FLAG_DISABLE;
++ rc = -ENODEV;
++ }
++ return rc;
+ }
+
+ /**
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index 4b7f1cbb9b04d6..408f1652886292 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -373,7 +373,7 @@ static int iter_perf_levels_update_state(struct scmi_iterator_state *st,
+ return 0;
+ }
+
+-static inline void
++static inline int
+ process_response_opp(struct device *dev, struct perf_dom_info *dom,
+ struct scmi_opp *opp, unsigned int loop_idx,
+ const struct scmi_msg_resp_perf_describe_levels *r)
+@@ -386,12 +386,16 @@ process_response_opp(struct device *dev, struct perf_dom_info *dom,
+ le16_to_cpu(r->opp[loop_idx].transition_latency_us);
+
+ ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
+- if (ret)
+- dev_warn(dev, "Failed to add opps_by_lvl at %d for %s - ret:%d\n",
++ if (ret) {
++ dev_info(dev, FW_BUG "Failed to add opps_by_lvl at %d for %s - ret:%d\n",
+ opp->perf, dom->info.name, ret);
++ return ret;
++ }
++
++ return 0;
+ }
+
+-static inline void
++static inline int
+ process_response_opp_v4(struct device *dev, struct perf_dom_info *dom,
+ struct scmi_opp *opp, unsigned int loop_idx,
+ const struct scmi_msg_resp_perf_describe_levels_v4 *r)
+@@ -404,9 +408,11 @@ process_response_opp_v4(struct device *dev, struct perf_dom_info *dom,
+ le16_to_cpu(r->opp[loop_idx].transition_latency_us);
+
+ ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
+- if (ret)
+- dev_warn(dev, "Failed to add opps_by_lvl at %d for %s - ret:%d\n",
++ if (ret) {
++ dev_info(dev, FW_BUG "Failed to add opps_by_lvl at %d for %s - ret:%d\n",
+ opp->perf, dom->info.name, ret);
++ return ret;
++ }
+
+ /* Note that PERF v4 reports always five 32-bit words */
+ opp->indicative_freq = le32_to_cpu(r->opp[loop_idx].indicative_freq);
+@@ -415,13 +421,21 @@ process_response_opp_v4(struct device *dev, struct perf_dom_info *dom,
+
+ ret = xa_insert(&dom->opps_by_idx, opp->level_index, opp,
+ GFP_KERNEL);
+- if (ret)
++ if (ret) {
+ dev_warn(dev,
+ "Failed to add opps_by_idx at %d for %s - ret:%d\n",
+ opp->level_index, dom->info.name, ret);
+
++ /* Cleanup by_lvl too */
++ xa_erase(&dom->opps_by_lvl, opp->perf);
++
++ return ret;
++ }
++
+ hash_add(dom->opps_by_freq, &opp->hash, opp->indicative_freq);
+ }
++
++ return 0;
+ }
+
+ static int
+@@ -429,16 +443,22 @@ iter_perf_levels_process_response(const struct scmi_protocol_handle *ph,
+ const void *response,
+ struct scmi_iterator_state *st, void *priv)
+ {
++ int ret;
+ struct scmi_opp *opp;
+ struct scmi_perf_ipriv *p = priv;
+
+- opp = &p->perf_dom->opp[st->desc_index + st->loop_idx];
++ opp = &p->perf_dom->opp[p->perf_dom->opp_count];
+ if (PROTOCOL_REV_MAJOR(p->version) <= 0x3)
+- process_response_opp(ph->dev, p->perf_dom, opp, st->loop_idx,
+- response);
++ ret = process_response_opp(ph->dev, p->perf_dom, opp,
++ st->loop_idx, response);
+ else
+- process_response_opp_v4(ph->dev, p->perf_dom, opp, st->loop_idx,
+- response);
++ ret = process_response_opp_v4(ph->dev, p->perf_dom, opp,
++ st->loop_idx, response);
++
++ /* Skip BAD duplicates received from firmware */
++ if (ret)
++ return ret == -EBUSY ? 0 : ret;
++
+ p->perf_dom->opp_count++;
+
+ dev_dbg(ph->dev, "Level %d Power %d Latency %dus Ifreq %d Index %d\n",
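
The reworked SCMI error handling amounts to: treat xa_insert() failure as fatal, roll back the first index when the second insert fails, and let the caller skip -EBUSY duplicates from firmware. A self-contained sketch with a toy fixed-size stand-in for the kernel xarray:

#include <errno.h>
#include <stdio.h>

#define NSLOTS 8

struct xa { void *slot[NSLOTS]; };

static int xa_insert(struct xa *xa, unsigned int idx, void *p)
{
	if (idx >= NSLOTS || xa->slot[idx])
		return -EBUSY;          /* duplicate (or out of range): reject */
	xa->slot[idx] = p;
	return 0;
}

static void xa_erase(struct xa *xa, unsigned int idx)
{
	if (idx < NSLOTS)
		xa->slot[idx] = NULL;
}

int main(void)
{
	struct xa by_lvl = {0}, by_idx = {0};
	int opp = 42;

	if (xa_insert(&by_lvl, 3, &opp))
		return 1;
	if (xa_insert(&by_idx, 3, &opp)) {
		xa_erase(&by_lvl, 3);   /* roll back the first index on failure */
		return 1;
	}

	/* A duplicate level from firmware now reports -EBUSY and is skipped
	 * by the caller instead of silently corrupting the opp[] count. */
	printf("duplicate insert -> %d\n", xa_insert(&by_lvl, 3, &opp));
	return 0;
}
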
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index e32161f6b67a37..11c5e43fc2fab7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -180,7 +180,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ * When GTT is just an alternative to VRAM make sure that we
+ * only use it as fallback and still try to fill up VRAM first.
+ */
+- if (domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM)
++ if (domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
++ !(adev->flags & AMD_IS_APU))
+ places[c].flags |= TTM_PL_FLAG_FALLBACK;
+ c++;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index b73136d390cc03..6dfa58741afe7a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1124,8 +1124,10 @@ static void gmc_v9_0_get_coherence_flags(struct amdgpu_device *adev,
+ uint64_t *flags)
+ {
+ struct amdgpu_device *bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
+- bool is_vram = bo->tbo.resource->mem_type == TTM_PL_VRAM;
+- bool coherent = bo->flags & (AMDGPU_GEM_CREATE_COHERENT | AMDGPU_GEM_CREATE_EXT_COHERENT);
++ bool is_vram = bo->tbo.resource &&
++ bo->tbo.resource->mem_type == TTM_PL_VRAM;
++ bool coherent = bo->flags & (AMDGPU_GEM_CREATE_COHERENT |
++ AMDGPU_GEM_CREATE_EXT_COHERENT);
+ bool ext_coherent = bo->flags & AMDGPU_GEM_CREATE_EXT_COHERENT;
+ bool uncached = bo->flags & AMDGPU_GEM_CREATE_UNCACHED;
+ struct amdgpu_vm *vm = mapping->bo_va->base.vm;
+@@ -1133,6 +1135,8 @@ static void gmc_v9_0_get_coherence_flags(struct amdgpu_device *adev,
+ bool snoop = false;
+ bool is_local;
+
++ dma_resv_assert_held(bo->tbo.base.resv);
++
+ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+ case IP_VERSION(9, 4, 1):
+ case IP_VERSION(9, 4, 2):
+@@ -1251,9 +1255,8 @@ static void gmc_v9_0_get_vm_pte(struct amdgpu_device *adev,
+ *flags &= ~AMDGPU_PTE_VALID;
+ }
+
+- if (bo && bo->tbo.resource)
+- gmc_v9_0_get_coherence_flags(adev, mapping->bo_va->base.bo,
+- mapping, flags);
++ if ((*flags & AMDGPU_PTE_VALID) && bo)
++ gmc_v9_0_get_coherence_flags(adev, bo, mapping, flags);
+ }
+
+ static void gmc_v9_0_override_vm_pte_flags(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index 62d792ed0323aa..0383d1f95780a4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -524,7 +524,7 @@ static int mes_v12_0_set_hw_resources_1(struct amdgpu_mes *mes, int pipe)
+ mes_set_hw_res_1_pkt.header.type = MES_API_TYPE_SCHEDULER;
+ mes_set_hw_res_1_pkt.header.opcode = MES_SCH_API_SET_HW_RSRC_1;
+ mes_set_hw_res_1_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+- mes_set_hw_res_1_pkt.mes_kiq_unmap_timeout = 100;
++ mes_set_hw_res_1_pkt.mes_kiq_unmap_timeout = 0xa;
+
+ return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe,
+ &mes_set_hw_res_1_pkt, sizeof(mes_set_hw_res_1_pkt),
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+index fb37e354a9d5c2..1ac730328516ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+@@ -247,6 +247,12 @@ static void nbio_v7_7_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF0_PCIE_MST_CTRL_3, data);
+
++ switch (adev->ip_versions[NBIO_HWIP][0]) {
++ case IP_VERSION(7, 7, 0):
++ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23);
++ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
++ break;
++ }
+ }
+
+ static void nbio_v7_7_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 4938e6b340e9e4..73065a85e0d264 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -67,8 +67,8 @@ static const struct amd_ip_funcs nv_common_ip_funcs;
+
+ /* Navi */
+ static const struct amdgpu_video_codec_info nv_video_codecs_encode_array[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 0)},
+ };
+
+ static const struct amdgpu_video_codecs nv_video_codecs_encode = {
+@@ -94,8 +94,8 @@ static const struct amdgpu_video_codecs nv_video_codecs_decode = {
+
+ /* Sienna Cichlid */
+ static const struct amdgpu_video_codec_info sc_video_codecs_encode_array[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2160, 0)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 7680, 4352, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ };
+
+ static const struct amdgpu_video_codecs sc_video_codecs_encode = {
+@@ -136,8 +136,8 @@ static const struct amdgpu_video_codecs sc_video_codecs_decode_vcn1 = {
+
+ /* SRIOV Sienna Cichlid, not const since data is controlled by host */
+ static struct amdgpu_video_codec_info sriov_sc_video_codecs_encode_array[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2160, 0)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 7680, 4352, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ };
+
+ static struct amdgpu_video_codec_info sriov_sc_video_codecs_decode_array_vcn0[] = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 8d16dacdc17203..307185c0e1b8f2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -90,8 +90,8 @@ static const struct amd_ip_funcs soc15_common_ip_funcs;
+ /* Vega, Raven, Arcturus */
+ static const struct amdgpu_video_codec_info vega_video_codecs_encode_array[] =
+ {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 0)},
+ };
+
+ static const struct amdgpu_video_codecs vega_video_codecs_encode =
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index d30ad7d56def9c..bba35880badb9f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -49,13 +49,13 @@ static const struct amd_ip_funcs soc21_common_ip_funcs;
+
+ /* SOC21 */
+ static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn0[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+
+ static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn1[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ };
+
+@@ -96,14 +96,14 @@ static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn1 = {
+
+ /* SRIOV SOC21, not const since data is controlled by host */
+ static struct amdgpu_video_codec_info sriov_vcn_4_0_0_video_codecs_encode_array_vcn0[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+
+ static struct amdgpu_video_codec_info sriov_vcn_4_0_0_video_codecs_encode_array_vcn1[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ };
+
+ static struct amdgpu_video_codecs sriov_vcn_4_0_0_video_codecs_encode_vcn0 = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc24.c b/drivers/gpu/drm/amd/amdgpu/soc24.c
+index fd4c3d4f838798..29a848f2466bb9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc24.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc24.c
+@@ -48,7 +48,7 @@
+ static const struct amd_ip_funcs soc24_common_ip_funcs;
+
+ static const struct amdgpu_video_codec_info vcn_5_0_0_video_codecs_encode_array_vcn0[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index d39c670f622046..792b2eb6bbacea 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -136,15 +136,15 @@ static const struct amdgpu_video_codec_info polaris_video_codecs_encode_array[]
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC,
+ .max_width = 4096,
+- .max_height = 2304,
+- .max_pixels_per_frame = 4096 * 2304,
++ .max_height = 4096,
++ .max_pixels_per_frame = 4096 * 4096,
+ .max_level = 0,
+ },
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC,
+ .max_width = 4096,
+- .max_height = 2304,
+- .max_pixels_per_frame = 4096 * 2304,
++ .max_height = 4096,
++ .max_pixels_per_frame = 4096 * 4096,
+ .max_level = 0,
+ },
+ };
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 245a26cdfc5222..339bdfb7af2f81 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -6740,7 +6740,7 @@ create_stream_for_sink(struct drm_connector *connector,
+ if (stream->out_transfer_func.tf == TRANSFER_FUNCTION_GAMMA22)
+ tf = TRANSFER_FUNC_GAMMA_22;
+ mod_build_vsc_infopacket(stream, &stream->vsc_infopacket, stream->output_color_space, tf);
+- aconnector->psr_skip_count = AMDGPU_DM_PSR_ENTRY_DELAY;
++ aconnector->sr_skip_count = AMDGPU_DM_PSR_ENTRY_DELAY;
+
+ }
+ finish:
+@@ -8830,6 +8830,56 @@ static void amdgpu_dm_update_cursor(struct drm_plane *plane,
+ }
+ }
+
++static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
++ const struct dm_crtc_state *acrtc_state,
++ const u64 current_ts)
++{
++ struct psr_settings *psr = &acrtc_state->stream->link->psr_settings;
++ struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
++ struct amdgpu_dm_connector *aconn =
++ (struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
++
++ if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
++ if (pr->config.replay_supported && !pr->replay_feature_enabled)
++ amdgpu_dm_link_setup_replay(acrtc_state->stream->link, aconn);
++ else if (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED &&
++ !psr->psr_feature_enabled)
++ if (!aconn->disallow_edp_enter_psr)
++ amdgpu_dm_link_setup_psr(acrtc_state->stream);
++ }
++
++ /* Decrement skip count when SR is enabled and we're doing fast updates. */
++ if (acrtc_state->update_type == UPDATE_TYPE_FAST &&
++ (psr->psr_feature_enabled || pr->config.replay_supported)) {
++ if (aconn->sr_skip_count > 0)
++ aconn->sr_skip_count--;
++
++ /* Allow SR when skip count is 0. */
++ acrtc_attach->dm_irq_params.allow_sr_entry = !aconn->sr_skip_count;
++
++ /*
++ * If sink supports PSR SU/Panel Replay, there is no need to rely on
++ * a vblank event disable request to enable PSR/RP. PSR SU/RP
++ * can be enabled immediately once OS demonstrates an
++ * adequate number of fast atomic commits to notify KMD
++ * of update events. See `vblank_control_worker()`.
++ */
++ if (acrtc_attach->dm_irq_params.allow_sr_entry &&
++#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
++ !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
++#endif
++ (current_ts - psr->psr_dirty_rects_change_timestamp_ns) > 500000000) {
++ if (pr->replay_feature_enabled && !pr->replay_allow_active)
++ amdgpu_dm_replay_enable(acrtc_state->stream, true);
++ if (psr->psr_version >= DC_PSR_VERSION_SU_1 &&
++ !psr->psr_allow_active && !aconn->disallow_edp_enter_psr)
++ amdgpu_dm_psr_enable(acrtc_state->stream);
++ }
++ } else {
++ acrtc_attach->dm_irq_params.allow_sr_entry = false;
++ }
++}
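
The sr_skip_count logic in the new helper is a simple debounce: only after a run of consecutive fast updates is self-refresh entry allowed. A compact sketch of that gating (the delay value is arbitrary, not the driver's constant):

#include <stdbool.h>
#include <stdio.h>

#define SR_ENTRY_DELAY 5

static int skip = SR_ENTRY_DELAY;

static bool fast_update(void)
{
	if (skip > 0)
		skip--;
	return skip == 0;   /* allow self-refresh entry only at zero */
}

int main(void)
{
	for (int i = 1; i <= 6; i++)
		printf("commit %d: allow_sr_entry=%d\n", i, fast_update());
	return 0;
}
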
++
+ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ struct drm_device *dev,
+ struct amdgpu_display_manager *dm,
+@@ -8983,7 +9033,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ * during the PSR-SU was disabled.
+ */
+ if (acrtc_state->stream->link->psr_settings.psr_version >= DC_PSR_VERSION_SU_1 &&
+- acrtc_attach->dm_irq_params.allow_psr_entry &&
++ acrtc_attach->dm_irq_params.allow_sr_entry &&
+ #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+ !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
+ #endif
+@@ -9158,9 +9208,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ bundle->stream_update.abm_level = &acrtc_state->abm_level;
+
+ mutex_lock(&dm->dc_lock);
+- if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
+- acrtc_state->stream->link->psr_settings.psr_allow_active)
+- amdgpu_dm_psr_disable(acrtc_state->stream);
++ if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
++ if (acrtc_state->stream->link->replay_settings.replay_allow_active)
++ amdgpu_dm_replay_disable(acrtc_state->stream);
++ if (acrtc_state->stream->link->psr_settings.psr_allow_active)
++ amdgpu_dm_psr_disable(acrtc_state->stream);
++ }
+ mutex_unlock(&dm->dc_lock);
+
+ /*
+@@ -9201,57 +9254,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ dm_update_pflip_irq_state(drm_to_adev(dev),
+ acrtc_attach);
+
+- if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
+- if (acrtc_state->stream->link->replay_settings.config.replay_supported &&
+- !acrtc_state->stream->link->replay_settings.replay_feature_enabled) {
+- struct amdgpu_dm_connector *aconn =
+- (struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
+- amdgpu_dm_link_setup_replay(acrtc_state->stream->link, aconn);
+- } else if (acrtc_state->stream->link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED &&
+- !acrtc_state->stream->link->psr_settings.psr_feature_enabled) {
+-
+- struct amdgpu_dm_connector *aconn = (struct amdgpu_dm_connector *)
+- acrtc_state->stream->dm_stream_context;
+-
+- if (!aconn->disallow_edp_enter_psr)
+- amdgpu_dm_link_setup_psr(acrtc_state->stream);
+- }
+- }
+-
+- /* Decrement skip count when PSR is enabled and we're doing fast updates. */
+- if (acrtc_state->update_type == UPDATE_TYPE_FAST &&
+- acrtc_state->stream->link->psr_settings.psr_feature_enabled) {
+- struct amdgpu_dm_connector *aconn =
+- (struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
+-
+- if (aconn->psr_skip_count > 0)
+- aconn->psr_skip_count--;
+-
+- /* Allow PSR when skip count is 0. */
+- acrtc_attach->dm_irq_params.allow_psr_entry = !aconn->psr_skip_count;
+-
+- /*
+- * If sink supports PSR SU, there is no need to rely on
+- * a vblank event disable request to enable PSR. PSR SU
+- * can be enabled immediately once OS demonstrates an
+- * adequate number of fast atomic commits to notify KMD
+- * of update events. See `vblank_control_worker()`.
+- */
+- if (acrtc_state->stream->link->psr_settings.psr_version >= DC_PSR_VERSION_SU_1 &&
+- acrtc_attach->dm_irq_params.allow_psr_entry &&
+-#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+- !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
+-#endif
+- !acrtc_state->stream->link->psr_settings.psr_allow_active &&
+- !aconn->disallow_edp_enter_psr &&
+- (timestamp_ns -
+- acrtc_state->stream->link->psr_settings.psr_dirty_rects_change_timestamp_ns) >
+- 500000000)
+- amdgpu_dm_psr_enable(acrtc_state->stream);
+- } else {
+- acrtc_attach->dm_irq_params.allow_psr_entry = false;
+- }
+-
++ amdgpu_dm_enable_self_refresh(acrtc_attach, acrtc_state, timestamp_ns);
+ mutex_unlock(&dm->dc_lock);
+ }
+
+@@ -12035,7 +12038,7 @@ static int parse_amd_vsdb(struct amdgpu_dm_connector *aconnector,
+ break;
+ }
+
+- while (j < EDID_LENGTH) {
++ while (j < EDID_LENGTH - sizeof(struct amd_vsdb_block)) {
+ struct amd_vsdb_block *amd_vsdb = (struct amd_vsdb_block *)&edid_ext[j];
+ unsigned int ieeeId = (amd_vsdb->ieee_id[2] << 16) | (amd_vsdb->ieee_id[1] << 8) | (amd_vsdb->ieee_id[0]);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 2d7755e2b6c320..87a8d1d4f9a1b6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -727,7 +727,7 @@ struct amdgpu_dm_connector {
+ /* Cached display modes */
+ struct drm_display_mode freesync_vid_base;
+
+- int psr_skip_count;
++ int sr_skip_count;
+ bool disallow_edp_enter_psr;
+
+ /* Record progress status of mst*/
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 99014339aaa390..df3d124c4d7a10 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -251,9 +251,10 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
+ else if (dm->active_vblank_irq_count)
+ dm->active_vblank_irq_count--;
+
+- dc_allow_idle_optimizations(dm->dc, dm->active_vblank_irq_count == 0);
+-
+- DRM_DEBUG_KMS("Allow idle optimizations (MALL): %d\n", dm->active_vblank_irq_count == 0);
++ if (dm->active_vblank_irq_count > 0) {
++ DRM_DEBUG_KMS("Allow idle optimizations (MALL): false\n");
++ dc_allow_idle_optimizations(dm->dc, false);
++ }
+
+ /*
+ * Control PSR based on vblank requirements from OS
+@@ -265,11 +266,15 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
+ * where the SU region is the full hactive*vactive region. See
+ * fill_dc_dirty_rects().
+ */
+- if (vblank_work->stream && vblank_work->stream->link) {
++ if (vblank_work->stream && vblank_work->stream->link && vblank_work->acrtc) {
+ amdgpu_dm_crtc_set_panel_sr_feature(
+ vblank_work, vblank_work->enable,
+- vblank_work->acrtc->dm_irq_params.allow_psr_entry ||
+- vblank_work->stream->link->replay_settings.replay_feature_enabled);
++ vblank_work->acrtc->dm_irq_params.allow_sr_entry);
++ }
++
++ if (dm->active_vblank_irq_count == 0) {
++ DRM_DEBUG_KMS("Allow idle optimizations (MALL): true\n");
++ dc_allow_idle_optimizations(dm->dc, true);
+ }
+
+ mutex_unlock(&dm->dc_lock);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq_params.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq_params.h
+index 5c9303241aeb99..6a7ecc1e4602e7 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq_params.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq_params.h
+@@ -33,7 +33,7 @@ struct dm_irq_params {
+ struct mod_vrr_params vrr_params;
+ struct dc_stream_state *stream;
+ int active_planes;
+- bool allow_psr_entry;
++ bool allow_sr_entry;
+ struct mod_freesync_config freesync_config;
+
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index be8fbb04ad98f8..c9a6de110b742f 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -3122,14 +3122,12 @@ static enum bp_result bios_parser_get_vram_info(
+ struct dc_vram_info *info)
+ {
+ struct bios_parser *bp = BP_FROM_DCB(dcb);
+- static enum bp_result result = BP_RESULT_BADBIOSTABLE;
++ enum bp_result result = BP_RESULT_BADBIOSTABLE;
+ struct atom_common_table_header *header;
+ struct atom_data_revision revision;
+
+ // vram info moved to umc_info for DCN4x
+- if (dcb->ctx->dce_version >= DCN_VERSION_4_01 &&
+- dcb->ctx->dce_version < DCN_VERSION_MAX &&
+- info && DATA_TABLES(umc_info)) {
++ if (info && DATA_TABLES(umc_info)) {
+ header = GET_IMAGE(struct atom_common_table_header,
+ DATA_TABLES(umc_info));
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+index 665157f8d4cbe2..0d801ce84c1acb 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+@@ -265,6 +265,9 @@ struct dc_state *dc_state_create_copy(struct dc_state *src_state)
+ dc_state_copy_internal(new_state, src_state);
+
+ #ifdef CONFIG_DRM_AMD_DC_FP
++ new_state->bw_ctx.dml2 = NULL;
++ new_state->bw_ctx.dml2_dc_power_source = NULL;
++
+ if (src_state->bw_ctx.dml2 &&
+ !dml2_create_copy(&new_state->bw_ctx.dml2, src_state->bw_ctx.dml2)) {
+ dc_state_release(new_state);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
+index a8bebc60f05939..2c2c7322cb1615 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
+@@ -30,6 +30,7 @@
+ #include "dml2_pmo_dcn4_fams2.h"
+
+ static const double MIN_VACTIVE_MARGIN_PCT = 0.25; // We need more than non-zero margin because DET buffer granularity can alter vactive latency hiding
++static const double MIN_BLANK_STUTTER_FACTOR = 3.0;
+
+ static const enum dml2_pmo_pstate_strategy base_strategy_list_1_display[][PMO_DCN4_MAX_DISPLAYS] = {
+ // VActive Preferred
+@@ -2004,6 +2005,7 @@ bool pmo_dcn4_fams2_init_for_stutter(struct dml2_pmo_init_for_stutter_in_out *in
+ struct dml2_pmo_instance *pmo = in_out->instance;
+ bool stutter_period_meets_z8_eco = true;
+ bool z8_stutter_optimization_too_expensive = false;
++ bool stutter_optimization_too_expensive = false;
+ double line_time_us, vblank_nom_time_us;
+
+ unsigned int i;
+@@ -2025,10 +2027,15 @@ bool pmo_dcn4_fams2_init_for_stutter(struct dml2_pmo_init_for_stutter_in_out *in
+ line_time_us = (double)in_out->base_display_config->display_config.stream_descriptors[i].timing.h_total / (in_out->base_display_config->display_config.stream_descriptors[i].timing.pixel_clock_khz * 1000) * 1000000;
+ vblank_nom_time_us = line_time_us * in_out->base_display_config->display_config.stream_descriptors[i].timing.vblank_nom;
+
+- if (vblank_nom_time_us < pmo->soc_bb->power_management_parameters.z8_stutter_exit_latency_us) {
++ if (vblank_nom_time_us < pmo->soc_bb->power_management_parameters.z8_stutter_exit_latency_us * MIN_BLANK_STUTTER_FACTOR) {
+ z8_stutter_optimization_too_expensive = true;
+ break;
+ }
++
++ if (vblank_nom_time_us < pmo->soc_bb->power_management_parameters.stutter_enter_plus_exit_latency_us * MIN_BLANK_STUTTER_FACTOR) {
++ stutter_optimization_too_expensive = true;
++ break;
++ }
+ }
+
+ pmo->scratch.pmo_dcn4.num_stutter_candidates = 0;
+@@ -2044,7 +2051,7 @@ bool pmo_dcn4_fams2_init_for_stutter(struct dml2_pmo_init_for_stutter_in_out *in
+ pmo->scratch.pmo_dcn4.z8_vblank_optimizable = false;
+ }
+
+- if (pmo->soc_bb->power_management_parameters.stutter_enter_plus_exit_latency_us > 0) {
++ if (!stutter_optimization_too_expensive && pmo->soc_bb->power_management_parameters.stutter_enter_plus_exit_latency_us > 0) {
+ pmo->scratch.pmo_dcn4.optimal_vblank_reserved_time_for_stutter_us[pmo->scratch.pmo_dcn4.num_stutter_candidates] = (unsigned int)pmo->soc_bb->power_management_parameters.stutter_enter_plus_exit_latency_us;
+ pmo->scratch.pmo_dcn4.num_stutter_candidates++;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index ee1bcfaae3e3db..80e60ea2d11e3c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1259,33 +1259,26 @@ static int smu_sw_init(void *handle)
+ smu->watermarks_bitmap = 0;
+ smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+- smu->user_dpm_profile.user_workload_mask = 0;
+
+ atomic_set(&smu->smu_power.power_gate.vcn_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.jpeg_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
+
+- smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
+- smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
+- smu->workload_priority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
+- smu->workload_priority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
+- smu->workload_priority[PP_SMC_POWER_PROFILE_VR] = 4;
+- smu->workload_priority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
+- smu->workload_priority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
++ smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
+
+ if (smu->is_apu ||
+- !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D)) {
+- smu->driver_workload_mask =
+- 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
+- } else {
+- smu->driver_workload_mask =
+- 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
+- smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+- }
++ !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D))
++ smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
++ else
++ smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
+
+- smu->workload_mask = smu->driver_workload_mask |
+- smu->user_dpm_profile.user_workload_mask;
+ smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+ smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
+@@ -2355,20 +2348,17 @@ static int smu_switch_power_profile(void *handle,
+ return -EINVAL;
+
+ if (!en) {
+- smu->driver_workload_mask &= ~(1 << smu->workload_priority[type]);
++ smu->workload_mask &= ~(1 << smu->workload_prority[type]);
+ index = fls(smu->workload_mask);
+ index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ workload[0] = smu->workload_setting[index];
+ } else {
+- smu->driver_workload_mask |= (1 << smu->workload_priority[type]);
++ smu->workload_mask |= (1 << smu->workload_prority[type]);
+ index = fls(smu->workload_mask);
+ index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ workload[0] = smu->workload_setting[index];
+ }
+
+- smu->workload_mask = smu->driver_workload_mask |
+- smu->user_dpm_profile.user_workload_mask;
+-
+ if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+ smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+ smu_bump_power_profile_mode(smu, workload, 0);
+@@ -3059,23 +3049,12 @@ static int smu_set_power_profile_mode(void *handle,
+ uint32_t param_size)
+ {
+ struct smu_context *smu = handle;
+- int ret;
+
+ if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled ||
+ !smu->ppt_funcs->set_power_profile_mode)
+ return -EOPNOTSUPP;
+
+- if (smu->user_dpm_profile.user_workload_mask &
+- (1 << smu->workload_priority[param[param_size]]))
+- return 0;
+-
+- smu->user_dpm_profile.user_workload_mask =
+- (1 << smu->workload_priority[param[param_size]]);
+- smu->workload_mask = smu->user_dpm_profile.user_workload_mask |
+- smu->driver_workload_mask;
+- ret = smu_bump_power_profile_mode(smu, param, param_size);
+-
+- return ret;
++ return smu_bump_power_profile_mode(smu, param, param_size);
+ }
+
+ static int smu_get_fan_control_mode(void *handle, u32 *fan_mode)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+index d60d9a12a47ef7..b44a185d07e84c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+@@ -240,7 +240,6 @@ struct smu_user_dpm_profile {
+ /* user clock state information */
+ uint32_t clk_mask[SMU_CLK_COUNT];
+ uint32_t clk_dependency;
+- uint32_t user_workload_mask;
+ };
+
+ #define SMU_TABLE_INIT(tables, table_id, s, a, d) \
+@@ -558,8 +557,7 @@ struct smu_context {
+ bool disable_uclk_switch;
+
+ uint32_t workload_mask;
+- uint32_t driver_workload_mask;
+- uint32_t workload_priority[WORKLOAD_POLICY_MAX];
++ uint32_t workload_prority[WORKLOAD_POLICY_MAX];
+ uint32_t workload_setting[WORKLOAD_POLICY_MAX];
+ uint32_t power_profile_mode;
+ uint32_t default_power_profile_mode;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index 31fe512028f460..c0f6b59369b7c4 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1455,6 +1455,7 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
+ return -EINVAL;
+ }
+
++
+ if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
+ (smu->smc_fw_version >= 0x360d00)) {
+ if (size != 10)
+@@ -1522,14 +1523,14 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_SetWorkloadMask,
+- smu->workload_mask,
++ 1 << workload_type,
+ NULL);
+ if (ret) {
+ dev_err(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
+ return ret;
+ }
+
+- smu_cmn_assign_power_profile(smu);
++ smu->power_profile_mode = profile_mode;
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index bb4ae529ae20ef..076620fa3ef5a8 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2081,13 +2081,10 @@ static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, u
+ smu->power_profile_mode);
+ if (workload_type < 0)
+ return -EINVAL;
+-
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- smu->workload_mask, NULL);
++ 1 << workload_type, NULL);
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
+- else
+- smu_cmn_assign_power_profile(smu);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index ca94c52663c071..0d3e1a121b670a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1786,13 +1786,10 @@ static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *
+ smu->power_profile_mode);
+ if (workload_type < 0)
+ return -EINVAL;
+-
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- smu->workload_mask, NULL);
++ 1 << workload_type, NULL);
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
+- else
+- smu_cmn_assign_power_profile(smu);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 952ee22cbc90e0..1fe020f1f4dbe2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -1079,7 +1079,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- smu->workload_mask,
++ 1 << workload_type,
+ NULL);
+ if (ret) {
+ dev_err_once(smu->adev->dev, "Fail to set workload type %d\n",
+@@ -1087,7 +1087,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
+ return ret;
+ }
+
+- smu_cmn_assign_power_profile(smu);
++ smu->power_profile_mode = profile_mode;
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index 62316a6707ef2f..cc0504b063fa3a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -890,14 +890,14 @@ static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, u
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- smu->workload_mask,
++ 1 << workload_type,
+ NULL);
+ if (ret) {
+ dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
+ return ret;
+ }
+
+- smu_cmn_assign_power_profile(smu);
++ smu->power_profile_mode = profile_mode;
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 5dd7ceca64feed..d53e162dcd8de2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2485,7 +2485,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+ int workload_type, ret = 0;
+- u32 workload_mask;
++ u32 workload_mask, selected_workload_mask;
+
+ smu->power_profile_mode = input[size];
+
+@@ -2552,7 +2552,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ if (workload_type < 0)
+ return -EINVAL;
+
+- workload_mask = 1 << workload_type;
++ selected_workload_mask = workload_mask = 1 << workload_type;
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+ if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+@@ -2567,22 +2567,12 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ workload_mask |= 1 << workload_type;
+ }
+
+- smu->workload_mask |= workload_mask;
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_SetWorkloadMask,
+- smu->workload_mask,
++ workload_mask,
+ NULL);
+- if (!ret) {
+- smu_cmn_assign_power_profile(smu);
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING) {
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- PP_SMC_POWER_PROFILE_FULLSCREEN3D);
+- smu->power_profile_mode = smu->workload_mask & (1 << workload_type)
+- ? PP_SMC_POWER_PROFILE_FULLSCREEN3D
+- : PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+- }
+- }
++ if (!ret)
++ smu->workload_mask = selected_workload_mask;
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 9d0b19419de0ff..b891a5e0a3969a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2499,14 +2499,13 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
+ smu->power_profile_mode);
+ if (workload_type < 0)
+ return -EINVAL;
+-
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- smu->workload_mask, NULL);
++ 1 << workload_type, NULL);
+
+ if (ret)
+ dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
+ else
+- smu_cmn_assign_power_profile(smu);
++ smu->workload_mask = (1 << workload_type);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
+index 8798ebfcea8328..84f9b007b59f2e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
+@@ -1132,7 +1132,7 @@ static int smu_v14_0_common_get_dpm_level_count(struct smu_context *smu,
+ static int smu_v14_0_0_print_clk_levels(struct smu_context *smu,
+ enum smu_clk_type clk_type, char *buf)
+ {
+- int i, size = 0, ret = 0;
++ int i, idx, ret = 0, size = 0;
+ uint32_t cur_value = 0, value = 0, count = 0;
+ uint32_t min, max;
+
+@@ -1168,7 +1168,8 @@ static int smu_v14_0_0_print_clk_levels(struct smu_context *smu,
+ break;
+
+ for (i = 0; i < count; i++) {
+- ret = smu_v14_0_common_get_dpm_freq_by_index(smu, clk_type, i, &value);
++ idx = (clk_type == SMU_MCLK) ? (count - i - 1) : i;
++ ret = smu_v14_0_common_get_dpm_freq_by_index(smu, clk_type, idx, &value);
+ if (ret)
+ break;
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index d9f0e7f81ed788..eaf80c5b3e4d06 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1508,11 +1508,12 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+ if (workload_type < 0)
+ return -EINVAL;
+
+- ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- smu->workload_mask, NULL);
+-
++ ret = smu_cmn_send_smc_msg_with_param(smu,
++ SMU_MSG_SetWorkloadMask,
++ 1 << workload_type,
++ NULL);
+ if (!ret)
+- smu_cmn_assign_power_profile(smu);
++ smu->workload_mask = 1 << workload_type;
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index bdfc5e617333df..91ad434bcdaeb4 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -1138,14 +1138,6 @@ int smu_cmn_set_mp1_state(struct smu_context *smu,
+ return ret;
+ }
+
+-void smu_cmn_assign_power_profile(struct smu_context *smu)
+-{
+- uint32_t index;
+- index = fls(smu->workload_mask);
+- index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- smu->power_profile_mode = smu->workload_setting[index];
+-}
+-
+ bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev)
+ {
+ struct pci_dev *p = NULL;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 8a801e389659d1..1de685defe85b1 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -130,8 +130,6 @@ void smu_cmn_init_soft_gpu_metrics(void *table, uint8_t frev, uint8_t crev);
+ int smu_cmn_set_mp1_state(struct smu_context *smu,
+ enum pp_mp1_state mp1_state);
+
+-void smu_cmn_assign_power_profile(struct smu_context *smu);
+-
+ /*
+ * Helper function to make sysfs_emit_at() happy. Align buf to
+ * the current page boundary and record the offset.
+diff --git a/drivers/gpu/drm/bridge/tc358768.c b/drivers/gpu/drm/bridge/tc358768.c
+index 0e8813278a2fa5..bb1750a3dab0dc 100644
+--- a/drivers/gpu/drm/bridge/tc358768.c
++++ b/drivers/gpu/drm/bridge/tc358768.c
+@@ -125,6 +125,9 @@
+ #define TC358768_DSI_CONFW_MODE_CLR (6 << 29)
+ #define TC358768_DSI_CONFW_ADDR_DSI_CONTROL (0x3 << 24)
+
++/* TC358768_DSICMD_TX (0x0600) register */
++#define TC358768_DSI_CMDTX_DC_START BIT(0)
++
+ static const char * const tc358768_supplies[] = {
+ "vddc", "vddmipi", "vddio"
+ };
+@@ -229,6 +232,21 @@ static void tc358768_update_bits(struct tc358768_priv *priv, u32 reg, u32 mask,
+ tc358768_write(priv, reg, tmp);
+ }
+
++static void tc358768_dsicmd_tx(struct tc358768_priv *priv)
++{
++ u32 val;
++
++ /* start transfer */
++ tc358768_write(priv, TC358768_DSICMD_TX, TC358768_DSI_CMDTX_DC_START);
++ if (priv->error)
++ return;
++
++ /* wait transfer completion */
++ priv->error = regmap_read_poll_timeout(priv->regmap, TC358768_DSICMD_TX, val,
++ (val & TC358768_DSI_CMDTX_DC_START) == 0,
++ 100, 100000);
++}
++
+ static int tc358768_sw_reset(struct tc358768_priv *priv)
+ {
+ /* Assert Reset */
+@@ -516,8 +534,7 @@ static ssize_t tc358768_dsi_host_transfer(struct mipi_dsi_host *host,
+ }
+ }
+
+- /* start transfer */
+- tc358768_write(priv, TC358768_DSICMD_TX, 1);
++ tc358768_dsicmd_tx(priv);
+
+ ret = tc358768_clear_error(priv);
+ if (ret)
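
The new tc358768_dsicmd_tx() helper above replaces a fire-and-forget register write with a write-then-poll sequence. The general pattern, as a minimal sketch (register and bit names are placeholders):

#include <linux/regmap.h>

/* Start an operation and wait for the hardware to clear the start
 * bit: poll every 100 us, giving up with -ETIMEDOUT after 100 ms.
 */
static int start_and_wait(struct regmap *map, unsigned int reg, u32 start_bit)
{
        u32 val;
        int ret;

        ret = regmap_write(map, reg, start_bit);
        if (ret)
                return ret;

        return regmap_read_poll_timeout(map, reg, val,
                                        (val & start_bit) == 0,
                                        100, 100000);
}
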
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.c
+index 551b0d7974ff13..5dc0ccd0763639 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.c
+@@ -80,6 +80,7 @@ int intel_gsc_fw_get_binary_info(struct intel_uc_fw *gsc_fw, const void *data, s
+ const struct intel_gsc_cpd_header_v2 *cpd_header = NULL;
+ const struct intel_gsc_cpd_entry *cpd_entry = NULL;
+ const struct intel_gsc_manifest_header *manifest;
++ struct intel_uc_fw_ver min_ver = { 0 };
+ size_t min_size = sizeof(*layout);
+ int i;
+
+@@ -212,33 +213,46 @@ int intel_gsc_fw_get_binary_info(struct intel_uc_fw *gsc_fw, const void *data, s
+ }
+ }
+
+- if (IS_ARROWLAKE(gt->i915)) {
++ /*
++ * ARL SKUs require newer firmwares, but the blob is actually common
++ * across all MTL and ARL SKUs, so we need to do an explicit version check
++ * here rather than using a separate table entry. If a too old version
++ * is found, then just don't use GSC rather than aborting the driver load.
++ * Note that the major number in the GSC FW version is used to indicate
++ * the platform, so we expect it to always be 102 for MTL/ARL binaries.
++ */
++ if (IS_ARROWLAKE_S(gt->i915))
++ min_ver = (struct intel_uc_fw_ver){ 102, 0, 10, 1878 };
++ else if (IS_ARROWLAKE_H(gt->i915) || IS_ARROWLAKE_U(gt->i915))
++ min_ver = (struct intel_uc_fw_ver){ 102, 1, 15, 1926 };
++
++ if (IS_METEORLAKE(gt->i915) && gsc->release.major != 102) {
++ gt_info(gt, "Invalid GSC firmware for MTL/ARL, got %d.%d.%d.%d but need 102.x.x.x",
++ gsc->release.major, gsc->release.minor,
++ gsc->release.patch, gsc->release.build);
++ return -EINVAL;
++ }
++
++ if (min_ver.major) {
+ bool too_old = false;
+
+- /*
+- * ARL requires a newer firmware than MTL did (102.0.10.1878) but the
+- * firmware is actually common. So, need to do an explicit version check
+- * here rather than using a separate table entry. And if the older
+- * MTL-only version is found, then just don't use GSC rather than aborting
+- * the driver load.
+- */
+- if (gsc->release.major < 102) {
++ if (gsc->release.minor < min_ver.minor) {
+ too_old = true;
+- } else if (gsc->release.major == 102) {
+- if (gsc->release.minor == 0) {
+- if (gsc->release.patch < 10) {
++ } else if (gsc->release.minor == min_ver.minor) {
++ if (gsc->release.patch < min_ver.patch) {
++ too_old = true;
++ } else if (gsc->release.patch == min_ver.patch) {
++ if (gsc->release.build < min_ver.build)
+ too_old = true;
+- } else if (gsc->release.patch == 10) {
+- if (gsc->release.build < 1878)
+- too_old = true;
+- }
+ }
+ }
+
+ if (too_old) {
+- gt_info(gt, "GSC firmware too old for ARL, got %d.%d.%d.%d but need at least 102.0.10.1878",
++ gt_info(gt, "GSC firmware too old for ARL, got %d.%d.%d.%d but need at least %d.%d.%d.%d",
+ gsc->release.major, gsc->release.minor,
+- gsc->release.patch, gsc->release.build);
++ gsc->release.patch, gsc->release.build,
++ min_ver.major, min_ver.minor,
++ min_ver.patch, min_ver.build);
+ return -EINVAL;
+ }
+ }
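
The restructured check above validates the fixed major number once, then compares (minor, patch, build) against a per-SKU minimum. That is an ordinary lexicographic tuple comparison; a sketch (the struct here is hypothetical, not the driver's intel_uc_fw_ver):

/* Returns true if version a predates version b, assuming the major
 * number has already been validated separately.
 */
struct fw_ver { int minor, patch, build; };

static bool fw_ver_older(const struct fw_ver *a, const struct fw_ver *b)
{
        if (a->minor != b->minor)
                return a->minor < b->minor;
        if (a->patch != b->patch)
                return a->patch < b->patch;
        return a->build < b->build;
}
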
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 110340e02a0213..0c0c666f11ea20 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -546,8 +546,12 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ #define IS_LUNARLAKE(i915) (0 && i915)
+ #define IS_BATTLEMAGE(i915) (0 && i915)
+
+-#define IS_ARROWLAKE(i915) \
+- IS_SUBPLATFORM(i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_ARL)
++#define IS_ARROWLAKE_H(i915) \
++ IS_SUBPLATFORM(i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_ARL_H)
++#define IS_ARROWLAKE_U(i915) \
++ IS_SUBPLATFORM(i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_ARL_U)
++#define IS_ARROWLAKE_S(i915) \
++ IS_SUBPLATFORM(i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_ARL_S)
+ #define IS_DG2_G10(i915) \
+ IS_SUBPLATFORM(i915, INTEL_DG2, INTEL_SUBPLATFORM_G10)
+ #define IS_DG2_G11(i915) \
+diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
+index 01a6502530501a..bd0cb707e9d494 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.c
++++ b/drivers/gpu/drm/i915/intel_device_info.c
+@@ -202,8 +202,16 @@ static const u16 subplatform_g12_ids[] = {
+ INTEL_DG2_G12_IDS(ID),
+ };
+
+-static const u16 subplatform_arl_ids[] = {
+- INTEL_ARL_IDS(ID),
++static const u16 subplatform_arl_h_ids[] = {
++ INTEL_ARL_H_IDS(ID),
++};
++
++static const u16 subplatform_arl_u_ids[] = {
++ INTEL_ARL_U_IDS(ID),
++};
++
++static const u16 subplatform_arl_s_ids[] = {
++ INTEL_ARL_S_IDS(ID),
+ };
+
+ static bool find_devid(u16 id, const u16 *p, unsigned int num)
+@@ -263,9 +271,15 @@ static void intel_device_info_subplatform_init(struct drm_i915_private *i915)
+ } else if (find_devid(devid, subplatform_g12_ids,
+ ARRAY_SIZE(subplatform_g12_ids))) {
+ mask = BIT(INTEL_SUBPLATFORM_G12);
+- } else if (find_devid(devid, subplatform_arl_ids,
+- ARRAY_SIZE(subplatform_arl_ids))) {
+- mask = BIT(INTEL_SUBPLATFORM_ARL);
++ } else if (find_devid(devid, subplatform_arl_h_ids,
++ ARRAY_SIZE(subplatform_arl_h_ids))) {
++ mask = BIT(INTEL_SUBPLATFORM_ARL_H);
++ } else if (find_devid(devid, subplatform_arl_u_ids,
++ ARRAY_SIZE(subplatform_arl_u_ids))) {
++ mask = BIT(INTEL_SUBPLATFORM_ARL_U);
++ } else if (find_devid(devid, subplatform_arl_s_ids,
++ ARRAY_SIZE(subplatform_arl_s_ids))) {
++ mask = BIT(INTEL_SUBPLATFORM_ARL_S);
+ }
+
+ GEM_BUG_ON(mask & ~INTEL_SUBPLATFORM_MASK);
+diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
+index 643ff1bf74eeb0..a9fcaf33df9e26 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.h
++++ b/drivers/gpu/drm/i915/intel_device_info.h
+@@ -128,7 +128,9 @@ enum intel_platform {
+ #define INTEL_SUBPLATFORM_RPLU 2
+
+ /* MTL */
+-#define INTEL_SUBPLATFORM_ARL 0
++#define INTEL_SUBPLATFORM_ARL_H 0
++#define INTEL_SUBPLATFORM_ARL_U 1
++#define INTEL_SUBPLATFORM_ARL_S 2
+
+ enum intel_ppgtt_type {
+ INTEL_PPGTT_NONE = I915_GEM_PPGTT_NONE,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/r535.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/r535.c
+index 027867c2a8c5b6..99110ab2f44dcd 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/r535.c
+@@ -992,7 +992,7 @@ r535_dp_train_target(struct nvkm_outp *outp, u8 target, bool mst, u8 link_nr, u8
+ ctrl->data = data;
+
+ ret = nvkm_gsp_rm_ctrl_push(&disp->rm.objcom, &ctrl, sizeof(*ctrl));
+- if (ret == -EAGAIN && ctrl->retryTimeMs) {
++ if ((ret == -EAGAIN || ret == -EBUSY) && ctrl->retryTimeMs) {
+ /*
+ * Device (likely an eDP panel) isn't ready yet, wait for the time specified
+ * by GSP before retrying again
+@@ -1060,33 +1060,44 @@ r535_dp_aux_xfer(struct nvkm_outp *outp, u8 type, u32 addr, u8 *data, u8 *psize)
+ NV0073_CTRL_DP_AUXCH_CTRL_PARAMS *ctrl;
+ u8 size = *psize;
+ int ret;
++ int retries;
+
+- ctrl = nvkm_gsp_rm_ctrl_get(&disp->rm.objcom, NV0073_CTRL_CMD_DP_AUXCH_CTRL, sizeof(*ctrl));
+- if (IS_ERR(ctrl))
+- return PTR_ERR(ctrl);
++ for (retries = 0; retries < 3; ++retries) {
++ ctrl = nvkm_gsp_rm_ctrl_get(&disp->rm.objcom, NV0073_CTRL_CMD_DP_AUXCH_CTRL, sizeof(*ctrl));
++ if (IS_ERR(ctrl))
++ return PTR_ERR(ctrl);
+
+- ctrl->subDeviceInstance = 0;
+- ctrl->displayId = BIT(outp->index);
+- ctrl->bAddrOnly = !size;
+- ctrl->cmd = type;
+- if (ctrl->bAddrOnly) {
+- ctrl->cmd = NVDEF_SET(ctrl->cmd, NV0073_CTRL, DP_AUXCH_CMD, REQ_TYPE, WRITE);
+- ctrl->cmd = NVDEF_SET(ctrl->cmd, NV0073_CTRL, DP_AUXCH_CMD, I2C_MOT, FALSE);
+- }
+- ctrl->addr = addr;
+- ctrl->size = !ctrl->bAddrOnly ? (size - 1) : 0;
+- memcpy(ctrl->data, data, size);
++ ctrl->subDeviceInstance = 0;
++ ctrl->displayId = BIT(outp->index);
++ ctrl->bAddrOnly = !size;
++ ctrl->cmd = type;
++ if (ctrl->bAddrOnly) {
++ ctrl->cmd = NVDEF_SET(ctrl->cmd, NV0073_CTRL, DP_AUXCH_CMD, REQ_TYPE, WRITE);
++ ctrl->cmd = NVDEF_SET(ctrl->cmd, NV0073_CTRL, DP_AUXCH_CMD, I2C_MOT, FALSE);
++ }
++ ctrl->addr = addr;
++ ctrl->size = !ctrl->bAddrOnly ? (size - 1) : 0;
++ memcpy(ctrl->data, data, size);
+
+- ret = nvkm_gsp_rm_ctrl_push(&disp->rm.objcom, &ctrl, sizeof(*ctrl));
+- if (ret) {
+- nvkm_gsp_rm_ctrl_done(&disp->rm.objcom, ctrl);
+- return ret;
++ ret = nvkm_gsp_rm_ctrl_push(&disp->rm.objcom, &ctrl, sizeof(*ctrl));
++ if ((ret == -EAGAIN || ret == -EBUSY) && ctrl->retryTimeMs) {
++ /*
++ * Device (likely an eDP panel) isn't ready yet, wait for the time specified
++ * by GSP before retrying again
++ */
++ nvkm_debug(&disp->engine.subdev,
++ "Waiting %dms for GSP LT panel delay before retrying in AUX\n",
++ ctrl->retryTimeMs);
++ msleep(ctrl->retryTimeMs);
++ nvkm_gsp_rm_ctrl_done(&disp->rm.objcom, ctrl);
++ } else {
++ memcpy(data, ctrl->data, size);
++ *psize = ctrl->size;
++ ret = ctrl->replyType;
++ nvkm_gsp_rm_ctrl_done(&disp->rm.objcom, ctrl);
++ break;
++ }
+ }
+-
+- memcpy(data, ctrl->data, size);
+- *psize = ctrl->size;
+- ret = ctrl->replyType;
+- nvkm_gsp_rm_ctrl_done(&disp->rm.objcom, ctrl);
+ return ret;
+ }
+
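
The AUX path above now mirrors the link-training path: on -EAGAIN or -EBUSY with a GSP-supplied retry delay, sleep and try again, bounded at three attempts. The generic shape of that pattern, as a sketch (try_once() and its delay out-parameter are hypothetical):

#include <linux/delay.h>
#include <linux/errno.h>

static int retry_busy_op(int (*try_once)(unsigned int *delay_ms), int max_tries)
{
        unsigned int delay_ms = 0;
        int i, ret = -EBUSY;

        for (i = 0; i < max_tries; i++) {
                ret = try_once(&delay_ms);
                if ((ret != -EAGAIN && ret != -EBUSY) || !delay_ms)
                        break;
                msleep(delay_ms);       /* device-suggested back-off */
        }
        return ret;
}
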
+diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c b/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c
+index a1c8545f1249a1..cac6d64ab67d1d 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c
++++ b/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c
+@@ -89,11 +89,6 @@ nvkm_falcon_fw_boot(struct nvkm_falcon_fw *fw, struct nvkm_subdev *user,
+ nvkm_falcon_fw_dtor_sigs(fw);
+ }
+
+- /* after last write to the img, sync dma mappings */
+- dma_sync_single_for_device(fw->fw.device->dev,
+- fw->fw.phys,
+- sg_dma_len(&fw->fw.mem.sgl),
+- DMA_TO_DEVICE);
+
+ FLCNFW_DBG(fw, "resetting");
+ fw->func->reset(fw);
+@@ -105,6 +100,12 @@ nvkm_falcon_fw_boot(struct nvkm_falcon_fw *fw, struct nvkm_subdev *user,
+ goto done;
+ }
+
++ /* after last write to the img, sync dma mappings */
++ dma_sync_single_for_device(fw->fw.device->dev,
++ fw->fw.phys,
++ sg_dma_len(&fw->fw.mem.sgl),
++ DMA_TO_DEVICE);
++
+ ret = fw->func->load(fw);
+ if (ret)
+ goto done;
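
The dma_sync_single_for_device() call is moved so it runs after the last CPU write to the firmware image (the reset/setup step above may still patch it) and immediately before the device loads it; syncing earlier can hand the device a stale view on non-coherent systems. A sketch of the streaming-DMA ownership rule:

#include <linux/dma-mapping.h>

/* The CPU must finish all writes before transferring buffer ownership
 * to the device for a DMA_TO_DEVICE mapping.
 */
static void publish_to_device(struct device *dev, u8 *buf,
                              dma_addr_t dma, size_t len)
{
        buf[0] = 0xAA;  /* last CPU write into the buffer */
        dma_sync_single_for_device(dev, dma, len, DMA_TO_DEVICE);
        /* from here on the device may DMA-read a coherent view of buf */
}
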
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index cf58f9da91396d..d586aea3089841 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -78,7 +78,7 @@ r535_rpc_status_to_errno(uint32_t rpc_status)
+ switch (rpc_status) {
+ case 0x55: /* NV_ERR_NOT_READY */
+ case 0x66: /* NV_ERR_TIMEOUT_RETRY */
+- return -EAGAIN;
++ return -EBUSY;
+ case 0x51: /* NV_ERR_NO_MEMORY */
+ return -ENOMEM;
+ default:
+@@ -601,7 +601,7 @@ r535_gsp_rpc_rm_alloc_push(struct nvkm_gsp_object *object, void *argv, u32 repc)
+
+ if (rpc->status) {
+ ret = ERR_PTR(r535_rpc_status_to_errno(rpc->status));
+- if (PTR_ERR(ret) != -EAGAIN)
++ if (PTR_ERR(ret) != -EAGAIN && PTR_ERR(ret) != -EBUSY)
+ nvkm_error(&gsp->subdev, "RM_ALLOC: 0x%x\n", rpc->status);
+ } else {
+ ret = repc ? rpc->params : NULL;
+@@ -660,7 +660,7 @@ r535_gsp_rpc_rm_ctrl_push(struct nvkm_gsp_object *object, void **argv, u32 repc)
+
+ if (rpc->status) {
+ ret = r535_rpc_status_to_errno(rpc->status);
+- if (ret != -EAGAIN)
++ if (ret != -EAGAIN && ret != -EBUSY)
+ nvkm_error(&gsp->subdev, "cli:0x%08x obj:0x%08x ctrl cmd:0x%08x failed: 0x%08x\n",
+ object->client->object.handle, object->handle, rpc->cmd, rpc->status);
+ }
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index d18f32640a79fb..64378e8f124bde 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -990,6 +990,8 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
+
+ if (!size)
+ break;
++
++ offset = 0;
+ }
+
+ return panthor_vm_flush_range(vm, start_iova, iova - start_iova);
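
The one-line fix above addresses a classic scatterlist pitfall: the intra-segment offset positions the start within the first segment only, so it must be cleared before moving to the next segment. A sketch of the corrected loop shape (map_range() is a hypothetical stand-in for the IOMMU map call):

#include <linux/scatterlist.h>

static void map_sgtable_region(struct sg_table *sgt, u64 iova,
                               size_t offset, size_t size)
{
        struct scatterlist *sgl;
        unsigned int i;

        for_each_sgtable_dma_sg(sgt, sgl, i) {
                size_t chunk = min_t(size_t, size, sg_dma_len(sgl) - offset);

                map_range(iova, sg_dma_address(sgl) + offset, chunk);
                iova += chunk;
                size -= chunk;
                if (!size)
                        break;

                offset = 0;     /* later segments start at their beginning */
        }
}
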
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index f161f40d8ce4c8..69900138295bf7 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1093,10 +1093,10 @@ static int vop_plane_atomic_async_check(struct drm_plane *plane,
+ if (!plane->state->fb)
+ return -EINVAL;
+
+- if (state)
+- crtc_state = drm_atomic_get_existing_crtc_state(state,
+- new_plane_state->crtc);
+- else /* Special case for asynchronous cursor updates. */
++ crtc_state = drm_atomic_get_existing_crtc_state(state, new_plane_state->crtc);
++
++ /* Special case for asynchronous cursor updates. */
++ if (!crtc_state)
+ crtc_state = plane->crtc->state;
+
+ return drm_atomic_helper_check_plane_state(plane->state, crtc_state,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 63b8d7591253cd..10d596cb4b4029 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -1265,6 +1265,8 @@ static int vmw_framebuffer_surface_create_handle(struct drm_framebuffer *fb,
+ struct vmw_framebuffer_surface *vfbs = vmw_framebuffer_to_vfbs(fb);
+ struct vmw_bo *bo = vmw_user_object_buffer(&vfbs->uo);
+
++ if (WARN_ON(!bo))
++ return -EINVAL;
+ return drm_gem_handle_create(file_priv, &bo->tbo.base, handle);
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index e147ef1d0578f3..9a01babe679c97 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -869,8 +869,8 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
+ if (WARN_ON(!xe_bo_is_pinned(bo)))
+ return -EINVAL;
+
+- if (WARN_ON(!xe_bo_is_vram(bo)))
+- return -EINVAL;
++ if (!xe_bo_is_vram(bo))
++ return 0;
+
+ ret = ttm_bo_mem_space(&bo->ttm, &placement, &new_mem, &ctx);
+ if (ret)
+@@ -920,6 +920,7 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
+ .interruptible = false,
+ };
+ struct ttm_resource *new_mem;
++ struct ttm_place *place = &bo->placements[0];
+ int ret;
+
+ xe_bo_assert_held(bo);
+@@ -930,9 +931,15 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
+ if (WARN_ON(!xe_bo_is_pinned(bo)))
+ return -EINVAL;
+
+- if (WARN_ON(xe_bo_is_vram(bo) || !bo->ttm.ttm))
++ if (WARN_ON(xe_bo_is_vram(bo)))
++ return -EINVAL;
++
++ if (WARN_ON(!bo->ttm.ttm && !xe_bo_is_stolen(bo)))
+ return -EINVAL;
+
++ if (!mem_type_is_vram(place->mem_type))
++ return 0;
++
+ ret = ttm_bo_mem_space(&bo->ttm, &bo->placement, &new_mem, &ctx);
+ if (ret)
+ return ret;
+@@ -1702,6 +1709,7 @@ int xe_bo_pin_external(struct xe_bo *bo)
+
+ int xe_bo_pin(struct xe_bo *bo)
+ {
++ struct ttm_place *place = &bo->placements[0];
+ struct xe_device *xe = xe_bo_device(bo);
+ int err;
+
+@@ -1732,21 +1740,21 @@ int xe_bo_pin(struct xe_bo *bo)
+ */
+ if (IS_DGFX(xe) && !(IS_ENABLED(CONFIG_DRM_XE_DEBUG) &&
+ bo->flags & XE_BO_FLAG_INTERNAL_TEST)) {
+- struct ttm_place *place = &(bo->placements[0]);
+-
+ if (mem_type_is_vram(place->mem_type)) {
+ xe_assert(xe, place->flags & TTM_PL_FLAG_CONTIGUOUS);
+
+ place->fpfn = (xe_bo_addr(bo, 0, PAGE_SIZE) -
+ vram_region_gpu_offset(bo->ttm.resource)) >> PAGE_SHIFT;
+ place->lpfn = place->fpfn + (bo->size >> PAGE_SHIFT);
+-
+- spin_lock(&xe->pinned.lock);
+- list_add_tail(&bo->pinned_link, &xe->pinned.kernel_bo_present);
+- spin_unlock(&xe->pinned.lock);
+ }
+ }
+
++ if (mem_type_is_vram(place->mem_type) || bo->flags & XE_BO_FLAG_GGTT) {
++ spin_lock(&xe->pinned.lock);
++ list_add_tail(&bo->pinned_link, &xe->pinned.kernel_bo_present);
++ spin_unlock(&xe->pinned.lock);
++ }
++
+ ttm_bo_pin(&bo->ttm);
+
+ /*
+@@ -1792,23 +1800,18 @@ void xe_bo_unpin_external(struct xe_bo *bo)
+
+ void xe_bo_unpin(struct xe_bo *bo)
+ {
++ struct ttm_place *place = &bo->placements[0];
+ struct xe_device *xe = xe_bo_device(bo);
+
+ xe_assert(xe, !bo->ttm.base.import_attach);
+ xe_assert(xe, xe_bo_is_pinned(bo));
+
+- if (IS_DGFX(xe) && !(IS_ENABLED(CONFIG_DRM_XE_DEBUG) &&
+- bo->flags & XE_BO_FLAG_INTERNAL_TEST)) {
+- struct ttm_place *place = &(bo->placements[0]);
+-
+- if (mem_type_is_vram(place->mem_type)) {
+- spin_lock(&xe->pinned.lock);
+- xe_assert(xe, !list_empty(&bo->pinned_link));
+- list_del_init(&bo->pinned_link);
+- spin_unlock(&xe->pinned.lock);
+- }
++ if (mem_type_is_vram(place->mem_type) || bo->flags & XE_BO_FLAG_GGTT) {
++ spin_lock(&xe->pinned.lock);
++ xe_assert(xe, !list_empty(&bo->pinned_link));
++ list_del_init(&bo->pinned_link);
++ spin_unlock(&xe->pinned.lock);
+ }
+-
+ ttm_bo_unpin(&bo->ttm);
+ }
+
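
After this change, membership on the pinned.kernel_bo_present list is keyed purely on placement rather than on IS_DGFX(): anything in VRAM or mapped through the GGTT needs save/restore, since neither survives suspend. The condition, restated as a standalone predicate (sketch):

static bool bo_needs_pinned_tracking(const struct xe_bo *bo)
{
        const struct ttm_place *place = &bo->placements[0];

        return mem_type_is_vram(place->mem_type) ||
               (bo->flags & XE_BO_FLAG_GGTT);
}
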
+diff --git a/drivers/gpu/drm/xe/xe_bo_evict.c b/drivers/gpu/drm/xe/xe_bo_evict.c
+index 541b49007d738a..8fb2be0610035b 100644
+--- a/drivers/gpu/drm/xe/xe_bo_evict.c
++++ b/drivers/gpu/drm/xe/xe_bo_evict.c
+@@ -34,14 +34,22 @@ int xe_bo_evict_all(struct xe_device *xe)
+ u8 id;
+ int ret;
+
+- if (!IS_DGFX(xe))
+- return 0;
+-
+ /* User memory */
+- for (mem_type = XE_PL_VRAM0; mem_type <= XE_PL_VRAM1; ++mem_type) {
++ for (mem_type = XE_PL_TT; mem_type <= XE_PL_VRAM1; ++mem_type) {
+ struct ttm_resource_manager *man =
+ ttm_manager_type(bdev, mem_type);
+
++ /*
++ * On igpu platforms with flat CCS we need to ensure we save and restore any CCS
++ * state since this state lives inside graphics stolen memory which doesn't survive
++ * hibernation.
++ *
++ * This can be further improved by only evicting objects that we know have actually
++ * used a compression enabled PAT index.
++ */
++ if (mem_type == XE_PL_TT && (IS_DGFX(xe) || !xe_device_has_flat_ccs(xe)))
++ continue;
++
+ if (man) {
+ ret = ttm_resource_manager_evict_all(bdev, man);
+ if (ret)
+@@ -125,9 +133,6 @@ int xe_bo_restore_kernel(struct xe_device *xe)
+ struct xe_bo *bo;
+ int ret;
+
+- if (!IS_DGFX(xe))
+- return 0;
+-
+ spin_lock(&xe->pinned.lock);
+ for (;;) {
+ bo = list_first_entry_or_null(&xe->pinned.evicted,
+@@ -159,7 +164,6 @@ int xe_bo_restore_kernel(struct xe_device *xe)
+ * should setup the iosys map.
+ */
+ xe_assert(xe, !iosys_map_is_null(&bo->vmap));
+- xe_assert(xe, xe_bo_is_vram(bo));
+
+ xe_bo_put(bo);
+
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 5906df55dfdf2d..7b099a25e42a61 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -1206,9 +1206,11 @@ static int xe_oa_release(struct inode *inode, struct file *file)
+ struct xe_oa_stream *stream = file->private_data;
+ struct xe_gt *gt = stream->gt;
+
++ xe_pm_runtime_get(gt_to_xe(gt));
+ mutex_lock(&gt->oa.gt_lock);
+ xe_oa_destroy_locked(stream);
+ mutex_unlock(&gt->oa.gt_lock);
++ xe_pm_runtime_put(gt_to_xe(gt));
+
+ /* Release the reference the OA stream kept on the driver */
+ drm_dev_put(&gt_to_xe(gt)->drm);
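
The release path can run with the device runtime-suspended, so the teardown that touches OA hardware is bracketed with a runtime-PM reference. The idiom, restated as a sketch:

xe_pm_runtime_get(xe);          /* keep the device awake */
mutex_lock(&gt->oa.gt_lock);
/* ... teardown that may touch OA registers ... */
mutex_unlock(&gt->oa.gt_lock);
xe_pm_runtime_put(xe);          /* allow runtime suspend again */
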
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index c4cf26f1d1496a..be0743dac3fff3 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -269,8 +269,6 @@ rdma_find_ndev_for_src_ip_rcu(struct net *net, const struct sockaddr *src_in)
+ break;
+ #endif
+ }
+- if (!ret && dev && is_vlan_dev(dev))
+- dev = vlan_dev_real_dev(dev);
+ return ret ? ERR_PTR(ret) : dev;
+ }
+
+diff --git a/drivers/mailbox/qcom-cpucp-mbox.c b/drivers/mailbox/qcom-cpucp-mbox.c
+index e5437c294803c7..44f4ed15f81849 100644
+--- a/drivers/mailbox/qcom-cpucp-mbox.c
++++ b/drivers/mailbox/qcom-cpucp-mbox.c
+@@ -138,7 +138,7 @@ static int qcom_cpucp_mbox_probe(struct platform_device *pdev)
+ return irq;
+
+ ret = devm_request_irq(dev, irq, qcom_cpucp_mbox_irq_fn,
+- IRQF_TRIGGER_HIGH, "apss_cpucp_mbox", cpucp);
++ IRQF_TRIGGER_HIGH | IRQF_NO_SUSPEND, "apss_cpucp_mbox", cpucp);
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Failed to register irq: %d\n", irq);
+
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 14f323fbada719..9df7c213716aec 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -530,6 +530,9 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ for (minor = 0; minor < MAX_DVB_MINORS; minor++)
+ if (!dvb_minors[minor])
+ break;
++#else
++ minor = nums2minor(adap->num, type, id);
++#endif
+ if (minor >= MAX_DVB_MINORS) {
+ if (new_node) {
+ list_del(&new_node->list_head);
+@@ -543,17 +546,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ mutex_unlock(&dvbdev_register_lock);
+ return -EINVAL;
+ }
+-#else
+- minor = nums2minor(adap->num, type, id);
+- if (minor >= MAX_DVB_MINORS) {
+- dvb_media_device_free(dvbdev);
+- list_del(&dvbdev->list_head);
+- kfree(dvbdev);
+- *pdvbdev = NULL;
+- mutex_unlock(&dvbdev_register_lock);
+- return ret;
+- }
+-#endif
++
+ dvbdev->minor = minor;
+ dvb_minors[minor] = dvb_device_get(dvbdev);
+ up_write(&minor_rwsem);
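
Without CONFIG_DVB_DYNAMIC_MINORS, nums2minor() computes the minor by packing the adapter number, device type, and id into a single value; the restructure lets both configurations share the MAX_DVB_MINORS bounds check and its error unwind. An illustrative packing in that spirit (the exact field layout here is an assumption, not quoted from dvbdev.c):

static int pack_minor(int adapter_num, int type, int id)
{
        return (adapter_num << 6) | (id << 4) | type;
}
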
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 41e451235f6373..e9f6e4e622901a 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2957,8 +2957,8 @@ static int dw_mci_init_slot(struct dw_mci *host)
+ if (host->use_dma == TRANS_MODE_IDMAC) {
+ mmc->max_segs = host->ring_size;
+ mmc->max_blk_size = 65535;
+- mmc->max_req_size = DW_MCI_DESC_DATA_LENGTH * host->ring_size;
+- mmc->max_seg_size = mmc->max_req_size;
++ mmc->max_seg_size = 0x1000;
++ mmc->max_req_size = mmc->max_seg_size * host->ring_size;
+ mmc->max_blk_count = mmc->max_req_size / 512;
+ } else if (host->use_dma == TRANS_MODE_EDMAC) {
+ mmc->max_segs = 64;
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index d3bd0ac99ec468..e0ab5fd635e6cd 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -1191,10 +1191,9 @@ static const struct sunxi_mmc_cfg sun50i_a64_emmc_cfg = {
+ .needs_new_timings = true,
+ };
+
+-static const struct sunxi_mmc_cfg sun50i_a100_cfg = {
++static const struct sunxi_mmc_cfg sun50i_h616_cfg = {
+ .idma_des_size_bits = 16,
+ .idma_des_shift = 2,
+- .clk_delays = NULL,
+ .can_calibrate = true,
+ .mask_data0 = true,
+ .needs_new_timings = true,
+@@ -1217,8 +1216,9 @@ static const struct of_device_id sunxi_mmc_of_match[] = {
+ { .compatible = "allwinner,sun20i-d1-mmc", .data = &sun20i_d1_cfg },
+ { .compatible = "allwinner,sun50i-a64-mmc", .data = &sun50i_a64_cfg },
+ { .compatible = "allwinner,sun50i-a64-emmc", .data = &sun50i_a64_emmc_cfg },
+- { .compatible = "allwinner,sun50i-a100-mmc", .data = &sun50i_a100_cfg },
++ { .compatible = "allwinner,sun50i-a100-mmc", .data = &sun20i_d1_cfg },
+ { .compatible = "allwinner,sun50i-a100-emmc", .data = &sun50i_a100_emmc_cfg },
++ { .compatible = "allwinner,sun50i-h616-mmc", .data = &sun50i_h616_cfg },
+ { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, sunxi_mmc_of_match);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index e20bee1bdffd72..66725c16326350 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -934,6 +934,8 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
+
+ if (bond->dev->flags & IFF_UP)
+ bond_hw_addr_flush(bond->dev, old_active->dev);
++
++ bond_slave_ns_maddrs_add(bond, old_active);
+ }
+
+ if (new_active) {
+@@ -950,6 +952,8 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
+ dev_mc_sync(new_active->dev, bond->dev);
+ netif_addr_unlock_bh(bond->dev);
+ }
++
++ bond_slave_ns_maddrs_del(bond, new_active);
+ }
+ }
+
+@@ -2267,6 +2271,11 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ bond_compute_features(bond);
+ bond_set_carrier(bond);
+
++ /* Needs to be called before bond_select_active_slave(), which will
++ * remove the maddrs if the slave is selected as active slave.
++ */
++ bond_slave_ns_maddrs_add(bond, new_slave);
++
+ if (bond_uses_primary(bond)) {
+ block_netpoll_tx();
+ bond_select_active_slave(bond);
+@@ -2276,7 +2285,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ if (bond_mode_can_use_xmit_hash(bond))
+ bond_update_slave_arr(bond, NULL);
+
+-
+ if (!slave_dev->netdev_ops->ndo_bpf ||
+ !slave_dev->netdev_ops->ndo_xdp_xmit) {
+ if (bond->xdp_prog) {
+@@ -2474,6 +2482,12 @@ static int __bond_release_one(struct net_device *bond_dev,
+ if (oldcurrent == slave)
+ bond_change_active_slave(bond, NULL);
+
++ /* Must be called after bond_change_active_slave () as the slave
++ * might change from an active slave to a backup slave. Then it is
++ * necessary to clear the maddrs on the backup slave.
++ */
++ bond_slave_ns_maddrs_del(bond, slave);
++
+ if (bond_is_lb(bond)) {
+ /* Must be called only after the slave has been
+ * detached from the list and the curr_active_slave
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 95d59a18c0223f..327b6ecdc77e00 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -15,6 +15,7 @@
+ #include <linux/sched/signal.h>
+
+ #include <net/bonding.h>
++#include <net/ndisc.h>
+
+ static int bond_option_active_slave_set(struct bonding *bond,
+ const struct bond_opt_value *newval);
+@@ -1234,6 +1235,68 @@ static int bond_option_arp_ip_targets_set(struct bonding *bond,
+ }
+
+ #if IS_ENABLED(CONFIG_IPV6)
++static bool slave_can_set_ns_maddr(const struct bonding *bond, struct slave *slave)
++{
++ return BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP &&
++ !bond_is_active_slave(slave) &&
++ slave->dev->flags & IFF_MULTICAST;
++}
++
++static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add)
++{
++ struct in6_addr *targets = bond->params.ns_targets;
++ char slot_maddr[MAX_ADDR_LEN];
++ int i;
++
++ if (!slave_can_set_ns_maddr(bond, slave))
++ return;
++
++ for (i = 0; i < BOND_MAX_NS_TARGETS; i++) {
++ if (ipv6_addr_any(&targets[i]))
++ break;
++
++ if (!ndisc_mc_map(&targets[i], slot_maddr, slave->dev, 0)) {
++ if (add)
++ dev_mc_add(slave->dev, slot_maddr);
++ else
++ dev_mc_del(slave->dev, slot_maddr);
++ }
++ }
++}
++
++void bond_slave_ns_maddrs_add(struct bonding *bond, struct slave *slave)
++{
++ if (!bond->params.arp_validate)
++ return;
++ slave_set_ns_maddrs(bond, slave, true);
++}
++
++void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave)
++{
++ if (!bond->params.arp_validate)
++ return;
++ slave_set_ns_maddrs(bond, slave, false);
++}
++
++static void slave_set_ns_maddr(struct bonding *bond, struct slave *slave,
++ struct in6_addr *target, struct in6_addr *slot)
++{
++ char target_maddr[MAX_ADDR_LEN], slot_maddr[MAX_ADDR_LEN];
++
++ if (!bond->params.arp_validate || !slave_can_set_ns_maddr(bond, slave))
++ return;
++
++ /* remove the previous maddr from slave */
++ if (!ipv6_addr_any(slot) &&
++ !ndisc_mc_map(slot, slot_maddr, slave->dev, 0))
++ dev_mc_del(slave->dev, slot_maddr);
++
++ /* add new maddr on slave if target is set */
++ if (!ipv6_addr_any(target) &&
++ !ndisc_mc_map(target, target_maddr, slave->dev, 0))
++ dev_mc_add(slave->dev, target_maddr);
++}
++
+ static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot,
+ struct in6_addr *target,
+ unsigned long last_rx)
+@@ -1243,8 +1306,10 @@ static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot,
+ struct slave *slave;
+
+ if (slot >= 0 && slot < BOND_MAX_NS_TARGETS) {
+- bond_for_each_slave(bond, slave, iter)
++ bond_for_each_slave(bond, slave, iter) {
+ slave->target_last_arp_rx[slot] = last_rx;
++ slave_set_ns_maddr(bond, slave, target, &targets[slot]);
++ }
+ targets[slot] = *target;
+ }
+ }
+@@ -1296,15 +1361,30 @@ static int bond_option_ns_ip6_targets_set(struct bonding *bond,
+ {
+ return -EPERM;
+ }
++
++static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add) {}
++
++void bond_slave_ns_maddrs_add(struct bonding *bond, struct slave *slave) {}
++
++void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave) {}
+ #endif
+
+ static int bond_option_arp_validate_set(struct bonding *bond,
+ const struct bond_opt_value *newval)
+ {
++ bool changed = !!bond->params.arp_validate != !!newval->value;
++ struct list_head *iter;
++ struct slave *slave;
++
+ netdev_dbg(bond->dev, "Setting arp_validate to %s (%llu)\n",
+ newval->string, newval->value);
+ bond->params.arp_validate = newval->value;
+
++ if (changed) {
++ bond_for_each_slave(bond, slave, iter)
++ slave_set_ns_maddrs(bond, slave, !!bond->params.arp_validate);
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 71a168746ebe21..deabed1d46a145 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -866,7 +866,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
+ return 0;
+
+ err_rule:
+- mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, zone_rule->attr, zone_rule->mh);
++ mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, attr, zone_rule->mh);
+ mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
+ err_mod_hdr:
+ kfree(attr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+index d61be26a4df1a5..3db31cc1071926 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+@@ -660,7 +660,7 @@ tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
+ while (remaining > 0) {
+ skb_frag_t *frag = &record->frags[i];
+
+- get_page(skb_frag_page(frag));
++ page_ref_inc(skb_frag_page(frag));
+ remaining -= skb_frag_size(frag);
+ info->frags[i++] = *frag;
+ }
+@@ -763,7 +763,7 @@ void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+ stats = sq->stats;
+
+ mlx5e_tx_dma_unmap(sq->pdev, dma);
+- put_page(wi->resync_dump_frag_page);
++ page_ref_dec(wi->resync_dump_frag_page);
+ stats->tls_dump_packets++;
+ stats->tls_dump_bytes += wi->num_bytes;
+ }
+@@ -816,12 +816,12 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
+
+ err_out:
+ for (; i < info.nr_frags; i++)
+- /* The put_page() here undoes the page ref obtained in tx_sync_info_get().
++ /* The page_ref_dec() here undoes the page ref obtained in tx_sync_info_get().
+ * Page refs obtained for the DUMP WQEs above (by page_ref_add) will be
+ * released only upon their completions (or in mlx5e_free_txqsq_descs,
+ * if channel closes).
+ */
+- put_page(skb_frag_page(&info.frags[i]));
++ page_ref_dec(skb_frag_page(&info.frags[i]));
+
+ return MLX5E_KTLS_SYNC_FAIL;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 3e11c1c6d4f69b..99d0b977ed3d2f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4266,7 +4266,8 @@ void mlx5e_set_xdp_feature(struct net_device *netdev)
+ struct mlx5e_params *params = &priv->channels.params;
+ xdp_features_t val;
+
+- if (params->packet_merge.type != MLX5E_PACKET_MERGE_NONE) {
++ if (!netdev->netdev_ops->ndo_bpf ||
++ params->packet_merge.type != MLX5E_PACKET_MERGE_NONE) {
+ xdp_clear_features_flag(netdev);
+ return;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+index 5bf8318cc48b8e..1d60465cc2ca4f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+@@ -36,6 +36,7 @@
+ #include "en.h"
+ #include "en/port.h"
+ #include "eswitch.h"
++#include "lib/mlx5.h"
+
+ static int mlx5e_test_health_info(struct mlx5e_priv *priv)
+ {
+@@ -247,6 +248,9 @@ static int mlx5e_cond_loopback(struct mlx5e_priv *priv)
+ if (is_mdev_switchdev_mode(priv->mdev))
+ return -EOPNOTSUPP;
+
++ if (mlx5_get_sd(priv->mdev))
++ return -EOPNOTSUPP;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index a47d6419160d78..fb01acbadf7325 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1946,13 +1946,22 @@ lookup_fte_locked(struct mlx5_flow_group *g,
+ fte_tmp = NULL;
+ goto out;
+ }
++
++ nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
++
+ if (!fte_tmp->node.active) {
++ up_write_ref_node(&fte_tmp->node, false);
++
++ if (take_write)
++ up_write_ref_node(&g->node, false);
++ else
++ up_read_ref_node(&g->node);
++
+ tree_put_node(&fte_tmp->node, false);
+- fte_tmp = NULL;
+- goto out;
++
++ return NULL;
+ }
+
+- nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+ out:
+ if (take_write)
+ up_write_ref_node(&g->node, false);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+index 81a9232a03e1ba..7db9cab9bedf69 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+@@ -593,9 +593,11 @@ static void irq_pool_free(struct mlx5_irq_pool *pool)
+ kvfree(pool);
+ }
+
+-static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec)
++static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec,
++ bool dynamic_vec)
+ {
+ struct mlx5_irq_table *table = dev->priv.irq_table;
++ int sf_vec_available = sf_vec;
+ int num_sf_ctrl;
+ int err;
+
+@@ -616,6 +618,13 @@ static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec)
+ num_sf_ctrl = DIV_ROUND_UP(mlx5_sf_max_functions(dev),
+ MLX5_SFS_PER_CTRL_IRQ);
+ num_sf_ctrl = min_t(int, MLX5_IRQ_CTRL_SF_MAX, num_sf_ctrl);
++ if (!dynamic_vec && (num_sf_ctrl + 1) > sf_vec_available) {
++ mlx5_core_dbg(dev,
++ "Not enough IRQs for SFs control and completion pool, required=%d avail=%d\n",
++ num_sf_ctrl + 1, sf_vec_available);
++ return 0;
++ }
++
+ table->sf_ctrl_pool = irq_pool_alloc(dev, pcif_vec, num_sf_ctrl,
+ "mlx5_sf_ctrl",
+ MLX5_EQ_SHARE_IRQ_MIN_CTRL,
+@@ -624,9 +633,11 @@ static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec)
+ err = PTR_ERR(table->sf_ctrl_pool);
+ goto err_pf;
+ }
+- /* init sf_comp_pool */
++ sf_vec_available -= num_sf_ctrl;
++
++ /* init sf_comp_pool, remaining vectors are for the SF completions */
+ table->sf_comp_pool = irq_pool_alloc(dev, pcif_vec + num_sf_ctrl,
+- sf_vec - num_sf_ctrl, "mlx5_sf_comp",
++ sf_vec_available, "mlx5_sf_comp",
+ MLX5_EQ_SHARE_IRQ_MIN_COMP,
+ MLX5_EQ_SHARE_IRQ_MAX_COMP);
+ if (IS_ERR(table->sf_comp_pool)) {
+@@ -715,6 +726,7 @@ int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table)
+ int mlx5_irq_table_create(struct mlx5_core_dev *dev)
+ {
+ int num_eqs = mlx5_max_eq_cap_get(dev);
++ bool dynamic_vec;
+ int total_vec;
+ int pcif_vec;
+ int req_vec;
+@@ -724,21 +736,31 @@ int mlx5_irq_table_create(struct mlx5_core_dev *dev)
+ if (mlx5_core_is_sf(dev))
+ return 0;
+
++ /* PCI PF vectors usage is limited by online cpus, device EQs and
++ * PCI MSI-X capability.
++ */
+ pcif_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + 1;
+ pcif_vec = min_t(int, pcif_vec, num_eqs);
++ pcif_vec = min_t(int, pcif_vec, pci_msix_vec_count(dev->pdev));
+
+ total_vec = pcif_vec;
+ if (mlx5_sf_max_functions(dev))
+ total_vec += MLX5_MAX_MSIX_PER_SF * mlx5_sf_max_functions(dev);
+ total_vec = min_t(int, total_vec, pci_msix_vec_count(dev->pdev));
+- pcif_vec = min_t(int, pcif_vec, pci_msix_vec_count(dev->pdev));
+
+ req_vec = pci_msix_can_alloc_dyn(dev->pdev) ? 1 : total_vec;
+ n = pci_alloc_irq_vectors(dev->pdev, 1, req_vec, PCI_IRQ_MSIX);
+ if (n < 0)
+ return n;
+
+- err = irq_pools_init(dev, total_vec - pcif_vec, pcif_vec);
++ /* Further limit vectors of the pools based on platform for non dynamic case */
++ dynamic_vec = pci_msix_can_alloc_dyn(dev->pdev);
++ if (!dynamic_vec) {
++ pcif_vec = min_t(int, n, pcif_vec);
++ total_vec = min_t(int, n, total_vec);
++ }
++
++ err = irq_pools_init(dev, total_vec - pcif_vec, pcif_vec, dynamic_vec);
+ if (err)
+ pci_free_irq_vectors(dev->pdev);
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
+index d68f0c4e783505..9739bc9867c514 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
+@@ -108,7 +108,12 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
+ if (IS_ERR(dwmac->tx_clk))
+ return PTR_ERR(dwmac->tx_clk);
+
+- clk_prepare_enable(dwmac->tx_clk);
++ ret = clk_prepare_enable(dwmac->tx_clk);
++ if (ret) {
++ dev_err(&pdev->dev,
++ "Failed to enable tx_clk\n");
++ return ret;
++ }
+
+ /* Check and configure TX clock rate */
+ rate = clk_get_rate(dwmac->tx_clk);
+@@ -119,7 +124,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev,
+ "Failed to set tx_clk\n");
+- return ret;
++ goto err_tx_clk_disable;
+ }
+ }
+ }
+@@ -133,7 +138,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev,
+ "Failed to set clk_ptp_ref\n");
+- return ret;
++ goto err_tx_clk_disable;
+ }
+ }
+ }
+@@ -149,12 +154,15 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
+ }
+
+ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+- if (ret) {
+- clk_disable_unprepare(dwmac->tx_clk);
+- return ret;
+- }
++ if (ret)
++ goto err_tx_clk_disable;
+
+ return 0;
++
++err_tx_clk_disable:
++ if (dwmac->data->tx_clk_en)
++ clk_disable_unprepare(dwmac->tx_clk);
++ return ret;
+ }
+
+ static void intel_eth_plat_remove(struct platform_device *pdev)
+@@ -162,7 +170,8 @@ static void intel_eth_plat_remove(struct platform_device *pdev)
+ struct intel_dwmac *dwmac = get_stmmac_bsp_priv(&pdev->dev);
+
+ stmmac_pltfr_remove(pdev);
+- clk_disable_unprepare(dwmac->tx_clk);
++ if (dwmac->data->tx_clk_en)
++ clk_disable_unprepare(dwmac->tx_clk);
+ }
+
+ static struct platform_driver intel_eth_plat_driver = {
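
The probe path now follows the standard goto-unwind idiom, so every failure after clk_prepare_enable() disables the clock exactly once instead of leaking it. The shape of that idiom, as a sketch (do_setup() is a hypothetical next init step):

#include <linux/clk.h>

static int do_setup(struct device *dev);        /* hypothetical */

static int probe_sketch(struct device *dev, struct clk *clk)
{
        int ret;

        ret = clk_prepare_enable(clk);
        if (ret)
                return ret;

        ret = do_setup(dev);
        if (ret)
                goto err_clk_disable;

        return 0;

err_clk_disable:
        clk_disable_unprepare(clk);     /* release in reverse order */
        return ret;
}
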
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+index 2a9132d6d743ce..001857c294fba1 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+@@ -589,9 +589,9 @@ static int mediatek_dwmac_common_data(struct platform_device *pdev,
+
+ plat->mac_interface = priv_plat->phy_mode;
+ if (priv_plat->mac_wol)
+- plat->flags |= STMMAC_FLAG_USE_PHY_WOL;
+- else
+ plat->flags &= ~STMMAC_FLAG_USE_PHY_WOL;
++ else
++ plat->flags |= STMMAC_FLAG_USE_PHY_WOL;
+ plat->riwt_off = 1;
+ plat->maxmtu = ETH_DATA_LEN;
+ plat->host_dma_width = priv_plat->variant->dma_bit_mask;
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index 33cb3590a5cdec..55d12679b24b72 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -15,6 +15,7 @@
+ #include <linux/genalloc.h>
+ #include <linux/if_vlan.h>
+ #include <linux/interrupt.h>
++#include <linux/io-64-nonatomic-hi-lo.h>
+ #include <linux/kernel.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+@@ -389,6 +390,8 @@ static int prueth_perout_enable(void *clockops_data,
+ struct prueth_emac *emac = clockops_data;
+ u32 reduction_factor = 0, offset = 0;
+ struct timespec64 ts;
++ u64 current_cycle;
++ u64 start_offset;
+ u64 ns_period;
+
+ if (!on)
+@@ -427,8 +430,14 @@ static int prueth_perout_enable(void *clockops_data,
+ writel(reduction_factor, emac->prueth->shram.va +
+ TIMESYNC_FW_WC_SYNCOUT_REDUCTION_FACTOR_OFFSET);
+
+- writel(0, emac->prueth->shram.va +
+- TIMESYNC_FW_WC_SYNCOUT_START_TIME_CYCLECOUNT_OFFSET);
++ current_cycle = icssg_read_time(emac->prueth->shram.va +
++ TIMESYNC_FW_WC_CYCLECOUNT_OFFSET);
++
++ /* Rounding of current_cycle count to next second */
++ start_offset = roundup(current_cycle, MSEC_PER_SEC);
++
++ hi_lo_writeq(start_offset, emac->prueth->shram.va +
++ TIMESYNC_FW_WC_SYNCOUT_START_TIME_CYCLECOUNT_OFFSET);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+index 4d1c895dacdb67..169949acf2539f 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+@@ -316,6 +316,18 @@ static inline int prueth_emac_slice(struct prueth_emac *emac)
+ extern const struct ethtool_ops icssg_ethtool_ops;
+ extern const struct dev_pm_ops prueth_dev_pm_ops;
+
++static inline u64 icssg_read_time(const void __iomem *addr)
++{
++ u32 low, high;
++
++ do {
++ high = readl(addr + 4);
++ low = readl(addr);
++ } while (high != readl(addr + 4));
++
++ return low + ((u64)high << 32);
++}
++
+ /* Classifier helpers */
+ void icssg_class_set_mac_addr(struct regmap *miig_rt, int slice, u8 *mac);
+ void icssg_class_set_host_mac_addr(struct regmap *miig_rt, const u8 *mac);
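
icssg_read_time() above reads a 64-bit counter through two 32-bit registers, re-reading the high word until it is stable around the low-word read so a carry between the halves cannot produce a torn value. The same pattern in generic form:

#include <linux/io.h>

static u64 read_split_counter(const void __iomem *lo, const void __iomem *hi)
{
        u32 h, l;

        do {
                h = readl(hi);
                l = readl(lo);
        } while (h != readl(hi));       /* retry if the high word rolled over */

        return ((u64)h << 32) | l;
}
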
+diff --git a/drivers/net/ethernet/vertexcom/mse102x.c b/drivers/net/ethernet/vertexcom/mse102x.c
+index 33ef3a49de8ee8..bccf0ac66b1a84 100644
+--- a/drivers/net/ethernet/vertexcom/mse102x.c
++++ b/drivers/net/ethernet/vertexcom/mse102x.c
+@@ -437,13 +437,15 @@ static void mse102x_tx_work(struct work_struct *work)
+ mse = &mses->mse102x;
+
+ while ((txb = skb_dequeue(&mse->txq))) {
++ unsigned int len = max_t(unsigned int, txb->len, ETH_ZLEN);
++
+ mutex_lock(&mses->lock);
+ ret = mse102x_tx_pkt_spi(mse, txb, work_timeout);
+ mutex_unlock(&mses->lock);
+ if (ret) {
+ mse->ndev->stats.tx_dropped++;
+ } else {
+- mse->ndev->stats.tx_bytes += txb->len;
++ mse->ndev->stats.tx_bytes += len;
+ mse->ndev->stats.tx_packets++;
+ }
+
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 51c526d227fab3..ed60836180b78f 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -78,7 +78,7 @@ struct phylink {
+ unsigned int pcs_neg_mode;
+ unsigned int pcs_state;
+
+- bool mac_link_dropped;
++ bool link_failed;
+ bool using_mac_select_pcs;
+
+ struct sfp_bus *sfp_bus;
+@@ -1475,9 +1475,9 @@ static void phylink_resolve(struct work_struct *w)
+ cur_link_state = pl->old_link_state;
+
+ if (pl->phylink_disable_state) {
+- pl->mac_link_dropped = false;
++ pl->link_failed = false;
+ link_state.link = false;
+- } else if (pl->mac_link_dropped) {
++ } else if (pl->link_failed) {
+ link_state.link = false;
+ retrigger = true;
+ } else {
+@@ -1572,7 +1572,7 @@ static void phylink_resolve(struct work_struct *w)
+ phylink_link_up(pl, link_state);
+ }
+ if (!link_state.link && retrigger) {
+- pl->mac_link_dropped = false;
++ pl->link_failed = false;
+ queue_work(system_power_efficient_wq, &pl->resolve);
+ }
+ mutex_unlock(&pl->state_mutex);
+@@ -1793,6 +1793,8 @@ static void phylink_phy_change(struct phy_device *phydev, bool up)
+ pl->phy_state.pause |= MLO_PAUSE_RX;
+ pl->phy_state.interface = phydev->interface;
+ pl->phy_state.link = up;
++ if (!up)
++ pl->link_failed = true;
+ mutex_unlock(&pl->state_mutex);
+
+ phylink_run_resolve(pl);
+@@ -2116,7 +2118,7 @@ EXPORT_SYMBOL_GPL(phylink_disconnect_phy);
+ static void phylink_link_changed(struct phylink *pl, bool up, const char *what)
+ {
+ if (!up)
+- pl->mac_link_dropped = true;
++ pl->link_failed = true;
+ phylink_run_resolve(pl);
+ phylink_dbg(pl, "%s link %s\n", what, up ? "up" : "down");
+ }
+@@ -2750,7 +2752,7 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
+ * link will cycle.
+ */
+ if (manual_changed) {
+- pl->mac_link_dropped = true;
++ pl->link_failed = true;
+ phylink_run_resolve(pl);
+ }
+
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 671dc55cbd3a87..bc562c759e1e9d 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -1380,8 +1380,9 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
+ goto out_unregister;
+
+ cpu = get_cpu();
+-
+ ret = pmu_sbi_snapshot_setup(pmu, cpu);
++ put_cpu();
++
+ if (ret) {
+ /* Snapshot is an optional feature. Continue if not available */
+ pmu_sbi_snapshot_free(pmu);
+@@ -1395,7 +1396,6 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
+ */
+ static_branch_enable(&sbi_pmu_snapshot_available);
+ }
+- put_cpu();
+ }
+
+ register_sysctl("kernel", sbi_pmu_sysctl_table);
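
get_cpu() disables preemption, so the fix releases it right after the snapshot setup instead of holding preemption disabled across the remaining work, some of which may sleep. The corrected bracket, restated:

cpu = get_cpu();                        /* disables preemption */
ret = pmu_sbi_snapshot_setup(pmu, cpu);
put_cpu();                              /* re-enable before anything that may sleep */
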
+diff --git a/drivers/pmdomain/arm/scmi_perf_domain.c b/drivers/pmdomain/arm/scmi_perf_domain.c
+index d7ef46ccd9b8a4..3693423459c9cc 100644
+--- a/drivers/pmdomain/arm/scmi_perf_domain.c
++++ b/drivers/pmdomain/arm/scmi_perf_domain.c
+@@ -125,7 +125,8 @@ static int scmi_perf_domain_probe(struct scmi_device *sdev)
+ scmi_pd->ph = ph;
+ scmi_pd->genpd.name = scmi_pd->info->name;
+ scmi_pd->genpd.flags = GENPD_FLAG_ALWAYS_ON |
+- GENPD_FLAG_OPP_TABLE_FW;
++ GENPD_FLAG_OPP_TABLE_FW |
++ GENPD_FLAG_DEV_NAME_FW;
+ scmi_pd->genpd.set_performance_state = scmi_pd_set_perf_state;
+ scmi_pd->genpd.attach_dev = scmi_pd_attach_dev;
+ scmi_pd->genpd.detach_dev = scmi_pd_detach_dev;
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 3a4e59b93677c3..8a4a7850450cde 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -7,6 +7,7 @@
+ #define pr_fmt(fmt) "PM: " fmt
+
+ #include <linux/delay.h>
++#include <linux/idr.h>
+ #include <linux/kernel.h>
+ #include <linux/io.h>
+ #include <linux/platform_device.h>
+@@ -23,6 +24,9 @@
+ #include <linux/cpu.h>
+ #include <linux/debugfs.h>
+
++/* Provides a unique ID for each genpd device */
++static DEFINE_IDA(genpd_ida);
++
+ #define GENPD_RETRY_MAX_MS 250 /* Approximate */
+
+ #define GENPD_DEV_CALLBACK(genpd, type, callback, dev) \
+@@ -129,6 +133,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
+ #define genpd_is_cpu_domain(genpd) (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
+ #define genpd_is_rpm_always_on(genpd) (genpd->flags & GENPD_FLAG_RPM_ALWAYS_ON)
+ #define genpd_is_opp_table_fw(genpd) (genpd->flags & GENPD_FLAG_OPP_TABLE_FW)
++#define genpd_is_dev_name_fw(genpd) (genpd->flags & GENPD_FLAG_DEV_NAME_FW)
+
+ static inline bool irq_safe_dev_in_sleep_domain(struct device *dev,
+ const struct generic_pm_domain *genpd)
+@@ -147,7 +152,7 @@ static inline bool irq_safe_dev_in_sleep_domain(struct device *dev,
+
+ if (ret)
+ dev_warn_once(dev, "PM domain %s will not be powered off\n",
+- genpd->name);
++ dev_name(&genpd->dev));
+
+ return ret;
+ }
+@@ -232,7 +237,7 @@ static void genpd_debug_remove(struct generic_pm_domain *genpd)
+ if (!genpd_debugfs_dir)
+ return;
+
+- debugfs_lookup_and_remove(genpd->name, genpd_debugfs_dir);
++ debugfs_lookup_and_remove(dev_name(&genpd->dev), genpd_debugfs_dir);
+ }
+
+ static void genpd_update_accounting(struct generic_pm_domain *genpd)
+@@ -689,7 +694,7 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
+ genpd->states[state_idx].power_on_latency_ns = elapsed_ns;
+ genpd->gd->max_off_time_changed = true;
+ pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
+- genpd->name, "on", elapsed_ns);
++ dev_name(&genpd->dev), "on", elapsed_ns);
+
+ out:
+ raw_notifier_call_chain(&genpd->power_notifiers, GENPD_NOTIFY_ON, NULL);
+@@ -740,7 +745,7 @@ static int _genpd_power_off(struct generic_pm_domain *genpd, bool timed)
+ genpd->states[state_idx].power_off_latency_ns = elapsed_ns;
+ genpd->gd->max_off_time_changed = true;
+ pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
+- genpd->name, "off", elapsed_ns);
++ dev_name(&genpd->dev), "off", elapsed_ns);
+
+ out:
+ raw_notifier_call_chain(&genpd->power_notifiers, GENPD_NOTIFY_OFF,
+@@ -1898,7 +1903,7 @@ int dev_pm_genpd_add_notifier(struct device *dev, struct notifier_block *nb)
+
+ if (ret) {
+ dev_warn(dev, "failed to add notifier for PM domain %s\n",
+- genpd->name);
++ dev_name(&genpd->dev));
+ return ret;
+ }
+
+@@ -1945,7 +1950,7 @@ int dev_pm_genpd_remove_notifier(struct device *dev)
+
+ if (ret) {
+ dev_warn(dev, "failed to remove notifier for PM domain %s\n",
+- genpd->name);
++ dev_name(&genpd->dev));
+ return ret;
+ }
+
+@@ -1971,7 +1976,7 @@ static int genpd_add_subdomain(struct generic_pm_domain *genpd,
+ */
+ if (!genpd_is_irq_safe(genpd) && genpd_is_irq_safe(subdomain)) {
+ WARN(1, "Parent %s of subdomain %s must be IRQ safe\n",
+- genpd->name, subdomain->name);
++ dev_name(&genpd->dev), subdomain->name);
+ return -EINVAL;
+ }
+
+@@ -2046,7 +2051,7 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
+
+ if (!list_empty(&subdomain->parent_links) || subdomain->device_count) {
+ pr_warn("%s: unable to remove subdomain %s\n",
+- genpd->name, subdomain->name);
++ dev_name(&genpd->dev), subdomain->name);
+ ret = -EBUSY;
+ goto out;
+ }
+@@ -2180,6 +2185,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
+ genpd->status = is_off ? GENPD_STATE_OFF : GENPD_STATE_ON;
+ genpd->device_count = 0;
+ genpd->provider = NULL;
++ genpd->device_id = -ENXIO;
+ genpd->has_provider = false;
+ genpd->accounting_time = ktime_get_mono_fast_ns();
+ genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
+@@ -2220,7 +2226,18 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
+ return ret;
+
+ device_initialize(&genpd->dev);
+- dev_set_name(&genpd->dev, "%s", genpd->name);
++
++ if (!genpd_is_dev_name_fw(genpd)) {
++ dev_set_name(&genpd->dev, "%s", genpd->name);
++ } else {
++ ret = ida_alloc(&genpd_ida, GFP_KERNEL);
++ if (ret < 0) {
++ put_device(&genpd->dev);
++ return ret;
++ }
++ genpd->device_id = ret;
++ dev_set_name(&genpd->dev, "%s_%u", genpd->name, genpd->device_id);
++ }
+
+ mutex_lock(&gpd_list_lock);
+ list_add(&genpd->gpd_list_node, &gpd_list);
+@@ -2242,13 +2259,13 @@ static int genpd_remove(struct generic_pm_domain *genpd)
+
+ if (genpd->has_provider) {
+ genpd_unlock(genpd);
+- pr_err("Provider present, unable to remove %s\n", genpd->name);
++ pr_err("Provider present, unable to remove %s\n", dev_name(&genpd->dev));
+ return -EBUSY;
+ }
+
+ if (!list_empty(&genpd->parent_links) || genpd->device_count) {
+ genpd_unlock(genpd);
+- pr_err("%s: unable to remove %s\n", __func__, genpd->name);
++ pr_err("%s: unable to remove %s\n", __func__, dev_name(&genpd->dev));
+ return -EBUSY;
+ }
+
+@@ -2262,9 +2279,11 @@ static int genpd_remove(struct generic_pm_domain *genpd)
+ genpd_unlock(genpd);
+ genpd_debug_remove(genpd);
+ cancel_work_sync(&genpd->power_off_work);
++ if (genpd->device_id != -ENXIO)
++ ida_free(&genpd_ida, genpd->device_id);
+ genpd_free_data(genpd);
+
+- pr_debug("%s: removed %s\n", __func__, genpd->name);
++ pr_debug("%s: removed %s\n", __func__, dev_name(&genpd->dev));
+
+ return 0;
+ }
+@@ -3226,12 +3245,12 @@ static int genpd_summary_one(struct seq_file *s,
+ else
+ snprintf(state, sizeof(state), "%s",
+ status_lookup[genpd->status]);
+- seq_printf(s, "%-30s %-30s %u", genpd->name, state, genpd->performance_state);
++ seq_printf(s, "%-30s %-30s %u", dev_name(&genpd->dev), state, genpd->performance_state);
+
+ /*
+ * Modifications on the list require holding locks on both
+ * parent and child, so we are safe.
+- * Also genpd->name is immutable.
++ * Also the device name is immutable.
+ */
+ list_for_each_entry(link, &genpd->parent_links, parent_node) {
+ if (list_is_first(&link->parent_node, &genpd->parent_links))
+@@ -3456,7 +3475,7 @@ static void genpd_debug_add(struct generic_pm_domain *genpd)
+ if (!genpd_debugfs_dir)
+ return;
+
+- d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);
++ d = debugfs_create_dir(dev_name(&genpd->dev), genpd_debugfs_dir);
+
+ debugfs_create_file("current_state", 0444,
+ d, genpd, &status_fops);
+diff --git a/drivers/pmdomain/imx/imx93-blk-ctrl.c b/drivers/pmdomain/imx/imx93-blk-ctrl.c
+index 904ffa55b8f4d3..b10348ac10f06d 100644
+--- a/drivers/pmdomain/imx/imx93-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx93-blk-ctrl.c
+@@ -313,7 +313,9 @@ static void imx93_blk_ctrl_remove(struct platform_device *pdev)
+
+ of_genpd_del_provider(pdev->dev.of_node);
+
+- for (i = 0; bc->onecell_data.num_domains; i++) {
++ pm_runtime_disable(&pdev->dev);
++
++ for (i = 0; i < bc->onecell_data.num_domains; i++) {
+ struct imx93_blk_ctrl_domain *domain = &bc->domains[i];
+
+ pm_genpd_remove(&domain->genpd);
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index bf56f3d6962533..f1da0b7e08e9e0 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -232,7 +232,7 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ struct page *pg;
+ unsigned int nsg;
+ int sglen;
+- u64 pa;
++ u64 pa, offset;
+ u64 paend;
+ struct scatterlist *sg;
+ struct device *dma = mvdev->vdev.dma_dev;
+@@ -255,8 +255,10 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ sg = mr->sg_head.sgl;
+ for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
+ map; map = vhost_iotlb_itree_next(map, mr->start, mr->end - 1)) {
+- paend = map->addr + maplen(map, mr);
+- for (pa = map->addr; pa < paend; pa += sglen) {
++ offset = mr->start > map->start ? mr->start - map->start : 0;
++ pa = map->addr + offset;
++ paend = map->addr + offset + maplen(map, mr);
++ for (; pa < paend; pa += sglen) {
+ pg = pfn_to_page(__phys_to_pfn(pa));
+ if (!sg) {
+ mlx5_vdpa_warn(mvdev, "sg null. start 0x%llx, end 0x%llx\n",
+diff --git a/drivers/vdpa/solidrun/snet_main.c b/drivers/vdpa/solidrun/snet_main.c
+index 99428a04068d2d..c8b74980dbd172 100644
+--- a/drivers/vdpa/solidrun/snet_main.c
++++ b/drivers/vdpa/solidrun/snet_main.c
+@@ -555,7 +555,7 @@ static const struct vdpa_config_ops snet_config_ops = {
+
+ static int psnet_open_pf_bar(struct pci_dev *pdev, struct psnet *psnet)
+ {
+- char name[50];
++ char *name;
+ int ret, i, mask = 0;
+ /* We don't know which BAR will be used to communicate..
+ * We will map every bar with len > 0.
+@@ -573,7 +573,10 @@ static int psnet_open_pf_bar(struct pci_dev *pdev, struct psnet *psnet)
+ return -ENODEV;
+ }
+
+- snprintf(name, sizeof(name), "psnet[%s]-bars", pci_name(pdev));
++ name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "psnet[%s]-bars", pci_name(pdev));
++ if (!name)
++ return -ENOMEM;
++
+ ret = pcim_iomap_regions(pdev, mask, name);
+ if (ret) {
+ SNET_ERR(pdev, "Failed to request and map PCI BARs\n");
+@@ -590,10 +593,13 @@ static int psnet_open_pf_bar(struct pci_dev *pdev, struct psnet *psnet)
+
+ static int snet_open_vf_bar(struct pci_dev *pdev, struct snet *snet)
+ {
+- char name[50];
++ char *name;
+ int ret;
+
+- snprintf(name, sizeof(name), "snet[%s]-bar", pci_name(pdev));
++ name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "snet[%s]-bars", pci_name(pdev));
++ if (!name)
++ return -ENOMEM;
++
+ /* Request and map BAR */
+ ret = pcim_iomap_regions(pdev, BIT(snet->psnet->cfg.vf_bar), name);
+ if (ret) {
+diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
+index ac4ab22f7d8b94..16380764275ea0 100644
+--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
++++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
+@@ -612,7 +612,11 @@ static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ goto mdev_err;
+ }
+
+- mdev_id = kzalloc(sizeof(struct virtio_device_id), GFP_KERNEL);
++ /*
++ * id_table should be a null terminated array, so allocate one additional
++ * entry here, see vdpa_mgmtdev_get_classes().
++ */
++ mdev_id = kcalloc(2, sizeof(struct virtio_device_id), GFP_KERNEL);
+ if (!mdev_id) {
+ err = -ENOMEM;
+ goto mdev_id_err;
+@@ -632,8 +636,8 @@ static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ goto probe_err;
+ }
+
+- mdev_id->device = mdev->id.device;
+- mdev_id->vendor = mdev->id.vendor;
++ mdev_id[0].device = mdev->id.device;
++ mdev_id[0].vendor = mdev->id.vendor;
+ mgtdev->id_table = mdev_id;
+ mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);
+ mgtdev->supported_features = vp_modern_get_features(mdev);
+diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
+index 7a2a4fdc07f1ff..af65d113853bcc 100644
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -298,7 +298,7 @@ static int comp_refs(struct btrfs_delayed_ref_node *ref1,
+ if (ref1->ref_root < ref2->ref_root)
+ return -1;
+ if (ref1->ref_root > ref2->ref_root)
+- return -1;
++ return 1;
+ if (ref1->type == BTRFS_EXTENT_DATA_REF_KEY)
+ ret = comp_data_refs(ref1, ref2);
+ }
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index c034080c334b98..5f894459bafefe 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -68,7 +68,6 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ goto failed;
+ }
+ memset(bh->b_data, 0, i_blocksize(inode));
+- bh->b_bdev = inode->i_sb->s_bdev;
+ bh->b_blocknr = blocknr;
+ set_buffer_mapped(bh);
+ set_buffer_uptodate(bh);
+@@ -133,7 +132,6 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
+ goto found;
+ }
+ set_buffer_mapped(bh);
+- bh->b_bdev = inode->i_sb->s_bdev;
+ bh->b_blocknr = pblocknr; /* set block address for read */
+ bh->b_end_io = end_buffer_read_sync;
+ get_bh(bh);
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index 1c9ae36a03abe9..ace22253fed0f2 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -83,10 +83,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+ goto out;
+ }
+
+- if (!buffer_mapped(bh)) {
+- bh->b_bdev = inode->i_sb->s_bdev;
++ if (!buffer_mapped(bh))
+ set_buffer_mapped(bh);
+- }
+ bh->b_blocknr = pbn;
+ bh->b_end_io = end_buffer_read_sync;
+ get_bh(bh);
+diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c
+index 4f792a0ad0f0ff..8e66f2c13ad658 100644
+--- a/fs/nilfs2/mdt.c
++++ b/fs/nilfs2/mdt.c
+@@ -89,7 +89,6 @@ static int nilfs_mdt_create_block(struct inode *inode, unsigned long block,
+ if (buffer_uptodate(bh))
+ goto failed_bh;
+
+- bh->b_bdev = sb->s_bdev;
+ err = nilfs_mdt_insert_new_block(inode, block, bh, init_block);
+ if (likely(!err)) {
+ get_bh(bh);
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 097e2c7f7f718c..d30114a713608d 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -39,7 +39,6 @@ static struct buffer_head *__nilfs_get_folio_block(struct folio *folio,
+ first_block = (unsigned long)index << (PAGE_SHIFT - blkbits);
+ bh = get_nth_bh(bh, block - first_block);
+
+- touch_buffer(bh);
+ wait_on_buffer(bh);
+ return bh;
+ }
+@@ -64,6 +63,7 @@ struct buffer_head *nilfs_grab_buffer(struct inode *inode,
+ folio_put(folio);
+ return NULL;
+ }
++ bh->b_bdev = inode->i_sb->s_bdev;
+ return bh;
+ }
+
+diff --git a/fs/ocfs2/resize.c b/fs/ocfs2/resize.c
+index c4a4016d3866a3..b0733c08ed13b1 100644
+--- a/fs/ocfs2/resize.c
++++ b/fs/ocfs2/resize.c
+@@ -574,6 +574,8 @@ int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input)
+ ocfs2_commit_trans(osb, handle);
+
+ out_free_group_bh:
++ if (ret < 0)
++ ocfs2_remove_from_cache(INODE_CACHE(inode), group_bh);
+ brelse(group_bh);
+
+ out_unlock:
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index afee70125ae3bf..ded6d0b3e02336 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2321,6 +2321,7 @@ static int ocfs2_verify_volume(struct ocfs2_dinode *di,
+ struct ocfs2_blockcheck_stats *stats)
+ {
+ int status = -EAGAIN;
++ u32 blksz_bits;
+
+ if (memcmp(di->i_signature, OCFS2_SUPER_BLOCK_SIGNATURE,
+ strlen(OCFS2_SUPER_BLOCK_SIGNATURE)) == 0) {
+@@ -2335,11 +2336,15 @@ static int ocfs2_verify_volume(struct ocfs2_dinode *di,
+ goto out;
+ }
+ status = -EINVAL;
+- if ((1 << le32_to_cpu(di->id2.i_super.s_blocksize_bits)) != blksz) {
++ /* Acceptable block sizes are 512 bytes, 1K, 2K and 4K. */
++ blksz_bits = le32_to_cpu(di->id2.i_super.s_blocksize_bits);
++ if (blksz_bits < 9 || blksz_bits > 12) {
+ mlog(ML_ERROR, "found superblock with incorrect block "
+- "size: found %u, should be %u\n",
+- 1 << le32_to_cpu(di->id2.i_super.s_blocksize_bits),
+- blksz);
++ "size bits: found %u, should be 9, 10, 11, or 12\n",
++ blksz_bits);
++ } else if ((1 << le32_to_cpu(blksz_bits)) != blksz) {
++ mlog(ML_ERROR, "found superblock with incorrect block "
++ "size: found %u, should be %u\n", 1 << blksz_bits, blksz);
+ } else if (le16_to_cpu(di->id2.i_super.s_major_rev_level) !=
+ OCFS2_MAJOR_REV_LEVEL ||
+ le16_to_cpu(di->id2.i_super.s_minor_rev_level) !=
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 5f171ad7b436be..164247e724dcdf 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -2672,8 +2672,10 @@ static int pagemap_scan_get_args(struct pm_scan_arg *arg,
+ return -EFAULT;
+ if (!arg->vec && arg->vec_len)
+ return -EINVAL;
++ if (UINT_MAX == SIZE_MAX && arg->vec_len > SIZE_MAX)
++ return -EINVAL;
+ if (arg->vec && !access_ok((void __user *)(long)arg->vec,
+- arg->vec_len * sizeof(struct page_region)))
++ size_mul(arg->vec_len, sizeof(struct page_region))))
+ return -EFAULT;
+
+ /* Fixup default values */
+diff --git a/include/drm/intel/i915_pciids.h b/include/drm/intel/i915_pciids.h
+index 2bf03ebfcf73da..f35534522d3338 100644
+--- a/include/drm/intel/i915_pciids.h
++++ b/include/drm/intel/i915_pciids.h
+@@ -771,13 +771,24 @@
+ INTEL_ATS_M150_IDS(MACRO__, ## __VA_ARGS__), \
+ INTEL_ATS_M75_IDS(MACRO__, ## __VA_ARGS__)
+
+-/* MTL */
+-#define INTEL_ARL_IDS(MACRO__, ...) \
+- MACRO__(0x7D41, ## __VA_ARGS__), \
++/* ARL */
++#define INTEL_ARL_H_IDS(MACRO__, ...) \
+ MACRO__(0x7D51, ## __VA_ARGS__), \
+- MACRO__(0x7D67, ## __VA_ARGS__), \
+ MACRO__(0x7DD1, ## __VA_ARGS__)
+
++#define INTEL_ARL_U_IDS(MACRO__, ...) \
++ MACRO__(0x7D41, ## __VA_ARGS__) \
++
++#define INTEL_ARL_S_IDS(MACRO__, ...) \
++ MACRO__(0x7D67, ## __VA_ARGS__), \
++ MACRO__(0xB640, ## __VA_ARGS__)
++
++#define INTEL_ARL_IDS(MACRO__, ...) \
++ INTEL_ARL_H_IDS(MACRO__, ## __VA_ARGS__), \
++ INTEL_ARL_U_IDS(MACRO__, ## __VA_ARGS__), \
++ INTEL_ARL_S_IDS(MACRO__, ## __VA_ARGS__)
++
++/* MTL */
+ #define INTEL_MTL_IDS(MACRO__, ...) \
+ INTEL_ARL_IDS(MACRO__, ## __VA_ARGS__), \
+ MACRO__(0x7D40, ## __VA_ARGS__), \
+diff --git a/include/linux/mman.h b/include/linux/mman.h
+index bcb201ab7a412e..c2748703431558 100644
+--- a/include/linux/mman.h
++++ b/include/linux/mman.h
+@@ -2,6 +2,7 @@
+ #ifndef _LINUX_MMAN_H
+ #define _LINUX_MMAN_H
+
++#include <linux/fs.h>
+ #include <linux/mm.h>
+ #include <linux/percpu_counter.h>
+
+@@ -94,7 +95,7 @@ static inline void vm_unacct_memory(long pages)
+ #endif
+
+ #ifndef arch_calc_vm_flag_bits
+-#define arch_calc_vm_flag_bits(flags) 0
++#define arch_calc_vm_flag_bits(file, flags) 0
+ #endif
+
+ #ifndef arch_validate_prot
+@@ -151,13 +152,13 @@ calc_vm_prot_bits(unsigned long prot, unsigned long pkey)
+ * Combine the mmap "flags" argument into "vm_flags" used internally.
+ */
+ static inline unsigned long
+-calc_vm_flag_bits(unsigned long flags)
++calc_vm_flag_bits(struct file *file, unsigned long flags)
+ {
+ return _calc_vm_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN ) |
+ _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED ) |
+ _calc_vm_trans(flags, MAP_SYNC, VM_SYNC ) |
+ _calc_vm_trans(flags, MAP_STACK, VM_NOHUGEPAGE) |
+- arch_calc_vm_flag_bits(flags);
++ arch_calc_vm_flag_bits(file, flags);
+ }
+
+ unsigned long vm_commit_limit(void);
+diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
+index 858c8e7851fb5d..f63faa12bba0f7 100644
+--- a/include/linux/pm_domain.h
++++ b/include/linux/pm_domain.h
+@@ -92,6 +92,10 @@ struct dev_pm_domain_list {
+ * GENPD_FLAG_OPP_TABLE_FW: The genpd provider supports performance states,
+ * but its corresponding OPP tables are not
+ * described in DT, but are given directly by FW.
++ *
++ * GENPD_FLAG_DEV_NAME_FW: Instructs genpd to generate a unique device name
++ * using ida. It is used by genpd providers which
++ * get their genpd-names directly from FW.
+ */
+ #define GENPD_FLAG_PM_CLK (1U << 0)
+ #define GENPD_FLAG_IRQ_SAFE (1U << 1)
+@@ -101,6 +105,7 @@ struct dev_pm_domain_list {
+ #define GENPD_FLAG_RPM_ALWAYS_ON (1U << 5)
+ #define GENPD_FLAG_MIN_RESIDENCY (1U << 6)
+ #define GENPD_FLAG_OPP_TABLE_FW (1U << 7)
++#define GENPD_FLAG_DEV_NAME_FW (1U << 8)
+
+ enum gpd_status {
+ GENPD_STATE_ON = 0, /* PM domain is on */
+@@ -163,6 +168,7 @@ struct generic_pm_domain {
+ atomic_t sd_count; /* Number of subdomains with power "on" */
+ enum gpd_status status; /* Current state of the domain */
+ unsigned int device_count; /* Number of devices */
++ unsigned int device_id; /* unique device id */
+ unsigned int suspended_count; /* System suspend device counter */
+ unsigned int prepared_count; /* Suspend counter of prepared devices */
+ unsigned int performance_state; /* Aggregated max performance state */
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index ccd72b978e1fc7..4ac64f68dcf03f 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -9,6 +9,7 @@
+ #include <linux/sched.h>
+ #include <linux/magic.h>
+ #include <linux/refcount.h>
++#include <linux/kasan.h>
+
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+
+@@ -89,6 +90,7 @@ static inline int object_is_on_stack(const void *obj)
+ {
+ void *stack = task_stack_page(current);
+
++ obj = kasan_reset_tag(obj);
+ return (obj >= stack) && (obj < (stack + THREAD_SIZE));
+ }
+
+diff --git a/include/linux/sockptr.h b/include/linux/sockptr.h
+index fc5a206c40435f..195debe2b1dbc5 100644
+--- a/include/linux/sockptr.h
++++ b/include/linux/sockptr.h
+@@ -77,7 +77,9 @@ static inline int copy_safe_from_sockptr(void *dst, size_t ksize,
+ {
+ if (optlen < ksize)
+ return -EINVAL;
+- return copy_from_sockptr(dst, optval, ksize);
++ if (copy_from_sockptr(dst, optval, ksize))
++ return -EFAULT;
++ return 0;
+ }
+
+ static inline int copy_struct_from_sockptr(void *dst, size_t ksize,
+diff --git a/include/net/bond_options.h b/include/net/bond_options.h
+index 473a0147769eb9..18687ccf063830 100644
+--- a/include/net/bond_options.h
++++ b/include/net/bond_options.h
+@@ -161,5 +161,7 @@ void bond_option_arp_ip_targets_clear(struct bonding *bond);
+ #if IS_ENABLED(CONFIG_IPV6)
+ void bond_option_ns_ip6_targets_clear(struct bonding *bond);
+ #endif
++void bond_slave_ns_maddrs_add(struct bonding *bond, struct slave *slave);
++void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave);
+
+ #endif /* _NET_BOND_OPTIONS_H */
+diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
+index 6c34e63c88ff4c..4d111f87195167 100644
+--- a/kernel/Kconfig.kexec
++++ b/kernel/Kconfig.kexec
+@@ -97,7 +97,7 @@ config KEXEC_JUMP
+
+ config CRASH_DUMP
+ bool "kernel crash dumps"
+- default y
++ default ARCH_DEFAULT_CRASH_DUMP
+ depends on ARCH_SUPPORTS_CRASH_DUMP
+ depends on KEXEC_CORE
+ select VMCORE_INFO
+diff --git a/lib/buildid.c b/lib/buildid.c
+index 26007cc99a38f6..aee749f2647da0 100644
+--- a/lib/buildid.c
++++ b/lib/buildid.c
+@@ -40,7 +40,7 @@ static int parse_build_id_buf(unsigned char *build_id,
+ name_sz == note_name_sz &&
+ memcmp(nhdr + 1, note_name, note_name_sz) == 0 &&
+ desc_sz > 0 && desc_sz <= BUILD_ID_SIZE_MAX) {
+- data = note_start + note_off + ALIGN(note_name_sz, 4);
++ data = note_start + note_off + sizeof(Elf32_Nhdr) + ALIGN(note_name_sz, 4);
+ memcpy(build_id, data, desc_sz);
+ memset(build_id + desc_sz, 0, BUILD_ID_SIZE_MAX - desc_sz);
+ if (size)
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 8a04f29aa4230d..ccebd17fb48f6f 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1316,7 +1316,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ * to. we assume access permissions have been handled by the open
+ * of the memory object, so we don't do any here.
+ */
+- vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
++ vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(file, flags) |
+ mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
+
+ /* Obtain the address to map to. we verify (or select) it and ensure
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 3ca167d84c5655..ffed8584deb1a0 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -648,7 +648,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ * Prevent negative return values when {old,new}_addr was realigned
+ * but we broke out of the above loop for the first PMD itself.
+ */
+- if (len + old_addr < old_end)
++ if (old_addr < old_end - len)
+ return 0;
+
+ return len + old_addr - old_end; /* how much done */
+diff --git a/mm/nommu.c b/mm/nommu.c
+index 7296e775e04e29..50100b909187af 100644
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -568,7 +568,7 @@ static int delete_vma_from_mm(struct vm_area_struct *vma)
+ VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_start);
+
+ vma_iter_config(&vmi, vma->vm_start, vma->vm_end);
+- if (vma_iter_prealloc(&vmi, vma)) {
++ if (vma_iter_prealloc(&vmi, NULL)) {
+ pr_warn("Allocation of vma tree for process %d failed\n",
+ current->pid);
+ return -ENOMEM;
+@@ -838,7 +838,7 @@ static unsigned long determine_vm_flags(struct file *file,
+ {
+ unsigned long vm_flags;
+
+- vm_flags = calc_vm_prot_bits(prot, 0) | calc_vm_flag_bits(flags);
++ vm_flags = calc_vm_prot_bits(prot, 0) | calc_vm_flag_bits(file, flags);
+
+ if (!file) {
+ /*
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index f9111356d1047b..2f19464db66e05 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1040,6 +1040,7 @@ __always_inline bool free_pages_prepare(struct page *page,
+ bool skip_kasan_poison = should_skip_kasan_poison(page);
+ bool init = want_init_on_free();
+ bool compound = PageCompound(page);
++ struct folio *folio = page_folio(page);
+
+ VM_BUG_ON_PAGE(PageTail(page), page);
+
+@@ -1049,6 +1050,20 @@ __always_inline bool free_pages_prepare(struct page *page,
+ if (memcg_kmem_online() && PageMemcgKmem(page))
+ __memcg_kmem_uncharge_page(page, order);
+
++ /*
++ * In rare cases, when truncation or holepunching raced with
++ * munlock after VM_LOCKED was cleared, Mlocked may still be
++ * found set here. This does not indicate a problem, unless
++ * "unevictable_pgs_cleared" appears worryingly large.
++ */
++ if (unlikely(folio_test_mlocked(folio))) {
++ long nr_pages = folio_nr_pages(folio);
++
++ __folio_clear_mlocked(folio);
++ zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
++ count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
++ }
++
+ if (unlikely(PageHWPoison(page)) && !order) {
+ /* Do not let hwpoison pages hit pcplists/buddy */
+ reset_page_owner(page, order);
+@@ -4569,7 +4584,8 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
+ gfp = alloc_gfp;
+
+ /* Find an allowed local zone that meets the low watermark. */
+- for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
++ z = ac.preferred_zoneref;
++ for_next_zone_zonelist_nodemask(zone, z, ac.highest_zoneidx, ac.nodemask) {
+ unsigned long mark;
+
+ if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 941d1271395202..67f2ae6a8f0f36 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1163,9 +1163,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
+ stat->attributes_mask |= (STATX_ATTR_APPEND |
+ STATX_ATTR_IMMUTABLE |
+ STATX_ATTR_NODUMP);
+- inode_lock_shared(inode);
+ generic_fillattr(idmap, request_mask, inode, stat);
+- inode_unlock_shared(inode);
+
+ if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
+ stat->blksize = HPAGE_PMD_SIZE;
+@@ -2599,9 +2597,6 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
+ if (ret)
+ return ret;
+
+- /* arm64 - allow memory tagging on RAM-based files */
+- vm_flags_set(vma, VM_MTE_ALLOWED);
+-
+ file_accessed(file);
+ /* This is anonymous shared memory if it is unlinked at the time of mmap */
+ if (inode->i_nlink)
+diff --git a/mm/swap.c b/mm/swap.c
+index 1e734a5a6453e2..5d1850adf76a13 100644
+--- a/mm/swap.c
++++ b/mm/swap.c
+@@ -82,20 +82,6 @@ static void __page_cache_release(struct folio *folio, struct lruvec **lruvecp,
+ lruvec_del_folio(*lruvecp, folio);
+ __folio_clear_lru_flags(folio);
+ }
+-
+- /*
+- * In rare cases, when truncation or holepunching raced with
+- * munlock after VM_LOCKED was cleared, Mlocked may still be
+- * found set here. This does not indicate a problem, unless
+- * "unevictable_pgs_cleared" appears worryingly large.
+- */
+- if (unlikely(folio_test_mlocked(folio))) {
+- long nr_pages = folio_nr_pages(folio);
+-
+- __folio_clear_mlocked(folio);
+- zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
+- count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
+- }
+ }
+
+ /*
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 6e07350817bec2..eeb4f025ca3bfc 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3788,8 +3788,6 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+
+ hci_dev_lock(hdev);
+ conn = hci_conn_hash_lookup_handle(hdev, handle);
+- if (conn && hci_dev_test_flag(hdev, HCI_MGMT))
+- mgmt_device_connected(hdev, conn, NULL, 0);
+ hci_dev_unlock(hdev);
+
+ if (conn) {
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index da5dba120bc9a5..d6649246188d72 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -618,7 +618,7 @@ static int dccp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ by tcp. Feel free to propose better solution.
+ --ANK (980728)
+ */
+- if (np->rxopt.all)
++ if (np->rxopt.all && sk->sk_state != DCCP_LISTEN)
+ opt_skb = skb_clone_and_charge_r(skb, sk);
+
+ if (sk->sk_state == DCCP_OPEN) { /* Fast path */
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 200fea92f12fc9..84cd46311da092 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1617,7 +1617,7 @@ int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ by tcp. Feel free to propose better solution.
+ --ANK (980728)
+ */
+- if (np->rxopt.all)
++ if (np->rxopt.all && sk->sk_state != TCP_LISTEN)
+ opt_skb = skb_clone_and_charge_r(skb, sk);
+
+ if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
+@@ -1655,8 +1655,6 @@ int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
+ if (reason)
+ goto reset;
+ }
+- if (opt_skb)
+- __kfree_skb(opt_skb);
+ return 0;
+ }
+ } else
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index ba99484c501ed2..a09120d2eb8075 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -520,7 +520,8 @@ __lookup_addr(struct pm_nl_pernet *pernet, const struct mptcp_addr_info *info)
+ {
+ struct mptcp_pm_addr_entry *entry;
+
+- list_for_each_entry(entry, &pernet->local_addr_list, list) {
++ list_for_each_entry_rcu(entry, &pernet->local_addr_list, list,
++ lockdep_is_held(&pernet->lock)) {
+ if (mptcp_addresses_equal(&entry->addr, info, entry->addr.port))
+ return entry;
+ }
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 8f3b01d46d243f..0ba145188e6b09 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -325,14 +325,17 @@ int mptcp_pm_nl_remove_doit(struct sk_buff *skb, struct genl_info *info)
+
+ lock_sock(sk);
+
++ spin_lock_bh(&msk->pm.lock);
+ match = mptcp_userspace_pm_lookup_addr_by_id(msk, id_val);
+ if (!match) {
+ GENL_SET_ERR_MSG(info, "address with specified id not found");
++ spin_unlock_bh(&msk->pm.lock);
+ release_sock(sk);
+ goto out;
+ }
+
+ list_move(&match->list, &free_list);
++ spin_unlock_bh(&msk->pm.lock);
+
+ mptcp_pm_remove_addrs(msk, &free_list);
+
+@@ -574,6 +577,7 @@ int mptcp_userspace_pm_set_flags(struct sk_buff *skb, struct genl_info *info)
+ struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
+ struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR];
+ struct net *net = sock_net(skb->sk);
++ struct mptcp_pm_addr_entry *entry;
+ struct mptcp_sock *msk;
+ int ret = -EINVAL;
+ struct sock *sk;
+@@ -615,6 +619,17 @@ int mptcp_userspace_pm_set_flags(struct sk_buff *skb, struct genl_info *info)
+ if (loc.flags & MPTCP_PM_ADDR_FLAG_BACKUP)
+ bkup = 1;
+
++ spin_lock_bh(&msk->pm.lock);
++ list_for_each_entry(entry, &msk->pm.userspace_pm_local_addr_list, list) {
++ if (mptcp_addresses_equal(&entry->addr, &loc.addr, false)) {
++ if (bkup)
++ entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
++ else
++ entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
++ }
++ }
++ spin_unlock_bh(&msk->pm.lock);
++
+ lock_sock(sk);
+ ret = mptcp_pm_nl_mp_prio_send_ack(msk, &loc.addr, &rem.addr, bkup);
+ release_sock(sk);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index ec87b36f0d451a..7913ba6b5daa38 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2082,7 +2082,8 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
+ slow = lock_sock_fast(ssk);
+ WRITE_ONCE(ssk->sk_rcvbuf, rcvbuf);
+ WRITE_ONCE(tcp_sk(ssk)->window_clamp, window_clamp);
+- tcp_cleanup_rbuf(ssk, 1);
++ if (tcp_can_send_ack(ssk))
++ tcp_cleanup_rbuf(ssk, 1);
+ unlock_sock_fast(ssk, slow);
+ }
+ }
+@@ -2205,7 +2206,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ cmsg_flags = MPTCP_CMSG_INQ;
+
+ while (copied < len) {
+- int bytes_read;
++ int err, bytes_read;
+
+ bytes_read = __mptcp_recvmsg_mskq(msk, msg, len - copied, flags, &tss, &cmsg_flags);
+ if (unlikely(bytes_read < 0)) {
+@@ -2267,9 +2268,16 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ }
+
+ pr_debug("block timeout %ld\n", timeo);
+- sk_wait_data(sk, &timeo, NULL);
++ mptcp_rcv_space_adjust(msk, copied);
++ err = sk_wait_data(sk, &timeo, NULL);
++ if (err < 0) {
++ err = copied ? : err;
++ goto out_err;
++ }
+ }
+
++ mptcp_rcv_space_adjust(msk, copied);
++
+ out_err:
+ if (cmsg_flags && copied >= 0) {
+ if (cmsg_flags & MPTCP_CMSG_TS)
+@@ -2285,8 +2293,6 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ pr_debug("msk=%p rx queue empty=%d:%d copied=%d\n",
+ msk, skb_queue_empty_lockless(&sk->sk_receive_queue),
+ skb_queue_empty(&msk->receive_queue), copied);
+- if (!(flags & MSG_PEEK))
+- mptcp_rcv_space_adjust(msk, copied);
+
+ release_sock(sk);
+ return copied;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 0a9287fadb47a2..f84aad420d4464 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -393,15 +393,6 @@ static void netlink_skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
+
+ static void netlink_sock_destruct(struct sock *sk)
+ {
+- struct netlink_sock *nlk = nlk_sk(sk);
+-
+- if (nlk->cb_running) {
+- if (nlk->cb.done)
+- nlk->cb.done(&nlk->cb);
+- module_put(nlk->cb.module);
+- kfree_skb(nlk->cb.skb);
+- }
+-
+ skb_queue_purge(&sk->sk_receive_queue);
+
+ if (!sock_flag(sk, SOCK_DEAD)) {
+@@ -414,14 +405,6 @@ static void netlink_sock_destruct(struct sock *sk)
+ WARN_ON(nlk_sk(sk)->groups);
+ }
+
+-static void netlink_sock_destruct_work(struct work_struct *work)
+-{
+- struct netlink_sock *nlk = container_of(work, struct netlink_sock,
+- work);
+-
+- sk_free(&nlk->sk);
+-}
+-
+ /* This lock without WQ_FLAG_EXCLUSIVE is good on UP and it is _very_ bad on
+ * SMP. Look, when several writers sleep and reader wakes them up, all but one
+ * immediately hit write lock and grab all the cpus. Exclusive sleep solves
+@@ -731,12 +714,6 @@ static void deferred_put_nlk_sk(struct rcu_head *head)
+ if (!refcount_dec_and_test(&sk->sk_refcnt))
+ return;
+
+- if (nlk->cb_running && nlk->cb.done) {
+- INIT_WORK(&nlk->work, netlink_sock_destruct_work);
+- schedule_work(&nlk->work);
+- return;
+- }
+-
+ sk_free(sk);
+ }
+
+@@ -788,6 +765,14 @@ static int netlink_release(struct socket *sock)
+ NETLINK_URELEASE, &n);
+ }
+
++ /* Terminate any outstanding dump */
++ if (nlk->cb_running) {
++ if (nlk->cb.done)
++ nlk->cb.done(&nlk->cb);
++ module_put(nlk->cb.module);
++ kfree_skb(nlk->cb.skb);
++ }
++
+ module_put(nlk->module);
+
+ if (netlink_is_kernel(sk)) {
+diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
+index 9751e29d4bbb9a..b1a17c0d97a103 100644
+--- a/net/netlink/af_netlink.h
++++ b/net/netlink/af_netlink.h
+@@ -4,7 +4,6 @@
+
+ #include <linux/rhashtable.h>
+ #include <linux/atomic.h>
+-#include <linux/workqueue.h>
+ #include <net/sock.h>
+
+ /* flags */
+@@ -51,7 +50,6 @@ struct netlink_sock {
+
+ struct rhash_head node;
+ struct rcu_head rcu;
+- struct work_struct work;
+ };
+
+ static inline struct netlink_sock *nlk_sk(struct sock *sk)
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 9412d88a99bc12..d3a03c57545bcc 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -92,6 +92,16 @@ struct tc_u_common {
+ long knodes;
+ };
+
++static u32 handle2id(u32 h)
++{
++ return ((h & 0x80000000) ? ((h >> 20) & 0x7FF) : h);
++}
++
++static u32 id2handle(u32 id)
++{
++ return (id | 0x800U) << 20;
++}
++
+ static inline unsigned int u32_hash_fold(__be32 key,
+ const struct tc_u32_sel *sel,
+ u8 fshift)
+@@ -310,7 +320,7 @@ static u32 gen_new_htid(struct tc_u_common *tp_c, struct tc_u_hnode *ptr)
+ int id = idr_alloc_cyclic(&tp_c->handle_idr, ptr, 1, 0x7FF, GFP_KERNEL);
+ if (id < 0)
+ return 0;
+- return (id | 0x800U) << 20;
++ return id2handle(id);
+ }
+
+ static struct hlist_head *tc_u_common_hash;
+@@ -360,7 +370,7 @@ static int u32_init(struct tcf_proto *tp)
+ return -ENOBUFS;
+
+ refcount_set(&root_ht->refcnt, 1);
+- root_ht->handle = tp_c ? gen_new_htid(tp_c, root_ht) : 0x80000000;
++ root_ht->handle = tp_c ? gen_new_htid(tp_c, root_ht) : id2handle(0);
+ root_ht->prio = tp->prio;
+ root_ht->is_root = true;
+ idr_init(&root_ht->handle_idr);
+@@ -612,7 +622,7 @@ static int u32_destroy_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht,
+ if (phn == ht) {
+ u32_clear_hw_hnode(tp, ht, extack);
+ idr_destroy(&ht->handle_idr);
+- idr_remove(&tp_c->handle_idr, ht->handle);
++ idr_remove(&tp_c->handle_idr, handle2id(ht->handle));
+ RCU_INIT_POINTER(*hn, ht->next);
+ kfree_rcu(ht, rcu);
+ return 0;
+@@ -989,7 +999,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+
+ err = u32_replace_hw_hnode(tp, ht, userflags, extack);
+ if (err) {
+- idr_remove(&tp_c->handle_idr, handle);
++ idr_remove(&tp_c->handle_idr, handle2id(handle));
+ kfree(ht);
+ return err;
+ }
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index f7b809c0d142c0..38e2fbdcbeac4b 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -683,7 +683,7 @@ static int sctp_v6_available(union sctp_addr *addr, struct sctp_sock *sp)
+ struct sock *sk = &sp->inet.sk;
+ struct net *net = sock_net(sk);
+ struct net_device *dev = NULL;
+- int type;
++ int type, res, bound_dev_if;
+
+ type = ipv6_addr_type(in6);
+ if (IPV6_ADDR_ANY == type)
+@@ -697,14 +697,21 @@ static int sctp_v6_available(union sctp_addr *addr, struct sctp_sock *sp)
+ if (!(type & IPV6_ADDR_UNICAST))
+ return 0;
+
+- if (sk->sk_bound_dev_if) {
+- dev = dev_get_by_index_rcu(net, sk->sk_bound_dev_if);
++ rcu_read_lock();
++ bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++ if (bound_dev_if) {
++ res = 0;
++ dev = dev_get_by_index_rcu(net, bound_dev_if);
+ if (!dev)
+- return 0;
++ goto out;
+ }
+
+- return ipv6_can_nonlocal_bind(net, &sp->inet) ||
+- ipv6_chk_addr(net, in6, dev, 0);
++ res = ipv6_can_nonlocal_bind(net, &sp->inet) ||
++ ipv6_chk_addr(net, in6, dev, 0);
++
++out:
++ rcu_read_unlock();
++ return res;
+ }
+
+ /* This function checks if the address is a valid address to be used for
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 0ff9b2dd86bac2..a0202d9b479217 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -835,6 +835,9 @@ static void vsock_sk_destruct(struct sock *sk)
+ {
+ struct vsock_sock *vsk = vsock_sk(sk);
+
++ /* Flush MSG_ZEROCOPY leftovers. */
++ __skb_queue_purge(&sk->sk_error_queue);
++
+ vsock_deassign_transport(vsk);
+
+ /* When clearing these addresses, there's no need to set the family and
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 01b6b1ed5acfb8..0211964e45459d 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -400,6 +400,7 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
+ if (virtio_transport_init_zcopy_skb(vsk, skb,
+ info->msg,
+ can_zcopy)) {
++ kfree_skb(skb);
+ ret = -ENOMEM;
+ break;
+ }
+@@ -1478,6 +1479,14 @@ virtio_transport_recv_listen(struct sock *sk, struct sk_buff *skb,
+ return -ENOMEM;
+ }
+
++ /* __vsock_release() might have already flushed accept_queue.
++ * Subsequent enqueues would lead to a memory leak.
++ */
++ if (sk->sk_shutdown == SHUTDOWN_MASK) {
++ virtio_transport_reset_no_sock(t, skb);
++ return -ESHUTDOWN;
++ }
++
+ child = vsock_create_connected(sk);
+ if (!child) {
+ virtio_transport_reset_no_sock(t, skb);
+diff --git a/samples/pktgen/pktgen_sample01_simple.sh b/samples/pktgen/pktgen_sample01_simple.sh
+index cdb9f497f87da7..66cb707479e6c5 100755
+--- a/samples/pktgen/pktgen_sample01_simple.sh
++++ b/samples/pktgen/pktgen_sample01_simple.sh
+@@ -76,7 +76,7 @@ if [ -n "$DST_PORT" ]; then
+ pg_set $DEV "udp_dst_max $UDP_DST_MAX"
+ fi
+
+-[ ! -z "$UDP_CSUM" ] && pg_set $dev "flag UDPCSUM"
++[ ! -z "$UDP_CSUM" ] && pg_set $DEV "flag UDPCSUM"
+
+ # Setup random UDP port src range
+ pg_set $DEV "flag UDPSRC_RND"
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 62fe66dd53ce5a..309630f319e2d6 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -1084,7 +1084,8 @@ static void evm_file_release(struct file *file)
+ if (!S_ISREG(inode->i_mode) || !(mode & FMODE_WRITE))
+ return;
+
+- if (iint && atomic_read(&inode->i_writecount) == 1)
++ if (iint && iint->flags & EVM_NEW_FILE &&
++ atomic_read(&inode->i_writecount) == 1)
+ iint->flags &= ~EVM_NEW_FILE;
+ }
+
+diff --git a/security/integrity/ima/ima_template_lib.c b/security/integrity/ima/ima_template_lib.c
+index 4183956c53af49..0e627eac9c33bb 100644
+--- a/security/integrity/ima/ima_template_lib.c
++++ b/security/integrity/ima/ima_template_lib.c
+@@ -318,15 +318,21 @@ static int ima_eventdigest_init_common(const u8 *digest, u32 digestsize,
+ hash_algo_name[hash_algo]);
+ }
+
+- if (digest)
++ if (digest) {
+ memcpy(buffer + offset, digest, digestsize);
+- else
++ } else {
+ /*
+ * If digest is NULL, the event being recorded is a violation.
+ * Make room for the digest by increasing the offset by the
+- * hash algorithm digest size.
++ * hash algorithm digest size. If the hash algorithm is not
++ * specified, increase the offset by IMA_DIGEST_SIZE, which
++ * fits SHA1 or MD5.
+ */
+- offset += hash_digest_size[hash_algo];
++ if (hash_algo < HASH_ALGO__LAST)
++ offset += hash_digest_size[hash_algo];
++ else
++ offset += IMA_DIGEST_SIZE;
++ }
+
+ return ima_write_template_field_data(buffer, offset + digestsize,
+ fmt, field_data);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d1d39f4cc94256..833635aaee1d02 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7436,7 +7436,6 @@ static void alc287_alc1318_playback_pcm_hook(struct hda_pcm_stream *hinfo,
+ struct snd_pcm_substream *substream,
+ int action)
+ {
+- alc_write_coef_idx(codec, 0x10, 0x8806); /* Change MLK to GPIO3 */
+ switch (action) {
+ case HDA_GEN_PCM_ACT_OPEN:
+ alc_write_coefex_idx(codec, 0x5a, 0x00, 0x954f); /* write gpio3 to high */
+@@ -7450,7 +7449,6 @@ static void alc287_alc1318_playback_pcm_hook(struct hda_pcm_stream *hinfo,
+ static void alc287_s4_power_gpio3_default(struct hda_codec *codec)
+ {
+ if (is_s4_suspend(codec)) {
+- alc_write_coef_idx(codec, 0x10, 0x8806); /* Change MLK to GPIO3 */
+ alc_write_coefex_idx(codec, 0x5a, 0x00, 0x554f); /* write gpio3 as default value */
+ }
+ }
+@@ -7459,9 +7457,17 @@ static void alc287_fixup_lenovo_thinkpad_with_alc1318(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+ struct alc_spec *spec = codec->spec;
++ static const struct coef_fw coefs[] = {
++ WRITE_COEF(0x24, 0x0013), WRITE_COEF(0x25, 0x0000), WRITE_COEF(0x26, 0xC300),
++ WRITE_COEF(0x28, 0x0001), WRITE_COEF(0x29, 0xb023),
++ WRITE_COEF(0x24, 0x0013), WRITE_COEF(0x25, 0x0000), WRITE_COEF(0x26, 0xC301),
++ WRITE_COEF(0x28, 0x0001), WRITE_COEF(0x29, 0xb023),
++ };
+
+ if (action != HDA_FIXUP_ACT_PRE_PROBE)
+ return;
++ alc_update_coef_idx(codec, 0x10, 1<<11, 1<<11);
++ alc_process_coef_fw(codec, coefs);
+ spec->power_hook = alc287_s4_power_gpio3_default;
+ spec->gen.pcm_playback_hook = alc287_alc1318_playback_pcm_hook;
+ }
+@@ -10477,6 +10483,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8b59, "HP Elite mt645 G7 Mobile Thin Client U89", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++ SND_PCI_QUIRK(0x103c, 0x8b5f, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8b63, "HP Elite Dragonfly 13.5 inch G4", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8b65, "HP ProBook 455 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8b66, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+@@ -11664,6 +11671,8 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
+ {0x1a, 0x40000000}),
+ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC2XX_FIXUP_HEADSET_MIC,
+ {0x19, 0x40000000}),
++ SND_HDA_PIN_QUIRK(0x10ec0255, 0x1558, "Clevo", ALC2XX_FIXUP_HEADSET_MIC,
++ {0x19, 0x40000000}),
+ {}
+ };
+
+diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
+index 2a4ca4dd2da80a..69f00eab1b8c7d 100644
+--- a/tools/mm/page-types.c
++++ b/tools/mm/page-types.c
+@@ -421,7 +421,7 @@ static void show_page(unsigned long voffset, unsigned long offset,
+ if (opt_file)
+ printf("%lx\t", voffset);
+ if (opt_list_cgroup)
+- printf("@%" PRIu64 "\t", cgroup)
++ printf("@%" PRIu64 "\t", cgroup);
+ if (opt_list_mapcnt)
+ printf("%" PRIu64 "\t", mapcnt);
+
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index 48d32c5aa3eb71..a6f92129bb02f3 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -235,10 +235,10 @@ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+ -Wno-gnu-variable-sized-type-not-at-end -MD -MP -DCONFIG_64BIT \
+ -fno-builtin-memcmp -fno-builtin-memcpy \
+ -fno-builtin-memset -fno-builtin-strnlen \
+- -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
+- -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
+- -I$(<D) -Iinclude/$(ARCH_DIR) -I ../rseq -I.. $(EXTRA_CFLAGS) \
+- $(KHDR_INCLUDES)
++ -fno-stack-protector -fno-PIE -fno-strict-aliasing \
++ -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_TOOL_ARCH_INCLUDE) \
++ -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -Iinclude/$(ARCH_DIR) \
++ -I ../rseq -I.. $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
+ ifeq ($(ARCH),s390)
+ CFLAGS += -march=z10
+ endif
+diff --git a/tools/testing/selftests/mm/hugetlb_dio.c b/tools/testing/selftests/mm/hugetlb_dio.c
+index 60001c142ce998..432d5af15e66b7 100644
+--- a/tools/testing/selftests/mm/hugetlb_dio.c
++++ b/tools/testing/selftests/mm/hugetlb_dio.c
+@@ -44,6 +44,13 @@ void run_dio_using_hugetlb(unsigned int start_off, unsigned int end_off)
+ if (fd < 0)
+ ksft_exit_fail_perror("Error opening file\n");
+
++ /* Get the free huge pages before allocation */
++ free_hpage_b = get_free_hugepages();
++ if (free_hpage_b == 0) {
++ close(fd);
++ ksft_exit_skip("No free hugepage, exiting!\n");
++ }
++
+ /* Allocate a hugetlb page */
+ orig_buffer = mmap(NULL, h_pagesize, mmap_prot, mmap_flags, -1, 0);
+ if (orig_buffer == MAP_FAILED) {
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json b/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
+index 24bd0c2a3014cf..b2ca9d4e991bdf 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
+@@ -329,5 +329,29 @@
+ "teardown": [
+ "$TC qdisc del dev $DEV1 parent root drr"
+ ]
++ },
++ {
++ "id": "1234",
++ "name": "Exercise IDR leaks by creating/deleting a filter many (2048) times",
++ "category": [
++ "filter",
++ "u32"
++ ],
++ "plugins": {
++ "requires": "nsPlugin"
++ },
++ "setup": [
++ "$TC qdisc add dev $DEV1 parent root handle 10: drr",
++ "$TC filter add dev $DEV1 parent 10:0 protocol ip prio 2 u32 match ip src 0.0.0.2/32 action drop",
++ "$TC filter add dev $DEV1 parent 10:0 protocol ip prio 3 u32 match ip src 0.0.0.3/32 action drop"
++ ],
++ "cmdUnderTest": "bash -c 'for i in {1..2048} ;do echo filter delete dev $DEV1 pref 3;echo filter add dev $DEV1 parent 10:0 protocol ip prio 3 u32 match ip src 0.0.0.3/32 action drop;done | $TC -b -'",
++ "expExitCode": "0",
++ "verifyCmd": "$TC filter show dev $DEV1",
++ "matchPattern": "protocol ip pref 3 u32",
++ "matchCount": "3",
++ "teardown": [
++ "$TC qdisc del dev $DEV1 parent root drr"
++ ]
+ }
+ ]
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-11-22 17:54 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-11-22 17:54 UTC (permalink / raw
To: gentoo-commits
commit: b9e624ee914e6e31e2d0cbabb808b27b7bf0e667
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 22 17:54:20 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 22 17:54:20 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b9e624ee
Remove redundant patch
Removed:
2400_bluetooth-mgmt-device-connected-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ---
2400_bluetooth-mgmt-device-connected-fix.patch | 34 --------------------------
2 files changed, 38 deletions(-)
diff --git a/0000_README b/0000_README
index ac3dcb33..97f8e18e 100644
--- a/0000_README
+++ b/0000_README
@@ -99,10 +99,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2400_bluetooth-mgmt-device-connected-fix.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
-Desc: Bluetooth: hci_core: Fix calling mgmt_device_connected
-
Patch: 2600_HID-revert-Y900P-fix-ThinkPad-L15-touchpad.patch
From: https://bugs.gentoo.org/942797
Desc: Revert: HID: multitouch: Add support for lenovo Y9000P Touchpad
diff --git a/2400_bluetooth-mgmt-device-connected-fix.patch b/2400_bluetooth-mgmt-device-connected-fix.patch
deleted file mode 100644
index 86cf10e9..00000000
--- a/2400_bluetooth-mgmt-device-connected-fix.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From 48adce305dc6d6c444fd00e40ad07d4a41acdfbf Mon Sep 17 00:00:00 2001
-From: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
-Date: Fri, 8 Nov 2024 11:19:54 -0500
-Subject: Bluetooth: hci_core: Fix calling mgmt_device_connected
-
-Since 61a939c68ee0 ("Bluetooth: Queue incoming ACL data until
-BT_CONNECTED state is reached") there is no longer the need to call
-mgmt_device_connected as ACL data will be queued until BT_CONNECTED
-state.
-
-Link: https://bugzilla.kernel.org/show_bug.cgi?id=219458
-Link: https://github.com/bluez/bluez/issues/1014
-Fixes: 333b4fd11e89 ("Bluetooth: L2CAP: Fix uaf in l2cap_connect")
-Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
----
- net/bluetooth/hci_core.c | 2 --
- 1 file changed, 2 deletions(-)
-
-diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
-index f6cff34a85421c..f9e19f9cb5a386 100644
---- a/net/bluetooth/hci_core.c
-+++ b/net/bluetooth/hci_core.c
-@@ -3792,8 +3792,6 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
-
- hci_dev_lock(hdev);
- conn = hci_conn_hash_lookup_handle(hdev, handle);
-- if (conn && hci_dev_test_flag(hdev, HCI_MGMT))
-- mgmt_device_connected(hdev, conn, NULL, 0);
- hci_dev_unlock(hdev);
-
- if (conn) {
---
-cgit 1.2.3-korg
-
* [gentoo-commits] proj/linux-patches:6.11 commit in: /
@ 2024-12-05 14:07 Mike Pagano
0 siblings, 0 replies; 26+ messages in thread
From: Mike Pagano @ 2024-12-05 14:07 UTC (permalink / raw
To: gentoo-commits
commit: 0900bdfcb3ce50eca73639dd34eb4610509c24dd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 5 14:06:55 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 5 14:06:55 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0900bdfc
Linux patch 6.11.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-6.11.11.patch | 37692 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 37696 insertions(+)
diff --git a/0000_README b/0000_README
index 97f8e18e..5150bfbc 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-6.11.10.patch
From: https://www.kernel.org
Desc: Linux 6.11.10
+Patch: 1010_linux-6.11.11.patch
+From: https://www.kernel.org
+Desc: Linux 6.11.11
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1010_linux-6.11.11.patch b/1010_linux-6.11.11.patch
new file mode 100644
index 00000000..256ace91
--- /dev/null
+++ b/1010_linux-6.11.11.patch
@@ -0,0 +1,37692 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index d0c1acfcad405e..ef2d8fcc6e9935 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -311,10 +311,13 @@ Description: Do background GC aggressively when set. Set to 0 by default.
+ GC approach and turns SSR mode on.
+ gc urgent low(2): lowers the bar of checking I/O idling in
+ order to process outstanding discard commands and GC a
+- little bit aggressively. uses cost benefit GC approach.
++ little bit aggressively. always uses cost benefit GC approach,
++ and will override age-threshold GC approach if ATGC is enabled
++ at the same time.
+ gc urgent mid(3): does GC forcibly in a period of given
+ gc_urgent_sleep_time and executes a mid level of I/O idling check.
+- uses cost benefit GC approach.
++ always uses cost benefit GC approach, and will override
++ age-threshold GC approach if ATGC is enabled at the same time.
+
+ What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time
+ Date: August 2017
+diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
+index ca7b7cd806a16c..30080ff6f4062d 100644
+--- a/Documentation/RCU/stallwarn.rst
++++ b/Documentation/RCU/stallwarn.rst
+@@ -249,7 +249,7 @@ ticks this GP)" indicates that this CPU has not taken any scheduling-clock
+ interrupts during the current stalled grace period.
+
+ The "idle=" portion of the message prints the dyntick-idle state.
+-The hex number before the first "/" is the low-order 12 bits of the
++The hex number before the first "/" is the low-order 16 bits of the
+ dynticks counter, which will have an even-numbered value if the CPU
+ is in dyntick-idle mode and an odd-numbered value otherwise. The hex
+ number between the two "/"s is the value of the nesting, which will be
+diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
+index 4fd492cb49704f..ad2d8ddad27fe4 100644
+--- a/Documentation/arch/x86/boot.rst
++++ b/Documentation/arch/x86/boot.rst
+@@ -896,10 +896,19 @@ Offset/size: 0x260/4
+
+ The kernel runtime start address is determined by the following algorithm::
+
+- if (relocatable_kernel)
+- runtime_start = align_up(load_address, kernel_alignment)
+- else
+- runtime_start = pref_address
++ if (relocatable_kernel) {
++ if (load_address < pref_address)
++ load_address = pref_address;
++ runtime_start = align_up(load_address, kernel_alignment);
++ } else {
++ runtime_start = pref_address;
++ }
++
++Hence the necessary memory window location and size can be estimated by
++a boot loader as::
++
++ memory_window_start = runtime_start;
++ memory_window_size = init_size;
+
+ ============ ===============
+ Field name: handover_offset
+diff --git a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
+index 68ea5f70b75f03..ee7edc6f60e2b4 100644
+--- a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
++++ b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
+@@ -39,11 +39,11 @@ properties:
+
+ reg:
+ minItems: 2
+- maxItems: 9
++ maxItems: 10
+
+ reg-names:
+ minItems: 2
+- maxItems: 9
++ maxItems: 10
+
+ interrupts:
+ maxItems: 1
+@@ -134,6 +134,36 @@ allOf:
+ - qcom,qdu1000-llcc
+ - qcom,sc8180x-llcc
+ - qcom,sc8280xp-llcc
++ then:
++ properties:
++ reg:
++ items:
++ - description: LLCC0 base register region
++ - description: LLCC1 base register region
++ - description: LLCC2 base register region
++ - description: LLCC3 base register region
++ - description: LLCC4 base register region
++ - description: LLCC5 base register region
++ - description: LLCC6 base register region
++ - description: LLCC7 base register region
++ - description: LLCC broadcast base register region
++ reg-names:
++ items:
++ - const: llcc0_base
++ - const: llcc1_base
++ - const: llcc2_base
++ - const: llcc3_base
++ - const: llcc4_base
++ - const: llcc5_base
++ - const: llcc6_base
++ - const: llcc7_base
++ - const: llcc_broadcast_base
++
++ - if:
++ properties:
++ compatible:
++ contains:
++ enum:
+ - qcom,x1e80100-llcc
+ then:
+ properties:
+@@ -148,6 +178,7 @@ allOf:
+ - description: LLCC6 base register region
+ - description: LLCC7 base register region
+ - description: LLCC broadcast base register region
++ - description: LLCC broadcast AND register region
+ reg-names:
+ items:
+ - const: llcc0_base
+@@ -159,6 +190,7 @@ allOf:
+ - const: llcc6_base
+ - const: llcc7_base
+ - const: llcc_broadcast_base
++ - const: llcc_broadcast_and_base
+
+ - if:
+ properties:
+diff --git a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+index 5e942bccf27787..2b2041818a0a44 100644
+--- a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
++++ b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+@@ -26,9 +26,21 @@ properties:
+ description:
+ Specifies the reference clock(s) from which the output frequency is
+ derived. This must either reference one clock if only the first clock
+- input is connected or two if both clock inputs are connected.
+- minItems: 1
+- maxItems: 2
++ input is connected or two if both clock inputs are connected. The last
++ clock is the AXI bus clock that needs to be enabled so we can access the
++ core registers.
++ minItems: 2
++ maxItems: 3
++
++ clock-names:
++ oneOf:
++ - items:
++ - const: clkin1
++ - const: s_axi_aclk
++ - items:
++ - const: clkin1
++ - const: clkin2
++ - const: s_axi_aclk
+
+ '#clock-cells':
+ const: 0
+@@ -40,6 +52,7 @@ required:
+ - compatible
+ - reg
+ - clocks
++ - clock-names
+ - '#clock-cells'
+
+ additionalProperties: false
+@@ -50,5 +63,6 @@ examples:
+ compatible = "adi,axi-clkgen-2.00.a";
+ #clock-cells = <0>;
+ reg = <0xff000000 0x1000>;
+- clocks = <&osc 1>;
++ clocks = <&osc 1>, <&clkc 15>;
++ clock-names = "clkin1", "s_axi_aclk";
+ };
+diff --git a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+index fc8b97f820775b..41fe0003474285 100644
+--- a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
++++ b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+@@ -30,7 +30,7 @@ properties:
+ maxItems: 1
+
+ spi-max-frequency:
+- maximum: 30000000
++ maximum: 66000000
+
+ reset-gpios:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
+index 4dfb49b0e07f73..f82a3c7e6c29e4 100644
+--- a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
+@@ -91,14 +91,17 @@ allOf:
+ - if:
+ properties:
+ compatible:
+- # Match without "contains", to skip newer variants which are still
+- # compatible with samsung,exynos7-wakeup-eint
+- enum:
+- - samsung,s5pv210-wakeup-eint
+- - samsung,exynos4210-wakeup-eint
+- - samsung,exynos5433-wakeup-eint
+- - samsung,exynos7-wakeup-eint
+- - samsung,exynos7885-wakeup-eint
++ oneOf:
++ # Match without "contains", to skip newer variants which are still
++ # compatible with samsung,exynos7-wakeup-eint
++ - enum:
++ - samsung,exynos4210-wakeup-eint
++ - samsung,exynos7-wakeup-eint
++ - samsung,s5pv210-wakeup-eint
++ - contains:
++ enum:
++ - samsung,exynos5433-wakeup-eint
++ - samsung,exynos7885-wakeup-eint
+ then:
+ properties:
+ interrupts:
+diff --git a/Documentation/devicetree/bindings/serial/rs485.yaml b/Documentation/devicetree/bindings/serial/rs485.yaml
+index 9418fd66a8e95a..b93254ad2a287a 100644
+--- a/Documentation/devicetree/bindings/serial/rs485.yaml
++++ b/Documentation/devicetree/bindings/serial/rs485.yaml
+@@ -18,16 +18,15 @@ properties:
+ description: prop-encoded-array <a b>
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ items:
+- items:
+- - description: Delay between rts signal and beginning of data sent in
+- milliseconds. It corresponds to the delay before sending data.
+- default: 0
+- maximum: 100
+- - description: Delay between end of data sent and rts signal in milliseconds.
+- It corresponds to the delay after sending data and actual release
+- of the line.
+- default: 0
+- maximum: 100
++ - description: Delay between rts signal and beginning of data sent in
++ milliseconds. It corresponds to the delay before sending data.
++ default: 0
++ maximum: 100
++ - description: Delay between end of data sent and rts signal in milliseconds.
++ It corresponds to the delay after sending data and actual release
++ of the line.
++ default: 0
++ maximum: 100
+
+ rs485-rts-active-high:
+ description: drive RTS high when sending (this is the default).
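+
+With the flattened schema, rs485-rts-delay is a plain two-element
+uint32 array. A usage sketch for a UART node (label and delay values
+assumed) would be:
+
+    &uart1 {
+        linux,rs485-enabled-at-boot-time;
+        rs485-rts-delay = <0 20>;  /* 0 ms before, 20 ms after the data */
+    };
+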
+diff --git a/Documentation/devicetree/bindings/sound/mt6359.yaml b/Documentation/devicetree/bindings/sound/mt6359.yaml
+index 23d411fc4200e6..128698630c865f 100644
+--- a/Documentation/devicetree/bindings/sound/mt6359.yaml
++++ b/Documentation/devicetree/bindings/sound/mt6359.yaml
+@@ -23,8 +23,8 @@ properties:
+ Indicates how many data pins are used to transmit two channels of PDM
+ signal. 0 means two wires, 1 means one wire. Default value is 0.
+ enum:
+- - 0 # one wire
+- - 1 # two wires
++ - 0 # two wires
++ - 1 # one wire
+
+ mediatek,mic-type-0:
+ $ref: /schemas/types.yaml#/definitions/uint32
+@@ -53,9 +53,9 @@ additionalProperties: false
+
+ examples:
+ - |
+- mt6359codec: mt6359codec {
+- mediatek,dmic-mode = <0>;
+- mediatek,mic-type-0 = <2>;
++ mt6359codec: audio-codec {
++ mediatek,dmic-mode = <0>;
++ mediatek,mic-type-0 = <2>;
+ };
+
+ ...
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index a70ce43b3dc032..b18f6ee6d6de48 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -1009,6 +1009,8 @@ patternProperties:
+ description: Shanghai Neardi Technology Co., Ltd.
+ "^nec,.*":
+ description: NEC LCD Technologies, Ltd.
++ "^neofidelity,.*":
++ description: Neofidelity Inc.
+ "^neonode,.*":
+ description: Neonode Inc.
+ "^netgear,.*":
+diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst
+index 317934c9e8fcac..d92c276f1575af 100644
+--- a/Documentation/filesystems/mount_api.rst
++++ b/Documentation/filesystems/mount_api.rst
+@@ -770,7 +770,8 @@ process the parameters it is given.
+
+ * ::
+
+- bool fs_validate_description(const struct fs_parameter_description *desc);
++ bool fs_validate_description(const char *name,
++ const struct fs_parameter_description *desc);
+
+ This performs some validation checks on a parameter description. It
+ returns true if the description is good and false if it is not. It will
+diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
+index bfda1a5fecadc6..ec6411d02ac8f5 100644
+--- a/Documentation/locking/seqlock.rst
++++ b/Documentation/locking/seqlock.rst
+@@ -153,7 +153,7 @@ Use seqcount_latch_t when the write side sections cannot be protected
+ from interruption by readers. This is typically the case when the read
+ side can be invoked from NMI handlers.
+
+-Check `raw_write_seqcount_latch()` for more information.
++Check `write_seqcount_latch()` for more information.
+
+
+ .. _seqlock_t:
+diff --git a/Documentation/networking/j1939.rst b/Documentation/networking/j1939.rst
+index e4bd7aa1f5aa9d..544bad175aae2d 100644
+--- a/Documentation/networking/j1939.rst
++++ b/Documentation/networking/j1939.rst
+@@ -121,7 +121,7 @@ format, the Group Extension is set in the PS-field.
+
+ On the other hand, when using PDU1 format, the PS-field contains a so-called
+ Destination Address, which is _not_ part of the PGN. When communicating a PGN
+-from user space to kernel (or vice versa) and PDU2 format is used, the PS-field
++from user space to kernel (or vice versa) and PDU1 format is used, the PS-field
+ of the PGN shall be set to zero. The Destination Address shall be set
+ elsewhere.
+
+diff --git a/Makefile b/Makefile
+index 4b075e003b2d61..e19ae179ca36bd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 11
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c
+index 4c9e61457b2f69..cc6ac7d128aa1a 100644
+--- a/arch/arc/kernel/devtree.c
++++ b/arch/arc/kernel/devtree.c
+@@ -62,7 +62,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt)
+ const struct machine_desc *mdesc;
+ unsigned long dt_root;
+
+- if (!early_init_dt_scan(dt))
++ if (!early_init_dt_scan(dt, __pa(dt)))
+ return NULL;
+
+ mdesc = of_flat_dt_match_machine(NULL, arch_get_next_mach);
+diff --git a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+index c8ca8cb7f5c94e..52ad95a2063aaf 100644
+--- a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
++++ b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+@@ -280,8 +280,8 @@ reg_dcdc4: dcdc4 {
+
+ reg_dcdc5: dcdc5 {
+ regulator-always-on;
+- regulator-min-microvolt = <1425000>;
+- regulator-max-microvolt = <1575000>;
++ regulator-min-microvolt = <1450000>;
++ regulator-max-microvolt = <1550000>;
+ regulator-name = "vcc-dram";
+ };
+
+diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+index d077afd5024db1..2492cd14c4e788 100644
+--- a/arch/arm/boot/dts/microchip/sam9x60.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+@@ -186,6 +186,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 13>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -384,6 +385,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 32>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -433,6 +435,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 33>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -590,6 +593,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 9>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -639,6 +643,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 10>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -688,6 +693,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 11>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -737,6 +743,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 5>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -805,6 +812,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 6>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -873,6 +881,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 7>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -941,6 +950,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 8>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -1064,6 +1074,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 15>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -1113,6 +1124,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 16>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+diff --git a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
+index 29ba098f5dd5e8..28e703e0f152b2 100644
+--- a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
++++ b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
+@@ -53,7 +53,7 @@ partition@0 {
+
+ partition@4000000 {
+ label = "user1";
+- reg = <0x04000000 0x40000000>;
++ reg = <0x04000000 0x04000000>;
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+index c3d79ecd56e398..c217094b50abc9 100644
+--- a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+@@ -72,6 +72,7 @@ opp-1000000000 {
+ <1375000 1375000 1375000>;
+ /* only on am/dm37x with speed-binned bit set */
+ opp-supported-hw = <0xffffffff 2>;
++ turbo-mode;
+ };
+ };
+
+diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c
+index fdb74e64206a8a..3b78966e750a2d 100644
+--- a/arch/arm/kernel/devtree.c
++++ b/arch/arm/kernel/devtree.c
+@@ -200,7 +200,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
+
+ mdesc_best = &__mach_desc_GENERIC_DT;
+
+- if (!dt_virt || !early_init_dt_verify(dt_virt))
++ if (!dt_virt || !early_init_dt_verify(dt_virt, __pa(dt_virt)))
+ return NULL;
+
+ mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);
+diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
+index 28873cda464f51..f22c50d4bd4173 100644
+--- a/arch/arm/kernel/head.S
++++ b/arch/arm/kernel/head.S
+@@ -411,7 +411,11 @@ ENTRY(secondary_startup)
+ /*
+ * Use the page tables supplied from __cpu_up.
+ */
++#ifdef CONFIG_XIP_KERNEL
++ ldr r3, =(secondary_data + PLAT_PHYS_OFFSET - PAGE_OFFSET)
++#else
+ adr_l r3, secondary_data
++#endif
+ mov_l r12, __secondary_switched
+ ldrd r4, r5, [r3, #0] @ get secondary_data.pgdir
+ ARM_BE8(eor r4, r4, r5) @ Swap r5 and r4 in BE:
+diff --git a/arch/arm/kernel/psci_smp.c b/arch/arm/kernel/psci_smp.c
+index d4392e1774848d..3bb0c4dcfc5c95 100644
+--- a/arch/arm/kernel/psci_smp.c
++++ b/arch/arm/kernel/psci_smp.c
+@@ -45,8 +45,15 @@ extern void secondary_startup(void);
+ static int psci_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ {
+ if (psci_ops.cpu_on)
++#ifdef CONFIG_XIP_KERNEL
++ return psci_ops.cpu_on(cpu_logical_map(cpu),
++ ((phys_addr_t)(&secondary_startup)
++ - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR)
++ + CONFIG_XIP_PHYS_ADDR));
++#else
+ return psci_ops.cpu_on(cpu_logical_map(cpu),
+ virt_to_idmap(&secondary_startup));
++#endif
+ return -ENODEV;
+ }
+
+diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
+index 448e57c6f65344..4a833e89782aa2 100644
+--- a/arch/arm/mm/idmap.c
++++ b/arch/arm/mm/idmap.c
+@@ -84,8 +84,15 @@ static void identity_mapping_add(pgd_t *pgd, const char *text_start,
+ unsigned long addr, end;
+ unsigned long next;
+
++#ifdef CONFIG_XIP_KERNEL
++ addr = (phys_addr_t)(text_start) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR)
++ + CONFIG_XIP_PHYS_ADDR;
++ end = (phys_addr_t)(text_end) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR)
++ + CONFIG_XIP_PHYS_ADDR;
++#else
+ addr = virt_to_idmap(text_start);
+ end = virt_to_idmap(text_end);
++#endif
+ pr_info("Setting up static identity map for 0x%lx - 0x%lx\n", addr, end);
+
+ prot |= PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AF;
+diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
+index 5fb9a6aecb0011..2cd93334267943 100644
+--- a/arch/arm/mm/proc-v7.S
++++ b/arch/arm/mm/proc-v7.S
+@@ -94,7 +94,7 @@ SYM_TYPED_FUNC_START(cpu_v7_dcache_clean_area)
+ ret lr
+ SYM_FUNC_END(cpu_v7_dcache_clean_area)
+
+-#ifdef CONFIG_ARM_PSCI
++#if defined(CONFIG_ARM_PSCI) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+ .arch_extension sec
+ SYM_TYPED_FUNC_START(cpu_v7_smc_switch_mm)
+ stmfd sp!, {r0 - r3}
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
+index 96db07fc9becea..1f2a0fe70a0a26 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
++++ b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
+@@ -29,12 +29,37 @@ usb_dr_connector: endpoint {
+ };
+ };
+
++/*
++ * rst_usb_hub_hog and sel_usb_hub_hog have the property 'output-high',
++ * and DT overlays don't support /delete-property/. Both 'output-low'
++ * and 'output-high' would exist under the hog nodes if the overlay
++ * file set 'output-low'. The workaround is to disable these hogs and
++ * create new hogs with 'output-low'.
++ */
++
+ &rst_usb_hub_hog {
+- output-low;
++ status = "disabled";
++};
++
++&expander0 {
++ rst-usb-low-hub-hog {
++ gpio-hog;
++ gpios = <13 0>;
++ output-low;
++ line-name = "RST_USB_HUB#";
++ };
+ };
+
+ &sel_usb_hub_hog {
+- output-low;
++ status = "disabled";
++};
++
++&gpio2 {
++ sel-usb-low-hub-hog {
++ gpio-hog;
++ gpios = <1 GPIO_ACTIVE_HIGH>;
++ output-low;
++ };
+ };
+
+ &usbotg1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt6357.dtsi b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+index 3330a03c2f7453..5fafa842d312f3 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6357.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+@@ -10,6 +10,11 @@ &pwrap {
+ mt6357_pmic: pmic {
+ compatible = "mediatek,mt6357";
+
++ pmic_adc: adc {
++ compatible = "mediatek,mt6357-auxadc";
++ #io-channel-cells = <1>;
++ };
++
+ regulators {
+ mt6357_vproc_reg: buck-vproc {
+ regulator-name = "vproc";
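+
+The pmic_adc nodes added here and in the mt6358/mt6359 files below
+expose ADC channels through #io-channel-cells; a consumer would
+reference them roughly like this (channel index and consumer node
+assumed):
+
+    thermal-sensor {
+        compatible = "generic-adc-thermal";
+        #thermal-sensor-cells = <0>;
+        io-channels = <&pmic_adc 0>;
+        io-channel-names = "sensor-channel";
+    };
+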
+diff --git a/arch/arm64/boot/dts/mediatek/mt6358.dtsi b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+index a1b96013f8141a..e23672a2eea4af 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6358.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+@@ -10,12 +10,17 @@ pmic: pmic {
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+- mt6358codec: mt6358codec {
++ pmic_adc: adc {
++ compatible = "mediatek,mt6358-auxadc";
++ #io-channel-cells = <1>;
++ };
++
++ mt6358codec: audio-codec {
+ compatible = "mediatek,mt6358-sound";
+ mediatek,dmic-mode = <0>; /* two-wires */
+ };
+
+- mt6358regulator: mt6358regulator {
++ mt6358regulator: regulators {
+ compatible = "mediatek,mt6358-regulator";
+
+ mt6358_vdram1_reg: buck_vdram1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt6359.dtsi b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+index df3e822232d340..8e1b8c85c6ede9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6359.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+@@ -9,6 +9,11 @@ pmic: pmic {
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
++ pmic_adc: adc {
++ compatible = "mediatek,mt6359-auxadc";
++ #io-channel-cells = <1>;
++ };
++
+ mt6359codec: mt6359codec {
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+index 8d1cbc92bce320..ae0379fd42a91c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+@@ -49,6 +49,14 @@ trackpad2: trackpad@2c {
+ interrupts-extended = <&pio 117 IRQ_TYPE_LEVEL_LOW>;
+ reg = <0x2c>;
+ hid-descr-addr = <0x0020>;
++ /*
++ * The trackpad needs a post-power-on delay of 100ms,
++ * but at the time of writing, the power supply for it on
++ * this board is always on. The delay is therefore not
++ * added to avoid impacting the readiness of the
++ * trackpad.
++ */
++ vdd-supply = <&mt6397_vgp6_reg>;
+ wakeup-source;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+index 19c1e2bee494c9..20b71f2e7159ad 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+@@ -30,3 +30,6 @@ touchscreen@2c {
+ };
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <4100>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+index f34964afe39b53..83bbcfe620835a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+@@ -18,6 +18,8 @@ &i2c_tunnel {
+ };
+
+ &i2c2 {
++ i2c-scl-internal-delay-ns = <25000>;
++
+ trackpad@2c {
+ compatible = "hid-over-i2c";
+ reg = <0x2c>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+index 0b45aee2e29953..65860b33c01fe8 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+@@ -30,3 +30,6 @@ &qca_wifi {
+ qcom,ath10k-calibration-variant = "GO_DAMU";
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <20000>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+index bbe6c338f465ee..f9c1ec366b2660 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+@@ -25,3 +25,6 @@ trackpad@2c {
+ };
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <21500>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+index fa4ab4d2899f9b..930e2ee83333aa 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+@@ -8,28 +8,32 @@
+ #include <arm/cros-ec-keyboard.dtsi>
+
+ / {
+- pp1200_mipibrdg: pp1200-mipibrdg {
++ pp1000_mipibrdg: pp1000-mipibrdg {
+ compatible = "regulator-fixed";
+- regulator-name = "pp1200_mipibrdg";
++ regulator-name = "pp1000_mipibrdg";
++ regulator-min-microvolt = <1000000>;
++ regulator-max-microvolt = <1000000>;
+ pinctrl-names = "default";
+- pinctrl-0 = <&pp1200_mipibrdg_en>;
++ pinctrl-0 = <&pp1000_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 54 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp1800_alw>;
+ };
+
+ pp1800_mipibrdg: pp1800-mipibrdg {
+ compatible = "regulator-fixed";
+ regulator-name = "pp1800_mipibrdg";
+ pinctrl-names = "default";
+- pinctrl-0 = <&pp1800_lcd_en>;
++ pinctrl-0 = <&pp1800_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 36 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp1800_alw>;
+ };
+
+ pp3300_panel: pp3300-panel {
+@@ -44,18 +48,20 @@ pp3300_panel: pp3300-panel {
+ regulator-boot-on;
+
+ gpio = <&pio 35 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp3300_alw>;
+ };
+
+- vddio_mipibrdg: vddio-mipibrdg {
++ pp3300_mipibrdg: pp3300-mipibrdg {
+ compatible = "regulator-fixed";
+- regulator-name = "vddio_mipibrdg";
++ regulator-name = "pp3300_mipibrdg";
+ pinctrl-names = "default";
+- pinctrl-0 = <&vddio_mipibrdg_en>;
++ pinctrl-0 = <&pp3300_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 37 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp3300_alw>;
+ };
+
+ volume_buttons: volume-buttons {
+@@ -151,9 +157,9 @@ anx_bridge: anx7625@58 {
+ pinctrl-0 = <&anx7625_pins>;
+ enable-gpios = <&pio 45 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&pio 73 GPIO_ACTIVE_HIGH>;
+- vdd10-supply = <&pp1200_mipibrdg>;
++ vdd10-supply = <&pp1000_mipibrdg>;
+ vdd18-supply = <&pp1800_mipibrdg>;
+- vdd33-supply = <&vddio_mipibrdg>;
++ vdd33-supply = <&pp3300_mipibrdg>;
+
+ ports {
+ #address-cells = <1>;
+@@ -396,14 +402,14 @@ &pio {
+ "",
+ "";
+
+- pp1200_mipibrdg_en: pp1200-mipibrdg-en {
++ pp1000_mipibrdg_en: pp1000-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO54__FUNC_GPIO54>;
+ output-low;
+ };
+ };
+
+- pp1800_lcd_en: pp1800-lcd-en {
++ pp1800_mipibrdg_en: pp1800-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO36__FUNC_GPIO36>;
+ output-low;
+@@ -465,7 +471,7 @@ trackpad-int {
+ };
+ };
+
+- vddio_mipibrdg_en: vddio-mipibrdg-en {
++ pp3300_mipibrdg_en: pp3300-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO37__FUNC_GPIO37>;
+ output-low;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+index bfb9e42c8acaa7..ff02f63bac29b2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+@@ -92,9 +92,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c32";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+index 5c1bf6a1e47586..da6e767b4ceede 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+@@ -79,9 +79,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c64";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+index 0f5fa893a77426..8b56b8564ed7a2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+@@ -88,9 +88,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c32";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
+index 52ec58128d5615..b495a241b4432b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
+@@ -10,12 +10,6 @@
+
+ / {
+ chassis-type = "laptop";
+-
+- max98360a: max98360a {
+- compatible = "maxim,max98360a";
+- sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
+- #sound-dai-cells = <0>;
+- };
+ };
+
+ &cpu6 {
+@@ -59,19 +53,14 @@ &cluster1_opp_15 {
+ opp-hz = /bits/ 64 <2200000000>;
+ };
+
+-&rt1019p{
+- status = "disabled";
+-};
+-
+ &sound {
+ compatible = "mediatek,mt8186-mt6366-rt5682s-max98360-sound";
+- status = "okay";
++};
+
+- spk-hdmi-playback-dai-link {
+- codec {
+- sound-dai = <&it6505dptx>, <&max98360a>;
+- };
+- };
++&speaker_codec {
++ compatible = "maxim,max98360a";
++ sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
++ /delete-property/ sdb-gpios;
+ };
+
+ &spmi {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index eccbd7b597bb11..43abeba88b1004 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -259,15 +259,15 @@ spk-hdmi-playback-dai-link {
+ mediatek,clk-provider = "cpu";
+ /* RT1019P and IT6505 connected to the same I2S line */
+ codec {
+- sound-dai = <&it6505dptx>, <&rt1019p>;
++ sound-dai = <&it6505dptx>, <&speaker_codec>;
+ };
+ };
+ };
+
+- rt1019p: speaker-codec {
++ speaker_codec: speaker-codec {
+ compatible = "realtek,rt1019p";
+ pinctrl-names = "default";
+- pinctrl-0 = <&rt1019p_pins_default>;
++ pinctrl-0 = <&speaker_codec_pins_default>;
+ #sound-dai-cells = <0>;
+ sdb-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
+ };
+@@ -1179,7 +1179,7 @@ pins {
+ };
+ };
+
+- rt1019p_pins_default: rt1019p-default-pins {
++ speaker_codec_pins_default: speaker-codec-default-pins {
+ pins-sdb {
+ pinmux = <PINMUX_GPIO150__FUNC_GPIO150>;
+ output-low;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+index 29d012d28edb1b..b6193d73f688cc 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+@@ -508,9 +508,9 @@ mfg0: power-domain@MT8188_POWER_DOMAIN_MFG0 {
+ #size-cells = <0>;
+ #power-domain-cells = <1>;
+
+- power-domain@MT8188_POWER_DOMAIN_MFG1 {
++ mfg1: power-domain@MT8188_POWER_DOMAIN_MFG1 {
+ reg = <MT8188_POWER_DOMAIN_MFG1>;
+- clocks = <&topckgen CLK_APMIXED_MFGPLL>,
++ clocks = <&apmixedsys CLK_APMIXED_MFGPLL>,
+ <&topckgen CLK_TOP_MFG_CORE_TMP>;
+ clock-names = "mfg", "alt";
+ mediatek,infracfg = <&infracfg_ao>;
+@@ -1219,7 +1219,6 @@ u3port1: usb-phy@700 {
+ <&clk26m>;
+ clock-names = "ref", "da_ref";
+ #phy-cells = <1>;
+- status = "disabled";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index d3a52acbe48a14..2960772cb90655 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -438,7 +438,7 @@ audio_codec: codec@1a {
+ /* Realtek RT5682i or RT5682s, sharing the same configuration */
+ reg = <0x1a>;
+ interrupts-extended = <&pio 89 IRQ_TYPE_EDGE_BOTH>;
+- #sound-dai-cells = <0>;
++ #sound-dai-cells = <1>;
+ realtek,jd-src = <1>;
+
+ AVDD-supply = <&mt6359_vio18_ldo_reg>;
+@@ -1181,7 +1181,7 @@ hs-playback-dai-link {
+ link-name = "ETDM1_OUT_BE";
+ mediatek,clk-provider = "cpu";
+ codec {
+- sound-dai = <&audio_codec>;
++ sound-dai = <&audio_codec 0>;
+ };
+ };
+
+@@ -1189,7 +1189,7 @@ hs-capture-dai-link {
+ link-name = "ETDM2_IN_BE";
+ mediatek,clk-provider = "cpu";
+ codec {
+- sound-dai = <&audio_codec>;
++ sound-dai = <&audio_codec 0>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index 98c15eb68589a5..e56b23995c7bf0 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -487,7 +487,7 @@ topckgen: syscon@10000000 {
+ };
+
+ infracfg_ao: syscon@10001000 {
+- compatible = "mediatek,mt8195-infracfg_ao", "syscon", "simple-mfd";
++ compatible = "mediatek,mt8195-infracfg_ao", "syscon";
+ reg = <0 0x10001000 0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+@@ -3330,11 +3330,9 @@ &larb19 &larb21 &larb24 &larb25
+ mutex1: mutex@1c101000 {
+ compatible = "mediatek,mt8195-disp-mutex";
+ reg = <0 0x1c101000 0 0x1000>;
+- reg-names = "vdo1_mutex";
+ interrupts = <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH 0>;
+ power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ clocks = <&vdosys1 CLK_VDO1_DISP_MUTEX>;
+- clock-names = "vdo1_mutex";
+ mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x1000 0x1000>;
+ mediatek,gce-events = <CMDQ_EVENT_VDO1_STREAM_DONE_ENG_0>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+index a06610fff8adef..33ad8c8205f1b6 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+@@ -187,7 +187,7 @@ mdio {
+ compatible = "snps,dwmac-mdio";
+ #address-cells = <1>;
+ #size-cells = <0>;
+- eth_phy0: eth-phy0@1 {
++ eth_phy0: ethernet-phy@1 {
+ compatible = "ethernet-phy-id001c.c916";
+ reg = <0x1>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+index 0d45662b8028bf..5d0167fbc70982 100644
+--- a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
++++ b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+@@ -707,7 +707,7 @@ &remoteproc_cdsp {
+ };
+
+ &remoteproc_mpss {
+- firmware-name = "qcom/qcs6490/modem.mdt";
++ firmware-name = "qcom/qcs6490/modem.mbn";
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index 6e707d993aeb36..b1451f8ad2a604 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -3730,7 +3730,7 @@ lmh@18358800 {
+ };
+
+ cpufreq_hw: cpufreq@18323000 {
+- compatible = "qcom,cpufreq-hw";
++ compatible = "qcom,sc8180x-cpufreq-hw", "qcom,cpufreq-hw";
+ reg = <0 0x18323000 0 0x1400>, <0 0x18325800 0 0x1400>;
+ reg-names = "freq-domain0", "freq-domain1";
+
+diff --git a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+index 60412281ab27de..962c8aa4004401 100644
+--- a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
++++ b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+@@ -104,7 +104,7 @@ vreg_l10a_1p8: vreg-l10a-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vreg_l10a_1p8";
+ regulator-min-microvolt = <1804000>;
+- regulator-max-microvolt = <1896000>;
++ regulator-max-microvolt = <1804000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 7986ddb30f6e8c..4f8477de7e1b1e 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -1376,43 +1376,43 @@ gpu_opp_table: opp-table {
+ opp-850000000 {
+ opp-hz = /bits/ 64 <850000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
+- opp-supported-hw = <0x02>;
++ opp-supported-hw = <0x03>;
+ };
+
+ opp-800000000 {
+ opp-hz = /bits/ 64 <800000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
+- opp-supported-hw = <0x04>;
++ opp-supported-hw = <0x07>;
+ };
+
+ opp-650000000 {
+ opp-hz = /bits/ 64 <650000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
+- opp-supported-hw = <0x08>;
++ opp-supported-hw = <0x0f>;
+ };
+
+ opp-565000000 {
+ opp-hz = /bits/ 64 <565000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
+- opp-supported-hw = <0x10>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-430000000 {
+ opp-hz = /bits/ 64 <430000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-355000000 {
+ opp-hz = /bits/ 64 <355000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-253000000 {
+ opp-hz = /bits/ 64 <253000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+ };
+ };
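+
+In these opp-supported-hw masks each bit corresponds to a GPU speed
+bin, so the widened values (e.g. 0x1f instead of 0x10) make an OPP
+available to every bin at or below its ceiling rather than pinning each
+frequency to a single bin.
+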
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index 530db0913f91d9..7d4e039f63cf4d 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -598,8 +598,6 @@ &usb_1_ss0_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+@@ -632,8 +630,6 @@ &usb_1_ss1_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index 884595d98e6412..769040b50b04d4 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -881,8 +881,6 @@ &usb_1_ss0_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+@@ -915,8 +913,6 @@ &usb_1_ss1_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index bb2de6469972cd..db0304c5649808 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -278,8 +278,8 @@ CLUSTER_C4: cpu-sleep-0 {
+ idle-state-name = "ret";
+ arm,psci-suspend-param = <0x00000004>;
+ entry-latency-us = <180>;
+- exit-latency-us = <320>;
+- min-residency-us = <1000>;
++ exit-latency-us = <500>;
++ min-residency-us = <600>;
+ };
+ };
+
+@@ -298,7 +298,7 @@ CLUSTER_CL5: cluster-sleep-1 {
+ idle-state-name = "ret-pll-off";
+ arm,psci-suspend-param = <0x01000054>;
+ entry-latency-us = <2200>;
+- exit-latency-us = <2500>;
++ exit-latency-us = <4000>;
+ min-residency-us = <7000>;
+ };
+ };
+@@ -5363,7 +5363,7 @@ apps_smmu: iommu@15000000 {
+ intc: interrupt-controller@17000000 {
+ compatible = "arm,gic-v3";
+ reg = <0 0x17000000 0 0x10000>, /* GICD */
+- <0 0x17080000 0 0x480000>; /* GICR * 12 */
++ <0 0x17080000 0 0x300000>; /* GICR * 12 */
+
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+
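+The corrected size matches the comment: assuming four 64 KiB frames per
+redistributor (0x40000 bytes), 12 redistributors span 12 * 0x40000 =
+0x300000 bytes, so the previous 0x480000 overshot the GICR region.
+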
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+index 8e2db1d6ca81e2..25c55b32aafe5a 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+@@ -69,9 +69,6 @@ &rcar_sound {
+
+ status = "okay";
+
+- /* Single DAI */
+- #sound-dai-cells = <0>;
+-
+ rsnd_port: port {
+ rsnd_endpoint: endpoint {
+ remote-endpoint = <&dw_hdmi0_snd_in>;
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+index 66f3affe046973..deb69c27277566 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+@@ -84,9 +84,6 @@ &rcar_sound {
+ pinctrl-names = "default";
+ status = "okay";
+
+- /* Single DAI */
+- #sound-dai-cells = <0>;
+-
+ /* audio_clkout0/1/2/3 */
+ #clock-cells = <1>;
+ clock-frequency = <12288000 11289600>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
+index ebcaeafc3800d0..fa61633aea1526 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
++++ b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
+@@ -49,7 +49,6 @@ vcc1v8_eth: vcc1v8-eth-regulator {
+
+ vcc3v3_eth: vcc3v3-eth-regulator {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpio = <&gpio0 RK_PC0 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc3v3_eth_enn>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+index d8c50fdcca3b57..a4b930f6987f43 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+@@ -62,7 +62,7 @@ sdio_pwrseq: sdio-pwrseq {
+
+ sound {
+ compatible = "audio-graph-card";
+- label = "rockchip,es8388-codec";
++ label = "rockchip,es8388";
+ widgets = "Microphone", "Mic Jack",
+ "Headphone", "Headphones";
+ routing = "LINPUT2", "Mic Jack",
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
+index feea6b20a6bf54..6b77be64324950 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
+@@ -71,7 +71,6 @@ vcc5v0_sys: vcc5v0-sys-regulator {
+
+ vcc_3v3_sd_s0: vcc-3v3-sd-s0-regulator {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpios = <&gpio4 RK_PB5 GPIO_ACTIVE_LOW>;
+ regulator-name = "vcc_3v3_sd_s0";
+ regulator-boot-on;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
+index e4633af87eb9c5..d6ce53c6d74814 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
+@@ -433,8 +433,6 @@ &mcasp2 {
+ 0 0 0 0
+ 0 0 0 0
+ >;
+- tx-num-evt = <32>;
+- rx-num-evt = <32>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 6593c5da82c064..df39f2b1ff6ba6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -254,7 +254,7 @@ J721E_IOPAD(0x38, PIN_OUTPUT, 0) /* (Y21) MCAN3_TX */
+ };
+ };
+
+-&main_pmx1 {
++&main_pmx2 {
+ main_usbss0_pins_default: main-usbss0-default-pins {
+ pinctrl-single,pins = <
+ J721E_IOPAD(0x04, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 9386bf3ef9f684..1d11da926a8714 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -426,10 +426,28 @@ main_pmx0: pinctrl@11c000 {
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+- main_pmx1: pinctrl@11c11c {
++ main_pmx1: pinctrl@11c110 {
+ compatible = "ti,j7200-padconf", "pinctrl-single";
+ /* Proxy 0 addressing */
+- reg = <0x00 0x11c11c 0x00 0xc>;
++ reg = <0x00 0x11c110 0x00 0x004>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx2: pinctrl@11c11c {
++ compatible = "ti,j7200-padconf", "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c11c 0x00 0x00c>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx3: pinctrl@11c164 {
++ compatible = "ti,j7200-padconf", "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c164 0x00 0x008>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+@@ -1145,7 +1163,7 @@ main_spi0: spi@2100000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 266 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 266 1>;
++ clocks = <&k3_clks 266 4>;
+ status = "disabled";
+ };
+
+@@ -1156,7 +1174,7 @@ main_spi1: spi@2110000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 267 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 267 1>;
++ clocks = <&k3_clks 267 4>;
+ status = "disabled";
+ };
+
+@@ -1167,7 +1185,7 @@ main_spi2: spi@2120000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 268 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 268 1>;
++ clocks = <&k3_clks 268 4>;
+ status = "disabled";
+ };
+
+@@ -1178,7 +1196,7 @@ main_spi3: spi@2130000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 269 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 269 1>;
++ clocks = <&k3_clks 269 4>;
+ status = "disabled";
+ };
+
+@@ -1189,7 +1207,7 @@ main_spi4: spi@2140000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 270 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 270 1>;
++ clocks = <&k3_clks 270 2>;
+ status = "disabled";
+ };
+
+@@ -1200,7 +1218,7 @@ main_spi5: spi@2150000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 271 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 271 1>;
++ clocks = <&k3_clks 271 4>;
+ status = "disabled";
+ };
+
+@@ -1211,7 +1229,7 @@ main_spi6: spi@2160000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 272 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 272 1>;
++ clocks = <&k3_clks 272 4>;
+ status = "disabled";
+ };
+
+@@ -1222,7 +1240,7 @@ main_spi7: spi@2170000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 273 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 273 1>;
++ clocks = <&k3_clks 273 4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+index 5097d192c2b208..b18b2f2deb969f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+@@ -494,7 +494,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 274 0>;
++ clocks = <&k3_clks 274 4>;
+ status = "disabled";
+ };
+
+@@ -505,7 +505,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 275 0>;
++ clocks = <&k3_clks 275 4>;
+ status = "disabled";
+ };
+
+@@ -516,7 +516,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 276 0>;
++ clocks = <&k3_clks 276 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+index 6b6ef6a3061426..e71696a8bb8339 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+@@ -654,7 +654,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 274 0>;
++ clocks = <&k3_clks 274 1>;
+ status = "disabled";
+ };
+
+@@ -665,7 +665,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 275 0>;
++ clocks = <&k3_clks 275 1>;
+ status = "disabled";
+ };
+
+@@ -676,7 +676,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 276 0>;
++ clocks = <&k3_clks 276 1>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+index 9ed6949b40e9df..fae534b5c8a43f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+@@ -1708,7 +1708,7 @@ main_spi0: spi@2100000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 339 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 339 1>;
++ clocks = <&k3_clks 339 2>;
+ status = "disabled";
+ };
+
+@@ -1719,7 +1719,7 @@ main_spi1: spi@2110000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 340 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 340 1>;
++ clocks = <&k3_clks 340 2>;
+ status = "disabled";
+ };
+
+@@ -1730,7 +1730,7 @@ main_spi2: spi@2120000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 341 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 341 1>;
++ clocks = <&k3_clks 341 2>;
+ status = "disabled";
+ };
+
+@@ -1741,7 +1741,7 @@ main_spi3: spi@2130000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 342 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 342 1>;
++ clocks = <&k3_clks 342 2>;
+ status = "disabled";
+ };
+
+@@ -1752,7 +1752,7 @@ main_spi4: spi@2140000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 343 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 343 1>;
++ clocks = <&k3_clks 343 2>;
+ status = "disabled";
+ };
+
+@@ -1763,7 +1763,7 @@ main_spi5: spi@2150000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 344 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 344 1>;
++ clocks = <&k3_clks 344 2>;
+ status = "disabled";
+ };
+
+@@ -1774,7 +1774,7 @@ main_spi6: spi@2160000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 345 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 345 1>;
++ clocks = <&k3_clks 345 2>;
+ status = "disabled";
+ };
+
+@@ -1785,7 +1785,7 @@ main_spi7: spi@2170000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 346 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 346 1>;
++ clocks = <&k3_clks 346 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+index 8feb42c89e4760..ddf8fc57fd1704 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+@@ -425,7 +425,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 347 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 347 0>;
++ clocks = <&k3_clks 347 2>;
+ status = "disabled";
+ };
+
+@@ -436,7 +436,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 348 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 348 0>;
++ clocks = <&k3_clks 348 2>;
+ status = "disabled";
+ };
+
+@@ -447,7 +447,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 349 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 349 0>;
++ clocks = <&k3_clks 349 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 8c0a36f72d6fcd..bc77869dbd43b2 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -353,6 +353,7 @@ __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
+ __AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
+ __AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
+ __AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
++__AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400)
+ __AARCH64_INSN_FUNCS(stp, 0x7FC00000, 0x29000000)
+ __AARCH64_INSN_FUNCS(ldp, 0x7FC00000, 0x29400000)
+ __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 428934f64e9fc1..d2e6924d6f4623 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -73,8 +73,6 @@ enum kvm_mode kvm_get_mode(void);
+ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
+ #endif
+
+-DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+-
+ extern unsigned int __ro_after_init kvm_sve_max_vl;
+ extern unsigned int __ro_after_init kvm_host_sve_max_vl;
+ int __init kvm_arm_init_sve(void);
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 646ecd3069fdd9..4901daace5a3e5 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -228,6 +228,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
++ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_XS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_BF16_SHIFT, 4, 0),
+diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
+index 3496d6169e59b2..42b69936cee34b 100644
+--- a/arch/arm64/kernel/probes/decode-insn.c
++++ b/arch/arm64/kernel/probes/decode-insn.c
+@@ -58,10 +58,13 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
+ * Instructions which load PC relative literals are not going to work
+ * when executed from an XOL slot. Instructions doing an exclusive
+ * load/store are not going to complete successfully when single-step
+- * exception handling happens in the middle of the sequence.
++ * exception handling happens in the middle of the sequence. Memory
++ * copy/set instructions require that all three instructions be placed
++ * consecutively in memory.
+ */
+ if (aarch64_insn_uses_literal(insn) ||
+- aarch64_insn_is_exclusive(insn))
++ aarch64_insn_is_exclusive(insn) ||
++ aarch64_insn_is_mops(insn))
+ return false;
+
+ return true;
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 4ae31b7af6c311..ec1015d2d5fa6b 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -426,7 +426,7 @@ static void tls_thread_switch(struct task_struct *next)
+
+ if (is_compat_thread(task_thread_info(next)))
+ write_sysreg(next->thread.uw.tp_value, tpidrro_el0);
+- else if (!arm64_kernel_unmapped_at_el0())
++ else
+ write_sysreg(0, tpidrro_el0);
+
+ write_sysreg(*task_user_tls(next), tpidr_el0);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index b22d28ec80284b..87f61fd6783c20 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -175,7 +175,11 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)
+ if (dt_virt)
+ memblock_reserve(dt_phys, size);
+
+- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
++ /*
++ * dt_virt is a fixmap address, hence __pa(dt_virt) can't be used.
++ * Pass dt_phys directly.
++ */
++ if (!early_init_dt_scan(dt_virt, dt_phys)) {
+ pr_crit("\n"
+ "Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"
+ "The dtb must be 8-byte aligned and must not exceed 2 MB in size\n"
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 55a8e310ea12c6..d294c1ea8391bd 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -288,6 +288,9 @@ SECTIONS
+ __initdata_end = .;
+ __init_end = .;
+
++ .data.rel.ro : { *(.data.rel.ro) }
++ ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
++
+ _data = .;
+ _sdata = .;
+ RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+@@ -344,9 +347,6 @@ SECTIONS
+ *(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt)
+ }
+ ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!")
+-
+- .data.rel.ro : { *(.data.rel.ro) }
+- ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
+ }
+
+ #include "image-vars.h"
+diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
+index 879982b1cc739e..1215df59041856 100644
+--- a/arch/arm64/kvm/arch_timer.c
++++ b/arch/arm64/kvm/arch_timer.c
+@@ -206,8 +206,7 @@ void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
+
+ static inline bool userspace_irqchip(struct kvm *kvm)
+ {
+- return static_branch_unlikely(&userspace_irqchip_in_use) &&
+- unlikely(!irqchip_in_kernel(kvm));
++ return unlikely(!irqchip_in_kernel(kvm));
+ }
+
+ static void soft_timer_start(struct hrtimer *hrt, u64 ns)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index ebc6782e3cd073..5fe59748d24e63 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -67,7 +67,6 @@ DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+ static bool vgic_present, kvm_arm_initialised;
+
+ static DEFINE_PER_CPU(unsigned char, kvm_hyp_initialized);
+-DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+
+ bool is_kvm_arm_initialised(void)
+ {
+@@ -500,9 +499,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+
+ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+- if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
+- static_branch_dec(&userspace_irqchip_in_use);
+-
+ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+ kvm_timer_vcpu_terminate(vcpu);
+ kvm_pmu_vcpu_destroy(vcpu);
+@@ -847,14 +843,6 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ return ret;
+ }
+
+- if (!irqchip_in_kernel(kvm)) {
+- /*
+- * Tell the rest of the code that there are userspace irqchip
+- * VMs in the wild.
+- */
+- static_branch_inc(&userspace_irqchip_in_use);
+- }
+-
+ /*
+ * Initialize traps for protected VMs.
+ * NOTE: Move to run in EL2 directly, rather than via a hypercall, once
+@@ -1074,7 +1062,7 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
+ * state gets updated in kvm_timer_update_run and
+ * kvm_pmu_update_run below).
+ */
+- if (static_branch_unlikely(&userspace_irqchip_in_use)) {
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
+ if (kvm_timer_should_notify_user(vcpu) ||
+ kvm_pmu_should_notify_user(vcpu)) {
+ *ret = -EINTR;
+@@ -1196,7 +1184,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ vcpu->mode = OUTSIDE_GUEST_MODE;
+ isb(); /* Ensure work in x_flush_hwstate is committed */
+ kvm_pmu_sync_hwstate(vcpu);
+- if (static_branch_unlikely(&userspace_irqchip_in_use))
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_sync_user(vcpu);
+ kvm_vgic_sync_hwstate(vcpu);
+ local_irq_enable();
+@@ -1242,7 +1230,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ * we don't want vtimer interrupts to race with syncing the
+ * timer virtual interrupt state.
+ */
+- if (static_branch_unlikely(&userspace_irqchip_in_use))
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_sync_user(vcpu);
+
+ kvm_arch_vcpu_ctxsync_fp(vcpu);
+diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
+index cd6b7b83e2c370..ab365e839874e5 100644
+--- a/arch/arm64/kvm/mmio.c
++++ b/arch/arm64/kvm/mmio.c
+@@ -72,6 +72,31 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
+ return data;
+ }
+
++static bool kvm_pending_sync_exception(struct kvm_vcpu *vcpu)
++{
++ if (!vcpu_get_flag(vcpu, PENDING_EXCEPTION))
++ return false;
++
++ if (vcpu_el1_is_32bit(vcpu)) {
++ switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
++ case unpack_vcpu_flag(EXCEPT_AA32_UND):
++ case unpack_vcpu_flag(EXCEPT_AA32_IABT):
++ case unpack_vcpu_flag(EXCEPT_AA32_DABT):
++ return true;
++ default:
++ return false;
++ }
++ } else {
++ switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
++ case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC):
++ case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC):
++ return true;
++ default:
++ return false;
++ }
++ }
++}
++
+ /**
+ * kvm_handle_mmio_return -- Handle MMIO loads after user space emulation
+ * or in-kernel IO emulation
+@@ -84,8 +109,11 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
+ unsigned int len;
+ int mask;
+
+- /* Detect an already handled MMIO return */
+- if (unlikely(!vcpu->mmio_needed))
++ /*
++ * Detect if the MMIO return was already handled or if userspace aborted
++ * the MMIO access.
++ */
++ if (unlikely(!vcpu->mmio_needed || kvm_pending_sync_exception(vcpu)))
+ return 1;
+
+ vcpu->mmio_needed = 0;
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index 82a2a003259cdc..052456813fece4 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -342,7 +342,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+
+ if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
+ reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+- reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+ reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+ }
+
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index ba945ba78cc7d7..198296933e7ebf 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -782,6 +782,9 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
+
+ ite = find_ite(its, device_id, event_id);
+ if (ite && its_is_collection_mapped(ite->collection)) {
++ struct its_device *device = find_its_device(its, device_id);
++ int ite_esz = vgic_its_get_abi(its)->ite_esz;
++ gpa_t gpa = device->itt_addr + ite->event_id * ite_esz;
+ /*
+ * Though the spec talks about removing the pending state, we
+ * don't bother here since we clear the ITTE anyway and the
+@@ -790,7 +793,8 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
+ vgic_its_invalidate_cache(its);
+
+ its_free_ite(kvm, ite);
+- return 0;
++
++ return vgic_its_write_entry_lock(its, gpa, 0, ite_esz);
+ }
+
+ return E_ITS_DISCARD_UNMAPPED_INTERRUPT;
+@@ -1139,9 +1143,11 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
+ bool valid = its_cmd_get_validbit(its_cmd);
+ u8 num_eventid_bits = its_cmd_get_size(its_cmd);
+ gpa_t itt_addr = its_cmd_get_ittaddr(its_cmd);
++ int dte_esz = vgic_its_get_abi(its)->dte_esz;
+ struct its_device *device;
++ gpa_t gpa;
+
+- if (!vgic_its_check_id(its, its->baser_device_table, device_id, NULL))
++ if (!vgic_its_check_id(its, its->baser_device_table, device_id, &gpa))
+ return E_ITS_MAPD_DEVICE_OOR;
+
+ if (valid && num_eventid_bits > VITS_TYPER_IDBITS)
+@@ -1162,7 +1168,7 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
+ * is an error, so we are done in any case.
+ */
+ if (!valid)
+- return 0;
++ return vgic_its_write_entry_lock(its, gpa, 0, dte_esz);
+
+ device = vgic_its_alloc_device(its, device_id, itt_addr,
+ num_eventid_bits);
+@@ -2086,7 +2092,6 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
+ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ struct its_ite *ite, gpa_t gpa, int ite_esz)
+ {
+- struct kvm *kvm = its->dev->kvm;
+ u32 next_offset;
+ u64 val;
+
+@@ -2095,7 +2100,8 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
+ ite->collection->collection_id;
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(kvm, gpa, &val, ite_esz);
++
++ return vgic_its_write_entry_lock(its, gpa, val, ite_esz);
+ }
+
+ /**
+@@ -2239,7 +2245,6 @@ static int vgic_its_restore_itt(struct vgic_its *its, struct its_device *dev)
+ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ gpa_t ptr, int dte_esz)
+ {
+- struct kvm *kvm = its->dev->kvm;
+ u64 val, itt_addr_field;
+ u32 next_offset;
+
+@@ -2250,7 +2255,8 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
+ (dev->num_eventid_bits - 1));
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(kvm, ptr, &val, dte_esz);
++
++ return vgic_its_write_entry_lock(its, ptr, val, dte_esz);
+ }
+
+ /**
+@@ -2437,7 +2443,8 @@ static int vgic_its_save_cte(struct vgic_its *its,
+ ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
+ collection->collection_id);
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz);
++
++ return vgic_its_write_entry_lock(its, gpa, val, esz);
+ }
+
+ /*
+@@ -2453,8 +2460,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
+ u64 val;
+ int ret;
+
+- BUG_ON(esz > sizeof(val));
+- ret = kvm_read_guest_lock(kvm, gpa, &val, esz);
++ ret = vgic_its_read_entry_lock(its, gpa, &val, esz);
+ if (ret)
+ return ret;
+ val = le64_to_cpu(val);
+@@ -2492,7 +2498,6 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ u64 baser = its->baser_coll_table;
+ gpa_t gpa = GITS_BASER_ADDR_48_to_52(baser);
+ struct its_collection *collection;
+- u64 val;
+ size_t max_size, filled = 0;
+ int ret, cte_esz = abi->cte_esz;
+
+@@ -2516,10 +2521,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ * table is not fully filled, add a last dummy element
+ * with valid bit unset
+ */
+- val = 0;
+- BUG_ON(cte_esz > sizeof(val));
+- ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
+- return ret;
++ return vgic_its_write_entry_lock(its, gpa, 0, cte_esz);
+ }
+
+ /*
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index 9e50928f5d7dfd..70a44852cbafe3 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -530,6 +530,7 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
+ unsigned long val)
+ {
+ struct vgic_irq *irq;
++ u32 intid;
+
+ /*
+ * If the guest wrote only to the upper 32bit part of the
+@@ -541,9 +542,13 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
+ if ((addr & 4) || !vgic_lpis_enabled(vcpu))
+ return;
+
++ intid = lower_32_bits(val);
++ if (intid < VGIC_MIN_LPI)
++ return;
++
+ vgic_set_rdist_busy(vcpu, true);
+
+- irq = vgic_get_irq(vcpu->kvm, NULL, lower_32_bits(val));
++ irq = vgic_get_irq(vcpu->kvm, NULL, intid);
+ if (irq) {
+ vgic_its_inv_lpi(vcpu->kvm, irq);
+ vgic_put_irq(vcpu->kvm, irq);
+diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
+index 8532bfe3fed40c..82c1d24b243f5e 100644
+--- a/arch/arm64/kvm/vgic/vgic.h
++++ b/arch/arm64/kvm/vgic/vgic.h
+@@ -146,6 +146,29 @@ static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa,
+ return ret;
+ }
+
++static inline int vgic_its_read_entry_lock(struct vgic_its *its, gpa_t eaddr,
++ u64 *eval, unsigned long esize)
++{
++ struct kvm *kvm = its->dev->kvm;
++
++ if (KVM_BUG_ON(esize != sizeof(*eval), kvm))
++ return -EINVAL;
++
++ return kvm_read_guest_lock(kvm, eaddr, eval, esize);
++
++}
++
++static inline int vgic_its_write_entry_lock(struct vgic_its *its, gpa_t eaddr,
++ u64 eval, unsigned long esize)
++{
++ struct kvm *kvm = its->dev->kvm;
++
++ if (KVM_BUG_ON(esize != sizeof(eval), kvm))
++ return -EINVAL;
++
++ return vgic_write_guest_lock(kvm, eaddr, &eval, esize);
++}
++
+ /*
+ * This struct provides an intermediate representation of the fields contained
+ * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC
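The two helpers added to vgic.h centralize a size sanity check that was previously open-coded as BUG_ON() at each save/restore call site, turning a host crash into an -EINVAL. A rough equivalent outside the kernel, with stubbed guest-memory accessors:

#include <errno.h>
#include <stdint.h>

/* Stubs standing in for the locked guest read/write helpers. */
static int guest_read(uint64_t gpa, void *buf, unsigned long len)  { return 0; }
static int guest_write(uint64_t gpa, const void *buf, unsigned long len) { return 0; }

/* Reject any ABI entry size that does not match the 8-byte table
 * entries, instead of crashing the host at each caller.
 */
static int its_read_entry(uint64_t gpa, uint64_t *eval, unsigned long esz)
{
        if (esz != sizeof(*eval))
                return -EINVAL;
        return guest_read(gpa, eval, esz);
}

static int its_write_entry(uint64_t gpa, uint64_t eval, unsigned long esz)
{
        if (esz != sizeof(eval))
                return -EINVAL;
        return guest_write(gpa, &eval, esz);
}
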
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 6e5a934ee2f55e..984fc63ed0bd9e 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -2046,6 +2046,12 @@ static void restore_args(struct jit_ctx *ctx, int args_off, int nregs)
+ }
+ }
+
++static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
++{
++ return fentry_links->nr_links == 1 &&
++ fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
++}
++
+ /* Based on the x86's implementation of arch_prepare_bpf_trampoline().
+ *
+ * bpf prog and function entry before bpf trampoline hooked:
+@@ -2075,6 +2081,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ bool save_ret;
+ __le32 **branches = NULL;
++ bool is_struct_ops = is_struct_ops_tramp(fentry);
+
+ /* trampoline stack layout:
+ * [ parent ip ]
+@@ -2143,11 +2150,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ */
+ emit_bti(A64_BTI_JC, ctx);
+
+- /* frame for parent function */
+- emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
+- emit(A64_MOV(1, A64_FP, A64_SP), ctx);
++ /* x9 is not set for struct_ops */
++ if (!is_struct_ops) {
++ /* frame for parent function */
++ emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
++ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
++ }
+
+- /* frame for patched function */
++ /* frame for patched function for tracing, or caller for struct_ops */
+ emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
+ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
+
+@@ -2241,19 +2251,24 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ /* reset SP */
+ emit(A64_MOV(1, A64_SP, A64_FP), ctx);
+
+- /* pop frames */
+- emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
+- emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
+-
+- if (flags & BPF_TRAMP_F_SKIP_FRAME) {
+- /* skip patched function, return to parent */
+- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+- emit(A64_RET(A64_R(9)), ctx);
++ if (is_struct_ops) {
++ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
++ emit(A64_RET(A64_LR), ctx);
+ } else {
+- /* return to patched function */
+- emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
+- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+- emit(A64_RET(A64_R(10)), ctx);
++ /* pop frames */
++ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
++ emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
++
++ if (flags & BPF_TRAMP_F_SKIP_FRAME) {
++ /* skip patched function, return to parent */
++ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
++ emit(A64_RET(A64_R(9)), ctx);
++ } else {
++ /* return to patched function */
++ emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
++ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
++ emit(A64_RET(A64_R(10)), ctx);
++ }
+ }
+
+ kfree(branches);
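A struct_ops trampoline is invoked directly rather than from a patched function, so no return address is staged in x9 and only one stack frame needs to be built and torn down. The detection helper is essentially the shape below (the types are illustrative, not the kernel's):

#include <stdbool.h>

enum link_type { LINK_TRACING, LINK_STRUCT_OPS };

struct tramp_links {
        int nr_links;
        enum link_type type[8];
};

/* Exactly one fentry link of struct_ops type means the trampoline is
 * the function body itself: skip the parent frame and return via LR.
 */
static bool is_struct_ops_tramp(const struct tramp_links *fentry)
{
        return fentry->nr_links == 1 && fentry->type[0] == LINK_STRUCT_OPS;
}
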
+diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
+index 51012e90780d6b..fe715b707fd0a4 100644
+--- a/arch/csky/kernel/setup.c
++++ b/arch/csky/kernel/setup.c
+@@ -112,9 +112,9 @@ asmlinkage __visible void __init csky_start(unsigned int unused,
+ pre_trap_init();
+
+ if (dtb_start == NULL)
+- early_init_dt_scan(__dtb_start);
++ early_init_dt_scan(__dtb_start, __pa(dtb_start));
+ else
+- early_init_dt_scan(dtb_start);
++ early_init_dt_scan(dtb_start, __pa(dtb_start));
+
+ start_kernel();
+
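This is the first of many call sites converted to the two-argument early_init_dt_scan(); every caller now passes the blob's physical address alongside the virtual pointer, typically via __pa(), so the scanner can record where firmware placed the FDT. A sketch of the convention, with a toy direct-map translation (the offset is an assumption, not csky's):

#include <stdint.h>

#define DIRECT_MAP_OFFSET 0x80000000UL  /* assumed, arch-specific */

static uintptr_t to_phys(const void *va)
{
        return (uintptr_t)va - DIRECT_MAP_OFFSET;
}

/* Stub with the new signature: both the virtual and the physical
 * address of the FDT are handed over up front.
 */
static int dt_scan(void *fdt_va, uintptr_t fdt_pa) { return fdt_va && fdt_pa; }

void parse_dtb(void *fdt_va)
{
        dt_scan(fdt_va, to_phys(fdt_va));
}
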
+diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
+index ae3f80622f4c60..567bd122a9ee47 100644
+--- a/arch/loongarch/Makefile
++++ b/arch/loongarch/Makefile
+@@ -59,7 +59,7 @@ endif
+
+ ifdef CONFIG_64BIT
+ ld-emul = $(64bit-emul)
+-cflags-y += -mabi=lp64s
++cflags-y += -mabi=lp64s -mcmodel=normal
+ endif
+
+ cflags-y += -pipe $(CC_FLAGS_NO_FPU)
+@@ -104,7 +104,7 @@ ifdef CONFIG_OBJTOOL
+ KBUILD_CFLAGS += -fno-jump-tables
+ endif
+
+-KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat
++KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat -Ccode-model=small
+ KBUILD_RUSTFLAGS_KERNEL += -Zdirect-access-external-data=yes
+ KBUILD_RUSTFLAGS_MODULE += -Zdirect-access-external-data=no
+
+diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm/page.h
+index e85df33f11c772..8f21567a3188b4 100644
+--- a/arch/loongarch/include/asm/page.h
++++ b/arch/loongarch/include/asm/page.h
+@@ -113,10 +113,7 @@ struct page *tlb_virt_to_page(unsigned long kaddr);
+ extern int __virt_addr_valid(volatile void *kaddr);
+ #define virt_addr_valid(kaddr) __virt_addr_valid((volatile void *)(kaddr))
+
+-#define VM_DATA_DEFAULT_FLAGS \
+- (VM_READ | VM_WRITE | \
+- ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0) | \
+- VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
++#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC
+
+ #include <asm-generic/memory_model.h>
+ #include <asm-generic/getorder.h>
+diff --git a/arch/loongarch/kernel/acpi.c b/arch/loongarch/kernel/acpi.c
+index 929a497c987e84..de9e34414e6144 100644
+--- a/arch/loongarch/kernel/acpi.c
++++ b/arch/loongarch/kernel/acpi.c
+@@ -57,48 +57,48 @@ void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
+ return ioremap_cache(phys, size);
+ }
+
+-static int cpu_enumerated = 0;
+-
+ #ifdef CONFIG_SMP
+-static int set_processor_mask(u32 id, u32 flags)
++static int set_processor_mask(u32 id, u32 pass)
+ {
+- int nr_cpus;
+- int cpu, cpuid = id;
+-
+- if (!cpu_enumerated)
+- nr_cpus = NR_CPUS;
+- else
+- nr_cpus = nr_cpu_ids;
++ int cpu = -1, cpuid = id;
+
+- if (num_processors >= nr_cpus) {
++ if (num_processors >= NR_CPUS) {
+ pr_warn(PREFIX "nr_cpus limit of %i reached."
+- " processor 0x%x ignored.\n", nr_cpus, cpuid);
++ " processor 0x%x ignored.\n", NR_CPUS, cpuid);
+
+ return -ENODEV;
+
+ }
++
+ if (cpuid == loongson_sysconf.boot_cpu_id)
+ cpu = 0;
+- else
+- cpu = find_first_zero_bit(cpumask_bits(cpu_present_mask), NR_CPUS);
+-
+- if (!cpu_enumerated)
+- set_cpu_possible(cpu, true);
+
+- if (flags & ACPI_MADT_ENABLED) {
++ switch (pass) {
++ case 1: /* Pass 1 handle enabled processors */
++ if (cpu < 0)
++ cpu = find_first_zero_bit(cpumask_bits(cpu_present_mask), NR_CPUS);
+ num_processors++;
+ set_cpu_present(cpu, true);
+- __cpu_number_map[cpuid] = cpu;
+- __cpu_logical_map[cpu] = cpuid;
+- } else
++ break;
++ case 2: /* Pass 2 handle disabled processors */
++ if (cpu < 0)
++ cpu = find_first_zero_bit(cpumask_bits(cpu_possible_mask), NR_CPUS);
+ disabled_cpus++;
++ break;
++ default:
++ return cpu;
++ }
++
++ set_cpu_possible(cpu, true);
++ __cpu_number_map[cpuid] = cpu;
++ __cpu_logical_map[cpu] = cpuid;
+
+ return cpu;
+ }
+ #endif
+
+ static int __init
+-acpi_parse_processor(union acpi_subtable_headers *header, const unsigned long end)
++acpi_parse_p1_processor(union acpi_subtable_headers *header, const unsigned long end)
+ {
+ struct acpi_madt_core_pic *processor = NULL;
+
+@@ -109,12 +109,29 @@ acpi_parse_processor(union acpi_subtable_headers *header, const unsigned long en
+ acpi_table_print_madt_entry(&header->common);
+ #ifdef CONFIG_SMP
+ acpi_core_pic[processor->core_id] = *processor;
+- set_processor_mask(processor->core_id, processor->flags);
++ if (processor->flags & ACPI_MADT_ENABLED)
++ set_processor_mask(processor->core_id, 1);
+ #endif
+
+ return 0;
+ }
+
++static int __init
++acpi_parse_p2_processor(union acpi_subtable_headers *header, const unsigned long end)
++{
++ struct acpi_madt_core_pic *processor = NULL;
++
++ processor = (struct acpi_madt_core_pic *)header;
++ if (BAD_MADT_ENTRY(processor, end))
++ return -EINVAL;
++
++#ifdef CONFIG_SMP
++ if (!(processor->flags & ACPI_MADT_ENABLED))
++ set_processor_mask(processor->core_id, 2);
++#endif
++
++ return 0;
++}
+ static int __init
+ acpi_parse_eio_master(union acpi_subtable_headers *header, const unsigned long end)
+ {
+@@ -142,12 +159,14 @@ static void __init acpi_process_madt(void)
+ }
+ #endif
+ acpi_table_parse_madt(ACPI_MADT_TYPE_CORE_PIC,
+- acpi_parse_processor, MAX_CORE_PIC);
++ acpi_parse_p1_processor, MAX_CORE_PIC);
++
++ acpi_table_parse_madt(ACPI_MADT_TYPE_CORE_PIC,
++ acpi_parse_p2_processor, MAX_CORE_PIC);
+
+ acpi_table_parse_madt(ACPI_MADT_TYPE_EIO_PIC,
+ acpi_parse_eio_master, MAX_IO_PICS);
+
+- cpu_enumerated = 1;
+ loongson_sysconf.nr_cpus = num_processors;
+ }
+
+@@ -306,6 +325,10 @@ static int __ref acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
+ int nid;
+
+ nid = acpi_get_node(handle);
++
++ if (nid != NUMA_NO_NODE)
++ nid = early_cpu_to_node(cpu);
++
+ if (nid != NUMA_NO_NODE) {
+ set_cpuid_to_node(physid, nid);
+ node_set(nid, numa_nodes_parsed);
+@@ -320,12 +343,14 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id, int *pcpu
+ {
+ int cpu;
+
+- cpu = set_processor_mask(physid, ACPI_MADT_ENABLED);
+- if (cpu < 0) {
++ cpu = cpu_number_map(physid);
++ if (cpu < 0 || cpu >= nr_cpu_ids) {
+ pr_info(PREFIX "Unable to map lapic to logical cpu number\n");
+- return cpu;
++ return -ERANGE;
+ }
+
++ num_processors++;
++ set_cpu_present(cpu, true);
+ acpi_map_cpu2node(handle, cpu, physid);
+
+ *pcpu = cpu;
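The rework enumerates MADT cores in two passes: pass 1 marks enabled cores present and possible, pass 2 adds the disabled (hot-pluggable) cores as possible only, and both passes record a logical-to-physical mapping. A condensed model of the allocation order (array sizes and names are illustrative):

#include <stdbool.h>

#define MAX_CORES 64

static bool present[MAX_CORES];
static bool possible[MAX_CORES];

static int first_clear(const bool *mask)
{
        for (int i = 0; i < MAX_CORES; i++)
                if (!mask[i])
                        return i;
        return -1;
}

/* Pass 1 allocates from the present mask, pass 2 from the possible
 * mask; because pass 1 set both bits, pass-2 CPUs land after them,
 * keeping logical IDs stable.
 */
static int enumerate(int pass)
{
        int cpu = first_clear(pass == 1 ? present : possible);

        if (cpu < 0)
                return -1;
        if (pass == 1)
                present[cpu] = true;
        possible[cpu] = true;
        return cpu;
}
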
+diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
+index d2b18ed23b8a4b..ec502cb4c61dd8 100644
+--- a/arch/loongarch/kernel/setup.c
++++ b/arch/loongarch/kernel/setup.c
+@@ -291,7 +291,7 @@ static void __init fdt_setup(void)
+ if (!fdt_pointer || fdt_check_header(fdt_pointer))
+ return;
+
+- early_init_dt_scan(fdt_pointer);
++ early_init_dt_scan(fdt_pointer, __pa(fdt_pointer));
+ early_init_fdt_reserve_self();
+
+ max_low_pfn = PFN_PHYS(memblock_end_of_DRAM());
+diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
+index b1329fe01fae90..5a8cb31a4e6b71 100644
+--- a/arch/loongarch/kernel/smp.c
++++ b/arch/loongarch/kernel/smp.c
+@@ -325,11 +325,11 @@ void __init loongson_prepare_cpus(unsigned int max_cpus)
+ int i = 0;
+
+ parse_acpi_topology();
++ cpu_data[0].global_id = cpu_logical_map(0);
+
+ for (i = 0; i < loongson_sysconf.nr_cpus; i++) {
+ set_cpu_present(i, true);
+ csr_mail_send(0, __cpu_logical_map[i], 0);
+- cpu_data[i].global_id = __cpu_logical_map[i];
+ }
+
+ per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
+@@ -374,6 +374,7 @@ void loongson_init_secondary(void)
+ cpu_logical_map(cpu) / loongson_sysconf.cores_per_package;
+ cpu_data[cpu].core = pptt_enabled ? cpu_data[cpu].core :
+ cpu_logical_map(cpu) % loongson_sysconf.cores_per_package;
++ cpu_data[cpu].global_id = cpu_logical_map(cpu);
+ }
+
+ void loongson_smp_finish(void)
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index 7dbefd4ba21071..dd350cba1252f9 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -179,7 +179,7 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
+
+ if (!is_tail_call) {
+ /* Set return value */
+- move_reg(ctx, LOONGARCH_GPR_A0, regmap[BPF_REG_0]);
++ emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0);
+ /* Return to the caller */
+ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
+ } else {
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index d724d46b07c842..3e5e30125b2127 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -18,7 +18,7 @@ ccflags-vdso := \
+ cflags-vdso := $(ccflags-vdso) \
+ -isystem $(shell $(CC) -print-file-name=include) \
+ $(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
+- -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
++ -std=gnu11 -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ $(call cc-option, -fno-asynchronous-unwind-tables) \
+ $(call cc-option, -fno-stack-protector)
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 7dab46728aedaf..b6958ec2a220cf 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -93,7 +93,7 @@ static struct platform_device mcf_uart = {
+ .dev.platform_data = mcf_uart_platform_data,
+ };
+
+-#if IS_ENABLED(CONFIG_FEC)
++#ifdef MCFFEC_BASE0
+
+ #ifdef CONFIG_M5441x
+ #define FEC_NAME "enet-fec"
+@@ -145,6 +145,7 @@ static struct platform_device mcf_fec0 = {
+ .platform_data = FEC_PDATA,
+ }
+ };
++#endif /* MCFFEC_BASE0 */
+
+ #ifdef MCFFEC_BASE1
+ static struct resource mcf_fec1_resources[] = {
+@@ -182,7 +183,6 @@ static struct platform_device mcf_fec1 = {
+ }
+ };
+ #endif /* MCFFEC_BASE1 */
+-#endif /* CONFIG_FEC */
+
+ #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
+ /*
+@@ -624,12 +624,12 @@ static struct platform_device mcf_flexcan0 = {
+
+ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_uart,
+-#if IS_ENABLED(CONFIG_FEC)
++#ifdef MCFFEC_BASE0
+ &mcf_fec0,
++#endif
+ #ifdef MCFFEC_BASE1
+ &mcf_fec1,
+ #endif
+-#endif
+ #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
+ &mcf_qspi,
+ #endif
+diff --git a/arch/m68k/include/asm/mcfgpio.h b/arch/m68k/include/asm/mcfgpio.h
+index 019f244395464d..9c91ecdafc4539 100644
+--- a/arch/m68k/include/asm/mcfgpio.h
++++ b/arch/m68k/include/asm/mcfgpio.h
+@@ -136,7 +136,7 @@ static inline void gpio_free(unsigned gpio)
+ * read-modify-write as well as those controlled by the EPORT and GPIO modules.
+ */
+ #define MCFGPIO_SCR_START 40
+-#elif defined(CONFIGM5441x)
++#elif defined(CONFIG_M5441x)
+ /* The m5441x EPORT doesn't have its own GPIO port, uses PORT C */
+ #define MCFGPIO_SCR_START 0
+ #else
+diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
+index e28eb1c0e0bfb3..dbf88059e47a4d 100644
+--- a/arch/m68k/include/asm/mvme147hw.h
++++ b/arch/m68k/include/asm/mvme147hw.h
+@@ -93,8 +93,8 @@ struct pcc_regs {
+ #define M147_SCC_B_ADDR 0xfffe3000
+ #define M147_SCC_PCLK 5000000
+
+-#define MVME147_IRQ_SCSI_PORT (IRQ_USER+0x45)
+-#define MVME147_IRQ_SCSI_DMA (IRQ_USER+0x46)
++#define MVME147_IRQ_SCSI_PORT (IRQ_USER + 5)
++#define MVME147_IRQ_SCSI_DMA (IRQ_USER + 6)
+
+ /* SCC interrupts, for MVME147 */
+
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index 3cc944df04f65e..f11ef9f1f56fcf 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -13,6 +13,7 @@
+ #include <asm/setup.h>
+
+
++#include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+
+ asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+@@ -22,7 +23,9 @@ static void __ref debug_cons_write(struct console *c,
+ {
+ #if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+ defined(CONFIG_COLDFIRE))
+- if (MACH_IS_MVME16x)
++ if (MACH_IS_MVME147)
++ mvme147_scc_write(c, s, n);
++ else if (MACH_IS_MVME16x)
+ mvme16x_cons_write(c, s, n);
+ else
+ debug_cons_nputs(s, n);
+diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
+index 8b5dc07f0811f2..cc2fb0a83cf0b4 100644
+--- a/arch/m68k/mvme147/config.c
++++ b/arch/m68k/mvme147/config.c
+@@ -32,6 +32,7 @@
+ #include <asm/mvme147hw.h>
+ #include <asm/config.h>
+
++#include "mvme147.h"
+
+ static void mvme147_get_model(char *model);
+ extern void mvme147_sched_init(void);
+@@ -185,3 +186,32 @@ int mvme147_hwclk(int op, struct rtc_time *t)
+ }
+ return 0;
+ }
++
++static void scc_delay(void)
++{
++ __asm__ __volatile__ ("nop; nop;");
++}
++
++static void scc_write(char ch)
++{
++ do {
++ scc_delay();
++ } while (!(in_8(M147_SCC_A_ADDR) & BIT(2)));
++ scc_delay();
++ out_8(M147_SCC_A_ADDR, 8);
++ scc_delay();
++ out_8(M147_SCC_A_ADDR, ch);
++}
++
++void mvme147_scc_write(struct console *co, const char *str, unsigned int count)
++{
++ unsigned long flags;
++
++ local_irq_save(flags);
++ while (count--) {
++ if (*str == '\n')
++ scc_write('\r');
++ scc_write(*str++);
++ }
++ local_irq_restore(flags);
++}
+diff --git a/arch/m68k/mvme147/mvme147.h b/arch/m68k/mvme147/mvme147.h
+new file mode 100644
+index 00000000000000..140bc98b0102aa
+--- /dev/null
++++ b/arch/m68k/mvme147/mvme147.h
+@@ -0,0 +1,6 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++struct console;
++
++/* config.c */
++void mvme147_scc_write(struct console *co, const char *str, unsigned int count);
+diff --git a/arch/microblaze/kernel/microblaze_ksyms.c b/arch/microblaze/kernel/microblaze_ksyms.c
+index c892e173ec990b..a8553f54152b76 100644
+--- a/arch/microblaze/kernel/microblaze_ksyms.c
++++ b/arch/microblaze/kernel/microblaze_ksyms.c
+@@ -16,6 +16,7 @@
+ #include <asm/page.h>
+ #include <linux/ftrace.h>
+ #include <linux/uaccess.h>
++#include <asm/xilinx_mb_manager.h>
+
+ #ifdef CONFIG_FUNCTION_TRACER
+ extern void _mcount(void);
+@@ -46,3 +47,12 @@ extern void __udivsi3(void);
+ EXPORT_SYMBOL(__udivsi3);
+ extern void __umodsi3(void);
+ EXPORT_SYMBOL(__umodsi3);
++
++#ifdef CONFIG_MB_MANAGER
++extern void xmb_manager_register(uintptr_t phys_baseaddr, u32 cr_val,
++ void (*callback)(void *data),
++ void *priv, void (*reset_callback)(void *data));
++EXPORT_SYMBOL(xmb_manager_register);
++extern asmlinkage void xmb_inject_err(void);
++EXPORT_SYMBOL(xmb_inject_err);
++#endif
+diff --git a/arch/microblaze/kernel/prom.c b/arch/microblaze/kernel/prom.c
+index e424c796e297c5..76ac4cfdfb42ce 100644
+--- a/arch/microblaze/kernel/prom.c
++++ b/arch/microblaze/kernel/prom.c
+@@ -18,7 +18,7 @@ void __init early_init_devtree(void *params)
+ {
+ pr_debug(" -> early_init_devtree(%p)\n", params);
+
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ if (!strlen(boot_command_line))
+ strscpy(boot_command_line, cmd_line, COMMAND_LINE_SIZE);
+
+diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h
+index a4374b4cb88fd8..d6ccd534402133 100644
+--- a/arch/mips/include/asm/switch_to.h
++++ b/arch/mips/include/asm/switch_to.h
+@@ -97,7 +97,7 @@ do { \
+ } \
+ } while (0)
+ #else
+-# define __sanitize_fcr31(next)
++# define __sanitize_fcr31(next) do { (void) (next); } while (0)
+ #endif
+
+ /*
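Replacing the empty expansion with do { (void)(next); } while (0) keeps the stub a single statement and still "uses" its argument. A small demonstration of what the fix buys (the macro name here is a stand-in):

#define sanitize_fcr31(next) do { (void)(next); } while (0)

void demo(int cond, int next)
{
        /* The do/while form evaluates and discards 'next', so builds
         * with the real implementation compiled out do not warn that
         * the variable is set but never used, and the macro can be
         * dropped anywhere a function call could appear.
         */
        if (cond)
                sanitize_fcr31(next);
        else
                next = 0;
        (void)next;
}
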
+diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c
+index 6062e6fa589a87..4fd6da0a06c372 100644
+--- a/arch/mips/kernel/prom.c
++++ b/arch/mips/kernel/prom.c
+@@ -41,7 +41,7 @@ char *mips_get_machine_name(void)
+
+ void __init __dt_setup_arch(void *bph)
+ {
+- if (!early_init_dt_scan(bph))
++ if (!early_init_dt_scan(bph, __pa(bph)))
+ return;
+
+ mips_set_machine_name(of_flat_dt_get_machine_name());
+diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
+index 7eeeaf1ff95d26..cda7983e7c18d4 100644
+--- a/arch/mips/kernel/relocate.c
++++ b/arch/mips/kernel/relocate.c
+@@ -337,7 +337,7 @@ void *__init relocate_kernel(void)
+ #if defined(CONFIG_USE_OF)
+ /* Deal with the device tree */
+ fdt = plat_get_fdt();
+- early_init_dt_scan(fdt);
++ early_init_dt_scan(fdt, __pa(fdt));
+ if (boot_command_line[0]) {
+ /* Boot command line was passed in device tree */
+ strscpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+diff --git a/arch/nios2/kernel/prom.c b/arch/nios2/kernel/prom.c
+index 9a8393e6b4a85e..db049249766fc2 100644
+--- a/arch/nios2/kernel/prom.c
++++ b/arch/nios2/kernel/prom.c
+@@ -27,7 +27,7 @@ void __init early_init_devtree(void *params)
+ if (be32_to_cpup((__be32 *)CONFIG_NIOS2_DTB_PHYS_ADDR) ==
+ OF_DT_HEADER) {
+ params = (void *)CONFIG_NIOS2_DTB_PHYS_ADDR;
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ return;
+ }
+ #endif
+@@ -37,5 +37,5 @@ void __init early_init_devtree(void *params)
+ params = (void *)__dtb_start;
+ #endif
+
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ }
+diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
+index 69c0258700b28a..3279ef457c573a 100644
+--- a/arch/openrisc/Kconfig
++++ b/arch/openrisc/Kconfig
+@@ -65,6 +65,9 @@ config STACKTRACE_SUPPORT
+ config LOCKDEP_SUPPORT
+ def_bool y
+
++config FIX_EARLYCON_MEM
++ def_bool y
++
+ menu "Processor type and features"
+
+ choice
+diff --git a/arch/openrisc/include/asm/fixmap.h b/arch/openrisc/include/asm/fixmap.h
+index ecdb98a5839f7c..aaa6a26a3e9215 100644
+--- a/arch/openrisc/include/asm/fixmap.h
++++ b/arch/openrisc/include/asm/fixmap.h
+@@ -26,29 +26,18 @@
+ #include <linux/bug.h>
+ #include <asm/page.h>
+
+-/*
+- * On OpenRISC we use these special fixed_addresses for doing ioremap
+- * early in the boot process before memory initialization is complete.
+- * This is used, in particular, by the early serial console code.
+- *
+- * It's not really 'fixmap', per se, but fits loosely into the same
+- * paradigm.
+- */
+ enum fixed_addresses {
+- /*
+- * FIX_IOREMAP entries are useful for mapping physical address
+- * space before ioremap() is useable, e.g. really early in boot
+- * before kmalloc() is working.
+- */
+-#define FIX_N_IOREMAPS 32
+- FIX_IOREMAP_BEGIN,
+- FIX_IOREMAP_END = FIX_IOREMAP_BEGIN + FIX_N_IOREMAPS - 1,
++ FIX_EARLYCON_MEM_BASE,
+ __end_of_fixed_addresses
+ };
+
+ #define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+ /* FIXADDR_BOTTOM might be a better name here... */
+ #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
++#define FIXMAP_PAGE_IO PAGE_KERNEL_NOCACHE
++
++extern void __set_fixmap(enum fixed_addresses idx,
++ phys_addr_t phys, pgprot_t flags);
+
+ #include <asm-generic/fixmap.h>
+
+diff --git a/arch/openrisc/kernel/prom.c b/arch/openrisc/kernel/prom.c
+index 19e6008bf114c6..e424e9bd12a793 100644
+--- a/arch/openrisc/kernel/prom.c
++++ b/arch/openrisc/kernel/prom.c
+@@ -22,6 +22,6 @@
+
+ void __init early_init_devtree(void *params)
+ {
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ memblock_allow_resize();
+ }
+diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
+index 1dcd78c8f0e99b..d0cb1a0126f95d 100644
+--- a/arch/openrisc/mm/init.c
++++ b/arch/openrisc/mm/init.c
+@@ -207,6 +207,43 @@ void __init mem_init(void)
+ return;
+ }
+
++static int __init map_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
++{
++ p4d_t *p4d;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
++
++ p4d = p4d_offset(pgd_offset_k(va), va);
++ pud = pud_offset(p4d, va);
++ pmd = pmd_offset(pud, va);
++ pte = pte_alloc_kernel(pmd, va);
++
++ if (pte == NULL)
++ return -ENOMEM;
++
++ if (pgprot_val(prot))
++ set_pte_at(&init_mm, va, pte, pfn_pte(pa >> PAGE_SHIFT, prot));
++ else
++ pte_clear(&init_mm, va, pte);
++
++ local_flush_tlb_page(NULL, va);
++ return 0;
++}
++
++void __init __set_fixmap(enum fixed_addresses idx,
++ phys_addr_t phys, pgprot_t prot)
++{
++ unsigned long address = __fix_to_virt(idx);
++
++ if (idx >= __end_of_fixed_addresses) {
++ BUG();
++ return;
++ }
++
++ map_page(address, phys, prot);
++}
++
+ static const pgprot_t protection_map[16] = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY_X,
+diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
+index c91f9c2e61ed25..f8d08eab7db8b0 100644
+--- a/arch/parisc/kernel/ftrace.c
++++ b/arch/parisc/kernel/ftrace.c
+@@ -87,7 +87,7 @@ int ftrace_enable_ftrace_graph_caller(void)
+
+ int ftrace_disable_ftrace_graph_caller(void)
+ {
+- static_key_enable(&ftrace_graph_enable.key);
++ static_key_disable(&ftrace_graph_enable.key);
+ return 0;
+ }
+ #endif
+diff --git a/arch/powerpc/include/asm/dtl.h b/arch/powerpc/include/asm/dtl.h
+index d6f43d149f8dcb..a5c21bc623cb00 100644
+--- a/arch/powerpc/include/asm/dtl.h
++++ b/arch/powerpc/include/asm/dtl.h
+@@ -1,8 +1,8 @@
+ #ifndef _ASM_POWERPC_DTL_H
+ #define _ASM_POWERPC_DTL_H
+
++#include <linux/rwsem.h>
+ #include <asm/lppaca.h>
+-#include <linux/spinlock_types.h>
+
+ /*
+ * Layout of entries in the hypervisor's dispatch trace log buffer.
+@@ -35,7 +35,7 @@ struct dtl_entry {
+ #define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
+
+ extern struct kmem_cache *dtl_cache;
+-extern rwlock_t dtl_access_lock;
++extern struct rw_semaphore dtl_access_lock;
+
+ extern void register_dtl_buffer(int cpu);
+ extern void alloc_dtl_buffers(unsigned long *time_limit);
+diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
+index ef40c9b6972a6e..a48f54dde4f656 100644
+--- a/arch/powerpc/include/asm/fadump.h
++++ b/arch/powerpc/include/asm/fadump.h
+@@ -19,6 +19,7 @@ extern int is_fadump_active(void);
+ extern int should_fadump_crash(void);
+ extern void crash_fadump(struct pt_regs *, const char *);
+ extern void fadump_cleanup(void);
++void fadump_setup_param_area(void);
+ extern void fadump_append_bootargs(void);
+
+ #else /* CONFIG_FA_DUMP */
+@@ -26,6 +27,7 @@ static inline int is_fadump_active(void) { return 0; }
+ static inline int should_fadump_crash(void) { return 0; }
+ static inline void crash_fadump(struct pt_regs *regs, const char *str) { }
+ static inline void fadump_cleanup(void) { }
++static inline void fadump_setup_param_area(void) { }
+ static inline void fadump_append_bootargs(void) { }
+ #endif /* !CONFIG_FA_DUMP */
+
+@@ -34,4 +36,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
+ int depth, void *data);
+ extern int fadump_reserve_mem(void);
+ #endif
++
++#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
++void fadump_cma_init(void);
++#else
++static inline void fadump_cma_init(void) { }
++#endif
++
+ #endif /* _ASM_POWERPC_FADUMP_H */
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index 2ef9a5f4e5d14c..11065313d4c123 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -684,8 +684,8 @@ int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1);
+ int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu);
+ int kvmhv_nestedv2_set_vpa(struct kvm_vcpu *vcpu, unsigned long vpa);
+
+-int kmvhv_counters_tracepoint_regfunc(void);
+-void kmvhv_counters_tracepoint_unregfunc(void);
++int kvmhv_counters_tracepoint_regfunc(void);
++void kvmhv_counters_tracepoint_unregfunc(void);
+ int kvmhv_get_l2_counters_status(void);
+ void kvmhv_set_l2_counters_status(int cpu, bool status);
+
+diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
+index 50950deedb8734..e3d0e714ff280e 100644
+--- a/arch/powerpc/include/asm/sstep.h
++++ b/arch/powerpc/include/asm/sstep.h
+@@ -173,9 +173,4 @@ int emulate_step(struct pt_regs *regs, ppc_inst_t instr);
+ */
+ extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
+
+-extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+- const void *mem, bool cross_endian);
+-extern void emulate_vsx_store(struct instruction_op *op,
+- const union vsx_reg *reg, void *mem,
+- bool cross_endian);
+ extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);
+diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
+index 7650b6ce14c85a..8d972bc98b55fe 100644
+--- a/arch/powerpc/include/asm/vdso.h
++++ b/arch/powerpc/include/asm/vdso.h
+@@ -25,6 +25,7 @@ int vdso_getcpu_init(void);
+ #ifdef __VDSO64__
+ #define V_FUNCTION_BEGIN(name) \
+ .globl name; \
++ .type name,@function; \
+ name: \
+
+ #define V_FUNCTION_END(name) \
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index af4263594eb2c9..1bee15c013e75f 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -867,7 +867,7 @@ bool __init dt_cpu_ftrs_init(void *fdt)
+ using_dt_cpu_ftrs = false;
+
+ /* Setup and verify the FDT, if it fails we just bail */
+- if (!early_init_dt_verify(fdt))
++ if (!early_init_dt_verify(fdt, __pa(fdt)))
+ return false;
+
+ if (!of_scan_flat_dt(fdt_find_cpu_features, NULL))
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index a612e7513a4f8a..4641de75f7fc1e 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -78,27 +78,23 @@ static struct cma *fadump_cma;
+ * But for some reason even if it fails we still have the memory reservation
+ * with us and we can still continue doing fadump.
+ */
+-static int __init fadump_cma_init(void)
++void __init fadump_cma_init(void)
+ {
+ unsigned long long base, size;
+ int rc;
+
+- if (!fw_dump.fadump_enabled)
+- return 0;
+-
++ if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
++ fw_dump.dump_active)
++ return;
+ /*
+ * Do not use CMA if user has provided fadump=nocma kernel parameter.
+- * Return 1 to continue with fadump old behaviour.
+ */
+- if (fw_dump.nocma)
+- return 1;
++ if (fw_dump.nocma || !fw_dump.boot_memory_size)
++ return;
+
+ base = fw_dump.reserve_dump_area_start;
+ size = fw_dump.boot_memory_size;
+
+- if (!size)
+- return 0;
+-
+ rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
+ if (rc) {
+ pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
+@@ -108,7 +104,7 @@ static int __init fadump_cma_init(void)
+ * blocked from production system usage. Hence just return,
+ * so that we can continue with fadump.
+ */
+- return 1;
++ return;
+ }
+
+ /*
+@@ -125,10 +121,7 @@ static int __init fadump_cma_init(void)
+ cma_get_size(fadump_cma),
+ (unsigned long)cma_get_base(fadump_cma) >> 20,
+ fw_dump.reserve_dump_area_size);
+- return 1;
+ }
+-#else
+-static int __init fadump_cma_init(void) { return 1; }
+ #endif /* CONFIG_CMA */
+
+ /*
+@@ -143,7 +136,7 @@ void __init fadump_append_bootargs(void)
+ if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area)
+ return;
+
+- if (fw_dump.param_area >= fw_dump.boot_mem_top) {
++ if (fw_dump.param_area < fw_dump.boot_mem_top) {
+ if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) {
+ pr_warn("WARNING: Can't use additional parameters area!\n");
+ fw_dump.param_area = 0;
+@@ -637,8 +630,6 @@ int __init fadump_reserve_mem(void)
+
+ pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
+ (size >> 20), base, (memblock_phys_mem_size() >> 20));
+-
+- ret = fadump_cma_init();
+ }
+
+ return ret;
+@@ -1586,6 +1577,12 @@ static void __init fadump_init_files(void)
+ return;
+ }
+
++ if (fw_dump.param_area) {
++ rc = sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr);
++ if (rc)
++ pr_err("unable to create bootargs_append sysfs file (%d)\n", rc);
++ }
++
+ debugfs_create_file("fadump_region", 0444, arch_debugfs_dir, NULL,
+ &fadump_region_fops);
+
+@@ -1740,7 +1737,7 @@ static void __init fadump_process(void)
+ * Reserve memory to store additional parameters to be passed
+ * for fadump/capture kernel.
+ */
+-static void __init fadump_setup_param_area(void)
++void __init fadump_setup_param_area(void)
+ {
+ phys_addr_t range_start, range_end;
+
+@@ -1748,7 +1745,7 @@ static void __init fadump_setup_param_area(void)
+ return;
+
+ /* This memory can't be used by PFW or bootloader as it is shared across kernels */
+- if (radix_enabled()) {
++ if (early_radix_enabled()) {
+ /*
+ * Anywhere in the upper half should be good enough as all memory
+ * is accessible in real mode.
+@@ -1776,12 +1773,12 @@ static void __init fadump_setup_param_area(void)
+ COMMAND_LINE_SIZE,
+ range_start,
+ range_end);
+- if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) {
++ if (!fw_dump.param_area) {
+ pr_warn("WARNING: Could not setup area to pass additional parameters!\n");
+ return;
+ }
+
+- memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE);
++ memset((void *)fw_dump.param_area, 0, COMMAND_LINE_SIZE);
+ }
+
+ /*
+@@ -1807,7 +1804,6 @@ int __init setup_fadump(void)
+ }
+ /* Initialize the kernel dump memory structure and register with f/w */
+ else if (fw_dump.reserve_dump_area_size) {
+- fadump_setup_param_area();
+ fw_dump.ops->fadump_init_mem_struct(&fw_dump);
+ register_fadump();
+ }
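Two behavioural fixes sit in these fadump hunks: the parameter area is only reserved when it actually lies inside the preserved boot memory (the comparison flipped from >= to <), and it is zeroed through its physical address, which is directly usable that early in boot. A sketch of the reservation rule, with a stubbed reservation call (all names hypothetical):

#include <stdbool.h>
#include <stdint.h>

#define CMDLINE_SIZE 2048

static uint64_t param_area;    /* physical address of the extra args */
static uint64_t boot_mem_top;

/* Stub: pretend the memblock-style reservation succeeded. */
static bool reserve(uint64_t base, uint64_t size) { (void)base; (void)size; return true; }

void append_bootargs(void)
{
        if (!param_area)
                return;

        /* Only addresses below boot_mem_top live in memory the capture
         * kernel preserves, so only those need an explicit reservation.
         */
        if (param_area < boot_mem_top && !reserve(param_area, CMDLINE_SIZE)) {
                param_area = 0;   /* cannot protect it: stop using it */
                return;
        }
        /* ... append the saved command-line fragment ... */
}
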
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 0be07ed407c703..e0059842a1c64b 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -791,7 +791,7 @@ void __init early_init_devtree(void *params)
+ DBG(" -> early_init_devtree(%px)\n", params);
+
+ /* Too early to BUG_ON(), do it by hand */
+- if (!early_init_dt_verify(params))
++ if (!early_init_dt_verify(params, __pa(params)))
+ panic("BUG: Failed verifying flat device tree, bad version?");
+
+ of_scan_flat_dt(early_init_dt_scan_model, NULL);
+@@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
+
+ mmu_early_init_devtree();
+
++ /* Setup param area for passing additional parameters to fadump capture kernel. */
++ fadump_setup_param_area();
++
+ #ifdef CONFIG_PPC_POWERNV
+ /* Scan and build the list of machine check recoverable ranges */
+ of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index 943430077375a4..b6b01502e50472 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -997,9 +997,11 @@ void __init setup_arch(char **cmdline_p)
+ initmem_init();
+
+ /*
+- * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+- * be called after initmem_init(), so that pageblock_order is initialised.
++ * Reserve large chunks of memory for use by CMA for fadump, KVM and
++ * hugetlb. These must be called after initmem_init(), so that
++ * pageblock_order is initialised.
+ */
++ fadump_cma_init();
+ kvm_cma_reserve();
+ gigantic_hugetlb_cma_reserve();
+
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 22f83fbbc762ac..1edc7cd68c10d0 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -920,6 +920,7 @@ static int __init disable_hardlockup_detector(void)
+ hardlockup_detector_disable();
+ #else
+ if (firmware_has_feature(FW_FEATURE_LPAR)) {
++ check_kvm_guest();
+ if (is_kvm_guest())
+ hardlockup_detector_disable();
+ }
+diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
+index 9738adabeb1fee..dc65c139115772 100644
+--- a/arch/powerpc/kexec/file_load_64.c
++++ b/arch/powerpc/kexec/file_load_64.c
+@@ -736,13 +736,18 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+ if (dn) {
+ u64 val;
+
+- of_property_read_u64(dn, "opal-base-address", &val);
++ ret = of_property_read_u64(dn, "opal-base-address", &val);
++ if (ret)
++ goto out;
++
+ ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
+ sizeof(val), false);
+ if (ret)
+ goto out;
+
+- of_property_read_u64(dn, "opal-entry-address", &val);
++ ret = of_property_read_u64(dn, "opal-entry-address", &val);
++ if (ret)
++ goto out;
+ ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
+ sizeof(val), false);
+ }
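of_property_read_u64() leaves its output untouched on failure, so the previously unchecked calls could hand an uninitialized value to the purgatory. The corrected shape, with stand-ins for the devicetree and purgatory helpers:

#include <stdint.h>

static int read_u64_prop(const char *name, uint64_t *val)
{
        (void)name; *val = 0; return 0;   /* stub: 0 on success */
}

static int set_symbol(const char *sym, uint64_t val)
{
        (void)sym; (void)val; return 0;   /* stub */
}

static int setup_opal_symbols(void)
{
        uint64_t val;
        int ret;

        ret = read_u64_prop("opal-base-address", &val);
        if (ret)
                return ret;        /* property absent: don't publish garbage */

        ret = set_symbol("opal_base", val);
        if (ret)
                return ret;

        ret = read_u64_prop("opal-entry-address", &val);
        if (ret)
                return ret;

        return set_symbol("opal_entry", val);
}
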
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 0ed5c5c7a350d8..45b24706f75128 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4148,7 +4148,7 @@ void kvmhv_set_l2_counters_status(int cpu, bool status)
+ lppaca_of(cpu).l2_counters_enable = 0;
+ }
+
+-int kmvhv_counters_tracepoint_regfunc(void)
++int kvmhv_counters_tracepoint_regfunc(void)
+ {
+ int cpu;
+
+@@ -4158,7 +4158,7 @@ int kmvhv_counters_tracepoint_regfunc(void)
+ return 0;
+ }
+
+-void kmvhv_counters_tracepoint_unregfunc(void)
++void kvmhv_counters_tracepoint_unregfunc(void)
+ {
+ int cpu;
+
+@@ -4303,6 +4303,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
+ }
+ hvregs.hdec_expiry = time_limit;
+
++ /*
++ * hvregs now holds the doorbell status, so zero it here, which
++ * lets us receive doorbells while H_ENTER_NESTED is in
++ * progress for this vCPU.
++ */
++
++ if (vcpu->arch.doorbell_request)
++ vcpu->arch.doorbell_request = 0;
++
+ /*
+ * When setting DEC, we must always deal with irq_work_raise
+ * via NMI vs setting DEC. The problem occurs right as we
+@@ -4906,7 +4915,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ lpcr &= ~LPCR_MER;
+ }
+ } else if (vcpu->arch.pending_exceptions ||
+- vcpu->arch.doorbell_request ||
+ xive_interrupt_pending(vcpu)) {
+ vcpu->arch.ret = RESUME_HOST;
+ goto out;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 05f5220960c63b..125440a606ee3b 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -32,7 +32,7 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ hr->pcr = vc->pcr | PCR_MASK;
+- hr->dpdes = vc->dpdes;
++ hr->dpdes = vcpu->arch.doorbell_request;
+ hr->hfscr = vcpu->arch.hfscr;
+ hr->tb_offset = vc->tb_offset;
+ hr->dawr0 = vcpu->arch.dawr0;
+@@ -105,7 +105,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+- hr->dpdes = vc->dpdes;
++ hr->dpdes = vcpu->arch.doorbell_request;
+ hr->purr = vcpu->arch.purr;
+ hr->spurr = vcpu->arch.spurr;
+ hr->ic = vcpu->arch.ic;
+@@ -143,7 +143,7 @@ static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ vc->pcr = hr->pcr | PCR_MASK;
+- vc->dpdes = hr->dpdes;
++ vcpu->arch.doorbell_request = hr->dpdes;
+ vcpu->arch.hfscr = hr->hfscr;
+ vcpu->arch.dawr0 = hr->dawr0;
+ vcpu->arch.dawrx0 = hr->dawrx0;
+@@ -170,7 +170,13 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+- vc->dpdes = hr->dpdes;
++ /*
++ * This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled.
++ * Make sure we preserve the doorbell if it was either:
++ * a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1)
++ * b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1)
++ */
++ vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes;
+ vcpu->arch.hfscr = hr->hfscr;
+ vcpu->arch.purr = hr->purr;
+ vcpu->arch.spurr = hr->spurr;
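The OR above is the whole fix: a doorbell can surface on the L1 vCPU while H_ENTER_NESTED is being serviced, or be left unconsumed in the L2 exit state, and both must survive the return. Reduced to its essentials:

#include <stdbool.h>

struct vcpu_state {
        bool doorbell_request;   /* set if a doorbell arrived mid-hcall */
};

static void restore_return_state(struct vcpu_state *v, bool hr_dpdes)
{
        /* Keep the doorbell whether it arrived during the hcall
         * (v->doorbell_request) or L2 exited without handling it
         * (hr_dpdes); a plain assignment of either would lose the other.
         */
        v->doorbell_request = v->doorbell_request || hr_dpdes;
}
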
+diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
+index 77ebc724e6cdf4..35fccaa575cc15 100644
+--- a/arch/powerpc/kvm/trace_hv.h
++++ b/arch/powerpc/kvm/trace_hv.h
+@@ -538,7 +538,7 @@ TRACE_EVENT_FN_COND(kvmppc_vcpu_stats,
+ TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns l2_runtime=%llu ns",
+ __entry->vcpu_id, __entry->l1_to_l2_cs,
+ __entry->l2_to_l1_cs, __entry->l2_runtime),
+- kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc
++ kvmhv_counters_tracepoint_regfunc, kvmhv_counters_tracepoint_unregfunc
+ );
+ #endif
+ #endif /* _TRACE_KVM_HV_H */
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index e65f3fb68d06ba..ac3ee19531d8ac 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -780,8 +780,8 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea,
+ #endif /* __powerpc64 */
+
+ #ifdef CONFIG_VSX
+-void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+- const void *mem, bool rev)
++static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
++ const void *mem, bool rev)
+ {
+ int size, read_size;
+ int i, j;
+@@ -863,11 +863,9 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+ break;
+ }
+ }
+-EXPORT_SYMBOL_GPL(emulate_vsx_load);
+-NOKPROBE_SYMBOL(emulate_vsx_load);
+
+-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+- void *mem, bool rev)
++static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
++ void *mem, bool rev)
+ {
+ int size, write_size;
+ int i, j;
+@@ -955,8 +953,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+ break;
+ }
+ }
+-EXPORT_SYMBOL_GPL(emulate_vsx_store);
+-NOKPROBE_SYMBOL(emulate_vsx_store);
+
+ static nokprobe_inline int do_vsx_load(struct instruction_op *op,
+ unsigned long ea, struct pt_regs *regs,
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 81c77ddce2e30a..c156fe0d53c378 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -439,10 +439,16 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
+ /*
+ * The kernel should never take an execute fault nor should it
+ * take a page fault to a kernel address or a page fault to a user
+- * address outside of dedicated places
++ * address outside of dedicated places.
++ *
++ * Rather than kfence directly reporting false negatives, search whether
++ * the NIP belongs to the fixup table for cases where the fault could
++ * come from functions like copy_from_kernel_nofault().
+ */
+ if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) {
+- if (kfence_handle_page_fault(address, is_write, regs))
++ if (is_kfence_address((void *)address) &&
++ !search_exception_tables(instruction_pointer(regs)) &&
++ kfence_handle_page_fault(address, is_write, regs))
+ return 0;
+
+ return SIGSEGV;
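The extra search_exception_tables() test distinguishes intentional probes (copy_from_kernel_nofault() and friends, whose faulting instructions carry fixup entries) from genuine stray accesses before involving KFENCE. The decision order, with the kernel helpers stubbed out:

#include <stdbool.h>
#include <stdint.h>

/* Stubs for is_kfence_address(), search_exception_tables() and
 * kfence_handle_page_fault(); bodies are placeholders only.
 */
static bool is_kfence_address(uintptr_t addr) { return addr != 0; }
static bool has_fixup_entry(uintptr_t nip)    { return nip == 0; }
static bool kfence_report(uintptr_t addr)     { return addr != 0; }

/* 0: handled as a KFENCE hit; -1: fall through to SIGSEGV/fixup. */
static int bad_kernel_access(uintptr_t addr, uintptr_t nip)
{
        if (is_kfence_address(addr) &&
            !has_fixup_entry(nip) &&     /* not a deliberate nofault probe */
            kfence_report(addr))
                return 0;
        return -1;
}
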
+diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
+index 3f1cdccebc9c15..ecc04ef8c53e31 100644
+--- a/arch/powerpc/platforms/pseries/dtl.c
++++ b/arch/powerpc/platforms/pseries/dtl.c
+@@ -191,7 +191,7 @@ static int dtl_enable(struct dtl *dtl)
+ return -EBUSY;
+
+ /* ensure there are no other conflicting dtl users */
+- if (!read_trylock(&dtl_access_lock))
++ if (!down_read_trylock(&dtl_access_lock))
+ return -EBUSY;
+
+ n_entries = dtl_buf_entries;
+@@ -199,7 +199,7 @@ static int dtl_enable(struct dtl *dtl)
+ if (!buf) {
+ printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
+ __func__, dtl->cpu);
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ return -ENOMEM;
+ }
+
+@@ -217,7 +217,7 @@ static int dtl_enable(struct dtl *dtl)
+ spin_unlock(&dtl->lock);
+
+ if (rc) {
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ kmem_cache_free(dtl_cache, buf);
+ }
+
+@@ -232,7 +232,7 @@ static void dtl_disable(struct dtl *dtl)
+ dtl->buf = NULL;
+ dtl->buf_entries = 0;
+ spin_unlock(&dtl->lock);
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ }
+
+ /* file interface */
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index c1d8bee8f7018c..bb09990eec309a 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -169,7 +169,7 @@ struct vcpu_dispatch_data {
+ */
+ #define NR_CPUS_H NR_CPUS
+
+-DEFINE_RWLOCK(dtl_access_lock);
++DECLARE_RWSEM(dtl_access_lock);
+ static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data);
+ static DEFINE_PER_CPU(u64, dtl_entry_ridx);
+ static DEFINE_PER_CPU(struct dtl_worker, dtl_workers);
+@@ -463,7 +463,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
+ {
+ int rc = 0, state;
+
+- if (!write_trylock(&dtl_access_lock)) {
++ if (!down_write_trylock(&dtl_access_lock)) {
+ rc = -EBUSY;
+ goto out;
+ }
+@@ -479,7 +479,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
+ pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n");
+ free_dtl_buffers(time_limit);
+ reset_global_dtl_mask();
+- write_unlock(&dtl_access_lock);
++ up_write(&dtl_access_lock);
+ rc = -EINVAL;
+ goto out;
+ }
+@@ -494,7 +494,7 @@ static void dtl_worker_disable(unsigned long *time_limit)
+ cpuhp_remove_state(dtl_worker_state);
+ free_dtl_buffers(time_limit);
+ reset_global_dtl_mask();
+- write_unlock(&dtl_access_lock);
++ up_write(&dtl_access_lock);
+ }
+
+ static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
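Switching dtl_access_lock from an rwlock_t to an rw_semaphore lets the holders sleep: per-CPU debugfs users take it shared while the vcpudispatch_stats worker takes it exclusive, and buffer allocation and CPU-hotplug setup both happen with it held. The trylock protocol, modelled here with POSIX rwlocks:

#include <pthread.h>

static pthread_rwlock_t dtl_access = PTHREAD_RWLOCK_INITIALIZER;

/* Per-CPU reader path: fail fast when the exclusive user owns the log. */
static int dtl_enable(void)
{
        if (pthread_rwlock_tryrdlock(&dtl_access) != 0)
                return -1;                /* ~ -EBUSY */
        /* ... allocate the buffer; blocking is fine under a sleepable lock ... */
        return 0;
}

static void dtl_disable(void)
{
        pthread_rwlock_unlock(&dtl_access);
}

/* Global-stats path: exclusive, excludes every per-CPU user at once. */
static int worker_enable(void)
{
        return pthread_rwlock_trywrlock(&dtl_access) == 0 ? 0 : -1;
}
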
+diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
+index 4a595493d28ae3..b1667ed05f9882 100644
+--- a/arch/powerpc/platforms/pseries/plpks.c
++++ b/arch/powerpc/platforms/pseries/plpks.c
+@@ -683,7 +683,7 @@ void __init plpks_early_init_devtree(void)
+ out:
+ fdt_nop_property(fdt, chosen_node, "ibm,plpks-pw");
+ // Since we've cleared the password, we must update the FDT checksum
+- early_init_dt_verify(fdt);
++ early_init_dt_verify(fdt, __pa(fdt));
+ }
+
+ static __init int pseries_plpks_init(void)
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index 45f9c1171a486a..dfa5cdddd3671b 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -8,6 +8,7 @@
+
+ #include <linux/bitmap.h>
+ #include <linux/jump_label.h>
++#include <linux/workqueue.h>
+ #include <asm/hwcap.h>
+ #include <asm/alternative-macros.h>
+ #include <asm/errno.h>
+@@ -60,6 +61,7 @@ void riscv_user_isa_enable(void);
+
+ #if defined(CONFIG_RISCV_MISALIGNED)
+ bool check_unaligned_access_emulated_all_cpus(void);
++void check_unaligned_access_emulated(struct work_struct *work __always_unused);
+ void unaligned_emulation_finish(void);
+ bool unaligned_ctl_available(void);
+ DECLARE_PER_CPU(long, misaligned_access_speed);
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index a2cde65b69e950..26c886db4fb3d1 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -227,7 +227,7 @@ static void __init init_resources(void)
+ static void __init parse_dtb(void)
+ {
+ /* Early scan of device tree from init memory */
+- if (early_init_dt_scan(dtb_early_va)) {
++ if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) {
+ const char *name = of_flat_dt_get_machine_name();
+
+ if (name) {
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 1b9867136b6100..9a80a12f6b48f2 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -524,11 +524,11 @@ int handle_misaligned_store(struct pt_regs *regs)
+ return 0;
+ }
+
+-static bool check_unaligned_access_emulated(int cpu)
++void check_unaligned_access_emulated(struct work_struct *work __always_unused)
+ {
++ int cpu = smp_processor_id();
+ long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu);
+ unsigned long tmp_var, tmp_val;
+- bool misaligned_emu_detected;
+
+ *mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
+
+@@ -536,19 +536,16 @@ static bool check_unaligned_access_emulated(int cpu)
+ " "REG_L" %[tmp], 1(%[ptr])\n"
+ : [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
+
+- misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED);
+ /*
+ * If unaligned_ctl is already set, this means that we detected that all
+ * CPUs use emulated misaligned accesses at boot time. If that changed
+ * when hotplugging a new CPU, this is something we don't handle.
+ */
+- if (unlikely(unaligned_ctl && !misaligned_emu_detected)) {
++ if (unlikely(unaligned_ctl && (*mas_ptr != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED))) {
+ pr_crit("CPU misaligned accesses non homogeneous (expected all emulated)\n");
+ while (true)
+ cpu_relax();
+ }
+-
+- return misaligned_emu_detected;
+ }
+
+ bool check_unaligned_access_emulated_all_cpus(void)
+@@ -560,8 +557,11 @@ bool check_unaligned_access_emulated_all_cpus(void)
+ * accesses emulated since tasks requesting such control can run on any
+ * CPU.
+ */
++ schedule_on_each_cpu(check_unaligned_access_emulated);
++
+ for_each_online_cpu(cpu)
+- if (!check_unaligned_access_emulated(cpu))
++ if (per_cpu(misaligned_access_speed, cpu)
++ != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED)
+ return false;
+
+ unaligned_ctl = true;
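The probe has to run on the CPU being tested, hence schedule_on_each_cpu() in place of the old cross-CPU loop; the boot path then only inspects each CPU's recorded result. That final check reduces to something like:

#include <stdbool.h>

enum speed { SPEED_UNKNOWN, SPEED_EMULATED, SPEED_FAST };

#define NCPUS 8
static enum speed misaligned_speed[NCPUS];  /* filled by the per-CPU probe */

static bool all_cpus_emulated(void)
{
        for (int cpu = 0; cpu < NCPUS; cpu++)
                if (misaligned_speed[cpu] != SPEED_EMULATED)
                        return false;
        return true;   /* safe to offer unaligned-access control */
}
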
+diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
+index 160628a2116de4..f3508cc54f91ae 100644
+--- a/arch/riscv/kernel/unaligned_access_speed.c
++++ b/arch/riscv/kernel/unaligned_access_speed.c
+@@ -191,6 +191,7 @@ static int riscv_online_cpu(unsigned int cpu)
+ if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
+ goto exit;
+
++ check_unaligned_access_emulated(NULL);
+ buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+ if (!buf) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c
+index da6ff1bade0df5..f59d1c0c8c43a7 100644
+--- a/arch/riscv/kvm/aia_aplic.c
++++ b/arch/riscv/kvm/aia_aplic.c
+@@ -143,7 +143,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
+ if (sm == APLIC_SOURCECFG_SM_LEVEL_HIGH ||
+ sm == APLIC_SOURCECFG_SM_LEVEL_LOW) {
+ if (!pending)
+- goto skip_write_pending;
++ goto noskip_write_pending;
+ if ((irqd->state & APLIC_IRQ_STATE_INPUT) &&
+ sm == APLIC_SOURCECFG_SM_LEVEL_LOW)
+ goto skip_write_pending;
+@@ -152,6 +152,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
+ goto skip_write_pending;
+ }
+
++noskip_write_pending:
+ if (pending)
+ irqd->state |= APLIC_IRQ_STATE_PENDING;
+ else
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index 7de128be8db9bc..6e704ed86a83a9 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -486,19 +486,22 @@ void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu)
+ struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
+ const struct kvm_riscv_sbi_extension_entry *entry;
+ const struct kvm_vcpu_sbi_extension *ext;
+- int i;
++ int idx, i;
+
+ for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
+ entry = &sbi_ext[i];
+ ext = entry->ext_ptr;
++ idx = entry->ext_idx;
++
++ if (idx < 0 || idx >= ARRAY_SIZE(scontext->ext_status))
++ continue;
+
+ if (ext->probe && !ext->probe(vcpu)) {
+- scontext->ext_status[entry->ext_idx] =
+- KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE;
++ scontext->ext_status[idx] = KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE;
+ continue;
+ }
+
+- scontext->ext_status[entry->ext_idx] = ext->default_disabled ?
++ scontext->ext_status[idx] = ext->default_disabled ?
+ KVM_RISCV_SBI_EXT_STATUS_DISABLED :
+ KVM_RISCV_SBI_EXT_STATUS_ENABLED;
+ }
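The new idx bounds check guards the per-VCPU status array against table entries whose index falls outside it. The pattern in isolation:

#define NR_EXT 16
static int ext_status[NR_EXT];

static void set_ext_status(int idx, int status)
{
        /* Skip out-of-range entries instead of writing past the array. */
        if (idx < 0 || idx >= NR_EXT)
                return;
        ext_status[idx] = status;
}
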
+diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h
+index 65ebf86506cdea..48791a02cab1e6 100644
+--- a/arch/s390/include/asm/facility.h
++++ b/arch/s390/include/asm/facility.h
+@@ -67,7 +67,7 @@ static inline int test_facility(unsigned long nr)
+ return __test_facility(nr, &stfle_fac_list);
+ }
+
+-static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
++static inline unsigned long __stfle_asm(u64 *fac_list, int size)
+ {
+ unsigned long reg0 = size - 1;
+
+@@ -75,7 +75,7 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+ " lgr 0,%[reg0]\n"
+ " .insn s,0xb2b00000,%[list]\n" /* stfle */
+ " lgr %[reg0],0\n"
+- : [reg0] "+&d" (reg0), [list] "+Q" (*stfle_fac_list)
++ : [reg0] "+&d" (reg0), [list] "+Q" (*fac_list)
+ :
+ : "memory", "cc", "0");
+ return reg0;
+@@ -83,10 +83,10 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+
+ /**
+ * stfle - Store facility list extended
+- * @stfle_fac_list: array where facility list can be stored
++ * @fac_list: array where facility list can be stored
+ * @size: size of passed in array in double words
+ */
+-static inline void __stfle(u64 *stfle_fac_list, int size)
++static inline void __stfle(u64 *fac_list, int size)
+ {
+ unsigned long nr;
+ u32 stfl_fac_list;
+@@ -95,20 +95,20 @@ static inline void __stfle(u64 *stfle_fac_list, int size)
+ " stfl 0(0)\n"
+ : "=m" (get_lowcore()->stfl_fac_list));
+ stfl_fac_list = get_lowcore()->stfl_fac_list;
+- memcpy(stfle_fac_list, &stfl_fac_list, 4);
++ memcpy(fac_list, &stfl_fac_list, 4);
+ nr = 4; /* bytes stored by stfl */
+ if (stfl_fac_list & 0x01000000) {
+ /* More facility bits available with stfle */
+- nr = __stfle_asm(stfle_fac_list, size);
++ nr = __stfle_asm(fac_list, size);
+ nr = min_t(unsigned long, (nr + 1) * 8, size * 8);
+ }
+- memset((char *) stfle_fac_list + nr, 0, size * 8 - nr);
++ memset((char *)fac_list + nr, 0, size * 8 - nr);
+ }
+
+-static inline void stfle(u64 *stfle_fac_list, int size)
++static inline void stfle(u64 *fac_list, int size)
+ {
+ preempt_disable();
+- __stfle(stfle_fac_list, size);
++ __stfle(fac_list, size);
+ preempt_enable();
+ }
+
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 30820a649e6e7c..a60a291fbd58d7 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -96,7 +96,6 @@ struct zpci_bar_struct {
+ u8 size; /* order 2 exponent */
+ };
+
+-struct s390_domain;
+ struct kvm_zdev;
+
+ #define ZPCI_FUNCTIONS_PER_BUS 256
+@@ -181,9 +180,10 @@ struct zpci_dev {
+ struct dentry *debugfs_dev;
+
+ /* IOMMU and passthrough */
+- struct s390_domain *s390_domain; /* s390 IOMMU domain data */
++ struct iommu_domain *s390_domain; /* attached IOMMU domain */
+ struct kvm_zdev *kzdev;
+ struct mutex kzdev_lock;
++ spinlock_t dom_lock; /* protect s390_domain change */
+ };
+
+ static inline bool zdev_enabled(struct zpci_dev *zdev)
+diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
+index 06fbabe2f66c98..cb4cc0f59012f7 100644
+--- a/arch/s390/include/asm/set_memory.h
++++ b/arch/s390/include/asm/set_memory.h
+@@ -62,5 +62,6 @@ __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
+
+ int set_direct_map_invalid_noflush(struct page *page);
+ int set_direct_map_default_noflush(struct page *page);
++bool kernel_page_present(struct page *page);
+
+ #endif
+diff --git a/arch/s390/kernel/syscalls/Makefile b/arch/s390/kernel/syscalls/Makefile
+index 1bb78b9468e8a9..e85c14f9058b92 100644
+--- a/arch/s390/kernel/syscalls/Makefile
++++ b/arch/s390/kernel/syscalls/Makefile
+@@ -12,7 +12,7 @@ kapi-hdrs-y := $(kapi)/unistd_nr.h
+ uapi-hdrs-y := $(uapi)/unistd_32.h
+ uapi-hdrs-y += $(uapi)/unistd_64.h
+
+-targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
++targets += $(addprefix ../../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
+
+ PHONY += kapi uapi
+
+diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
+index 5f805ad42d4c3f..aec9eb16b6f7be 100644
+--- a/arch/s390/mm/pageattr.c
++++ b/arch/s390/mm/pageattr.c
+@@ -406,6 +406,21 @@ int set_direct_map_default_noflush(struct page *page)
+ return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF);
+ }
+
++bool kernel_page_present(struct page *page)
++{
++ unsigned long addr;
++ unsigned int cc;
++
++ addr = (unsigned long)page_address(page);
++ asm volatile(
++ " lra %[addr],0(%[addr])\n"
++ " ipm %[cc]\n"
++ : [cc] "=d" (cc), [addr] "+a" (addr)
++ :
++ : "cc");
++ return (cc >> 28) == 0;
++}
++
+ #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+
+ static void ipte_range(pte_t *pte, unsigned long address, int nr)
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index cff4838fad2166..827b4c7b4688cd 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -160,6 +160,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ u64 req = ZPCI_CREATE_REQ(zdev->fh, 0, ZPCI_MOD_FC_SET_MEASURE);
+ struct zpci_iommu_ctrs *ctrs;
+ struct zpci_fib fib = {0};
++ unsigned long flags;
+ u8 cc, status;
+
+ if (zdev->fmb || sizeof(*zdev->fmb) < zdev->fmb_length)
+@@ -171,6 +172,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ WARN_ON((u64) zdev->fmb & 0xf);
+
+ /* reset software counters */
++ spin_lock_irqsave(&zdev->dom_lock, flags);
+ ctrs = zpci_get_iommu_ctrs(zdev);
+ if (ctrs) {
+ atomic64_set(&ctrs->mapped_pages, 0);
+@@ -179,6 +181,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ atomic64_set(&ctrs->sync_map_rpcits, 0);
+ atomic64_set(&ctrs->sync_rpcits, 0);
+ }
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
+
+
+ fib.fmb_addr = virt_to_phys(zdev->fmb);
+@@ -915,10 +918,8 @@ void zpci_device_reserved(struct zpci_dev *zdev)
+ void zpci_release_device(struct kref *kref)
+ {
+ struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+- int ret;
+
+- if (zdev->has_hp_slot)
+- zpci_exit_slot(zdev);
++ WARN_ON(zdev->state != ZPCI_FN_STATE_RESERVED);
+
+ if (zdev->zbus->bus)
+ zpci_bus_remove_device(zdev, false);
+@@ -926,28 +927,14 @@ void zpci_release_device(struct kref *kref)
+ if (zdev_enabled(zdev))
+ zpci_disable_device(zdev);
+
+- switch (zdev->state) {
+- case ZPCI_FN_STATE_CONFIGURED:
+- ret = sclp_pci_deconfigure(zdev->fid);
+- zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret);
+- fallthrough;
+- case ZPCI_FN_STATE_STANDBY:
+- if (zdev->has_hp_slot)
+- zpci_exit_slot(zdev);
+- spin_lock(&zpci_list_lock);
+- list_del(&zdev->entry);
+- spin_unlock(&zpci_list_lock);
+- zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
+- fallthrough;
+- case ZPCI_FN_STATE_RESERVED:
+- if (zdev->has_resources)
+- zpci_cleanup_bus_resources(zdev);
+- zpci_bus_device_unregister(zdev);
+- zpci_destroy_iommu(zdev);
+- fallthrough;
+- default:
+- break;
+- }
++ if (zdev->has_hp_slot)
++ zpci_exit_slot(zdev);
++
++ if (zdev->has_resources)
++ zpci_cleanup_bus_resources(zdev);
++
++ zpci_bus_device_unregister(zdev);
++ zpci_destroy_iommu(zdev);
+ zpci_dbg(3, "rem fid:%x\n", zdev->fid);
+ kfree_rcu(zdev, rcu);
+ }
+diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
+index 2cb5043a997d53..38014206c16b96 100644
+--- a/arch/s390/pci/pci_debug.c
++++ b/arch/s390/pci/pci_debug.c
+@@ -71,17 +71,23 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length,
+
+ static void pci_sw_counter_show(struct seq_file *m)
+ {
+- struct zpci_iommu_ctrs *ctrs = zpci_get_iommu_ctrs(m->private);
++ struct zpci_dev *zdev = m->private;
++ struct zpci_iommu_ctrs *ctrs;
+ atomic64_t *counter;
++ unsigned long flags;
+ int i;
+
++ spin_lock_irqsave(&zdev->dom_lock, flags);
++ ctrs = zpci_get_iommu_ctrs(m->private);
+ if (!ctrs)
+- return;
++ goto unlock;
+
+ counter = &ctrs->mapped_pages;
+ for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
+ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+ atomic64_read(counter));
++unlock:
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
+ }
+
+ static int pci_perf_show(struct seq_file *m, void *v)
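
The two hunks above serialize every access to the per-device IOMMU counters behind the new zdev->dom_lock, so the attached domain (and the counters that live inside it) cannot be swapped out mid-read. A minimal userspace sketch of the same pattern, with hypothetical names and a pthread mutex standing in for spin_lock_irqsave():

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct ctrs { unsigned long mapped_pages; };
    struct domain { struct ctrs ctrs; };

    struct dev {
        pthread_mutex_t dom_lock;    /* protects ->domain changes */
        struct domain *domain;       /* swapped on attach/detach */
    };

    /* Reader: the domain (and its counters) cannot be freed or
     * replaced while dom_lock is held. */
    static void show_counters(struct dev *d)
    {
        pthread_mutex_lock(&d->dom_lock);
        if (d->domain)
            printf("mapped_pages: %lu\n", d->domain->ctrs.mapped_pages);
        pthread_mutex_unlock(&d->dom_lock);
    }

    /* Writer: swap the domain under the same lock. */
    static void attach_domain(struct dev *d, struct domain *nd)
    {
        pthread_mutex_lock(&d->dom_lock);
        struct domain *old = d->domain;
        d->domain = nd;
        pthread_mutex_unlock(&d->dom_lock);
        free(old);
    }

    int main(void)
    {
        struct dev d = { PTHREAD_MUTEX_INITIALIZER, NULL };

        attach_domain(&d, calloc(1, sizeof(struct domain)));
        d.domain->ctrs.mapped_pages = 42;
        show_counters(&d);
        attach_domain(&d, NULL);    /* reader now sees no domain */
        show_counters(&d);
        return 0;
    }
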
+diff --git a/arch/sh/kernel/cpu/proc.c b/arch/sh/kernel/cpu/proc.c
+index a306bcd6b34130..5f6d0e827baeb0 100644
+--- a/arch/sh/kernel/cpu/proc.c
++++ b/arch/sh/kernel/cpu/proc.c
+@@ -132,7 +132,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+
+ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+- return *pos < NR_CPUS ? cpu_data + *pos : NULL;
++ return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
+ }
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index 620e5cf8ae1e74..f2b6f16a46b85d 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -255,7 +255,7 @@ void __ref sh_fdt_init(phys_addr_t dt_phys)
+ dt_virt = phys_to_virt(dt_phys);
+ #endif
+
+- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
++ if (!dt_virt || !early_init_dt_scan(dt_virt, __pa(dt_virt))) {
+ pr_crit("Error: invalid device tree blob"
+ " at physical address %p\n", (void *)dt_phys);
+
+diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
+index 77c4afb8ab9071..75d04fb4994a06 100644
+--- a/arch/um/drivers/net_kern.c
++++ b/arch/um/drivers/net_kern.c
+@@ -336,7 +336,7 @@ static struct platform_driver uml_net_driver = {
+
+ static void net_device_release(struct device *dev)
+ {
+- struct uml_net *device = dev_get_drvdata(dev);
++ struct uml_net *device = container_of(dev, struct uml_net, pdev.dev);
+ struct net_device *netdev = device->dev;
+ struct uml_net_private *lp = netdev_priv(netdev);
+
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index 7f28ec1929dc0b..2bfb17373244bb 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -779,7 +779,7 @@ static int ubd_open_dev(struct ubd *ubd_dev)
+
+ static void ubd_device_release(struct device *dev)
+ {
+- struct ubd *ubd_dev = dev_get_drvdata(dev);
++ struct ubd *ubd_dev = container_of(dev, struct ubd, pdev.dev);
+
+ blk_mq_free_tag_set(&ubd_dev->tag_set);
+ *ubd_dev = ((struct ubd) DEFAULT_UBD);
+@@ -898,6 +898,8 @@ static int ubd_add(int n, char **error_out)
+ if (err)
+ goto out_cleanup_disk;
+
++ ubd_dev->disk = disk;
++
+ return 0;
+
+ out_cleanup_disk:
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index 2d473282ab515e..907e0a9d41a536 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -821,7 +821,8 @@ static struct platform_driver uml_net_driver = {
+
+ static void vector_device_release(struct device *dev)
+ {
+- struct vector_device *device = dev_get_drvdata(dev);
++ struct vector_device *device =
++ container_of(dev, struct vector_device, pdev.dev);
+ struct net_device *netdev = device->dev;
+
+ list_del(&device->list);
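
All three UML release callbacks above switch from dev_get_drvdata() to container_of() because the driver data pointer may already be cleared by the time ->release runs, whereas the embedded pdev.dev member always sits at a fixed offset inside its parent structure. A self-contained sketch of what container_of() does, using simplified hypothetical structs:

    #include <stddef.h>
    #include <stdio.h>

    /* Simplified version of the kernel macro: step back from a member
     * pointer to the structure that embeds it. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct device { int id; };
    struct uml_net { const char *name; struct device dev; };

    int main(void)
    {
        struct uml_net net = { "eth0", { 7 } };
        struct device *d = &net.dev;     /* all a callback receives */

        struct uml_net *owner = container_of(d, struct uml_net, dev);
        printf("%s (id %d)\n", owner->name, owner->dev.id);  /* eth0 (id 7) */
        return 0;
    }

Unlike drvdata, this recovery needs no cooperation from anyone who might have reset the pointer earlier in teardown.
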
+diff --git a/arch/um/kernel/dtb.c b/arch/um/kernel/dtb.c
+index 4954188a6a0908..8d78ced9e08f6d 100644
+--- a/arch/um/kernel/dtb.c
++++ b/arch/um/kernel/dtb.c
+@@ -17,7 +17,7 @@ void uml_dtb_init(void)
+
+ area = uml_load_file(dtb, &size);
+ if (area) {
+- if (!early_init_dt_scan(area)) {
++ if (!early_init_dt_scan(area, __pa(area))) {
+ pr_err("invalid DTB %s\n", dtb);
+ memblock_free(area, size);
+ return;
+diff --git a/arch/um/kernel/physmem.c b/arch/um/kernel/physmem.c
+index fb2adfb499452b..ee693e0b2b58bf 100644
+--- a/arch/um/kernel/physmem.c
++++ b/arch/um/kernel/physmem.c
+@@ -81,10 +81,10 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
+ unsigned long len, unsigned long long highmem)
+ {
+ unsigned long reserve = reserve_end - start;
+- long map_size = len - reserve;
++ unsigned long map_size = len - reserve;
+ int err;
+
+- if(map_size <= 0) {
++ if (len <= reserve) {
+ os_warn("Too few physical memory! Needed=%lu, given=%lu\n",
+ reserve, len);
+ exit(1);
+@@ -95,7 +95,7 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
+ err = os_map_memory((void *) reserve_end, physmem_fd, reserve,
+ map_size, 1, 1, 1);
+ if (err < 0) {
+- os_warn("setup_physmem - mapping %ld bytes of memory at 0x%p "
++ os_warn("setup_physmem - mapping %lu bytes of memory at 0x%p "
+ "failed - errno = %d\n", map_size,
+ (void *) reserve_end, err);
+ exit(1);
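
The physmem.c fix works because once map_size is unsigned, the old `map_size <= 0` guard can only catch the exact-zero case: `len - reserve` wraps to a huge positive value whenever len < reserve, so the check has to compare the inputs directly. A short demonstration of the pitfall:

    #include <stdio.h>

    int main(void)
    {
        unsigned long len = 100, reserve = 200;
        unsigned long map_size = len - reserve;  /* wraps to a huge value */

        printf("map_size = %lu\n", map_size);
        printf("map_size <= 0: %s\n", map_size <= 0 ? "true" : "false"); /* false! */
        printf("len <= reserve: %s\n", len <= reserve ? "true" : "false"); /* true */
        return 0;
    }
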
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index f36b63f53babd4..d3786577c6fbf3 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -292,6 +292,6 @@ int elf_core_copy_task_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
+ {
+ int cpu = current_thread_info()->cpu;
+
+- return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu);
++ return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu) == 0;
+ }
+
+diff --git a/arch/um/kernel/sysrq.c b/arch/um/kernel/sysrq.c
+index 746715379f12a8..7e897e44a03da2 100644
+--- a/arch/um/kernel/sysrq.c
++++ b/arch/um/kernel/sysrq.c
+@@ -53,5 +53,5 @@ void show_stack(struct task_struct *task, unsigned long *stack,
+ }
+
+ printk("%sCall Trace:\n", loglvl);
+- dump_trace(current, &stackops, (void *)loglvl);
++ dump_trace(task ?: current, &stackops, (void *)loglvl);
+ }
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 327c45c5013fea..2f85ed005c42f1 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -78,6 +78,32 @@ static inline void tdcall(u64 fn, struct tdx_module_args *args)
+ panic("TDCALL %lld failed (Buggy TDX module!)\n", fn);
+ }
+
++/* Read TD-scoped metadata */
++static inline u64 tdg_vm_rd(u64 field, u64 *value)
++{
++ struct tdx_module_args args = {
++ .rdx = field,
++ };
++ u64 ret;
++
++ ret = __tdcall_ret(TDG_VM_RD, &args);
++ *value = args.r8;
++
++ return ret;
++}
++
++/* Write TD-scoped metadata */
++static inline u64 tdg_vm_wr(u64 field, u64 value, u64 mask)
++{
++ struct tdx_module_args args = {
++ .rdx = field,
++ .r8 = value,
++ .r9 = mask,
++ };
++
++ return __tdcall(TDG_VM_WR, &args);
++}
++
+ /**
+ * tdx_mcall_get_report0() - Wrapper to get TDREPORT0 (a.k.a. TDREPORT
+ * subtype 0) using TDG.MR.REPORT TDCALL.
+@@ -168,7 +194,61 @@ static void __noreturn tdx_panic(const char *msg)
+ __tdx_hypercall(&args);
+ }
+
+-static void tdx_parse_tdinfo(u64 *cc_mask)
++/*
++ * The kernel cannot handle #VEs when accessing normal kernel memory. Ensure
++ * that no #VE will be delivered for accesses to TD-private memory.
++ *
++ * TDX 1.0 does not allow the guest to disable SEPT #VE on its own. The VMM
++ * controls if the guest will receive such #VE with TD attribute
++ * ATTR_SEPT_VE_DISABLE.
++ *
++ * Newer TDX modules allow the guest to control if it wants to receive SEPT
++ * violation #VEs.
++ *
++ * Check if the feature is available and disable SEPT #VE if possible.
++ *
++ * If the TD is allowed to disable/enable SEPT #VEs, the ATTR_SEPT_VE_DISABLE
++ * attribute is no longer reliable. It reflects the initial state of the
++ * control for the TD, but it will not be updated if someone (e.g. bootloader)
++ * changes it before the kernel starts. Kernel must check TDCS_TD_CTLS bit to
++ * determine if SEPT #VEs are enabled or disabled.
++ */
++static void disable_sept_ve(u64 td_attr)
++{
++ const char *msg = "TD misconfiguration: SEPT #VE has to be disabled";
++ bool debug = td_attr & ATTR_DEBUG;
++ u64 config, controls;
++
++ /* Is this TD allowed to disable SEPT #VE */
++ tdg_vm_rd(TDCS_CONFIG_FLAGS, &config);
++ if (!(config & TDCS_CONFIG_FLEXIBLE_PENDING_VE)) {
++ /* No SEPT #VE controls for the guest: check the attribute */
++ if (td_attr & ATTR_SEPT_VE_DISABLE)
++ return;
++
++ /* Relax SEPT_VE_DISABLE check for debug TD for backtraces */
++ if (debug)
++ pr_warn("%s\n", msg);
++ else
++ tdx_panic(msg);
++ return;
++ }
++
++ /* Check if SEPT #VE has been disabled before us */
++ tdg_vm_rd(TDCS_TD_CTLS, &controls);
++ if (controls & TD_CTLS_PENDING_VE_DISABLE)
++ return;
++
++ /* Keep #VEs enabled for splats in debugging environments */
++ if (debug)
++ return;
++
++ /* Disable SEPT #VEs */
++ tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE,
++ TD_CTLS_PENDING_VE_DISABLE);
++}
++
++static void tdx_setup(u64 *cc_mask)
+ {
+ struct tdx_module_args args = {};
+ unsigned int gpa_width;
+@@ -193,21 +273,12 @@ static void tdx_parse_tdinfo(u64 *cc_mask)
+ gpa_width = args.rcx & GENMASK(5, 0);
+ *cc_mask = BIT_ULL(gpa_width - 1);
+
+- /*
+- * The kernel can not handle #VE's when accessing normal kernel
+- * memory. Ensure that no #VE will be delivered for accesses to
+- * TD-private memory. Only VMM-shared memory (MMIO) will #VE.
+- */
+ td_attr = args.rdx;
+- if (!(td_attr & ATTR_SEPT_VE_DISABLE)) {
+- const char *msg = "TD misconfiguration: SEPT_VE_DISABLE attribute must be set.";
+
+- /* Relax SEPT_VE_DISABLE check for debug TD. */
+- if (td_attr & ATTR_DEBUG)
+- pr_warn("%s\n", msg);
+- else
+- tdx_panic(msg);
+- }
++ /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
++ tdg_vm_wr(TDCS_NOTIFY_ENABLES, 0, -1ULL);
++
++ disable_sept_ve(td_attr);
+ }
+
+ /*
+@@ -929,10 +1000,6 @@ static void tdx_kexec_finish(void)
+
+ void __init tdx_early_init(void)
+ {
+- struct tdx_module_args args = {
+- .rdx = TDCS_NOTIFY_ENABLES,
+- .r9 = -1ULL,
+- };
+ u64 cc_mask;
+ u32 eax, sig[3];
+
+@@ -947,11 +1014,11 @@ void __init tdx_early_init(void)
+ setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+
+ cc_vendor = CC_VENDOR_INTEL;
+- tdx_parse_tdinfo(&cc_mask);
+- cc_set_mask(cc_mask);
+
+- /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
+- tdcall(TDG_VM_WR, &args);
++ /* Configure the TD */
++ tdx_setup(&cc_mask);
++
++ cc_set_mask(cc_mask);
+
+ /*
+ * All bits above GPA width are reserved and kernel treats shared bit
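
The new tdg_vm_wr() wrapper above takes a value plus a mask, so the TDX module only updates the bits the guest selects -- effectively a remote read-modify-write. The same semantics modeled in plain C for orientation (field names taken from the hunks; the masked-write behavior is as described in the code, the helper below is illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    /* Masked write: only bits set in mask are taken from value. */
    static uint64_t masked_write(uint64_t old, uint64_t value, uint64_t mask)
    {
        return (old & ~mask) | (value & mask);
    }

    #define TD_CTLS_PENDING_VE_DISABLE (1ULL << 0)

    int main(void)
    {
        uint64_t td_ctls = 0;

        /* disable_sept_ve() sets bit 0 and leaves every other control
         * bit alone: */
        td_ctls = masked_write(td_ctls, TD_CTLS_PENDING_VE_DISABLE,
                               TD_CTLS_PENDING_VE_DISABLE);
        printf("TD_CTLS = %#llx\n", (unsigned long long)td_ctls);  /* 0x1 */

        /* A mask of -1ULL replaces the whole field, as the
         * TDCS_NOTIFY_ENABLES write in tdx_setup() does: */
        td_ctls = masked_write(td_ctls, 0, -1ULL);
        printf("TD_CTLS = %#llx\n", (unsigned long long)td_ctls);  /* 0x0 */
        return 0;
    }
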
+diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
+index ad7f4c89162568..2de859173940eb 100644
+--- a/arch/x86/crypto/aegis128-aesni-asm.S
++++ b/arch/x86/crypto/aegis128-aesni-asm.S
+@@ -21,7 +21,7 @@
+ #define T1 %xmm7
+
+ #define STATEP %rdi
+-#define LEN %rsi
++#define LEN %esi
+ #define SRC %rdx
+ #define DST %rcx
+
+@@ -76,32 +76,32 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ xor %r9d, %r9d
+ pxor MSG, MSG
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1, %r8
+ jz .Lld_partial_1
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1E, %r8
+ add SRC, %r8
+ mov (%r8), %r9b
+
+ .Lld_partial_1:
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x2, %r8
+ jz .Lld_partial_2
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1C, %r8
+ add SRC, %r8
+ shl $0x10, %r9
+ mov (%r8), %r9w
+
+ .Lld_partial_2:
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x4, %r8
+ jz .Lld_partial_4
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x18, %r8
+ add SRC, %r8
+ shl $32, %r9
+@@ -111,11 +111,11 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ .Lld_partial_4:
+ movq %r9, MSG
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x8, %r8
+ jz .Lld_partial_8
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x10, %r8
+ add SRC, %r8
+ pslldq $8, MSG
+@@ -139,7 +139,7 @@ SYM_FUNC_END(__load_partial)
+ * %r10
+ */
+ SYM_FUNC_START_LOCAL(__store_partial)
+- mov LEN, %r8
++ mov LEN, %r8d
+ mov DST, %r9
+
+ movq T0, %r10
+@@ -677,7 +677,7 @@ SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail)
+ call __store_partial
+
+ /* mask with byte count: */
+- movq LEN, T0
++ movd LEN, T0
+ punpcklbw T0, T0
+ punpcklbw T0, T0
+ punpcklbw T0, T0
+@@ -702,7 +702,8 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
+
+ /*
+ * void crypto_aegis128_aesni_final(void *state, void *tag_xor,
+- * u64 assoclen, u64 cryptlen);
++ * unsigned int assoclen,
++ * unsigned int cryptlen);
+ */
+ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ FRAME_BEGIN
+@@ -715,8 +716,8 @@ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ movdqu 0x40(STATEP), STATE4
+
+ /* prepare length block: */
+- movq %rdx, MSG
+- movq %rcx, T0
++ movd %edx, MSG
++ movd %ecx, T0
+ pslldq $8, T0
+ pxor T0, MSG
+ psllq $3, MSG /* multiply by 8 (to get bit count) */
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 2959970dd10eb1..c5703c4afa6c87 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -828,11 +828,13 @@ static void pt_buffer_advance(struct pt_buffer *buf)
+ buf->cur_idx++;
+
+ if (buf->cur_idx == buf->cur->last) {
+- if (buf->cur == buf->last)
++ if (buf->cur == buf->last) {
+ buf->cur = buf->first;
+- else
++ buf->wrapped = true;
++ } else {
+ buf->cur = list_entry(buf->cur->list.next, struct topa,
+ list);
++ }
+ buf->cur_idx = 0;
+ }
+ }
+@@ -846,8 +848,11 @@ static void pt_buffer_advance(struct pt_buffer *buf)
+ static void pt_update_head(struct pt *pt)
+ {
+ struct pt_buffer *buf = perf_get_aux(&pt->handle);
++ bool wrapped = buf->wrapped;
+ u64 topa_idx, base, old;
+
++ buf->wrapped = false;
++
+ if (buf->single) {
+ local_set(&buf->data_size, buf->output_off);
+ return;
+@@ -865,7 +870,7 @@ static void pt_update_head(struct pt *pt)
+ } else {
+ old = (local64_xchg(&buf->head, base) &
+ ((buf->nr_pages << PAGE_SHIFT) - 1));
+- if (base < old)
++ if (base < old || (base == old && wrapped))
+ base += buf->nr_pages << PAGE_SHIFT;
+
+ local_add(base - old, &buf->data_size);
+diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
+index f5e46c04c145d0..a1b6c04b7f6848 100644
+--- a/arch/x86/events/intel/pt.h
++++ b/arch/x86/events/intel/pt.h
+@@ -65,6 +65,7 @@ struct pt_pmu {
+ * @head: logical write offset inside the buffer
+ * @snapshot: if this is for a snapshot/overwrite counter
+ * @single: use Single Range Output instead of ToPA
++ * @wrapped: buffer advance wrapped back to the first topa table
+ * @stop_pos: STOP topa entry index
+ * @intr_pos: INT topa entry index
+ * @stop_te: STOP topa entry pointer
+@@ -82,6 +83,7 @@ struct pt_buffer {
+ local64_t head;
+ bool snapshot;
+ bool single;
++ bool wrapped;
+ long stop_pos, intr_pos;
+ struct topa_entry *stop_te, *intr_te;
+ void **data_pages;
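
The pt.c fix above addresses a classic ring-buffer ambiguity: head positions are compared modulo the buffer size, so after exactly one full lap `base == old` is indistinguishable from "no progress" unless a wrap flag breaks the tie. A compact model of the corrected accounting:

    #include <stdbool.h>
    #include <stdio.h>

    #define BUF_SIZE 4096UL

    /* Bytes produced between two masked offsets; 'wrapped' breaks the
     * tie when the writer came all the way around the buffer. */
    static unsigned long bytes_added(unsigned long old, unsigned long base,
                                     bool wrapped)
    {
        if (base < old || (base == old && wrapped))
            base += BUF_SIZE;
        return base - old;
    }

    int main(void)
    {
        printf("%lu\n", bytes_added(100, 300, false)); /* 200 */
        printf("%lu\n", bytes_added(300, 100, false)); /* 3896: past the end */
        printf("%lu\n", bytes_added(100, 100, false)); /* 0: nothing written */
        printf("%lu\n", bytes_added(100, 100, true));  /* 4096: one full lap */
        return 0;
    }

Without the flag, the fourth case above would be accounted as zero and a whole buffer's worth of trace data would be lost.
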
+diff --git a/arch/x86/include/asm/amd_nb.h b/arch/x86/include/asm/amd_nb.h
+index 6f3b6aef47ba99..d0caac26533f22 100644
+--- a/arch/x86/include/asm/amd_nb.h
++++ b/arch/x86/include/asm/amd_nb.h
+@@ -116,7 +116,10 @@ static inline bool amd_gart_present(void)
+
+ #define amd_nb_num(x) 0
+ #define amd_nb_has_feature(x) false
+-#define node_to_amd_nb(x) NULL
++static inline struct amd_northbridge *node_to_amd_nb(int node)
++{
++ return NULL;
++}
+ #define amd_gart_present(x) false
+
+ #endif
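
Replacing the bare `node_to_amd_nb()` macro with a static inline presumably keeps the !CONFIG stub type-checked: a macro expanding to NULL silently discards its argument and defeats prototype checking, while the inline stub still validates argument and return types. A small illustration (stub names mirror the hunk, the usage is hypothetical):

    #include <stdio.h>

    struct amd_northbridge { int dummy; };

    /* Macro stub: the argument vanishes, no type checking at all. */
    #define node_to_amd_nb_macro(x) NULL

    /* Inline stub: same behavior, but callers are checked like real code. */
    static inline struct amd_northbridge *node_to_amd_nb(int node)
    {
        (void)node;
        return NULL;
    }

    int main(void)
    {
        /* node_to_amd_nb_macro("oops") compiles silently; the inline
         * version draws a compiler diagnostic when handed a pointer
         * where an int node id belongs. */
        if (!node_to_amd_nb(0))
            puts("no northbridge on this node");
        return 0;
    }
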
+diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
+index 8db2ec4d6cdac7..18818f6859f05c 100644
+--- a/arch/x86/include/asm/atomic64_32.h
++++ b/arch/x86/include/asm/atomic64_32.h
+@@ -51,7 +51,8 @@ static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v)
+ #ifdef CONFIG_X86_CMPXCHG64
+ #define __alternative_atomic64(f, g, out, in...) \
+ asm volatile("call %c[func]" \
+- : out : [func] "i" (atomic64_##g##_cx8), ## in)
++ : ALT_OUTPUT_SP(out) \
++ : [func] "i" (atomic64_##g##_cx8), ## in)
+
+ #define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8)
+ #else
+diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
+index 62cef2113ca749..fd1282a783ddbf 100644
+--- a/arch/x86/include/asm/cmpxchg_32.h
++++ b/arch/x86/include/asm/cmpxchg_32.h
+@@ -94,7 +94,7 @@ static __always_inline bool __try_cmpxchg64_local(volatile u64 *ptr, u64 *oldp,
+ asm volatile(ALTERNATIVE(_lock_loc \
+ "call cmpxchg8b_emu", \
+ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \
+- : "+a" (o.low), "+d" (o.high) \
++ : ALT_OUTPUT_SP("+a" (o.low), "+d" (o.high)) \
+ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \
+ : "memory"); \
+ \
+@@ -123,8 +123,8 @@ static __always_inline u64 arch_cmpxchg64_local(volatile u64 *ptr, u64 old, u64
+ "call cmpxchg8b_emu", \
+ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \
+ CC_SET(e) \
+- : CC_OUT(e) (ret), \
+- "+a" (o.low), "+d" (o.high) \
++ : ALT_OUTPUT_SP(CC_OUT(e) (ret), \
++ "+a" (o.low), "+d" (o.high)) \
+ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \
+ : "memory"); \
+ \
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index b4bcd5108079f0..ab25d289e89550 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -26,6 +26,7 @@
+ #include <linux/irqbypass.h>
+ #include <linux/hyperv.h>
+ #include <linux/kfifo.h>
++#include <linux/sched/vhost_task.h>
+
+ #include <asm/apic.h>
+ #include <asm/pvclock-abi.h>
+@@ -1445,7 +1446,8 @@ struct kvm_arch {
+ bool sgx_provisioning_allowed;
+
+ struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
+- struct task_struct *nx_huge_page_recovery_thread;
++ struct vhost_task *nx_huge_page_recovery_thread;
++ u64 nx_huge_page_last;
+
+ #ifdef CONFIG_X86_64
+ /* The number of TDP MMU pages across all roots. */
+diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
+index fdfd41511b0211..fecb2a6e864be1 100644
+--- a/arch/x86/include/asm/shared/tdx.h
++++ b/arch/x86/include/asm/shared/tdx.h
+@@ -16,11 +16,20 @@
+ #define TDG_VP_VEINFO_GET 3
+ #define TDG_MR_REPORT 4
+ #define TDG_MEM_PAGE_ACCEPT 6
++#define TDG_VM_RD 7
+ #define TDG_VM_WR 8
+
+-/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */
++/* TDX TD-Scope Metadata. To be used by TDG.VM.WR and TDG.VM.RD */
++#define TDCS_CONFIG_FLAGS 0x1110000300000016
++#define TDCS_TD_CTLS 0x1110000300000017
+ #define TDCS_NOTIFY_ENABLES 0x9100000000000010
+
++/* TDCS_CONFIG_FLAGS bits */
++#define TDCS_CONFIG_FLEXIBLE_PENDING_VE BIT_ULL(1)
++
++/* TDCS_TD_CTLS bits */
++#define TD_CTLS_PENDING_VE_DISABLE BIT_ULL(0)
++
+ /* TDX hypercall Leaf IDs */
+ #define TDVMCALL_MAP_GPA 0x10001
+ #define TDVMCALL_GET_QUOTE 0x10002
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 2b8ca66793fb03..5c35f3e6ae7476 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -798,6 +798,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static const struct x86_cpu_desc erratum_1386_microcode[] = {
+ AMD_CPU_DESC(0x17, 0x1, 0x2, 0x0800126e),
+ AMD_CPU_DESC(0x17, 0x31, 0x0, 0x08301052),
++ {},
+ };
+
+ static void fix_erratum_1386(struct cpuinfo_x86 *c)
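
The one-line `{},` addition above matters because x86_cpu_desc tables are walked until an all-zero sentinel entry; without the terminator the matcher reads past the end of the array. A minimal sentinel-terminated lookup with a simplified hypothetical struct:

    #include <stdio.h>

    struct cpu_desc { int family, model; unsigned rev; };

    /* The table must end with a zeroed sentinel -- the walk below has
     * no other bound. */
    static const struct cpu_desc erratum_table[] = {
        { 0x17, 0x01, 0x0800126e },
        { 0x17, 0x31, 0x08301052 },
        {},                        /* the terminator the patch adds */
    };

    static const struct cpu_desc *match(int family, int model)
    {
        for (const struct cpu_desc *d = erratum_table; d->family; d++)
            if (d->family == family && d->model == model)
                return d;
        return NULL;
    }

    int main(void)
    {
        const struct cpu_desc *d = match(0x17, 0x31);
        if (d)
            printf("min microcode %#x\n", d->rev);
        return 0;
    }
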
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index c472282ab40fd2..fd89f1c65c947c 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2374,12 +2374,12 @@ void __init arch_cpu_finalize_init(void)
+ alternative_instructions();
+
+ if (IS_ENABLED(CONFIG_X86_64)) {
+- unsigned long USER_PTR_MAX = TASK_SIZE_MAX-1;
++ unsigned long USER_PTR_MAX = TASK_SIZE_MAX;
+
+ /*
+ * Enable this when LAM is gated on LASS support
+ if (cpu_feature_enabled(X86_FEATURE_LAM))
+- USER_PTR_MAX = (1ul << 63) - PAGE_SIZE - 1;
++ USER_PTR_MAX = (1ul << 63) - PAGE_SIZE;
+ */
+ runtime_const_init(ptr, USER_PTR_MAX);
+
+diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
+index 64280879c68c02..59d23cdf4ed0fa 100644
+--- a/arch/x86/kernel/devicetree.c
++++ b/arch/x86/kernel/devicetree.c
+@@ -305,7 +305,7 @@ void __init x86_flattree_get_config(void)
+ map_len = size;
+ }
+
+- early_init_dt_verify(dt);
++ early_init_dt_verify(dt, __pa(dt));
+ }
+
+ unflatten_and_copy_device_tree();
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index d00c28aaa5be45..d4705a348a8045 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -723,7 +723,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ state->sp = task->thread.sp + sizeof(*frame);
+ state->bp = READ_ONCE_NOCHECK(frame->bp);
+ state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+- state->signal = (void *)state->ip == ret_from_fork;
++ state->signal = (void *)state->ip == ret_from_fork_asm;
+ }
+
+ if (get_stack_info((unsigned long *)state->sp, state->task,
+diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
+index 730c2f34d34796..2a0ce65ca9effc 100644
+--- a/arch/x86/kvm/Kconfig
++++ b/arch/x86/kvm/Kconfig
+@@ -29,6 +29,7 @@ config KVM
+ select HAVE_KVM_IRQ_BYPASS
+ select HAVE_KVM_IRQ_ROUTING
+ select HAVE_KVM_READONLY_MEM
++ select VHOST_TASK
+ select KVM_ASYNC_PF
+ select USER_RETURN_NOTIFIER
+ select KVM_MMIO
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 7813d28b082f2f..2da1ec9508a58e 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7160,7 +7160,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ kvm_mmu_zap_all_fast(kvm);
+ mutex_unlock(&kvm->slots_lock);
+
+- wake_up_process(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+ }
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7306,7 +7306,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
+ mutex_lock(&kvm_lock);
+
+ list_for_each_entry(kvm, &vm_list, vm_list)
+- wake_up_process(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7409,62 +7409,56 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
+ srcu_read_unlock(&kvm->srcu, rcu_idx);
+ }
+
+-static long get_nx_huge_page_recovery_timeout(u64 start_time)
++static void kvm_nx_huge_page_recovery_worker_kill(void *data)
+ {
+- bool enabled;
+- uint period;
+-
+- enabled = calc_nx_huge_pages_recovery_period(&period);
+-
+- return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64()
+- : MAX_SCHEDULE_TIMEOUT;
+ }
+
+-static int kvm_nx_huge_page_recovery_worker(struct kvm *kvm, uintptr_t data)
++static bool kvm_nx_huge_page_recovery_worker(void *data)
+ {
+- u64 start_time;
++ struct kvm *kvm = data;
++ bool enabled;
++ uint period;
+ long remaining_time;
+
+- while (true) {
+- start_time = get_jiffies_64();
+- remaining_time = get_nx_huge_page_recovery_timeout(start_time);
+-
+- set_current_state(TASK_INTERRUPTIBLE);
+- while (!kthread_should_stop() && remaining_time > 0) {
+- schedule_timeout(remaining_time);
+- remaining_time = get_nx_huge_page_recovery_timeout(start_time);
+- set_current_state(TASK_INTERRUPTIBLE);
+- }
+-
+- set_current_state(TASK_RUNNING);
+-
+- if (kthread_should_stop())
+- return 0;
++ enabled = calc_nx_huge_pages_recovery_period(&period);
++ if (!enabled)
++ return false;
+
+- kvm_recover_nx_huge_pages(kvm);
++ remaining_time = kvm->arch.nx_huge_page_last + msecs_to_jiffies(period)
++ - get_jiffies_64();
++ if (remaining_time > 0) {
++ schedule_timeout(remaining_time);
++ /* check for signals and come back */
++ return true;
+ }
++
++ __set_current_state(TASK_RUNNING);
++ kvm_recover_nx_huge_pages(kvm);
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ return true;
+ }
+
+ int kvm_mmu_post_init_vm(struct kvm *kvm)
+ {
+- int err;
+-
+ if (nx_hugepage_mitigation_hard_disabled)
+ return 0;
+
+- err = kvm_vm_create_worker_thread(kvm, kvm_nx_huge_page_recovery_worker, 0,
+- "kvm-nx-lpage-recovery",
+- &kvm->arch.nx_huge_page_recovery_thread);
+- if (!err)
+- kthread_unpark(kvm->arch.nx_huge_page_recovery_thread);
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ kvm->arch.nx_huge_page_recovery_thread = vhost_task_create(
++ kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill,
++ kvm, "kvm-nx-lpage-recovery");
+
+- return err;
++ if (!kvm->arch.nx_huge_page_recovery_thread)
++ return -ENOMEM;
++
++ vhost_task_start(kvm->arch.nx_huge_page_recovery_thread);
++ return 0;
+ }
+
+ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
+ {
+ if (kvm->arch.nx_huge_page_recovery_thread)
+- kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_stop(kvm->arch.nx_huge_page_recovery_thread);
+ }
+
+ #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
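
The mmu.c conversion above replaces an open-coded kthread loop with a vhost_task whose work function returns a bool: true means "call me again", false means "nothing to do until explicitly woken". Modeling that contract in userspace (names hypothetical, the loop is a simplification of what the vhost task runner does):

    #include <stdbool.h>
    #include <stdio.h>

    /* vhost_task-style contract: keep invoking fn(data) while it
     * returns true; false means sleep until the task is woken again. */
    static void run_task(bool (*fn)(void *), void *data)
    {
        while (fn(data))
            ;    /* in the kernel, fn() itself may sleep in here */
    }

    static bool recovery_worker(void *data)
    {
        int *laps = data;

        if (*laps == 0)
            return false;    /* disabled: stop until woken */
        printf("recovery pass, %d left\n", (*laps)--);
        return true;         /* re-arm: come back after the period */
    }

    int main(void)
    {
        int laps = 3;

        run_task(recovery_worker, &laps);
        return 0;
    }

Pushing the loop into the runner is what lets the same callback also handle the SIGKILL path via the separate kill hook seen in the hunk.
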
+diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
+index 8f7eb3ad88fcb9..5521608077ec09 100644
+--- a/arch/x86/kvm/mmu/spte.c
++++ b/arch/x86/kvm/mmu/spte.c
+@@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
+
+ /*
+- * Optimization: for pte sync, if spte was writable the hash
+- * lookup is unnecessary (and expensive). Write protection
+- * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
+- * Same reasoning can be applied to dirty page accounting.
++ * When overwriting an existing leaf SPTE, and the old SPTE was
++ * writable, skip trying to unsync shadow pages as any relevant
++ * shadow pages must already be unsync, i.e. the hash lookup is
++ * unnecessary (and expensive).
++ *
++ * The same reasoning applies to dirty page/folio accounting;
++ * KVM will mark the folio dirty using the old SPTE, thus
++ * there's no need to immediately mark the new SPTE as dirty.
++ *
++ * Note, both cases rely on KVM not changing PFNs without first
++ * zapping the old SPTE, which is guaranteed by both the shadow
++ * MMU and the TDP MMU.
+ */
+- if (is_writable_pte(old_spte))
++ if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
+ goto out;
+
+ /*
+diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
+index f7235ef87bc32b..acb39752e7ca32 100644
+--- a/arch/x86/platform/pvh/head.S
++++ b/arch/x86/platform/pvh/head.S
+@@ -101,7 +101,27 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
+ xor %edx, %edx
+ wrmsr
+
+- call xen_prepare_pvh
++ /*
++ * Calculate load offset and store in phys_base. __pa() needs
++ * phys_base set to calculate the hypercall page in xen_pvh_init().
++ */
++ movq %rbp, %rbx
++ subq $_pa(pvh_start_xen), %rbx
++ movq %rbx, phys_base(%rip)
++
++ /* Call xen_prepare_pvh() via the kernel virtual mapping */
++ leaq xen_prepare_pvh(%rip), %rax
++ subq phys_base(%rip), %rax
++ addq $__START_KERNEL_map, %rax
++ ANNOTATE_RETPOLINE_SAFE
++ call *%rax
++
++ /*
++ * Clear phys_base. __startup_64 will *add* to its value,
++ * so reset to 0.
++ */
++ xor %rbx, %rbx
++ movq %rbx, phys_base(%rip)
+
+ /* startup_64 expects boot_params in %rsi. */
+ mov $_pa(pvh_bootparams), %rsi
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index bdec4a773af098..e51f2060e83089 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -216,7 +216,7 @@ static int __init xtensa_dt_io_area(unsigned long node, const char *uname,
+
+ void __init early_init_devtree(void *params)
+ {
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ of_scan_flat_dt(xtensa_dt_io_area, NULL);
+
+ if (!command_line[0])
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 1cc40a857fb858..2d94a4bb9efa5a 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -582,23 +582,31 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
+ #define BFQ_LIMIT_INLINE_DEPTH 16
+
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
++static bool bfqq_request_over_limit(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic, blk_opf_t opf,
++ unsigned int act_idx, int limit)
+ {
+- struct bfq_data *bfqd = bfqq->bfqd;
+- struct bfq_entity *entity = &bfqq->entity;
+ struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
+ struct bfq_entity **entities = inline_entities;
+- int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+- int class_idx = bfqq->ioprio_class - 1;
++ int alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+ struct bfq_sched_data *sched_data;
++ struct bfq_entity *entity;
++ struct bfq_queue *bfqq;
+ unsigned long wsum;
+ bool ret = false;
+-
+- if (!entity->on_st_or_in_serv)
+- return false;
++ int depth;
++ int level;
+
+ retry:
+ spin_lock_irq(&bfqd->lock);
++ bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
++ if (!bfqq)
++ goto out;
++
++ entity = &bfqq->entity;
++ if (!entity->on_st_or_in_serv)
++ goto out;
++
+ /* +1 for bfqq entity, root cgroup not included */
+ depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
+ if (depth > alloc_depth) {
+@@ -643,7 +651,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ * class.
+ */
+ wsum = 0;
+- for (i = 0; i <= class_idx; i++) {
++ for (i = 0; i <= bfqq->ioprio_class - 1; i++) {
+ wsum = wsum * IOPRIO_BE_NR +
+ sched_data->service_tree[i].wsum;
+ }
+@@ -666,7 +674,9 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ return ret;
+ }
+ #else
+-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
++static bool bfqq_request_over_limit(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic, blk_opf_t opf,
++ unsigned int act_idx, int limit)
+ {
+ return false;
+ }
+@@ -704,8 +714,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ }
+
+ for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
+- struct bfq_queue *bfqq =
+- bic_to_bfqq(bic, op_is_sync(opf), act_idx);
++ /* Fast path to check if bfqq is already allocated. */
++ if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx))
++ continue;
+
+ /*
+ * Does queue (or any parent entity) exceed number of
+@@ -713,7 +724,7 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ * limit depth so that it cannot consume more
+ * available requests and thus starve other entities.
+ */
+- if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
++ if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
+ depth = 1;
+ break;
+ }
+diff --git a/block/blk-core.c b/block/blk-core.c
+index bc5e8c5eaac9ff..4f791a3114a12c 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -261,6 +261,8 @@ static void blk_free_queue(struct request_queue *q)
+ blk_mq_release(q);
+
+ ida_free(&blk_queue_ida, q->id);
++ lockdep_unregister_key(&q->io_lock_cls_key);
++ lockdep_unregister_key(&q->q_lock_cls_key);
+ call_rcu(&q->rcu_head, blk_free_queue_rcu);
+ }
+
+@@ -278,18 +280,20 @@ void blk_put_queue(struct request_queue *q)
+ }
+ EXPORT_SYMBOL(blk_put_queue);
+
+-void blk_queue_start_drain(struct request_queue *q)
++bool blk_queue_start_drain(struct request_queue *q)
+ {
+ /*
+ * When queue DYING flag is set, we need to block new req
+ * entering queue, so we call blk_freeze_queue_start() to
+ * prevent I/O from crossing blk_queue_enter().
+ */
+- blk_freeze_queue_start(q);
++ bool freeze = __blk_freeze_queue_start(q, current);
+ if (queue_is_mq(q))
+ blk_mq_wake_waiters(q);
+ /* Make blk_queue_enter() reexamine the DYING flag. */
+ wake_up_all(&q->mq_freeze_wq);
++
++ return freeze;
+ }
+
+ /**
+@@ -321,6 +325,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ return -ENODEV;
+ }
+
++ rwsem_acquire_read(&q->q_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->q_lockdep_map, _RET_IP_);
+ return 0;
+ }
+
+@@ -352,6 +358,8 @@ int __bio_queue_enter(struct request_queue *q, struct bio *bio)
+ goto dead;
+ }
+
++ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
+ return 0;
+ dead:
+ bio_io_error(bio);
+@@ -441,6 +449,12 @@ struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
+ PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
+ if (error)
+ goto fail_stats;
++ lockdep_register_key(&q->io_lock_cls_key);
++ lockdep_register_key(&q->q_lock_cls_key);
++ lockdep_init_map(&q->io_lockdep_map, "&q->q_usage_counter(io)",
++ &q->io_lock_cls_key, 0);
++ lockdep_init_map(&q->q_lockdep_map, "&q->q_usage_counter(queue)",
++ &q->q_lock_cls_key, 0);
+
+ q->nr_requests = BLKDEV_DEFAULT_RQ;
+
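
The lockdep annotations added above model queue usage as a fictitious read-write lock: every I/O "enters" as a reader, and freezing the queue behaves like taking the lock for write, which lets lockdep flag deadlocks such as freezing a queue while holding something an in-flight I/O needs. The analogy in runnable form, with a real rwlock doing what the annotations only pretend to do:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t q_usage = PTHREAD_RWLOCK_INITIALIZER;

    static void submit_io(void)
    {
        pthread_rwlock_rdlock(&q_usage);   /* bio_queue_enter() */
        puts("I/O in flight");
        pthread_rwlock_unlock(&q_usage);   /* blk_queue_exit() */
    }

    static void freeze_queue(void)
    {
        pthread_rwlock_wrlock(&q_usage);   /* blk_mq_freeze_queue():
                                            * waits for readers to drain */
        puts("queue frozen, no I/O can enter");
        pthread_rwlock_unlock(&q_usage);   /* blk_mq_unfreeze_queue() */
    }

    int main(void)
    {
        submit_io();
        freeze_queue();
        submit_io();
        return 0;
    }

Note that in the hunks the reader side acquires and immediately releases the lockdep map: only the dependency edge is being recorded, since the real blocking is done by q_usage_counter.
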
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index c7222c4685e060..32d60294a44af2 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -166,17 +166,6 @@ struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
+ return bio_submit_split(bio, split_sectors);
+ }
+
+-struct bio *bio_split_write_zeroes(struct bio *bio,
+- const struct queue_limits *lim, unsigned *nsegs)
+-{
+- *nsegs = 0;
+- if (!lim->max_write_zeroes_sectors)
+- return bio;
+- if (bio_sectors(bio) <= lim->max_write_zeroes_sectors)
+- return bio;
+- return bio_submit_split(bio, lim->max_write_zeroes_sectors);
+-}
+-
+ static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
+ bool is_atomic)
+ {
+@@ -211,7 +200,9 @@ static inline unsigned get_max_io_size(struct bio *bio,
+ * We ignore lim->max_sectors for atomic writes because it may less
+ * than the actual bio size, which we cannot tolerate.
+ */
+- if (is_atomic)
++ if (bio_op(bio) == REQ_OP_WRITE_ZEROES)
++ max_sectors = lim->max_write_zeroes_sectors;
++ else if (is_atomic)
+ max_sectors = lim->atomic_write_max_sectors;
+ else
+ max_sectors = lim->max_sectors;
+@@ -296,6 +287,14 @@ static bool bvec_split_segs(const struct queue_limits *lim,
+ return len > 0 || bv->bv_len > max_len;
+ }
+
++static unsigned int bio_split_alignment(struct bio *bio,
++ const struct queue_limits *lim)
++{
++ if (op_is_write(bio_op(bio)) && lim->zone_write_granularity)
++ return lim->zone_write_granularity;
++ return lim->logical_block_size;
++}
++
+ /**
+ * bio_split_rw_at - check if and where to split a read/write bio
+ * @bio: [in] bio to be split
+@@ -358,7 +357,7 @@ int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim,
+ * split size so that each bio is properly block size aligned, even if
+ * we do not use the full hardware limits.
+ */
+- bytes = ALIGN_DOWN(bytes, lim->logical_block_size);
++ bytes = ALIGN_DOWN(bytes, bio_split_alignment(bio, lim));
+
+ /*
+ * Bio splitting may cause subtle trouble such as hang when doing sync
+@@ -378,6 +377,46 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
+ get_max_io_size(bio, lim) << SECTOR_SHIFT));
+ }
+
++/*
++ * REQ_OP_ZONE_APPEND bios must never be split by the block layer.
++ *
++ * But we want the nr_segs calculation provided by bio_split_rw_at, and having
++ * a good sanity check that the submitter built the bio correctly is nice to
++ * have as well.
++ */
++struct bio *bio_split_zone_append(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nr_segs)
++{
++ unsigned int max_sectors = queue_limits_max_zone_append_sectors(lim);
++ int split_sectors;
++
++ split_sectors = bio_split_rw_at(bio, lim, nr_segs,
++ max_sectors << SECTOR_SHIFT);
++ if (WARN_ON_ONCE(split_sectors > 0))
++ split_sectors = -EINVAL;
++ return bio_submit_split(bio, split_sectors);
++}
++
++struct bio *bio_split_write_zeroes(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nsegs)
++{
++ unsigned int max_sectors = get_max_io_size(bio, lim);
++
++ *nsegs = 0;
++
++ /*
++ * An unset limit should normally not happen, as bio submission is keyed
++ * off having a non-zero limit. But SCSI can clear the limit in the
++ * I/O completion handler, and we can race and see this. Splitting to a
++ * zero limit obviously doesn't make sense, so band-aid it here.
++ */
++ if (!max_sectors)
++ return bio;
++ if (bio_sectors(bio) <= max_sectors)
++ return bio;
++ return bio_submit_split(bio, max_sectors);
++}
++
+ /**
+ * bio_split_to_limits - split a bio to fit the queue limits
+ * @bio: bio to be split
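
The bio_split_alignment() change above rounds the split point down to the zone write granularity for writes, so each fragment of a split bio stays properly aligned for the device. For a power-of-two boundary, ALIGN_DOWN is a single mask operation:

    #include <stdio.h>

    /* Round x down to a multiple of a, where a is a power of two
     * (simplified from the kernel macro). */
    #define ALIGN_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

    int main(void)
    {
        unsigned long bytes = 1234567;

        /* split at logical block size vs. zone write granularity: */
        printf("%lu\n", ALIGN_DOWN(bytes, 512UL));    /* 1234432 */
        printf("%lu\n", ALIGN_DOWN(bytes, 65536UL));  /* 1179648 */
        return 0;
    }

The larger the granularity, the more bytes are pushed into the follow-up bio, but every fragment boundary remains device-acceptable.
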
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a2401e4d8c974b..17dd8b329e6916 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -120,9 +120,59 @@ void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
+ inflight[1] = mi.inflight[1];
+ }
+
+-void blk_freeze_queue_start(struct request_queue *q)
++#ifdef CONFIG_LOCKDEP
++static bool blk_freeze_set_owner(struct request_queue *q,
++ struct task_struct *owner)
++{
++ if (!owner)
++ return false;
++
++ if (!q->mq_freeze_depth) {
++ q->mq_freeze_owner = owner;
++ q->mq_freeze_owner_depth = 1;
++ return true;
++ }
++
++ if (owner == q->mq_freeze_owner)
++ q->mq_freeze_owner_depth += 1;
++ return false;
++}
++
++/* verify the last unfreeze in owner context */
++static bool blk_unfreeze_check_owner(struct request_queue *q)
++{
++ if (!q->mq_freeze_owner)
++ return false;
++ if (q->mq_freeze_owner != current)
++ return false;
++ if (--q->mq_freeze_owner_depth == 0) {
++ q->mq_freeze_owner = NULL;
++ return true;
++ }
++ return false;
++}
++
++#else
++
++static bool blk_freeze_set_owner(struct request_queue *q,
++ struct task_struct *owner)
++{
++ return false;
++}
++
++static bool blk_unfreeze_check_owner(struct request_queue *q)
+ {
++ return false;
++}
++#endif
++
++bool __blk_freeze_queue_start(struct request_queue *q,
++ struct task_struct *owner)
++{
++ bool freeze;
++
+ mutex_lock(&q->mq_freeze_lock);
++ freeze = blk_freeze_set_owner(q, owner);
+ if (++q->mq_freeze_depth == 1) {
+ percpu_ref_kill(&q->q_usage_counter);
+ mutex_unlock(&q->mq_freeze_lock);
+@@ -131,6 +181,14 @@ void blk_freeze_queue_start(struct request_queue *q)
+ } else {
+ mutex_unlock(&q->mq_freeze_lock);
+ }
++
++ return freeze;
++}
++
++void blk_freeze_queue_start(struct request_queue *q)
++{
++ if (__blk_freeze_queue_start(q, current))
++ blk_freeze_acquire_lock(q, false, false);
+ }
+ EXPORT_SYMBOL_GPL(blk_freeze_queue_start);
+
+@@ -176,8 +234,10 @@ void blk_mq_freeze_queue(struct request_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
+
+-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
++bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
+ {
++ bool unfreeze;
++
+ mutex_lock(&q->mq_freeze_lock);
+ if (force_atomic)
+ q->q_usage_counter.data->force_atomic = true;
+@@ -187,15 +247,39 @@ void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
+ percpu_ref_resurrect(&q->q_usage_counter);
+ wake_up_all(&q->mq_freeze_wq);
+ }
++ unfreeze = blk_unfreeze_check_owner(q);
+ mutex_unlock(&q->mq_freeze_lock);
++
++ return unfreeze;
+ }
+
+ void blk_mq_unfreeze_queue(struct request_queue *q)
+ {
+- __blk_mq_unfreeze_queue(q, false);
++ if (__blk_mq_unfreeze_queue(q, false))
++ blk_unfreeze_release_lock(q, false, false);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
+
++/*
++ * non_owner variant of blk_freeze_queue_start
++ *
++ * Unlike blk_freeze_queue_start, the queue doesn't need to be unfrozen
++ * by the same task. This is fragile and should not be used if at all
++ * possible.
++ */
++void blk_freeze_queue_start_non_owner(struct request_queue *q)
++{
++ __blk_freeze_queue_start(q, NULL);
++}
++EXPORT_SYMBOL_GPL(blk_freeze_queue_start_non_owner);
++
++/* non_owner variant of blk_mq_unfreeze_queue */
++void blk_mq_unfreeze_queue_non_owner(struct request_queue *q)
++{
++ __blk_mq_unfreeze_queue(q, false);
++}
++EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue_non_owner);
++
+ /*
+ * FIXME: replace the scsi_internal_device_*block_nowait() calls in the
+ * mpt3sas driver such that this function can be removed.
+@@ -283,8 +367,9 @@ void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
+ if (!blk_queue_skip_tagset_quiesce(q))
+ blk_mq_quiesce_queue_nowait(q);
+ }
+- blk_mq_wait_quiesce_done(set);
+ mutex_unlock(&set->tag_list_lock);
++
++ blk_mq_wait_quiesce_done(set);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_quiesce_tagset);
+
+@@ -2202,6 +2287,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
+ }
+ EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
+
++static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
++{
++ bool need_run;
++
++ /*
++ * When queue is quiesced, we may be switching io scheduler, or
++ * updating nr_hw_queues, or other things, and we can't run queue
++ * any more, even blk_mq_hctx_has_pending() can't be called safely.
++ *
++ * And queue will be rerun in blk_mq_unquiesce_queue() if it is
++ * quiesced.
++ */
++ __blk_mq_run_dispatch_ops(hctx->queue, false,
++ need_run = !blk_queue_quiesced(hctx->queue) &&
++ blk_mq_hctx_has_pending(hctx));
++ return need_run;
++}
++
+ /**
+ * blk_mq_run_hw_queue - Start to run a hardware queue.
+ * @hctx: Pointer to the hardware queue to run.
+@@ -2222,20 +2325,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+
+ might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
+
+- /*
+- * When queue is quiesced, we may be switching io scheduler, or
+- * updating nr_hw_queues, or other things, and we can't run queue
+- * any more, even __blk_mq_hctx_has_pending() can't be called safely.
+- *
+- * And queue will be rerun in blk_mq_unquiesce_queue() if it is
+- * quiesced.
+- */
+- __blk_mq_run_dispatch_ops(hctx->queue, false,
+- need_run = !blk_queue_quiesced(hctx->queue) &&
+- blk_mq_hctx_has_pending(hctx));
++ need_run = blk_mq_hw_queue_need_run(hctx);
++ if (!need_run) {
++ unsigned long flags;
+
+- if (!need_run)
+- return;
++ /*
++ * Synchronize with blk_mq_unquiesce_queue(), because we check
++ * if hw queue is quiesced locklessly above, we need the use
++ * ->queue_lock to make sure we see the up-to-date status to
++ * not miss rerunning the hw queue.
++ */
++ spin_lock_irqsave(&hctx->queue->queue_lock, flags);
++ need_run = blk_mq_hw_queue_need_run(hctx);
++ spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
++
++ if (!need_run)
++ return;
++ }
+
+ if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
+ blk_mq_delay_run_hw_queue(hctx, 0);
+@@ -2392,6 +2498,12 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+ return;
+
+ clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
++ /*
++ * Pairs with the smp_mb() in blk_mq_hctx_stopped() to order the
++ * clearing of BLK_MQ_S_STOPPED above and the checking of dispatch
++ * list in the subsequent routine.
++ */
++ smp_mb__after_atomic();
+ blk_mq_run_hw_queue(hctx, async);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue);
+@@ -2619,6 +2731,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, 0);
++ blk_mq_run_hw_queue(hctx, false);
+ return;
+ }
+
+@@ -2649,6 +2762,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, 0);
++ blk_mq_run_hw_queue(hctx, false);
+ return BLK_STS_OK;
+ }
+
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 3bd43b10032f83..f4ac1af77a267e 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -230,6 +230,19 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
+
+ static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
+ {
++ /* Fast path: hardware queue is not stopped most of the time. */
++ if (likely(!test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
++ return false;
++
++ /*
++ * This barrier is used to order adding of dispatch list before and
++ * the test of BLK_MQ_S_STOPPED below. Pairs with the memory barrier
++ * in blk_mq_start_stopped_hw_queue() so that dispatch code could
++ * either see BLK_MQ_S_STOPPED is cleared or dispatch list is not
++ * empty to avoid missing dispatching requests.
++ */
++ smp_mb();
++
+ return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
+ }
+
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index cd8a8eabc9a5eb..15851d07b2a254 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -249,6 +249,13 @@ static int blk_validate_limits(struct queue_limits *lim)
+ if (lim->io_min < lim->physical_block_size)
+ lim->io_min = lim->physical_block_size;
+
++ /*
++ * The optimal I/O size may not be aligned to physical block size
++ * (because it may be limited by dma engines which have no clue about
++ * block size of the disks attached to them), so we round it down here.
++ */
++ lim->io_opt = round_down(lim->io_opt, lim->physical_block_size);
++
+ /*
+ * max_hw_sectors has a somewhat weird default for historical reason,
+ * but driver really should set their own instead of relying on this
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index e85941bec857b6..207577145c54f4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -794,10 +794,8 @@ int blk_register_queue(struct gendisk *disk)
+ * faster to shut down and is made fully functional here as
+ * request_queues for non-existent devices never get registered.
+ */
+- if (!blk_queue_init_done(q)) {
+- blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
+- percpu_ref_switch_to_percpu(&q->q_usage_counter);
+- }
++ blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
++ percpu_ref_switch_to_percpu(&q->q_usage_counter);
+
+ return ret;
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index af19296fa50df1..95e517723db3e4 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1541,6 +1541,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ unsigned int nr_seq_zones, nr_conv_zones = 0;
+ unsigned int pool_size;
+ struct queue_limits lim;
++ int ret;
+
+ disk->nr_zones = args->nr_zones;
+ disk->zone_capacity = args->zone_capacity;
+@@ -1593,7 +1594,11 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ }
+
+ commit:
+- return queue_limits_commit_update(q, &lim);
++ blk_mq_freeze_queue(q);
++ ret = queue_limits_commit_update(q, &lim);
++ blk_mq_unfreeze_queue(q);
++
++ return ret;
+ }
+
+ static int blk_revalidate_conv_zone(struct blk_zone *zone, unsigned int idx,
+@@ -1814,14 +1819,15 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ * Set the new disk zone parameters only once the queue is frozen and
+ * all I/Os are completed.
+ */
+- blk_mq_freeze_queue(q);
+ if (ret > 0)
+ ret = disk_update_zone_resources(disk, &args);
+ else
+ pr_warn("%s: failed to revalidate zones\n", disk->disk_name);
+- if (ret)
++ if (ret) {
++ blk_mq_freeze_queue(q);
+ disk_free_zone_resources(disk);
+- blk_mq_unfreeze_queue(q);
++ blk_mq_unfreeze_queue(q);
++ }
+
+ kfree(args.conv_zones_bitmap);
+
+diff --git a/block/blk.h b/block/blk.h
+index 0d8cd64c126064..96aed15390c24f 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/bio-integrity.h>
+ #include <linux/blk-crypto.h>
++#include <linux/lockdep.h>
+ #include <linux/memblock.h> /* for max_pfn/max_low_pfn */
+ #include <linux/sched/sysctl.h>
+ #include <linux/timekeeping.h>
+@@ -35,8 +36,10 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
+ void blk_free_flush_queue(struct blk_flush_queue *q);
+
+ void blk_freeze_queue(struct request_queue *q);
+-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
+-void blk_queue_start_drain(struct request_queue *q);
++bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
++bool blk_queue_start_drain(struct request_queue *q);
++bool __blk_freeze_queue_start(struct request_queue *q,
++ struct task_struct *owner);
+ int __bio_queue_enter(struct request_queue *q, struct bio *bio);
+ void submit_bio_noacct_nocheck(struct bio *bio);
+ void bio_await_chain(struct bio *bio);
+@@ -69,8 +72,11 @@ static inline int bio_queue_enter(struct bio *bio)
+ {
+ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+
+- if (blk_try_enter_queue(q, false))
++ if (blk_try_enter_queue(q, false)) {
++ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
+ return 0;
++ }
+ return __bio_queue_enter(q, bio);
+ }
+
+@@ -337,6 +343,8 @@ struct bio *bio_split_write_zeroes(struct bio *bio,
+ const struct queue_limits *lim, unsigned *nsegs);
+ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
+ unsigned *nr_segs);
++struct bio *bio_split_zone_append(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nr_segs);
+
+ /*
+ * All drivers must accept single-segments bios that are smaller than PAGE_SIZE.
+@@ -375,6 +383,8 @@ static inline struct bio *__bio_split_to_limits(struct bio *bio,
+ return bio_split_rw(bio, lim, nr_segs);
+ *nr_segs = 1;
+ return bio;
++ case REQ_OP_ZONE_APPEND:
++ return bio_split_zone_append(bio, lim, nr_segs);
+ case REQ_OP_DISCARD:
+ case REQ_OP_SECURE_ERASE:
+ return bio_split_discard(bio, lim, nr_segs);
+@@ -720,4 +730,22 @@ void blk_integrity_verify(struct bio *bio);
+ void blk_integrity_prepare(struct request *rq);
+ void blk_integrity_complete(struct request *rq, unsigned int nr_bytes);
+
++static inline void blk_freeze_acquire_lock(struct request_queue *q, bool
++ disk_dead, bool queue_dying)
++{
++ if (!disk_dead)
++ rwsem_acquire(&q->io_lockdep_map, 0, 1, _RET_IP_);
++ if (!queue_dying)
++ rwsem_acquire(&q->q_lockdep_map, 0, 1, _RET_IP_);
++}
++
++static inline void blk_unfreeze_release_lock(struct request_queue *q, bool
++ disk_dead, bool queue_dying)
++{
++ if (!queue_dying)
++ rwsem_release(&q->q_lockdep_map, _RET_IP_);
++ if (!disk_dead)
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
++}
++
+ #endif /* BLK_INTERNAL_H */
+diff --git a/block/elevator.c b/block/elevator.c
+index 9430cde13d1a41..43ba4ab1ada7fd 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -598,13 +598,19 @@ void elevator_init_mq(struct request_queue *q)
+ * drain any dispatch activities originated from passthrough
+ * requests, then no need to quiesce queue which may add long boot
+ * latency, especially when lots of disks are involved.
++ *
++ * Disk isn't added yet, so verifying queue lock only manually.
+ */
+- blk_mq_freeze_queue(q);
++ blk_freeze_queue_start_non_owner(q);
++ blk_freeze_acquire_lock(q, true, false);
++ blk_mq_freeze_queue_wait(q);
++
+ blk_mq_cancel_work_sync(q);
+
+ err = blk_mq_init_sched(q, e);
+
+- blk_mq_unfreeze_queue(q);
++ blk_unfreeze_release_lock(q, true, false);
++ blk_mq_unfreeze_queue_non_owner(q);
+
+ if (err) {
+ pr_warn("\"%s\" elevator initialization failed, "
+diff --git a/block/fops.c b/block/fops.c
+index 9825c1713a49a9..16bb5ae7023798 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -34,13 +34,10 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
+ return opf;
+ }
+
+-static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+- struct iov_iter *iter, bool is_atomic)
++static bool blkdev_dio_invalid(struct block_device *bdev, struct kiocb *iocb,
++ struct iov_iter *iter)
+ {
+- if (is_atomic && !generic_atomic_write_valid(iter, pos))
+- return true;
+-
+- return pos & (bdev_logical_block_size(bdev) - 1) ||
++ return iocb->ki_pos & (bdev_logical_block_size(bdev) - 1) ||
+ !bdev_iter_is_aligned(bdev, iter);
+ }
+
+@@ -367,13 +364,12 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
+ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+- bool is_atomic = iocb->ki_flags & IOCB_ATOMIC;
+ unsigned int nr_pages;
+
+ if (!iov_iter_count(iter))
+ return 0;
+
+- if (blkdev_dio_invalid(bdev, iocb->ki_pos, iter, is_atomic))
++ if (blkdev_dio_invalid(bdev, iocb, iter))
+ return -EINVAL;
+
+ nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
+@@ -382,7 +378,7 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ return __blkdev_direct_IO_simple(iocb, iter, bdev,
+ nr_pages);
+ return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages);
+- } else if (is_atomic) {
++ } else if (iocb->ki_flags & IOCB_ATOMIC) {
+ return -EINVAL;
+ }
+ return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages));
+@@ -624,7 +620,7 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+ if (!bdev)
+ return -ENXIO;
+
+- if (bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT)
++ if (bdev_can_atomic_write(bdev))
+ filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
+ ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
+@@ -680,6 +676,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ struct file *file = iocb->ki_filp;
+ struct inode *bd_inode = bdev_file_inode(file);
+ struct block_device *bdev = I_BDEV(bd_inode);
++ bool atomic = iocb->ki_flags & IOCB_ATOMIC;
+ loff_t size = bdev_nr_bytes(bdev);
+ size_t shorted = 0;
+ ssize_t ret;
+@@ -699,8 +696,16 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ if ((iocb->ki_flags & (IOCB_NOWAIT | IOCB_DIRECT)) == IOCB_NOWAIT)
+ return -EOPNOTSUPP;
+
++ if (atomic) {
++ ret = generic_atomic_write_valid(iocb, from);
++ if (ret)
++ return ret;
++ }
++
+ size -= iocb->ki_pos;
+ if (iov_iter_count(from) > size) {
++ if (atomic)
++ return -EINVAL;
+ shorted = iov_iter_count(from) - size;
+ iov_iter_truncate(from, size);
+ }
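
blkdev_write_iter() now validates atomic writes up front and refuses to shorten them, since truncation would silently break the all-or-nothing guarantee. A hedged sketch of the kind of checks involved -- the authoritative rules live in the kernel's generic_atomic_write_valid(); this approximation assumes the commonly documented constraints of a power-of-two length and a position aligned to that length:

    #include <stdio.h>

    /* Approximate validity rules for a torn-write-free ("atomic")
     * block write: power-of-two length, position aligned to it. */
    static int atomic_write_valid(unsigned long long pos, unsigned long len)
    {
        if (len == 0 || (len & (len - 1)))    /* power of two? */
            return -1;
        if (pos & (len - 1))                  /* pos aligned to len? */
            return -1;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", atomic_write_valid(0, 4096));     /*  0: ok */
        printf("%d\n", atomic_write_valid(4096, 4096));  /*  0: ok */
        printf("%d\n", atomic_write_valid(512, 4096));   /* -1: misaligned */
        printf("%d\n", atomic_write_valid(0, 3000));     /* -1: not 2^n */
        return 0;
    }

The `if (atomic) return -EINVAL;` branch in the truncation path follows directly: a shortened atomic write is by definition no longer the write the caller asked for atomically.
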
+diff --git a/block/genhd.c b/block/genhd.c
+index 1c05dd4c6980b5..8645cf3b0816e4 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -581,13 +581,13 @@ static void blk_report_disk_dead(struct gendisk *disk, bool surprise)
+ rcu_read_unlock();
+ }
+
+-static void __blk_mark_disk_dead(struct gendisk *disk)
++static bool __blk_mark_disk_dead(struct gendisk *disk)
+ {
+ /*
+ * Fail any new I/O.
+ */
+ if (test_and_set_bit(GD_DEAD, &disk->state))
+- return;
++ return false;
+
+ if (test_bit(GD_OWNS_QUEUE, &disk->state))
+ blk_queue_flag_set(QUEUE_FLAG_DYING, disk->queue);
+@@ -600,7 +600,7 @@ static void __blk_mark_disk_dead(struct gendisk *disk)
+ /*
+ * Prevent new I/O from crossing bio_queue_enter().
+ */
+- blk_queue_start_drain(disk->queue);
++ return blk_queue_start_drain(disk->queue);
+ }
+
+ /**
+@@ -641,6 +641,7 @@ void del_gendisk(struct gendisk *disk)
+ struct request_queue *q = disk->queue;
+ struct block_device *part;
+ unsigned long idx;
++ bool start_drain, queue_dying;
+
+ might_sleep();
+
+@@ -668,7 +669,10 @@ void del_gendisk(struct gendisk *disk)
+ * Drop all partitions now that the disk is marked dead.
+ */
+ mutex_lock(&disk->open_mutex);
+- __blk_mark_disk_dead(disk);
++ start_drain = __blk_mark_disk_dead(disk);
++ queue_dying = blk_queue_dying(q);
++ if (start_drain)
++ blk_freeze_acquire_lock(q, true, queue_dying);
+ xa_for_each_start(&disk->part_tbl, idx, part, 1)
+ drop_partition(part);
+ mutex_unlock(&disk->open_mutex);
+@@ -718,13 +722,13 @@ void del_gendisk(struct gendisk *disk)
+ * If the disk does not own the queue, allow using passthrough requests
+ * again. Else leave the queue frozen to fail all I/O.
+ */
+- if (!test_bit(GD_OWNS_QUEUE, &disk->state)) {
+- blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
++ if (!test_bit(GD_OWNS_QUEUE, &disk->state))
+ __blk_mq_unfreeze_queue(q, true);
+- } else {
+- if (queue_is_mq(q))
+- blk_mq_exit_queue(q);
+- }
++ else if (queue_is_mq(q))
++ blk_mq_exit_queue(q);
++
++ if (start_drain)
++ blk_unfreeze_release_lock(q, true, queue_dying);
+ }
+ EXPORT_SYMBOL(del_gendisk);
+
+diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
+index d0d954fe9d54f3..7fc79e7dce44a9 100644
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -117,8 +117,10 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
+ err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
+ if (!err)
+ return -EINPROGRESS;
+- if (err == -EBUSY)
+- return -EAGAIN;
++ if (err == -EBUSY) {
++ /* try non-parallel mode */
++ return crypto_aead_encrypt(creq);
++ }
+
+ return err;
+ }
+@@ -166,8 +168,10 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
+ err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
+ if (!err)
+ return -EINPROGRESS;
+- if (err == -EBUSY)
+- return -EAGAIN;
++ if (err == -EBUSY) {
++ /* try non-parallel mode */
++ return crypto_aead_decrypt(creq);
++ }
+
+ return err;
+ }
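The pcrypt change above stops translating padata's -EBUSY into -EAGAIN and instead retries the request on the non-parallel AEAD path, so callers never see the transient error. A user-space sketch of the same submit-or-run-inline shape (names are illustrative stand-ins, not the crypto API):

#include <errno.h>
#include <stdio.h>

static int submit_parallel(int job)
{
	return -EBUSY;	/* pretend the parallel queue is saturated */
}

static int do_inline(int job)
{
	printf("job %d processed inline\n", job);
	return 0;
}

static int submit(int job)
{
	int err = submit_parallel(job);

	if (!err)
		return -EINPROGRESS;	/* will complete asynchronously */
	if (err == -EBUSY)
		return do_inline(job);	/* fall back instead of -EAGAIN */
	return err;
}

int main(void)
{
	return submit(1) ? 1 : 0;
}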
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 78b32a8232419e..29b723039a3459 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -291,15 +291,16 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ return ret;
+ }
+
+-static int
++int
+ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp_type,
+- struct vpu_jsm_msg *resp, u32 channel,
+- unsigned long timeout_ms)
++ struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms)
+ {
+ struct ivpu_ipc_consumer cons;
+ int ret;
+
++ drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++
+ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
+
+ ret = ivpu_ipc_send(vdev, &cons, req);
+@@ -325,19 +326,21 @@ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req
+ return ret;
+ }
+
+-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms)
++int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
++ u32 channel, unsigned long timeout_ms)
+ {
+ struct vpu_jsm_msg hb_req = { .type = VPU_JSM_MSG_QUERY_ENGINE_HB };
+ struct vpu_jsm_msg hb_resp;
+ int ret, hb_ret;
+
+- drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
+
+ ret = ivpu_ipc_send_receive_internal(vdev, req, expected_resp, resp, channel, timeout_ms);
+ if (ret != -ETIMEDOUT)
+- return ret;
++ goto rpm_put;
+
+ hb_ret = ivpu_ipc_send_receive_internal(vdev, &hb_req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE,
+ &hb_resp, VPU_IPC_CHAN_ASYNC_CMD,
+@@ -345,21 +348,7 @@ int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *r
+ if (hb_ret == -ETIMEDOUT)
+ ivpu_pm_trigger_recovery(vdev, "IPC timeout");
+
+- return ret;
+-}
+-
+-int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms)
+-{
+- int ret;
+-
+- ret = ivpu_rpm_get(vdev);
+- if (ret < 0)
+- return ret;
+-
+- ret = ivpu_ipc_send_receive_active(vdev, req, expected_resp, resp, channel, timeout_ms);
+-
++rpm_put:
+ ivpu_rpm_put(vdev);
+ return ret;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h
+index 4fe38141045ea3..fb4de7fb8210ea 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.h
++++ b/drivers/accel/ivpu/ivpu_ipc.h
+@@ -101,10 +101,9 @@ int ivpu_ipc_send(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ struct ivpu_ipc_hdr *ipc_buf, struct vpu_jsm_msg *jsm_msg,
+ unsigned long timeout_ms);
+-
+-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms);
++int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ enum vpu_ipc_msg_type expected_resp_type,
++ struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms);
+ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+ u32 channel, unsigned long timeout_ms);
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
+index 46ef16c3c06910..88105963c1b288 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
+@@ -270,9 +270,8 @@ int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev)
+
+ req.payload.pwr_d0i3_enter.send_response = 1;
+
+- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE,
+- &resp, VPU_IPC_CHAN_GEN_CMD,
+- vdev->timeout.d0i3_entry_msg);
++ ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE, &resp,
++ VPU_IPC_CHAN_GEN_CMD, vdev->timeout.d0i3_entry_msg);
+ if (ret)
+ return ret;
+
+@@ -430,8 +429,8 @@ int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
+
+ req.payload.hws_priority_band_setup.normal_band_percentage = 10;
+
+- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
++ ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
++ &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ if (ret)
+ ivpu_warn_ratelimited(vdev, "Failed to set priority bands: %d\n", ret);
+
+@@ -544,9 +543,8 @@ int ivpu_jsm_dct_enable(struct ivpu_device *vdev, u32 active_us, u32 inactive_us
+ req.payload.pwr_dct_control.dct_active_us = active_us;
+ req.payload.pwr_dct_control.dct_inactive_us = inactive_us;
+
+- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD,
+- vdev->timeout.jsm);
++ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE, &resp,
++ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
+
+ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+@@ -554,7 +552,6 @@ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+ struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DCT_DISABLE };
+ struct vpu_jsm_msg resp;
+
+- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD,
+- vdev->timeout.jsm);
++ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, &resp,
++ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
+diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
+index c0e77c1c8e09d6..eb6c2d3603874a 100644
+--- a/drivers/acpi/arm64/gtdt.c
++++ b/drivers/acpi/arm64/gtdt.c
+@@ -283,7 +283,7 @@ static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block,
+ if (frame->virt_irq > 0)
+ acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt);
+ frame->virt_irq = 0;
+- } while (i-- >= 0 && gtdt_frame--);
++ } while (i-- > 0 && gtdt_frame--);
+
+ return -EINVAL;
+ }
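The GTDT fix above tightens the unwind loop's bound from i-- >= 0 to i-- > 0: with the old test the cleanup body ran one extra time, after gtdt_frame-- had already stepped the cursor to before the first array element. A stand-alone demonstration of the off-by-one:

#include <stdio.h>

int main(void)
{
	int frames[3] = { 10, 20, 30 };
	int *cursor = &frames[2];	/* last frame that was set up */
	int i = 2;
	int visited = 0;

	do {
		visited++;	/* "unregister *cursor" would go here */
	} while (i-- > 0 && cursor--);	/* old bound i-- >= 0 gives 4 passes
					 * and steps cursor out of bounds */

	printf("visited %d of 3 frames\n", visited);	/* visited 3 */
	return 0;
}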
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 544f53ae9cc0c3..3a39c0a04163c9 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1146,7 +1146,6 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ return -EFAULT;
+ }
+ val = MASK_VAL_WRITE(reg, prev_val, val);
+- val |= prev_val;
+ }
+
+ switch (size) {
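The cppc_acpi.c hunk above deletes a stray val |= prev_val; after MASK_VAL_WRITE(): OR-ing the previous register contents back in meant a bit inside the target field could be set but never cleared. A worked stand-alone example (the field macros sketch the idea, not the kernel's exact MASK_VAL_WRITE() definition):

#include <stdio.h>
#include <stdint.h>

#define FIELD_MASK	0x0cu	/* bits 2..3 */
#define FIELD_SHIFT	2

/* Merge the new field value into prev, preserving bits outside the field. */
static uint32_t mask_val_write(uint32_t prev, uint32_t val)
{
	return ((val << FIELD_SHIFT) & FIELD_MASK) | (prev & ~FIELD_MASK);
}

int main(void)
{
	uint32_t prev = 0x0fu;	/* field currently 0b11 */
	uint32_t good = mask_val_write(prev, 0);		/* clear the field */
	uint32_t bad  = mask_val_write(prev, 0) | prev;	/* the deleted OR */

	printf("good=0x%02x bad=0x%02x\n", good, bad);	/* good=0x03 bad=0x0f */
	return 0;
}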
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 324a9a3c087aa2..c6664a78796979 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -829,19 +829,18 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name, st
+ shash->tfm = alg;
+
+ if (crypto_shash_digest(shash, fw->data, fw->size, sha256buf) < 0)
+- goto out_shash;
++ goto out_free;
+
+ for (int i = 0; i < SHA256_DIGEST_SIZE; i++)
+ sprintf(&outbuf[i * 2], "%02x", sha256buf[i]);
+ outbuf[SHA256_BLOCK_SIZE] = 0;
+ dev_dbg(device, "Loaded FW: %s, sha256: %s\n", name, outbuf);
+
+-out_shash:
+- crypto_free_shash(alg);
+ out_free:
+ kfree(shash);
+ kfree(outbuf);
+ kfree(sha256buf);
++ crypto_free_shash(alg);
+ }
+ #else
+ static void fw_log_firmware_info(const struct firmware *fw, const char *name,
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index d3ec1345b5b53a..6981e5f974e9a4 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -514,12 +514,16 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
+ return IRQ_NONE;
+ }
+
++static struct lock_class_key regmap_irq_lock_class;
++static struct lock_class_key regmap_irq_request_class;
++
+ static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
+ irq_hw_number_t hw)
+ {
+ struct regmap_irq_chip_data *data = h->host_data;
+
+ irq_set_chip_data(virq, data);
++	irq_set_lockdep_class(virq, &regmap_irq_lock_class, &regmap_irq_request_class);
+ irq_set_chip(virq, &data->irq_chip);
+ irq_set_nested_thread(virq, 1);
+ irq_set_parent(virq, data->irq);
+@@ -608,6 +612,30 @@ int regmap_irq_set_type_config_simple(unsigned int **buf, unsigned int type,
+ }
+ EXPORT_SYMBOL_GPL(regmap_irq_set_type_config_simple);
+
++static int regmap_irq_create_domain(struct fwnode_handle *fwnode, int irq_base,
++ const struct regmap_irq_chip *chip,
++ struct regmap_irq_chip_data *d)
++{
++ struct irq_domain_info info = {
++ .fwnode = fwnode,
++ .size = chip->num_irqs,
++ .hwirq_max = chip->num_irqs,
++ .virq_base = irq_base,
++		.ops = &regmap_domain_ops,
++ .host_data = d,
++ .name_suffix = chip->domain_suffix,
++ };
++
++ d->domain = irq_domain_instantiate(&info);
++ if (IS_ERR(d->domain)) {
++ dev_err(d->map->dev, "Failed to create IRQ domain\n");
++ return PTR_ERR(d->domain);
++ }
++
++ return 0;
++}
++
++
+ /**
+ * regmap_add_irq_chip_fwnode() - Use standard regmap IRQ controller handling
+ *
+@@ -856,18 +884,9 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ }
+ }
+
+- if (irq_base)
+- d->domain = irq_domain_create_legacy(fwnode, chip->num_irqs,
+- irq_base, 0,
+-					  &regmap_domain_ops, d);
+- else
+- d->domain = irq_domain_create_linear(fwnode, chip->num_irqs,
+-					     &regmap_domain_ops, d);
+- if (!d->domain) {
+- dev_err(map->dev, "Failed to create IRQ domain\n");
+- ret = -ENOMEM;
++ ret = regmap_irq_create_domain(fwnode, irq_base, chip, d);
++ if (ret)
+ goto err_alloc;
+- }
+
+ ret = request_threaded_irq(irq, NULL, regmap_irq_thread,
+ irq_flags | IRQF_ONESHOT,
+diff --git a/drivers/base/trace.h b/drivers/base/trace.h
+index e52b6eae060dde..3b83b13a57ff1e 100644
+--- a/drivers/base/trace.h
++++ b/drivers/base/trace.h
+@@ -24,18 +24,18 @@ DECLARE_EVENT_CLASS(devres,
+ __field(struct device *, dev)
+ __field(const char *, op)
+ __field(void *, node)
+- __field(const char *, name)
++ __string(name, name)
+ __field(size_t, size)
+ ),
+ TP_fast_assign(
+ __assign_str(devname);
+ __entry->op = op;
+ __entry->node = node;
+- __entry->name = name;
++ __assign_str(name);
+ __entry->size = size;
+ ),
+ TP_printk("%s %3s %p %s (%zu bytes)", __get_str(devname),
+- __entry->op, __entry->node, __entry->name, __entry->size)
++ __entry->op, __entry->node, __get_str(name), __entry->size)
+ );
+
+ DEFINE_EVENT(devres, devres_log,
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index 2fd1ed1017481b..292f127cae0abe 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -231,8 +231,10 @@ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
+ xa_lock(&brd->brd_pages);
+ while (size >= PAGE_SIZE && aligned_sector < rd_size * 2) {
+ page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT);
+- if (page)
++ if (page) {
+ __free_page(page);
++ brd->brd_nr_pages--;
++ }
+ aligned_sector += PAGE_SECTORS;
+ size -= PAGE_SIZE;
+ }
+@@ -316,8 +318,40 @@ __setup("ramdisk_size=", ramdisk_size);
+ * (should share code eventually).
+ */
+ static LIST_HEAD(brd_devices);
++static DEFINE_MUTEX(brd_devices_mutex);
+ static struct dentry *brd_debugfs_dir;
+
++static struct brd_device *brd_find_or_alloc_device(int i)
++{
++ struct brd_device *brd;
++
++ mutex_lock(&brd_devices_mutex);
++ list_for_each_entry(brd, &brd_devices, brd_list) {
++ if (brd->brd_number == i) {
++ mutex_unlock(&brd_devices_mutex);
++ return ERR_PTR(-EEXIST);
++ }
++ }
++
++ brd = kzalloc(sizeof(*brd), GFP_KERNEL);
++ if (!brd) {
++ mutex_unlock(&brd_devices_mutex);
++ return ERR_PTR(-ENOMEM);
++ }
++ brd->brd_number = i;
++ list_add_tail(&brd->brd_list, &brd_devices);
++ mutex_unlock(&brd_devices_mutex);
++ return brd;
++}
++
++static void brd_free_device(struct brd_device *brd)
++{
++ mutex_lock(&brd_devices_mutex);
++ list_del(&brd->brd_list);
++ mutex_unlock(&brd_devices_mutex);
++ kfree(brd);
++}
++
+ static int brd_alloc(int i)
+ {
+ struct brd_device *brd;
+@@ -340,14 +374,9 @@ static int brd_alloc(int i)
+ BLK_FEAT_NOWAIT,
+ };
+
+- list_for_each_entry(brd, &brd_devices, brd_list)
+- if (brd->brd_number == i)
+- return -EEXIST;
+- brd = kzalloc(sizeof(*brd), GFP_KERNEL);
+- if (!brd)
+- return -ENOMEM;
+- brd->brd_number = i;
+- list_add_tail(&brd->brd_list, &brd_devices);
++ brd = brd_find_or_alloc_device(i);
++ if (IS_ERR(brd))
++ return PTR_ERR(brd);
+
+ xa_init(&brd->brd_pages);
+
+@@ -378,8 +407,7 @@ static int brd_alloc(int i)
+ out_cleanup_disk:
+ put_disk(disk);
+ out_free_dev:
+- list_del(&brd->brd_list);
+- kfree(brd);
++ brd_free_device(brd);
+ return err;
+ }
+
+@@ -398,8 +426,7 @@ static void brd_cleanup(void)
+ del_gendisk(brd->brd_disk);
+ put_disk(brd->brd_disk);
+ brd_free_pages(brd);
+- list_del(&brd->brd_list);
+- kfree(brd);
++ brd_free_device(brd);
+ }
+ }
+
+@@ -426,16 +453,6 @@ static int __init brd_init(void)
+ {
+ int err, i;
+
+- brd_check_and_reset_par();
+-
+- brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
+-
+- for (i = 0; i < rd_nr; i++) {
+- err = brd_alloc(i);
+- if (err)
+- goto out_free;
+- }
+-
+ /*
+ * brd module now has a feature to instantiate underlying device
+ * structure on-demand, provided that there is an access dev node.
+@@ -451,11 +468,18 @@ static int __init brd_init(void)
+ * dynamically.
+ */
+
++ brd_check_and_reset_par();
++
++ brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
++
+ if (__register_blkdev(RAMDISK_MAJOR, "ramdisk", brd_probe)) {
+ err = -EIO;
+ goto out_free;
+ }
+
++ for (i = 0; i < rd_nr; i++)
++ brd_alloc(i);
++
+ pr_info("brd: module loaded\n");
+ return 0;
+
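The brd refactor above puts the duplicate check, the allocation and the list insertion under one brd_devices_mutex, so two concurrent probes of the same minor can no longer both pass the lookup. A user-space sketch of the find-or-alloc pattern (types and names illustrative):

#include <pthread.h>
#include <stdlib.h>

struct dev {
	int number;
	struct dev *next;
};

static struct dev *devices;
static pthread_mutex_t devices_lock = PTHREAD_MUTEX_INITIALIZER;

static struct dev *find_or_alloc_device(int i)
{
	struct dev *d;

	pthread_mutex_lock(&devices_lock);
	for (d = devices; d; d = d->next) {
		if (d->number == i) {
			pthread_mutex_unlock(&devices_lock);
			return NULL;	/* maps to -EEXIST in the driver */
		}
	}

	d = calloc(1, sizeof(*d));	/* NULL here maps to -ENOMEM */
	if (d) {
		d->number = i;
		d->next = devices;
		devices = d;	/* still under the lock: no race window */
	}
	pthread_mutex_unlock(&devices_lock);
	return d;
}

int main(void)
{
	return find_or_alloc_device(0) ? 0 : 1;
}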
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 78a7bb28defe4c..86cc3b19faae86 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -173,7 +173,7 @@ static loff_t get_loop_size(struct loop_device *lo, struct file *file)
+ static bool lo_bdev_can_use_dio(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+- unsigned short sb_bsize = bdev_logical_block_size(backing_bdev);
++ unsigned int sb_bsize = bdev_logical_block_size(backing_bdev);
+
+ if (queue_logical_block_size(lo->lo_queue) < sb_bsize)
+ return false;
+@@ -977,7 +977,7 @@ loop_set_status_from_info(struct loop_device *lo,
+ return 0;
+ }
+
+-static unsigned short loop_default_blocksize(struct loop_device *lo,
++static unsigned int loop_default_blocksize(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+ /* In case of direct I/O, match underlying block size */
+@@ -986,7 +986,7 @@ static unsigned short loop_default_blocksize(struct loop_device *lo,
+ return SECTOR_SIZE;
+ }
+
+-static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
++static int loop_reconfigure_limits(struct loop_device *lo, unsigned int bsize)
+ {
+ struct file *file = lo->lo_backing_file;
+ struct inode *inode = file->f_mapping->host;
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 2633f7356fac72..3ad5fdbc681141 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -664,12 +664,21 @@ static inline char *ublk_queue_cmd_buf(struct ublk_device *ub, int q_id)
+ return ublk_get_queue(ub, q_id)->io_cmd_buf;
+ }
+
++static inline int __ublk_queue_cmd_buf_size(int depth)
++{
++ return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE);
++}
++
+ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
+ {
+ struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
+
+- return round_up(ubq->q_depth * sizeof(struct ublksrv_io_desc),
+- PAGE_SIZE);
++ return __ublk_queue_cmd_buf_size(ubq->q_depth);
++}
++
++static int ublk_max_cmd_buf_size(void)
++{
++ return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH);
+ }
+
+ static inline bool ublk_queue_can_use_recovery_reissue(
+@@ -1322,7 +1331,7 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ struct ublk_device *ub = filp->private_data;
+ size_t sz = vma->vm_end - vma->vm_start;
+- unsigned max_sz = UBLK_MAX_QUEUE_DEPTH * sizeof(struct ublksrv_io_desc);
++ unsigned max_sz = ublk_max_cmd_buf_size();
+ unsigned long pfn, end, phys_off = vma->vm_pgoff << PAGE_SHIFT;
+ int q_id, ret = 0;
+
+@@ -2966,7 +2975,7 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
+ ret = ublk_ctrl_end_recovery(ub, cmd);
+ break;
+ default:
+- ret = -ENOTSUPP;
++ ret = -EOPNOTSUPP;
+ break;
+ }
+
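The ublk hunk above derives both the per-queue command-buffer size and the mmap upper bound from one helper that rounds the descriptor array up to whole pages, so the two computations can no longer disagree. A worked example of the rounding, using the kernel's power-of-two round_up() idiom (the 24-byte descriptor size is an assumption for illustration):

#include <stdio.h>

#define PAGE_SIZE	4096u
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)	/* y: power of two */

int main(void)
{
	unsigned int desc_size = 24;	/* assumed descriptor size */
	unsigned int depth = 42;	/* a depth that is not page-aligned */
	unsigned int bytes = depth * desc_size;	/* 1008 */

	printf("%u bytes -> %u bytes mapped\n",
	       bytes, round_up(bytes, PAGE_SIZE));	/* 1008 -> 4096 */
	return 0;
}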
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 194417abc1053c..43c96b73a7118f 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -471,18 +471,18 @@ static bool virtblk_prep_rq_batch(struct request *req)
+ return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK;
+ }
+
+-static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
++static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ struct request **rqlist)
+ {
++ struct request *req;
+ unsigned long flags;
+- int err;
+ bool kick;
+
+ spin_lock_irqsave(&vq->lock, flags);
+
+- while (!rq_list_empty(*rqlist)) {
+- struct request *req = rq_list_pop(rqlist);
++ while ((req = rq_list_pop(rqlist))) {
+ struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
++ int err;
+
+ err = virtblk_add_req(vq->vq, vbr);
+ if (err) {
+@@ -495,37 +495,33 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ kick = virtqueue_kick_prepare(vq->vq);
+ spin_unlock_irqrestore(&vq->lock, flags);
+
+- return kick;
++ if (kick)
++ virtqueue_notify(vq->vq);
+ }
+
+ static void virtio_queue_rqs(struct request **rqlist)
+ {
+- struct request *req, *next, *prev = NULL;
++ struct request *submit_list = NULL;
+ struct request *requeue_list = NULL;
++ struct request **requeue_lastp = &requeue_list;
++ struct virtio_blk_vq *vq = NULL;
++ struct request *req;
+
+- rq_list_for_each_safe(rqlist, req, next) {
+- struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
+- bool kick;
+-
+- if (!virtblk_prep_rq_batch(req)) {
+- rq_list_move(rqlist, &requeue_list, req, prev);
+- req = prev;
+- if (!req)
+- continue;
+- }
++ while ((req = rq_list_pop(rqlist))) {
++ struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
+
+- if (!next || req->mq_hctx != next->mq_hctx) {
+- req->rq_next = NULL;
+- kick = virtblk_add_req_batch(vq, rqlist);
+- if (kick)
+- virtqueue_notify(vq->vq);
++ if (vq && vq != this_vq)
++ virtblk_add_req_batch(vq, &submit_list);
++ vq = this_vq;
+
+- *rqlist = next;
+- prev = NULL;
+- } else
+- prev = req;
++ if (virtblk_prep_rq_batch(req))
++ rq_list_add(&submit_list, req); /* reverse order */
++ else
++ rq_list_add_tail(&requeue_lastp, req);
+ }
+
++ if (vq)
++ virtblk_add_req_batch(vq, &submit_list);
+ *rqlist = requeue_list;
+ }
+
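The rewritten virtio_queue_rqs() above pops every request off the plug list once, keeps a running submit list, and flushes it whenever the target virtqueue changes, so each batch costs one lock round trip and at most one kick; requests that fail preparation go to a requeue list instead. A stand-alone sketch of that grouping shape (everything illustrative):

#include <stdio.h>

static void flush_batch(int q, int count)
{
	if (count)
		printf("queue %d: submit %d request(s), kick once\n", q, count);
}

int main(void)
{
	int queue_of[] = { 0, 0, 1, 1, 1, 0 };	/* target queue per request */
	int n = sizeof(queue_of) / sizeof(queue_of[0]);
	int cur_q = -1, batch = 0;

	for (int i = 0; i < n; i++) {
		if (cur_q != -1 && queue_of[i] != cur_q) {
			flush_batch(cur_q, batch);	/* queue changed */
			batch = 0;
		}
		cur_q = queue_of[i];
		batch++;	/* a failed prep would be requeued instead */
	}
	flush_batch(cur_q, batch);	/* final partial batch */
	return 0;
}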
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index f9a7c790d7e2ec..723db989d9cec6 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -541,11 +541,10 @@ static const struct bcm_subver_table bcm_usb_subver_table[] = {
+ static const char *btbcm_get_board_name(struct device *dev)
+ {
+ #ifdef CONFIG_OF
+- struct device_node *root;
++ struct device_node *root __free(device_node) = of_find_node_by_path("/");
+ char *board_type;
+ const char *tmp;
+
+- root = of_find_node_by_path("/");
+ if (!root)
+ return NULL;
+
+@@ -555,7 +554,6 @@ static const char *btbcm_get_board_name(struct device *dev)
+ /* get rid of any '/' in the compatible string */
+ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
+ strreplace(board_type, '/', '-');
+- of_node_put(root);
+
+ return board_type;
+ #else
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 24d2f4f37d0fd5..deb8729f73a91d 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -1841,6 +1841,37 @@ static int btintel_boot_wait(struct hci_dev *hdev, ktime_t calltime, int msec)
+ return 0;
+ }
+
++static int btintel_boot_wait_d0(struct hci_dev *hdev, ktime_t calltime,
++ int msec)
++{
++ ktime_t delta, rettime;
++ unsigned long long duration;
++ int err;
++
++ bt_dev_info(hdev, "Waiting for device transition to d0");
++
++ err = btintel_wait_on_flag_timeout(hdev, INTEL_WAIT_FOR_D0,
++ TASK_INTERRUPTIBLE,
++ msecs_to_jiffies(msec));
++ if (err == -EINTR) {
++ bt_dev_err(hdev, "Device d0 move interrupted");
++ return -EINTR;
++ }
++
++ if (err) {
++ bt_dev_err(hdev, "Device d0 move timeout");
++ return -ETIMEDOUT;
++ }
++
++ rettime = ktime_get();
++ delta = ktime_sub(rettime, calltime);
++ duration = (unsigned long long)ktime_to_ns(delta) >> 10;
++
++ bt_dev_info(hdev, "Device moved to D0 in %llu usecs", duration);
++
++ return 0;
++}
++
+ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ {
+ ktime_t calltime;
+@@ -1849,6 +1880,7 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ calltime = ktime_get();
+
+ btintel_set_flag(hdev, INTEL_BOOTING);
++ btintel_set_flag(hdev, INTEL_WAIT_FOR_D0);
+
+ err = btintel_send_intel_reset(hdev, boot_addr);
+ if (err) {
+@@ -1861,13 +1893,28 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ * is done by the operational firmware sending bootup notification.
+ *
+ * Booting into operational firmware should not take longer than
+- * 1 second. However if that happens, then just fail the setup
++	 * 5 seconds. However, if that happens, then just fail the setup
+ * since something went wrong.
+ */
+- err = btintel_boot_wait(hdev, calltime, 1000);
+- if (err == -ETIMEDOUT)
++ err = btintel_boot_wait(hdev, calltime, 5000);
++ if (err == -ETIMEDOUT) {
+ btintel_reset_to_bootloader(hdev);
++ goto exit_error;
++ }
+
++ if (hdev->bus == HCI_PCI) {
++		/* In case of PCIe, after receiving the bootup event, the driver
++		 * performs D0 entry by writing 0 to the sleep control register
++		 * (see btintel_pcie_recv_event()). Firmware acks with an alive
++		 * interrupt indicating that the host is fully ready to perform
++		 * BT operations. Wait here until the INTEL_WAIT_FOR_D0 bit is
++		 * cleared.
++ */
++ calltime = ktime_get();
++ err = btintel_boot_wait_d0(hdev, calltime, 2000);
++ }
++
++exit_error:
+ return err;
+ }
+
+@@ -3273,7 +3320,7 @@ int btintel_configure_setup(struct hci_dev *hdev, const char *driver_name)
+ }
+ EXPORT_SYMBOL_GPL(btintel_configure_setup);
+
+-static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
++int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct intel_tlv *tlv = (void *)&skb->data[5];
+
+@@ -3301,6 +3348,7 @@ static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ recv_frame:
+ return hci_recv_frame(hdev, skb);
+ }
++EXPORT_SYMBOL_GPL(btintel_diagnostics);
+
+ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+@@ -3320,7 +3368,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ * indicating that the bootup completed.
+ */
+ btintel_bootup(hdev, ptr, len);
+- break;
++ kfree_skb(skb);
++ return 0;
+ case 0x06:
+ /* When the firmware loading completes the
+ * device sends out a vendor specific event
+@@ -3328,7 +3377,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ * loading.
+ */
+ btintel_secure_send_result(hdev, ptr, len);
+- break;
++ kfree_skb(skb);
++ return 0;
+ }
+ }
+
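In btintel_boot_wait_d0() above, the reported duration is ktime_to_ns(delta) >> 10: a shift-based approximation of nanoseconds to microseconds that divides by 1024 instead of 1000, reading roughly 2% low in exchange for avoiding a 64-bit division. A quick stand-alone check of the error:

#include <stdio.h>

int main(void)
{
	unsigned long long delta_ns = 1500000ULL;	/* a 1.5 ms boot */

	printf("exact: %llu usecs, shifted: %llu usecs\n",
	       delta_ns / 1000, delta_ns >> 10);	/* 1500 vs 1464 */
	return 0;
}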
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index aa70e4c2741653..b448c67e8ed94d 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -178,6 +178,7 @@ enum {
+ INTEL_ROM_LEGACY,
+ INTEL_ROM_LEGACY_NO_WBS_SUPPORT,
+ INTEL_ACPI_RESET_ACTIVE,
++ INTEL_WAIT_FOR_D0,
+
+ __INTEL_NUM_FLAGS,
+ };
+@@ -249,6 +250,7 @@ int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
+ int btintel_shutdown_combined(struct hci_dev *hdev);
+ void btintel_hw_error(struct hci_dev *hdev, u8 code);
+ void btintel_print_fseq_info(struct hci_dev *hdev);
++int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb);
+ #else
+
+ static inline int btintel_check_bdaddr(struct hci_dev *hdev)
+@@ -382,4 +384,9 @@ static inline void btintel_hw_error(struct hci_dev *hdev, u8 code)
+ static inline void btintel_print_fseq_info(struct hci_dev *hdev)
+ {
+ }
++
++static inline int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ return -EOPNOTSUPP;
++}
+ #endif
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 5a95eb267676e9..d239175718624b 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -47,6 +47,17 @@ MODULE_DEVICE_TABLE(pci, btintel_pcie_table);
+ #define BTINTEL_PCIE_HCI_SCO_PKT 0x00000003
+ #define BTINTEL_PCIE_HCI_EVT_PKT 0x00000004
+
++/* Alive interrupt context */
++enum {
++ BTINTEL_PCIE_ROM,
++ BTINTEL_PCIE_FW_DL,
++ BTINTEL_PCIE_HCI_RESET,
++ BTINTEL_PCIE_INTEL_HCI_RESET1,
++ BTINTEL_PCIE_INTEL_HCI_RESET2,
++ BTINTEL_PCIE_D0,
++ BTINTEL_PCIE_D3
++};
++
+ static inline void ipc_print_ia_ring(struct hci_dev *hdev, struct ia *ia,
+ u16 queue_num)
+ {
+@@ -289,8 +300,9 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data)
+ /* wait for interrupt from the device after booting up to primary
+ * bootloader.
+ */
++ data->alive_intr_ctxt = BTINTEL_PCIE_ROM;
+ err = wait_event_timeout(data->gp0_wait_q, data->gp0_received,
+- msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT));
++ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+ if (!err)
+ return -ETIME;
+
+@@ -301,12 +313,78 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data)
+ return 0;
+ }
+
++/* BIT(0) - ROM, BIT(1) - IML and BIT(3) - OP
++ * Sometimes, while switching firmware images from ROM to IML or from IML
++ * to OP, the previous image bit is not cleared by firmware when the alive
++ * interrupt is received. The driver needs to account for these sticky bits
++ * when deciding which image is currently running on the controller.
++ * Ex: 0x10 and 0x11 - both represent that the controller is running IML
++ */
++static inline bool btintel_pcie_in_rom(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_ROM &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW);
++}
++
++static inline bool btintel_pcie_in_op(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW;
++}
++
++static inline bool btintel_pcie_in_iml(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW);
++}
++
++static inline bool btintel_pcie_in_d3(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY;
++}
++
++static inline bool btintel_pcie_in_d0(struct btintel_pcie_data *data)
++{
++ return !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY);
++}
++
++static void btintel_pcie_wr_sleep_cntrl(struct btintel_pcie_data *data,
++ u32 dxstate)
++{
++ bt_dev_dbg(data->hdev, "writing sleep_ctl_reg: 0x%8.8x", dxstate);
++ btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG, dxstate);
++}
++
++static inline char *btintel_pcie_alivectxt_state2str(u32 alive_intr_ctxt)
++{
++ switch (alive_intr_ctxt) {
++ case BTINTEL_PCIE_ROM:
++ return "rom";
++ case BTINTEL_PCIE_FW_DL:
++ return "fw_dl";
++ case BTINTEL_PCIE_D0:
++ return "d0";
++ case BTINTEL_PCIE_D3:
++ return "d3";
++ case BTINTEL_PCIE_HCI_RESET:
++ return "hci_reset";
++ case BTINTEL_PCIE_INTEL_HCI_RESET1:
++ return "intel_reset1";
++ case BTINTEL_PCIE_INTEL_HCI_RESET2:
++ return "intel_reset2";
++ default:
++ return "unknown";
++ }
++ return "null";
++}
++
+ /* This function handles the MSI-X interrupt for gp0 cause (bit 0 in
+ * BTINTEL_PCIE_CSR_MSIX_HW_INT_CAUSES) which is sent for boot stage and image response.
+ */
+ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ {
+- u32 reg;
++ bool submit_rx, signal_waitq;
++ u32 reg, old_ctxt;
+
+ /* This interrupt is for three different causes and it is not easy to
+ * know what causes the interrupt. So, it compares each register value
+@@ -316,20 +394,87 @@ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ if (reg != data->boot_stage_cache)
+ data->boot_stage_cache = reg;
+
++ bt_dev_dbg(data->hdev, "Alive context: %s old_boot_stage: 0x%8.8x new_boot_stage: 0x%8.8x",
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt),
++ data->boot_stage_cache, reg);
+ reg = btintel_pcie_rd_reg32(data, BTINTEL_PCIE_CSR_IMG_RESPONSE_REG);
+ if (reg != data->img_resp_cache)
+ data->img_resp_cache = reg;
+
+ data->gp0_received = true;
+
+- /* If the boot stage is OP or IML, reset IA and start RX again */
+- if (data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW ||
+- data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) {
++ old_ctxt = data->alive_intr_ctxt;
++ submit_rx = false;
++ signal_waitq = false;
++
++ switch (data->alive_intr_ctxt) {
++ case BTINTEL_PCIE_ROM:
++ data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
++ signal_waitq = true;
++ break;
++ case BTINTEL_PCIE_FW_DL:
++		/* The error case is already handled; ideally, control should
++		 * not reach here.
++ */
++ break;
++ case BTINTEL_PCIE_INTEL_HCI_RESET1:
++ if (btintel_pcie_in_op(data)) {
++ submit_rx = true;
++ break;
++ }
++
++ if (btintel_pcie_in_iml(data)) {
++ submit_rx = true;
++ data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_INTEL_HCI_RESET2:
++ if (btintel_test_and_clear_flag(data->hdev, INTEL_WAIT_FOR_D0)) {
++ btintel_wake_up_flag(data->hdev, INTEL_WAIT_FOR_D0);
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ }
++ break;
++ case BTINTEL_PCIE_D0:
++ if (btintel_pcie_in_d3(data)) {
++ data->alive_intr_ctxt = BTINTEL_PCIE_D3;
++ signal_waitq = true;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_D3:
++ if (btintel_pcie_in_d0(data)) {
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ submit_rx = true;
++ signal_waitq = true;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_HCI_RESET:
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ submit_rx = true;
++ signal_waitq = true;
++ break;
++ default:
++ bt_dev_err(data->hdev, "Unknown state: 0x%2.2x",
++ data->alive_intr_ctxt);
++ break;
++ }
++
++ if (submit_rx) {
+ btintel_pcie_reset_ia(data);
+ btintel_pcie_start_rx(data);
+ }
+
+- wake_up(&data->gp0_wait_q);
++ if (signal_waitq) {
++ bt_dev_dbg(data->hdev, "wake up gp0 wait_q");
++ wake_up(&data->gp0_wait_q);
++ }
++
++ if (old_ctxt != data->alive_intr_ctxt)
++ bt_dev_dbg(data->hdev, "alive context changed: %s -> %s",
++ btintel_pcie_alivectxt_state2str(old_ctxt),
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+ }
+
+ /* This function handles the MSX-X interrupt for rx queue 0 which is for TX
+@@ -363,6 +508,83 @@ static void btintel_pcie_msix_tx_handle(struct btintel_pcie_data *data)
+ }
+ }
+
++static int btintel_pcie_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ struct hci_event_hdr *hdr = (void *)skb->data;
++ const char diagnostics_hdr[] = { 0x87, 0x80, 0x03 };
++ struct btintel_pcie_data *data = hci_get_drvdata(hdev);
++
++ if (skb->len > HCI_EVENT_HDR_SIZE && hdr->evt == 0xff &&
++ hdr->plen > 0) {
++ const void *ptr = skb->data + HCI_EVENT_HDR_SIZE + 1;
++ unsigned int len = skb->len - HCI_EVENT_HDR_SIZE - 1;
++
++ if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) {
++ switch (skb->data[2]) {
++ case 0x02:
++ /* When switching to the operational firmware
++ * the device sends a vendor specific event
++ * indicating that the bootup completed.
++ */
++ btintel_bootup(hdev, ptr, len);
++
++ /* If bootup event is from operational image,
++ * driver needs to write sleep control register to
++ * move into D0 state
++ */
++ if (btintel_pcie_in_op(data)) {
++ btintel_pcie_wr_sleep_cntrl(data, BTINTEL_PCIE_STATE_D0);
++ data->alive_intr_ctxt = BTINTEL_PCIE_INTEL_HCI_RESET2;
++ kfree_skb(skb);
++ return 0;
++ }
++
++ if (btintel_pcie_in_iml(data)) {
++ /* In case of IML, there is no concept
++ * of D0 transition. Just mimic as if
++ * IML moved to D0 by clearing INTEL_WAIT_FOR_D0
++ * bit and waking up the task waiting on
++ * INTEL_WAIT_FOR_D0. This is required
++					 * as intel_boot() is a common function for
++ * both IML and OP image loading.
++ */
++ if (btintel_test_and_clear_flag(data->hdev,
++ INTEL_WAIT_FOR_D0))
++ btintel_wake_up_flag(data->hdev,
++ INTEL_WAIT_FOR_D0);
++ }
++ kfree_skb(skb);
++ return 0;
++ case 0x06:
++ /* When the firmware loading completes the
++ * device sends out a vendor specific event
++ * indicating the result of the firmware
++ * loading.
++ */
++ btintel_secure_send_result(hdev, ptr, len);
++ kfree_skb(skb);
++ return 0;
++ }
++ }
++
++ /* Handle all diagnostics events separately. May still call
++ * hci_recv_frame.
++ */
++ if (len >= sizeof(diagnostics_hdr) &&
++ memcmp(&skb->data[2], diagnostics_hdr,
++ sizeof(diagnostics_hdr)) == 0) {
++ return btintel_diagnostics(hdev, skb);
++ }
++
++ /* This is a debug event that comes from IML and OP image when it
++		 * starts execution. There is no need to pass this event to the stack.
++ */
++ if (skb->data[2] == 0x97)
++ return 0;
++ }
++
++ return hci_recv_frame(hdev, skb);
++}
+ /* Process the received rx data
+ * It checks the frame header to identify the data type, creates an skb
+ * and calls the HCI API
+@@ -452,7 +674,7 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
+ hdev->stat.byte_rx += plen;
+
+ if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT)
+- ret = btintel_recv_event(hdev, new_skb);
++ ret = btintel_pcie_recv_event(hdev, new_skb);
+ else
+ ret = hci_recv_frame(hdev, new_skb);
+
+@@ -1040,8 +1262,11 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ struct sk_buff *skb)
+ {
+ struct btintel_pcie_data *data = hci_get_drvdata(hdev);
++ struct hci_command_hdr *cmd;
++ __u16 opcode = ~0;
+ int ret;
+ u32 type;
++ u32 old_ctxt;
+
+ /* Due to the fw limitation, the type header of the packet should be
+ * 4 bytes unlike 1 byte for UART. In UART, the firmware can read
+@@ -1060,6 +1285,8 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ switch (hci_skb_pkt_type(skb)) {
+ case HCI_COMMAND_PKT:
+ type = BTINTEL_PCIE_HCI_CMD_PKT;
++ cmd = (void *)skb->data;
++ opcode = le16_to_cpu(cmd->opcode);
+ if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) {
+ struct hci_command_hdr *cmd = (void *)skb->data;
+ __u16 opcode = le16_to_cpu(cmd->opcode);
+@@ -1095,6 +1322,30 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ bt_dev_err(hdev, "Failed to send frame (%d)", ret);
+ goto exit_error;
+ }
++
++ if (type == BTINTEL_PCIE_HCI_CMD_PKT &&
++ (opcode == HCI_OP_RESET || opcode == 0xfc01)) {
++ old_ctxt = data->alive_intr_ctxt;
++ data->alive_intr_ctxt =
++ (opcode == 0xfc01 ? BTINTEL_PCIE_INTEL_HCI_RESET1 :
++ BTINTEL_PCIE_HCI_RESET);
++ bt_dev_dbg(data->hdev, "sent cmd: 0x%4.4x alive context changed: %s -> %s",
++ opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++ if (opcode == HCI_OP_RESET) {
++ data->gp0_received = false;
++ ret = wait_event_timeout(data->gp0_wait_q,
++ data->gp0_received,
++ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
++ if (!ret) {
++ hdev->stat.err_tx++;
++ bt_dev_err(hdev, "No alive interrupt received for %s",
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++ ret = -ETIME;
++ goto exit_error;
++ }
++ }
++ }
+ hdev->stat.byte_tx += skb->len;
+ kfree_skb(skb);
+
+diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h
+index baaff70420f575..8b7824ad005a2a 100644
+--- a/drivers/bluetooth/btintel_pcie.h
++++ b/drivers/bluetooth/btintel_pcie.h
+@@ -12,6 +12,7 @@
+ #define BTINTEL_PCIE_CSR_HW_REV_REG (BTINTEL_PCIE_CSR_BASE + 0x028)
+ #define BTINTEL_PCIE_CSR_RF_ID_REG (BTINTEL_PCIE_CSR_BASE + 0x09C)
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_REG (BTINTEL_PCIE_CSR_BASE + 0x108)
++#define BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG (BTINTEL_PCIE_CSR_BASE + 0x114)
+ #define BTINTEL_PCIE_CSR_CI_ADDR_LSB_REG (BTINTEL_PCIE_CSR_BASE + 0x118)
+ #define BTINTEL_PCIE_CSR_CI_ADDR_MSB_REG (BTINTEL_PCIE_CSR_BASE + 0x11C)
+ #define BTINTEL_PCIE_CSR_IMG_RESPONSE_REG (BTINTEL_PCIE_CSR_BASE + 0x12C)
+@@ -32,6 +33,7 @@
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_IML_LOCKDOWN (BIT(11))
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_MAC_ACCESS_ON (BIT(16))
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_ALIVE (BIT(23))
++#define BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY (BIT(24))
+
+ /* Registers for MSI-X */
+ #define BTINTEL_PCIE_CSR_MSIX_BASE (0x2000)
+@@ -55,6 +57,16 @@ enum msix_hw_int_causes {
+ BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0 = BIT(0), /* cause 32 */
+ };
+
++/* PCIe device states
++ * Host-Device interface is active
++ * Host-Device interface is inactive (as reflected by IPC_SLEEP_CONTROL_CSR_AD)
++ * Host-Device interface is inactive (as reflected by IPC_SLEEP_CONTROL_CSR_AD)
++ */
++enum {
++ BTINTEL_PCIE_STATE_D0 = 0,
++ BTINTEL_PCIE_STATE_D3_HOT = 2,
++ BTINTEL_PCIE_STATE_D3_COLD = 3,
++};
+ #define BTINTEL_PCIE_MSIX_NON_AUTO_CLEAR_CAUSE BIT(7)
+
+ /* Minimum and Maximum number of MSI-X Vector
+@@ -67,7 +79,7 @@ enum msix_hw_int_causes {
+ #define BTINTEL_DEFAULT_MAC_ACCESS_TIMEOUT_US 200000
+
+ /* Default interrupt timeout in msec */
+-#define BTINTEL_DEFAULT_INTR_TIMEOUT 3000
++#define BTINTEL_DEFAULT_INTR_TIMEOUT_MS 3000
+
+ /* The number of descriptors in TX/RX queues */
+ #define BTINTEL_DESCS_COUNT 16
+@@ -343,6 +355,7 @@ struct rxq {
+ * @ia: Index Array struct
+ * @txq: TX Queue struct
+ * @rxq: RX Queue struct
++ * @alive_intr_ctxt: Alive interrupt context
+ */
+ struct btintel_pcie_data {
+ struct pci_dev *pdev;
+@@ -389,6 +402,7 @@ struct btintel_pcie_data {
+ struct ia ia;
+ struct txq txq;
+ struct rxq rxq;
++ u32 alive_intr_ctxt;
+ };
+
+ static inline u32 btintel_pcie_rd_reg32(struct btintel_pcie_data *data,
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 2b7c80043aa2eb..4b30d13ad5e960 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -1215,7 +1215,6 @@ static int btmtk_usb_isointf_init(struct hci_dev *hdev)
+ struct sk_buff *skb;
+ int err;
+
+- init_usb_anchor(&btmtk_data->isopkt_anchor);
+ spin_lock_init(&btmtk_data->isorxlock);
+
+ __set_mtk_intr_interface(hdev);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2408e50743ca64..09afd75145ed9c 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2670,6 +2670,7 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ }
+
+ set_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags);
++ init_usb_anchor(&btmtk_data->isopkt_anchor);
+ }
+
+ static void btusb_mtk_release_iso_intf(struct btusb_data *data)
+diff --git a/drivers/bus/mhi/host/trace.h b/drivers/bus/mhi/host/trace.h
+index 95613c8ebe0691..3e0c41777429eb 100644
+--- a/drivers/bus/mhi/host/trace.h
++++ b/drivers/bus/mhi/host/trace.h
+@@ -9,6 +9,7 @@
+ #if !defined(_TRACE_EVENT_MHI_HOST_H) || defined(TRACE_HEADER_MULTI_READ)
+ #define _TRACE_EVENT_MHI_HOST_H
+
++#include <linux/byteorder/generic.h>
+ #include <linux/tracepoint.h>
+ #include <linux/trace_seq.h>
+ #include "../common.h"
+@@ -97,18 +98,18 @@ TRACE_EVENT(mhi_gen_tre,
+ __string(name, mhi_cntrl->mhi_dev->name)
+ __field(int, ch_num)
+ __field(void *, wp)
+- __field(__le64, tre_ptr)
+- __field(__le32, dword0)
+- __field(__le32, dword1)
++ __field(uint64_t, tre_ptr)
++ __field(uint32_t, dword0)
++ __field(uint32_t, dword1)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name);
+ __entry->ch_num = mhi_chan->chan;
+ __entry->wp = mhi_tre;
+- __entry->tre_ptr = mhi_tre->ptr;
+- __entry->dword0 = mhi_tre->dword[0];
+- __entry->dword1 = mhi_tre->dword[1];
++ __entry->tre_ptr = le64_to_cpu(mhi_tre->ptr);
++ __entry->dword0 = le32_to_cpu(mhi_tre->dword[0]);
++ __entry->dword1 = le32_to_cpu(mhi_tre->dword[1]);
+ ),
+
+ TP_printk("%s: Chan: %d TRE: 0x%p TRE buf: 0x%llx DWORD0: 0x%08x DWORD1: 0x%08x\n",
+@@ -176,19 +177,19 @@ DECLARE_EVENT_CLASS(mhi_process_event_ring,
+
+ TP_STRUCT__entry(
+ __string(name, mhi_cntrl->mhi_dev->name)
+- __field(__le32, dword0)
+- __field(__le32, dword1)
++ __field(uint32_t, dword0)
++ __field(uint32_t, dword1)
+ __field(int, state)
+- __field(__le64, ptr)
++ __field(uint64_t, ptr)
+ __field(void *, rp)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name);
+ __entry->rp = rp;
+- __entry->ptr = rp->ptr;
+- __entry->dword0 = rp->dword[0];
+- __entry->dword1 = rp->dword[1];
++ __entry->ptr = le64_to_cpu(rp->ptr);
++ __entry->dword0 = le32_to_cpu(rp->dword[0]);
++ __entry->dword1 = le32_to_cpu(rp->dword[1]);
+ __entry->state = MHI_TRE_GET_EV_STATE(rp);
+ ),
+
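The trace.h hunks above stop storing raw __le64/__le32 wire values in the ring-buffer fields and convert them with le64_to_cpu()/le32_to_cpu() at assignment time, so the hex format specifiers print correctly on big-endian hosts too. A portable stand-in showing the conversion the trace now performs:

#include <stdio.h>
#include <stdint.h>

/* Byte-wise little-endian decode: correct on any host endianness. */
static uint32_t le32_to_host(const uint8_t b[4])
{
	return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
	       (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

int main(void)
{
	uint8_t wire[4] = { 0x78, 0x56, 0x34, 0x12 };	/* 0x12345678 on the wire */

	printf("dword0: 0x%08x\n", le32_to_host(wire));	/* 0x12345678 everywhere */
	return 0;
}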
+diff --git a/drivers/clk/clk-apple-nco.c b/drivers/clk/clk-apple-nco.c
+index 39472a51530a34..457a48d4894128 100644
+--- a/drivers/clk/clk-apple-nco.c
++++ b/drivers/clk/clk-apple-nco.c
+@@ -297,6 +297,9 @@ static int applnco_probe(struct platform_device *pdev)
+ memset(&init, 0, sizeof(init));
+ init.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%s-%d", np->name, i);
++ if (!init.name)
++ return -ENOMEM;
++
+ init.ops = &applnco_ops;
+ init.parent_data = &pdata;
+ init.num_parents = 1;
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index bf4d8ddc93aea1..934e53a96dddac 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -7,6 +7,7 @@
+ */
+
+ #include <linux/platform_device.h>
++#include <linux/clk.h>
+ #include <linux/clk-provider.h>
+ #include <linux/slab.h>
+ #include <linux/io.h>
+@@ -512,6 +513,7 @@ static int axi_clkgen_probe(struct platform_device *pdev)
+ struct clk_init_data init;
+ const char *parent_names[2];
+ const char *clk_name;
++ struct clk *axi_clk;
+ unsigned int i;
+ int ret;
+
+@@ -528,8 +530,24 @@ static int axi_clkgen_probe(struct platform_device *pdev)
+ return PTR_ERR(axi_clkgen->base);
+
+ init.num_parents = of_clk_get_parent_count(pdev->dev.of_node);
+- if (init.num_parents < 1 || init.num_parents > 2)
+- return -EINVAL;
++
++ axi_clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
++ if (!IS_ERR(axi_clk)) {
++ if (init.num_parents < 2 || init.num_parents > 3)
++ return -EINVAL;
++
++ init.num_parents -= 1;
++ } else {
++ /*
++ * Legacy... So that old DTs which do not have clock-names still
++ * work. In this case we don't explicitly enable the AXI bus
++ * clock.
++ */
++ if (PTR_ERR(axi_clk) != -ENOENT)
++ return PTR_ERR(axi_clk);
++ if (init.num_parents < 1 || init.num_parents > 2)
++ return -EINVAL;
++ }
+
+ for (i = 0; i < init.num_parents; i++) {
+ parent_names[i] = of_clk_get_parent_name(pdev->dev.of_node, i);
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index 22fbea61c3dcc0..fdd8ea989ed24a 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -3,8 +3,10 @@
+ #include <linux/delay.h>
+ #include <linux/clk-provider.h>
+ #include <linux/io.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/platform_device.h>
+ #include <linux/property.h>
++#include <linux/regmap.h>
+ #include <linux/reset-controller.h>
+ #include <dt-bindings/clock/en7523-clk.h>
+ #include <dt-bindings/reset/airoha,en7581-reset.h>
+@@ -31,16 +33,11 @@
+ #define REG_RESET_CONTROL_PCIE1 BIT(27)
+ #define REG_RESET_CONTROL_PCIE2 BIT(26)
+ /* EN7581 */
+-#define REG_PCIE0_MEM 0x00
+-#define REG_PCIE0_MEM_MASK 0x04
+-#define REG_PCIE1_MEM 0x08
+-#define REG_PCIE1_MEM_MASK 0x0c
+-#define REG_PCIE2_MEM 0x10
+-#define REG_PCIE2_MEM_MASK 0x14
+ #define REG_NP_SCU_PCIC 0x88
+ #define REG_NP_SCU_SSTR 0x9c
+ #define REG_PCIE_XSI0_SEL_MASK GENMASK(14, 13)
+ #define REG_PCIE_XSI1_SEL_MASK GENMASK(12, 11)
++#define REG_CRYPTO_CLKSRC2 0x20c
+
+ #define REG_RST_CTRL2 0x00
+ #define REG_RST_CTRL1 0x04
+@@ -84,7 +81,8 @@ struct en_clk_soc_data {
+ const u16 *idx_map;
+ u16 idx_map_nr;
+ } reset;
+- int (*hw_init)(struct platform_device *pdev, void __iomem *np_base);
++ int (*hw_init)(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data);
+ };
+
+ static const u32 gsw_base[] = { 400000000, 500000000 };
+@@ -92,6 +90,10 @@ static const u32 emi_base[] = { 333000000, 400000000 };
+ static const u32 bus_base[] = { 500000000, 540000000 };
+ static const u32 slic_base[] = { 100000000, 3125000 };
+ static const u32 npu_base[] = { 333000000, 400000000, 500000000 };
++/* EN7581 */
++static const u32 emi7581_base[] = { 540000000, 480000000, 400000000, 300000000 };
++static const u32 npu7581_base[] = { 800000000, 750000000, 720000000, 600000000 };
++static const u32 crypto_base[] = { 540000000, 480000000 };
+
+ static const struct en_clk_desc en7523_base_clks[] = {
+ {
+@@ -189,6 +191,102 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ }
+ };
+
++static const struct en_clk_desc en7581_base_clks[] = {
++ {
++ .id = EN7523_CLK_GSW,
++ .name = "gsw",
++
++ .base_reg = REG_GSW_CLK_DIV_SEL,
++ .base_bits = 1,
++ .base_shift = 8,
++ .base_values = gsw_base,
++ .n_base_values = ARRAY_SIZE(gsw_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_EMI,
++ .name = "emi",
++
++ .base_reg = REG_EMI_CLK_DIV_SEL,
++ .base_bits = 2,
++ .base_shift = 8,
++ .base_values = emi7581_base,
++ .n_base_values = ARRAY_SIZE(emi7581_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_BUS,
++ .name = "bus",
++
++ .base_reg = REG_BUS_CLK_DIV_SEL,
++ .base_bits = 1,
++ .base_shift = 8,
++ .base_values = bus_base,
++ .n_base_values = ARRAY_SIZE(bus_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_SLIC,
++ .name = "slic",
++
++ .base_reg = REG_SPI_CLK_FREQ_SEL,
++ .base_bits = 1,
++ .base_shift = 0,
++ .base_values = slic_base,
++ .n_base_values = ARRAY_SIZE(slic_base),
++
++ .div_reg = REG_SPI_CLK_DIV_SEL,
++ .div_bits = 5,
++ .div_shift = 24,
++ .div_val0 = 20,
++ .div_step = 2,
++ }, {
++ .id = EN7523_CLK_SPI,
++ .name = "spi",
++
++ .base_reg = REG_SPI_CLK_DIV_SEL,
++
++ .base_value = 400000000,
++
++ .div_bits = 5,
++ .div_shift = 8,
++ .div_val0 = 40,
++ .div_step = 2,
++ }, {
++ .id = EN7523_CLK_NPU,
++ .name = "npu",
++
++ .base_reg = REG_NPU_CLK_DIV_SEL,
++ .base_bits = 2,
++ .base_shift = 8,
++ .base_values = npu7581_base,
++ .n_base_values = ARRAY_SIZE(npu7581_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_CRYPTO,
++ .name = "crypto",
++
++ .base_reg = REG_CRYPTO_CLKSRC2,
++ .base_bits = 1,
++ .base_shift = 0,
++ .base_values = crypto_base,
++ .n_base_values = ARRAY_SIZE(crypto_base),
++ }
++};
++
+ static const u16 en7581_rst_ofs[] = {
+ REG_RST_CTRL2,
+ REG_RST_CTRL1,
+@@ -252,15 +350,11 @@ static const u16 en7581_rst_map[] = {
+ [EN7581_XPON_MAC_RST] = RST_NR_PER_BANK + 31,
+ };
+
+-static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i)
++static u32 en7523_get_base_rate(const struct en_clk_desc *desc, u32 val)
+ {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
+- u32 val;
+-
+ if (!desc->base_bits)
+ return desc->base_value;
+
+- val = readl(base + desc->base_reg);
+ val >>= desc->base_shift;
+ val &= (1 << desc->base_bits) - 1;
+
+@@ -270,16 +364,11 @@ static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i)
+ return desc->base_values[val];
+ }
+
+-static u32 en7523_get_div(void __iomem *base, int i)
++static u32 en7523_get_div(const struct en_clk_desc *desc, u32 val)
+ {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
+- u32 reg, val;
+-
+ if (!desc->div_bits)
+ return 1;
+
+- reg = desc->div_reg ? desc->div_reg : desc->base_reg;
+- val = readl(base + reg);
+ val >>= desc->div_shift;
+ val &= (1 << desc->div_bits) - 1;
+
+@@ -412,44 +501,83 @@ static void en7581_pci_disable(struct clk_hw *hw)
+ usleep_range(1000, 2000);
+ }
+
+-static int en7581_clk_hw_init(struct platform_device *pdev,
+- void __iomem *np_base)
++static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
++ void __iomem *base, void __iomem *np_base)
+ {
+- void __iomem *pb_base;
+- u32 val;
++ struct clk_hw *hw;
++ u32 rate;
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
++ const struct en_clk_desc *desc = &en7523_base_clks[i];
++ u32 reg = desc->div_reg ? desc->div_reg : desc->base_reg;
++ u32 val = readl(base + desc->base_reg);
+
+- pb_base = devm_platform_ioremap_resource(pdev, 3);
+- if (IS_ERR(pb_base))
+- return PTR_ERR(pb_base);
++ rate = en7523_get_base_rate(desc, val);
++ val = readl(base + reg);
++ rate /= en7523_get_div(desc, val);
+
+- val = readl(np_base + REG_NP_SCU_SSTR);
+- val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK);
+- writel(val, np_base + REG_NP_SCU_SSTR);
+- val = readl(np_base + REG_NP_SCU_PCIC);
+- writel(val | 3, np_base + REG_NP_SCU_PCIC);
++ hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate);
++ if (IS_ERR(hw)) {
++ pr_err("Failed to register clk %s: %ld\n",
++ desc->name, PTR_ERR(hw));
++ continue;
++ }
++
++ clk_data->hws[desc->id] = hw;
++ }
++
++ hw = en7523_register_pcie_clk(dev, np_base);
++ clk_data->hws[EN7523_CLK_PCIE] = hw;
++
++ clk_data->num = EN7523_NUM_CLOCKS;
++}
++
++static int en7523_clk_hw_init(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data)
++{
++ void __iomem *base, *np_base;
++
++ base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(base))
++ return PTR_ERR(base);
++
++ np_base = devm_platform_ioremap_resource(pdev, 1);
++ if (IS_ERR(np_base))
++ return PTR_ERR(np_base);
+
+- writel(0x20000000, pb_base + REG_PCIE0_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE0_MEM_MASK);
+- writel(0x24000000, pb_base + REG_PCIE1_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE1_MEM_MASK);
+- writel(0x28000000, pb_base + REG_PCIE2_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE2_MEM_MASK);
++ en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
+
+ return 0;
+ }
+
+-static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
+- void __iomem *base, void __iomem *np_base)
++static void en7581_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
++ struct regmap *map, void __iomem *base)
+ {
+ struct clk_hw *hw;
+ u32 rate;
+ int i;
+
+- for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
++ for (i = 0; i < ARRAY_SIZE(en7581_base_clks); i++) {
++ const struct en_clk_desc *desc = &en7581_base_clks[i];
++ u32 val, reg = desc->div_reg ? desc->div_reg : desc->base_reg;
++ int err;
+
+- rate = en7523_get_base_rate(base, i);
+- rate /= en7523_get_div(base, i);
++ err = regmap_read(map, desc->base_reg, &val);
++ if (err) {
++ pr_err("Failed reading fixed clk rate %s: %d\n",
++ desc->name, err);
++ continue;
++ }
++ rate = en7523_get_base_rate(desc, val);
++
++ err = regmap_read(map, reg, &val);
++ if (err) {
++ pr_err("Failed reading fixed clk div %s: %d\n",
++ desc->name, err);
++ continue;
++ }
++ rate /= en7523_get_div(desc, val);
+
+ hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate);
+ if (IS_ERR(hw)) {
+@@ -461,12 +589,38 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+ clk_data->hws[desc->id] = hw;
+ }
+
+- hw = en7523_register_pcie_clk(dev, np_base);
++ hw = en7523_register_pcie_clk(dev, base);
+ clk_data->hws[EN7523_CLK_PCIE] = hw;
+
+ clk_data->num = EN7523_NUM_CLOCKS;
+ }
+
++static int en7581_clk_hw_init(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data)
++{
++ void __iomem *np_base;
++ struct regmap *map;
++ u32 val;
++
++ map = syscon_regmap_lookup_by_compatible("airoha,en7581-chip-scu");
++ if (IS_ERR(map))
++ return PTR_ERR(map);
++
++ np_base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(np_base))
++ return PTR_ERR(np_base);
++
++ en7581_register_clocks(&pdev->dev, clk_data, map, np_base);
++
++ val = readl(np_base + REG_NP_SCU_SSTR);
++ val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK);
++ writel(val, np_base + REG_NP_SCU_SSTR);
++ val = readl(np_base + REG_NP_SCU_PCIC);
++ writel(val | 3, np_base + REG_NP_SCU_PCIC);
++
++ return 0;
++}
++
+ static int en7523_reset_update(struct reset_controller_dev *rcdev,
+ unsigned long id, bool assert)
+ {
+@@ -533,7 +687,7 @@ static int en7523_reset_register(struct platform_device *pdev,
+ if (!soc_data->reset.idx_map_nr)
+ return 0;
+
+- base = devm_platform_ioremap_resource(pdev, 2);
++ base = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+@@ -561,31 +715,18 @@ static int en7523_clk_probe(struct platform_device *pdev)
+ struct device_node *node = pdev->dev.of_node;
+ const struct en_clk_soc_data *soc_data;
+ struct clk_hw_onecell_data *clk_data;
+- void __iomem *base, *np_base;
+ int r;
+
+- base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(base))
+- return PTR_ERR(base);
+-
+- np_base = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(np_base))
+- return PTR_ERR(np_base);
+-
+- soc_data = device_get_match_data(&pdev->dev);
+- if (soc_data->hw_init) {
+- r = soc_data->hw_init(pdev, np_base);
+- if (r)
+- return r;
+- }
+-
+ clk_data = devm_kzalloc(&pdev->dev,
+ struct_size(clk_data, hws, EN7523_NUM_CLOCKS),
+ GFP_KERNEL);
+ if (!clk_data)
+ return -ENOMEM;
+
+- en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
++ soc_data = device_get_match_data(&pdev->dev);
++ r = soc_data->hw_init(pdev, clk_data);
++ if (r)
++ return r;
+
+ r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ if (r)
+@@ -608,6 +749,7 @@ static const struct en_clk_soc_data en7523_data = {
+ .prepare = en7523_pci_prepare,
+ .unprepare = en7523_pci_unprepare,
+ },
++ .hw_init = en7523_clk_hw_init,
+ };
+
+ static const struct en_clk_soc_data en7581_data = {
+diff --git a/drivers/clk/clk-loongson2.c b/drivers/clk/clk-loongson2.c
+index 820bb1e9e3b79a..7082b4309c6f15 100644
+--- a/drivers/clk/clk-loongson2.c
++++ b/drivers/clk/clk-loongson2.c
+@@ -29,8 +29,10 @@ enum loongson2_clk_type {
+ struct loongson2_clk_provider {
+ void __iomem *base;
+ struct device *dev;
+- struct clk_hw_onecell_data clk_data;
+ spinlock_t clk_lock; /* protect access to DIV registers */
++
++ /* Must be last --ends in a flexible-array member. */
++ struct clk_hw_onecell_data clk_data;
+ };
+
+ struct loongson2_clk_data {
+@@ -304,7 +306,7 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ return PTR_ERR(clp->base);
+
+ spin_lock_init(&clp->clk_lock);
+- clp->clk_data.num = clks_num + 1;
++ clp->clk_data.num = clks_num;
+ clp->dev = dev;
+
+ for (i = 0; i < clks_num; i++) {
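clk_hw_onecell_data ends in a flexible array member, so the loongson2 struct reorder above has to make it the last member: anywhere else, the trailing array would overlap whatever follows (strict ISO C disallows embedding such a struct at all; the kernel relies on compiler support and warns when the carrier is not last). A sketch of the layout and the matching allocation, with the num off-by-one also corrected:

#include <stdio.h>
#include <stdlib.h>

struct onecell_data {		/* stand-in for clk_hw_onecell_data */
	unsigned int num;
	void *hws[];		/* flexible array member */
};

struct provider {		/* stand-in for loongson2_clk_provider */
	void *base;
	int lock;		/* fixed-size members first... */
	struct onecell_data clk_data;	/* ...flex-array carrier last */
};

int main(void)
{
	unsigned int n = 8;
	struct provider *p = malloc(sizeof(*p) + n * sizeof(p->clk_data.hws[0]));

	if (!p)
		return 1;
	p->clk_data.num = n;	/* n entries, not n + 1 */
	printf("room for %u hws\n", p->clk_data.num);
	free(p);
	return 0;
}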
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index 1becba2b62d0be..b12b00a2f07fab 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -252,9 +252,11 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ pll_div = FIELD_PREP(PLL_RDIV_MASK, rate->rdiv) | rate->odiv |
+ FIELD_PREP(PLL_MFI_MASK, rate->mfi);
+ writel_relaxed(pll_div, pll->base + PLL_DIV);
++ readl(pll->base + PLL_DIV);
+ if (pll->flags & CLK_FRACN_GPPLL_FRACN) {
+ writel_relaxed(rate->mfd, pll->base + PLL_DENOMINATOR);
+ writel_relaxed(FIELD_PREP(PLL_MFN_MASK, rate->mfn), pll->base + PLL_NUMERATOR);
++ readl(pll->base + PLL_NUMERATOR);
+ }
+
+ /* Wait for 5us according to fracn mode pll doc */
+@@ -263,6 +265,7 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ /* Enable Powerup */
+ tmp |= POWERUP_MASK;
+ writel_relaxed(tmp, pll->base + PLL_CTRL);
++ readl(pll->base + PLL_CTRL);
+
+ /* Wait Lock */
+ ret = clk_fracn_gppll_wait_lock(pll);
+@@ -300,14 +303,15 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
+
+ val |= POWERUP_MASK;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+-
+- val |= CLKMUX_EN;
+- writel_relaxed(val, pll->base + PLL_CTRL);
++ readl(pll->base + PLL_CTRL);
+
+ ret = clk_fracn_gppll_wait_lock(pll);
+ if (ret)
+ return ret;
+
++ val |= CLKMUX_EN;
++ writel_relaxed(val, pll->base + PLL_CTRL);
++
+ val &= ~CLKMUX_BYPASS;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+
+diff --git a/drivers/clk/imx/clk-imx8-acm.c b/drivers/clk/imx/clk-imx8-acm.c
+index 1bdb480cc96c6b..37c61d10f213ce 100644
+--- a/drivers/clk/imx/clk-imx8-acm.c
++++ b/drivers/clk/imx/clk-imx8-acm.c
+@@ -289,9 +289,9 @@ static int clk_imx_acm_attach_pm_domains(struct device *dev,
+ DL_FLAG_STATELESS |
+ DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+- if (IS_ERR(dev_pm->pd_dev_link[i])) {
++ if (!dev_pm->pd_dev_link[i]) {
+ dev_pm_domain_detach(dev_pm->pd_dev[i], false);
+- ret = PTR_ERR(dev_pm->pd_dev_link[i]);
++ ret = -EINVAL;
+ goto detach_pm;
+ }
+ }
+diff --git a/drivers/clk/imx/clk-lpcg-scu.c b/drivers/clk/imx/clk-lpcg-scu.c
+index dd5abd09f3e206..620afdf8dc03e9 100644
+--- a/drivers/clk/imx/clk-lpcg-scu.c
++++ b/drivers/clk/imx/clk-lpcg-scu.c
+@@ -6,10 +6,12 @@
+
+ #include <linux/bits.h>
+ #include <linux/clk-provider.h>
++#include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/units.h>
+
+ #include "clk-scu.h"
+
+@@ -41,6 +43,29 @@ struct clk_lpcg_scu {
+
+ #define to_clk_lpcg_scu(_hw) container_of(_hw, struct clk_lpcg_scu, hw)
+
++/* e10858 - LPCG clock gating register synchronization errata */
++static void lpcg_e10858_writel(unsigned long rate, void __iomem *reg, u32 val)
++{
++ writel(val, reg);
++
++ if (rate >= 24 * HZ_PER_MHZ || rate == 0) {
++ /*
++ * The time taken to access the LPCG registers from the AP core
++ * through the interconnect is longer than the minimum delay
++ * of 4 clock cycles required by the errata.
++ * Adding a readl will provide sufficient delay to prevent
++ * back-to-back writes.
++ */
++ readl(reg);
++ } else {
++ /*
++ * For clocks running below 24 MHz, wait a minimum of
++ * 4 clock cycles.
++ */
++ ndelay(4 * (DIV_ROUND_UP(1000 * HZ_PER_MHZ, rate)));
++ }
++}
++
+ static int clk_lpcg_scu_enable(struct clk_hw *hw)
+ {
+ struct clk_lpcg_scu *clk = to_clk_lpcg_scu(hw);
+@@ -57,7 +82,8 @@ static int clk_lpcg_scu_enable(struct clk_hw *hw)
+ val |= CLK_GATE_SCU_LPCG_HW_SEL;
+
+ reg |= val << clk->bit_idx;
+- writel(reg, clk->reg);
++
++ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
+
+ spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
+
+@@ -74,7 +100,7 @@ static void clk_lpcg_scu_disable(struct clk_hw *hw)
+
+ reg = readl_relaxed(clk->reg);
+ reg &= ~(CLK_GATE_SCU_LPCG_MASK << clk->bit_idx);
+- writel(reg, clk->reg);
++ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
+
+ spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
+ }
+@@ -145,13 +171,8 @@ static int __maybe_unused imx_clk_lpcg_scu_resume(struct device *dev)
+ {
+ struct clk_lpcg_scu *clk = dev_get_drvdata(dev);
+
+- /*
+- * FIXME: Sometimes writes don't work unless the CPU issues
+- * them twice
+- */
+-
+- writel(clk->state, clk->reg);
+ writel(clk->state, clk->reg);
++ lpcg_e10858_writel(0, clk->reg, clk->state);
+ dev_dbg(dev, "restore lpcg state 0x%x\n", clk->state);
+
+ return 0;
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index b1dd0c08e091b6..b27186aaf2a156 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -596,7 +596,7 @@ static int __maybe_unused imx_clk_scu_suspend(struct device *dev)
+ clk->rate = clk_scu_recalc_rate(&clk->hw, 0);
+ else
+ clk->rate = clk_hw_get_rate(&clk->hw);
+- clk->is_enabled = clk_hw_is_enabled(&clk->hw);
++ clk->is_enabled = clk_hw_is_prepared(&clk->hw);
+
+ if (clk->parent)
+ dev_dbg(dev, "save parent %s idx %u\n", clk_hw_get_name(clk->parent),
+diff --git a/drivers/clk/mediatek/Kconfig b/drivers/clk/mediatek/Kconfig
+index 70a005e7e1b180..486401e1f2f19c 100644
+--- a/drivers/clk/mediatek/Kconfig
++++ b/drivers/clk/mediatek/Kconfig
+@@ -887,13 +887,6 @@ config COMMON_CLK_MT8195_APUSYS
+ help
+ This driver supports MediaTek MT8195 AI Processor Unit System clocks.
+
+-config COMMON_CLK_MT8195_AUDSYS
+- tristate "Clock driver for MediaTek MT8195 audsys"
+- depends on COMMON_CLK_MT8195
+- default COMMON_CLK_MT8195
+- help
+- This driver supports MediaTek MT8195 audsys clocks.
+-
+ config COMMON_CLK_MT8195_IMP_IIC_WRAP
+ tristate "Clock driver for MediaTek MT8195 imp_iic_wrap"
+ depends on COMMON_CLK_MT8195
+@@ -908,14 +901,6 @@ config COMMON_CLK_MT8195_MFGCFG
+ help
+ This driver supports MediaTek MT8195 mfgcfg clocks.
+
+-config COMMON_CLK_MT8195_MSDC
+- tristate "Clock driver for MediaTek MT8195 msdc"
+- depends on COMMON_CLK_MT8195
+- default COMMON_CLK_MT8195
+- help
+- This driver supports MediaTek MT8195 MMC and SD Controller's
+- msdc and msdc_top clocks.
+-
+ config COMMON_CLK_MT8195_SCP_ADSP
+ tristate "Clock driver for MediaTek MT8195 scp_adsp"
+ depends on COMMON_CLK_MT8195
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 11ae28430dadbc..8e1b7f63e95a13 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -1203,11 +1203,11 @@ config SM_VIDEOCC_8350
+ config SM_VIDEOCC_8550
+ tristate "SM8550 Video Clock Controller"
+ depends on ARM64 || COMPILE_TEST
+- select SM_GCC_8550
++ depends on SM_GCC_8550 || SM_GCC_8650
+ select QCOM_GDSC
+ help
+ Support for the video clock controller on Qualcomm Technologies, Inc.
+- SM8550 devices.
++ SM8550 or SM8650 devices.
+ Say Y if you want to support video devices and functionality such as
+ video encode/decode.
+
+diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c
+index 50a443bf79ecd3..76285fbbdeaa2d 100644
+--- a/drivers/clk/ralink/clk-mtmips.c
++++ b/drivers/clk/ralink/clk-mtmips.c
+@@ -263,8 +263,9 @@ static int mtmips_register_pherip_clocks(struct device_node *np,
+ .rate = _rate \
+ }
+
+-static struct mtmips_clk_fixed rt305x_fixed_clocks[] = {
+- CLK_FIXED("xtal", NULL, 40000000)
++static struct mtmips_clk_fixed rt3883_fixed_clocks[] = {
++ CLK_FIXED("xtal", NULL, 40000000),
++ CLK_FIXED("periph", "xtal", 40000000)
+ };
+
+ static struct mtmips_clk_fixed rt3352_fixed_clocks[] = {
+@@ -366,6 +367,12 @@ static inline struct mtmips_clk *to_mtmips_clk(struct clk_hw *hw)
+ return container_of(hw, struct mtmips_clk, hw);
+ }
+
++static unsigned long rt2880_xtal_recalc_rate(struct clk_hw *hw,
++ unsigned long parent_rate)
++{
++ return 40000000;
++}
++
+ static unsigned long rt5350_xtal_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+@@ -677,10 +684,12 @@ static unsigned long mt76x8_cpu_recalc_rate(struct clk_hw *hw,
+ }
+
+ static struct mtmips_clk rt2880_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt2880_cpu_recalc_rate) }
+ };
+
+ static struct mtmips_clk rt305x_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt305x_cpu_recalc_rate) }
+ };
+
+@@ -690,6 +699,7 @@ static struct mtmips_clk rt3352_clks_base[] = {
+ };
+
+ static struct mtmips_clk rt3883_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt3883_cpu_recalc_rate) },
+ { CLK_BASE("bus", "cpu", rt3883_bus_recalc_rate) }
+ };
+@@ -746,8 +756,8 @@ static int mtmips_register_clocks(struct device_node *np,
+ static const struct mtmips_clk_data rt2880_clk_data = {
+ .clk_base = rt2880_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt2880_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = NULL,
++ .num_clk_fixed = 0,
+ .clk_factor = rt2880_factor_clocks,
+ .num_clk_factor = ARRAY_SIZE(rt2880_factor_clocks),
+ .clk_periph = rt2880_pherip_clks,
+@@ -757,8 +767,8 @@ static const struct mtmips_clk_data rt2880_clk_data = {
+ static const struct mtmips_clk_data rt305x_clk_data = {
+ .clk_base = rt305x_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt305x_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = NULL,
++ .num_clk_fixed = 0,
+ .clk_factor = rt305x_factor_clocks,
+ .num_clk_factor = ARRAY_SIZE(rt305x_factor_clocks),
+ .clk_periph = rt305x_pherip_clks,
+@@ -779,8 +789,8 @@ static const struct mtmips_clk_data rt3352_clk_data = {
+ static const struct mtmips_clk_data rt3883_clk_data = {
+ .clk_base = rt3883_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt3883_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = rt3883_fixed_clocks,
++ .num_clk_fixed = ARRAY_SIZE(rt3883_fixed_clocks),
+ .clk_factor = NULL,
+ .num_clk_factor = 0,
+ .clk_periph = rt5350_pherip_clks,
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 04b78064d4e01c..640ab2cc28cfa4 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -552,7 +552,7 @@ static unsigned long
+ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
+ unsigned long rate)
+ {
+- unsigned long foutpostdiv_rate;
++ unsigned long foutpostdiv_rate, foutvco_rate;
+
+ params->pl5_intin = rate / MEGA;
+ params->pl5_fracin = div_u64(((u64)rate % MEGA) << 24, MEGA);
+@@ -561,10 +561,11 @@ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
+ params->pl5_postdiv2 = 1;
+ params->pl5_spread = 0x16;
+
+- foutpostdiv_rate =
+- EXTAL_FREQ_IN_MEGA_HZ * MEGA / params->pl5_refdiv *
+- ((((params->pl5_intin << 24) + params->pl5_fracin)) >> 24) /
+- (params->pl5_postdiv1 * params->pl5_postdiv2);
++ foutvco_rate = div_u64(mul_u32_u32(EXTAL_FREQ_IN_MEGA_HZ * MEGA,
++ (params->pl5_intin << 24) + params->pl5_fracin),
++ params->pl5_refdiv) >> 24;
++ foutpostdiv_rate = DIV_ROUND_CLOSEST_ULL(foutvco_rate,
++ params->pl5_postdiv1 * params->pl5_postdiv2);
+
+ return foutpostdiv_rate;
+ }
+diff --git a/drivers/clk/sophgo/clk-sg2042-pll.c b/drivers/clk/sophgo/clk-sg2042-pll.c
+index ff9deeef509b8f..1537f4f05860ea 100644
+--- a/drivers/clk/sophgo/clk-sg2042-pll.c
++++ b/drivers/clk/sophgo/clk-sg2042-pll.c
+@@ -153,7 +153,7 @@ static unsigned long sg2042_pll_recalc_rate(unsigned int reg_value,
+
+ sg2042_pll_ctrl_decode(reg_value, &ctrl_table);
+
+- numerator = parent_rate * ctrl_table.fbdiv;
++ numerator = (u64)parent_rate * ctrl_table.fbdiv;
+ denominator = ctrl_table.refdiv * ctrl_table.postdiv1 * ctrl_table.postdiv2;
+ do_div(numerator, denominator);
+ return numerator;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+index 9b5cfac2ee70cb..3f095515f54f91 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
++++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+@@ -1371,7 +1371,7 @@ static int sun20i_d1_ccu_probe(struct platform_device *pdev)
+
+ /* Enforce m1 = 0, m0 = 0 for PLL_AUDIO0 */
+ val = readl(reg + SUN20I_D1_PLL_AUDIO0_REG);
+- val &= ~BIT(1) | BIT(0);
++ val &= ~(BIT(1) | BIT(0));
+ writel(val, reg + SUN20I_D1_PLL_AUDIO0_REG);
+
+ /* Force fanout-27M factor N to 0. */
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 95dd4660b5b659..d546903dba4f3a 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -400,7 +400,8 @@ config ARM_GT_INITIAL_PRESCALER_VAL
+ This affects CPU_FREQ max delta from the initial frequency.
+
+ config ARM_TIMER_SP804
+- bool "Support for Dual Timer SP804 module" if COMPILE_TEST
++ bool "Support for Dual Timer SP804 module"
++ depends on ARM || ARM64 || COMPILE_TEST
+ depends on GENERIC_SCHED_CLOCK && HAVE_CLK
+ select CLKSRC_MMIO
+ select TIMER_OF if OF
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index c2dcd8d68e4587..d1c144d6f328cf 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -686,9 +686,9 @@ subsys_initcall(dmtimer_percpu_timer_startup);
+
+ static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+ {
+- struct device_node *arm_timer;
++ struct device_node *arm_timer __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+
+- arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+ if (of_device_is_available(arm_timer)) {
+ pr_warn_once("ARM architected timer wrap issue i940 detected\n");
+ return 0;
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index 1b481731df964e..b9df9b19d4bd97 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -2407,6 +2407,18 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
+
+ start += PAGE_SIZE;
+ }
++
++#ifdef CONFIG_MMU
++ /*
++ * Leaving behind a partial mapping of a buffer we're about to
++ * drop is unsafe, see remap_pfn_range_notrack().
++ * We need to zap the range here ourselves instead of relying
++ * on the automatic zapping in remap_pfn_range() because we call
++ * remap_pfn_range() in a loop.
++ */
++ if (retval)
++ zap_vma_ptes(vma, vma->vm_start, size);
++#endif
+ }
+
+ if (retval == 0) {
+diff --git a/drivers/counter/stm32-timer-cnt.c b/drivers/counter/stm32-timer-cnt.c
+index 186e73d6ccb455..87b6ec567b5447 100644
+--- a/drivers/counter/stm32-timer-cnt.c
++++ b/drivers/counter/stm32-timer-cnt.c
+@@ -214,11 +214,17 @@ static int stm32_count_enable_write(struct counter_device *counter,
+ {
+ struct stm32_timer_cnt *const priv = counter_priv(counter);
+ u32 cr1;
++ int ret;
+
+ if (enable) {
+ regmap_read(priv->regmap, TIM_CR1, &cr1);
+- if (!(cr1 & TIM_CR1_CEN))
+- clk_enable(priv->clk);
++ if (!(cr1 & TIM_CR1_CEN)) {
++ ret = clk_enable(priv->clk);
++ if (ret) {
++ dev_err(counter->parent, "Cannot enable clock: %d\n", ret);
++ return ret;
++ }
++ }
+
+ regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN,
+ TIM_CR1_CEN);
+@@ -694,6 +700,7 @@ static int stm32_timer_cnt_probe_encoder(struct device *dev,
+ }
+
+ ret = of_property_read_u32(tnode, "reg", &idx);
++ of_node_put(tnode);
+ if (ret) {
+ dev_err(dev, "Can't get index (%d)\n", ret);
+ return ret;
+@@ -816,7 +823,11 @@ static int __maybe_unused stm32_timer_cnt_resume(struct device *dev)
+ return ret;
+
+ if (priv->enabled) {
+- clk_enable(priv->clk);
++ ret = clk_enable(priv->clk);
++ if (ret) {
++ dev_err(dev, "Cannot enable clock: %d\n", ret);
++ return ret;
++ }
+
+ /* Restore registers that may have been lost */
+ regmap_write(priv->regmap, TIM_SMCR, priv->bak.smcr);
+diff --git a/drivers/counter/ti-ecap-capture.c b/drivers/counter/ti-ecap-capture.c
+index 675447315cafb8..b119aeede693ec 100644
+--- a/drivers/counter/ti-ecap-capture.c
++++ b/drivers/counter/ti-ecap-capture.c
+@@ -574,8 +574,13 @@ static int ecap_cnt_resume(struct device *dev)
+ {
+ struct counter_device *counter_dev = dev_get_drvdata(dev);
+ struct ecap_cnt_dev *ecap_dev = counter_priv(counter_dev);
++ int ret;
+
+- clk_enable(ecap_dev->clk);
++ ret = clk_enable(ecap_dev->clk);
++ if (ret) {
++ dev_err(dev, "Cannot enable clock: %d\n", ret);
++ return ret;
++ }
+
+ ecap_cnt_capture_set_evmode(counter_dev, ecap_dev->pm_ctx.ev_mode);
+
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 929b9097a6c17c..c2d650ac0a9f16 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -698,34 +698,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- struct cppc_perf_ctrls perf_ctrls;
+- u32 highest_perf, nominal_perf, nominal_freq, max_freq;
++ u32 nominal_freq, max_freq;
+ int ret = 0;
+
+- highest_perf = READ_ONCE(cpudata->highest_perf);
+- nominal_perf = READ_ONCE(cpudata->nominal_perf);
+ nominal_freq = READ_ONCE(cpudata->nominal_freq);
+ max_freq = READ_ONCE(cpudata->max_freq);
+
+- if (boot_cpu_has(X86_FEATURE_CPPC)) {
+- u64 value = READ_ONCE(cpudata->cppc_req_cached);
+-
+- value &= ~GENMASK_ULL(7, 0);
+- value |= on ? highest_perf : nominal_perf;
+- WRITE_ONCE(cpudata->cppc_req_cached, value);
+-
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.max_perf = on ? highest_perf : nominal_perf;
+- ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- if (ret) {
+- cpufreq_cpu_release(policy);
+- pr_debug("Failed to set max perf on CPU:%d. ret:%d\n",
+- cpudata->cpu, ret);
+- return ret;
+- }
+- }
+-
+ if (on)
+ policy->cpuinfo.max_freq = max_freq;
+ else if (policy->cpuinfo.max_freq > nominal_freq * 1000)
+@@ -1606,7 +1584,7 @@ static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
+ value = READ_ONCE(cpudata->cppc_req_cached);
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+- min_perf = max_perf;
++ min_perf = min(cpudata->nominal_perf, max_perf);
+
+ /* Initial min/max values for CPPC Performance Controls Register */
+ value &= ~AMD_CPPC_MIN_PERF(~0L);
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index bafa32dd375d5e..6bea89bfe3e95a 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -118,6 +118,9 @@ static void cppc_scale_freq_workfn(struct kthread_work *work)
+
+ perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs,
+ &fb_ctrs);
++ if (!perf)
++ return;
++
+ cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
+
+ perf <<= SCHED_CAPACITY_SHIFT;
+@@ -420,6 +423,9 @@ static int cppc_get_cpu_power(struct device *cpu_dev,
+ struct cppc_cpudata *cpu_data;
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
++ if (!policy)
++ return -EINVAL;
++
+ cpu_data = policy->driver_data;
+ perf_caps = &cpu_data->perf_caps;
+ max_cap = arch_scale_cpu_capacity(cpu_dev->id);
+@@ -487,6 +493,9 @@ static int cppc_get_cpu_cost(struct device *cpu_dev, unsigned long KHz,
+ int step;
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
++ if (!policy)
++ return -EINVAL;
++
+ cpu_data = policy->driver_data;
+ perf_caps = &cpu_data->perf_caps;
+ max_cap = arch_scale_cpu_capacity(cpu_dev->id);
+@@ -724,13 +733,31 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+ delta_delivered = get_delta(fb_ctrs_t1->delivered,
+ fb_ctrs_t0->delivered);
+
+- /* Check to avoid divide-by zero and invalid delivered_perf */
++ /*
++ * Avoid divide-by-zero and unchanged feedback counters.
++ * Leave it for callers to handle.
++ */
+ if (!delta_reference || !delta_delivered)
+- return cpu_data->perf_ctrls.desired_perf;
++ return 0;
+
+ return (reference_perf * delta_delivered) / delta_reference;
+ }
+
++static int cppc_get_perf_ctrs_sample(int cpu,
++ struct cppc_perf_fb_ctrs *fb_ctrs_t0,
++ struct cppc_perf_fb_ctrs *fb_ctrs_t1)
++{
++ int ret;
++
++ ret = cppc_get_perf_ctrs(cpu, fb_ctrs_t0);
++ if (ret)
++ return ret;
++
++ udelay(2); /* 2 usec delay between samples */
++
++ return cppc_get_perf_ctrs(cpu, fb_ctrs_t1);
++}
++
+ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+ {
+ struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
+@@ -746,18 +773,32 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+
+ cpufreq_cpu_put(policy);
+
+- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
+- if (ret)
+- return 0;
+-
+- udelay(2); /* 2usec delay between sampling */
+-
+- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
+- if (ret)
+- return 0;
++ ret = cppc_get_perf_ctrs_sample(cpu, &fb_ctrs_t0, &fb_ctrs_t1);
++ if (ret) {
++ if (ret == -EFAULT)
++ /* One of the associated CPPC regs is 0. */
++ goto out_invalid_counters;
++ else
++ return 0;
++ }
+
+ delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
+ &fb_ctrs_t1);
++ if (!delivered_perf)
++ goto out_invalid_counters;
++
++ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
++
++out_invalid_counters:
++ /*
++ * Feedback counters could be unchanged or 0 when a CPU enters a
++ * low-power idle state, e.g. clock-gated or power-gated.
++ * Use the desired perf to reflect the frequency. Get the latest
++ * register value first, as some platforms may update the actual
++ * delivered perf there; if that fails, fall back to the cached
++ * desired perf.
++ */
++ if (cppc_get_desired_perf(cpu, &delivered_perf))
++ delivered_perf = cpu_data->perf_ctrls.desired_perf;
+
+ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
+ }
+diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
+index 6a8e97896d38ca..ed1a6dbad63894 100644
+--- a/drivers/cpufreq/loongson2_cpufreq.c
++++ b/drivers/cpufreq/loongson2_cpufreq.c
+@@ -148,7 +148,9 @@ static int __init cpufreq_init(void)
+
+ ret = cpufreq_register_driver(&loongson2_cpufreq_driver);
+
+- if (!ret && !nowait) {
++ if (ret) {
++ platform_driver_unregister(&platform_driver);
++ } else if (!nowait) {
+ saved_cpu_wait = cpu_wait;
+ cpu_wait = loongson2_cpu_wait;
+ }
+diff --git a/drivers/cpufreq/loongson3_cpufreq.c b/drivers/cpufreq/loongson3_cpufreq.c
+index 6b5e6798d9a283..a923e196ec86e7 100644
+--- a/drivers/cpufreq/loongson3_cpufreq.c
++++ b/drivers/cpufreq/loongson3_cpufreq.c
+@@ -346,8 +346,11 @@ static int loongson3_cpufreq_probe(struct platform_device *pdev)
+ {
+ int i, ret;
+
+- for (i = 0; i < MAX_PACKAGES; i++)
+- devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
++ for (i = 0; i < MAX_PACKAGES; i++) {
++ ret = devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
++ if (ret)
++ return ret;
++ }
+
+ ret = do_service_request(0, 0, CMD_GET_VERSION, 0, 0);
+ if (ret <= 0)
+diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
+index 8925e096d5b9a0..aeb5e63045421b 100644
+--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
++++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
+@@ -62,7 +62,7 @@ mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *uW,
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
+ if (!policy)
+- return 0;
++ return -EINVAL;
+
+ data = policy->driver_data;
+
+diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
+index 1a3ecd44cbaf65..20f6453670aa49 100644
+--- a/drivers/crypto/bcm/cipher.c
++++ b/drivers/crypto/bcm/cipher.c
+@@ -2415,6 +2415,7 @@ static int ahash_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
+
+ static int ahash_hmac_init(struct ahash_request *req)
+ {
++ int ret;
+ struct iproc_reqctx_s *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct iproc_ctx_s *ctx = crypto_ahash_ctx(tfm);
+@@ -2424,7 +2425,9 @@ static int ahash_hmac_init(struct ahash_request *req)
+ flow_log("ahash_hmac_init()\n");
+
+ /* init the context as a hash */
+- ahash_init(req);
++ ret = ahash_init(req);
++ if (ret)
++ return ret;
+
+ if (!spu_no_incr_hash(ctx)) {
+ /* SPU-M can do incr hashing but needs sw for outer HMAC */
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 887a5f2fb9279b..cb001aa1de6618 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -984,7 +984,7 @@ static int caam_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+ return -ENOMEM;
+ }
+
+-static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
++static int caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+ struct rsa_key *raw_key)
+ {
+ struct caam_rsa_key *rsa_key = &ctx->key;
+@@ -994,7 +994,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+
+ rsa_key->p = caam_read_raw_data(raw_key->p, &p_sz);
+ if (!rsa_key->p)
+- return;
++ return -ENOMEM;
+ rsa_key->p_sz = p_sz;
+
+ rsa_key->q = caam_read_raw_data(raw_key->q, &q_sz);
+@@ -1029,7 +1029,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+
+ rsa_key->priv_form = FORM3;
+
+- return;
++ return 0;
+
+ free_dq:
+ kfree_sensitive(rsa_key->dq);
+@@ -1043,6 +1043,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+ kfree_sensitive(rsa_key->q);
+ free_p:
+ kfree_sensitive(rsa_key->p);
++ return -ENOMEM;
+ }
+
+ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+@@ -1088,7 +1089,9 @@ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+ rsa_key->e_sz = raw_key.e_sz;
+ rsa_key->n_sz = raw_key.n_sz;
+
+- caam_rsa_set_priv_key_form(ctx, &raw_key);
++ ret = caam_rsa_set_priv_key_form(ctx, &raw_key);
++ if (ret)
++ goto err;
+
+ return 0;
+
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index ba8fb5d8a7b265..dcb9069a254c78 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -793,7 +793,7 @@ int caam_qi_init(struct platform_device *caam_pdev)
+
+ caam_debugfs_qi_init(ctrlpriv);
+
+- err = devm_add_action_or_reset(qidev, caam_qi_shutdown, ctrlpriv);
++ err = devm_add_action_or_reset(qidev, caam_qi_shutdown, qidev);
+ if (err)
+ goto fail2;
+
+diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
+index 6872ac3440010f..54de869e5374c2 100644
+--- a/drivers/crypto/cavium/cpt/cptpf_main.c
++++ b/drivers/crypto/cavium/cpt/cptpf_main.c
+@@ -44,7 +44,7 @@ static void cpt_disable_cores(struct cpt_device *cpt, u64 coremask,
+ dev_err(dev, "Cores still busy %llx", coremask);
+ grp = cpt_read_csr64(cpt->reg_base,
+ CPTX_PF_EXEC_BUSY(0));
+- if (timeout--)
++ if (!timeout--)
+ break;
+
+ udelay(CSR_DELAY);
+@@ -302,6 +302,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+
+ ret = do_cpt_init(cpt, mcode);
+ if (ret) {
++ dma_free_coherent(&cpt->pdev->dev, mcode->code_size,
++ mcode->code, mcode->phys_base);
+ dev_err(dev, "do_cpt_init failed with ret: %d\n", ret);
+ goto fw_release;
+ }
+@@ -394,7 +396,7 @@ static void cpt_disable_all_cores(struct cpt_device *cpt)
+ dev_err(dev, "Cores still busy");
+ grp = cpt_read_csr64(cpt->reg_base,
+ CPTX_PF_EXEC_BUSY(0));
+- if (timeout--)
++ if (!timeout--)
+ break;
+
+ udelay(CSR_DELAY);
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index 6b536ad2ada52a..34d30b78381343 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -1280,11 +1280,15 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
+
+ static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT);
+- nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
++}
++
++static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB);
+ }
+
+ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1298,6 +1302,27 @@ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
+ qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ }
+
++static enum acc_err_result hpre_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = hpre_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ hpre_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++ /* Disable reporting of this error until the device is recovered. */
++ hpre_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ hpre_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void hpre_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1324,12 +1349,12 @@ static const struct hisi_qm_err_ini hpre_err_ini = {
+ .hw_err_disable = hpre_hw_error_disable,
+ .get_dev_hw_err_status = hpre_get_hw_err_status,
+ .clear_dev_hw_err_status = hpre_clear_hw_err_status,
+- .log_dev_hw_err = hpre_log_hw_error,
+ .open_axi_master_ooo = hpre_open_axi_master_ooo,
+ .open_sva_prefetch = hpre_open_sva_prefetch,
+ .close_sva_prefetch = hpre_close_sva_prefetch,
+ .show_last_dfx_regs = hpre_show_last_dfx_regs,
+ .err_info_init = hpre_err_info_init,
++ .get_err_result = hpre_get_err_result,
+ };
+
+ static int hpre_pf_probe_init(struct hpre *hpre)
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 07983af9e3e229..b18692ee7fd563 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -271,12 +271,6 @@ enum vft_type {
+ SHAPER_VFT,
+ };
+
+-enum acc_err_result {
+- ACC_ERR_NONE,
+- ACC_ERR_NEED_RESET,
+- ACC_ERR_RECOVERED,
+-};
+-
+ enum qm_alg_type {
+ ALG_TYPE_0,
+ ALG_TYPE_1,
+@@ -1425,22 +1419,25 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status)
+
+ static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm)
+ {
+- u32 error_status, tmp;
+-
+- /* read err sts */
+- tmp = readl(qm->io_base + QM_ABNORMAL_INT_STATUS);
+- error_status = qm->error_mask & tmp;
++ u32 error_status;
+
+- if (error_status) {
++ error_status = qm_get_hw_error_status(qm);
++ if (error_status & qm->error_mask) {
+ if (error_status & QM_ECC_MBIT)
+ qm->err_status.is_qm_ecc_mbit = true;
+
+ qm_log_hw_error(qm, error_status);
+- if (error_status & qm->err_info.qm_reset_mask)
++ if (error_status & qm->err_info.qm_reset_mask) {
++ /* Disable reporting of this error until the device is recovered. */
++ writel(qm->err_info.nfe & (~error_status),
++ qm->io_base + QM_RAS_NFE_ENABLE);
+ return ACC_ERR_NEED_RESET;
++ }
+
++ /* Clear the error source if no reset is needed. */
+ writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE);
+ writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE);
++ writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE);
+ }
+
+ return ACC_ERR_RECOVERED;
+@@ -3861,30 +3858,12 @@ EXPORT_SYMBOL_GPL(hisi_qm_sriov_configure);
+
+ static enum acc_err_result qm_dev_err_handle(struct hisi_qm *qm)
+ {
+- u32 err_sts;
+-
+- if (!qm->err_ini->get_dev_hw_err_status) {
+- dev_err(&qm->pdev->dev, "Device doesn't support get hw error status!\n");
++ if (!qm->err_ini->get_err_result) {
++ dev_err(&qm->pdev->dev, "Device doesn't support reset!\n");
+ return ACC_ERR_NONE;
+ }
+
+- /* get device hardware error status */
+- err_sts = qm->err_ini->get_dev_hw_err_status(qm);
+- if (err_sts) {
+- if (err_sts & qm->err_info.ecc_2bits_mask)
+- qm->err_status.is_dev_ecc_mbit = true;
+-
+- if (qm->err_ini->log_dev_hw_err)
+- qm->err_ini->log_dev_hw_err(qm, err_sts);
+-
+- if (err_sts & qm->err_info.dev_reset_mask)
+- return ACC_ERR_NEED_RESET;
+-
+- if (qm->err_ini->clear_dev_hw_err_status)
+- qm->err_ini->clear_dev_hw_err_status(qm, err_sts);
+- }
+-
+- return ACC_ERR_RECOVERED;
++ return qm->err_ini->get_err_result(qm);
+ }
+
+ static enum acc_err_result qm_process_dev_error(struct hisi_qm *qm)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index c35533d8930b21..75c25f0d5f2b82 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -1010,11 +1010,15 @@ static u32 sec_get_hw_err_status(struct hisi_qm *qm)
+
+ static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE);
+- nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
++}
++
++static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG);
+ }
+
+ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1026,6 +1030,27 @@ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
+ writel(val | SEC_AXI_SHUTDOWN_ENABLE, qm->io_base + SEC_CONTROL_REG);
+ }
+
++static enum acc_err_result sec_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = sec_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ sec_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++ /* Disable reporting of this error until the device is recovered. */
++ sec_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ sec_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void sec_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1052,12 +1077,12 @@ static const struct hisi_qm_err_ini sec_err_ini = {
+ .hw_err_disable = sec_hw_error_disable,
+ .get_dev_hw_err_status = sec_get_hw_err_status,
+ .clear_dev_hw_err_status = sec_clear_hw_err_status,
+- .log_dev_hw_err = sec_log_hw_error,
+ .open_axi_master_ooo = sec_open_axi_master_ooo,
+ .open_sva_prefetch = sec_open_sva_prefetch,
+ .close_sva_prefetch = sec_close_sva_prefetch,
+ .show_last_dfx_regs = sec_show_last_dfx_regs,
+ .err_info_init = sec_err_info_init,
++ .get_err_result = sec_get_err_result,
+ };
+
+ static int sec_pf_probe_init(struct sec_dev *sec)
+diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
+index d07e47b48be06a..80c2fcb1d26dcf 100644
+--- a/drivers/crypto/hisilicon/zip/zip_main.c
++++ b/drivers/crypto/hisilicon/zip/zip_main.c
+@@ -1059,11 +1059,15 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
+
+ static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
+- nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
++}
++
++static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+ }
+
+ static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1093,6 +1097,27 @@ static void hisi_zip_close_axi_master_ooo(struct hisi_qm *qm)
+ qm->io_base + HZIP_CORE_INT_SET);
+ }
+
++static enum acc_err_result hisi_zip_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = hisi_zip_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ hisi_zip_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++ /* Disable reporting of this error until the device is recovered. */
++ hisi_zip_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ hisi_zip_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void hisi_zip_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1120,13 +1145,13 @@ static const struct hisi_qm_err_ini hisi_zip_err_ini = {
+ .hw_err_disable = hisi_zip_hw_error_disable,
+ .get_dev_hw_err_status = hisi_zip_get_hw_err_status,
+ .clear_dev_hw_err_status = hisi_zip_clear_hw_err_status,
+- .log_dev_hw_err = hisi_zip_log_hw_error,
+ .open_axi_master_ooo = hisi_zip_open_axi_master_ooo,
+ .close_axi_master_ooo = hisi_zip_close_axi_master_ooo,
+ .open_sva_prefetch = hisi_zip_open_sva_prefetch,
+ .close_sva_prefetch = hisi_zip_close_sva_prefetch,
+ .show_last_dfx_regs = hisi_zip_show_last_dfx_regs,
+ .err_info_init = hisi_zip_err_info_init,
++ .get_err_result = hisi_zip_get_err_result,
+ };
+
+ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index e17577b785c33a..f44c08f5f5ec4a 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -2093,7 +2093,7 @@ static int safexcel_xcbcmac_cra_init(struct crypto_tfm *tfm)
+
+ safexcel_ahash_cra_init(tfm);
+ ctx->aes = kmalloc(sizeof(*ctx->aes), GFP_KERNEL);
+- return PTR_ERR_OR_ZERO(ctx->aes);
++ return ctx->aes == NULL ? -ENOMEM : 0;
+ }
+
+ static void safexcel_xcbcmac_cra_exit(struct crypto_tfm *tfm)
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 78f0ea49254dbb..9faef33e54bd32 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -375,7 +375,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ else
+ id = -EINVAL;
+
+- if (id < 0 || id > num_objs)
++ if (id < 0 || id >= num_objs)
+ return NULL;
+
+ return fw_objs[id];
+diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+index 9fd7ec53b9f3d8..bbd92c017c28ed 100644
+--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+@@ -334,7 +334,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ else
+ id = -EINVAL;
+
+- if (id < 0 || id > num_objs)
++ if (id < 0 || id >= num_objs)
+ return NULL;
+
+ return fw_objs[id];
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+index 04260f61d04294..a6db216f5b7614 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+@@ -281,8 +281,11 @@ int adf_init_aer(void)
+ return -EFAULT;
+
+ device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0);
+- if (!device_sriov_wq)
++ if (!device_sriov_wq) {
++ destroy_workqueue(device_reset_wq);
++ device_reset_wq = NULL;
+ return -EFAULT;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+index c42f5c25aabdfa..4c11ad1ebcf0f8 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+@@ -22,18 +22,13 @@
+ void adf_dbgfs_init(struct adf_accel_dev *accel_dev)
+ {
+ char name[ADF_DEVICE_NAME_LENGTH];
+- void *ret;
+
+ /* Create dev top level debugfs entry */
+ snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
+ accel_dev->hw_device->dev_class->name,
+ pci_name(accel_dev->accel_pci_dev.pci_dev));
+
+- ret = debugfs_create_dir(name, NULL);
+- if (IS_ERR_OR_NULL(ret))
+- return;
+-
+- accel_dev->debugfs_dir = ret;
++ accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
+
+ adf_cfg_dev_dbgfs_add(accel_dev);
+ }
+@@ -59,9 +54,6 @@ EXPORT_SYMBOL_GPL(adf_dbgfs_exit);
+ */
+ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
+ {
+- if (!accel_dev->debugfs_dir)
+- return;
+-
+ if (!accel_dev->is_vf) {
+ adf_fw_counters_dbgfs_add(accel_dev);
+ adf_heartbeat_dbgfs_add(accel_dev);
+@@ -77,9 +69,6 @@ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
+ */
+ void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
+ {
+- if (!accel_dev->debugfs_dir)
+- return;
+-
+ if (!accel_dev->is_vf) {
+ adf_tl_dbgfs_rm(accel_dev);
+ adf_cnv_dbgfs_rm(accel_dev);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+index 65bd26b25abce9..f93d9cca70cee4 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+@@ -90,10 +90,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev)
+
+ hw_data->get_arb_info(&info);
+
+- /* Reset arbiter configuration */
+- for (i = 0; i < ADF_ARB_NUM; i++)
+- WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0);
+-
+ /* Unmap worker threads to service arbiters */
+ for (i = 0; i < hw_data->num_engines; i++)
+ WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0);
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index c82775dbb557a7..77a6301f37f0af 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -225,21 +225,22 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ struct skcipher_request *req, int init)
+ {
+- dma_addr_t key_phys = 0;
+- dma_addr_t src_phys, dst_phys;
++ dma_addr_t key_phys, src_phys, dst_phys;
+ struct dcp *sdcp = global_sdcp;
+ struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+ bool key_referenced = actx->key_referenced;
+ int ret;
+
+- if (!key_referenced) {
++ if (key_referenced)
++ key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key + AES_KEYSIZE_128,
++ AES_KEYSIZE_128, DMA_TO_DEVICE);
++ else
+ key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+ 2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
+- ret = dma_mapping_error(sdcp->dev, key_phys);
+- if (ret)
+- return ret;
+- }
++ ret = dma_mapping_error(sdcp->dev, key_phys);
++ if (ret)
++ return ret;
+
+ src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+ DCP_BUF_SZ, DMA_TO_DEVICE);
+@@ -300,7 +301,10 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ err_dst:
+ dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+ err_src:
+- if (!key_referenced)
++ if (key_referenced)
++ dma_unmap_single(sdcp->dev, key_phys, AES_KEYSIZE_128,
++ DMA_TO_DEVICE);
++ else
+ dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
+ DMA_TO_DEVICE);
+ return ret;
+diff --git a/drivers/dax/pmem/Makefile b/drivers/dax/pmem/Makefile
+deleted file mode 100644
+index 191c31f0d4f008..00000000000000
+--- a/drivers/dax/pmem/Makefile
++++ /dev/null
+@@ -1,7 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
+-
+-dax_pmem-y := pmem.o
+-dax_pmem_core-y := core.o
+-dax_pmem_compat-y := compat.o
+diff --git a/drivers/dax/pmem/pmem.c b/drivers/dax/pmem/pmem.c
+deleted file mode 100644
+index dfe91a2990fec4..00000000000000
+--- a/drivers/dax/pmem/pmem.c
++++ /dev/null
+@@ -1,10 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
+-#include <linux/percpu-refcount.h>
+-#include <linux/memremap.h>
+-#include <linux/module.h>
+-#include <linux/pfn_t.h>
+-#include <linux/nd.h>
+-#include "../bus.h"
+-
+-
+diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
+index b46eb8a552d7be..fee04fdb08220c 100644
+--- a/drivers/dma-buf/Kconfig
++++ b/drivers/dma-buf/Kconfig
+@@ -36,6 +36,7 @@ config UDMABUF
+ depends on DMA_SHARED_BUFFER
+ depends on MEMFD_CREATE || COMPILE_TEST
+ depends on MMU
++ select VMAP_PFN
+ help
+ A driver to let userspace turn memfd regions into dma-bufs.
+ Qemu can use this to create host dmabufs for guest framebuffers.
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 047c3cd2cefff6..a3638ccc15f571 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -74,21 +74,29 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
+ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
+ {
+ struct udmabuf *ubuf = buf->priv;
+- struct page **pages;
++ unsigned long *pfns;
+ void *vaddr;
+ pgoff_t pg;
+
+ dma_resv_assert_held(buf->resv);
+
+- pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+- if (!pages)
++ /*
++ * HVO may free tail pages, so just use the pfn to map each folio
++ * into the vmalloc area.
++ */
++ pfns = kvmalloc_array(ubuf->pagecount, sizeof(*pfns), GFP_KERNEL);
++ if (!pfns)
+ return -ENOMEM;
+
+- for (pg = 0; pg < ubuf->pagecount; pg++)
+- pages[pg] = &ubuf->folios[pg]->page;
++ for (pg = 0; pg < ubuf->pagecount; pg++) {
++ unsigned long pfn = folio_pfn(ubuf->folios[pg]);
+
+- vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+- kfree(pages);
++ pfn += ubuf->offsets[pg] >> PAGE_SHIFT;
++ pfns[pg] = pfn;
++ }
++
++ vaddr = vmap_pfn(pfns, ubuf->pagecount, PAGE_KERNEL);
++ kvfree(pfns);
+ if (!vaddr)
+ return -EINVAL;
+
+@@ -196,8 +204,8 @@ static void release_udmabuf(struct dma_buf *buf)
+ put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
+
+ unpin_all_folios(&ubuf->unpin_list);
+- kfree(ubuf->offsets);
+- kfree(ubuf->folios);
++ kvfree(ubuf->offsets);
++ kvfree(ubuf->folios);
+ kfree(ubuf);
+ }
+
+@@ -322,14 +330,14 @@ static long udmabuf_create(struct miscdevice *device,
+ if (!ubuf->pagecount)
+ goto err;
+
+- ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
+- GFP_KERNEL);
++ ubuf->folios = kvmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
++ GFP_KERNEL);
+ if (!ubuf->folios) {
+ ret = -ENOMEM;
+ goto err;
+ }
+- ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+- GFP_KERNEL);
++ ubuf->offsets = kvcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
++ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+@@ -343,7 +351,7 @@ static long udmabuf_create(struct miscdevice *device,
+ goto err;
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+- folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
++ folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
+ goto err;
+@@ -353,7 +361,7 @@ static long udmabuf_create(struct miscdevice *device,
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret <= 0) {
+- kfree(folios);
++ kvfree(folios);
+ if (!ret)
+ ret = -EINVAL;
+ goto err;
+@@ -382,7 +390,7 @@ static long udmabuf_create(struct miscdevice *device,
+ }
+ }
+
+- kfree(folios);
++ kvfree(folios);
+ fput(memfd);
+ memfd = NULL;
+ }
+@@ -398,8 +406,8 @@ static long udmabuf_create(struct miscdevice *device,
+ if (memfd)
+ fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
+- kfree(ubuf->offsets);
+- kfree(ubuf->folios);
++ kvfree(ubuf->offsets);
++ kvfree(ubuf->folios);
+ kfree(ubuf);
+ return ret;
+ }
+diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c
+index 5b3164560648ee..0e539c1073510a 100644
+--- a/drivers/edac/bluefield_edac.c
++++ b/drivers/edac/bluefield_edac.c
+@@ -180,7 +180,7 @@ static void bluefield_edac_check(struct mem_ctl_info *mci)
+ static void bluefield_edac_init_dimms(struct mem_ctl_info *mci)
+ {
+ struct bluefield_edac_priv *priv = mci->pvt_info;
+- int mem_ctrl_idx = mci->mc_idx;
++ u64 mem_ctrl_idx = mci->mc_idx;
+ struct dimm_info *dimm;
+ u64 smc_info, smc_arg;
+ int is_empty = 1, i;
+diff --git a/drivers/edac/fsl_ddr_edac.c b/drivers/edac/fsl_ddr_edac.c
+index d148d262d0d4de..339d94b3d04c7d 100644
+--- a/drivers/edac/fsl_ddr_edac.c
++++ b/drivers/edac/fsl_ddr_edac.c
+@@ -328,21 +328,25 @@ static void fsl_mc_check(struct mem_ctl_info *mci)
+ * TODO: Add support for 32-bit wide buses
+ */
+ if ((err_detect & DDR_EDE_SBE) && (bus_width == 64)) {
++ u64 cap = (u64)cap_high << 32 | cap_low;
++ u32 s = syndrome;
++
+ sbe_ecc_decode(cap_high, cap_low, syndrome,
+ &bad_data_bit, &bad_ecc_bit);
+
+- if (bad_data_bit != -1)
+- fsl_mc_printk(mci, KERN_ERR,
+- "Faulty Data bit: %d\n", bad_data_bit);
+- if (bad_ecc_bit != -1)
+- fsl_mc_printk(mci, KERN_ERR,
+- "Faulty ECC bit: %d\n", bad_ecc_bit);
++ if (bad_data_bit >= 0) {
++ fsl_mc_printk(mci, KERN_ERR, "Faulty Data bit: %d\n", bad_data_bit);
++ cap ^= 1ULL << bad_data_bit;
++ }
++
++ if (bad_ecc_bit >= 0) {
++ fsl_mc_printk(mci, KERN_ERR, "Faulty ECC bit: %d\n", bad_ecc_bit);
++ s ^= 1 << bad_ecc_bit;
++ }
+
+ fsl_mc_printk(mci, KERN_ERR,
+ "Expected Data / ECC:\t%#8.8x_%08x / %#2.2x\n",
+- cap_high ^ (1 << (bad_data_bit - 32)),
+- cap_low ^ (1 << bad_data_bit),
+- syndrome ^ (1 << bad_ecc_bit));
++ upper_32_bits(cap), lower_32_bits(cap), s);
+ }
+
+ fsl_mc_printk(mci, KERN_ERR,
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 24dd896d9a9d58..4e98fe16f09139 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -1089,6 +1089,7 @@ static int __init i10nm_init(void)
+ return -ENODEV;
+
+ cfg = (struct res_config *)id->driver_data;
++ skx_set_res_cfg(cfg);
+ res_cfg = cfg;
+
+ rc = skx_get_hi_lo(0x09a2, off, &tolm, &tohm);
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index 189a2fc29e74f5..07dacf8c10be3d 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -1245,6 +1245,7 @@ static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev)
+ imc->mci = mci;
+ return 0;
+ fail3:
++ mci->pvt_info = NULL;
+ kfree(mci->ctl_name);
+ fail2:
+ edac_mc_free(mci);
+@@ -1269,6 +1270,7 @@ static void igen6_unregister_mcis(void)
+
+ edac_mc_del_mc(mci->pdev);
+ kfree(mci->ctl_name);
++ mci->pvt_info = NULL;
+ edac_mc_free(mci);
+ iounmap(imc->window);
+ }
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 8d18099fd528cf..0b8aaf5f77d9f6 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -47,6 +47,7 @@ static skx_show_retry_log_f skx_show_retry_rd_err_log;
+ static u64 skx_tolm, skx_tohm;
+ static LIST_HEAD(dev_edac_list);
+ static bool skx_mem_cfg_2lm;
++static struct res_config *skx_res_cfg;
+
+ int skx_adxl_get(void)
+ {
+@@ -119,7 +120,7 @@ void skx_adxl_put(void)
+ }
+ EXPORT_SYMBOL_GPL(skx_adxl_put);
+
+-static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem)
++static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ {
+ struct skx_dev *d;
+ int i, len = 0;
+@@ -135,8 +136,24 @@ static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_me
+ return false;
+ }
+
++ /*
++ * GNR with a Flat2LM memory configuration may mistakenly classify
++ * a near-memory error (DDR5) as a far-memory error (CXL), resulting
++ * in the incorrect selection of decoded ADXL components.
++ * To address this, prefetch the decoded far-memory controller ID
++ * and adjust the error source to near-memory if the far-memory
++ * controller ID is invalid.
++ */
++ if (skx_res_cfg && skx_res_cfg->type == GNR && err_src == ERR_SRC_2LM_FM) {
++ res->imc = (int)adxl_values[component_indices[INDEX_MEMCTRL]];
++ if (res->imc == -1) {
++ err_src = ERR_SRC_2LM_NM;
++ edac_dbg(0, "Adjust the error source to near-memory.\n");
++ }
++ }
++
+ res->socket = (int)adxl_values[component_indices[INDEX_SOCKET]];
+- if (error_in_1st_level_mem) {
++ if (err_src == ERR_SRC_2LM_NM) {
+ res->imc = (adxl_nm_bitmap & BIT_NM_MEMCTRL) ?
+ (int)adxl_values[component_indices[INDEX_NM_MEMCTRL]] : -1;
+ res->channel = (adxl_nm_bitmap & BIT_NM_CHANNEL) ?
+@@ -191,6 +208,12 @@ void skx_set_mem_cfg(bool mem_cfg_2lm)
+ }
+ EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
+
++void skx_set_res_cfg(struct res_config *cfg)
++{
++ skx_res_cfg = cfg;
++}
++EXPORT_SYMBOL_GPL(skx_set_res_cfg);
++
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log)
+ {
+ driver_decode = decode;
+@@ -620,31 +643,27 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ optype, skx_msg);
+ }
+
+-static bool skx_error_in_1st_level_mem(const struct mce *m)
++static enum error_source skx_error_source(const struct mce *m)
+ {
+- u32 errcode;
++ u32 errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+
+- if (!skx_mem_cfg_2lm)
+- return false;
+-
+- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+-
+- return errcode == MCACOD_EXT_MEM_ERR;
+-}
++ if (errcode != MCACOD_MEM_CTL_ERR && errcode != MCACOD_EXT_MEM_ERR)
++ return ERR_SRC_NOT_MEMORY;
+
+-static bool skx_error_in_mem(const struct mce *m)
+-{
+- u32 errcode;
++ if (!skx_mem_cfg_2lm)
++ return ERR_SRC_1LM;
+
+- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
++ if (errcode == MCACOD_EXT_MEM_ERR)
++ return ERR_SRC_2LM_NM;
+
+- return (errcode == MCACOD_MEM_CTL_ERR || errcode == MCACOD_EXT_MEM_ERR);
++ return ERR_SRC_2LM_FM;
+ }
+
+ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ void *data)
+ {
+ struct mce *mce = (struct mce *)data;
++ enum error_source err_src;
+ struct decoded_addr res;
+ struct mem_ctl_info *mci;
+ char *type;
+@@ -652,8 +671,10 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ if (mce->kflags & MCE_HANDLED_CEC)
+ return NOTIFY_DONE;
+
++ err_src = skx_error_source(mce);
++
+ /* Ignore unless this is memory related with an address */
+- if (!skx_error_in_mem(mce) || !(mce->status & MCI_STATUS_ADDRV))
++ if (err_src == ERR_SRC_NOT_MEMORY || !(mce->status & MCI_STATUS_ADDRV))
+ return NOTIFY_DONE;
+
+ memset(&res, 0, sizeof(res));
+@@ -667,7 +688,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ /* Try driver decoder first */
+ if (!(driver_decode && driver_decode(&res))) {
+ /* Then try firmware decoder (ACPI DSM methods) */
+- if (!(adxl_component_count && skx_adxl_decode(&res, skx_error_in_1st_level_mem(mce))))
++ if (!(adxl_component_count && skx_adxl_decode(&res, err_src)))
+ return NOTIFY_DONE;
+ }
+
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index 473421ba7a18a5..5a7111791c1093 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -146,6 +146,13 @@ enum {
+ INDEX_MAX
+ };
+
++enum error_source {
++ ERR_SRC_1LM,
++ ERR_SRC_2LM_NM,
++ ERR_SRC_2LM_FM,
++ ERR_SRC_NOT_MEMORY,
++};
++
+ #define BIT_NM_MEMCTRL BIT_ULL(INDEX_NM_MEMCTRL)
+ #define BIT_NM_CHANNEL BIT_ULL(INDEX_NM_CHANNEL)
+ #define BIT_NM_DIMM BIT_ULL(INDEX_NM_DIMM)
+@@ -234,6 +241,7 @@ int skx_adxl_get(void);
+ void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
+ void skx_set_mem_cfg(bool mem_cfg_2lm);
++void skx_set_res_cfg(struct res_config *cfg);
+
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
+ int skx_get_node_id(struct skx_dev *d, u8 *id);
+diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
+index 4b8c5250cdb57e..cd30499b2555f4 100644
+--- a/drivers/firmware/arm_scmi/common.h
++++ b/drivers/firmware/arm_scmi/common.h
+@@ -163,6 +163,7 @@ void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
+ * used to initialize this channel
+ * @dev: Reference to device in the SCMI hierarchy corresponding to this
+ * channel
++ * @is_p2a: A flag to identify a channel as P2A (RX)
+ * @rx_timeout_ms: The configured RX timeout in milliseconds.
+ * @handle: Pointer to SCMI entity handle
+ * @no_completion_irq: Flag to indicate that this channel has no completion
+@@ -174,6 +175,7 @@ void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
+ struct scmi_chan_info {
+ int id;
+ struct device *dev;
++ bool is_p2a;
+ unsigned int rx_timeout_ms;
+ struct scmi_handle *handle;
+ bool no_completion_irq;
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index dc09f2d755f414..80b44bd1a3f3e7 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -1034,6 +1034,11 @@ static inline void scmi_xfer_command_release(struct scmi_info *info,
+ static inline void scmi_clear_channel(struct scmi_info *info,
+ struct scmi_chan_info *cinfo)
+ {
++ if (!cinfo->is_p2a) {
++ dev_warn(cinfo->dev, "Invalid clear on A2P channel!\n");
++ return;
++ }
++
+ if (info->desc->ops->clear_channel)
+ info->desc->ops->clear_channel(cinfo);
+ }
+@@ -2614,6 +2619,7 @@ static int scmi_chan_setup(struct scmi_info *info, struct device_node *of_node,
+ if (!cinfo)
+ return -ENOMEM;
+
++ cinfo->is_p2a = !tx;
+ cinfo->rx_timeout_ms = info->desc->max_rx_timeout_ms;
+
+ /* Create a unique name for this transport device */
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index 94a6b4e667de14..f4d47577f83ee7 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -630,6 +630,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
+ if (ret)
+ return ERR_PTR(ret);
+
++ if (!buf.opp_count)
++ return ERR_PTR(-ENOENT);
++
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return ERR_PTR(-ENOMEM);
+diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
+index 958a680e0660d4..2a1b43f9e0fa2b 100644
+--- a/drivers/firmware/efi/libstub/efi-stub.c
++++ b/drivers/firmware/efi/libstub/efi-stub.c
+@@ -129,7 +129,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
+
+ if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
+ IS_ENABLED(CONFIG_CMDLINE_FORCE) ||
+- cmdline_size == 0) {
++ cmdline[0] == 0) {
+ status = efi_parse_options(CONFIG_CMDLINE);
+ if (status != EFI_SUCCESS) {
+ efi_err("Failed to parse options\n");
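A standalone illustration (not part of the patch) of what this one-character change buys: a loader can hand over a buffer whose reported size is nonzero even though the converted command line is empty, so only the content test falls back to the built-in command line. The helper name and strings below are invented for the sketch.

#include <stdio.h>

/* pick_cmdline() is a hypothetical stand-in for the stub logic:
 * fall back to the built-in command line when the loader-provided
 * one converts to an empty string, even if its reported size is
 * nonzero (e.g. a buffer holding only a NUL terminator). */
static const char *pick_cmdline(const char *from_loader, const char *builtin)
{
	if (!from_loader || from_loader[0] == '\0')
		return builtin;
	return from_loader;
}

int main(void)
{
	char empty_but_sized[8] = "";	/* size 8, content empty */

	printf("%s\n", pick_cmdline(empty_but_sized, "console=ttyS0"));
	printf("%s\n", pick_cmdline("root=/dev/vda1", "console=ttyS0"));
	return 0;
}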
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index e8d69bd548f3fe..9c3613e6af158f 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -40,7 +40,8 @@ int __init efi_tpm_eventlog_init(void)
+ {
+ struct linux_efi_tpm_eventlog *log_tbl;
+ struct efi_tcg2_final_events_table *final_tbl;
+- int tbl_size;
++ unsigned int tbl_size;
++ int final_tbl_size;
+ int ret = 0;
+
+ if (efi.tpm_log == EFI_INVALID_TABLE_ADDR) {
+@@ -80,26 +81,26 @@ int __init efi_tpm_eventlog_init(void)
+ goto out;
+ }
+
+- tbl_size = 0;
++ final_tbl_size = 0;
+ if (final_tbl->nr_events != 0) {
+ void *events = (void *)efi.tpm_final_log
+ + sizeof(final_tbl->version)
+ + sizeof(final_tbl->nr_events);
+
+- tbl_size = tpm2_calc_event_log_size(events,
+- final_tbl->nr_events,
+- log_tbl->log);
++ final_tbl_size = tpm2_calc_event_log_size(events,
++ final_tbl->nr_events,
++ log_tbl->log);
+ }
+
+- if (tbl_size < 0) {
++ if (final_tbl_size < 0) {
+ pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
+ ret = -EINVAL;
+ goto out_calc;
+ }
+
+ memblock_reserve(efi.tpm_final_log,
+- tbl_size + sizeof(*final_tbl));
+- efi_tpm_final_log_size = tbl_size;
++ final_tbl_size + sizeof(*final_tbl));
++ efi_tpm_final_log_size = final_tbl_size;
+
+ out_calc:
+ early_memunmap(final_tbl, sizeof(*final_tbl));
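The split into an unsigned tbl_size plus a signed final_tbl_size matters because a negative error code stored in an unsigned variable can never satisfy a "< 0" check. A minimal standalone demonstration (illustration, not part of the patch):

#include <stdio.h>

int main(void)
{
	int err = -22;			/* e.g. -EINVAL from a parser */
	unsigned int as_unsigned = err;	/* wraps to 4294967274 */
	int as_signed = err;		/* keeps its sign */

	/* The first test is always false; compilers typically warn. */
	printf("unsigned < 0: %d\n", (int)(as_unsigned < 0));
	printf("signed   < 0: %d\n", (int)(as_signed < 0));
	return 0;
}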
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index d304913314e494..24e666d5c3d1a2 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -918,7 +918,8 @@ static __init int gsmi_init(void)
+ gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info);
+ if (IS_ERR(gsmi_dev.pdev)) {
+ printk(KERN_ERR "gsmi: unable to register platform device\n");
+- return PTR_ERR(gsmi_dev.pdev);
++ ret = PTR_ERR(gsmi_dev.pdev);
++ goto out_unregister;
+ }
+
+ /* SMI access needs to be serialized */
+@@ -1056,10 +1057,11 @@ static __init int gsmi_init(void)
+ gsmi_buf_free(gsmi_dev.name_buf);
+ kmem_cache_destroy(gsmi_dev.mem_pool);
+ platform_device_unregister(gsmi_dev.pdev);
+- pr_info("gsmi: failed to load: %d\n", ret);
++out_unregister:
+ #ifdef CONFIG_PM
+ platform_driver_unregister(&gsmi_driver_info);
+ #endif
++ pr_info("gsmi: failed to load: %d\n", ret);
+ return ret;
+ }
+
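The shape of that fix, reduced to a standalone sketch of goto-based unwinding (the resources and error values below are invented): every failure path releases exactly what was acquired before it, and the final log line is reached from all of them.

#include <stdio.h>
#include <stdlib.h>

static int fake_init(int fail_at)
{
	int ret = 0;
	char *a, *b;

	a = malloc(16);
	if (!a) {
		ret = -1;
		goto out;
	}

	if (fail_at == 1) {
		ret = -2;
		goto out_free_a;	/* like the new out_unregister label */
	}

	b = malloc(16);
	if (!b) {
		ret = -3;
		goto out_free_a;
	}

	return 0;	/* success: resources stay live, as in a driver */

out_free_a:
	free(a);
out:
	printf("fake_init failed: %d\n", ret);	/* logged on every path */
	return ret;
}

int main(void)
{
	return fake_init(1) ? 1 : 0;
}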
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index 5170fe7599cdf8..d5909a4f0433c1 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -99,11 +99,13 @@ static void exar_set_value(struct gpio_chip *chip, unsigned int offset,
+ struct exar_gpio_chip *exar_gpio = gpiochip_get_data(chip);
+ unsigned int addr = exar_offset_to_lvl_addr(exar_gpio, offset);
+ unsigned int bit = exar_offset_to_bit(exar_gpio, offset);
++ unsigned int bit_value = value ? BIT(bit) : 0;
+
+- if (value)
+- regmap_set_bits(exar_gpio->regmap, addr, BIT(bit));
+- else
+- regmap_clear_bits(exar_gpio->regmap, addr, BIT(bit));
++ /*
++ * regmap_write_bits() forces the value to be written even when an
++ * external pull up/down makes the register read back as already set.
++ */
++ regmap_write_bits(exar_gpio->regmap, addr, BIT(bit), bit_value);
+ }
+
+ static int exar_direction_output(struct gpio_chip *chip, unsigned int offset,
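Why a forced masked write differs from set/clear helpers here, in a standalone emulation (illustration, not part of the patch; the register model is invented): a helper that consults the current register value will skip the bus write when an external pull up/down already makes the bit read back as set, while a masked write always reaches the hardware.

#include <stdio.h>
#include <stdint.h>

static uint32_t reg;	/* what software reads back */
static int writes;	/* how many bus writes happened */

static void update_bits_lazy(uint32_t mask, uint32_t val)
{
	if ((reg & mask) == (val & mask))
		return;			/* looks already set: skip */
	reg = (reg & ~mask) | (val & mask);
	writes++;
}

static void write_bits_forced(uint32_t mask, uint32_t val)
{
	reg = (reg & ~mask) | (val & mask);
	writes++;			/* always reaches the hardware */
}

int main(void)
{
	reg = 1;	/* external pull-up makes bit 0 read as 1 */

	update_bits_lazy(1, 1);
	printf("lazy:   %d write(s)\n", writes);	/* 0: skipped */

	write_bits_forced(1, 1);
	printf("forced: %d write(s)\n", writes);	/* 1 */
	return 0;
}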
+diff --git a/drivers/gpio/gpio-zevio.c b/drivers/gpio/gpio-zevio.c
+index 2de61337ad3b54..d7230fd83f5d68 100644
+--- a/drivers/gpio/gpio-zevio.c
++++ b/drivers/gpio/gpio-zevio.c
+@@ -11,6 +11,7 @@
+ #include <linux/io.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
++#include <linux/property.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+
+@@ -169,6 +170,7 @@ static const struct gpio_chip zevio_gpio_chip = {
+ /* Initialization */
+ static int zevio_gpio_probe(struct platform_device *pdev)
+ {
++ struct device *dev = &pdev->dev;
+ struct zevio_gpio *controller;
+ int status, i;
+
+@@ -180,6 +182,10 @@ static int zevio_gpio_probe(struct platform_device *pdev)
+ controller->chip = zevio_gpio_chip;
+ controller->chip.parent = &pdev->dev;
+
++ controller->chip.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", dev_fwnode(dev));
++ if (!controller->chip.label)
++ return -ENOMEM;
++
+ controller->regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(controller->regs))
+ return dev_err_probe(&pdev->dev, PTR_ERR(controller->regs),
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+index 43f3e72fb247a7..aa66c501580b98 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+@@ -158,7 +158,7 @@ static int aca_smu_get_valid_aca_banks(struct amdgpu_device *adev, enum aca_smu_
+ return -EINVAL;
+ }
+
+- if (start + count >= max_count)
++ if (start + count > max_count)
+ return -EINVAL;
+
+ count = min_t(int, count, max_count);
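The off-by-one in standalone form (illustration, not part of the patch): with >=, a bank range that exactly fills the array is rejected even though it is valid.

#include <stdio.h>

static int check_ge(int start, int count, int max)
{
	return (start + count >= max) ? -1 : 0;	/* old check */
}

static int check_gt(int start, int count, int max)
{
	return (start + count > max) ? -1 : 0;	/* fixed check */
}

int main(void)
{
	/* Reading 4 banks starting at index 0 from an array of 4. */
	printf("old  (>=) on exact fit: %d\n", check_ge(0, 4, 4));	/* -1 */
	printf("fixed (>) on exact fit: %d\n", check_gt(0, 4, 4));	/* 0 */
	return 0;
}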
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index c272461d70a9a0..b397c783867391 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -850,6 +850,9 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
++ if (!kiq_ring->sched.ready || adev->job_hang)
++ return 0;
++
+ ring_funcs = kzalloc(sizeof(*ring_funcs), GFP_KERNEL);
+ if (!ring_funcs)
+ return -ENOMEM;
+@@ -874,8 +877,14 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
+
+ kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES, 0, 0);
+
+- if (kiq_ring->sched.ready && !adev->job_hang)
+- r = amdgpu_ring_test_helper(kiq_ring);
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run

++ * this to ensure that the unmap-queues packet submitted above was
++ * processed successfully before returning.
++ */
++ r = amdgpu_ring_test_helper(kiq_ring);
+
+ spin_unlock(&kiq->ring_lock);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 4bd61c169ca8d4..ca8091fd3a24f4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1757,11 +1757,13 @@ int amdgpu_discovery_get_nps_info(struct amdgpu_device *adev,
+
+ switch (le16_to_cpu(nps_info->v1.header.version_major)) {
+ case 1:
++ mem_ranges = kvcalloc(nps_info->v1.count,
++ sizeof(*mem_ranges),
++ GFP_KERNEL);
++ if (!mem_ranges)
++ return -ENOMEM;
+ *nps_type = nps_info->v1.nps_type;
+ *range_cnt = nps_info->v1.count;
+- mem_ranges = kvzalloc(
+- *range_cnt * sizeof(struct amdgpu_gmc_memrange),
+- GFP_KERNEL);
+ for (i = 0; i < *range_cnt; i++) {
+ mem_ranges[i].base_address =
+ nps_info->v1.instance_info[i].base_address;
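The reordering matters because the old code dereferenced the allocation without a NULL check and computed count * size by hand. A standalone sketch of the fixed pattern (illustration, not part of the patch), using calloc() as the overflow-checking, zeroing allocator:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct memrange { uint64_t base, limit; };

/* Allocate first, bail out cleanly on failure, and only then fill
 * in the output data, so callers never see a half-written result. */
static struct memrange *get_ranges(uint32_t count)
{
	struct memrange *r = calloc(count, sizeof(*r));

	if (!r)
		return NULL;
	for (uint32_t i = 0; i < count; i++)
		r[i].base = (uint64_t)i << 30;	/* 1 GiB apart */
	return r;
}

int main(void)
{
	struct memrange *r = get_ranges(4);

	if (!r)
		return 1;
	printf("range[3].base = %llu\n", (unsigned long long)r[3].base);
	free(r);
	return 0;
}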
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 3ff39d3ec317c0..0058f1796c75c9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -522,6 +522,17 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
++ if (!kiq_ring->sched.ready || adev->job_hang)
++ return 0;
++ /*
++ * This is a workaround: only skip the kiq_ring test
++ * during RAS recovery in the suspend stage for gfx9.4.3.
++ */
++ if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
++ amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) &&
++ amdgpu_ras_in_recovery(adev))
++ return 0;
++
+ spin_lock(&kiq->ring_lock);
+ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+ adev->gfx.num_compute_rings)) {
+@@ -535,20 +546,15 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.compute_ring[j],
+ RESET_QUEUES, 0, 0);
+ }
+-
+- /**
+- * This is workaround: only skip kiq_ring test
+- * during ras recovery in suspend stage for gfx9.4.3
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the unmap-queues packet submitted above was
++ * processed successfully before returning.
+ */
+- if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
+- amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) &&
+- amdgpu_ras_in_recovery(adev)) {
+- spin_unlock(&kiq->ring_lock);
+- return 0;
+- }
++ r = amdgpu_ring_test_helper(kiq_ring);
+
+- if (kiq_ring->sched.ready && !adev->job_hang)
+- r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+
+ return r;
+@@ -576,8 +582,11 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
+- spin_lock(&kiq->ring_lock);
++ if (!adev->gfx.kiq[0].ring.sched.ready || adev->job_hang)
++ return 0;
++
+ if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) {
++ spin_lock(&kiq->ring_lock);
+ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+ adev->gfx.num_gfx_rings)) {
+ spin_unlock(&kiq->ring_lock);
+@@ -590,11 +599,17 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.gfx_ring[j],
+ PREEMPT_QUEUES, 0, 0);
+ }
+- }
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
+
+- if (adev->gfx.kiq[0].ring.sched.ready && !adev->job_hang)
++ /*
++ * Ring test will do a basic scratch register change check.
++ * Just run this to ensure that the unmap-queues packet submitted
++ * above was processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+- spin_unlock(&kiq->ring_lock);
++ spin_unlock(&kiq->ring_lock);
++ }
+
+ return r;
+ }
+@@ -699,7 +714,13 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev, int xcc_id)
+ kiq->pmf->kiq_map_queues(kiq_ring,
+ &adev->gfx.compute_ring[j]);
+ }
+-
++ /* Submit map queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the map-queues packet submitted above was
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+ if (r)
+@@ -750,7 +771,13 @@ int amdgpu_gfx_enable_kgq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.gfx_ring[j]);
+ }
+ }
+-
++ /* Submit map queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the map-queues packet submitted above was
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+ if (r)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index b4658c7db0e167..618972c43aac53 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -4823,6 +4823,13 @@ static int gfx_v8_0_kcq_disable(struct amdgpu_device *adev)
+ amdgpu_ring_write(kiq_ring, 0);
+ amdgpu_ring_write(kiq_ring, 0);
+ }
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the unmap-queues packet submitted above was
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ if (r)
+ DRM_ERROR("KCQ disable failed\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index b55041f38cec9b..0dbc16ec25c82a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -668,11 +668,12 @@ void jpeg_v4_0_3_dec_ring_insert_start(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, 0x62a04); /* PCTL0_MMHUB_DEEPSLEEP_IB */
+- }
+
+- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+- 0, 0, PACKETJ_TYPE0));
+- amdgpu_ring_write(ring, 0x80004000);
++ amdgpu_ring_write(ring,
++ PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0,
++ 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, 0x80004000);
++ }
+ }
+
+ /**
+@@ -688,11 +689,12 @@ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, 0x62a04);
+- }
+
+- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+- 0, 0, PACKETJ_TYPE0));
+- amdgpu_ring_write(ring, 0x00004000);
++ amdgpu_ring_write(ring,
++ PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0,
++ 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, 0x00004000);
++ }
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 8343b3e4de7b58..378ce360b861b2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -312,8 +312,8 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ attr_sdma);
+ struct kfd_sdma_activity_handler_workarea sdma_activity_work_handler;
+
+- INIT_WORK(&sdma_activity_work_handler.sdma_activity_work,
+- kfd_sdma_activity_worker);
++ INIT_WORK_ONSTACK(&sdma_activity_work_handler.sdma_activity_work,
++ kfd_sdma_activity_worker);
+
+ sdma_activity_work_handler.pdd = pdd;
+ sdma_activity_work_handler.sdma_activity_counter = 0;
+@@ -321,6 +321,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ schedule_work(&sdma_activity_work_handler.sdma_activity_work);
+
+ flush_work(&sdma_activity_work_handler.sdma_activity_work);
++ destroy_work_on_stack(&sdma_activity_work_handler.sdma_activity_work);
+
+ return snprintf(buffer, PAGE_SIZE, "%llu\n",
+ (sdma_activity_work_handler.sdma_activity_counter)/
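The on-stack work item must be flushed before the stack frame dies (and, with debug objects enabled, torn down via destroy_work_on_stack()). A portable analog (illustration, not part of the patch) with a thread writing into its creator's frame; compile with -lpthread:

#include <stdio.h>
#include <pthread.h>

struct job { long counter; };

static void *worker(void *arg)
{
	struct job *j = arg;

	j->counter = 42;	/* writes into the caller's stack */
	return NULL;
}

int main(void)
{
	struct job j = { 0 };	/* lives on this stack frame */
	pthread_t t;

	pthread_create(&t, NULL, worker, &j);
	pthread_join(&t, NULL);	/* must complete before &j dies,
				 * cf. flush_work() above */
	printf("counter = %ld\n", j.counter);
	return 0;
}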
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 339bdfb7af2f81..9451552184966b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1681,6 +1681,26 @@ dm_allocate_gpu_mem(
+ return da->cpu_ptr;
+ }
+
++void
++dm_free_gpu_mem(
++ struct amdgpu_device *adev,
++ enum dc_gpu_mem_alloc_type type,
++ void *pvMem)
++{
++ struct dal_allocation *da;
++
++ /* walk the da list in DM */
++ list_for_each_entry(da, &adev->dm.da_list, list) {
++ if (pvMem == da->cpu_ptr) {
++ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
++ list_del(&da->list);
++ kfree(da);
++ break;
++ }
++ }
++
++}
++
+ static enum dmub_status
+ dm_dmub_send_vbios_gpint_command(struct amdgpu_device *adev,
+ enum dmub_gpint_command command_code,
+@@ -1747,16 +1767,20 @@ static struct dml2_soc_bb *dm_dmub_get_vbios_bounding_box(struct amdgpu_device *
+ /* Send the chunk */
+ ret = dm_dmub_send_vbios_gpint_command(adev, send_addrs[i], chunk, 30000);
+ if (ret != DMUB_STATUS_OK)
+- /* No need to free bb here since it shall be done in dm_sw_fini() */
+- return NULL;
++ goto free_bb;
+ }
+
+ /* Now ask DMUB to copy the bb */
+ ret = dm_dmub_send_vbios_gpint_command(adev, DMUB_GPINT__BB_COPY, 1, 200000);
+ if (ret != DMUB_STATUS_OK)
+- return NULL;
++ goto free_bb;
+
+ return bb;
++
++free_bb:
++ dm_free_gpu_mem(adev, DC_MEM_ALLOC_TYPE_GART, (void *) bb);
++ return NULL;
++
+ }
+
+ static enum dmub_ips_disable_type dm_get_default_ips_mode(
+@@ -2520,11 +2544,11 @@ static int dm_sw_fini(void *handle)
+ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+ list_del(&da->list);
+ kfree(da);
++ adev->dm.bb_from_dmub = NULL;
+ break;
+ }
+ }
+
+- adev->dm.bb_from_dmub = NULL;
+
+ kfree(adev->dm.dmub_fb_info);
+ adev->dm.dmub_fb_info = NULL;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 87a8d1d4f9a1b6..86b10471f2bdac 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -1004,6 +1004,9 @@ void *dm_allocate_gpu_mem(struct amdgpu_device *adev,
+ enum dc_gpu_mem_alloc_type type,
+ size_t size,
+ long long *addr);
++void dm_free_gpu_mem(struct amdgpu_device *adev,
++ enum dc_gpu_mem_alloc_type type,
++ void *addr);
+
+ bool amdgpu_dm_is_headless(struct amdgpu_device *adev);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index df3d124c4d7a10..5ca9cdbac3d726 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -35,8 +35,8 @@
+ #include "amdgpu_dm_trace.h"
+ #include "amdgpu_dm_debugfs.h"
+
+-#define HPD_DETECTION_PERIOD_uS 5000000
+-#define HPD_DETECTION_TIME_uS 1000
++#define HPD_DETECTION_PERIOD_uS 2000000
++#define HPD_DETECTION_TIME_uS 100000
+
+ void amdgpu_dm_crtc_handle_vblank(struct amdgpu_crtc *acrtc)
+ {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 28db4c1a0a2ad8..3dfbdf4ed3ee04 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -1055,17 +1055,8 @@ void dm_helpers_free_gpu_mem(
+ void *pvMem)
+ {
+ struct amdgpu_device *adev = ctx->driver_context;
+- struct dal_allocation *da;
+-
+- /* walk the da list in DM */
+- list_for_each_entry(da, &adev->dm.da_list, list) {
+- if (pvMem == da->cpu_ptr) {
+- amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+- list_del(&da->list);
+- kfree(da);
+- break;
+- }
+- }
++
++ dm_free_gpu_mem(adev, type, pvMem);
+ }
+
+ bool dm_helpers_dmub_outbox_interrupt_control(struct dc_context *ctx, bool enable)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 905c11af017164..c6865851e7325d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1120,6 +1120,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ int i, k, ret;
+ bool debugfs_overwrite = false;
+ uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link);
++ struct drm_connector_state *new_conn_state;
+
+ memset(params, 0, sizeof(params));
+
+@@ -1127,7 +1128,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ return PTR_ERR(mst_state);
+
+ /* Set up params */
+- DRM_DEBUG_DRIVER("%s: MST_DSC Set up params for %d streams\n", __func__, dc_state->stream_count);
++ DRM_DEBUG_DRIVER("%s: MST_DSC Try to set up params from %d streams\n", __func__, dc_state->stream_count);
+ for (i = 0; i < dc_state->stream_count; i++) {
+ struct dc_dsc_policy dsc_policy = {0};
+
+@@ -1143,6 +1144,14 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ if (!aconnector->mst_output_port)
+ continue;
+
++ new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base);
++
++ if (!new_conn_state) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC Skip the stream 0x%p with invalid new_conn_state\n",
++ __func__, __LINE__, stream);
++ continue;
++ }
++
+ stream->timing.flags.DSC = 0;
+
+ params[count].timing = &stream->timing;
+@@ -1175,6 +1184,8 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ count++;
+ }
+
++ DRM_DEBUG_DRIVER("%s: MST_DSC Params set up for %d streams\n", __func__, count);
++
+ if (count == 0) {
+ ASSERT(0);
+ return 0;
+@@ -1302,7 +1313,7 @@ static bool is_dsc_need_re_compute(
+ continue;
+
+ aconnector = (struct amdgpu_dm_connector *) stream->dm_stream_context;
+- if (!aconnector || !aconnector->dsc_aux)
++ if (!aconnector)
+ continue;
+
+ stream_on_link[new_stream_on_link_num] = aconnector;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 4a88412fdfab13..29ef24f9a40f2c 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -881,6 +881,9 @@ void hwss_setup_dpp(union block_sequence_params *params)
+ struct dpp *dpp = pipe_ctx->plane_res.dpp;
+ struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+
++ if (!plane_state)
++ return;
++
+ if (dpp && dpp->funcs->dpp_setup) {
+ // program the input csc
+ dpp->funcs->dpp_setup(dpp,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 936c0ec076bc4c..18c1e529249963 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1922,9 +1922,9 @@ static void dcn20_program_pipe(
+ dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size);
+ }
+
+- if (pipe_ctx->update_flags.raw ||
+- (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
+- pipe_ctx->stream->update_flags.raw)
++ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.raw ||
++ pipe_ctx->plane_state->update_flags.raw ||
++ pipe_ctx->stream->update_flags.raw))
+ dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
+
+ if (pipe_ctx->update_flags.bits.enable ||
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 88e4aa5830f3c7..5c6bd7be25c0e0 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -2561,6 +2561,8 @@ static int __maybe_unused anx7625_runtime_pm_suspend(struct device *dev)
+ mutex_lock(&ctx->lock);
+
+ anx7625_stop_dp_work(ctx);
++ if (!ctx->pdata.panel_bridge)
++ anx7625_remove_edid(ctx);
+ anx7625_power_standby(ctx);
+
+ mutex_unlock(&ctx->lock);
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 1e1c06fdf20645..bb449efac2f4e7 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3101,6 +3101,8 @@ static __maybe_unused int it6505_bridge_suspend(struct device *dev)
+ {
+ struct it6505 *it6505 = dev_get_drvdata(dev);
+
++ it6505_remove_edid(it6505);
++
+ return it6505_poweroff(it6505);
+ }
+
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index b8b7a227addfb3..c4cc7f90d112cc 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1695,6 +1695,13 @@ static const struct drm_edid *tc_edid_read(struct drm_bridge *bridge,
+ struct drm_connector *connector)
+ {
+ struct tc_data *tc = bridge_to_tc(bridge);
++ int ret;
++
++ ret = tc_get_display_props(tc);
++ if (ret < 0) {
++ dev_err(tc->dev, "failed to read display props: %d\n", ret);
++ return 0;
++ }
+
+ return drm_edid_read_ddc(connector, &tc->aux.ddc);
+ }
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index f917b259b33421..26fc813d37edfa 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -138,7 +138,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
+ */
+ struct drm_file *drm_file_alloc(struct drm_minor *minor)
+ {
+- static atomic64_t ident = ATOMIC_INIT(0);
++ static atomic64_t ident = ATOMIC64_INIT(0);
+ struct drm_device *dev = minor->dev;
+ struct drm_file *file;
+ int ret;
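The one-word fix makes the initialiser match the 64-bit atomic type. A C11 sketch of the intended pattern, a 64-bit monotonic id source (illustration, not part of the patch):

#include <stdio.h>
#include <stdatomic.h>
#include <inttypes.h>

/* Declared and zero-initialised as a 64-bit atomic, so ids never
 * truncate or alias, mirroring atomic64_t with ATOMIC64_INIT(). */
static atomic_uint_fast64_t ident;

static uint_fast64_t next_id(void)
{
	/* fetch-then-add: first caller gets 1, ids are never reused */
	return atomic_fetch_add(&ident, 1) + 1;
}

int main(void)
{
	printf("%" PRIuFAST64 "\n", next_id());	/* 1 */
	printf("%" PRIuFAST64 "\n", next_id());	/* 2 */
	return 0;
}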
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index 5ace481c190117..1ed68d3cd80bad 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -151,7 +151,7 @@ static void show_leaks(struct drm_mm *mm) { }
+
+ INTERVAL_TREE_DEFINE(struct drm_mm_node, rb,
+ u64, __subtree_last,
+- START, LAST, static inline, drm_mm_interval_tree)
++ START, LAST, static inline __maybe_unused, drm_mm_interval_tree)
+
+ struct drm_mm_node *
+ __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 0830cae9a4d0f5..2d84d7ea1ab7a0 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -403,7 +403,6 @@ static const struct dmi_system_id orientation_data[] = {
+ }, { /* Lenovo Yoga Tab 3 X90F */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ },
+ .driver_data = (void *)&lcd1600x2560_rightside_up,
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index 6500f3999c5fa5..19ec67a5a918e3 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -538,6 +538,16 @@ static int etnaviv_bind(struct device *dev)
+ priv->num_gpus = 0;
+ priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+
++ /*
++ * If the GPU is part of a system with DMA addressing limitations,
++ * request pages for our SHM backend buffers from the DMA32 zone to
++ * hopefully avoid performance-killing SWIOTLB bounce buffering.
++ */
++ if (dma_addressing_limited(dev)) {
++ priv->shm_gfp_mask |= GFP_DMA32;
++ priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
++ }
++
+ priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev);
+ if (IS_ERR(priv->cmdbuf_suballoc)) {
+ dev_err(drm->dev, "Failed to create cmdbuf suballocator\n");
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 7c7f97793ddd0c..df0bc828a23483 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -839,14 +839,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ if (ret)
+ goto fail;
+
+- /*
+- * If the GPU is part of a system with DMA addressing limitations,
+- * request pages for our SHM backend buffers from the DMA32 zone to
+- * hopefully avoid performance killing SWIOTLB bounce buffering.
+- */
+- if (dma_addressing_limited(gpu->dev))
+- priv->shm_gfp_mask |= GFP_DMA32;
+-
+ /* Create buffer: */
+ ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer,
+ PAGE_SIZE);
+@@ -1330,6 +1322,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
+ {
+ u32 val;
+
++ mutex_lock(&gpu->lock);
++
+ /* disable clock gating */
+ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
+ val &= ~VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
+@@ -1341,6 +1335,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
+ gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
+
+ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_PRE);
++
++ mutex_unlock(&gpu->lock);
+ }
+
+ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+@@ -1350,13 +1346,9 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+ unsigned int i;
+ u32 val;
+
+- sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
+-
+- for (i = 0; i < submit->nr_pmrs; i++) {
+- const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
++ mutex_lock(&gpu->lock);
+
+- *pmr->bo_vma = pmr->sequence;
+- }
++ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
+
+ /* disable debug register */
+ val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
+@@ -1367,6 +1359,14 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
+ val |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
+ gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val);
++
++ mutex_unlock(&gpu->lock);
++
++ for (i = 0; i < submit->nr_pmrs; i++) {
++ const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
++
++ *pmr->bo_vma = pmr->sequence;
++ }
+ }
+
+
+diff --git a/drivers/gpu/drm/fsl-dcu/Kconfig b/drivers/gpu/drm/fsl-dcu/Kconfig
+index 5ca71ef8732590..c9ee98693b48a4 100644
+--- a/drivers/gpu/drm/fsl-dcu/Kconfig
++++ b/drivers/gpu/drm/fsl-dcu/Kconfig
+@@ -8,6 +8,7 @@ config DRM_FSL_DCU
+ select DRM_PANEL
+ select REGMAP_MMIO
+ select VIDEOMODE_HELPERS
++ select MFD_SYSCON if SOC_LS1021A
+ help
+ Choose this option if you have a Freescale DCU chipset.
+ If M is selected the module will be called fsl-dcu-drm.
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+index ab6c0c6cd0e2e3..c4c3d41ee53097 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+@@ -100,6 +100,7 @@ static void fsl_dcu_irq_uninstall(struct drm_device *dev)
+ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
+ {
+ struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
++ struct regmap *scfg;
+ int ret;
+
+ ret = fsl_dcu_drm_modeset_init(fsl_dev);
+@@ -108,6 +109,20 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
+ return ret;
+ }
+
++ scfg = syscon_regmap_lookup_by_compatible("fsl,ls1021a-scfg");
++ if (PTR_ERR(scfg) != -ENODEV) {
++ /*
++ * For simplicity, enable the PIXCLK unconditionally,
++ * resulting in increased power consumption. Disabling
++ * the clock in PM or on unload could be implemented as
++ * a future improvement.
++ */
++ ret = regmap_update_bits(scfg, SCFG_PIXCLKCR, SCFG_PIXCLKCR_PXCEN,
++ SCFG_PIXCLKCR_PXCEN);
++ if (ret < 0)
++ return dev_err_probe(dev->dev, ret, "failed to enable pixclk\n");
++ }
++
+ ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
+ if (ret < 0) {
+ dev_err(dev->dev, "failed to initialize vblank\n");
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+index e2049a0e8a92a5..566396013c04a5 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+@@ -160,6 +160,9 @@
+ #define FSL_DCU_ARGB4444 12
+ #define FSL_DCU_YUV422 14
+
++#define SCFG_PIXCLKCR 0x28
++#define SCFG_PIXCLKCR_PXCEN BIT(31)
++
+ #define VF610_LAYER_REG_NUM 9
+ #define LS1021A_LAYER_REG_NUM 10
+
+diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
+index 4deeac7ed40a4d..2bbdc05a3b9779 100644
+--- a/drivers/gpu/drm/imagination/pvr_ccb.c
++++ b/drivers/gpu/drm/imagination/pvr_ccb.c
+@@ -321,7 +321,7 @@ static int pvr_kccb_reserve_slot_sync(struct pvr_device *pvr_dev)
+ bool reserved = false;
+ u32 retries = 0;
+
+- while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT ||
++ while (time_before(jiffies, start_timestamp + RESERVE_SLOT_TIMEOUT) ||
+ retries < RESERVE_SLOT_MIN_RETRIES) {
+ reserved = pvr_kccb_try_reserve_slot(pvr_dev);
+ if (reserved)
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
+index 7bd6ba4c6e8ab6..363f885a709826 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.c
++++ b/drivers/gpu/drm/imagination/pvr_vm.c
+@@ -654,9 +654,7 @@ pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
+
+ xa_lock(&pvr_file->vm_ctx_handles);
+ vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
+- if (vm_ctx)
+- kref_get(&vm_ctx->ref_count);
+-
++ pvr_vm_context_get(vm_ctx);
+ xa_unlock(&pvr_file->vm_ctx_handles);
+
+ return vm_ctx;
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-crtc.c b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+index 31267c00782fc1..af91e45b5d13b7 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-crtc.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+@@ -206,15 +206,13 @@ int dcss_crtc_init(struct dcss_crtc *crtc, struct drm_device *drm)
+ if (crtc->irq < 0)
+ return crtc->irq;
+
+- ret = request_irq(crtc->irq, dcss_crtc_irq_handler,
+- 0, "dcss_drm", crtc);
++ ret = request_irq(crtc->irq, dcss_crtc_irq_handler, IRQF_NO_AUTOEN,
++ "dcss_drm", crtc);
+ if (ret) {
+ dev_err(dcss->dev, "irq request failed with %d.\n", ret);
+ return ret;
+ }
+
+- disable_irq(crtc->irq);
+-
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+index ef29c9a61a4617..99db53e167bd02 100644
+--- a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+@@ -410,14 +410,12 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data)
+ }
+
+ ipu_crtc->irq = ipu_plane_irq(ipu_crtc->plane[0]);
+- ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0,
+- "imx_drm", ipu_crtc);
++ ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler,
++ IRQF_NO_AUTOEN, "imx_drm", ipu_crtc);
+ if (ret < 0) {
+ dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret);
+ return ret;
+ }
+- /* Only enable IRQ when we actually need it to trigger work. */
+- disable_irq(ipu_crtc->irq);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index cb538a262d1c13..db36c81d0f1234 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1505,15 +1505,13 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
+
+ irq = platform_get_irq_byname(pdev, name);
+
+- ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH, name, gmu);
++ ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN, name, gmu);
+ if (ret) {
+ DRM_DEV_ERROR(&pdev->dev, "Unable to get interrupt %s %d\n",
+ name, ret);
+ return ret;
+ }
+
+- disable_irq(irq);
+-
+ return irq;
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+index 1d3e9666c7411e..64c94e919a6980 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+@@ -156,18 +156,6 @@ static const struct dpu_lm_cfg msm8998_lm[] = {
+ .sblk = &msm8998_lm_sblk,
+ .lm_pair = LM_5,
+ .pingpong = PINGPONG_2,
+- }, {
+- .name = "lm_3", .id = LM_3,
+- .base = 0x47000, .len = 0x320,
+- .features = MIXER_MSM8998_MASK,
+- .sblk = &msm8998_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+- }, {
+- .name = "lm_4", .id = LM_4,
+- .base = 0x48000, .len = 0x320,
+- .features = MIXER_MSM8998_MASK,
+- .sblk = &msm8998_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+ }, {
+ .name = "lm_5", .id = LM_5,
+ .base = 0x49000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+index 7a23389a573272..72bd4f7e9e504c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+@@ -155,19 +155,6 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ .lm_pair = LM_5,
+ .pingpong = PINGPONG_2,
+ .dspp = DSPP_2,
+- }, {
+- .name = "lm_3", .id = LM_3,
+- .base = 0x0, .len = 0x320,
+- .features = MIXER_SDM845_MASK,
+- .sblk = &sdm845_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+- .dspp = DSPP_3,
+- }, {
+- .name = "lm_4", .id = LM_4,
+- .base = 0x0, .len = 0x320,
+- .features = MIXER_SDM845_MASK,
+- .sblk = &sdm845_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+ }, {
+ .name = "lm_5", .id = LM_5,
+ .base = 0x49000, .len = 0x320,
+@@ -175,6 +162,7 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+index 68fae048a9a837..260accc151d4b4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+@@ -80,7 +80,7 @@ static u64 _dpu_core_perf_calc_clk(const struct dpu_perf_cfg *perf_cfg,
+
+ mode = &state->adjusted_mode;
+
+- crtc_clk = mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
++ crtc_clk = (u64)mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
+
+ drm_atomic_crtc_for_each_plane(plane, crtc) {
+ pstate = to_dpu_plane_state(plane->state);
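Without the (u64) cast, the three-way product is evaluated in 32-bit arithmetic and only widened afterwards, so large modes overflow. A standalone illustration (not part of the patch; uint32_t is used so the wrap is well defined, whereas the kernel operands are plain ints):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	/* Numbers in the range of a high-refresh 8K mode. */
	uint32_t vtotal = 4400, hdisplay = 7680, vrefresh = 144;

	/* Product computed in 32 bits first: wraps modulo 2^32. */
	uint64_t wrong = vtotal * hdisplay * vrefresh;
	/* First operand widened, as in the hunk above: full value. */
	uint64_t right = (uint64_t)vtotal * hdisplay * vrefresh;

	printf("wrong: %" PRIu64 "\n", wrong);	/*  571080704 */
	printf("right: %" PRIu64 "\n", right);	/* 4866048000 */
	return 0;
}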
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index ea70c1c32d9401..6970b0f7f457c8 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -140,6 +140,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+ struct msm_drm_private *priv = gpu->dev->dev_private;
++ int ret;
+
+ /* We need target support to do devfreq */
+ if (!gpu->funcs->gpu_busy)
+@@ -156,8 +157,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+
+ mutex_init(&df->lock);
+
+- dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
+- DEV_PM_QOS_MIN_FREQUENCY, 0);
++ ret = dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
++ DEV_PM_QOS_MIN_FREQUENCY, 0);
++ if (ret < 0) {
++ DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize QoS\n");
++ return;
++ }
+
+ msm_devfreq_profile.initial_freq = gpu->fast_rate;
+
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+index 060c74a80eb14b..3ea447f6a45b51 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+@@ -443,6 +443,7 @@ gf100_gr_chan_new(struct nvkm_gr *base, struct nvkm_chan *fifoch,
+ ret = gf100_grctx_generate(gr, chan, fifoch->inst);
+ if (ret) {
+ nvkm_error(&base->engine.subdev, "failed to construct context\n");
++ mutex_unlock(&gr->fecs.mutex);
+ return ret;
+ }
+ }
+diff --git a/drivers/gpu/drm/omapdrm/dss/base.c b/drivers/gpu/drm/omapdrm/dss/base.c
+index 050ca7eafac586..556e0f9026bedd 100644
+--- a/drivers/gpu/drm/omapdrm/dss/base.c
++++ b/drivers/gpu/drm/omapdrm/dss/base.c
+@@ -139,21 +139,13 @@ static bool omapdss_device_is_connected(struct omap_dss_device *dssdev)
+ }
+
+ int omapdss_device_connect(struct dss_device *dss,
+- struct omap_dss_device *src,
+ struct omap_dss_device *dst)
+ {
+- dev_dbg(&dss->pdev->dev, "connect(%s, %s)\n",
+- src ? dev_name(src->dev) : "NULL",
++ dev_dbg(&dss->pdev->dev, "connect(%s)\n",
+ dst ? dev_name(dst->dev) : "NULL");
+
+- if (!dst) {
+- /*
+- * The destination is NULL when the source is connected to a
+- * bridge instead of a DSS device. Stop here, we will attach
+- * the bridge later when we will have a DRM encoder.
+- */
+- return src && src->bridge ? 0 : -EINVAL;
+- }
++ if (!dst)
++ return -EINVAL;
+
+ if (omapdss_device_is_connected(dst))
+ return -EBUSY;
+@@ -163,19 +155,14 @@ int omapdss_device_connect(struct dss_device *dss,
+ return 0;
+ }
+
+-void omapdss_device_disconnect(struct omap_dss_device *src,
++void omapdss_device_disconnect(struct dss_device *dss,
+ struct omap_dss_device *dst)
+ {
+- struct dss_device *dss = src ? src->dss : dst->dss;
+-
+- dev_dbg(&dss->pdev->dev, "disconnect(%s, %s)\n",
+- src ? dev_name(src->dev) : "NULL",
++ dev_dbg(&dss->pdev->dev, "disconnect(%s)\n",
+ dst ? dev_name(dst->dev) : "NULL");
+
+- if (!dst) {
+- WARN_ON(!src->bridge);
++ if (WARN_ON(!dst))
+ return;
+- }
+
+ if (!dst->id && !omapdss_device_is_connected(dst)) {
+ WARN_ON(1);
+diff --git a/drivers/gpu/drm/omapdrm/dss/omapdss.h b/drivers/gpu/drm/omapdrm/dss/omapdss.h
+index 040d5a3e33d680..4c22c09c93d523 100644
+--- a/drivers/gpu/drm/omapdrm/dss/omapdss.h
++++ b/drivers/gpu/drm/omapdrm/dss/omapdss.h
+@@ -242,9 +242,8 @@ struct omap_dss_device *omapdss_device_get(struct omap_dss_device *dssdev);
+ void omapdss_device_put(struct omap_dss_device *dssdev);
+ struct omap_dss_device *omapdss_find_device_by_node(struct device_node *node);
+ int omapdss_device_connect(struct dss_device *dss,
+- struct omap_dss_device *src,
+ struct omap_dss_device *dst);
+-void omapdss_device_disconnect(struct omap_dss_device *src,
++void omapdss_device_disconnect(struct dss_device *dss,
+ struct omap_dss_device *dst);
+
+ int omap_dss_get_num_overlay_managers(void);
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index d3eac4817d7687..a982378aa14119 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -307,7 +307,7 @@ static void omap_disconnect_pipelines(struct drm_device *ddev)
+ for (i = 0; i < priv->num_pipes; i++) {
+ struct omap_drm_pipeline *pipe = &priv->pipes[i];
+
+- omapdss_device_disconnect(NULL, pipe->output);
++ omapdss_device_disconnect(priv->dss, pipe->output);
+
+ omapdss_device_put(pipe->output);
+ pipe->output = NULL;
+@@ -325,7 +325,7 @@ static int omap_connect_pipelines(struct drm_device *ddev)
+ int r;
+
+ for_each_dss_output(output) {
+- r = omapdss_device_connect(priv->dss, NULL, output);
++ r = omapdss_device_connect(priv->dss, output);
+ if (r == -EPROBE_DEFER) {
+ omapdss_device_put(output);
+ return r;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index fdae677558f3ef..b9c67e4ca36054 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1402,8 +1402,6 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
+
+ omap_obj = to_omap_bo(obj);
+
+- mutex_lock(&omap_obj->lock);
+-
+ omap_obj->sgt = sgt;
+
+ if (omap_gem_sgt_is_contiguous(sgt, size)) {
+@@ -1418,21 +1416,17 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
+ pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
+ if (!pages) {
+ omap_gem_free_object(obj);
+- obj = ERR_PTR(-ENOMEM);
+- goto done;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ omap_obj->pages = pages;
+ ret = drm_prime_sg_to_page_array(sgt, pages, npages);
+ if (ret) {
+ omap_gem_free_object(obj);
+- obj = ERR_PTR(-ENOMEM);
+- goto done;
++ return ERR_PTR(-ENOMEM);
+ }
+ }
+
+-done:
+- mutex_unlock(&omap_obj->lock);
+ return obj;
+ }
+
+diff --git a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
+index d3baccfe6286b2..06e16a7c14a756 100644
+--- a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
++++ b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
+@@ -917,7 +917,7 @@ static const struct nv3052c_panel_info wl_355608_a8_panel_info = {
+ static const struct spi_device_id nv3052c_ids[] = {
+ { "ltk035c5444t", },
+ { "fs035vg158", },
+- { "wl-355608-a8", },
++ { "rg35xx-plus-panel", },
+ { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(spi, nv3052c_ids);
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index d3bfdfc9cff641..f54e0fa4293789 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -38,6 +38,7 @@
+
+ #define NT35510_CMD_CORRECT_GAMMA BIT(0)
+ #define NT35510_CMD_CONTROL_DISPLAY BIT(1)
++#define NT35510_CMD_SETVCMOFF BIT(2)
+
+ #define MCS_CMD_MAUCCTR 0xF0 /* Manufacturer command enable */
+ #define MCS_CMD_READ_ID1 0xDA
+@@ -721,11 +722,13 @@ static int nt35510_setup_power(struct nt35510 *nt)
+ if (ret)
+ return ret;
+
+- ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
+- NT35510_P1_VCMOFF_LEN,
+- nt->conf->vcmoff);
+- if (ret)
+- return ret;
++ if (nt->conf->cmds & NT35510_CMD_SETVCMOFF) {
++ ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
++ NT35510_P1_VCMOFF_LEN,
++ nt->conf->vcmoff);
++ if (ret)
++ return ret;
++ }
+
+ /* Typically 10 ms */
+ usleep_range(10000, 20000);
+@@ -1319,7 +1322,7 @@ static const struct nt35510_config nt35510_frida_frd400b25025 = {
+ },
+ .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+ MIPI_DSI_MODE_LPM,
+- .cmds = NT35510_CMD_CONTROL_DISPLAY,
++ .cmds = NT35510_CMD_CONTROL_DISPLAY | NT35510_CMD_SETVCMOFF,
+ /* 0x03: AVDD = 6.2V */
+ .avdd = { 0x03, 0x03, 0x03 },
+ /* 0x46: PCK = 2 x Hsync, BTP = 2.5 x VDDB */
+diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+index 2d30da38c2c3e4..3385fd3ef41a47 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
++++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+@@ -38,7 +38,7 @@ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ return PTR_ERR(opp);
+ dev_pm_opp_put(opp);
+
+- err = dev_pm_opp_set_rate(dev, *freq);
++ err = dev_pm_opp_set_rate(dev, *freq);
+ if (!err)
+ ptdev->pfdevfreq.current_frequency = *freq;
+
+@@ -182,6 +182,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
+ * if any and will avoid a switch off by regulator_late_cleanup()
+ */
+ ret = dev_pm_opp_set_opp(dev, opp);
++ dev_pm_opp_put(opp);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+ return ret;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index fd8e44992184fa..b52dd510e0367b 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -177,7 +177,6 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
+ struct panfrost_model {
+ const char *name;
+ u32 id;
+- u32 id_mask;
+ u64 features;
+ u64 issues;
+ struct {
+diff --git a/drivers/gpu/drm/panthor/panthor_devfreq.c b/drivers/gpu/drm/panthor/panthor_devfreq.c
+index c6d3c327cc24c0..ecc7a52bd688ee 100644
+--- a/drivers/gpu/drm/panthor/panthor_devfreq.c
++++ b/drivers/gpu/drm/panthor/panthor_devfreq.c
+@@ -62,14 +62,20 @@ static void panthor_devfreq_update_utilization(struct panthor_devfreq *pdevfreq)
+ static int panthor_devfreq_target(struct device *dev, unsigned long *freq,
+ u32 flags)
+ {
++ struct panthor_device *ptdev = dev_get_drvdata(dev);
+ struct dev_pm_opp *opp;
++ int err;
+
+ opp = devfreq_recommended_opp(dev, freq, flags);
+ if (IS_ERR(opp))
+ return PTR_ERR(opp);
+ dev_pm_opp_put(opp);
+
+- return dev_pm_opp_set_rate(dev, *freq);
++ err = dev_pm_opp_set_rate(dev, *freq);
++ if (!err)
++ ptdev->current_frequency = *freq;
++
++ return err;
+ }
+
+ static void panthor_devfreq_reset(struct panthor_devfreq *pdevfreq)
+@@ -130,6 +136,7 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+ struct panthor_devfreq *pdevfreq;
+ struct dev_pm_opp *opp;
+ unsigned long cur_freq;
++ unsigned long freq = ULONG_MAX;
+ int ret;
+
+ pdevfreq = drmm_kzalloc(&ptdev->base, sizeof(*ptdev->devfreq), GFP_KERNEL);
+@@ -156,12 +163,6 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+
+ cur_freq = clk_get_rate(ptdev->clks.core);
+
+- opp = devfreq_recommended_opp(dev, &cur_freq, 0);
+- if (IS_ERR(opp))
+- return PTR_ERR(opp);
+-
+- panthor_devfreq_profile.initial_freq = cur_freq;
+-
+ /* Regulator coupling only takes care of synchronizing/balancing voltage
+ * updates, but the coupled regulator needs to be enabled manually.
+ *
+@@ -192,16 +193,30 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+ return ret;
+ }
+
++ opp = devfreq_recommended_opp(dev, &cur_freq, 0);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++
++ panthor_devfreq_profile.initial_freq = cur_freq;
++ ptdev->current_frequency = cur_freq;
++
+ /*
+ * Set the recommend OPP this will enable and configure the regulator
+ * if any and will avoid a switch off by regulator_late_cleanup()
+ */
+ ret = dev_pm_opp_set_opp(dev, opp);
++ dev_pm_opp_put(opp);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+ return ret;
+ }
+
++ /* Find the fastest defined rate */
++ opp = dev_pm_opp_find_freq_floor(dev, &freq);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++ ptdev->fast_rate = freq;
++
+ dev_pm_opp_put(opp);
+
+ /*
+diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
+index e388c0472ba783..2109905813e8c4 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.h
++++ b/drivers/gpu/drm/panthor/panthor_device.h
+@@ -66,6 +66,25 @@ struct panthor_irq {
+ atomic_t suspended;
+ };
+
++/**
++ * enum panthor_device_profiling_flags - Profiling state
++ */
++enum panthor_device_profiling_flags {
++ /** @PANTHOR_DEVICE_PROFILING_DISABLED: Profiling is disabled. */
++ PANTHOR_DEVICE_PROFILING_DISABLED = 0,
++
++ /** @PANTHOR_DEVICE_PROFILING_CYCLES: Sampling job cycles. */
++ PANTHOR_DEVICE_PROFILING_CYCLES = BIT(0),
++
++ /** @PANTHOR_DEVICE_PROFILING_TIMESTAMP: Sampling job timestamp. */
++ PANTHOR_DEVICE_PROFILING_TIMESTAMP = BIT(1),
++
++ /** @PANTHOR_DEVICE_PROFILING_ALL: Sampling everything. */
++ PANTHOR_DEVICE_PROFILING_ALL =
++ PANTHOR_DEVICE_PROFILING_CYCLES |
++ PANTHOR_DEVICE_PROFILING_TIMESTAMP,
++};
++
+ /**
+ * struct panthor_device - Panthor device
+ */
+@@ -162,6 +181,15 @@ struct panthor_device {
+ */
+ struct page *dummy_latest_flush;
+ } pm;
++
++ /** @profile_mask: User-set profiling flags for job accounting. */
++ u32 profile_mask;
++
++ /** @current_frequency: Device clock frequency at present. Set by DVFS. */
++ unsigned long current_frequency;
++
++ /** @fast_rate: Maximum device clock frequency. Set by DVFS */
++ unsigned long fast_rate;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index e9234488dc2b43..db0874c8cce078 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -93,6 +93,9 @@
+ #define MIN_CSGS 3
+ #define MAX_CSG_PRIO 0xf
+
++#define NUM_INSTRS_PER_CACHE_LINE (64 / sizeof(u64))
++#define MAX_INSTRS_PER_JOB 24
++
+ struct panthor_group;
+
+ /**
+@@ -476,6 +479,18 @@ struct panthor_queue {
+ */
+ struct list_head in_flight_jobs;
+ } fence_ctx;
++
++ /** @profiling: Job profiling data slots and access information. */
++ struct {
++ /** @slots: Kernel BO holding the slots. */
++ struct panthor_kernel_bo *slots;
++
++ /** @slot_count: Number of jobs the ring buffer can hold at once. */
++ u32 slot_count;
++
++ /** @seqno: Index of the next available profiling information slot. */
++ u32 seqno;
++ } profiling;
+ };
+
+ /**
+@@ -662,6 +677,18 @@ struct panthor_group {
+ struct list_head wait_node;
+ };
+
++struct panthor_job_profiling_data {
++ struct {
++ u64 before;
++ u64 after;
++ } cycles;
++
++ struct {
++ u64 before;
++ u64 after;
++ } time;
++};
++
+ /**
+ * group_queue_work() - Queue a group work
+ * @group: Group to queue the work for.
+@@ -775,6 +802,15 @@ struct panthor_job {
+
+ /** @done_fence: Fence signaled when the job is finished or cancelled. */
+ struct dma_fence *done_fence;
++
++ /** @profiling: Job profiling information. */
++ struct {
++ /** @mask: Current device job profiling enablement bitmask. */
++ u32 mask;
++
++ /** @slot: Job index in the profiling slots BO. */
++ u32 slot;
++ } profiling;
+ };
+
+ static void
+@@ -839,6 +875,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
+
+ panthor_kernel_bo_destroy(queue->ringbuf);
+ panthor_kernel_bo_destroy(queue->iface.mem);
++ panthor_kernel_bo_destroy(queue->profiling.slots);
+
+ /* Release the last_fence we were holding, if any. */
+ dma_fence_put(queue->fence_ctx.last_fence);
+@@ -1989,8 +2026,6 @@ tick_ctx_init(struct panthor_scheduler *sched,
+ }
+ }
+
+-#define NUM_INSTRS_PER_SLOT 16
+-
+ static void
+ group_term_post_processing(struct panthor_group *group)
+ {
+@@ -2829,65 +2864,198 @@ static void group_sync_upd_work(struct work_struct *work)
+ group_put(group);
+ }
+
+-static struct dma_fence *
+-queue_run_job(struct drm_sched_job *sched_job)
++struct panthor_job_ringbuf_instrs {
++ u64 buffer[MAX_INSTRS_PER_JOB];
++ u32 count;
++};
++
++struct panthor_job_instr {
++ u32 profile_mask;
++ u64 instr;
++};
++
++#define JOB_INSTR(__prof, __instr) \
++ { \
++ .profile_mask = __prof, \
++ .instr = __instr, \
++ }
++
++static void
++copy_instrs_to_ringbuf(struct panthor_queue *queue,
++ struct panthor_job *job,
++ struct panthor_job_ringbuf_instrs *instrs)
++{
++ u64 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
++ u64 start = job->ringbuf.start & (ringbuf_size - 1);
++ u64 size, written;
++
++ /*
++ * We need to write a whole slot, including any trailing zeroes
++ * that may come at the end of it. Also, because instrs.buffer has
++ * been zero-initialised, there's no need to pad it with zeroes.
++ */
++ instrs->count = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
++ size = instrs->count * sizeof(u64);
++ WARN_ON(size > ringbuf_size);
++ written = min(ringbuf_size - start, size);
++
++ memcpy(queue->ringbuf->kmap + start, instrs->buffer, written);
++
++ if (written < size)
++ memcpy(queue->ringbuf->kmap,
++ &instrs->buffer[written / sizeof(u64)],
++ size - written);
++}
++
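/*
 * Standalone illustration (not from the patch) of the wrap-around
 * copy performed by copy_instrs_to_ringbuf() above, with a tiny
 * power-of-two ring so the split memcpy is easy to see.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define RING_SIZE 8	/* must be a power of two */

static void ring_copy(uint8_t *ring, uint64_t insert,
		      const uint8_t *src, size_t size)
{
	uint64_t start = insert & (RING_SIZE - 1);	/* cheap modulo */
	size_t first = size < RING_SIZE - start ? size : RING_SIZE - start;

	memcpy(ring + start, src, first);	/* up to the end... */
	if (first < size)			/* ...then wrap to 0 */
		memcpy(ring, src + first, size - first);
}

int main(void)
{
	uint8_t ring[RING_SIZE] = {0};
	const uint8_t payload[5] = {1, 2, 3, 4, 5};

	ring_copy(ring, 6, payload, sizeof(payload));
	for (int i = 0; i < RING_SIZE; i++)
		printf("%d ", ring[i]);		/* 3 4 5 0 0 0 1 2 */
	printf("\n");
	return 0;
}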
++struct panthor_job_cs_params {
++ u32 profile_mask;
++ u64 addr_reg;
++ u64 val_reg;
++ u64 cycle_reg;
++ u64 time_reg;
++ u64 sync_addr;
++ u64 times_addr;
++ u64 cs_start;
++ u64 cs_size;
++ u32 last_flush;
++ u32 waitall_mask;
++};
++
++static void
++get_job_cs_params(struct panthor_job *job, struct panthor_job_cs_params *params)
+ {
+- struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
+ struct panthor_group *group = job->group;
+ struct panthor_queue *queue = group->queues[job->queue_idx];
+ struct panthor_device *ptdev = group->ptdev;
+ struct panthor_scheduler *sched = ptdev->scheduler;
+- u32 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
+- u32 ringbuf_insert = queue->iface.input->insert & (ringbuf_size - 1);
+- u64 addr_reg = ptdev->csif_info.cs_reg_count -
+- ptdev->csif_info.unpreserved_cs_reg_count;
+- u64 val_reg = addr_reg + 2;
+- u64 sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
+- job->queue_idx * sizeof(struct panthor_syncobj_64b);
+- u32 waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
+- struct dma_fence *done_fence;
+- int ret;
+
+- u64 call_instrs[NUM_INSTRS_PER_SLOT] = {
+- /* MOV32 rX+2, cs.latest_flush */
+- (2ull << 56) | (val_reg << 48) | job->call_info.latest_flush,
++ params->addr_reg = ptdev->csif_info.cs_reg_count -
++ ptdev->csif_info.unpreserved_cs_reg_count;
++ params->val_reg = params->addr_reg + 2;
++ params->cycle_reg = params->addr_reg;
++ params->time_reg = params->val_reg;
+
+- /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
+- (36ull << 56) | (0ull << 48) | (val_reg << 40) | (0 << 16) | 0x233,
++ params->sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
++ job->queue_idx * sizeof(struct panthor_syncobj_64b);
++ params->times_addr = panthor_kernel_bo_gpuva(queue->profiling.slots) +
++ (job->profiling.slot * sizeof(struct panthor_job_profiling_data));
++ params->waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
+
+- /* MOV48 rX:rX+1, cs.start */
+- (1ull << 56) | (addr_reg << 48) | job->call_info.start,
++ params->cs_start = job->call_info.start;
++ params->cs_size = job->call_info.size;
++ params->last_flush = job->call_info.latest_flush;
+
+- /* MOV32 rX+2, cs.size */
+- (2ull << 56) | (val_reg << 48) | job->call_info.size,
++ params->profile_mask = job->profiling.mask;
++}
+
+- /* WAIT(0) => waits for FLUSH_CACHE2 instruction */
+- (3ull << 56) | (1 << 16),
++#define JOB_INSTR_ALWAYS(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_DISABLED, (instr))
++#define JOB_INSTR_TIMESTAMP(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_TIMESTAMP, (instr))
++#define JOB_INSTR_CYCLES(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_CYCLES, (instr))
+
++static void
++prepare_job_instrs(const struct panthor_job_cs_params *params,
++ struct panthor_job_ringbuf_instrs *instrs)
++{
++ const struct panthor_job_instr instr_seq[] = {
++ /* MOV32 rX+2, cs.latest_flush */
++ JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->last_flush),
++ /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
++ JOB_INSTR_ALWAYS((36ull << 56) | (0ull << 48) | (params->val_reg << 40) |
++ (0 << 16) | 0x233),
++ /* MOV48 rX:rX+1, cycles_offset */
++ JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, cycles.before))),
++ /* STORE_STATE cycles */
++ JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
++ /* MOV48 rX:rX+1, time_offset */
++ JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, time.before))),
++ /* STORE_STATE timer */
++ JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
++ /* MOV48 rX:rX+1, cs.start */
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->cs_start),
++ /* MOV32 rX+2, cs.size */
++ JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->cs_size),
++ /* WAIT(0) => waits for FLUSH_CACHE2 instruction */
++ JOB_INSTR_ALWAYS((3ull << 56) | (1 << 16)),
+ /* CALL rX:rX+1, rX+2 */
+- (32ull << 56) | (addr_reg << 40) | (val_reg << 32),
+-
++ JOB_INSTR_ALWAYS((32ull << 56) | (params->addr_reg << 40) |
++ (params->val_reg << 32)),
++ /* MOV48 rX:rX+1, cycles_offset */
++ JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, cycles.after))),
++ /* STORE_STATE cycles */
++ JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
++ /* MOV48 rX:rX+1, time_offset */
++ JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, time.after))),
++ /* STORE_STATE timer */
++ JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
+ /* MOV48 rX:rX+1, sync_addr */
+- (1ull << 56) | (addr_reg << 48) | sync_addr,
+-
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->sync_addr),
+ /* MOV48 rX+2, #1 */
+- (1ull << 56) | (val_reg << 48) | 1,
+-
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->val_reg << 48) | 1),
+ /* WAIT(all) */
+- (3ull << 56) | (waitall_mask << 16),
+-
++ JOB_INSTR_ALWAYS((3ull << 56) | (params->waitall_mask << 16)),
+ /* SYNC_ADD64.system_scope.propagate_err.nowait rX:rX+1, rX+2 */
+- (51ull << 56) | (0ull << 48) | (addr_reg << 40) | (val_reg << 32) | (0 << 16) | 1,
++ JOB_INSTR_ALWAYS((51ull << 56) | (0ull << 48) | (params->addr_reg << 40) |
++ (params->val_reg << 32) | (0 << 16) | 1),
++ /* ERROR_BARRIER, so we can recover from faults at job boundaries. */
++ JOB_INSTR_ALWAYS((47ull << 56)),
++ };
++ u32 pad;
+
+- /* ERROR_BARRIER, so we can recover from faults at job
+- * boundaries.
+- */
+- (47ull << 56),
++ instrs->count = 0;
++
++ /* NEED to be cacheline aligned to please the prefetcher. */
++ static_assert(sizeof(instrs->buffer) % 64 == 0,
++ "panthor_job_ringbuf_instrs::buffer is not aligned on a cacheline");
++
++ /* Make sure we have enough storage to store the whole sequence. */
++ static_assert(ALIGN(ARRAY_SIZE(instr_seq), NUM_INSTRS_PER_CACHE_LINE) ==
++ ARRAY_SIZE(instrs->buffer),
++ "instr_seq vs panthor_job_ringbuf_instrs::buffer size mismatch");
++
++ for (u32 i = 0; i < ARRAY_SIZE(instr_seq); i++) {
++ /* If the profile mask of this instruction is not enabled, skip it. */
++ if (instr_seq[i].profile_mask &&
++ !(instr_seq[i].profile_mask & params->profile_mask))
++ continue;
++
++ instrs->buffer[instrs->count++] = instr_seq[i].instr;
++ }
++
++ pad = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
++ memset(&instrs->buffer[instrs->count], 0,
++ (pad - instrs->count) * sizeof(instrs->buffer[0]));
++ instrs->count = pad;
++}
++
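[Editor's note: prepare_job_instrs() above filters a fixed instruction template by profile mask and zero-pads the result to a cacheline boundary, so partially filled lines end in NOPs for the prefetcher. A minimal userspace sketch of that filter-and-pad logic; names here are illustrative and not part of the driver.]

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define INSTRS_PER_LINE 8 /* 64-byte cacheline / sizeof(uint64_t) */
#define ALIGN_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

struct instr { uint32_t mask; uint64_t word; };

static size_t emit(const struct instr *seq, size_t n, uint32_t want,
		   uint64_t *out)
{
	size_t count = 0, padded, i;

	for (i = 0; i < n; i++) {
		/* mask == 0 means "always emit"; otherwise require overlap */
		if (seq[i].mask && !(seq[i].mask & want))
			continue;
		out[count++] = seq[i].word;
	}

	/* zero-pad (NOPs) up to the next full cacheline of instructions */
	padded = ALIGN_UP(count, (size_t)INSTRS_PER_LINE);
	memset(&out[count], 0, (padded - count) * sizeof(out[0]));
	return padded;
}

int main(void)
{
	const struct instr seq[] = {
		{ 0,   0x10 }, /* always emitted */
		{ 0x2, 0x20 }, /* only when bit 1 is requested */
		{ 0x4, 0x30 }, /* only when bit 2 is requested */
		{ 0,   0x40 }, /* always emitted */
	};
	uint64_t buf[16];
	size_t n = emit(seq, 4, 0x2, buf);

	printf("%zu instructions after padding\n", n); /* 8: 3 kept + 5 NOPs */
	return 0;
}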
++static u32 calc_job_credits(u32 profile_mask)
++{
++ struct panthor_job_ringbuf_instrs instrs;
++ struct panthor_job_cs_params params = {
++ .profile_mask = profile_mask,
+ };
+
+- /* Need to be cacheline aligned to please the prefetcher. */
+- static_assert(sizeof(call_instrs) % 64 == 0,
+- "call_instrs is not aligned on a cacheline");
++ prepare_job_instrs(¶ms, &instrs);
++ return instrs.count;
++}
++
++static struct dma_fence *
++queue_run_job(struct drm_sched_job *sched_job)
++{
++ struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
++ struct panthor_group *group = job->group;
++ struct panthor_queue *queue = group->queues[job->queue_idx];
++ struct panthor_device *ptdev = group->ptdev;
++ struct panthor_scheduler *sched = ptdev->scheduler;
++ struct panthor_job_ringbuf_instrs instrs;
++ struct panthor_job_cs_params cs_params;
++ struct dma_fence *done_fence;
++ int ret;
+
+ /* Stream size is zero, nothing to do except making sure all previously
+ * submitted jobs are done before we signal the
+@@ -2914,17 +3082,23 @@ queue_run_job(struct drm_sched_job *sched_job)
+ queue->fence_ctx.id,
+ atomic64_inc_return(&queue->fence_ctx.seqno));
+
+- memcpy(queue->ringbuf->kmap + ringbuf_insert,
+- call_instrs, sizeof(call_instrs));
++ job->profiling.slot = queue->profiling.seqno++;
++ if (queue->profiling.seqno == queue->profiling.slot_count)
++ queue->profiling.seqno = 0;
++
++ job->ringbuf.start = queue->iface.input->insert;
++
++ get_job_cs_params(job, &cs_params);
++ prepare_job_instrs(&cs_params, &instrs);
++ copy_instrs_to_ringbuf(queue, job, &instrs);
++
++ job->ringbuf.end = job->ringbuf.start + (instrs.count * sizeof(u64));
+
+ panthor_job_get(&job->base);
+ spin_lock(&queue->fence_ctx.lock);
+ list_add_tail(&job->node, &queue->fence_ctx.in_flight_jobs);
+ spin_unlock(&queue->fence_ctx.lock);
+
+- job->ringbuf.start = queue->iface.input->insert;
+- job->ringbuf.end = job->ringbuf.start + sizeof(call_instrs);
+-
+ /* Make sure the ring buffer is updated before the INSERT
+ * register.
+ */
+@@ -3017,6 +3191,33 @@ static const struct drm_sched_backend_ops panthor_queue_sched_ops = {
+ .free_job = queue_free_job,
+ };
+
++static u32 calc_profiling_ringbuf_num_slots(struct panthor_device *ptdev,
++ u32 cs_ringbuf_size)
++{
++ u32 min_profiled_job_instrs = U32_MAX;
++ u32 last_flag = fls(PANTHOR_DEVICE_PROFILING_ALL);
++
++ /*
++ * Compute the minimum size of a profiled job's CS. Profiled jobs need
++ * additional instructions to sample performance metrics, so they take
++ * up more room in the queue's ring buffer, and fewer of them can be
++ * in flight at once. The number of profiling slots we need to
++ * allocate is therefore the maximum number of profiled jobs that can
++ * sit in the ring buffer simultaneously, reached when every job uses
++ * the smallest profiled CS.
++ * This has to be computed separately for each job profiling flag, but
++ * not for the profiling-disabled case, since unprofiled jobs don't
++ * need to track any of this.
++ */
++ for (u32 i = 0; i < last_flag; i++) {
++ min_profiled_job_instrs =
++ min(min_profiled_job_instrs, calc_job_credits(BIT(i)));
++ }
++
++ return DIV_ROUND_UP(cs_ringbuf_size, min_profiled_job_instrs * sizeof(u64));
++}
++
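[Editor's note: to put rough numbers on the comment above, with sizes that are hypothetical rather than taken from the driver: given a 32 KiB ring buffer and a smallest profiled sequence of 16 instructions, at most 256 profiled jobs fit at once, so 256 profiling slots are allocated.]

#include <stdint.h>
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	uint32_t ringbuf_size = 32768; /* hypothetical 32 KiB ring buffer */
	uint32_t min_instrs = 16;      /* hypothetical smallest profiled CS */
	uint32_t slots = DIV_ROUND_UP(ringbuf_size,
				      min_instrs * (uint32_t)sizeof(uint64_t));

	printf("%u profiling slots\n", slots); /* prints: 256 */
	return 0;
}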
+ static struct panthor_queue *
+ group_create_queue(struct panthor_group *group,
+ const struct drm_panthor_queue_create *args)
+@@ -3070,9 +3271,35 @@ group_create_queue(struct panthor_group *group,
+ goto err_free_queue;
+ }
+
++ queue->profiling.slot_count =
++ calc_profiling_ringbuf_num_slots(group->ptdev, args->ringbuf_size);
++
++ queue->profiling.slots =
++ panthor_kernel_bo_create(group->ptdev, group->vm,
++ queue->profiling.slot_count *
++ sizeof(struct panthor_job_profiling_data),
++ DRM_PANTHOR_BO_NO_MMAP,
++ DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC |
++ DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED,
++ PANTHOR_VM_KERNEL_AUTO_VA);
++
++ if (IS_ERR(queue->profiling.slots)) {
++ ret = PTR_ERR(queue->profiling.slots);
++ goto err_free_queue;
++ }
++
++ ret = panthor_kernel_bo_vmap(queue->profiling.slots);
++ if (ret)
++ goto err_free_queue;
++
++ /*
++ * Credit limit argument tells us the total number of instructions
++ * across all CS slots in the ringbuffer, with some jobs requiring
++ * twice as many as others, depending on their profiling status.
++ */
+ ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
+ group->ptdev->scheduler->wq, 1,
+- args->ringbuf_size / (NUM_INSTRS_PER_SLOT * sizeof(u64)),
++ args->ringbuf_size / sizeof(u64),
+ 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
+ group->ptdev->reset.wq,
+ NULL, "panthor-queue", group->ptdev->base.dev);
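[Editor's note: switching the limit from a fixed per-slot size to args->ringbuf_size / sizeof(u64) means the scheduler now accounts credits in ring-buffer instruction words, and each job consumes as many credits as it emits instructions (see calc_job_credits() above). A toy model of that kind of credit accounting follows; it is purely illustrative, not drm_sched's actual implementation.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct ring { uint32_t credit_limit; uint32_t in_flight; };

static bool ring_try_submit(struct ring *r, uint32_t credits)
{
	if (r->in_flight + credits > r->credit_limit)
		return false; /* would overrun the ring; wait for completions */
	r->in_flight += credits;
	return true;
}

static void ring_complete(struct ring *r, uint32_t credits)
{
	r->in_flight -= credits;
}

int main(void)
{
	/* 32 KiB ring / 8-byte instructions = 4096 credits (hypothetical) */
	struct ring r = { .credit_limit = 4096, .in_flight = 0 };

	printf("%d\n", ring_try_submit(&r, 24));   /* 1: fits */
	printf("%d\n", ring_try_submit(&r, 8192)); /* 0: exceeds the limit */
	ring_complete(&r, 24);
	return 0;
}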
+@@ -3380,6 +3607,7 @@ panthor_job_create(struct panthor_file *pfile,
+ {
+ struct panthor_group_pool *gpool = pfile->groups;
+ struct panthor_job *job;
++ u32 credits;
+ int ret;
+
+ if (qsubmit->pad)
+@@ -3438,9 +3666,16 @@ panthor_job_create(struct panthor_file *pfile,
+ }
+ }
+
++ job->profiling.mask = pfile->ptdev->profile_mask;
++ credits = calc_job_credits(job->profiling.mask);
++ if (credits == 0) {
++ ret = -EINVAL;
++ goto err_put_job;
++ }
++
+ ret = drm_sched_job_init(&job->base,
+ &job->group->queues[job->queue_idx]->entity,
+- 1, job->group);
++ credits, job->group);
+ if (ret)
+ goto err_put_job;
+
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index 03e6871b30653c..c82e0fbc49b4bd 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -2179,7 +2179,7 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
+ void
+ radeon_atom_encoder_init(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_encoder *encoder;
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index b5e96a8fc2c162..11a492f21157fe 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -7585,7 +7585,7 @@ int cik_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[0]) {
+- drm_handle_vblank(rdev->ddev, 0);
++ drm_handle_vblank(rdev_to_drm(rdev), 0);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -7615,7 +7615,7 @@ int cik_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[1]) {
+- drm_handle_vblank(rdev->ddev, 1);
++ drm_handle_vblank(rdev_to_drm(rdev), 1);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -7645,7 +7645,7 @@ int cik_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[2]) {
+- drm_handle_vblank(rdev->ddev, 2);
++ drm_handle_vblank(rdev_to_drm(rdev), 2);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -7675,7 +7675,7 @@ int cik_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[3]) {
+- drm_handle_vblank(rdev->ddev, 3);
++ drm_handle_vblank(rdev_to_drm(rdev), 3);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -7705,7 +7705,7 @@ int cik_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[4]) {
+- drm_handle_vblank(rdev->ddev, 4);
++ drm_handle_vblank(rdev_to_drm(rdev), 4);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -7735,7 +7735,7 @@ int cik_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[5]) {
+- drm_handle_vblank(rdev->ddev, 5);
++ drm_handle_vblank(rdev_to_drm(rdev), 5);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -8581,7 +8581,7 @@ int cik_init(struct radeon_device *rdev)
+ /* Initialize surface registers */
+ radeon_surface_init(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+
+ /* Fence driver */
+ radeon_fence_driver_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/dce6_afmt.c b/drivers/gpu/drm/radeon/dce6_afmt.c
+index 4c06f47453fd2a..d6ab93ed9ec4c7 100644
+--- a/drivers/gpu/drm/radeon/dce6_afmt.c
++++ b/drivers/gpu/drm/radeon/dce6_afmt.c
+@@ -91,7 +91,7 @@ struct r600_audio_pin *dce6_audio_get_pin(struct radeon_device *rdev)
+ pin = &rdev->audio.pin[i];
+ pin_count = 0;
+
+- list_for_each_entry(encoder, &rdev->ddev->mode_config.encoder_list, head) {
++ list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) {
+ if (radeon_encoder_is_digital(encoder)) {
+ radeon_encoder = to_radeon_encoder(encoder);
+ dig = radeon_encoder->enc_priv;
+diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
+index c634dc28e6c300..bc4ab71613a55b 100644
+--- a/drivers/gpu/drm/radeon/evergreen.c
++++ b/drivers/gpu/drm/radeon/evergreen.c
+@@ -1673,7 +1673,7 @@ void evergreen_pm_misc(struct radeon_device *rdev)
+ */
+ void evergreen_pm_prepare(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 tmp;
+@@ -1698,7 +1698,7 @@ void evergreen_pm_prepare(struct radeon_device *rdev)
+ */
+ void evergreen_pm_finish(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 tmp;
+@@ -1763,7 +1763,7 @@ void evergreen_hpd_set_polarity(struct radeon_device *rdev,
+ */
+ void evergreen_hpd_init(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned enabled = 0;
+ u32 tmp = DC_HPDx_CONNECTION_TIMER(0x9c4) |
+@@ -1804,7 +1804,7 @@ void evergreen_hpd_init(struct radeon_device *rdev)
+ */
+ void evergreen_hpd_fini(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned disabled = 0;
+
+@@ -4753,7 +4753,7 @@ int evergreen_irq_process(struct radeon_device *rdev)
+ event_name = "vblank";
+
+ if (rdev->irq.crtc_vblank_int[crtc_idx]) {
+- drm_handle_vblank(rdev->ddev, crtc_idx);
++ drm_handle_vblank(rdev_to_drm(rdev), crtc_idx);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -5211,7 +5211,7 @@ int evergreen_init(struct radeon_device *rdev)
+ /* Initialize surface registers */
+ radeon_surface_init(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* Fence driver */
+ radeon_fence_driver_init(rdev);
+ /* initialize AGP */
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index 77aee99e473a62..3890911fe693c6 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -2360,7 +2360,7 @@ int cayman_init(struct radeon_device *rdev)
+ /* Initialize surface registers */
+ radeon_surface_init(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* Fence driver */
+ radeon_fence_driver_init(rdev);
+ /* initialize memory controller */
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index bfd42e3e161e98..80703417d8a18c 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -459,7 +459,7 @@ void r100_pm_misc(struct radeon_device *rdev)
+ */
+ void r100_pm_prepare(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 tmp;
+@@ -490,7 +490,7 @@ void r100_pm_prepare(struct radeon_device *rdev)
+ */
+ void r100_pm_finish(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 tmp;
+@@ -603,7 +603,7 @@ void r100_hpd_set_polarity(struct radeon_device *rdev,
+ */
+ void r100_hpd_init(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned enable = 0;
+
+@@ -626,7 +626,7 @@ void r100_hpd_init(struct radeon_device *rdev)
+ */
+ void r100_hpd_fini(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned disable = 0;
+
+@@ -798,7 +798,7 @@ int r100_irq_process(struct radeon_device *rdev)
+ /* Vertical blank interrupts */
+ if (status & RADEON_CRTC_VBLANK_STAT) {
+ if (rdev->irq.crtc_vblank_int[0]) {
+- drm_handle_vblank(rdev->ddev, 0);
++ drm_handle_vblank(rdev_to_drm(rdev), 0);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -807,7 +807,7 @@ int r100_irq_process(struct radeon_device *rdev)
+ }
+ if (status & RADEON_CRTC2_VBLANK_STAT) {
+ if (rdev->irq.crtc_vblank_int[1]) {
+- drm_handle_vblank(rdev->ddev, 1);
++ drm_handle_vblank(rdev_to_drm(rdev), 1);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -1491,7 +1491,7 @@ int r100_cs_packet_parse_vline(struct radeon_cs_parser *p)
+ header = radeon_get_ib_value(p, h_idx);
+ crtc_id = radeon_get_ib_value(p, h_idx + 5);
+ reg = R100_CP_PACKET0_GET_REG(header);
+- crtc = drm_crtc_find(p->rdev->ddev, p->filp, crtc_id);
++ crtc = drm_crtc_find(rdev_to_drm(p->rdev), p->filp, crtc_id);
+ if (!crtc) {
+ DRM_ERROR("cannot find crtc %d\n", crtc_id);
+ return -ENOENT;
+@@ -3079,7 +3079,7 @@ DEFINE_SHOW_ATTRIBUTE(r100_debugfs_mc_info);
+ void r100_debugfs_rbbm_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("r100_rbbm_info", 0444, root, rdev,
+ &r100_debugfs_rbbm_info_fops);
+@@ -3089,7 +3089,7 @@ void r100_debugfs_rbbm_init(struct radeon_device *rdev)
+ void r100_debugfs_cp_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("r100_cp_ring_info", 0444, root, rdev,
+ &r100_debugfs_cp_ring_info_fops);
+@@ -3101,7 +3101,7 @@ void r100_debugfs_cp_init(struct radeon_device *rdev)
+ void r100_debugfs_mc_info_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("r100_mc_info", 0444, root, rdev,
+ &r100_debugfs_mc_info_fops);
+@@ -3967,7 +3967,7 @@ int r100_resume(struct radeon_device *rdev)
+ RREG32(R_0007C0_CP_STAT));
+ }
+ /* post */
+- radeon_combios_asic_init(rdev->ddev);
++ radeon_combios_asic_init(rdev_to_drm(rdev));
+ /* Resume clock after posting */
+ r100_clock_startup(rdev);
+ /* Initialize surface registers */
+@@ -4076,7 +4076,7 @@ int r100_init(struct radeon_device *rdev)
+ /* Set asic errata */
+ r100_errata(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize AGP */
+ if (rdev->flags & RADEON_IS_AGP) {
+ r = radeon_agp_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
+index 1620f534f55f68..05c13102a8cb8f 100644
+--- a/drivers/gpu/drm/radeon/r300.c
++++ b/drivers/gpu/drm/radeon/r300.c
+@@ -616,7 +616,7 @@ DEFINE_SHOW_ATTRIBUTE(rv370_debugfs_pcie_gart_info);
+ static void rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("rv370_pcie_gart_info", 0444, root, rdev,
+ &rv370_debugfs_pcie_gart_info_fops);
+@@ -1452,7 +1452,7 @@ int r300_resume(struct radeon_device *rdev)
+ RREG32(R_0007C0_CP_STAT));
+ }
+ /* post */
+- radeon_combios_asic_init(rdev->ddev);
++ radeon_combios_asic_init(rdev_to_drm(rdev));
+ /* Resume clock after posting */
+ r300_clock_startup(rdev);
+ /* Initialize surface registers */
+@@ -1538,7 +1538,7 @@ int r300_init(struct radeon_device *rdev)
+ /* Set asic errata */
+ r300_errata(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize AGP */
+ if (rdev->flags & RADEON_IS_AGP) {
+ r = radeon_agp_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/r420.c b/drivers/gpu/drm/radeon/r420.c
+index a979662eaa73ba..9a31cdec641575 100644
+--- a/drivers/gpu/drm/radeon/r420.c
++++ b/drivers/gpu/drm/radeon/r420.c
+@@ -322,7 +322,7 @@ int r420_resume(struct radeon_device *rdev)
+ if (rdev->is_atom_bios) {
+ atom_asic_init(rdev->mode_info.atom_context);
+ } else {
+- radeon_combios_asic_init(rdev->ddev);
++ radeon_combios_asic_init(rdev_to_drm(rdev));
+ }
+ /* Resume clock after posting */
+ r420_clock_resume(rdev);
+@@ -414,7 +414,7 @@ int r420_init(struct radeon_device *rdev)
+ return -EINVAL;
+
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize AGP */
+ if (rdev->flags & RADEON_IS_AGP) {
+ r = radeon_agp_init(rdev);
+@@ -493,7 +493,7 @@ DEFINE_SHOW_ATTRIBUTE(r420_debugfs_pipes_info);
+ void r420_debugfs_pipes_info_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("r420_pipes_info", 0444, root, rdev,
+ &r420_debugfs_pipes_info_fops);
+diff --git a/drivers/gpu/drm/radeon/r520.c b/drivers/gpu/drm/radeon/r520.c
+index 6cbcaa8451924c..08e127b3249a22 100644
+--- a/drivers/gpu/drm/radeon/r520.c
++++ b/drivers/gpu/drm/radeon/r520.c
+@@ -287,7 +287,7 @@ int r520_init(struct radeon_device *rdev)
+ atom_asic_init(rdev->mode_info.atom_context);
+ }
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize AGP */
+ if (rdev->flags & RADEON_IS_AGP) {
+ r = radeon_agp_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
+index 087d41e370fdc6..8b62f7faa5b99f 100644
+--- a/drivers/gpu/drm/radeon/r600.c
++++ b/drivers/gpu/drm/radeon/r600.c
+@@ -950,7 +950,7 @@ void r600_hpd_set_polarity(struct radeon_device *rdev,
+
+ void r600_hpd_init(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned enable = 0;
+
+@@ -1017,7 +1017,7 @@ void r600_hpd_init(struct radeon_device *rdev)
+
+ void r600_hpd_fini(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned disable = 0;
+
+@@ -3280,7 +3280,7 @@ int r600_init(struct radeon_device *rdev)
+ /* Initialize surface registers */
+ radeon_surface_init(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* Fence driver */
+ radeon_fence_driver_init(rdev);
+ if (rdev->flags & RADEON_IS_AGP) {
+@@ -4136,7 +4136,7 @@ int r600_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[0]) {
+- drm_handle_vblank(rdev->ddev, 0);
++ drm_handle_vblank(rdev_to_drm(rdev), 0);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -4166,7 +4166,7 @@ int r600_irq_process(struct radeon_device *rdev)
+ DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n");
+
+ if (rdev->irq.crtc_vblank_int[1]) {
+- drm_handle_vblank(rdev->ddev, 1);
++ drm_handle_vblank(rdev_to_drm(rdev), 1);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -4358,7 +4358,7 @@ DEFINE_SHOW_ATTRIBUTE(r600_debugfs_mc_info);
+ static void r600_debugfs_mc_info_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("r600_mc_info", 0444, root, rdev,
+ &r600_debugfs_mc_info_fops);
+diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
+index 6cf54a747749d3..1b2d31c4d77caa 100644
+--- a/drivers/gpu/drm/radeon/r600_cs.c
++++ b/drivers/gpu/drm/radeon/r600_cs.c
+@@ -884,7 +884,7 @@ int r600_cs_common_vline_parse(struct radeon_cs_parser *p,
+ crtc_id = radeon_get_ib_value(p, h_idx + 2 + 7 + 1);
+ reg = R600_CP_PACKET0_GET_REG(header);
+
+- crtc = drm_crtc_find(p->rdev->ddev, p->filp, crtc_id);
++ crtc = drm_crtc_find(rdev_to_drm(p->rdev), p->filp, crtc_id);
+ if (!crtc) {
+ DRM_ERROR("cannot find crtc %d\n", crtc_id);
+ return -ENOENT;
+diff --git a/drivers/gpu/drm/radeon/r600_dpm.c b/drivers/gpu/drm/radeon/r600_dpm.c
+index 64980a61d38a8e..81d58ef667dd48 100644
+--- a/drivers/gpu/drm/radeon/r600_dpm.c
++++ b/drivers/gpu/drm/radeon/r600_dpm.c
+@@ -153,7 +153,7 @@ void r600_dpm_print_ps_status(struct radeon_device *rdev,
+
+ u32 r600_dpm_get_vblank_time(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 vblank_in_pixels;
+@@ -180,7 +180,7 @@ u32 r600_dpm_get_vblank_time(struct radeon_device *rdev)
+
+ u32 r600_dpm_get_vrefresh(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 vrefresh = 0;
+diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
+index f3551ebaa2f08a..661f374f5f27a7 100644
+--- a/drivers/gpu/drm/radeon/r600_hdmi.c
++++ b/drivers/gpu/drm/radeon/r600_hdmi.c
+@@ -116,7 +116,7 @@ void r600_audio_update_hdmi(struct work_struct *work)
+ {
+ struct radeon_device *rdev = container_of(work, struct radeon_device,
+ audio_work);
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct r600_audio_pin audio_status = r600_audio_status(rdev);
+ struct drm_encoder *encoder;
+ bool changed = false;
+diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
+index 0999c8eaae94ad..ae35c102a487e1 100644
+--- a/drivers/gpu/drm/radeon/radeon.h
++++ b/drivers/gpu/drm/radeon/radeon.h
+@@ -2476,6 +2476,11 @@ void r100_io_wreg(struct radeon_device *rdev, u32 reg, u32 v);
+ u32 cik_mm_rdoorbell(struct radeon_device *rdev, u32 index);
+ void cik_mm_wdoorbell(struct radeon_device *rdev, u32 index, u32 v);
+
++static inline struct drm_device *rdev_to_drm(struct radeon_device *rdev)
++{
++ return rdev->ddev;
++}
++
+ /*
+ * Cast helper
+ */
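[Editor's note: everything below in the radeon portion is a mechanical conversion of rdev->ddev dereferences to this helper. The value of the indirection is that call sites stop encoding how the drm_device is stored, so the backing field can change later by editing only the accessor. The pattern in isolation, with hypothetical types unrelated to radeon.]

struct drm_dev { int id; };

struct priv_dev {
	struct drm_dev base; /* could equally be a pointer; callers can't tell */
};

/* Only this helper knows whether the drm device is embedded or pointed to. */
static inline struct drm_dev *priv_to_drm(struct priv_dev *p)
{
	return &p->base;
}

int main(void)
{
	struct priv_dev p = { .base = { .id = 1 } };

	return priv_to_drm(&p)->id == 1 ? 0 : 1;
}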
+diff --git a/drivers/gpu/drm/radeon/radeon_acpi.c b/drivers/gpu/drm/radeon/radeon_acpi.c
+index 603a78e41ba55c..22ce61bdfc0603 100644
+--- a/drivers/gpu/drm/radeon/radeon_acpi.c
++++ b/drivers/gpu/drm/radeon/radeon_acpi.c
+@@ -405,11 +405,11 @@ static int radeon_atif_handler(struct radeon_device *rdev,
+ if (req.pending & ATIF_DGPU_DISPLAY_EVENT) {
+ if ((rdev->flags & RADEON_IS_PX) &&
+ radeon_atpx_dgpu_req_power_for_displays()) {
+- pm_runtime_get_sync(rdev->ddev->dev);
++ pm_runtime_get_sync(rdev_to_drm(rdev)->dev);
+ /* Just fire off a uevent and let userspace tell us what to do */
+- drm_helper_hpd_irq_event(rdev->ddev);
+- pm_runtime_mark_last_busy(rdev->ddev->dev);
+- pm_runtime_put_autosuspend(rdev->ddev->dev);
++ drm_helper_hpd_irq_event(rdev_to_drm(rdev));
++ pm_runtime_mark_last_busy(rdev_to_drm(rdev)->dev);
++ pm_runtime_put_autosuspend(rdev_to_drm(rdev)->dev);
+ }
+ }
+ /* TODO: check other events */
+@@ -736,7 +736,7 @@ int radeon_acpi_init(struct radeon_device *rdev)
+ struct radeon_encoder *target = NULL;
+
+ /* Find the encoder controlling the brightness */
+- list_for_each_entry(tmp, &rdev->ddev->mode_config.encoder_list,
++ list_for_each_entry(tmp, &rdev_to_drm(rdev)->mode_config.encoder_list,
+ head) {
+ struct radeon_encoder *enc = to_radeon_encoder(tmp);
+
+diff --git a/drivers/gpu/drm/radeon/radeon_agp.c b/drivers/gpu/drm/radeon/radeon_agp.c
+index a3d749e350f9c2..89d7b0e9e79f82 100644
+--- a/drivers/gpu/drm/radeon/radeon_agp.c
++++ b/drivers/gpu/drm/radeon/radeon_agp.c
+@@ -161,7 +161,7 @@ struct radeon_agp_head *radeon_agp_head_init(struct drm_device *dev)
+
+ static int radeon_agp_head_acquire(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+ if (!rdev->agp)
+diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
+index d698a899eaf4cf..168f3f94003bff 100644
+--- a/drivers/gpu/drm/radeon/radeon_atombios.c
++++ b/drivers/gpu/drm/radeon/radeon_atombios.c
+@@ -187,7 +187,7 @@ void radeon_atombios_i2c_init(struct radeon_device *rdev)
+
+ if (i2c.valid) {
+ sprintf(stmp, "0x%x", i2c.i2c_id);
+- rdev->i2c_bus[i] = radeon_i2c_create(rdev->ddev, &i2c, stmp);
++ rdev->i2c_bus[i] = radeon_i2c_create(rdev_to_drm(rdev), &i2c, stmp);
+ }
+ gpio = (ATOM_GPIO_I2C_ASSIGMENT *)
+ ((u8 *)gpio + sizeof(ATOM_GPIO_I2C_ASSIGMENT));
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index 0bcd767b9f4719..5b69cc8011b42b 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -196,7 +196,7 @@ static void radeon_audio_enable(struct radeon_device *rdev,
+ return;
+
+ if (rdev->mode_info.mode_config_initialized) {
+- list_for_each_entry(encoder, &rdev->ddev->mode_config.encoder_list, head) {
++ list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) {
+ if (radeon_encoder_is_digital(encoder)) {
+ radeon_encoder = to_radeon_encoder(encoder);
+ dig = radeon_encoder->enc_priv;
+@@ -760,16 +760,20 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port,
+ if (!rdev->audio.enabled || !rdev->mode_info.mode_config_initialized)
+ return 0;
+
+- list_for_each_entry(encoder, &rdev->ddev->mode_config.encoder_list, head) {
++ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ const struct drm_connector_helper_funcs *connector_funcs =
++ connector->helper_private;
++ encoder = connector_funcs->best_encoder(connector);
++
++ if (!encoder)
++ continue;
++
+ if (!radeon_encoder_is_digital(encoder))
+ continue;
+ radeon_encoder = to_radeon_encoder(encoder);
+ dig = radeon_encoder->enc_priv;
+ if (!dig->pin || dig->pin->id != port)
+ continue;
+- connector = radeon_get_connector_for_encoder(encoder);
+- if (!connector)
+- continue;
+ *enabled = true;
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
+diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
+index 6952b1273b0f78..41ddc576f8f8b2 100644
+--- a/drivers/gpu/drm/radeon/radeon_combios.c
++++ b/drivers/gpu/drm/radeon/radeon_combios.c
+@@ -372,7 +372,7 @@ bool radeon_combios_check_hardcoded_edid(struct radeon_device *rdev)
+ int edid_info, size;
+ struct edid *edid;
+ unsigned char *raw;
+- edid_info = combios_get_table_offset(rdev->ddev, COMBIOS_HARDCODED_EDID_TABLE);
++ edid_info = combios_get_table_offset(rdev_to_drm(rdev), COMBIOS_HARDCODED_EDID_TABLE);
+ if (!edid_info)
+ return false;
+
+@@ -642,7 +642,7 @@ static struct radeon_i2c_bus_rec combios_setup_i2c_bus(struct radeon_device *rde
+
+ static struct radeon_i2c_bus_rec radeon_combios_get_i2c_info_from_table(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct radeon_i2c_bus_rec i2c;
+ u16 offset;
+ u8 id, blocks, clk, data;
+@@ -670,7 +670,7 @@ static struct radeon_i2c_bus_rec radeon_combios_get_i2c_info_from_table(struct r
+
+ void radeon_combios_i2c_init(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct radeon_i2c_bus_rec i2c;
+
+ /* actual hw pads
+@@ -812,7 +812,7 @@ bool radeon_combios_get_clock_info(struct drm_device *dev)
+
+ bool radeon_combios_sideport_present(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ u16 igp_info;
+
+ /* sideport is AMD only */
+@@ -915,7 +915,7 @@ struct radeon_encoder_primary_dac *radeon_combios_get_primary_dac_info(struct
+ enum radeon_tv_std
+ radeon_combios_get_tv_info(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ uint16_t tv_info;
+ enum radeon_tv_std tv_std = TV_STD_NTSC;
+
+@@ -2637,7 +2637,7 @@ static const char *thermal_controller_names[] = {
+
+ void radeon_combios_get_power_modes(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ u16 offset, misc, misc2 = 0;
+ u8 rev, tmp;
+ int state_index = 0;
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index afbb3a80c0c6b5..32851632643db9 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -760,7 +760,7 @@ bool radeon_boot_test_post_card(struct radeon_device *rdev)
+ if (rdev->is_atom_bios)
+ atom_asic_init(rdev->mode_info.atom_context);
+ else
+- radeon_combios_asic_init(rdev->ddev);
++ radeon_combios_asic_init(rdev_to_drm(rdev));
+ return true;
+ } else {
+ dev_err(rdev->dev, "Card not posted and no BIOS - ignoring\n");
+@@ -980,7 +980,7 @@ int radeon_atombios_init(struct radeon_device *rdev)
+ return -ENOMEM;
+
+ rdev->mode_info.atom_card_info = atom_card_info;
+- atom_card_info->dev = rdev->ddev;
++ atom_card_info->dev = rdev_to_drm(rdev);
+ atom_card_info->reg_read = cail_reg_read;
+ atom_card_info->reg_write = cail_reg_write;
+ /* needed for iio ops */
+@@ -1005,7 +1005,7 @@ int radeon_atombios_init(struct radeon_device *rdev)
+
+ mutex_init(&rdev->mode_info.atom_context->mutex);
+ mutex_init(&rdev->mode_info.atom_context->scratch_mutex);
+- radeon_atom_initialize_bios_scratch_regs(rdev->ddev);
++ radeon_atom_initialize_bios_scratch_regs(rdev_to_drm(rdev));
+ atom_allocate_fb_scratch(rdev->mode_info.atom_context);
+ return 0;
+ }
+@@ -1049,7 +1049,7 @@ void radeon_atombios_fini(struct radeon_device *rdev)
+ */
+ int radeon_combios_init(struct radeon_device *rdev)
+ {
+- radeon_combios_initialize_bios_scratch_regs(rdev->ddev);
++ radeon_combios_initialize_bios_scratch_regs(rdev_to_drm(rdev));
+ return 0;
+ }
+
+@@ -1847,7 +1847,7 @@ int radeon_gpu_reset(struct radeon_device *rdev)
+
+ downgrade_write(&rdev->exclusive_lock);
+
+- drm_helper_resume_force_mode(rdev->ddev);
++ drm_helper_resume_force_mode(rdev_to_drm(rdev));
+
+ /* set the power state here in case we are a PX system or headless */
+ if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled)
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 843383f7237fb6..10fd58f400bc59 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -302,13 +302,13 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id)
+ if ((radeon_use_pflipirq == 2) && ASIC_IS_DCE4(rdev))
+ return;
+
+- spin_lock_irqsave(&rdev->ddev->event_lock, flags);
++ spin_lock_irqsave(&rdev_to_drm(rdev)->event_lock, flags);
+ if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) {
+ DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != "
+ "RADEON_FLIP_SUBMITTED(%d)\n",
+ radeon_crtc->flip_status,
+ RADEON_FLIP_SUBMITTED);
+- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
++ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
+ return;
+ }
+
+@@ -334,7 +334,7 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id)
+ */
+ if (update_pending &&
+ (DRM_SCANOUTPOS_VALID &
+- radeon_get_crtc_scanoutpos(rdev->ddev, crtc_id,
++ radeon_get_crtc_scanoutpos(rdev_to_drm(rdev), crtc_id,
+ GET_DISTANCE_TO_VBLANKSTART,
+ &vpos, &hpos, NULL, NULL,
+ &rdev->mode_info.crtcs[crtc_id]->base.hwmode)) &&
+@@ -347,7 +347,7 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id)
+ */
+ update_pending = 0;
+ }
+- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
++ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
+ if (!update_pending)
+ radeon_crtc_handle_flip(rdev, crtc_id);
+ }
+@@ -370,14 +370,14 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id)
+ if (radeon_crtc == NULL)
+ return;
+
+- spin_lock_irqsave(&rdev->ddev->event_lock, flags);
++ spin_lock_irqsave(&rdev_to_drm(rdev)->event_lock, flags);
+ work = radeon_crtc->flip_work;
+ if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) {
+ DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != "
+ "RADEON_FLIP_SUBMITTED(%d)\n",
+ radeon_crtc->flip_status,
+ RADEON_FLIP_SUBMITTED);
+- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
++ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
+ return;
+ }
+
+@@ -389,7 +389,7 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id)
+ if (work->event)
+ drm_crtc_send_vblank_event(&radeon_crtc->base, work->event);
+
+- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
++ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
+
+ drm_crtc_vblank_put(&radeon_crtc->base);
+ radeon_irq_kms_pflip_irq_put(rdev, work->crtc_id);
+@@ -408,7 +408,7 @@ static void radeon_flip_work_func(struct work_struct *__work)
+ struct radeon_flip_work *work =
+ container_of(__work, struct radeon_flip_work, flip_work);
+ struct radeon_device *rdev = work->rdev;
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[work->crtc_id];
+
+ struct drm_crtc *crtc = &radeon_crtc->base;
+@@ -1401,7 +1401,7 @@ static int radeon_modeset_create_props(struct radeon_device *rdev)
+
+ if (rdev->is_atom_bios) {
+ rdev->mode_info.coherent_mode_property =
+- drm_property_create_range(rdev->ddev, 0 , "coherent", 0, 1);
++ drm_property_create_range(rdev_to_drm(rdev), 0, "coherent", 0, 1);
+ if (!rdev->mode_info.coherent_mode_property)
+ return -ENOMEM;
+ }
+@@ -1409,57 +1409,57 @@ static int radeon_modeset_create_props(struct radeon_device *rdev)
+ if (!ASIC_IS_AVIVO(rdev)) {
+ sz = ARRAY_SIZE(radeon_tmds_pll_enum_list);
+ rdev->mode_info.tmds_pll_property =
+- drm_property_create_enum(rdev->ddev, 0,
++ drm_property_create_enum(rdev_to_drm(rdev), 0,
+ "tmds_pll",
+ radeon_tmds_pll_enum_list, sz);
+ }
+
+ rdev->mode_info.load_detect_property =
+- drm_property_create_range(rdev->ddev, 0, "load detection", 0, 1);
++ drm_property_create_range(rdev_to_drm(rdev), 0, "load detection", 0, 1);
+ if (!rdev->mode_info.load_detect_property)
+ return -ENOMEM;
+
+- drm_mode_create_scaling_mode_property(rdev->ddev);
++ drm_mode_create_scaling_mode_property(rdev_to_drm(rdev));
+
+ sz = ARRAY_SIZE(radeon_tv_std_enum_list);
+ rdev->mode_info.tv_std_property =
+- drm_property_create_enum(rdev->ddev, 0,
++ drm_property_create_enum(rdev_to_drm(rdev), 0,
+ "tv standard",
+ radeon_tv_std_enum_list, sz);
+
+ sz = ARRAY_SIZE(radeon_underscan_enum_list);
+ rdev->mode_info.underscan_property =
+- drm_property_create_enum(rdev->ddev, 0,
++ drm_property_create_enum(rdev_to_drm(rdev), 0,
+ "underscan",
+ radeon_underscan_enum_list, sz);
+
+ rdev->mode_info.underscan_hborder_property =
+- drm_property_create_range(rdev->ddev, 0,
++ drm_property_create_range(rdev_to_drm(rdev), 0,
+ "underscan hborder", 0, 128);
+ if (!rdev->mode_info.underscan_hborder_property)
+ return -ENOMEM;
+
+ rdev->mode_info.underscan_vborder_property =
+- drm_property_create_range(rdev->ddev, 0,
++ drm_property_create_range(rdev_to_drm(rdev), 0,
+ "underscan vborder", 0, 128);
+ if (!rdev->mode_info.underscan_vborder_property)
+ return -ENOMEM;
+
+ sz = ARRAY_SIZE(radeon_audio_enum_list);
+ rdev->mode_info.audio_property =
+- drm_property_create_enum(rdev->ddev, 0,
++ drm_property_create_enum(rdev_to_drm(rdev), 0,
+ "audio",
+ radeon_audio_enum_list, sz);
+
+ sz = ARRAY_SIZE(radeon_dither_enum_list);
+ rdev->mode_info.dither_property =
+- drm_property_create_enum(rdev->ddev, 0,
++ drm_property_create_enum(rdev_to_drm(rdev), 0,
+ "dither",
+ radeon_dither_enum_list, sz);
+
+ sz = ARRAY_SIZE(radeon_output_csc_enum_list);
+ rdev->mode_info.output_csc_property =
+- drm_property_create_enum(rdev->ddev, 0,
++ drm_property_create_enum(rdev_to_drm(rdev), 0,
+ "output_csc",
+ radeon_output_csc_enum_list, sz);
+
+@@ -1578,29 +1578,29 @@ int radeon_modeset_init(struct radeon_device *rdev)
+ int i;
+ int ret;
+
+- drm_mode_config_init(rdev->ddev);
++ drm_mode_config_init(rdev_to_drm(rdev));
+ rdev->mode_info.mode_config_initialized = true;
+
+- rdev->ddev->mode_config.funcs = &radeon_mode_funcs;
++ rdev_to_drm(rdev)->mode_config.funcs = &radeon_mode_funcs;
+
+ if (radeon_use_pflipirq == 2 && rdev->family >= CHIP_R600)
+- rdev->ddev->mode_config.async_page_flip = true;
++ rdev_to_drm(rdev)->mode_config.async_page_flip = true;
+
+ if (ASIC_IS_DCE5(rdev)) {
+- rdev->ddev->mode_config.max_width = 16384;
+- rdev->ddev->mode_config.max_height = 16384;
++ rdev_to_drm(rdev)->mode_config.max_width = 16384;
++ rdev_to_drm(rdev)->mode_config.max_height = 16384;
+ } else if (ASIC_IS_AVIVO(rdev)) {
+- rdev->ddev->mode_config.max_width = 8192;
+- rdev->ddev->mode_config.max_height = 8192;
++ rdev_to_drm(rdev)->mode_config.max_width = 8192;
++ rdev_to_drm(rdev)->mode_config.max_height = 8192;
+ } else {
+- rdev->ddev->mode_config.max_width = 4096;
+- rdev->ddev->mode_config.max_height = 4096;
++ rdev_to_drm(rdev)->mode_config.max_width = 4096;
++ rdev_to_drm(rdev)->mode_config.max_height = 4096;
+ }
+
+- rdev->ddev->mode_config.preferred_depth = 24;
+- rdev->ddev->mode_config.prefer_shadow = 1;
++ rdev_to_drm(rdev)->mode_config.preferred_depth = 24;
++ rdev_to_drm(rdev)->mode_config.prefer_shadow = 1;
+
+- rdev->ddev->mode_config.fb_modifiers_not_supported = true;
++ rdev_to_drm(rdev)->mode_config.fb_modifiers_not_supported = true;
+
+ ret = radeon_modeset_create_props(rdev);
+ if (ret) {
+@@ -1618,11 +1618,11 @@ int radeon_modeset_init(struct radeon_device *rdev)
+
+ /* allocate crtcs */
+ for (i = 0; i < rdev->num_crtc; i++) {
+- radeon_crtc_init(rdev->ddev, i);
++ radeon_crtc_init(rdev_to_drm(rdev), i);
+ }
+
+ /* okay we should have all the bios connectors */
+- ret = radeon_setup_enc_conn(rdev->ddev);
++ ret = radeon_setup_enc_conn(rdev_to_drm(rdev));
+ if (!ret) {
+ return ret;
+ }
+@@ -1639,7 +1639,7 @@ int radeon_modeset_init(struct radeon_device *rdev)
+ /* setup afmt */
+ radeon_afmt_init(rdev);
+
+- drm_kms_helper_poll_init(rdev->ddev);
++ drm_kms_helper_poll_init(rdev_to_drm(rdev));
+
+ /* do pm late init */
+ ret = radeon_pm_late_init(rdev);
+@@ -1650,11 +1650,11 @@ int radeon_modeset_init(struct radeon_device *rdev)
+ void radeon_modeset_fini(struct radeon_device *rdev)
+ {
+ if (rdev->mode_info.mode_config_initialized) {
+- drm_kms_helper_poll_fini(rdev->ddev);
++ drm_kms_helper_poll_fini(rdev_to_drm(rdev));
+ radeon_hpd_fini(rdev);
+- drm_helper_force_disable_all(rdev->ddev);
++ drm_helper_force_disable_all(rdev_to_drm(rdev));
+ radeon_afmt_fini(rdev);
+- drm_mode_config_cleanup(rdev->ddev);
++ drm_mode_config_cleanup(rdev_to_drm(rdev));
+ rdev->mode_info.mode_config_initialized = false;
+ }
+
+diff --git a/drivers/gpu/drm/radeon/radeon_fbdev.c b/drivers/gpu/drm/radeon/radeon_fbdev.c
+index 02bf25759059a7..fb70de29545c6f 100644
+--- a/drivers/gpu/drm/radeon/radeon_fbdev.c
++++ b/drivers/gpu/drm/radeon/radeon_fbdev.c
+@@ -67,7 +67,7 @@ static int radeon_fbdev_create_pinned_object(struct drm_fb_helper *fb_helper,
+ int height = mode_cmd->height;
+ u32 cpp;
+
+- info = drm_get_format_info(rdev->ddev, mode_cmd);
++ info = drm_get_format_info(rdev_to_drm(rdev), mode_cmd);
+ cpp = info->cpp[0];
+
+ /* need to align pitch with crtc limits */
+@@ -148,15 +148,15 @@ static int radeon_fbdev_fb_open(struct fb_info *info, int user)
+ struct radeon_device *rdev = fb_helper->dev->dev_private;
+ int ret;
+
+- ret = pm_runtime_get_sync(rdev->ddev->dev);
++ ret = pm_runtime_get_sync(rdev_to_drm(rdev)->dev);
+ if (ret < 0 && ret != -EACCES)
+ goto err_pm_runtime_mark_last_busy;
+
+ return 0;
+
+ err_pm_runtime_mark_last_busy:
+- pm_runtime_mark_last_busy(rdev->ddev->dev);
+- pm_runtime_put_autosuspend(rdev->ddev->dev);
++ pm_runtime_mark_last_busy(rdev_to_drm(rdev)->dev);
++ pm_runtime_put_autosuspend(rdev_to_drm(rdev)->dev);
+ return ret;
+ }
+
+@@ -165,8 +165,8 @@ static int radeon_fbdev_fb_release(struct fb_info *info, int user)
+ struct drm_fb_helper *fb_helper = info->par;
+ struct radeon_device *rdev = fb_helper->dev->dev_private;
+
+- pm_runtime_mark_last_busy(rdev->ddev->dev);
+- pm_runtime_put_autosuspend(rdev->ddev->dev);
++ pm_runtime_mark_last_busy(rdev_to_drm(rdev)->dev);
++ pm_runtime_put_autosuspend(rdev_to_drm(rdev)->dev);
+
+ return 0;
+ }
+@@ -236,7 +236,7 @@ static int radeon_fbdev_fb_helper_fb_probe(struct drm_fb_helper *fb_helper,
+ ret = -ENOMEM;
+ goto err_radeon_fbdev_destroy_pinned_object;
+ }
+- ret = radeon_framebuffer_init(rdev->ddev, fb, &mode_cmd, gobj);
++ ret = radeon_framebuffer_init(rdev_to_drm(rdev), fb, &mode_cmd, gobj);
+ if (ret) {
+ DRM_ERROR("failed to initialize framebuffer %d\n", ret);
+ goto err_kfree;
+@@ -374,12 +374,12 @@ void radeon_fbdev_setup(struct radeon_device *rdev)
+ fb_helper = kzalloc(sizeof(*fb_helper), GFP_KERNEL);
+ if (!fb_helper)
+ return;
+- drm_fb_helper_prepare(rdev->ddev, fb_helper, bpp_sel, &radeon_fbdev_fb_helper_funcs);
++ drm_fb_helper_prepare(rdev_to_drm(rdev), fb_helper, bpp_sel, &radeon_fbdev_fb_helper_funcs);
+
+- ret = drm_client_init(rdev->ddev, &fb_helper->client, "radeon-fbdev",
++ ret = drm_client_init(rdev_to_drm(rdev), &fb_helper->client, "radeon-fbdev",
+ &radeon_fbdev_client_funcs);
+ if (ret) {
+- drm_err(rdev->ddev, "Failed to register client: %d\n", ret);
++ drm_err(rdev_to_drm(rdev), "Failed to register client: %d\n", ret);
+ goto err_drm_client_init;
+ }
+
+@@ -394,13 +394,13 @@ void radeon_fbdev_setup(struct radeon_device *rdev)
+
+ void radeon_fbdev_set_suspend(struct radeon_device *rdev, int state)
+ {
+- if (rdev->ddev->fb_helper)
+- drm_fb_helper_set_suspend(rdev->ddev->fb_helper, state);
++ if (rdev_to_drm(rdev)->fb_helper)
++ drm_fb_helper_set_suspend(rdev_to_drm(rdev)->fb_helper, state);
+ }
+
+ bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj)
+ {
+- struct drm_fb_helper *fb_helper = rdev->ddev->fb_helper;
++ struct drm_fb_helper *fb_helper = rdev_to_drm(rdev)->fb_helper;
+ struct drm_gem_object *gobj;
+
+ if (!fb_helper)
+diff --git a/drivers/gpu/drm/radeon/radeon_fence.c b/drivers/gpu/drm/radeon/radeon_fence.c
+index 4fb780d96f32a7..daff61586be52b 100644
+--- a/drivers/gpu/drm/radeon/radeon_fence.c
++++ b/drivers/gpu/drm/radeon/radeon_fence.c
+@@ -150,7 +150,7 @@ int radeon_fence_emit(struct radeon_device *rdev,
+ rdev->fence_context + ring,
+ seq);
+ radeon_fence_ring_emit(rdev, ring, *fence);
+- trace_radeon_fence_emit(rdev->ddev, ring, (*fence)->seq);
++ trace_radeon_fence_emit(rdev_to_drm(rdev), ring, (*fence)->seq);
+ radeon_fence_schedule_check(rdev, ring);
+ return 0;
+ }
+@@ -489,7 +489,7 @@ static long radeon_fence_wait_seq_timeout(struct radeon_device *rdev,
+ if (!target_seq[i])
+ continue;
+
+- trace_radeon_fence_wait_begin(rdev->ddev, i, target_seq[i]);
++ trace_radeon_fence_wait_begin(rdev_to_drm(rdev), i, target_seq[i]);
+ radeon_irq_kms_sw_irq_get(rdev, i);
+ }
+
+@@ -511,7 +511,7 @@ static long radeon_fence_wait_seq_timeout(struct radeon_device *rdev,
+ continue;
+
+ radeon_irq_kms_sw_irq_put(rdev, i);
+- trace_radeon_fence_wait_end(rdev->ddev, i, target_seq[i]);
++ trace_radeon_fence_wait_end(rdev_to_drm(rdev), i, target_seq[i]);
+ }
+
+ return r;
+@@ -995,7 +995,7 @@ DEFINE_DEBUGFS_ATTRIBUTE(radeon_debugfs_gpu_reset_fops,
+ void radeon_debugfs_fence_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("radeon_gpu_reset", 0444, root, rdev,
+ &radeon_debugfs_gpu_reset_fops);
+diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
+index e66a230331eefc..210e8d43bb23a0 100644
+--- a/drivers/gpu/drm/radeon/radeon_gem.c
++++ b/drivers/gpu/drm/radeon/radeon_gem.c
+@@ -899,7 +899,7 @@ DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_gem_info);
+ void radeon_gem_debugfs_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("radeon_gem_info", 0444, root, rdev,
+ &radeon_debugfs_gem_info_fops);
+diff --git a/drivers/gpu/drm/radeon/radeon_i2c.c b/drivers/gpu/drm/radeon/radeon_i2c.c
+index 3d174390a8afe7..1f16619ed06ed4 100644
+--- a/drivers/gpu/drm/radeon/radeon_i2c.c
++++ b/drivers/gpu/drm/radeon/radeon_i2c.c
+@@ -1011,7 +1011,7 @@ void radeon_i2c_add(struct radeon_device *rdev,
+ struct radeon_i2c_bus_rec *rec,
+ const char *name)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ int i;
+
+ for (i = 0; i < RADEON_MAX_I2C_BUS; i++) {
+diff --git a/drivers/gpu/drm/radeon/radeon_ib.c b/drivers/gpu/drm/radeon/radeon_ib.c
+index 63d914f3414d30..1aa41cc3f99110 100644
+--- a/drivers/gpu/drm/radeon/radeon_ib.c
++++ b/drivers/gpu/drm/radeon/radeon_ib.c
+@@ -309,7 +309,7 @@ DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_sa_info);
+ static void radeon_debugfs_sa_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("radeon_sa_info", 0444, root, rdev,
+ &radeon_debugfs_sa_info_fops);
+diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
+index c4dda908666cfc..9961251b44ba06 100644
+--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
+@@ -80,7 +80,7 @@ static void radeon_hotplug_work_func(struct work_struct *work)
+ {
+ struct radeon_device *rdev = container_of(work, struct radeon_device,
+ hotplug_work.work);
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_mode_config *mode_config = &dev->mode_config;
+ struct drm_connector *connector;
+
+@@ -101,7 +101,7 @@ static void radeon_dp_work_func(struct work_struct *work)
+ {
+ struct radeon_device *rdev = container_of(work, struct radeon_device,
+ dp_work);
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_mode_config *mode_config = &dev->mode_config;
+ struct drm_connector *connector;
+
+@@ -197,7 +197,7 @@ static void radeon_driver_irq_uninstall_kms(struct drm_device *dev)
+
+ static int radeon_irq_install(struct radeon_device *rdev, int irq)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ int ret;
+
+ if (irq == IRQ_NOTCONNECTED)
+@@ -218,7 +218,7 @@ static int radeon_irq_install(struct radeon_device *rdev, int irq)
+
+ static void radeon_irq_uninstall(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+ radeon_driver_irq_uninstall_kms(dev);
+@@ -322,9 +322,9 @@ int radeon_irq_kms_init(struct radeon_device *rdev)
+ spin_lock_init(&rdev->irq.lock);
+
+ /* Disable vblank irqs aggressively for power-saving */
+- rdev->ddev->vblank_disable_immediate = true;
++ rdev_to_drm(rdev)->vblank_disable_immediate = true;
+
+- r = drm_vblank_init(rdev->ddev, rdev->num_crtc);
++ r = drm_vblank_init(rdev_to_drm(rdev), rdev->num_crtc);
+ if (r) {
+ return r;
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
+index a955f8a2f7feec..450ff7daa46cf3 100644
+--- a/drivers/gpu/drm/radeon/radeon_object.c
++++ b/drivers/gpu/drm/radeon/radeon_object.c
+@@ -150,7 +150,7 @@ int radeon_bo_create(struct radeon_device *rdev,
+ bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
+ if (bo == NULL)
+ return -ENOMEM;
+- drm_gem_private_object_init(rdev->ddev, &bo->tbo.base, size);
++ drm_gem_private_object_init(rdev_to_drm(rdev), &bo->tbo.base, size);
+ bo->rdev = rdev;
+ bo->surface_reg = -1;
+ INIT_LIST_HEAD(&bo->list);
+diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
+index 2d9d9f46f24370..b4fb7e70320b80 100644
+--- a/drivers/gpu/drm/radeon/radeon_pm.c
++++ b/drivers/gpu/drm/radeon/radeon_pm.c
+@@ -282,7 +282,7 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev)
+
+ if (rdev->irq.installed) {
+ i = 0;
+- drm_for_each_crtc(crtc, rdev->ddev) {
++ drm_for_each_crtc(crtc, rdev_to_drm(rdev)) {
+ if (rdev->pm.active_crtcs & (1 << i)) {
+ /* This can fail if a modeset is in progress */
+ if (drm_crtc_vblank_get(crtc) == 0)
+@@ -299,7 +299,7 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev)
+
+ if (rdev->irq.installed) {
+ i = 0;
+- drm_for_each_crtc(crtc, rdev->ddev) {
++ drm_for_each_crtc(crtc, rdev_to_drm(rdev)) {
+ if (rdev->pm.req_vblank & (1 << i)) {
+ rdev->pm.req_vblank &= ~(1 << i);
+ drm_crtc_vblank_put(crtc);
+@@ -671,7 +671,7 @@ static ssize_t radeon_hwmon_show_temp(struct device *dev,
+ char *buf)
+ {
+ struct radeon_device *rdev = dev_get_drvdata(dev);
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ int temp;
+
+ /* Can't get temperature when the card is off */
+@@ -715,7 +715,7 @@ static ssize_t radeon_hwmon_show_sclk(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct radeon_device *rdev = dev_get_drvdata(dev);
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ u32 sclk = 0;
+
+ /* Can't get clock frequency when the card is off */
+@@ -740,7 +740,7 @@ static ssize_t radeon_hwmon_show_vddc(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct radeon_device *rdev = dev_get_drvdata(dev);
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ u16 vddc = 0;
+
+ /* Can't get vddc when the card is off */
+@@ -1692,7 +1692,7 @@ void radeon_pm_fini(struct radeon_device *rdev)
+
+ static void radeon_pm_compute_clocks_old(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+
+@@ -1765,7 +1765,7 @@ static void radeon_pm_compute_clocks_old(struct radeon_device *rdev)
+
+ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ struct radeon_connector *radeon_connector;
+@@ -1826,7 +1826,7 @@ static bool radeon_pm_in_vbl(struct radeon_device *rdev)
+ */
+ for (crtc = 0; (crtc < rdev->num_crtc) && in_vbl; crtc++) {
+ if (rdev->pm.active_crtcs & (1 << crtc)) {
+- vbl_status = radeon_get_crtc_scanoutpos(rdev->ddev,
++ vbl_status = radeon_get_crtc_scanoutpos(rdev_to_drm(rdev),
+ crtc,
+ USE_REAL_VBLANKSTART,
+ &vpos, &hpos, NULL, NULL,
+@@ -1918,7 +1918,7 @@ static void radeon_dynpm_idle_work_handler(struct work_struct *work)
+ static int radeon_debugfs_pm_info_show(struct seq_file *m, void *unused)
+ {
+ struct radeon_device *rdev = m->private;
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+
+ if ((rdev->flags & RADEON_IS_PX) &&
+ (ddev->switch_power_state != DRM_SWITCH_POWER_ON)) {
+@@ -1955,7 +1955,7 @@ DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_pm_info);
+ static void radeon_debugfs_pm_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("radeon_pm_info", 0444, root, rdev,
+ &radeon_debugfs_pm_info_fops);
+diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
+index 8d1d458286a842..581ae20c46e4b5 100644
+--- a/drivers/gpu/drm/radeon/radeon_ring.c
++++ b/drivers/gpu/drm/radeon/radeon_ring.c
+@@ -550,7 +550,7 @@ static void radeon_debugfs_ring_init(struct radeon_device *rdev, struct radeon_r
+ {
+ #if defined(CONFIG_DEBUG_FS)
+ const char *ring_name = radeon_debugfs_ring_idx_to_name(ring->idx);
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ if (ring_name)
+ debugfs_create_file(ring_name, 0444, root, ring,
+diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
+index 5c65b6dfb99af7..69d0c12fa419fd 100644
+--- a/drivers/gpu/drm/radeon/radeon_ttm.c
++++ b/drivers/gpu/drm/radeon/radeon_ttm.c
+@@ -682,8 +682,8 @@ int radeon_ttm_init(struct radeon_device *rdev)
+
+ /* No others user of address space so set it to 0 */
+ r = ttm_device_init(&rdev->mman.bdev, &radeon_bo_driver, rdev->dev,
+- rdev->ddev->anon_inode->i_mapping,
+- rdev->ddev->vma_offset_manager,
++ rdev_to_drm(rdev)->anon_inode->i_mapping,
++ rdev_to_drm(rdev)->vma_offset_manager,
+ rdev->need_swiotlb,
+ dma_addressing_limited(&rdev->pdev->dev));
+ if (r) {
+@@ -890,7 +890,7 @@ static const struct file_operations radeon_ttm_gtt_fops = {
+ static void radeon_ttm_debugfs_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct drm_minor *minor = rdev->ddev->primary;
++ struct drm_minor *minor = rdev_to_drm(rdev)->primary;
+ struct dentry *root = minor->debugfs_root;
+
+ debugfs_create_file("radeon_vram", 0444, root, rdev,
+diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
+index d4d1501e6576dd..d6c18fd740ec6a 100644
+--- a/drivers/gpu/drm/radeon/rs400.c
++++ b/drivers/gpu/drm/radeon/rs400.c
+@@ -379,7 +379,7 @@ DEFINE_SHOW_ATTRIBUTE(rs400_debugfs_gart_info);
+ static void rs400_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("rs400_gart_info", 0444, root, rdev,
+ &rs400_debugfs_gart_info_fops);
+@@ -474,7 +474,7 @@ int rs400_resume(struct radeon_device *rdev)
+ RREG32(R_0007C0_CP_STAT));
+ }
+ /* post */
+- radeon_combios_asic_init(rdev->ddev);
++ radeon_combios_asic_init(rdev_to_drm(rdev));
+ /* Resume clock after posting */
+ r300_clock_startup(rdev);
+ /* Initialize surface registers */
+@@ -552,7 +552,7 @@ int rs400_init(struct radeon_device *rdev)
+ return -EINVAL;
+
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize memory controller */
+ rs400_mc_init(rdev);
+ /* Fence driver */
+diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c
+index 5c162778899b09..88c8e91ea65128 100644
+--- a/drivers/gpu/drm/radeon/rs600.c
++++ b/drivers/gpu/drm/radeon/rs600.c
+@@ -321,7 +321,7 @@ void rs600_pm_misc(struct radeon_device *rdev)
+
+ void rs600_pm_prepare(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 tmp;
+@@ -339,7 +339,7 @@ void rs600_pm_prepare(struct radeon_device *rdev)
+
+ void rs600_pm_finish(struct radeon_device *rdev)
+ {
+- struct drm_device *ddev = rdev->ddev;
++ struct drm_device *ddev = rdev_to_drm(rdev);
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ u32 tmp;
+@@ -408,7 +408,7 @@ void rs600_hpd_set_polarity(struct radeon_device *rdev,
+
+ void rs600_hpd_init(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned enable = 0;
+
+@@ -435,7 +435,7 @@ void rs600_hpd_init(struct radeon_device *rdev)
+
+ void rs600_hpd_fini(struct radeon_device *rdev)
+ {
+- struct drm_device *dev = rdev->ddev;
++ struct drm_device *dev = rdev_to_drm(rdev);
+ struct drm_connector *connector;
+ unsigned disable = 0;
+
+@@ -797,7 +797,7 @@ int rs600_irq_process(struct radeon_device *rdev)
+ /* Vertical blank interrupts */
+ if (G_007EDC_LB_D1_VBLANK_INTERRUPT(rdev->irq.stat_regs.r500.disp_int)) {
+ if (rdev->irq.crtc_vblank_int[0]) {
+- drm_handle_vblank(rdev->ddev, 0);
++ drm_handle_vblank(rdev_to_drm(rdev), 0);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -806,7 +806,7 @@ int rs600_irq_process(struct radeon_device *rdev)
+ }
+ if (G_007EDC_LB_D2_VBLANK_INTERRUPT(rdev->irq.stat_regs.r500.disp_int)) {
+ if (rdev->irq.crtc_vblank_int[1]) {
+- drm_handle_vblank(rdev->ddev, 1);
++ drm_handle_vblank(rdev_to_drm(rdev), 1);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -1133,7 +1133,7 @@ int rs600_init(struct radeon_device *rdev)
+ return -EINVAL;
+
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize memory controller */
+ rs600_mc_init(rdev);
+ r100_debugfs_rbbm_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/rs690.c b/drivers/gpu/drm/radeon/rs690.c
+index 14fb0819b8c19c..016eb4992803dc 100644
+--- a/drivers/gpu/drm/radeon/rs690.c
++++ b/drivers/gpu/drm/radeon/rs690.c
+@@ -845,7 +845,7 @@ int rs690_init(struct radeon_device *rdev)
+ return -EINVAL;
+
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize memory controller */
+ rs690_mc_init(rdev);
+ rv515_debugfs(rdev);
+diff --git a/drivers/gpu/drm/radeon/rv515.c b/drivers/gpu/drm/radeon/rv515.c
+index bbc6ccabf78876..1b4dfb6455858e 100644
+--- a/drivers/gpu/drm/radeon/rv515.c
++++ b/drivers/gpu/drm/radeon/rv515.c
+@@ -255,7 +255,7 @@ DEFINE_SHOW_ATTRIBUTE(rv515_debugfs_ga_info);
+ void rv515_debugfs(struct radeon_device *rdev)
+ {
+ #if defined(CONFIG_DEBUG_FS)
+- struct dentry *root = rdev->ddev->primary->debugfs_root;
++ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
+
+ debugfs_create_file("rv515_pipes_info", 0444, root, rdev,
+ &rv515_debugfs_pipes_info_fops);
+@@ -636,7 +636,7 @@ int rv515_init(struct radeon_device *rdev)
+ if (radeon_boot_test_post_card(rdev) == false)
+ return -EINVAL;
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* initialize AGP */
+ if (rdev->flags & RADEON_IS_AGP) {
+ r = radeon_agp_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
+index 9ce12fa3c35683..7d4b0bf591090a 100644
+--- a/drivers/gpu/drm/radeon/rv770.c
++++ b/drivers/gpu/drm/radeon/rv770.c
+@@ -1935,7 +1935,7 @@ int rv770_init(struct radeon_device *rdev)
+ /* Initialize surface registers */
+ radeon_surface_init(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+ /* Fence driver */
+ radeon_fence_driver_init(rdev);
+ /* initialize AGP */
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 15759c8ca5b7ba..6c95575ce109fc 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -6277,7 +6277,7 @@ int si_irq_process(struct radeon_device *rdev)
+ event_name = "vblank";
+
+ if (rdev->irq.crtc_vblank_int[crtc_idx]) {
+- drm_handle_vblank(rdev->ddev, crtc_idx);
++ drm_handle_vblank(rdev_to_drm(rdev), crtc_idx);
+ rdev->pm.vblank_sync = true;
+ wake_up(&rdev->irq.vblank_queue);
+ }
+@@ -6839,7 +6839,7 @@ int si_init(struct radeon_device *rdev)
+ /* Initialize surface registers */
+ radeon_surface_init(rdev);
+ /* Initialize clocks */
+- radeon_get_clock_info(rdev->ddev);
++ radeon_get_clock_info(rdev_to_drm(rdev));
+
+ /* Fence driver */
+ radeon_fence_driver_init(rdev);
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index a0febdb6f21452..eae37dcaaaeb71 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -560,6 +560,7 @@ void v3d_irq_disable(struct v3d_dev *v3d);
+ void v3d_irq_reset(struct v3d_dev *v3d);
+
+ /* v3d_mmu.c */
++int v3d_mmu_flush_all(struct v3d_dev *v3d);
+ int v3d_mmu_set_page_table(struct v3d_dev *v3d);
+ void v3d_mmu_insert_ptes(struct v3d_bo *bo);
+ void v3d_mmu_remove_ptes(struct v3d_bo *bo);
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index d469bda52c1a5e..20bf33702c3c4f 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -70,6 +70,8 @@ v3d_overflow_mem_work(struct work_struct *work)
+ list_add_tail(&bo->unref_head, &v3d->bin_job->render->unref_list);
+ spin_unlock_irqrestore(&v3d->job_lock, irqflags);
+
++ v3d_mmu_flush_all(v3d);
++
+ V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << V3D_MMU_PAGE_SHIFT);
+ V3D_CORE_WRITE(0, V3D_PTB_BPOS, obj->size);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c
+index 14f3af40d6f6d1..5bb7821c0243c6 100644
+--- a/drivers/gpu/drm/v3d/v3d_mmu.c
++++ b/drivers/gpu/drm/v3d/v3d_mmu.c
+@@ -28,36 +28,27 @@
+ #define V3D_PTE_WRITEABLE BIT(29)
+ #define V3D_PTE_VALID BIT(28)
+
+-static int v3d_mmu_flush_all(struct v3d_dev *v3d)
++int v3d_mmu_flush_all(struct v3d_dev *v3d)
+ {
+ int ret;
+
+- /* Make sure that another flush isn't already running when we
+- * start this one.
+- */
+- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+- V3D_MMU_CTL_TLB_CLEARING), 100);
+- if (ret)
+- dev_err(v3d->drm.dev, "TLB clear wait idle pre-wait failed\n");
+-
+- V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
+- V3D_MMU_CTL_TLB_CLEAR);
+-
+- V3D_WRITE(V3D_MMUC_CONTROL,
+- V3D_MMUC_CONTROL_FLUSH |
++ V3D_WRITE(V3D_MMUC_CONTROL, V3D_MMUC_CONTROL_FLUSH |
+ V3D_MMUC_CONTROL_ENABLE);
+
+- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+- V3D_MMU_CTL_TLB_CLEARING), 100);
++ ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
++ V3D_MMUC_CONTROL_FLUSHING), 100);
+ if (ret) {
+- dev_err(v3d->drm.dev, "TLB clear wait idle failed\n");
++ dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
+ return ret;
+ }
+
+- ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
+- V3D_MMUC_CONTROL_FLUSHING), 100);
++ V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
++ V3D_MMU_CTL_TLB_CLEAR);
++
++ ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
++ V3D_MMU_CTL_TLB_CLEARING), 100);
+ if (ret)
+- dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
++ dev_err(v3d->drm.dev, "MMU TLB clear wait idle failed\n");
+
+ return ret;
+ }
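
The reordered hunk above makes each wait poll the status bit of the operation it actually started: the MMUC cache flush is kicked and awaited first, and only then is the TLB clear issued and awaited. A minimal user-space sketch of that kick-then-poll-own-bit pattern, with invented regs[]/wait_for_clear() helpers standing in for the driver's MMIO accessors:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical register file; in the driver these are MMIO accesses. */
    static uint32_t regs[2];
    #define REG_MMUC_CONTROL 0
    #define REG_MMU_CTL      1
    #define MMUC_FLUSHING  (1u << 0)
    #define TLB_CLEARING   (1u << 0)

    static int wait_for_clear(int reg, uint32_t bit, int tries)
    {
        while (tries--) {
            if (!(regs[reg] & bit))
                return 0;          /* operation finished */
            regs[reg] &= ~bit;     /* stand-in for hardware progress */
        }
        return -1;                 /* timed out */
    }

    int main(void)
    {
        /* 1. Kick the cache flush, then wait on *its* busy bit. */
        regs[REG_MMUC_CONTROL] |= MMUC_FLUSHING;
        if (wait_for_clear(REG_MMUC_CONTROL, MMUC_FLUSHING, 100))
            return fprintf(stderr, "MMUC flush timed out\n"), 1;

        /* 2. Only then kick the TLB clear and wait on its bit. */
        regs[REG_MMU_CTL] |= TLB_CLEARING;
        if (wait_for_clear(REG_MMU_CTL, TLB_CLEARING, 100))
            return fprintf(stderr, "TLB clear timed out\n"), 1;

        puts("flush then clear completed in order");
        return 0;
    }
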
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index ad1e6236ff6ff1..255bd9100c78a7 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -133,8 +133,31 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+ struct v3d_stats *local_stats = &file->stats[queue];
+ u64 now = local_clock();
+-
+- preempt_disable();
++ unsigned long flags;
++
++ /*
++ * We only need to disable local interrupts to appease lockdep, which
++ * would otherwise think v3d_job_start_stats vs v3d_stats_update has an
++ * unsafe in-irq vs no-irq-off usage problem. This is a false positive
++ * because all the locks are per queue and stats type, and all jobs are
++ * strictly serialised, one at a time. More specifically:
++ *
++ * 1. Locks for GPU queues are updated from interrupt handlers under a
++ * spin lock and started here with preemption disabled.
++ *
++ * 2. Locks for CPU queues are updated from the worker with preemption
++ * disabled and equally started here with preemption disabled.
++ *
++ * Therefore both are consistent.
++ *
++ * 3. Because the next job can only be queued after the previous one has
++ * been signaled, and locks are per queue, there is also no scope for
++ * the start part to race with the update part.
++ */
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_save(flags);
++ else
++ preempt_disable();
+
+ write_seqcount_begin(&local_stats->lock);
+ local_stats->start_ns = now;
+@@ -144,7 +167,10 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ global_stats->start_ns = now;
+ write_seqcount_end(&global_stats->lock);
+
+- preempt_enable();
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_restore(flags);
++ else
++ preempt_enable();
+ }
+
+ static void
+@@ -165,11 +191,21 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
+ struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+ struct v3d_stats *local_stats = &file->stats[queue];
+ u64 now = local_clock();
++ unsigned long flags;
++
++ /* See comment in v3d_job_start_stats() */
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_save(flags);
++ else
++ preempt_disable();
+
+- preempt_disable();
+ v3d_stats_update(local_stats, now);
+ v3d_stats_update(global_stats, now);
+- preempt_enable();
++
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_restore(flags);
++ else
++ preempt_enable();
+ }
+
+ static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
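
The stats hunks keep writers entering the seqcount write sections with preemption (or, under lockdep, interrupts) disabled, relying on external serialisation of writers while readers retry. A toy single-file model of that seqcount idea — not the kernel's seqcount_t, just the odd/even retry protocol it implements:

    #include <stdio.h>
    #include <stdatomic.h>

    /* Toy seqcount: the writer bumps to odd, updates, bumps back to even. */
    static _Atomic unsigned seq;
    static unsigned long long start_ns;

    static void write_stats(unsigned long long now)
    {
        atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* odd: write in progress */
        start_ns = now;
        atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* even: stable again */
    }

    static unsigned long long read_stats(void)
    {
        unsigned s1, s2;
        unsigned long long v;

        do {
            s1 = atomic_load_explicit(&seq, memory_order_acquire);
            v  = start_ns;
            s2 = atomic_load_explicit(&seq, memory_order_acquire);
        } while (s1 != s2 || (s1 & 1)); /* retry if a write raced us */
        return v;
    }

    int main(void)
    {
        write_stats(42);
        printf("start_ns = %llu\n", read_stats());
        return 0;
    }
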
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_mock.c b/drivers/gpu/drm/vc4/tests/vc4_mock.c
+index 0731a7d85d7abc..922849dd4b4787 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_mock.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_mock.c
+@@ -155,11 +155,11 @@ KUNIT_DEFINE_ACTION_WRAPPER(kunit_action_drm_dev_unregister,
+ drm_dev_unregister,
+ struct drm_device *);
+
+-static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
++static struct vc4_dev *__mock_device(struct kunit *test, enum vc4_gen gen)
+ {
+ struct drm_device *drm;
+- const struct drm_driver *drv = is_vc5 ? &vc5_drm_driver : &vc4_drm_driver;
+- const struct vc4_mock_desc *desc = is_vc5 ? &vc5_mock : &vc4_mock;
++ const struct drm_driver *drv = (gen == VC4_GEN_5) ? &vc5_drm_driver : &vc4_drm_driver;
++ const struct vc4_mock_desc *desc = (gen == VC4_GEN_5) ? &vc5_mock : &vc4_mock;
+ struct vc4_dev *vc4;
+ struct device *dev;
+ int ret;
+@@ -173,7 +173,7 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4);
+
+ vc4->dev = dev;
+- vc4->is_vc5 = is_vc5;
++ vc4->gen = gen;
+
+ vc4->hvs = __vc4_hvs_alloc(vc4, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4->hvs);
+@@ -198,10 +198,10 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
+
+ struct vc4_dev *vc4_mock_device(struct kunit *test)
+ {
+- return __mock_device(test, false);
++ return __mock_device(test, VC4_GEN_4);
+ }
+
+ struct vc4_dev *vc5_mock_device(struct kunit *test)
+ {
+- return __mock_device(test, true);
++ return __mock_device(test, VC4_GEN_5);
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
+index 86d629e45307d2..89e427c9ed327a 100644
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -251,7 +251,7 @@ void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->purgeable.lock);
+@@ -265,7 +265,7 @@ static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* list_del_init() is used here because the caller might release
+@@ -396,7 +396,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+@@ -427,7 +427,7 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
+ struct drm_gem_dma_object *dma_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ if (size == 0)
+@@ -496,7 +496,7 @@ int vc4_bo_dumb_create(struct drm_file *file_priv,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ ret = vc4_dumb_fixup_args(args);
+@@ -622,7 +622,7 @@ int vc4_bo_inc_usecnt(struct vc4_bo *bo)
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /* Fast path: if the BO is already retained by someone, no need to
+@@ -661,7 +661,7 @@ void vc4_bo_dec_usecnt(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* Fast path: if the BO is still retained by someone, no need to test
+@@ -783,7 +783,7 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ ret = vc4_grab_bin_bo(vc4, vc4file);
+@@ -813,7 +813,7 @@ int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_vc4_mmap_bo *args = data;
+ struct drm_gem_object *gem_obj;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ gem_obj = drm_gem_object_lookup(file_priv, args->handle);
+@@ -839,7 +839,7 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->size == 0)
+@@ -918,7 +918,7 @@ int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo;
+ bool t_format;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->flags != 0)
+@@ -964,7 +964,7 @@ int vc4_get_tiling_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->flags != 0 || args->modifier != 0)
+@@ -1007,7 +1007,7 @@ int vc4_bo_cache_init(struct drm_device *dev)
+ int ret;
+ int i;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /* Create the initial set of BO labels that the kernel will
+@@ -1071,7 +1071,7 @@ int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ int ret = 0, label;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!args->len)
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 8b5a7e5eb1466c..26a7cf7f646515 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -263,7 +263,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
+ * Removing 1 from the FIFO full level however
+ * seems to completely remove that issue.
+ */
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
+
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
+@@ -428,7 +428,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ if (is_dsi)
+ CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ CRTC_WRITE(PV_MUX_CFG,
+ VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP,
+ PV_MUX_CFG_RGB_PIXEL_MUX_MODE));
+@@ -913,7 +913,7 @@ static int vc4_async_set_fence_cb(struct drm_device *dev,
+ struct dma_fence *fence;
+ int ret;
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
+
+ return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno,
+@@ -1000,7 +1000,7 @@ static int vc4_async_page_flip(struct drm_crtc *crtc,
+ struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /*
+@@ -1043,7 +1043,7 @@ int vc4_page_flip(struct drm_crtc *crtc,
+ struct drm_device *dev = crtc->dev;
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ return vc5_async_page_flip(crtc, fb, event, flags);
+ else
+ return vc4_async_page_flip(crtc, fb, event, flags);
+@@ -1338,9 +1338,8 @@ int __vc4_crtc_init(struct drm_device *drm,
+
+ drm_crtc_helper_add(crtc, crtc_helper_funcs);
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));
+-
+ drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
+
+ /* We support CTM, but only for one CRTC at a time. It's therefore
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
+index c133e96b8aca25..550324819f37fc 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -98,7 +98,7 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
+ if (args->pad != 0)
+ return -EINVAL;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d)
+@@ -147,7 +147,7 @@ static int vc4_open(struct drm_device *dev, struct drm_file *file)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_file *vc4file;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL);
+@@ -165,7 +165,7 @@ static void vc4_close(struct drm_device *dev, struct drm_file *file)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_file *vc4file = file->driver_priv;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (vc4file->bin_bo_used)
+@@ -291,13 +291,17 @@ static int vc4_drm_bind(struct device *dev)
+ struct vc4_dev *vc4;
+ struct device_node *node;
+ struct drm_crtc *crtc;
+- bool is_vc5;
++ enum vc4_gen gen;
+ int ret = 0;
+
+ dev->coherent_dma_mask = DMA_BIT_MASK(32);
+
+- is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
+- if (is_vc5)
++ if (of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5"))
++ gen = VC4_GEN_5;
++ else
++ gen = VC4_GEN_4;
++
++ if (gen == VC4_GEN_5)
+ driver = &vc5_drm_driver;
+ else
+ driver = &vc4_drm_driver;
+@@ -315,13 +319,13 @@ static int vc4_drm_bind(struct device *dev)
+ vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base);
+ if (IS_ERR(vc4))
+ return PTR_ERR(vc4);
+- vc4->is_vc5 = is_vc5;
++ vc4->gen = gen;
+ vc4->dev = dev;
+
+ drm = &vc4->base;
+ platform_set_drvdata(pdev, drm);
+
+- if (!is_vc5) {
++ if (gen == VC4_GEN_4) {
+ ret = drmm_mutex_init(drm, &vc4->bin_bo_lock);
+ if (ret)
+ goto err;
+@@ -335,7 +339,7 @@ static int vc4_drm_bind(struct device *dev)
+ if (ret)
+ goto err;
+
+- if (!is_vc5) {
++ if (gen == VC4_GEN_4) {
+ ret = vc4_gem_init(drm);
+ if (ret)
+ goto err;
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 08e29fa825635d..dd452e6a114304 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -80,11 +80,16 @@ struct vc4_perfmon {
+ u64 counters[] __counted_by(ncounters);
+ };
+
++enum vc4_gen {
++ VC4_GEN_4,
++ VC4_GEN_5,
++};
++
+ struct vc4_dev {
+ struct drm_device base;
+ struct device *dev;
+
+- bool is_vc5;
++ enum vc4_gen gen;
+
+ unsigned int irq;
+
+@@ -315,6 +320,7 @@ struct vc4_hvs {
+ struct platform_device *pdev;
+ void __iomem *regs;
+ u32 __iomem *dlist;
++ unsigned int dlist_mem_size;
+
+ struct clk *core_clk;
+
+diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
+index 03648f954985e5..b4f72f2aaf1ba7 100644
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -76,7 +76,7 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
+ u32 i;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -389,7 +389,7 @@ vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns,
+ unsigned long timeout_expire;
+ DEFINE_WAIT(wait);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (vc4->finished_seqno >= seqno)
+@@ -474,7 +474,7 @@ vc4_submit_next_bin_job(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_exec_info *exec;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ again:
+@@ -522,7 +522,7 @@ vc4_submit_next_render_job(struct drm_device *dev)
+ if (!exec)
+ return;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* A previous RCL may have written to one of our textures, and
+@@ -543,7 +543,7 @@ vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ bool was_empty = list_empty(&vc4->render_job_list);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ list_move_tail(&exec->head, &vc4->render_job_list);
+@@ -970,7 +970,7 @@ vc4_job_handle_completed(struct vc4_dev *vc4)
+ unsigned long irqflags;
+ struct vc4_seqno_cb *cb, *cb_temp;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ spin_lock_irqsave(&vc4->job_lock, irqflags);
+@@ -1009,7 +1009,7 @@ int vc4_queue_seqno_cb(struct drm_device *dev,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ unsigned long irqflags;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ cb->func = func;
+@@ -1065,7 +1065,7 @@ vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct drm_vc4_wait_seqno *args = data;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
+@@ -1082,7 +1082,7 @@ vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->pad != 0)
+@@ -1131,7 +1131,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
+ args->shader_rec_size,
+ args->bo_handle_count);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -1268,7 +1268,7 @@ int vc4_gem_init(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ vc4->dma_fence_context = dma_fence_context_alloc(1);
+@@ -1327,7 +1327,7 @@ int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ switch (args->madv) {
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index cb424604484f1c..727575fdc2841b 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -147,6 +147,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ if (!drm_dev_enter(drm, &idx))
+ return -ENODEV;
+
++ WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++
+ drm_print_regset32(&p, &vc4_hdmi->hdmi_regset);
+ drm_print_regset32(&p, &vc4_hdmi->hd_regset);
+ drm_print_regset32(&p, &vc4_hdmi->cec_regset);
+@@ -156,6 +158,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ drm_print_regset32(&p, &vc4_hdmi->ram_regset);
+ drm_print_regset32(&p, &vc4_hdmi->rm_regset);
+
++ pm_runtime_put(&vc4_hdmi->pdev->dev);
++
+ drm_dev_exit(idx);
+
+ return 0;
+@@ -2047,6 +2051,7 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+ struct drm_device *drm = vc4_hdmi->connector.dev;
+ struct drm_connector *connector = &vc4_hdmi->connector;
++ struct vc4_dev *vc4 = to_vc4_dev(drm);
+ unsigned int sample_rate = params->sample_rate;
+ unsigned int channels = params->channels;
+ unsigned long flags;
+@@ -2104,11 +2109,18 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+ VC4_HDMI_AUDIO_PACKET_CEA_MASK);
+
+ /* Set the MAI threshold */
+- HDMI_WRITE(HDMI_MAI_THR,
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) |
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) |
+- VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) |
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW));
++ if (vc4->gen >= VC4_GEN_5)
++ HDMI_WRITE(HDMI_MAI_THR,
++ VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQLOW));
++ else
++ HDMI_WRITE(HDMI_MAI_THR,
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x6, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_DREQLOW));
+
+ HDMI_WRITE(HDMI_MAI_CONFIG,
+ VC4_HDMI_MAI_CONFIG_BIT_REVERSE |
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 04af672caacb1b..df71bc68cdd009 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -110,7 +110,8 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_hvs *hvs = vc4->hvs;
+ struct drm_printer p = drm_seq_file_printer(m);
+- unsigned int next_entry_start = 0;
++ unsigned int dlist_mem_size = hvs->dlist_mem_size;
++ unsigned int next_entry_start;
+ unsigned int i, j;
+ u32 dlist_word, dispstat;
+
+@@ -124,8 +125,9 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
+ }
+
+ drm_printf(&p, "HVS chan %u:\n", i);
++ next_entry_start = 0;
+
+- for (j = HVS_READ(SCALER_DISPLISTX(i)); j < 256; j++) {
++ for (j = HVS_READ(SCALER_DISPLISTX(i)); j < dlist_mem_size; j++) {
+ dlist_word = readl((u32 __iomem *)vc4->hvs->dlist + j);
+ drm_printf(&p, "dlist: %02d: 0x%08x\n", j,
+ dlist_word);
+@@ -222,6 +224,9 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
++ if (hvs->vc4->gen != VC4_GEN_4)
++ goto exit;
++
+ /* The LUT memory is laid out with each HVS channel in order,
+ * each of which takes 256 writes for R, 256 for G, then 256
+ * for B.
+@@ -237,6 +242,7 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
+ for (i = 0; i < crtc->gamma_size; i++)
+ HVS_WRITE(SCALER_GAMDATA, vc4_crtc->lut_b[i]);
+
++exit:
+ drm_dev_exit(idx);
+ }
+
+@@ -291,7 +297,7 @@ int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
+ u32 reg;
+ int ret;
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ return output;
+
+ /*
+@@ -372,7 +378,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ dispctrl = SCALER_DISPCTRLX_ENABLE;
+ dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan));
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ dispctrl |= VC4_SET_FIELD(mode->hdisplay,
+ SCALER_DISPCTRLX_WIDTH) |
+ VC4_SET_FIELD(mode->vdisplay,
+@@ -394,7 +400,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ dispbkgndx &= ~SCALER_DISPBKGND_INTERLACE;
+
+ HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
+- ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
++ ((vc4->gen == VC4_GEN_4) ? SCALER_DISPBKGND_GAMMA : 0) |
+ (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
+
+ /* Reload the LUT, since the SRAMs would have been disabled if
+@@ -415,13 +421,11 @@ void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int chan)
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
+- if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE)
++ if (!(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE))
+ goto out;
+
+- HVS_WRITE(SCALER_DISPCTRLX(chan),
+- HVS_READ(SCALER_DISPCTRLX(chan)) | SCALER_DISPCTRLX_RESET);
+- HVS_WRITE(SCALER_DISPCTRLX(chan),
+- HVS_READ(SCALER_DISPCTRLX(chan)) & ~SCALER_DISPCTRLX_ENABLE);
++ HVS_WRITE(SCALER_DISPCTRLX(chan), SCALER_DISPCTRLX_RESET);
++ HVS_WRITE(SCALER_DISPCTRLX(chan), 0);
+
+ /* Once we leave, the scaler should be disabled and its fifo empty. */
+ WARN_ON_ONCE(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_RESET);
+@@ -580,7 +584,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
+ }
+
+ if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)
+- return;
++ goto exit;
+
+ if (debug_dump_regs) {
+ DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc));
+@@ -663,12 +667,14 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
+ vc4_hvs_dump_state(hvs);
+ }
+
++exit:
+ drm_dev_exit(idx);
+ }
+
+ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ {
+- struct drm_device *drm = &hvs->vc4->base;
++ struct vc4_dev *vc4 = hvs->vc4;
++ struct drm_device *drm = &vc4->base;
+ u32 dispctrl;
+ int idx;
+
+@@ -676,8 +682,9 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl &= ~(hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel));
++ dispctrl &= ~((vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+
+@@ -686,7 +693,8 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+
+ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ {
+- struct drm_device *drm = &hvs->vc4->base;
++ struct vc4_dev *vc4 = hvs->vc4;
++ struct drm_device *drm = &vc4->base;
+ u32 dispctrl;
+ int idx;
+
+@@ -694,8 +702,9 @@ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl |= (hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel));
++ dispctrl |= ((vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPSTAT,
+ SCALER_DISPSTAT_EUFLOW(channel));
+@@ -738,8 +747,10 @@ static irqreturn_t vc4_hvs_irq_handler(int irq, void *data)
+ control = HVS_READ(SCALER_DISPCTRL);
+
+ for (channel = 0; channel < SCALER_CHANNELS_COUNT; channel++) {
+- dspeislur = vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel);
++ dspeislur = (vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel);
++
+ /* Interrupt masking is not always honored, so check it here. */
+ if (status & SCALER_DISPSTAT_EUFLOW(channel) &&
+ control & dspeislur) {
+@@ -767,7 +778,7 @@ int vc4_hvs_debugfs_init(struct drm_minor *minor)
+ if (!vc4->hvs)
+ return -ENODEV;
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ debugfs_create_bool("hvs_load_tracker", S_IRUGO | S_IWUSR,
+ minor->debugfs_root,
+ &vc4->load_tracker_enabled);
+@@ -800,16 +811,17 @@ struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pde
+ * our 16K), since we don't want to scramble the screen when
+ * transitioning from the firmware's boot setup to runtime.
+ */
++ hvs->dlist_mem_size = (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END;
+ drm_mm_init(&hvs->dlist_mm,
+ HVS_BOOTLOADER_DLIST_END,
+- (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END);
++ hvs->dlist_mem_size);
+
+ /* Set up the HVS LBM memory manager. We could have some more
+ * complicated data structure that allowed reuse of LBM areas
+ * between planes when they don't overlap on the screen, but
+ * for now we just allocate globally.
+ */
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ /* 48k words of 2x12-bit pixels */
+ drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
+ else
+@@ -843,7 +855,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ hvs->regset.regs = hvs_regs;
+ hvs->regset.nregs = ARRAY_SIZE(hvs_regs);
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ struct rpi_firmware *firmware;
+ struct device_node *node;
+ unsigned int max_rate;
+@@ -881,7 +893,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ }
+ }
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ hvs->dlist = hvs->regs + SCALER_DLIST_START;
+ else
+ hvs->dlist = hvs->regs + SCALER5_DLIST_START;
+@@ -922,7 +934,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_DISPEIRQ(1) |
+ SCALER_DISPCTRL_DISPEIRQ(2);
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+ SCALER_DISPCTRL_SLVWREIRQ |
+ SCALER_DISPCTRL_SLVRDEIRQ |
+@@ -966,7 +978,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+
+ /* Recompute Composite Output Buffer (COB) allocations for the displays
+ */
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* The COB is 20736 pixels, or just over 10 lines at 2048 wide.
+ * The bottom 2048 pixels are full 32bpp RGBA (intended for the
+ * TXP composing RGBA to memory), whilst the remainder are only
+diff --git a/drivers/gpu/drm/vc4/vc4_irq.c b/drivers/gpu/drm/vc4/vc4_irq.c
+index 563b3dfeb9b90b..c006d20b5a78da 100644
+--- a/drivers/gpu/drm/vc4/vc4_irq.c
++++ b/drivers/gpu/drm/vc4/vc4_irq.c
+@@ -263,7 +263,7 @@ vc4_irq_enable(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (!vc4->v3d)
+@@ -280,7 +280,7 @@ vc4_irq_disable(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (!vc4->v3d)
+@@ -303,7 +303,7 @@ int vc4_irq_install(struct drm_device *dev, int irq)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (irq == IRQ_NOTCONNECTED)
+@@ -324,7 +324,7 @@ void vc4_irq_uninstall(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ vc4_irq_disable(dev);
+@@ -337,7 +337,7 @@ void vc4_irq_reset(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ unsigned long irqflags;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* Acknowledge any stale IRQs. */
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index 5495f2a94fa926..bddfcad1095013 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -369,7 +369,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+ old_hvs_state->fifo_state[channel].pending_commit = NULL;
+ }
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ unsigned long state_rate = max(old_hvs_state->core_clock_rate,
+ new_hvs_state->core_clock_rate);
+ unsigned long core_rate = clamp_t(unsigned long, state_rate,
+@@ -388,7 +388,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+
+ vc4_ctm_commit(vc4, state);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ vc5_hvs_pv_muxing_commit(vc4, state);
+ else
+ vc4_hvs_pv_muxing_commit(vc4, state);
+@@ -406,7 +406,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+
+ drm_atomic_helper_cleanup_planes(dev, state);
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ unsigned long core_rate = min_t(unsigned long,
+ hvs->max_core_rate,
+ new_hvs_state->core_clock_rate);
+@@ -461,7 +461,7 @@ static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct drm_mode_fb_cmd2 mode_cmd_local;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ /* If the user didn't specify a modifier, use the
+@@ -1040,7 +1040,7 @@ int vc4_kms_load(struct drm_device *dev)
+ * the BCM2711, but the load tracker computations are used for
+ * the core clock rate calculation.
+ */
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* Start with the load tracker enabled. Can be
+ * disabled through the debugfs load_tracker file.
+ */
+@@ -1056,7 +1056,7 @@ int vc4_kms_load(struct drm_device *dev)
+ return ret;
+ }
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ dev->mode_config.max_width = 7680;
+ dev->mode_config.max_height = 7680;
+ } else {
+@@ -1064,7 +1064,7 @@ int vc4_kms_load(struct drm_device *dev)
+ dev->mode_config.max_height = 2048;
+ }
+
+- dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs;
++ dev->mode_config.funcs = (vc4->gen > VC4_GEN_4) ? &vc5_mode_funcs : &vc4_mode_funcs;
+ dev->mode_config.helper_private = &vc4_mode_config_helpers;
+ dev->mode_config.preferred_depth = 24;
+ dev->mode_config.async_page_flip = true;
+diff --git a/drivers/gpu/drm/vc4/vc4_perfmon.c b/drivers/gpu/drm/vc4/vc4_perfmon.c
+index c00a5cc2316d20..e4fda72c19f92f 100644
+--- a/drivers/gpu/drm/vc4/vc4_perfmon.c
++++ b/drivers/gpu/drm/vc4/vc4_perfmon.c
+@@ -23,7 +23,7 @@ void vc4_perfmon_get(struct vc4_perfmon *perfmon)
+ return;
+
+ vc4 = perfmon->dev;
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ refcount_inc(&perfmon->refcnt);
+@@ -37,7 +37,7 @@ void vc4_perfmon_put(struct vc4_perfmon *perfmon)
+ return;
+
+ vc4 = perfmon->dev;
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (refcount_dec_and_test(&perfmon->refcnt))
+@@ -49,7 +49,7 @@ void vc4_perfmon_start(struct vc4_dev *vc4, struct vc4_perfmon *perfmon)
+ unsigned int i;
+ u32 mask;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon))
+@@ -69,7 +69,7 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
+ {
+ unsigned int i;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (WARN_ON_ONCE(!vc4->active_perfmon ||
+@@ -90,7 +90,7 @@ struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
+ struct vc4_dev *vc4 = vc4file->dev;
+ struct vc4_perfmon *perfmon;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ mutex_lock(&vc4file->perfmon.lock);
+@@ -105,7 +105,7 @@ void vc4_perfmon_open_file(struct vc4_file *vc4file)
+ {
+ struct vc4_dev *vc4 = vc4file->dev;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_init(&vc4file->perfmon.lock);
+@@ -131,7 +131,7 @@ void vc4_perfmon_close_file(struct vc4_file *vc4file)
+ {
+ struct vc4_dev *vc4 = vc4file->dev;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4file->perfmon.lock);
+@@ -151,7 +151,7 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
+ unsigned int i;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -205,7 +205,7 @@ int vc4_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ struct drm_vc4_perfmon_destroy *req = data;
+ struct vc4_perfmon *perfmon;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -233,7 +233,7 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
+ struct vc4_perfmon *perfmon;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 07caf2a47c6cef..866bc46ee6d53a 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -587,10 +587,10 @@ static u32 vc4_lbm_size(struct drm_plane_state *state)
+ }
+
+ /* Align it to 64 or 128 (hvs5) bytes */
+- lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64);
++ lbm = roundup(lbm, vc4->gen == VC4_GEN_5 ? 128 : 64);
+
+ /* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
+- lbm /= vc4->is_vc5 ? 4 : 2;
++ lbm /= vc4->gen == VC4_GEN_5 ? 4 : 2;
+
+ return lbm;
+ }
+@@ -706,7 +706,7 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
+ ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
+ &vc4_state->lbm,
+ lbm_size,
+- vc4->is_vc5 ? 64 : 32,
++ vc4->gen == VC4_GEN_5 ? 64 : 32,
+ 0, 0);
+ spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
+
+@@ -1057,7 +1057,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
+ fb->format->has_alpha;
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* Control word */
+ vc4_dlist_write(vc4_state,
+ SCALER_CTL0_VALID |
+@@ -1632,7 +1632,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
+- if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
++ if (!hvs_formats[i].hvs5_only || vc4->gen == VC4_GEN_5) {
+ formats[num_formats] = hvs_formats[i].drm;
+ num_formats++;
+ }
+@@ -1647,7 +1647,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
+ return ERR_CAST(vc4_plane);
+ plane = &vc4_plane->base;
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
+ else
+ drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
+diff --git a/drivers/gpu/drm/vc4/vc4_render_cl.c b/drivers/gpu/drm/vc4/vc4_render_cl.c
+index 1bda5010f15a86..ae4ad956f04ff8 100644
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -599,7 +599,7 @@ int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
+ bool has_bin = args->bin_cl_size != 0;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->min_x_tile > args->max_x_tile ||
+diff --git a/drivers/gpu/drm/vc4/vc4_v3d.c b/drivers/gpu/drm/vc4/vc4_v3d.c
+index 04ac7805e6d5fe..f703e6e9ace8a2 100644
+--- a/drivers/gpu/drm/vc4/vc4_v3d.c
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -127,7 +127,7 @@ static int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ int
+ vc4_v3d_pm_get(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ mutex_lock(&vc4->power_lock);
+@@ -148,7 +148,7 @@ vc4_v3d_pm_get(struct vc4_dev *vc4)
+ void
+ vc4_v3d_pm_put(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->power_lock);
+@@ -178,7 +178,7 @@ int vc4_v3d_get_bin_slot(struct vc4_dev *vc4)
+ uint64_t seqno = 0;
+ struct vc4_exec_info *exec;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ try_again:
+@@ -325,7 +325,7 @@ int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
+ {
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ mutex_lock(&vc4->bin_bo_lock);
+@@ -360,7 +360,7 @@ static void bin_bo_release(struct kref *ref)
+
+ void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->bin_bo_lock);
+diff --git a/drivers/gpu/drm/vc4/vc4_validate.c b/drivers/gpu/drm/vc4/vc4_validate.c
+index 7dff3ca5af6ba6..4f14cba6b46fb9 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -109,7 +109,7 @@ vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex)
+ struct drm_gem_dma_object *obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ if (hindex >= exec->bo_count) {
+@@ -169,7 +169,7 @@ vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_dma_object *fbo,
+ uint32_t utile_w = utile_width(cpp);
+ uint32_t utile_h = utile_height(cpp);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return false;
+
+ /* The shaded vertex format stores signed 12.4 fixed point
+@@ -495,7 +495,7 @@ vc4_validate_bin_cl(struct drm_device *dev,
+ uint32_t dst_offset = 0;
+ uint32_t src_offset = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ while (src_offset < len) {
+@@ -942,7 +942,7 @@ vc4_validate_shader_recs(struct drm_device *dev,
+ uint32_t i;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ for (i = 0; i < exec->shader_state_count; i++) {
+diff --git a/drivers/gpu/drm/vc4/vc4_validate_shaders.c b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+index 9745f8810eca6d..afb1a4d8268465 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate_shaders.c
++++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+@@ -786,7 +786,7 @@ vc4_validate_shader(struct drm_gem_dma_object *shader_obj)
+ struct vc4_validated_shader_info *validated_shader = NULL;
+ struct vc4_shader_validation_state validation_state;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ memset(&validation_state, 0, sizeof(validation_state));
+diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c
+index 5ce70dd946aa63..24589b947dea3d 100644
+--- a/drivers/gpu/drm/vkms/vkms_output.c
++++ b/drivers/gpu/drm/vkms/vkms_output.c
+@@ -84,7 +84,7 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ DRM_MODE_CONNECTOR_VIRTUAL);
+ if (ret) {
+ DRM_ERROR("Failed to init connector\n");
+- goto err_connector;
++ return ret;
+ }
+
+ drm_connector_helper_add(connector, &vkms_conn_helper_funcs);
+@@ -119,8 +119,5 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ err_encoder:
+ drm_connector_cleanup(connector);
+
+-err_connector:
+- drm_crtc_cleanup(crtc);
+-
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+index 0af667ebebf982..09b87d70e34c6d 100644
+--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
++++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+@@ -43,7 +43,7 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ struct xe_gsc *gsc = >->uc.gsc;
+ bool ret = true;
+
+- if (!gsc && !xe_uc_fw_is_enabled(&gsc->fw)) {
++ if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
+ drm_dbg_kms(&xe->drm,
+ "GSC Components not ready for HDCP2.x\n");
+ return false;
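
The one-character fix above repairs an inverted condition: with `&&`, a NULL gsc still evaluated the right-hand operand and dereferenced it, while a present-but-disabled firmware skipped the early return entirely. A standalone illustration of why `||` is the correct De Morgan form here:

    #include <stdio.h>
    #include <stdbool.h>

    struct fw  { bool enabled; };
    struct gsc { struct fw fw; };

    static bool fw_is_enabled(const struct fw *fw) { return fw->enabled; }

    static bool check_status(const struct gsc *gsc)
    {
        /* Buggy form: if (!gsc && !fw_is_enabled(&gsc->fw)) — a NULL gsc
         * would still be dereferenced by the second operand, and a
         * present-but-disabled fw would skip the early return. */
        if (!gsc || !fw_is_enabled(&gsc->fw)) {
            fprintf(stderr, "components not ready\n");
            return false;
        }
        return true;
    }

    int main(void)
    {
        struct gsc g = { .fw = { .enabled = false } };

        /* Both cases now correctly report "not ready" (prints "0 0"). */
        printf("%d %d\n", check_status(NULL), check_status(&g));
        return 0;
    }
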
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index 9d77f2d4096f59..6e17bace67dd13 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -85,8 +85,12 @@ static void user_fence_worker(struct work_struct *w)
+ mmput(ufence->mm);
+ }
+
+- wake_up_all(&ufence->xe->ufence_wq);
++ /*
++ * Wake up waiters only after updating the ufence state, allowing the UMD
++ * to safely reuse the same ufence without encountering -EBUSY errors.
++ */
+ WRITE_ONCE(ufence->signalled, 1);
++ wake_up_all(&ufence->xe->ufence_wq);
+ user_fence_put(ufence);
+ }
+
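
The hunk moves the wakeup after the WRITE_ONCE() so that a woken waiter re-checking the signalled flag observes the new value; waking first left a window in which the waiter saw the old state and returned -EBUSY. The same rule in user space — update the predicate before signalling the condition:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int signalled;

    static void *waiter(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!signalled)                 /* predicate re-checked after wakeup */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        puts("waiter saw signalled state");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, waiter, NULL);

        pthread_mutex_lock(&lock);
        signalled = 1;                     /* update state first ... */
        pthread_cond_broadcast(&cond);     /* ... then wake the waiters */
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        return 0;
    }
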
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 9368acf56eaf79..e4e0e299e8a7d5 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1200,6 +1200,9 @@ static void zynqmp_disp_layer_release_dma(struct zynqmp_disp *disp,
+ {
+ unsigned int i;
+
++ if (!layer->info)
++ return;
++
+ for (i = 0; i < layer->info->num_channels; i++) {
+ struct zynqmp_disp_layer_dma *dma = &layer->dmas[i];
+
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+index bd1368df787034..4556af2faa0f19 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+@@ -536,7 +536,7 @@ void zynqmp_dpsub_drm_cleanup(struct zynqmp_dpsub *dpsub)
+ {
+ struct drm_device *drm = &dpsub->drm->dev;
+
+- drm_dev_unregister(drm);
++ drm_dev_unplug(drm);
+ drm_atomic_helper_shutdown(drm);
+ drm_encoder_cleanup(&dpsub->drm->encoder);
+ drm_kms_helper_poll_fini(drm);
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index f33485d83d24ff..0fb210e40a4127 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -422,6 +422,25 @@ static int mousevsc_hid_raw_request(struct hid_device *hid,
+ return 0;
+ }
+
++static int mousevsc_hid_probe(struct hid_device *hid_dev, const struct hid_device_id *id)
++{
++ int ret;
++
++ ret = hid_parse(hid_dev);
++ if (ret) {
++ hid_err(hid_dev, "parse failed\n");
++ return ret;
++ }
++
++ ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
++ if (ret) {
++ hid_err(hid_dev, "hw start failed\n");
++ return ret;
++ }
++
++ return 0;
++}
++
+ static const struct hid_ll_driver mousevsc_ll_driver = {
+ .parse = mousevsc_hid_parse,
+ .open = mousevsc_hid_open,
+@@ -431,7 +450,16 @@ static const struct hid_ll_driver mousevsc_ll_driver = {
+ .raw_request = mousevsc_hid_raw_request,
+ };
+
+-static struct hid_driver mousevsc_hid_driver;
++static const struct hid_device_id mousevsc_devices[] = {
++ { HID_DEVICE(BUS_VIRTUAL, HID_GROUP_ANY, 0x045E, 0x0621) },
++ { }
++};
++
++static struct hid_driver mousevsc_hid_driver = {
++ .name = "hid-hyperv",
++ .id_table = mousevsc_devices,
++ .probe = mousevsc_hid_probe,
++};
+
+ static int mousevsc_probe(struct hv_device *device,
+ const struct hv_vmbus_device_id *dev_id)
+@@ -473,7 +501,6 @@ static int mousevsc_probe(struct hv_device *device,
+ }
+
+ hid_dev->ll_driver = &mousevsc_ll_driver;
+- hid_dev->driver = &mousevsc_hid_driver;
+ hid_dev->bus = BUS_VIRTUAL;
+ hid_dev->vendor = input_dev->hid_dev_info.vendor;
+ hid_dev->product = input_dev->hid_dev_info.product;
+@@ -488,20 +515,6 @@ static int mousevsc_probe(struct hv_device *device,
+ if (ret)
+ goto probe_err2;
+
+-
+- ret = hid_parse(hid_dev);
+- if (ret) {
+- hid_err(hid_dev, "parse failed\n");
+- goto probe_err2;
+- }
+-
+- ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
+-
+- if (ret) {
+- hid_err(hid_dev, "hw start failed\n");
+- goto probe_err2;
+- }
+-
+ device_init_wakeup(&device->device, true);
+
+ input_dev->connected = true;
+@@ -579,12 +592,23 @@ static struct hv_driver mousevsc_drv = {
+
+ static int __init mousevsc_init(void)
+ {
+- return vmbus_driver_register(&mousevsc_drv);
++ int ret;
++
++ ret = hid_register_driver(&mousevsc_hid_driver);
++ if (ret)
++ return ret;
++
++ ret = vmbus_driver_register(&mousevsc_drv);
++ if (ret)
++ hid_unregister_driver(&mousevsc_hid_driver);
++
++ return ret;
+ }
+
+ static void __exit mousevsc_exit(void)
+ {
+ vmbus_driver_unregister(&mousevsc_drv);
++ hid_unregister_driver(&mousevsc_hid_driver);
+ }
+
+ MODULE_LICENSE("GPL");
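
The init path now registers the HID driver first and the VMBus driver second, unwinding the former if the latter fails, with module exit tearing down in reverse order. A skeleton of that register-in-order/unwind-in-reverse pattern, with stub register_*/unregister_* functions standing in for the real registration calls:

    #include <stdio.h>

    /* Stand-ins for hid_register_driver()/vmbus_driver_register() etc. */
    static int  register_a(void)   { puts("A registered");   return 0; }
    static void unregister_a(void) { puts("A unregistered"); }
    static int  register_b(void)   { puts("B registered");   return 0; }
    static void unregister_b(void) { puts("B unregistered"); }

    static int module_init_example(void)
    {
        int ret = register_a();

        if (ret)
            return ret;

        ret = register_b();
        if (ret)
            unregister_a();   /* unwind the earlier step on failure */
        return ret;
    }

    static void module_exit_example(void)
    {
        unregister_b();       /* tear down in reverse order */
        unregister_a();
    }

    int main(void)
    {
        if (!module_init_example())
            module_exit_example();
        return 0;
    }
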
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 7a16c65f20148a..001d7f996dc85d 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1353,9 +1353,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ rotation -= 1800;
+
+ input_report_abs(pen_input, ABS_TILT_X,
+- (char)frame[7]);
++ (signed char)frame[7]);
+ input_report_abs(pen_input, ABS_TILT_Y,
+- (char)frame[8]);
++ (signed char)frame[8]);
+ input_report_abs(pen_input, ABS_Z, rotation);
+ input_report_abs(pen_input, ABS_WHEEL,
+ get_unaligned_le16(&frame[11]));
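
The cast change matters because the signedness of plain char is implementation-defined — it is unsigned on most ARM ABIs — so (char)frame[7] silently turned negative tilt values into large positive ones there. A small demonstration:

    #include <stdio.h>

    int main(void)
    {
        unsigned char raw = 0xF6;   /* e.g. a tilt of -10 on the wire */

        /* Implementation-defined: -10 where char is signed, 246 where
         * it is unsigned (the default on most ARM targets). */
        int as_char = (char)raw;

        /* Always -10: the sign extension is explicit. */
        int as_schar = (signed char)raw;

        printf("(char)0xF6 = %d, (signed char)0xF6 = %d\n",
               as_char, as_schar);
        return 0;
    }
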
+diff --git a/drivers/hwmon/aquacomputer_d5next.c b/drivers/hwmon/aquacomputer_d5next.c
+index 8e55cd2f46f53e..a72315a08f161c 100644
+--- a/drivers/hwmon/aquacomputer_d5next.c
++++ b/drivers/hwmon/aquacomputer_d5next.c
+@@ -597,7 +597,7 @@ struct aqc_data {
+
+ /* Sensor values */
+ s32 temp_input[20]; /* Max 4 physical and 16 virtual or 8 physical and 12 virtual */
+- s32 speed_input[8];
++ s32 speed_input[9];
+ u32 speed_input_min[1];
+ u32 speed_input_target[1];
+ u32 speed_input_max[1];
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index 934fed3dd58661..ee04795b98aabe 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -2878,8 +2878,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
+ if (err < 0)
+ return err;
+
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0,
+- data->target_temp_mask);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->target_temp_mask * 1000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->target_temp[nr] = val;
+@@ -2959,7 +2958,7 @@ store_temp_tolerance(struct device *dev, struct device_attribute *attr,
+ return err;
+
+ /* Limit tolerance as needed */
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, data->tolerance_mask);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->tolerance_mask * 1000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->temp_tolerance[index][nr] = val;
+@@ -3085,7 +3084,7 @@ store_weight_temp(struct device *dev, struct device_attribute *attr,
+ if (err < 0)
+ return err;
+
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->weight_temp[index][nr] = val;
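
Clamping the raw millidegree input before the rounding division bounds the value while it is still in range, so the addition inside DIV_ROUND_CLOSEST cannot wrap for extreme sysfs input. A sketch with the usual (x + d/2)/d rounding macro, assuming the value was parsed into an unsigned long as in the driver:

    #include <stdio.h>
    #include <limits.h>

    #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))
    #define CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

    int main(void)
    {
        unsigned long val  = ULONG_MAX;  /* hostile sysfs input */
        unsigned long mask = 127;        /* e.g. a target_temp_mask */

        /* Old order: the +500 inside the division wraps first, so the
         * huge input silently collapses to 0 instead of the maximum. */
        unsigned long buggy =
            CLAMP(DIV_ROUND_CLOSEST(val, 1000), 0UL, mask);

        /* Fixed order: bound the input while it is still representable. */
        unsigned long fixed =
            DIV_ROUND_CLOSEST(CLAMP(val, 0UL, mask * 1000), 1000);

        printf("buggy=%lu fixed=%lu\n", buggy, fixed); /* buggy=0 fixed=127 */
        return 0;
    }
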
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index e592446b26653e..019c5982ba564b 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -3199,7 +3199,17 @@ static int pmbus_regulator_notify(struct pmbus_data *data, int page, int event)
+
+ static int pmbus_write_smbalert_mask(struct i2c_client *client, u8 page, u8 reg, u8 val)
+ {
+- return _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
++ int ret;
++
++ ret = _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
++
++ /*
++ * Clear the fault unconditionally, in case writing PMBUS_SMBALERT_MASK
++ * is not supported by the chip.
++ */
++ pmbus_clear_fault_page(client, page);
++
++ return ret;
+ }
+
+ static irqreturn_t pmbus_fault_handler(int irq, void *pdata)
+diff --git a/drivers/hwmon/tps23861.c b/drivers/hwmon/tps23861.c
+index dfcfb09d9f3cdf..80fb03f30c302d 100644
+--- a/drivers/hwmon/tps23861.c
++++ b/drivers/hwmon/tps23861.c
+@@ -132,7 +132,7 @@ static int tps23861_read_temp(struct tps23861_data *data, long *val)
+ if (err < 0)
+ return err;
+
+- *val = (regval * TEMPERATURE_LSB) - 20000;
++ *val = ((long)regval * TEMPERATURE_LSB) - 20000;
+
+ return 0;
+ }
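
The (long) cast widens the arithmetic before the offset is subtracted; otherwise regval * TEMPERATURE_LSB stays unsigned int, and readings below the 20000 offset wrap to a huge positive value instead of going negative. A demonstration, assuming a small positive LSB constant of the same shape as the driver's:

    #include <stdio.h>

    #define TEMPERATURE_LSB 652   /* same shape as the driver constant */

    int main(void)
    {
        unsigned int regval = 10;  /* low reading: true temp is negative */

        /* Unsigned wrap: 6520 - 20000 becomes a huge positive number. */
        long buggy = (regval * TEMPERATURE_LSB) - 20000;

        /* Widen first so the subtraction happens in signed long. */
        long fixed = ((long)regval * TEMPERATURE_LSB) - 20000;

        printf("buggy=%ld fixed=%ld\n", buggy, fixed);
        return 0;
    }
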
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index f4fb212b7f3920..db5f1498e8690b 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -251,10 +251,8 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ return -EOPNOTSUPP;
+
+ data_ptrs = kmalloc_array(nmsgs, sizeof(u8 __user *), GFP_KERNEL);
+- if (data_ptrs == NULL) {
+- kfree(msgs);
++ if (!data_ptrs)
+ return -ENOMEM;
+- }
+
+ res = 0;
+ for (i = 0; i < nmsgs; i++) {
+@@ -302,7 +300,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ for (j = 0; j < i; ++j)
+ kfree(msgs[j].buf);
+ kfree(data_ptrs);
+- kfree(msgs);
+ return res;
+ }
+
+@@ -316,7 +313,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ kfree(msgs[i].buf);
+ }
+ kfree(data_ptrs);
+- kfree(msgs);
+ return res;
+ }
+
+@@ -446,6 +442,7 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case I2C_RDWR: {
+ struct i2c_rdwr_ioctl_data rdwr_arg;
+ struct i2c_msg *rdwr_pa;
++ int res;
+
+ if (copy_from_user(&rdwr_arg,
+ (struct i2c_rdwr_ioctl_data __user *)arg,
+@@ -467,7 +464,9 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ if (IS_ERR(rdwr_pa))
+ return PTR_ERR(rdwr_pa);
+
+- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ kfree(rdwr_pa);
++ return res;
+ }
+
+ case I2C_SMBUS: {
+@@ -540,7 +539,7 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ struct i2c_rdwr_ioctl_data32 rdwr_arg;
+ struct i2c_msg32 __user *p;
+ struct i2c_msg *rdwr_pa;
+- int i;
++ int i, res;
+
+ if (copy_from_user(&rdwr_arg,
+ (struct i2c_rdwr_ioctl_data32 __user *)arg,
+@@ -573,7 +572,9 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ };
+ }
+
+- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ kfree(rdwr_pa);
++ return res;
+ }
+ case I2C_SMBUS: {
+ struct i2c_smbus_ioctl_data32 data32;
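
Taken together, the i2c-dev hunks move ownership of the msgs array to the callers: i2cdev_ioctl_rdwr() no longer frees it on any path, and both ioctl handlers free it exactly once after the call returns. A reduced user-space sketch of the single-owner rule:

#include <stdlib.h>

struct msg { char *buf; };

/* After the patch, the callee only uses the array; it never frees it. */
static int consume(struct msg *msgs, int nmsgs)
{
	(void)msgs;
	return nmsgs ? 0 : -22;	/* -EINVAL as a stand-in */
}

int caller(int nmsgs)
{
	struct msg *msgs = calloc(nmsgs, sizeof(*msgs));
	int res;

	if (!msgs)
		return -12;	/* -ENOMEM */
	res = consume(msgs, nmsgs);
	free(msgs);		/* one owner, one free, on every path */
	return res;
}

int main(void)
{
	return caller(4) ? 1 : 0;
}
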
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 7028f03c2c42e2..82f031928e4130 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2039,11 +2039,16 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
+ ibireq.max_payload_len = olddev->ibi->max_payload_len;
+ ibireq.num_slots = olddev->ibi->num_slots;
+
+- if (olddev->ibi->enabled) {
++ if (olddev->ibi->enabled)
+ enable_ibi = true;
+- i3c_dev_disable_ibi_locked(olddev);
+- }
+-
++ /*
++ * The olddev must not receive any commands on the
++ * I3C bus: it has been assigned a new address, so a
++ * transfer to the old one would only end in a NACK
++ * or a timeout. Set olddev->ibi->enabled to false to
++ * avoid sending DISEC to OldAddr.
++ */
++ olddev->ibi->enabled = false;
+ i3c_dev_free_ibi_locked(olddev);
+ }
+ mutex_unlock(&olddev->ibi_lock);
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index 6d56428e623d82..91cb251f8e48d2 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -46,7 +46,7 @@
+ #define AXI_DAC_REG_CNTRL_1 0x0044
+ #define AXI_DAC_SYNC BIT(0)
+ #define AXI_DAC_REG_CNTRL_2 0x0048
+-#define ADI_DAC_R1_MODE BIT(4)
++#define ADI_DAC_R1_MODE BIT(5)
+ #define AXI_DAC_DRP_STATUS 0x0074
+ #define AXI_DAC_DRP_LOCKED BIT(17)
+ /* DAC Channel controls */
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 5f131bc1a01e97..4ad949672210ba 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -167,7 +167,7 @@ static int iio_gts_gain_cmp(const void *a, const void *b)
+
+ static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
+ {
+- int ret, i, j, new_idx, time_idx;
++ int i, j, new_idx, time_idx, ret = 0;
+ int *all_gains;
+ size_t gain_bytes;
+
+diff --git a/drivers/iio/light/al3010.c b/drivers/iio/light/al3010.c
+index 53569587ccb7ba..7cbb8b20330090 100644
+--- a/drivers/iio/light/al3010.c
++++ b/drivers/iio/light/al3010.c
+@@ -87,7 +87,12 @@ static int al3010_init(struct al3010_data *data)
+ int ret;
+
+ ret = al3010_set_pwr(data->client, true);
++ if (ret < 0)
++ return ret;
+
++ ret = devm_add_action_or_reset(&data->client->dev,
++ al3010_set_pwr_off,
++ data);
+ if (ret < 0)
+ return ret;
+
+@@ -190,12 +195,6 @@ static int al3010_probe(struct i2c_client *client)
+ return ret;
+ }
+
+- ret = devm_add_action_or_reset(&client->dev,
+- al3010_set_pwr_off,
+- data);
+- if (ret < 0)
+- return ret;
+-
+ return devm_iio_device_register(&client->dev, indio_dev);
+ }
+
+diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
+index 821d93c8f7123c..dfd2e5a86e6fe5 100644
+--- a/drivers/infiniband/core/uverbs.h
++++ b/drivers/infiniband/core/uverbs.h
+@@ -160,6 +160,8 @@ struct ib_uverbs_file {
+ struct page *disassociate_page;
+
+ struct xarray idr;
++
++ struct mutex disassociation_lock;
+ };
+
+ struct ib_uverbs_event {
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index bc099287de9a60..48b97e6c1fc6fa 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -76,6 +76,7 @@ static dev_t dynamic_uverbs_dev;
+ static DEFINE_IDA(uverbs_ida);
+ static int ib_uverbs_add_one(struct ib_device *device);
+ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data);
++static struct ib_client uverbs_client;
+
+ static char *uverbs_devnode(const struct device *dev, umode_t *mode)
+ {
+@@ -217,6 +218,7 @@ void ib_uverbs_release_file(struct kref *ref)
+
+ if (file->disassociate_page)
+ __free_pages(file->disassociate_page, 0);
++ mutex_destroy(&file->disassociation_lock);
+ mutex_destroy(&file->umap_lock);
+ mutex_destroy(&file->ucontext_lock);
+ kfree(file);
+@@ -700,8 +702,13 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
+ ret = PTR_ERR(ucontext);
+ goto out;
+ }
++
++ mutex_lock(&file->disassociation_lock);
++
+ vma->vm_ops = &rdma_umap_ops;
+ ret = ucontext->device->ops.mmap(ucontext, vma);
++
++ mutex_unlock(&file->disassociation_lock);
+ out:
+ srcu_read_unlock(&file->device->disassociate_srcu, srcu_key);
+ return ret;
+@@ -723,6 +730,8 @@ static void rdma_umap_open(struct vm_area_struct *vma)
+ /* We are racing with disassociation */
+ if (!down_read_trylock(&ufile->hw_destroy_rwsem))
+ goto out_zap;
++ mutex_lock(&ufile->disassociation_lock);
++
+ /*
+ * Disassociation already completed, the VMA should already be zapped.
+ */
+@@ -734,10 +743,12 @@ static void rdma_umap_open(struct vm_area_struct *vma)
+ goto out_unlock;
+ rdma_umap_priv_init(priv, vma, opriv->entry);
+
++ mutex_unlock(&ufile->disassociation_lock);
+ up_read(&ufile->hw_destroy_rwsem);
+ return;
+
+ out_unlock:
++ mutex_unlock(&ufile->disassociation_lock);
+ up_read(&ufile->hw_destroy_rwsem);
+ out_zap:
+ /*
+@@ -821,7 +832,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ {
+ struct rdma_umap_priv *priv, *next_priv;
+
+- lockdep_assert_held(&ufile->hw_destroy_rwsem);
++ mutex_lock(&ufile->disassociation_lock);
+
+ while (1) {
+ struct mm_struct *mm = NULL;
+@@ -847,8 +858,10 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ break;
+ }
+ mutex_unlock(&ufile->umap_lock);
+- if (!mm)
++ if (!mm) {
++ mutex_unlock(&ufile->disassociation_lock);
+ return;
++ }
+
+ /*
+ * The umap_lock is nested under mmap_lock since it used within
+@@ -878,7 +891,31 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ mmap_read_unlock(mm);
+ mmput(mm);
+ }
++
++ mutex_unlock(&ufile->disassociation_lock);
++}
++
++/**
++ * rdma_user_mmap_disassociate() - Revoke mmaps for a device
++ * @device: device to revoke
++ *
++ * This function should be called by drivers that need to disable mmaps for the
++ * device, for instance because it is going to be reset.
++ */
++void rdma_user_mmap_disassociate(struct ib_device *device)
++{
++ struct ib_uverbs_device *uverbs_dev =
++ ib_get_client_data(device, &uverbs_client);
++ struct ib_uverbs_file *ufile;
++
++ mutex_lock(&uverbs_dev->lists_mutex);
++ list_for_each_entry(ufile, &uverbs_dev->uverbs_file_list, list) {
++ if (ufile->ucontext)
++ uverbs_user_mmap_disassociate(ufile);
++ }
++ mutex_unlock(&uverbs_dev->lists_mutex);
+ }
++EXPORT_SYMBOL(rdma_user_mmap_disassociate);
+
+ /*
+ * ib_uverbs_open() does not need the BKL:
+@@ -949,6 +986,8 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
+ mutex_init(&file->umap_lock);
+ INIT_LIST_HEAD(&file->umaps);
+
++ mutex_init(&file->disassociation_lock);
++
+ filp->private_data = file;
+ list_add_tail(&file->list, &dev->uverbs_file_list);
+ mutex_unlock(&dev->lists_mutex);
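
The new disassociation_lock makes mmap setup and disassociation mutually exclusive, so a device reset cannot zap VMAs halfway through a mapping being installed. A pthread sketch of the scheme (illustrative only, not the uverbs code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t disassociation_lock = PTHREAD_MUTEX_INITIALIZER;

static void do_mmap_setup(void)
{
	pthread_mutex_lock(&disassociation_lock);
	puts("install vm_ops, call the driver mmap hook");
	pthread_mutex_unlock(&disassociation_lock);
}

static void disassociate_all(void)
{
	pthread_mutex_lock(&disassociation_lock);
	puts("walk the umap list and zap every VMA");
	pthread_mutex_unlock(&disassociation_lock);
}

int main(void)
{
	do_mmap_setup();	/* never interleaves with the zap below */
	disassociate_all();
	return 0;
}
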
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 982e85ba211bc6..1ec7c563a5e597 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -3584,7 +3584,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
+ wc->byte_len = orig_cqe->length;
+ wc->qp = &gsi_qp->ib_qp;
+
+- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
++ wc->ex.imm_data = cpu_to_be32(orig_cqe->immdata);
+ wc->src_qp = orig_cqe->src_qp;
+ memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+ if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
+@@ -3729,7 +3729,10 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
+ (unsigned long)(cqe->qp_handle),
+ struct bnxt_re_qp, qplib_qp);
+ wc->qp = &qp->ib_qp;
+- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
++ if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
++ wc->ex.imm_data = cpu_to_be32(cqe->immdata);
++ else
++ wc->ex.invalidate_rkey = cqe->invrkey;
+ wc->src_qp = cqe->src_qp;
+ memcpy(wc->smac, cqe->smac, ETH_ALEN);
+ wc->port_num = 1;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 389862df818d99..010b07e87acb88 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -390,7 +390,7 @@ struct bnxt_qplib_cqe {
+ u16 cfa_meta;
+ u64 wr_id;
+ union {
+- __le32 immdata;
++ u32 immdata;
+ u32 invrkey;
+ };
+ u64 qp_handle;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 4ec66611a14340..4106423a1b399d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -179,8 +179,8 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+ ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_CQC,
+ hr_cq->cqn);
+ if (ret)
+- dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret,
+- hr_cq->cqn);
++ dev_err_ratelimited(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n",
++ ret, hr_cq->cqn);
+
+ xa_erase_irq(&cq_table->array, hr_cq->cqn);
+
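
Switching to dev_err_ratelimited() matters on teardown paths that can fail once per object: a single misbehaving device could otherwise flood the log with thousands of identical lines. A rough user-space stand-in for the rate-limited printer (the kernel's implementation differs):

#include <stdio.h>
#include <time.h>

/* At most 10 messages per one-second window; the rest are dropped. */
static void err_ratelimited(const char *msg)
{
	static time_t window;
	static int count;
	time_t now = time(NULL);

	if (now != window) {	/* a new window opens, reset the budget */
		window = now;
		count = 0;
	}
	if (count++ < 10)
		fprintf(stderr, "%s\n", msg);
}

int main(void)
{
	for (int i = 0; i < 1000; i++)	/* a storm of teardown failures */
		err_ratelimited("DESTROY_CQ failed");
	return 0;
}
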
+diff --git a/drivers/infiniband/hw/hns/hns_roce_debugfs.c b/drivers/infiniband/hw/hns/hns_roce_debugfs.c
+index e8febb40f6450c..b869cdc5411893 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_debugfs.c
++++ b/drivers/infiniband/hw/hns/hns_roce_debugfs.c
+@@ -5,6 +5,7 @@
+
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
++#include <linux/pci.h>
+
+ #include "hns_roce_device.h"
+
+@@ -86,7 +87,7 @@ void hns_roce_register_debugfs(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_dev_debugfs *dbgfs = &hr_dev->dbgfs;
+
+- dbgfs->root = debugfs_create_dir(dev_name(&hr_dev->ib_dev.dev),
++ dbgfs->root = debugfs_create_dir(pci_name(hr_dev->pci_dev),
+ hns_roce_dbgfs_root);
+
+ create_sw_stat_debugfs(hr_dev, dbgfs->root);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 0b1e21cb6d2d38..560a1d9de408ff 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -489,12 +489,6 @@ struct hns_roce_bank {
+ u32 next; /* Next ID to allocate. */
+ };
+
+-struct hns_roce_idx_table {
+- u32 *spare_idx;
+- u32 head;
+- u32 tail;
+-};
+-
+ struct hns_roce_qp_table {
+ struct hns_roce_hem_table qp_table;
+ struct hns_roce_hem_table irrl_table;
+@@ -503,7 +497,7 @@ struct hns_roce_qp_table {
+ struct mutex scc_mutex;
+ struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM];
+ struct mutex bank_mutex;
+- struct hns_roce_idx_table idx_table;
++ struct xarray dip_xa;
+ };
+
+ struct hns_roce_cq_table {
+@@ -593,6 +587,7 @@ struct hns_roce_dev;
+
+ enum {
+ HNS_ROCE_FLUSH_FLAG = 0,
++ HNS_ROCE_STOP_FLUSH_FLAG = 1,
+ };
+
+ struct hns_roce_work {
+@@ -656,6 +651,8 @@ struct hns_roce_qp {
+ enum hns_roce_cong_type cong_type;
+ u8 tc_mode;
+ u8 priority;
++ spinlock_t flush_lock;
++ struct hns_roce_dip *dip;
+ };
+
+ struct hns_roce_ib_iboe {
+@@ -982,8 +979,6 @@ struct hns_roce_dev {
+ enum hns_roce_device_state state;
+ struct list_head qp_list; /* list of all qps on this dev */
+ spinlock_t qp_list_lock; /* protect qp_list */
+- struct list_head dip_list; /* list of all dest ips on this dev */
+- spinlock_t dip_list_lock; /* protect dip_list */
+
+ struct list_head pgdir_list;
+ struct mutex pgdir_mutex;
+@@ -1289,6 +1284,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn);
+ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type);
+ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp);
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type);
++void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn);
+ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type);
+ void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev);
+ int hns_roce_init(struct hns_roce_dev *hr_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index c7c167e2a04513..f84521be3bea4a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -300,7 +300,7 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ unsigned long mhop_obj = obj;
+ u32 l0_idx, l1_idx, l2_idx;
+ u32 chunk_ba_num;
+@@ -331,14 +331,14 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ index->buf = l0_idx;
+ break;
+ default:
+- ibdev_err(ibdev, "table %u not support mhop.hop_num = %u!\n",
+- table->type, mhop->hop_num);
++ dev_err(dev, "table %u not support mhop.hop_num = %u!\n",
++ table->type, mhop->hop_num);
+ return -EINVAL;
+ }
+
+ if (unlikely(index->buf >= table->num_hem)) {
+- ibdev_err(ibdev, "table %u exceed hem limt idx %llu, max %lu!\n",
+- table->type, index->buf, table->num_hem);
++ dev_err(dev, "table %u exceed hem limt idx %llu, max %lu!\n",
++ table->type, index->buf, table->num_hem);
+ return -EINVAL;
+ }
+
+@@ -448,14 +448,14 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ u32 step_idx;
+ int ret = 0;
+
+ if (index->inited & HEM_INDEX_L0) {
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, 0);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM step 0 failed!\n");
++ dev_err(dev, "set HEM step 0 failed!\n");
+ goto out;
+ }
+ }
+@@ -463,7 +463,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ if (index->inited & HEM_INDEX_L1) {
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, 1);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM step 1 failed!\n");
++ dev_err(dev, "set HEM step 1 failed!\n");
+ goto out;
+ }
+ }
+@@ -475,7 +475,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ step_idx = mhop->hop_num;
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, step_idx);
+ if (ret)
+- ibdev_err(ibdev, "set HEM step last failed!\n");
++ dev_err(dev, "set HEM step last failed!\n");
+ }
+ out:
+ return ret;
+@@ -485,14 +485,14 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_table *table,
+ unsigned long obj)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_hem_index index = {};
+ struct hns_roce_hem_mhop mhop = {};
++ struct device *dev = hr_dev->dev;
+ int ret;
+
+ ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "calc hem config failed!\n");
++ dev_err(dev, "calc hem config failed!\n");
+ return ret;
+ }
+
+@@ -504,7 +504,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+
+ ret = alloc_mhop_hem(hr_dev, table, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "alloc mhop hem failed!\n");
++ dev_err(dev, "alloc mhop hem failed!\n");
+ goto out;
+ }
+
+@@ -512,7 +512,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ if (table->type < HEM_TYPE_MTT) {
+ ret = set_mhop_hem(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM address to HW failed!\n");
++ dev_err(dev, "set HEM address to HW failed!\n");
+ goto err_alloc;
+ }
+ }
+@@ -575,7 +575,7 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ u32 hop_num = mhop->hop_num;
+ u32 chunk_ba_num;
+ u32 step_idx;
+@@ -605,21 +605,21 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, step_idx);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear hop%u HEM, ret = %d.\n",
+- hop_num, ret);
++ dev_warn(dev, "failed to clear hop%u HEM, ret = %d.\n",
++ hop_num, ret);
+
+ if (index->inited & HEM_INDEX_L1) {
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 1);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear HEM step 1, ret = %d.\n",
+- ret);
++ dev_warn(dev, "failed to clear HEM step 1, ret = %d.\n",
++ ret);
+ }
+
+ if (index->inited & HEM_INDEX_L0) {
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 0);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear HEM step 0, ret = %d.\n",
+- ret);
++ dev_warn(dev, "failed to clear HEM step 0, ret = %d.\n",
++ ret);
+ }
+ }
+ }
+@@ -629,14 +629,14 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev,
+ unsigned long obj,
+ int check_refcount)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_hem_index index = {};
+ struct hns_roce_hem_mhop mhop = {};
++ struct device *dev = hr_dev->dev;
+ int ret;
+
+ ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "calc hem config failed!\n");
++ dev_err(dev, "calc hem config failed!\n");
+ return;
+ }
+
+@@ -672,8 +672,8 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
+
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
+ if (ret)
+- dev_warn(dev, "failed to clear HEM base address, ret = %d.\n",
+- ret);
++ dev_warn_ratelimited(dev, "failed to clear HEM base address, ret = %d.\n",
++ ret);
+
+ hns_roce_free_hem(hr_dev, table->hem[i]);
+ table->hem[i] = NULL;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 24e906b9d3ae13..697b17cca02e71 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -373,19 +373,12 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
+ static int check_send_valid(struct hns_roce_dev *hr_dev,
+ struct hns_roce_qp *hr_qp)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+-
+ if (unlikely(hr_qp->state == IB_QPS_RESET ||
+ hr_qp->state == IB_QPS_INIT ||
+- hr_qp->state == IB_QPS_RTR)) {
+- ibdev_err(ibdev, "failed to post WQE, QP state %u!\n",
+- hr_qp->state);
++ hr_qp->state == IB_QPS_RTR))
+ return -EINVAL;
+- } else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) {
+- ibdev_err(ibdev, "failed to post WQE, dev state %d!\n",
+- hr_dev->state);
++ else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN))
+ return -EIO;
+- }
+
+ return 0;
+ }
+@@ -582,7 +575,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
+ if (WARN_ON(ret))
+ return ret;
+
+- hr_reg_write(rc_sq_wqe, RC_SEND_WQE_FENCE,
++ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
+ (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
+
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SE,
+@@ -2560,20 +2553,19 @@ static void hns_roce_free_link_table(struct hns_roce_dev *hr_dev)
+ free_link_table_buf(hr_dev, &priv->ext_llm);
+ }
+
+-static void free_dip_list(struct hns_roce_dev *hr_dev)
++static void free_dip_entry(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_dip *hr_dip;
+- struct hns_roce_dip *tmp;
+- unsigned long flags;
++ unsigned long idx;
+
+- spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
++ xa_lock(&hr_dev->qp_table.dip_xa);
+
+- list_for_each_entry_safe(hr_dip, tmp, &hr_dev->dip_list, node) {
+- list_del(&hr_dip->node);
++ xa_for_each(&hr_dev->qp_table.dip_xa, idx, hr_dip) {
++ __xa_erase(&hr_dev->qp_table.dip_xa, hr_dip->dip_idx);
+ kfree(hr_dip);
+ }
+
+- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
++ xa_unlock(&hr_dev->qp_table.dip_xa);
+ }
+
+ static struct ib_pd *free_mr_init_pd(struct hns_roce_dev *hr_dev)
+@@ -2775,8 +2767,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
+ ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
+ IB_QPS_INIT, NULL);
+ if (ret) {
+- ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev, "failed to modify qp to init, ret = %d.\n",
++ ret);
+ return ret;
+ }
+
+@@ -2981,7 +2973,7 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
+ hns_roce_free_link_table(hr_dev);
+
+ if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09)
+- free_dip_list(hr_dev);
++ free_dip_entry(hr_dev);
+ }
+
+ static int hns_roce_mbox_post(struct hns_roce_dev *hr_dev,
+@@ -3421,8 +3413,8 @@ static int free_mr_post_send_lp_wqe(struct hns_roce_qp *hr_qp)
+
+ ret = hns_roce_v2_post_send(&hr_qp->ibqp, send_wr, &bad_wr);
+ if (ret) {
+- ibdev_err(ibdev, "failed to post wqe for free mr, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev, "failed to post wqe for free mr, ret = %d.\n",
++ ret);
+ return ret;
+ }
+
+@@ -3461,9 +3453,9 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
+
+ ret = free_mr_post_send_lp_wqe(hr_qp);
+ if (ret) {
+- ibdev_err(ibdev,
+- "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
+- hr_qp->qpn, ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
++ hr_qp->qpn, ret);
+ break;
+ }
+
+@@ -3474,16 +3466,16 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
+ while (cqe_cnt) {
+ npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc);
+ if (npolled < 0) {
+- ibdev_err(ibdev,
+- "failed to poll cqe for free mr, remain %d cqe.\n",
+- cqe_cnt);
++ ibdev_err_ratelimited(ibdev,
++ "failed to poll cqe for free mr, remain %d cqe.\n",
++ cqe_cnt);
+ goto out;
+ }
+
+ if (time_after(jiffies, end)) {
+- ibdev_err(ibdev,
+- "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
+- cqe_cnt);
++ ibdev_err_ratelimited(ibdev,
++ "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
++ cqe_cnt);
+ goto out;
+ }
+ cqe_cnt -= npolled;
+@@ -4701,26 +4693,49 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp, int attr_mask,
+ return 0;
+ }
+
++static int alloc_dip_entry(struct xarray *dip_xa, u32 qpn)
++{
++ struct hns_roce_dip *hr_dip;
++ int ret;
++
++ hr_dip = xa_load(dip_xa, qpn);
++ if (hr_dip)
++ return 0;
++
++ hr_dip = kzalloc(sizeof(*hr_dip), GFP_KERNEL);
++ if (!hr_dip)
++ return -ENOMEM;
++
++ ret = xa_err(xa_store(dip_xa, qpn, hr_dip, GFP_KERNEL));
++ if (ret)
++ kfree(hr_dip);
++
++ return ret;
++}
++
+ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ u32 *dip_idx)
+ {
+ const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+- u32 *spare_idx = hr_dev->qp_table.idx_table.spare_idx;
+- u32 *head = &hr_dev->qp_table.idx_table.head;
+- u32 *tail = &hr_dev->qp_table.idx_table.tail;
++ struct xarray *dip_xa = &hr_dev->qp_table.dip_xa;
++ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+ struct hns_roce_dip *hr_dip;
+- unsigned long flags;
++ unsigned long idx;
+ int ret = 0;
+
+- spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
++ ret = alloc_dip_entry(dip_xa, ibqp->qp_num);
++ if (ret)
++ return ret;
+
+- spare_idx[*tail] = ibqp->qp_num;
+- *tail = (*tail == hr_dev->caps.num_qps - 1) ? 0 : (*tail + 1);
++ xa_lock(dip_xa);
+
+- list_for_each_entry(hr_dip, &hr_dev->dip_list, node) {
+- if (!memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
++ xa_for_each(dip_xa, idx, hr_dip) {
++ if (hr_dip->qp_cnt &&
++ !memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
+ *dip_idx = hr_dip->dip_idx;
++ hr_dip->qp_cnt++;
++ hr_qp->dip = hr_dip;
+ goto out;
+ }
+ }
+@@ -4728,19 +4743,24 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ /* If no dgid is found, a new dip and a mapping between dgid and
+ * dip_idx will be created.
+ */
+- hr_dip = kzalloc(sizeof(*hr_dip), GFP_ATOMIC);
+- if (!hr_dip) {
+- ret = -ENOMEM;
+- goto out;
++ xa_for_each(dip_xa, idx, hr_dip) {
++ if (hr_dip->qp_cnt)
++ continue;
++
++ *dip_idx = idx;
++ memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
++ hr_dip->dip_idx = idx;
++ hr_dip->qp_cnt++;
++ hr_qp->dip = hr_dip;
++ break;
+ }
+
+- memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
+- hr_dip->dip_idx = *dip_idx = spare_idx[*head];
+- *head = (*head == hr_dev->caps.num_qps - 1) ? 0 : (*head + 1);
+- list_add_tail(&hr_dip->node, &hr_dev->dip_list);
++ /* This should never happen. */
++ if (WARN_ON_ONCE(!hr_qp->dip))
++ ret = -ENOSPC;
+
+ out:
+- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
++ xa_unlock(dip_xa);
+ return ret;
+ }
+
+@@ -5061,10 +5081,8 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp,
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ int ret = 0;
+
+- if (!check_qp_state(cur_state, new_state)) {
+- ibdev_err(&hr_dev->ib_dev, "Illegal state for QP!\n");
++ if (!check_qp_state(cur_state, new_state))
+ return -EINVAL;
+- }
+
+ if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
+ memset(qpc_mask, 0, hr_dev->caps.qpc_sz);
+@@ -5325,7 +5343,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ /* SW pass context to HW */
+ ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp);
+ if (ret) {
+- ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret);
++ ibdev_err_ratelimited(ibdev, "failed to modify QP, ret = %d.\n", ret);
+ goto out;
+ }
+
+@@ -5463,7 +5481,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+
+ ret = hns_roce_v2_query_qpc(hr_dev, hr_qp->qpn, &context);
+ if (ret) {
+- ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to query QPC, ret = %d.\n",
++ ret);
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -5471,7 +5491,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ state = hr_reg_read(&context, QPC_QP_ST);
+ tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state);
+ if (tmp_qp_state == -1) {
+- ibdev_err(ibdev, "Illegal ib_qp_state\n");
++ ibdev_err_ratelimited(ibdev, "Illegal ib_qp_state\n");
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -5564,9 +5584,9 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
+ hr_qp->state, IB_QPS_RESET, udata);
+ if (ret)
+- ibdev_err(ibdev,
+- "failed to modify QP to RST, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to modify QP to RST, ret = %d.\n",
++ ret);
+ }
+
+ send_cq = hr_qp->ibqp.send_cq ? to_hr_cq(hr_qp->ibqp.send_cq) : NULL;
+@@ -5594,17 +5614,41 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ return ret;
+ }
+
++static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev,
++ struct hns_roce_qp *hr_qp)
++{
++ struct hns_roce_dip *hr_dip = hr_qp->dip;
++
++ xa_lock(&hr_dev->qp_table.dip_xa);
++
++ hr_dip->qp_cnt--;
++ if (!hr_dip->qp_cnt)
++ memset(hr_dip->dgid, 0, GID_LEN_V2);
++
++ xa_unlock(&hr_dev->qp_table.dip_xa);
++}
++
+ int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ {
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
++ unsigned long flags;
+ int ret;
+
++ /* Make sure flush_cqe() has completed */
++ spin_lock_irqsave(&hr_qp->flush_lock, flags);
++ set_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag);
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
++ flush_work(&hr_qp->flush_work.work);
++
++ if (hr_qp->cong_type == CONG_TYPE_DIP)
++ put_dip_ctx_idx(hr_dev, hr_qp);
++
+ ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
+ if (ret)
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
+- hr_qp->qpn, ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
++ hr_qp->qpn, ret);
+
+ hns_roce_qp_destroy(hr_dev, hr_qp, udata);
+
+@@ -5898,9 +5942,9 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
+ HNS_ROCE_CMD_MODIFY_CQC, hr_cq->cqn);
+ hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ if (ret)
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to process cmd when modifying CQ, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to process cmd when modifying CQ, ret = %d.\n",
++ ret);
+
+ err_out:
+ if (ret)
+@@ -5924,9 +5968,9 @@ static int hns_roce_v2_query_cqc(struct hns_roce_dev *hr_dev, u32 cqn,
+ ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma,
+ HNS_ROCE_CMD_QUERY_CQC, cqn);
+ if (ret) {
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to process cmd when querying CQ, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to process cmd when querying CQ, ret = %d.\n",
++ ret);
+ goto err_mailbox;
+ }
+
+@@ -5967,11 +6011,10 @@ static int hns_roce_v2_query_mpt(struct hns_roce_dev *hr_dev, u32 key,
+ return ret;
+ }
+
+-static void hns_roce_irq_work_handle(struct work_struct *work)
++static void dump_aeqe_log(struct hns_roce_work *irq_work)
+ {
+- struct hns_roce_work *irq_work =
+- container_of(work, struct hns_roce_work, work);
+- struct ib_device *ibdev = &irq_work->hr_dev->ib_dev;
++ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
++ struct ib_device *ibdev = &hr_dev->ib_dev;
+
+ switch (irq_work->event_type) {
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+@@ -6015,6 +6058,8 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
+ case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+ ibdev_warn(ibdev, "DB overflow.\n");
+ break;
++ case HNS_ROCE_EVENT_TYPE_MB:
++ break;
+ case HNS_ROCE_EVENT_TYPE_FLR:
+ ibdev_warn(ibdev, "function level reset.\n");
+ break;
+@@ -6025,8 +6070,46 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
+ ibdev_err(ibdev, "invalid xrceth error.\n");
+ break;
+ default:
++ ibdev_info(ibdev, "Undefined event %d.\n",
++ irq_work->event_type);
+ break;
+ }
++}
++
++static void hns_roce_irq_work_handle(struct work_struct *work)
++{
++ struct hns_roce_work *irq_work =
++ container_of(work, struct hns_roce_work, work);
++ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
++ int event_type = irq_work->event_type;
++ u32 queue_num = irq_work->queue_num;
++
++ switch (event_type) {
++ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
++ case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
++ case HNS_ROCE_EVENT_TYPE_COMM_EST:
++ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
++ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
++ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
++ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
++ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
++ hns_roce_qp_event(hr_dev, queue_num, event_type);
++ break;
++ case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
++ case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
++ hns_roce_srq_event(hr_dev, queue_num, event_type);
++ break;
++ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
++ hns_roce_cq_event(hr_dev, queue_num, event_type);
++ break;
++ default:
++ break;
++ }
++
++ dump_aeqe_log(irq_work);
+
+ kfree(irq_work);
+ }
+@@ -6087,14 +6170,14 @@ static struct hns_roce_aeqe *next_aeqe_sw_v2(struct hns_roce_eq *eq)
+ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+ {
+- struct device *dev = hr_dev->dev;
+ struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq);
+ irqreturn_t aeqe_found = IRQ_NONE;
++ int num_aeqes = 0;
+ int event_type;
+ u32 queue_num;
+ int sub_type;
+
+- while (aeqe) {
++ while (aeqe && num_aeqes < HNS_AEQ_POLLING_BUDGET) {
+ /* Make sure we read AEQ entry after we have checked the
+ * ownership bit
+ */
+@@ -6105,25 +6188,12 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ queue_num = hr_reg_read(aeqe, AEQE_EVENT_QUEUE_NUM);
+
+ switch (event_type) {
+- case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+- case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
+- case HNS_ROCE_EVENT_TYPE_COMM_EST:
+- case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+- case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
+ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
+- hns_roce_qp_event(hr_dev, queue_num, event_type);
+- break;
+- case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
+- case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
+- hns_roce_srq_event(hr_dev, queue_num, event_type);
+- break;
+- case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+- case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+- hns_roce_cq_event(hr_dev, queue_num, event_type);
++ hns_roce_flush_cqe(hr_dev, queue_num);
+ break;
+ case HNS_ROCE_EVENT_TYPE_MB:
+ hns_roce_cmd_event(hr_dev,
+@@ -6131,12 +6201,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ aeqe->event.cmd.status,
+ le64_to_cpu(aeqe->event.cmd.out_param));
+ break;
+- case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+- case HNS_ROCE_EVENT_TYPE_FLR:
+- break;
+ default:
+- dev_err(dev, "unhandled event %d on EQ %d at idx %u.\n",
+- event_type, eq->eqn, eq->cons_index);
+ break;
+ }
+
+@@ -6150,6 +6215,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ hns_roce_v2_init_irq_work(hr_dev, eq, queue_num);
+
+ aeqe = next_aeqe_sw_v2(eq);
++ ++num_aeqes;
+ }
+
+ update_eq_db(eq);
+@@ -6699,6 +6765,9 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
+ int ret;
+ int i;
+
++ if (hr_dev->caps.aeqe_depth < HNS_AEQ_POLLING_BUDGET)
++ return -EINVAL;
++
+ other_num = hr_dev->caps.num_other_vectors;
+ comp_num = hr_dev->caps.num_comp_vectors;
+ aeq_num = hr_dev->caps.num_aeq_vectors;
+@@ -7017,6 +7086,7 @@ static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+
+ handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT;
+ }
++
+ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+ {
+ struct hns_roce_dev *hr_dev;
+@@ -7035,6 +7105,9 @@ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+
+ hr_dev->active = false;
+ hr_dev->dis_db = true;
++
++ rdma_user_mmap_disassociate(&hr_dev->ib_dev);
++
+ hr_dev->state = HNS_ROCE_DEVICE_STATE_RST_DOWN;
+
+ return 0;
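
The AEQ handler now consumes at most HNS_AEQ_POLLING_BUDGET entries per interrupt and defers the heavier QP/CQ/SRQ dispatch to the workqueue, so the consumer index is published before the ring can wrap. A compact sketch of bounded polling (field names are illustrative):

#define POLLING_BUDGET 64	/* must stay below the queue depth */

struct eq { unsigned int ci, pi, depth; };

/* Returns how many events were consumed this pass; the caller writes the
 * consumer index (ci) back to hardware before re-enabling the interrupt.
 */
unsigned int poll_eq(struct eq *eq)
{
	unsigned int handled = 0;

	while (eq->ci != eq->pi && handled < POLLING_BUDGET) {
		/* dispatch the event at ci % depth here */
		eq->ci++;
		handled++;
	}
	return handled;
}

int main(void)
{
	struct eq eq = { .ci = 0, .pi = 200, .depth = 256 };

	while (poll_eq(&eq))
		;	/* each pass consumes at most POLLING_BUDGET events */
	return eq.ci == eq.pi ? 0 : 1;
}
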
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index c65f68a14a2608..cbdbc9edbce6ec 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -85,6 +85,11 @@
+
+ #define HNS_ROCE_V2_TABLE_CHUNK_SIZE (1 << 18)
+
++/* The budget must be smaller than aeqe_depth to guarantee that we update
++ * the CI before we have polled all the entries in the EQ.
++ */
++#define HNS_AEQ_POLLING_BUDGET 64
++
+ enum {
+ HNS_ROCE_CMD_FLAG_IN = BIT(0),
+ HNS_ROCE_CMD_FLAG_OUT = BIT(1),
+@@ -919,6 +924,7 @@ struct hns_roce_v2_rc_send_wqe {
+ #define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7)
+ #define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8)
+ #define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9)
++#define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10)
+ #define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11)
+ #define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12)
+ #define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
+@@ -1342,7 +1348,7 @@ struct hns_roce_v2_priv {
+ struct hns_roce_dip {
+ u8 dgid[GID_LEN_V2];
+ u32 dip_idx;
+- struct list_head node; /* all dips are on a list */
++ u32 qp_cnt;
+ };
+
+ struct fmea_ram_ecc {
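
With the list gone, DIP entries become reusable slots keyed by index: a live entry with a matching dgid is shared and its qp_cnt bumped, and a slot whose qp_cnt has dropped to zero is recycled for a new dgid. A small array-backed sketch of that allocation scheme (the driver uses an xarray; names mirror the driver but the code is illustrative):

#include <string.h>

#define MAX_DIP 8
#define GID_LEN 16

struct dip {
	unsigned char dgid[GID_LEN];
	unsigned int qp_cnt;
};

static struct dip dip_table[MAX_DIP];

/* Return the index to program into the QP context, or -1 when full. */
int get_dip_idx(const unsigned char dgid[GID_LEN])
{
	int i;

	for (i = 0; i < MAX_DIP; i++)	/* reuse a live entry first */
		if (dip_table[i].qp_cnt &&
		    !memcmp(dip_table[i].dgid, dgid, GID_LEN)) {
			dip_table[i].qp_cnt++;
			return i;
		}
	for (i = 0; i < MAX_DIP; i++)	/* then recycle a free slot */
		if (!dip_table[i].qp_cnt) {
			memcpy(dip_table[i].dgid, dgid, GID_LEN);
			dip_table[i].qp_cnt = 1;
			return i;
		}
	return -1;
}

void put_dip_idx(int idx)
{
	if (--dip_table[idx].qp_cnt == 0)
		memset(dip_table[idx].dgid, 0, GID_LEN);
}

int main(void)
{
	unsigned char gid[GID_LEN] = { 1 };
	int a = get_dip_idx(gid);
	int b = get_dip_idx(gid);	/* same dgid reuses the same index */

	put_dip_idx(b);
	put_dip_idx(a);			/* slot is recycled once qp_cnt hits 0 */
	return a == b ? 0 : 1;
}
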
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 4cb0af73358708..ae24c81c9812d9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -466,6 +466,11 @@ static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
+ pgprot_t prot;
+ int ret;
+
++ if (hr_dev->dis_db) {
++ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]);
++ return -EPERM;
++ }
++
+ rdma_entry = rdma_user_mmap_entry_get_pgoff(uctx, vma->vm_pgoff);
+ if (!rdma_entry) {
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]);
+@@ -1130,8 +1135,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
+
+ INIT_LIST_HEAD(&hr_dev->qp_list);
+ spin_lock_init(&hr_dev->qp_list_lock);
+- INIT_LIST_HEAD(&hr_dev->dip_list);
+- spin_lock_init(&hr_dev->dip_list_lock);
+
+ ret = hns_roce_register_device(hr_dev);
+ if (ret)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 846da8c78b8b72..bf30b3a65a9ba9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -138,8 +138,8 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr
+ key_to_hw_index(mr->key) &
+ (hr_dev->caps.num_mtpts - 1));
+ if (ret)
+- ibdev_warn(ibdev, "failed to destroy mpt, ret = %d.\n",
+- ret);
++ ibdev_warn_ratelimited(ibdev, "failed to destroy mpt, ret = %d.\n",
++ ret);
+ }
+
+ free_mr_pbl(hr_dev, mr);
+@@ -435,15 +435,16 @@ static int hns_roce_set_page(struct ib_mr *ibmr, u64 addr)
+ }
+
+ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+- unsigned int *sg_offset)
++ unsigned int *sg_offset_p)
+ {
++ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device);
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_mr *mr = to_hr_mr(ibmr);
+ struct hns_roce_mtr *mtr = &mr->pbl_mtr;
+ int ret, sg_num = 0;
+
+- if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
++ if (!IS_ALIGNED(sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
+ ibmr->page_size < HNS_HW_PAGE_SIZE ||
+ ibmr->page_size > HNS_HW_MAX_PAGE_SIZE)
+ return sg_num;
+@@ -454,7 +455,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ if (!mr->page_list)
+ return sg_num;
+
+- sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
++ sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset_p, hns_roce_set_page);
+ if (sg_num < 1) {
+ ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
+ mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 6b03ba671ff8f3..9e2e76c5940636 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -39,6 +39,25 @@
+ #include "hns_roce_device.h"
+ #include "hns_roce_hem.h"
+
++static struct hns_roce_qp *hns_roce_qp_lookup(struct hns_roce_dev *hr_dev,
++ u32 qpn)
++{
++ struct device *dev = hr_dev->dev;
++ struct hns_roce_qp *qp;
++ unsigned long flags;
++
++ xa_lock_irqsave(&hr_dev->qp_table_xa, flags);
++ qp = __hns_roce_qp_lookup(hr_dev, qpn);
++ if (qp)
++ refcount_inc(&qp->refcount);
++ xa_unlock_irqrestore(&hr_dev->qp_table_xa, flags);
++
++ if (!qp)
++ dev_warn(dev, "async event for bogus QP %08x\n", qpn);
++
++ return qp;
++}
++
+ static void flush_work_handle(struct work_struct *work)
+ {
+ struct hns_roce_work *flush_work = container_of(work,
+@@ -71,11 +90,18 @@ static void flush_work_handle(struct work_struct *work)
+ void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ {
+ struct hns_roce_work *flush_work = &hr_qp->flush_work;
++ unsigned long flags;
++
++ spin_lock_irqsave(&hr_qp->flush_lock, flags);
++ /* Exit directly if destroy_qp() has already started */
++ if (test_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag)) {
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
++ return;
++ }
+
+- flush_work->hr_dev = hr_dev;
+- INIT_WORK(&flush_work->work, flush_work_handle);
+ refcount_inc(&hr_qp->refcount);
+ queue_work(hr_dev->irq_workq, &flush_work->work);
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
+ }
+
+ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
+@@ -95,31 +121,28 @@ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
+
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
+ {
+- struct device *dev = hr_dev->dev;
+ struct hns_roce_qp *qp;
+
+- xa_lock(&hr_dev->qp_table_xa);
+- qp = __hns_roce_qp_lookup(hr_dev, qpn);
+- if (qp)
+- refcount_inc(&qp->refcount);
+- xa_unlock(&hr_dev->qp_table_xa);
+-
+- if (!qp) {
+- dev_warn(dev, "async event for bogus QP %08x\n", qpn);
++ qp = hns_roce_qp_lookup(hr_dev, qpn);
++ if (!qp)
+ return;
+- }
+
+- if (event_type == HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION ||
+- event_type == HNS_ROCE_EVENT_TYPE_INVALID_XRCETH) {
+- qp->state = IB_QPS_ERR;
++ qp->event(qp, (enum hns_roce_event)event_type);
+
+- flush_cqe(hr_dev, qp);
+- }
++ if (refcount_dec_and_test(&qp->refcount))
++ complete(&qp->free);
++}
+
+- qp->event(qp, (enum hns_roce_event)event_type);
++void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn)
++{
++ struct hns_roce_qp *qp;
++
++ qp = hns_roce_qp_lookup(hr_dev, qpn);
++ if (!qp)
++ return;
++
++ qp->state = IB_QPS_ERR;
++ flush_cqe(hr_dev, qp);
+
+ if (refcount_dec_and_test(&qp->refcount))
+ complete(&qp->free);
+@@ -1124,6 +1147,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ struct ib_udata *udata,
+ struct hns_roce_qp *hr_qp)
+ {
++ struct hns_roce_work *flush_work = &hr_qp->flush_work;
+ struct hns_roce_ib_create_qp_resp resp = {};
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_ib_create_qp ucmd = {};
+@@ -1132,9 +1156,12 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ mutex_init(&hr_qp->mutex);
+ spin_lock_init(&hr_qp->sq.lock);
+ spin_lock_init(&hr_qp->rq.lock);
++ spin_lock_init(&hr_qp->flush_lock);
+
+ hr_qp->state = IB_QPS_RESET;
+ hr_qp->flush_flag = 0;
++ flush_work->hr_dev = hr_dev;
++ INIT_WORK(&flush_work->work, flush_work_handle);
+
+ if (init_attr->create_flags)
+ return -EOPNOTSUPP;
+@@ -1546,14 +1573,10 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ unsigned int reserved_from_bot;
+ unsigned int i;
+
+- qp_table->idx_table.spare_idx = kcalloc(hr_dev->caps.num_qps,
+- sizeof(u32), GFP_KERNEL);
+- if (!qp_table->idx_table.spare_idx)
+- return -ENOMEM;
+-
+ mutex_init(&qp_table->scc_mutex);
+ mutex_init(&qp_table->bank_mutex);
+ xa_init(&hr_dev->qp_table_xa);
++ xa_init(&qp_table->dip_xa);
+
+ reserved_from_bot = hr_dev->caps.reserved_qps;
+
+@@ -1578,7 +1601,7 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+
+ for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++)
+ ida_destroy(&hr_dev->qp_table.bank[i].ida);
++ xa_destroy(&hr_dev->qp_table.dip_xa);
+ mutex_destroy(&hr_dev->qp_table.bank_mutex);
+ mutex_destroy(&hr_dev->qp_table.scc_mutex);
+- kfree(hr_dev->qp_table.idx_table.spare_idx);
+ }
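
The flush_lock plus HNS_ROCE_STOP_FLUSH_FLAG form a shutdown handshake: destroy sets the flag under the lock and then flushes pending work, while init_flush_work() checks the flag under the same lock, so nothing new can be queued once teardown begins. A pthread sketch of the handshake (illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
static bool stop_flush;

/* Event path: queue flush work unless teardown has started. */
static void init_flush_work(void)
{
	pthread_mutex_lock(&flush_lock);
	if (stop_flush) {
		pthread_mutex_unlock(&flush_lock);
		return;
	}
	puts("flush work queued");	/* refcount + queue_work in the driver */
	pthread_mutex_unlock(&flush_lock);
}

/* Destroy path: after this, no new flush work can be queued. */
static void destroy_qp(void)
{
	pthread_mutex_lock(&flush_lock);
	stop_flush = true;
	pthread_mutex_unlock(&flush_lock);
	/* the driver then flush_work()s anything already queued */
}

int main(void)
{
	init_flush_work();	/* queued */
	destroy_qp();
	init_flush_work();	/* silently skipped */
	return 0;
}
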
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index c9b8233f4b0577..70c06ef65603d8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -151,8 +151,8 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
+ ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_SRQ,
+ srq->srqn);
+ if (ret)
+- dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
+- ret, srq->srqn);
++ dev_err_ratelimited(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
++ ret, srq->srqn);
+
+ xa_erase_irq(&srq_table->xa, srq->srqn);
+
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 5926fd07a60212..a7e5daeaacc7e6 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2968,7 +2968,6 @@ int mlx5_ib_dev_res_srq_init(struct mlx5_ib_dev *dev)
+ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_ib_resources *devr = &dev->devr;
+- int port;
+ int ret;
+
+ if (!MLX5_CAP_GEN(dev->mdev, xrc))
+@@ -2984,10 +2983,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ return ret;
+ }
+
+- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+- INIT_WORK(&devr->ports[port].pkey_change_work,
+- pkey_change_handler);
+-
+ mutex_init(&devr->cq_lock);
+ mutex_init(&devr->srq_lock);
+
+@@ -2997,16 +2992,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ static void mlx5_ib_dev_res_cleanup(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_ib_resources *devr = &dev->devr;
+- int port;
+-
+- /*
+- * Make sure no change P_Key work items are still executing.
+- *
+- * At this stage, the mlx5_ib_event should be unregistered
+- * and it ensures that no new works are added.
+- */
+- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+- cancel_work_sync(&devr->ports[port].pkey_change_work);
+
+ /* After s0/s1 init, they are not unset during the device lifetime. */
+ if (devr->s1) {
+@@ -4279,6 +4264,13 @@ static void mlx5_ib_stage_delay_drop_cleanup(struct mlx5_ib_dev *dev)
+
+ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
+ {
++ struct mlx5_ib_resources *devr = &dev->devr;
++ int port;
++
++ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
++ INIT_WORK(&devr->ports[port].pkey_change_work,
++ pkey_change_handler);
++
+ dev->mdev_events.notifier_call = mlx5_ib_event;
+ mlx5_notifier_register(dev->mdev, &dev->mdev_events);
+
+@@ -4289,8 +4281,14 @@ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
+
+ static void mlx5_ib_stage_dev_notifier_cleanup(struct mlx5_ib_dev *dev)
+ {
++ struct mlx5_ib_resources *devr = &dev->devr;
++ int port;
++
+ mlx5r_macsec_event_unregister(dev);
+ mlx5_notifier_unregister(dev->mdev, &dev->mdev_events);
++
++ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
++ cancel_work_sync(&devr->ports[port].pkey_change_work);
+ }
+
+ void __mlx5_ib_remove(struct mlx5_ib_dev *dev,
+@@ -4364,9 +4362,6 @@ static const struct mlx5_ib_profile pf_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
+ mlx5_ib_dev_res_init,
+ mlx5_ib_dev_res_cleanup),
+- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+- mlx5_ib_stage_dev_notifier_init,
+- mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_ODP,
+ mlx5_ib_odp_init_one,
+ mlx5_ib_odp_cleanup_one),
+@@ -4391,6 +4386,9 @@ static const struct mlx5_ib_profile pf_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
+ mlx5_ib_stage_ib_reg_init,
+ mlx5_ib_stage_ib_reg_cleanup),
++ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
++ mlx5_ib_stage_dev_notifier_init,
++ mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ mlx5_ib_stage_post_ib_reg_umr_init,
+ NULL),
+@@ -4427,9 +4425,6 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
+ mlx5_ib_dev_res_init,
+ mlx5_ib_dev_res_cleanup),
+- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+- mlx5_ib_stage_dev_notifier_init,
+- mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_COUNTERS,
+ mlx5_ib_counters_init,
+ mlx5_ib_counters_cleanup),
+@@ -4451,6 +4446,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
+ mlx5_ib_stage_ib_reg_init,
+ mlx5_ib_stage_ib_reg_cleanup),
++ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
++ mlx5_ib_stage_dev_notifier_init,
++ mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ mlx5_ib_stage_post_ib_reg_umr_init,
+ NULL),
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 85118b7cb63dbb..8e25afe36390a8 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -971,7 +971,6 @@ enum mlx5_ib_stages {
+ MLX5_IB_STAGE_QP,
+ MLX5_IB_STAGE_SRQ,
+ MLX5_IB_STAGE_DEVICE_RESOURCES,
+- MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ MLX5_IB_STAGE_ODP,
+ MLX5_IB_STAGE_COUNTERS,
+ MLX5_IB_STAGE_CONG_DEBUGFS,
+@@ -980,6 +979,7 @@ enum mlx5_ib_stages {
+ MLX5_IB_STAGE_PRE_IB_REG_UMR,
+ MLX5_IB_STAGE_WHITELIST_UID,
+ MLX5_IB_STAGE_IB_REG,
++ MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ MLX5_IB_STAGE_POST_IB_REG_UMR,
+ MLX5_IB_STAGE_DELAY_DROP,
+ MLX5_IB_STAGE_RESTRACK,
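
Moving the pkey_change_work INIT_WORK and cancel_work_sync into the notifier stage restores init/cleanup symmetry: the stage that registers the event handler owns the work items that handler can schedule, so cleanup can unregister first and then drain. A stubbed sketch of that ordering:

#include <stdio.h>

static void init_work_items(void)     { puts("INIT_WORK pkey_change_work"); }
static void register_notifier(void)   { puts("notifier registered"); }
static void unregister_notifier(void) { puts("notifier unregistered"); }
static void cancel_work_items(void)   { puts("cancel_work_sync"); }

void notifier_stage_init(void)
{
	init_work_items();	/* ready before the first event can fire */
	register_notifier();
}

void notifier_stage_cleanup(void)
{
	unregister_notifier();	/* no new work can be queued after this */
	cancel_work_items();	/* then drain whatever is still pending */
}

int main(void)
{
	notifier_stage_init();
	notifier_stage_cleanup();
	return 0;
}
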
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index d2f7b5195c19dd..91d329e903083c 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -775,6 +775,7 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask)
+ * Yield the processor
+ */
+ spin_lock_irqsave(&qp->state_lock, flags);
++ attr->cur_qp_state = qp_state(qp);
+ if (qp->attr.sq_draining) {
+ spin_unlock_irqrestore(&qp->state_lock, flags);
+ cond_resched();
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 479c07e6e4ed3e..87a02f0deb0001 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -663,10 +663,12 @@ int rxe_requester(struct rxe_qp *qp)
+ if (unlikely(qp_state(qp) == IB_QPS_ERR)) {
+ wqe = __req_next_wqe(qp);
+ spin_unlock_irqrestore(&qp->state_lock, flags);
+- if (wqe)
++ if (wqe) {
++ wqe->status = IB_WC_WR_FLUSH_ERR;
+ goto err;
+- else
++ } else {
+ goto exit;
++ }
+ }
+
+ if (unlikely(qp_state(qp) == IB_QPS_RESET)) {
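
Setting wqe->status before the goto matters because the shared error exit builds a completion entry from it; left at zero it would read back as success. A tiny sketch (the enum value mirrors IB_WC_WR_FLUSH_ERR but the code is illustrative):

#include <stdio.h>

enum wc_status { WC_SUCCESS = 0, WC_WR_FLUSH_ERR = 5 };

struct wqe { enum wc_status status; };

/* Stand-in for the shared error exit that reports wqe->status. */
static void complete_wqe(struct wqe *wqe)
{
	printf("completion status %d\n", wqe->status);
}

int main(void)
{
	struct wqe wqe = { 0 };

	wqe.status = WC_WR_FLUSH_ERR;	/* what the patch adds before the goto */
	complete_wqe(&wqe);
	return 0;
}
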
+diff --git a/drivers/input/misc/cs40l50-vibra.c b/drivers/input/misc/cs40l50-vibra.c
+index 03bdb7c26ec09f..dce3b0ec8cf368 100644
+--- a/drivers/input/misc/cs40l50-vibra.c
++++ b/drivers/input/misc/cs40l50-vibra.c
+@@ -334,11 +334,12 @@ static int cs40l50_add(struct input_dev *dev, struct ff_effect *effect,
+ work_data.custom_len = effect->u.periodic.custom_len;
+ work_data.vib = vib;
+ work_data.effect = effect;
+- INIT_WORK(&work_data.work, cs40l50_add_worker);
++ INIT_WORK_ONSTACK(&work_data.work, cs40l50_add_worker);
+
+ /* Push to the workqueue to serialize with playbacks */
+ queue_work(vib->vib_wq, &work_data.work);
+ flush_work(&work_data.work);
++ destroy_work_on_stack(&work_data.work);
+
+ kfree(work_data.custom_data);
+
+@@ -467,11 +468,12 @@ static int cs40l50_erase(struct input_dev *dev, int effect_id)
+ work_data.vib = vib;
+ work_data.effect = &dev->ff->effects[effect_id];
+
+- INIT_WORK(&work_data.work, cs40l50_erase_worker);
++ INIT_WORK_ONSTACK(&work_data.work, cs40l50_erase_worker);
+
+ /* Push to workqueue to serialize with playbacks */
+ queue_work(vib->vib_wq, &work_data.work);
+ flush_work(&work_data.work);
++ destroy_work_on_stack(&work_data.work);
+
+ return work_data.error;
+ }
+diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
+index f49a8e0cb03c06..adacd6f7d6a8f7 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.c
++++ b/drivers/interconnect/qcom/icc-rpmh.c
+@@ -311,6 +311,9 @@ int qcom_icc_rpmh_probe(struct platform_device *pdev)
+ }
+
+ qp->num_clks = devm_clk_bulk_get_all(qp->dev, &qp->clks);
++ if (qp->num_clks == -EPROBE_DEFER)
++ return dev_err_probe(dev, qp->num_clks, "Failed to get QoS clocks\n");
++
+ if (qp->num_clks < 0 || (!qp->num_clks && desc->qos_clks_required)) {
+ dev_info(dev, "Skipping QoS, failed to get clk: %d\n", qp->num_clks);
+ goto skip_qos_config;
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 2d5945c982bde5..5459f726fb29d6 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -45,7 +45,7 @@ extern enum io_pgtable_fmt amd_iommu_pgtable;
+ extern int amd_iommu_gpt_level;
+
+ /* Protection domain ops */
+-struct protection_domain *protection_domain_alloc(unsigned int type);
++struct protection_domain *protection_domain_alloc(unsigned int type, int nid);
+ void protection_domain_free(struct protection_domain *domain);
+ struct iommu_domain *amd_iommu_domain_alloc_sva(struct device *dev,
+ struct mm_struct *mm);
+@@ -143,19 +143,6 @@ static inline void *iommu_phys_to_virt(unsigned long paddr)
+ return phys_to_virt(__sme_clr(paddr));
+ }
+
+-static inline
+-void amd_iommu_domain_set_pt_root(struct protection_domain *domain, u64 root)
+-{
+- domain->iop.root = (u64 *)(root & PAGE_MASK);
+- domain->iop.mode = root & 7; /* lowest 3 bits encode pgtable mode */
+-}
+-
+-static inline
+-void amd_iommu_domain_clr_pt_root(struct protection_domain *domain)
+-{
+- amd_iommu_domain_set_pt_root(domain, 0);
+-}
+-
+ static inline int get_pci_sbdf_id(struct pci_dev *pdev)
+ {
+ int seg = pci_domain_nr(pdev->bus);
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 2b76b5dedc1d9b..caa469abf03272 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -527,7 +527,7 @@ struct amd_irte_ops;
+ #define AMD_IOMMU_FLAG_TRANS_PRE_ENABLED (1 << 0)
+
+ #define io_pgtable_to_data(x) \
+- container_of((x), struct amd_io_pgtable, iop)
++ container_of((x), struct amd_io_pgtable, pgtbl)
+
+ #define io_pgtable_ops_to_data(x) \
+ io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+@@ -548,7 +548,7 @@ struct gcr3_tbl_info {
+
+ struct amd_io_pgtable {
+ struct io_pgtable_cfg pgtbl_cfg;
+- struct io_pgtable iop;
++ struct io_pgtable pgtbl;
+ int mode;
+ u64 *root;
+ u64 *pgd; /* v2 pgtable pgd pointer */
+@@ -580,7 +580,6 @@ struct protection_domain {
+ struct amd_io_pgtable iop;
+ spinlock_t lock; /* mostly used to lock the page table*/
+ u16 id; /* the domain id written to the device table */
+- int nid; /* Node ID */
+ enum protection_domain_mode pd_mode; /* Track page table type */
+ bool dirty_tracking; /* dirty tracking is enabled in the domain */
+ unsigned dev_cnt; /* devices assigned to this domain */
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 05aed3cb46f1bf..667718db7ffde8 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -132,57 +132,42 @@ static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
+ }
+ }
+
+-void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
+- u64 *root, int mode)
+-{
+- u64 pt_root;
+-
+- /* lowest 3 bits encode pgtable mode */
+- pt_root = mode & 7;
+- pt_root |= (u64)root;
+-
+- amd_iommu_domain_set_pt_root(domain, pt_root);
+-}
+-
+ /*
+ * This function is used to add another level to an IO page table. Adding
+ * another level increases the size of the address space by 9 bits to a size up
+ * to 64 bits.
+ */
+-static bool increase_address_space(struct protection_domain *domain,
++static bool increase_address_space(struct amd_io_pgtable *pgtable,
+ unsigned long address,
+ gfp_t gfp)
+ {
++ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
++ struct protection_domain *domain =
++ container_of(pgtable, struct protection_domain, iop);
+ unsigned long flags;
+ bool ret = true;
+ u64 *pte;
+
+- pte = iommu_alloc_page_node(domain->nid, gfp);
++ pte = iommu_alloc_page_node(cfg->amd.nid, gfp);
+ if (!pte)
+ return false;
+
+ spin_lock_irqsave(&domain->lock, flags);
+
+- if (address <= PM_LEVEL_SIZE(domain->iop.mode))
++ if (address <= PM_LEVEL_SIZE(pgtable->mode))
+ goto out;
+
+ ret = false;
+- if (WARN_ON_ONCE(domain->iop.mode == PAGE_MODE_6_LEVEL))
++ if (WARN_ON_ONCE(pgtable->mode == PAGE_MODE_6_LEVEL))
+ goto out;
+
+- *pte = PM_LEVEL_PDE(domain->iop.mode, iommu_virt_to_phys(domain->iop.root));
++ *pte = PM_LEVEL_PDE(pgtable->mode, iommu_virt_to_phys(pgtable->root));
+
+- domain->iop.root = pte;
+- domain->iop.mode += 1;
++ pgtable->root = pte;
++ pgtable->mode += 1;
+ amd_iommu_update_and_flush_device_table(domain);
+ amd_iommu_domain_flush_complete(domain);
+
+- /*
+- * Device Table needs to be updated and flushed before the new root can
+- * be published.
+- */
+- amd_iommu_domain_set_pgtable(domain, pte, domain->iop.mode);
+-
+ pte = NULL;
+ ret = true;
+
+@@ -193,30 +178,31 @@ static bool increase_address_space(struct protection_domain *domain,
+ return ret;
+ }
+
+-static u64 *alloc_pte(struct protection_domain *domain,
++static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
+ unsigned long address,
+ unsigned long page_size,
+ u64 **pte_page,
+ gfp_t gfp,
+ bool *updated)
+ {
++ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+ int level, end_lvl;
+ u64 *pte, *page;
+
+ BUG_ON(!is_power_of_2(page_size));
+
+- while (address > PM_LEVEL_SIZE(domain->iop.mode)) {
++ while (address > PM_LEVEL_SIZE(pgtable->mode)) {
+ /*
+ * Return an error if there is no memory to update the
+ * page-table.
+ */
+- if (!increase_address_space(domain, address, gfp))
++ if (!increase_address_space(pgtable, address, gfp))
+ return NULL;
+ }
+
+
+- level = domain->iop.mode - 1;
+- pte = &domain->iop.root[PM_LEVEL_INDEX(level, address)];
++ level = pgtable->mode - 1;
++ pte = &pgtable->root[PM_LEVEL_INDEX(level, address)];
+ address = PAGE_SIZE_ALIGN(address, page_size);
+ end_lvl = PAGE_SIZE_LEVEL(page_size);
+
+@@ -251,7 +237,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
+
+ if (!IOMMU_PTE_PRESENT(__pte) ||
+ pte_level == PAGE_MODE_NONE) {
+- page = iommu_alloc_page_node(domain->nid, gfp);
++ page = iommu_alloc_page_node(cfg->amd.nid, gfp);
+
+ if (!page)
+ return NULL;
+@@ -365,7 +351,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ phys_addr_t paddr, size_t pgsize, size_t pgcount,
+ int prot, gfp_t gfp, size_t *mapped)
+ {
+- struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
++ struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+ LIST_HEAD(freelist);
+ bool updated = false;
+ u64 __pte, *pte;
+@@ -382,7 +368,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+
+ while (pgcount > 0) {
+ count = PAGE_SIZE_PTE_COUNT(pgsize);
+- pte = alloc_pte(dom, iova, pgsize, NULL, gfp, &updated);
++ pte = alloc_pte(pgtable, iova, pgsize, NULL, gfp, &updated);
+
+ ret = -ENOMEM;
+ if (!pte)
+@@ -419,6 +405,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+
+ out:
+ if (updated) {
++ struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
+ unsigned long flags;
+
+ spin_lock_irqsave(&dom->lock, flags);
+@@ -560,34 +547,25 @@ static int iommu_v1_read_and_clear_dirty(struct io_pgtable_ops *ops,
+ */
+ static void v1_free_pgtable(struct io_pgtable *iop)
+ {
+- struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+- struct protection_domain *dom;
++ struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, pgtbl);
+ LIST_HEAD(freelist);
+
+ if (pgtable->mode == PAGE_MODE_NONE)
+ return;
+
+- dom = container_of(pgtable, struct protection_domain, iop);
+-
+ /* Page-table is not visible to IOMMU anymore, so free it */
+ BUG_ON(pgtable->mode < PAGE_MODE_NONE ||
+ pgtable->mode > PAGE_MODE_6_LEVEL);
+
+ free_sub_pt(pgtable->root, pgtable->mode, &freelist);
+ iommu_put_pages_list(&freelist);
+-
+- /* Update data structure */
+- amd_iommu_domain_clr_pt_root(dom);
+-
+- /* Make changes visible to IOMMUs */
+- amd_iommu_domain_update(dom);
+ }
+
+ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+ {
+ struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+
+- pgtable->root = iommu_alloc_page(GFP_KERNEL);
++ pgtable->root = iommu_alloc_page_node(cfg->amd.nid, GFP_KERNEL);
+ if (!pgtable->root)
+ return NULL;
+ pgtable->mode = PAGE_MODE_3_LEVEL;
+@@ -597,12 +575,12 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
+ cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
+ cfg->tlb = &v1_flush_ops;
+
+- pgtable->iop.ops.map_pages = iommu_v1_map_pages;
+- pgtable->iop.ops.unmap_pages = iommu_v1_unmap_pages;
+- pgtable->iop.ops.iova_to_phys = iommu_v1_iova_to_phys;
+- pgtable->iop.ops.read_and_clear_dirty = iommu_v1_read_and_clear_dirty;
++ pgtable->pgtbl.ops.map_pages = iommu_v1_map_pages;
++ pgtable->pgtbl.ops.unmap_pages = iommu_v1_unmap_pages;
++ pgtable->pgtbl.ops.iova_to_phys = iommu_v1_iova_to_phys;
++ pgtable->pgtbl.ops.read_and_clear_dirty = iommu_v1_read_and_clear_dirty;
+
+- return &pgtable->iop;
++ return &pgtable->pgtbl;
+ }
+
+ struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns = {
+diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
+index f9227cbf75dfe0..2c5db78065113e 100644
+--- a/drivers/iommu/amd/io_pgtable_v2.c
++++ b/drivers/iommu/amd/io_pgtable_v2.c
+@@ -233,8 +233,8 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ phys_addr_t paddr, size_t pgsize, size_t pgcount,
+ int prot, gfp_t gfp, size_t *mapped)
+ {
+- struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
+- struct io_pgtable_cfg *cfg = &pdom->iop.iop.cfg;
++ struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
++ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+ u64 *pte;
+ unsigned long map_size;
+ unsigned long mapped_size = 0;
+@@ -251,7 +251,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+
+ while (mapped_size < size) {
+ map_size = get_alloc_page_size(pgsize);
+- pte = v2_alloc_pte(pdom->nid, pdom->iop.pgd,
++ pte = v2_alloc_pte(cfg->amd.nid, pgtable->pgd,
+ iova, map_size, gfp, &updated);
+ if (!pte) {
+ ret = -EINVAL;
+@@ -266,8 +266,14 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ }
+
+ out:
+- if (updated)
++ if (updated) {
++ struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
++ unsigned long flags;
++
++ spin_lock_irqsave(&pdom->lock, flags);
+ amd_iommu_domain_flush_pages(pdom, o_iova, size);
++ spin_unlock_irqrestore(&pdom->lock, flags);
++ }
+
+ if (mapped)
+ *mapped += mapped_size;
+@@ -281,7 +287,7 @@ static unsigned long iommu_v2_unmap_pages(struct io_pgtable_ops *ops,
+ struct iommu_iotlb_gather *gather)
+ {
+ struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+- struct io_pgtable_cfg *cfg = &pgtable->iop.cfg;
++ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+ unsigned long unmap_size;
+ unsigned long unmapped = 0;
+ size_t size = pgcount << __ffs(pgsize);
+@@ -346,7 +352,7 @@ static const struct iommu_flush_ops v2_flush_ops = {
+
+ static void v2_free_pgtable(struct io_pgtable *iop)
+ {
+- struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
++ struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, pgtbl);
+
+ if (!pgtable || !pgtable->pgd)
+ return;
+@@ -359,26 +365,25 @@ static void v2_free_pgtable(struct io_pgtable *iop)
+ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+ {
+ struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+- struct protection_domain *pdom = (struct protection_domain *)cookie;
+ int ias = IOMMU_IN_ADDR_BIT_SIZE;
+
+- pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_KERNEL);
++ pgtable->pgd = iommu_alloc_page_node(cfg->amd.nid, GFP_KERNEL);
+ if (!pgtable->pgd)
+ return NULL;
+
+ if (get_pgtable_level() == PAGE_MODE_5_LEVEL)
+ ias = 57;
+
+- pgtable->iop.ops.map_pages = iommu_v2_map_pages;
+- pgtable->iop.ops.unmap_pages = iommu_v2_unmap_pages;
+- pgtable->iop.ops.iova_to_phys = iommu_v2_iova_to_phys;
++ pgtable->pgtbl.ops.map_pages = iommu_v2_map_pages;
++ pgtable->pgtbl.ops.unmap_pages = iommu_v2_unmap_pages;
++ pgtable->pgtbl.ops.iova_to_phys = iommu_v2_iova_to_phys;
+
+ cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2,
+ cfg->ias = ias,
+ cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE,
+ cfg->tlb = &v2_flush_ops;
+
+- return &pgtable->iop;
++ return &pgtable->pgtbl;
+ }
+
+ struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns = {
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 1a61f14459e4fe..554179f7de29eb 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -2030,6 +2030,7 @@ static int do_attach(struct iommu_dev_data *dev_data,
+ struct protection_domain *domain)
+ {
+ struct amd_iommu *iommu = get_amd_iommu_from_dev_data(dev_data);
++ struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
+ int ret = 0;
+
+ /* Update data structures */
+@@ -2037,8 +2038,8 @@ static int do_attach(struct iommu_dev_data *dev_data,
+ list_add(&dev_data->list, &domain->dev_list);
+
+ /* Update NUMA Node ID */
+- if (domain->nid == NUMA_NO_NODE)
+- domain->nid = dev_to_node(dev_data->dev);
++ if (cfg->amd.nid == NUMA_NO_NODE)
++ cfg->amd.nid = dev_to_node(dev_data->dev);
+
+ /* Do reference counting */
+ domain->dev_iommu[iommu->index] += 1;
+@@ -2262,8 +2263,10 @@ void protection_domain_free(struct protection_domain *domain)
+ if (!domain)
+ return;
+
++ WARN_ON(!list_empty(&domain->dev_list));
++
+ if (domain->iop.pgtbl_cfg.tlb)
+- free_io_pgtable_ops(&domain->iop.iop.ops);
++ free_io_pgtable_ops(&domain->iop.pgtbl.ops);
+
+ if (domain->id)
+ domain_id_free(domain->id);
+@@ -2271,7 +2274,7 @@ void protection_domain_free(struct protection_domain *domain)
+ kfree(domain);
+ }
+
+-struct protection_domain *protection_domain_alloc(unsigned int type)
++struct protection_domain *protection_domain_alloc(unsigned int type, int nid)
+ {
+ struct io_pgtable_ops *pgtbl_ops;
+ struct protection_domain *domain;
+@@ -2288,7 +2291,7 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
+ spin_lock_init(&domain->lock);
+ INIT_LIST_HEAD(&domain->dev_list);
+ INIT_LIST_HEAD(&domain->dev_data_list);
+- domain->nid = NUMA_NO_NODE;
++ domain->iop.pgtbl.cfg.amd.nid = nid;
+
+ switch (type) {
+ /* No need to allocate io pgtable ops in passthrough mode */
+@@ -2364,14 +2367,15 @@ static struct iommu_domain *do_iommu_domain_alloc(unsigned int type,
+ if (dirty_tracking && !amd_iommu_hd_support(iommu))
+ return ERR_PTR(-EOPNOTSUPP);
+
+- domain = protection_domain_alloc(type);
++ domain = protection_domain_alloc(type,
++ dev ? dev_to_node(dev) : NUMA_NO_NODE);
+ if (!domain)
+ return ERR_PTR(-ENOMEM);
+
+ domain->domain.geometry.aperture_start = 0;
+ domain->domain.geometry.aperture_end = dma_max_address();
+ domain->domain.geometry.force_aperture = true;
+- domain->domain.pgsize_bitmap = domain->iop.iop.cfg.pgsize_bitmap;
++ domain->domain.pgsize_bitmap = domain->iop.pgtbl.cfg.pgsize_bitmap;
+
+ if (iommu) {
+ domain->domain.type = type;
+@@ -2492,7 +2496,7 @@ static int amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
+ unsigned long iova, size_t size)
+ {
+ struct protection_domain *domain = to_pdomain(dom);
+- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
++ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
+
+ if (ops->map_pages)
+ domain_flush_np_cache(domain, iova, size);
+@@ -2504,7 +2508,7 @@ static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
+ int iommu_prot, gfp_t gfp, size_t *mapped)
+ {
+ struct protection_domain *domain = to_pdomain(dom);
+- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
++ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
+ int prot = 0;
+ int ret = -EINVAL;
+
+@@ -2551,7 +2555,7 @@ static size_t amd_iommu_unmap_pages(struct iommu_domain *dom, unsigned long iova
+ struct iommu_iotlb_gather *gather)
+ {
+ struct protection_domain *domain = to_pdomain(dom);
+- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
++ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
+ size_t r;
+
+ if ((domain->pd_mode == PD_MODE_V1) &&
+@@ -2570,7 +2574,7 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
+ dma_addr_t iova)
+ {
+ struct protection_domain *domain = to_pdomain(dom);
+- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
++ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
+
+ return ops->iova_to_phys(ops, iova);
+ }
+@@ -2648,7 +2652,7 @@ static int amd_iommu_read_and_clear_dirty(struct iommu_domain *domain,
+ struct iommu_dirty_bitmap *dirty)
+ {
+ struct protection_domain *pdomain = to_pdomain(domain);
+- struct io_pgtable_ops *ops = &pdomain->iop.iop.ops;
++ struct io_pgtable_ops *ops = &pdomain->iop.pgtbl.ops;
+ unsigned long lflags;
+
+ if (!ops || !ops->read_and_clear_dirty)
+diff --git a/drivers/iommu/amd/pasid.c b/drivers/iommu/amd/pasid.c
+index a68215f2b3e1d6..0657b9373be547 100644
+--- a/drivers/iommu/amd/pasid.c
++++ b/drivers/iommu/amd/pasid.c
+@@ -181,7 +181,7 @@ struct iommu_domain *amd_iommu_domain_alloc_sva(struct device *dev,
+ struct protection_domain *pdom;
+ int ret;
+
+- pdom = protection_domain_alloc(IOMMU_DOMAIN_SVA);
++ pdom = protection_domain_alloc(IOMMU_DOMAIN_SVA, dev_to_node(dev));
+ if (!pdom)
+ return ERR_PTR(-ENOMEM);
+
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index dda6dea7cce099..cdb8701a968fdb 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -722,14 +722,15 @@ static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn,
+ while (1) {
+ offset = pfn_level_offset(pfn, level);
+ pte = &parent[offset];
+- if (!pte || (dma_pte_superpage(pte) || !dma_pte_present(pte))) {
+- pr_info("PTE not present at level %d\n", level);
+- break;
+- }
+
+ pr_info("pte level: %d, pte value: 0x%016llx\n", level, pte->val);
+
+- if (level == 1)
++ if (!dma_pte_present(pte)) {
++ pr_info("page table not present at level %d\n", level - 1);
++ break;
++ }
++
++ if (level == 1 || dma_pte_superpage(pte))
+ break;
+
+ parent = phys_to_virt(dma_pte_addr(pte));
+@@ -752,11 +753,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ pr_info("Dump %s table entries for IOVA 0x%llx\n", iommu->name, addr);
+
+ /* root entry dump */
+- rt_entry = &iommu->root_entry[bus];
+- if (!rt_entry) {
+- pr_info("root table entry is not present\n");
++ if (!iommu->root_entry) {
++ pr_info("root table is not present\n");
+ return;
+ }
++ rt_entry = &iommu->root_entry[bus];
+
+ if (sm_supported(iommu))
+ pr_info("scalable mode root entry: hi 0x%016llx, low 0x%016llx\n",
+@@ -767,7 +768,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ /* context entry dump */
+ ctx_entry = iommu_context_addr(iommu, bus, devfn, 0);
+ if (!ctx_entry) {
+- pr_info("context table entry is not present\n");
++ pr_info("context table is not present\n");
+ return;
+ }
+
+@@ -776,17 +777,23 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+
+ /* legacy mode does not require PASID entries */
+ if (!sm_supported(iommu)) {
++ if (!context_present(ctx_entry)) {
++ pr_info("legacy mode page table is not present\n");
++ return;
++ }
+ level = agaw_to_level(ctx_entry->hi & 7);
+ pgtable = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+ goto pgtable_walk;
+ }
+
+- /* get the pointer to pasid directory entry */
+- dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+- if (!dir) {
+- pr_info("pasid directory entry is not present\n");
++ if (!context_present(ctx_entry)) {
++ pr_info("pasid directory table is not present\n");
+ return;
+ }
++
++ /* get the pointer to pasid directory entry */
++ dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
++
+ /* For request-without-pasid, get the pasid from context entry */
+ if (intel_iommu_sm && pasid == IOMMU_PASID_INVALID)
+ pasid = IOMMU_NO_PASID;
+@@ -798,7 +805,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ /* get the pointer to the pasid table entry */
+ entries = get_pasid_table_from_pde(pde);
+ if (!entries) {
+- pr_info("pasid table entry is not present\n");
++ pr_info("pasid table is not present\n");
+ return;
+ }
+ index = pasid & PASID_PTE_MASK;
+@@ -806,6 +813,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ for (i = 0; i < ARRAY_SIZE(pte->val); i++)
+ pr_info("pasid table entry[%d]: 0x%016llx\n", i, pte->val[i]);
+
++ if (!pasid_pte_is_present(pte)) {
++ pr_info("scalable mode page table is not present\n");
++ return;
++ }
++
+ if (pasid_pte_get_pgtt(pte) == PASID_ENTRY_PGTT_FL_ONLY) {
+ level = pte->val[2] & BIT_ULL(2) ? 5 : 4;
+ pgtable = phys_to_virt(pte->val[2] & VTD_PAGE_MASK);
+diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
+index d8eaa7ea380bb0..fbdeded3d48b59 100644
+--- a/drivers/iommu/s390-iommu.c
++++ b/drivers/iommu/s390-iommu.c
+@@ -33,6 +33,8 @@ struct s390_domain {
+ struct rcu_head rcu;
+ };
+
++static struct iommu_domain blocking_domain;
++
+ static inline unsigned int calc_rtx(dma_addr_t ptr)
+ {
+ return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
+@@ -369,20 +371,36 @@ static void s390_domain_free(struct iommu_domain *domain)
+ call_rcu(&s390_domain->rcu, s390_iommu_rcu_free_domain);
+ }
+
+-static void s390_iommu_detach_device(struct iommu_domain *domain,
+- struct device *dev)
++static void zdev_s390_domain_update(struct zpci_dev *zdev,
++ struct iommu_domain *domain)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&zdev->dom_lock, flags);
++ zdev->s390_domain = domain;
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
++}
++
++static int blocking_domain_attach_device(struct iommu_domain *domain,
++ struct device *dev)
+ {
+- struct s390_domain *s390_domain = to_s390_domain(domain);
+ struct zpci_dev *zdev = to_zpci_dev(dev);
++ struct s390_domain *s390_domain;
+ unsigned long flags;
+
++ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
++ return 0;
++
++ s390_domain = to_s390_domain(zdev->s390_domain);
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_del_rcu(&zdev->iommu_list);
+ spin_unlock_irqrestore(&s390_domain->list_lock, flags);
+
+ zpci_unregister_ioat(zdev, 0);
+- zdev->s390_domain = NULL;
+ zdev->dma_table = NULL;
++ zdev_s390_domain_update(zdev, domain);
++
++ return 0;
+ }
+
+ static int s390_iommu_attach_device(struct iommu_domain *domain,
+@@ -401,20 +419,15 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
+ domain->geometry.aperture_end < zdev->start_dma))
+ return -EINVAL;
+
+- if (zdev->s390_domain)
+- s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
++ blocking_domain_attach_device(&blocking_domain, dev);
+
++ /* If we fail now DMA remains blocked via blocking domain */
+ cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
+ virt_to_phys(s390_domain->dma_table), &status);
+- /*
+- * If the device is undergoing error recovery the reset code
+- * will re-establish the new domain.
+- */
+ if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL)
+ return -EIO;
+-
+ zdev->dma_table = s390_domain->dma_table;
+- zdev->s390_domain = s390_domain;
++ zdev_s390_domain_update(zdev, domain);
+
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_add_rcu(&zdev->iommu_list, &s390_domain->devices);
+@@ -466,19 +479,11 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
+ if (zdev->tlb_refresh)
+ dev->iommu->shadow_on_flush = 1;
+
+- return &zdev->iommu_dev;
+-}
++ /* Start with DMA blocked */
++ spin_lock_init(&zdev->dom_lock);
++ zdev_s390_domain_update(zdev, &blocking_domain);
+
+-static void s390_iommu_release_device(struct device *dev)
+-{
+- struct zpci_dev *zdev = to_zpci_dev(dev);
+-
+- /*
+- * release_device is expected to detach any domain currently attached
+- * to the device, but keep it attached to other devices in the group.
+- */
+- if (zdev)
+- s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
++ return &zdev->iommu_dev;
+ }
+
+ static int zpci_refresh_all(struct zpci_dev *zdev)
+@@ -697,9 +702,15 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
+
+ struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
+ {
+- if (!zdev || !zdev->s390_domain)
++ struct s390_domain *s390_domain;
++
++ lockdep_assert_held(&zdev->dom_lock);
++
++ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
+ return NULL;
+- return &zdev->s390_domain->ctrs;
++
++ s390_domain = to_s390_domain(zdev->s390_domain);
++ return &s390_domain->ctrs;
+ }
+
+ int zpci_init_iommu(struct zpci_dev *zdev)
+@@ -776,11 +787,19 @@ static int __init s390_iommu_init(void)
+ }
+ subsys_initcall(s390_iommu_init);
+
++static struct iommu_domain blocking_domain = {
++ .type = IOMMU_DOMAIN_BLOCKED,
++ .ops = &(const struct iommu_domain_ops) {
++ .attach_dev = blocking_domain_attach_device,
++ }
++};
++
+ static const struct iommu_ops s390_iommu_ops = {
++ .blocked_domain = &blocking_domain,
++ .release_domain = &blocking_domain,
+ .capable = s390_iommu_capable,
+ .domain_alloc_paging = s390_domain_alloc_paging,
+ .probe_device = s390_iommu_probe_device,
+- .release_device = s390_iommu_release_device,
+ .device_group = generic_device_group,
+ .pgsize_bitmap = SZ_4K,
+ .get_resv_regions = s390_iommu_get_resv_regions,
+diff --git a/drivers/irqchip/irq-mvebu-sei.c b/drivers/irqchip/irq-mvebu-sei.c
+index f8c70f2d100a11..065166ab5dbc04 100644
+--- a/drivers/irqchip/irq-mvebu-sei.c
++++ b/drivers/irqchip/irq-mvebu-sei.c
+@@ -192,7 +192,6 @@ static void mvebu_sei_domain_free(struct irq_domain *domain, unsigned int virq,
+ }
+
+ static const struct irq_domain_ops mvebu_sei_domain_ops = {
+- .select = msi_lib_irq_domain_select,
+ .alloc = mvebu_sei_domain_alloc,
+ .free = mvebu_sei_domain_free,
+ };
+@@ -306,6 +305,7 @@ static void mvebu_sei_cp_domain_free(struct irq_domain *domain,
+ }
+
+ static const struct irq_domain_ops mvebu_sei_cp_domain_ops = {
++ .select = msi_lib_irq_domain_select,
+ .alloc = mvebu_sei_cp_domain_alloc,
+ .free = mvebu_sei_cp_domain_free,
+ };
+diff --git a/drivers/leds/flash/leds-ktd2692.c b/drivers/leds/flash/leds-ktd2692.c
+index 7bb0aa2753e365..0a8632b0da3048 100644
+--- a/drivers/leds/flash/leds-ktd2692.c
++++ b/drivers/leds/flash/leds-ktd2692.c
+@@ -293,6 +293,7 @@ static int ktd2692_probe(struct platform_device *pdev)
+
+ fled_cdev = &led->fled_cdev;
+ led_cdev = &fled_cdev->led_cdev;
++ led->props.timing = ktd2692_timing;
+
+ ret = ktd2692_parse_dt(led, &pdev->dev, &led_cfg);
+ if (ret)
+diff --git a/drivers/leds/leds-max5970.c b/drivers/leds/leds-max5970.c
+index 56a584311581af..285074c53b2344 100644
+--- a/drivers/leds/leds-max5970.c
++++ b/drivers/leds/leds-max5970.c
+@@ -45,7 +45,7 @@ static int max5970_led_set_brightness(struct led_classdev *cdev,
+
+ static int max5970_led_probe(struct platform_device *pdev)
+ {
+- struct fwnode_handle *led_node, *child;
++ struct fwnode_handle *child;
+ struct device *dev = &pdev->dev;
+ struct regmap *regmap;
+ struct max5970_led *ddata;
+@@ -55,7 +55,8 @@ static int max5970_led_probe(struct platform_device *pdev)
+ if (!regmap)
+ return -ENODEV;
+
+- led_node = device_get_named_child_node(dev->parent, "leds");
++ struct fwnode_handle *led_node __free(fwnode_handle) =
++ device_get_named_child_node(dev->parent, "leds");
+ if (!led_node)
+ return -ENODEV;
+
+diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c
+index 0ec21dcdbde723..cff7c343ee082a 100644
+--- a/drivers/mailbox/arm_mhuv2.c
++++ b/drivers/mailbox/arm_mhuv2.c
+@@ -500,7 +500,7 @@ static const struct mhuv2_protocol_ops mhuv2_data_transfer_ops = {
+ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
+ {
+ struct mbox_chan *chans = mhu->mbox.chans;
+- int channel = 0, i, offset = 0, windows, protocol, ch_wn;
++ int channel = 0, i, j, offset = 0, windows, protocol, ch_wn;
+ u32 stat;
+
+ for (i = 0; i < MHUV2_CMB_INT_ST_REG_CNT; i++) {
+@@ -510,9 +510,9 @@ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
+
+ ch_wn = i * MHUV2_STAT_BITS + __builtin_ctz(stat);
+
+- for (i = 0; i < mhu->length; i += 2) {
+- protocol = mhu->protocols[i];
+- windows = mhu->protocols[i + 1];
++ for (j = 0; j < mhu->length; j += 2) {
++ protocol = mhu->protocols[j];
++ windows = mhu->protocols[j + 1];
+
+ if (ch_wn >= offset + windows) {
+ if (protocol == DOORBELL)
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 4bff73532085bd..9c43ed9bdd37b5 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -584,7 +584,7 @@ static int cmdq_get_clocks(struct device *dev, struct cmdq *cmdq)
+ struct clk_bulk_data *clks;
+
+ cmdq->clocks = devm_kcalloc(dev, cmdq->pdata->gce_num,
+- sizeof(cmdq->clocks), GFP_KERNEL);
++ sizeof(*cmdq->clocks), GFP_KERNEL);
+ if (!cmdq->clocks)
+ return -ENOMEM;
+
+diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c
+index 7a87424657a158..bd0b9762cef4f0 100644
+--- a/drivers/mailbox/omap-mailbox.c
++++ b/drivers/mailbox/omap-mailbox.c
+@@ -15,6 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/kfifo.h>
+ #include <linux/err.h>
++#include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index 098bf526136c94..468cf819930a1d 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -2474,7 +2474,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
+ int r;
+ unsigned int num_locks;
+ struct dm_bufio_client *c;
+- char slab_name[27];
++ char slab_name[64];
++ static atomic_t seqno = ATOMIC_INIT(0);
+
+ if (!block_size || block_size & ((1 << SECTOR_SHIFT) - 1)) {
+ DMERR("%s: block size not specified or is not multiple of 512b", __func__);
+@@ -2525,7 +2526,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
+ (block_size < PAGE_SIZE || !is_power_of_2(block_size))) {
+ unsigned int align = min(1U << __ffs(block_size), (unsigned int)PAGE_SIZE);
+
+- snprintf(slab_name, sizeof(slab_name), "dm_bufio_cache-%u", block_size);
++ snprintf(slab_name, sizeof(slab_name), "dm_bufio_cache-%u-%u",
++ block_size, atomic_inc_return(&seqno));
+ c->slab_cache = kmem_cache_create(slab_name, block_size, align,
+ SLAB_RECLAIM_ACCOUNT, NULL);
+ if (!c->slab_cache) {
+@@ -2534,9 +2536,11 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
+ }
+ }
+ if (aux_size)
+- snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u", aux_size);
++ snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u-%u",
++ aux_size, atomic_inc_return(&seqno));
+ else
+- snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer");
++ snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u",
++ atomic_inc_return(&seqno));
+ c->slab_buffer = kmem_cache_create(slab_name, sizeof(struct dm_buffer) + aux_size,
+ 0, SLAB_RECLAIM_ACCOUNT, NULL);
+ if (!c->slab_buffer) {
+diff --git a/drivers/md/dm-cache-background-tracker.c b/drivers/md/dm-cache-background-tracker.c
+index 9c5308298cf124..f3051bd7d2df07 100644
+--- a/drivers/md/dm-cache-background-tracker.c
++++ b/drivers/md/dm-cache-background-tracker.c
+@@ -11,12 +11,6 @@
+
+ #define DM_MSG_PREFIX "dm-background-tracker"
+
+-struct bt_work {
+- struct list_head list;
+- struct rb_node node;
+- struct policy_work work;
+-};
+-
+ struct background_tracker {
+ unsigned int max_work;
+ atomic_t pending_promotes;
+@@ -26,10 +20,10 @@ struct background_tracker {
+ struct list_head issued;
+ struct list_head queued;
+ struct rb_root pending;
+-
+- struct kmem_cache *work_cache;
+ };
+
++struct kmem_cache *btracker_work_cache = NULL;
++
+ struct background_tracker *btracker_create(unsigned int max_work)
+ {
+ struct background_tracker *b = kmalloc(sizeof(*b), GFP_KERNEL);
+@@ -48,12 +42,6 @@ struct background_tracker *btracker_create(unsigned int max_work)
+ INIT_LIST_HEAD(&b->queued);
+
+ b->pending = RB_ROOT;
+- b->work_cache = KMEM_CACHE(bt_work, 0);
+- if (!b->work_cache) {
+- DMERR("couldn't create mempool for background work items");
+- kfree(b);
+- b = NULL;
+- }
+
+ return b;
+ }
+@@ -66,10 +54,9 @@ void btracker_destroy(struct background_tracker *b)
+ BUG_ON(!list_empty(&b->issued));
+ list_for_each_entry_safe (w, tmp, &b->queued, list) {
+ list_del(&w->list);
+- kmem_cache_free(b->work_cache, w);
++ kmem_cache_free(btracker_work_cache, w);
+ }
+
+- kmem_cache_destroy(b->work_cache);
+ kfree(b);
+ }
+ EXPORT_SYMBOL_GPL(btracker_destroy);
+@@ -180,7 +167,7 @@ static struct bt_work *alloc_work(struct background_tracker *b)
+ if (max_work_reached(b))
+ return NULL;
+
+- return kmem_cache_alloc(b->work_cache, GFP_NOWAIT);
++ return kmem_cache_alloc(btracker_work_cache, GFP_NOWAIT);
+ }
+
+ int btracker_queue(struct background_tracker *b,
+@@ -203,7 +190,7 @@ int btracker_queue(struct background_tracker *b,
+ * There was a race, we'll just ignore this second
+ * bit of work for the same oblock.
+ */
+- kmem_cache_free(b->work_cache, w);
++ kmem_cache_free(btracker_work_cache, w);
+ return -EINVAL;
+ }
+
+@@ -244,7 +231,7 @@ void btracker_complete(struct background_tracker *b,
+ update_stats(b, &w->work, -1);
+ rb_erase(&w->node, &b->pending);
+ list_del(&w->list);
+- kmem_cache_free(b->work_cache, w);
++ kmem_cache_free(btracker_work_cache, w);
+ }
+ EXPORT_SYMBOL_GPL(btracker_complete);
+
+diff --git a/drivers/md/dm-cache-background-tracker.h b/drivers/md/dm-cache-background-tracker.h
+index 5b8f5c667b81b4..09c8fc59f7bb7b 100644
+--- a/drivers/md/dm-cache-background-tracker.h
++++ b/drivers/md/dm-cache-background-tracker.h
+@@ -26,6 +26,14 @@
+ * protected with a spinlock.
+ */
+
++struct bt_work {
++ struct list_head list;
++ struct rb_node node;
++ struct policy_work work;
++};
++
++extern struct kmem_cache *btracker_work_cache;
++
+ struct background_work;
+ struct background_tracker;
+
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 8c0ede33af6fb5..ccc3f3a0b54173 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -10,6 +10,7 @@
+ #include "dm-bio-record.h"
+ #include "dm-cache-metadata.h"
+ #include "dm-io-tracker.h"
++#include "dm-cache-background-tracker.h"
+
+ #include <linux/dm-io.h>
+ #include <linux/dm-kcopyd.h>
+@@ -2263,7 +2264,7 @@ static int parse_cache_args(struct cache_args *ca, int argc, char **argv,
+
+ /*----------------------------------------------------------------*/
+
+-static struct kmem_cache *migration_cache;
++static struct kmem_cache *migration_cache = NULL;
+
+ #define NOT_CORE_OPTION 1
+
+@@ -3449,22 +3450,36 @@ static int __init dm_cache_init(void)
+ int r;
+
+ migration_cache = KMEM_CACHE(dm_cache_migration, 0);
+- if (!migration_cache)
+- return -ENOMEM;
++ if (!migration_cache) {
++ r = -ENOMEM;
++ goto err;
++ }
++
++ btracker_work_cache = kmem_cache_create("dm_cache_bt_work",
++ sizeof(struct bt_work), __alignof__(struct bt_work), 0, NULL);
++ if (!btracker_work_cache) {
++ r = -ENOMEM;
++ goto err;
++ }
+
+ r = dm_register_target(&cache_target);
+ if (r) {
+- kmem_cache_destroy(migration_cache);
+- return r;
++ goto err;
+ }
+
+ return 0;
++
++err:
++ kmem_cache_destroy(migration_cache);
++ kmem_cache_destroy(btracker_work_cache);
++ return r;
+ }
+
+ static void __exit dm_cache_exit(void)
+ {
+ dm_unregister_target(&cache_target);
+ kmem_cache_destroy(migration_cache);
++ kmem_cache_destroy(btracker_work_cache);
+ }
+
+ module_init(dm_cache_init);
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index 272945a878b3ce..a3f4b4ad35aab9 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -1405,12 +1405,13 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
+ if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, timings))
++ false, adv76xx_get_dv_timings_cap(sd, -1), timings))
+ return 0;
+ if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, state->aspect_ratio, timings))
++ false, state->aspect_ratio,
++ adv76xx_get_dv_timings_cap(sd, -1), timings))
+ return 0;
+
+ v4l2_dbg(2, debug, sd,
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index f2d4217310e7ff..bbba0a78703a69 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -1431,14 +1431,15 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
+ }
+
+ if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
+- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, timings))
++ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
++ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
++ false, adv7842_get_dv_timings_cap(sd), timings))
+ return 0;
+ if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
+- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, state->aspect_ratio, timings))
++ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
++ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
++ false, state->aspect_ratio,
++ adv7842_get_dv_timings_cap(sd), timings))
+ return 0;
+
+ v4l2_dbg(2, debug, sd,
+diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c
+index ffe5f25f864762..58424d8f72af03 100644
+--- a/drivers/media/i2c/ds90ub960.c
++++ b/drivers/media/i2c/ds90ub960.c
+@@ -1286,7 +1286,7 @@ static int ub960_rxport_get_strobe_pos(struct ub960_data *priv,
+
+ clk_delay += v & UB960_IR_RX_ANA_STROBE_SET_CLK_DELAY_MASK;
+
+- ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
++ ret = ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/media/i2c/vgxy61.c b/drivers/media/i2c/vgxy61.c
+index 30378e9620166d..8034e21051bece 100644
+--- a/drivers/media/i2c/vgxy61.c
++++ b/drivers/media/i2c/vgxy61.c
+@@ -1617,7 +1617,7 @@ static int vgxy61_detect(struct vgxy61_dev *sensor)
+
+ ret = cci_read(sensor->regmap, VGXY61_REG_NVM, &st, NULL);
+ if (ret < 0)
+- return st;
++ return ret;
+ if (st != VGXY61_NVM_OK)
+ dev_warn(&client->dev, "Bad nvm state got %u\n", (u8)st);
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-bus.c b/drivers/media/pci/intel/ipu6/ipu6-bus.c
+index 149ec098cdbfe1..37d88ddb6ee7cd 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-bus.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-bus.c
+@@ -94,8 +94,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
+ if (!adev)
+ return ERR_PTR(-ENOMEM);
+
+- adev->dma_mask = DMA_BIT_MASK(isp->secure_mode ? IPU6_MMU_ADDR_BITS :
+- IPU6_MMU_ADDR_BITS_NON_SECURE);
+ adev->isp = isp;
+ adev->ctrl = ctrl;
+ adev->pdata = pdata;
+@@ -106,10 +104,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
+
+ auxdev->dev.parent = parent;
+ auxdev->dev.release = ipu6_bus_release;
+- auxdev->dev.dma_ops = &ipu6_dma_ops;
+- auxdev->dev.dma_mask = &adev->dma_mask;
+- auxdev->dev.dma_parms = pdev->dev.dma_parms;
+- auxdev->dev.coherent_dma_mask = adev->dma_mask;
+
+ ret = auxiliary_device_init(auxdev);
+ if (ret < 0) {
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-buttress.c b/drivers/media/pci/intel/ipu6/ipu6-buttress.c
+index e47f84c30e10d6..1ee63ef4a40b22 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-buttress.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-buttress.c
+@@ -24,6 +24,7 @@
+
+ #include "ipu6.h"
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-buttress.h"
+ #include "ipu6-platform-buttress-regs.h"
+
+@@ -345,12 +346,16 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr)
+ u32 disable_irqs = 0;
+ u32 irq_status;
+ u32 i, count = 0;
++ int active;
+
+- pm_runtime_get_noresume(&isp->pdev->dev);
++ active = pm_runtime_get_if_active(&isp->pdev->dev);
++ if (!active)
++ return IRQ_NONE;
+
+ irq_status = readl(isp->base + reg_irq_sts);
+- if (!irq_status) {
+- pm_runtime_put_noidle(&isp->pdev->dev);
++ if (irq_status == 0 || WARN_ON_ONCE(irq_status == 0xffffffffu)) {
++ if (active > 0)
++ pm_runtime_put_noidle(&isp->pdev->dev);
+ return IRQ_NONE;
+ }
+
+@@ -426,7 +431,8 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr)
+ writel(BUTTRESS_IRQS & ~disable_irqs,
+ isp->base + BUTTRESS_REG_ISR_ENABLE);
+
+- pm_runtime_put(&isp->pdev->dev);
++ if (active > 0)
++ pm_runtime_put(&isp->pdev->dev);
+
+ return ret;
+ }
+@@ -553,6 +559,7 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys,
+ const struct firmware *fw, struct sg_table *sgt)
+ {
+ bool is_vmalloc = is_vmalloc_addr(fw->data);
++ struct pci_dev *pdev = sys->isp->pdev;
+ struct page **pages;
+ const void *addr;
+ unsigned long n_pages;
+@@ -588,14 +595,20 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys,
+ goto out;
+ }
+
+- ret = dma_map_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0);
+- if (ret < 0) {
+- ret = -ENOMEM;
++ ret = dma_map_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
++ if (ret) {
+ sg_free_table(sgt);
+ goto out;
+ }
+
+- dma_sync_sgtable_for_device(&sys->auxdev.dev, sgt, DMA_TO_DEVICE);
++ ret = ipu6_dma_map_sgtable(sys, sgt, DMA_TO_DEVICE, 0);
++ if (ret) {
++ dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
++ sg_free_table(sgt);
++ goto out;
++ }
++
++ ipu6_dma_sync_sgtable(sys, sgt);
+
+ out:
+ kfree(pages);
+@@ -607,7 +620,10 @@ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_map_fw_image, INTEL_IPU6);
+ void ipu6_buttress_unmap_fw_image(struct ipu6_bus_device *sys,
+ struct sg_table *sgt)
+ {
+- dma_unmap_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0);
++ struct pci_dev *pdev = sys->isp->pdev;
++
++ ipu6_dma_unmap_sgtable(sys, sgt, DMA_TO_DEVICE, 0);
++ dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
+ sg_free_table(sgt);
+ }
+ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_unmap_fw_image, INTEL_IPU6);
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-cpd.c b/drivers/media/pci/intel/ipu6/ipu6-cpd.c
+index 715b21ab4b8e98..21c1c128a7eaa5 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-cpd.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-cpd.c
+@@ -15,6 +15,7 @@
+ #include "ipu6.h"
+ #include "ipu6-bus.h"
+ #include "ipu6-cpd.h"
++#include "ipu6-dma.h"
+
+ /* 15 entries + header*/
+ #define MAX_PKG_DIR_ENT_CNT 16
+@@ -162,7 +163,6 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ {
+ dma_addr_t dma_addr_src = sg_dma_address(adev->fw_sgt.sgl);
+ const struct ipu6_cpd_ent *ent, *man_ent, *met_ent;
+- struct device *dev = &adev->auxdev.dev;
+ struct ipu6_device *isp = adev->isp;
+ unsigned int man_sz, met_sz;
+ void *pkg_dir_pos;
+@@ -175,8 +175,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ met_sz = met_ent->len;
+
+ adev->pkg_dir_size = PKG_DIR_SIZE + man_sz + met_sz;
+- adev->pkg_dir = dma_alloc_attrs(dev, adev->pkg_dir_size,
+- &adev->pkg_dir_dma_addr, GFP_KERNEL, 0);
++ adev->pkg_dir = ipu6_dma_alloc(adev, adev->pkg_dir_size,
++ &adev->pkg_dir_dma_addr, GFP_KERNEL, 0);
+ if (!adev->pkg_dir)
+ return -ENOMEM;
+
+@@ -198,8 +198,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ met_ent->len);
+ if (ret) {
+ dev_err(&isp->pdev->dev, "Failed to parse module data\n");
+- dma_free_attrs(dev, adev->pkg_dir_size,
+- adev->pkg_dir, adev->pkg_dir_dma_addr, 0);
++ ipu6_dma_free(adev, adev->pkg_dir_size,
++ adev->pkg_dir, adev->pkg_dir_dma_addr, 0);
+ return ret;
+ }
+
+@@ -211,8 +211,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ pkg_dir_pos += man_sz;
+ memcpy(pkg_dir_pos, src + met_ent->offset, met_sz);
+
+- dma_sync_single_range_for_device(dev, adev->pkg_dir_dma_addr,
+- 0, adev->pkg_dir_size, DMA_TO_DEVICE);
++ ipu6_dma_sync_single(adev, adev->pkg_dir_dma_addr,
++ adev->pkg_dir_size);
+
+ return 0;
+ }
+@@ -220,8 +220,8 @@ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_create_pkg_dir, INTEL_IPU6);
+
+ void ipu6_cpd_free_pkg_dir(struct ipu6_bus_device *adev)
+ {
+- dma_free_attrs(&adev->auxdev.dev, adev->pkg_dir_size, adev->pkg_dir,
+- adev->pkg_dir_dma_addr, 0);
++ ipu6_dma_free(adev, adev->pkg_dir_size, adev->pkg_dir,
++ adev->pkg_dir_dma_addr, 0);
+ }
+ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_free_pkg_dir, INTEL_IPU6);
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+index 92530a1cc90f51..b71f66bd8c1fdb 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+@@ -39,8 +39,7 @@ static struct vm_info *get_vm_info(struct ipu6_mmu *mmu, dma_addr_t iova)
+ return NULL;
+ }
+
+-static void __dma_clear_buffer(struct page *page, size_t size,
+- unsigned long attrs)
++static void __clear_buffer(struct page *page, size_t size, unsigned long attrs)
+ {
+ void *ptr;
+
+@@ -56,8 +55,7 @@ static void __dma_clear_buffer(struct page *page, size_t size,
+ clflush_cache_range(ptr, size);
+ }
+
+-static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+- gfp_t gfp, unsigned long attrs)
++static struct page **__alloc_buffer(size_t size, gfp_t gfp, unsigned long attrs)
+ {
+ int count = PHYS_PFN(size);
+ int array_size = count * sizeof(struct page *);
+@@ -86,7 +84,7 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+ pages[i + j] = pages[i] + j;
+ }
+
+- __dma_clear_buffer(pages[i], PAGE_SIZE << order, attrs);
++ __clear_buffer(pages[i], PAGE_SIZE << order, attrs);
+ i += 1 << order;
+ count -= 1 << order;
+ }
+@@ -100,29 +98,26 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+ return NULL;
+ }
+
+-static void __dma_free_buffer(struct device *dev, struct page **pages,
+- size_t size, unsigned long attrs)
++static void __free_buffer(struct page **pages, size_t size, unsigned long attrs)
+ {
+ int count = PHYS_PFN(size);
+ unsigned int i;
+
+ for (i = 0; i < count && pages[i]; i++) {
+- __dma_clear_buffer(pages[i], PAGE_SIZE, attrs);
++ __clear_buffer(pages[i], PAGE_SIZE, attrs);
+ __free_pages(pages[i], 0);
+ }
+
+ kvfree(pages);
+ }
+
+-static void ipu6_dma_sync_single_for_cpu(struct device *dev,
+- dma_addr_t dma_handle,
+- size_t size,
+- enum dma_data_direction dir)
++void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle,
++ size_t size)
+ {
+ void *vaddr;
+ u32 offset;
+ struct vm_info *info;
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct ipu6_mmu *mmu = sys->mmu;
+
+ info = get_vm_info(mmu, dma_handle);
+ if (WARN_ON(!info))
+@@ -135,10 +130,10 @@ static void ipu6_dma_sync_single_for_cpu(struct device *dev,
+ vaddr = info->vaddr + offset;
+ clflush_cache_range(vaddr, size);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_single, INTEL_IPU6);
+
+-static void ipu6_dma_sync_sg_for_cpu(struct device *dev,
+- struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir)
++void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents)
+ {
+ struct scatterlist *sg;
+ int i;
+@@ -146,14 +141,22 @@ static void ipu6_dma_sync_sg_for_cpu(struct device *dev,
+ for_each_sg(sglist, sg, nents, i)
+ clflush_cache_range(page_to_virt(sg_page(sg)), sg->length);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sg, INTEL_IPU6);
+
+-static void *ipu6_dma_alloc(struct device *dev, size_t size,
+- dma_addr_t *dma_handle, gfp_t gfp,
+- unsigned long attrs)
++void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ ipu6_dma_sync_sg(sys, sgt->sgl, sgt->orig_nents);
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sgtable, INTEL_IPU6);
++
++void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
++ dma_addr_t *dma_handle, gfp_t gfp,
++ unsigned long attrs)
++{
++ struct device *dev = &sys->auxdev.dev;
++ struct pci_dev *pdev = sys->isp->pdev;
+ dma_addr_t pci_dma_addr, ipu6_iova;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct vm_info *info;
+ unsigned long count;
+ struct page **pages;
+@@ -173,7 +176,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+ if (!iova)
+ goto out_kfree;
+
+- pages = __dma_alloc_buffer(dev, size, gfp, attrs);
++ pages = __alloc_buffer(size, gfp, attrs);
+ if (!pages)
+ goto out_free_iova;
+
+@@ -227,7 +230,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+ ipu6_mmu_unmap(mmu->dmap->mmu_info, ipu6_iova, PAGE_SIZE);
+ }
+
+- __dma_free_buffer(dev, pages, size, attrs);
++ __free_buffer(pages, size, attrs);
+
+ out_free_iova:
+ __free_iova(&mmu->dmap->iovad, iova);
+@@ -236,13 +239,13 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+
+ return NULL;
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_alloc, INTEL_IPU6);
+
+-static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+- dma_addr_t dma_handle,
+- unsigned long attrs)
++void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
++ dma_addr_t dma_handle, unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ struct ipu6_mmu *mmu = sys->mmu;
++ struct pci_dev *pdev = sys->isp->pdev;
+ struct iova *iova = find_iova(&mmu->dmap->iovad, PHYS_PFN(dma_handle));
+ dma_addr_t pci_dma_addr, ipu6_iova;
+ struct vm_info *info;
+@@ -281,7 +284,7 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+ ipu6_mmu_unmap(mmu->dmap->mmu_info, PFN_PHYS(iova->pfn_lo),
+ PFN_PHYS(iova_size(iova)));
+
+- __dma_free_buffer(dev, pages, size, attrs);
++ __free_buffer(pages, size, attrs);
+
+ mmu->tlb_invalidate(mmu);
+
+@@ -289,13 +292,14 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+
+ kfree(info);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_free, INTEL_IPU6);
+
+-static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+- void *addr, dma_addr_t iova, size_t size,
+- unsigned long attrs)
++int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
++ void *addr, dma_addr_t iova, size_t size,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- size_t count = PHYS_PFN(PAGE_ALIGN(size));
++ struct ipu6_mmu *mmu = sys->mmu;
++ size_t count = PFN_UP(size);
+ struct vm_info *info;
+ size_t i;
+ int ret;
+@@ -323,18 +327,17 @@ static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+ return 0;
+ }
+
+-static void ipu6_dma_unmap_sg(struct device *dev,
+- struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir,
+- unsigned long attrs)
++void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs)
+ {
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct iova *iova = find_iova(&mmu->dmap->iovad,
+ PHYS_PFN(sg_dma_address(sglist)));
+- int i, npages, count;
+ struct scatterlist *sg;
+ dma_addr_t pci_dma_addr;
++ unsigned int i;
+
+ if (!nents)
+ return;
+@@ -342,31 +345,15 @@ static void ipu6_dma_unmap_sg(struct device *dev,
+ if (WARN_ON(!iova))
+ return;
+
+- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+- ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL);
+-
+- /* get the nents as orig_nents given by caller */
+- count = 0;
+- npages = iova_size(iova);
+- for_each_sg(sglist, sg, nents, i) {
+- if (sg_dma_len(sg) == 0 ||
+- sg_dma_address(sg) == DMA_MAPPING_ERROR)
+- break;
+-
+- npages -= PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+- count++;
+- if (npages <= 0)
+- break;
+- }
+-
+ /*
+ * Before IPU6 mmu unmap, return the pci dma address back to sg
+ * assume the nents is less than orig_nents as the least granule
+ * is 1 SZ_4K page
+ */
+- dev_dbg(dev, "trying to unmap concatenated %u ents\n", count);
+- for_each_sg(sglist, sg, count, i) {
+- dev_dbg(dev, "ipu unmap sg[%d] %pad\n", i, &sg_dma_address(sg));
++ dev_dbg(dev, "trying to unmap concatenated %u ents\n", nents);
++ for_each_sg(sglist, sg, nents, i) {
++ dev_dbg(dev, "unmap sg[%d] %pad size %u\n", i,
++ &sg_dma_address(sg), sg_dma_len(sg));
+ pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info,
+ sg_dma_address(sg));
+ dev_dbg(dev, "return pci_dma_addr %pad back to sg[%d]\n",
+@@ -380,23 +367,21 @@ static void ipu6_dma_unmap_sg(struct device *dev,
+ PFN_PHYS(iova_size(iova)));
+
+ mmu->tlb_invalidate(mmu);
+-
+- dma_unmap_sg_attrs(&pdev->dev, sglist, nents, dir, attrs);
+-
+ __free_iova(&mmu->dmap->iovad, iova);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sg, INTEL_IPU6);
+
+-static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir,
+- unsigned long attrs)
++int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct scatterlist *sg;
+ struct iova *iova;
+ size_t npages = 0;
+ unsigned long iova_addr;
+- int i, count;
++ int i;
+
+ for_each_sg(sglist, sg, nents, i) {
+ if (sg->offset) {
+@@ -406,18 +391,12 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ }
+ }
+
+- dev_dbg(dev, "pci_dma_map_sg trying to map %d ents\n", nents);
+- count = dma_map_sg_attrs(&pdev->dev, sglist, nents, dir, attrs);
+- if (count <= 0) {
+- dev_err(dev, "pci_dma_map_sg %d ents failed\n", nents);
+- return 0;
+- }
+-
+- dev_dbg(dev, "pci_dma_map_sg %d ents mapped\n", count);
+-
+- for_each_sg(sglist, sg, count, i)
++ for_each_sg(sglist, sg, nents, i)
+ npages += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+
++ dev_dbg(dev, "dmamap trying to map %d ents %zu pages\n",
++ nents, npages);
++
+ iova = alloc_iova(&mmu->dmap->iovad, npages,
+ PHYS_PFN(dma_get_mask(dev)), 0);
+ if (!iova)
+@@ -427,12 +406,13 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ iova->pfn_hi);
+
+ iova_addr = iova->pfn_lo;
+- for_each_sg(sglist, sg, count, i) {
++ for_each_sg(sglist, sg, nents, i) {
++ phys_addr_t iova_pa;
+ int ret;
+
+- dev_dbg(dev, "mapping entry %d: iova 0x%llx phy %pad size %d\n",
+- i, PFN_PHYS(iova_addr), &sg_dma_address(sg),
+- sg_dma_len(sg));
++ iova_pa = PFN_PHYS(iova_addr);
++ dev_dbg(dev, "mapping entry %d: iova %pap phy %pap size %d\n",
++ i, &iova_pa, &sg_dma_address(sg), sg_dma_len(sg));
+
+ ret = ipu6_mmu_map(mmu->dmap->mmu_info, PFN_PHYS(iova_addr),
+ sg_dma_address(sg),
+@@ -445,25 +425,48 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ iova_addr += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+ }
+
+- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+- ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL);
++ dev_dbg(dev, "dmamap %d ents %zu pages mapped\n", nents, npages);
+
+- return count;
++ return nents;
+
+ out_fail:
+- ipu6_dma_unmap_sg(dev, sglist, i, dir, attrs);
++ ipu6_dma_unmap_sg(sys, sglist, i, dir, attrs);
+
+ return 0;
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sg, INTEL_IPU6);
++
++int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs)
++{
++ int nents;
++
++ nents = ipu6_dma_map_sg(sys, sgt->sgl, sgt->nents, dir, attrs);
++ if (nents < 0)
++ return nents;
++
++ sgt->nents = nents;
++
++ return 0;
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sgtable, INTEL_IPU6);
++
++void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs)
++{
++ ipu6_dma_unmap_sg(sys, sgt->sgl, sgt->nents, dir, attrs);
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sgtable, INTEL_IPU6);
+
+ /*
+ * Create scatter-list for the already allocated DMA buffer
+ */
+-static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+- void *cpu_addr, dma_addr_t handle, size_t size,
+- unsigned long attrs)
++int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ void *cpu_addr, dma_addr_t handle, size_t size,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct vm_info *info;
+ int n_pages;
+ int ret = 0;
+@@ -483,20 +486,7 @@ static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ ret = sg_alloc_table_from_pages(sgt, info->pages, n_pages, 0, size,
+ GFP_KERNEL);
+ if (ret)
+- dev_warn(dev, "IPU6 get sgt table failed\n");
++ dev_warn(dev, "get sgt table failed\n");
+
+ return ret;
+ }
+-
+-const struct dma_map_ops ipu6_dma_ops = {
+- .alloc = ipu6_dma_alloc,
+- .free = ipu6_dma_free,
+- .mmap = ipu6_dma_mmap,
+- .map_sg = ipu6_dma_map_sg,
+- .unmap_sg = ipu6_dma_unmap_sg,
+- .sync_single_for_cpu = ipu6_dma_sync_single_for_cpu,
+- .sync_single_for_device = ipu6_dma_sync_single_for_cpu,
+- .sync_sg_for_cpu = ipu6_dma_sync_sg_for_cpu,
+- .sync_sg_for_device = ipu6_dma_sync_sg_for_cpu,
+- .get_sgtable = ipu6_dma_get_sgtable,
+-};
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h
+index 847ea5b7c925c3..b51244add9e611 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.h
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h
+@@ -5,7 +5,13 @@
+ #define IPU6_DMA_H
+
+ #include <linux/dma-map-ops.h>
++#include <linux/dma-mapping.h>
+ #include <linux/iova.h>
++#include <linux/iova.h>
++#include <linux/scatterlist.h>
++#include <linux/types.h>
++
++#include "ipu6-bus.h"
+
+ struct ipu6_mmu_info;
+
+@@ -14,6 +20,30 @@ struct ipu6_dma_mapping {
+ struct iova_domain iovad;
+ };
+
+-extern const struct dma_map_ops ipu6_dma_ops;
+-
++void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle,
++ size_t size);
++void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents);
++void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt);
++void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
++ dma_addr_t *dma_handle, gfp_t gfp,
++ unsigned long attrs);
++void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
++ dma_addr_t dma_handle, unsigned long attrs);
++int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
++ void *addr, dma_addr_t iova, size_t size,
++ unsigned long attrs);
++int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs);
++void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs);
++int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs);
++void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs);
++int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ void *cpu_addr, dma_addr_t handle, size_t size,
++ unsigned long attrs);
+ #endif /* IPU6_DMA_H */
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
+index 0b33fe9e703dcb..7d3d9314cb306b 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
+@@ -12,6 +12,7 @@
+ #include <linux/types.h>
+
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-fw-com.h"
+
+ /*
+@@ -88,7 +89,6 @@ struct ipu6_fw_com_context {
+ void *dma_buffer;
+ dma_addr_t dma_addr;
+ unsigned int dma_size;
+- unsigned long attrs;
+
+ struct ipu6_fw_sys_queue *input_queue; /* array of host to SP queues */
+ struct ipu6_fw_sys_queue *output_queue; /* array of SP to host */
+@@ -164,7 +164,6 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+ struct ipu6_fw_com_context *ctx;
+ struct device *dev = &adev->auxdev.dev;
+ size_t sizeall, offset;
+- unsigned long attrs = 0;
+ void *specific_host_addr;
+ unsigned int i;
+
+@@ -206,9 +205,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+
+ sizeall += sizeinput + sizeoutput;
+
+- ctx->dma_buffer = dma_alloc_attrs(dev, sizeall, &ctx->dma_addr,
+- GFP_KERNEL, attrs);
+- ctx->attrs = attrs;
++ ctx->dma_buffer = ipu6_dma_alloc(adev, sizeall, &ctx->dma_addr,
++ GFP_KERNEL, 0);
+ if (!ctx->dma_buffer) {
+ dev_err(dev, "failed to allocate dma memory\n");
+ kfree(ctx);
+@@ -239,6 +237,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+ memcpy(specific_host_addr, cfg->specific_addr,
+ cfg->specific_size);
+
++ ipu6_dma_sync_single(adev, ctx->config_vied_addr, sizeall);
++
+ /* initialize input queues */
+ offset += specific_size;
+ res.reg = SYSCOM_QPR_BASE_REG;
+@@ -315,8 +315,8 @@ int ipu6_fw_com_release(struct ipu6_fw_com_context *ctx, unsigned int force)
+ if (!force && !ctx->cell_ready(ctx->adev))
+ return -EBUSY;
+
+- dma_free_attrs(&ctx->adev->auxdev.dev, ctx->dma_size,
+- ctx->dma_buffer, ctx->dma_addr, ctx->attrs);
++ ipu6_dma_free(ctx->adev, ctx->dma_size,
++ ctx->dma_buffer, ctx->dma_addr, 0);
+ kfree(ctx);
+ return 0;
+ }
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+index c3a20507d6dbcc..57298ac73d0722 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+@@ -97,13 +97,15 @@ static void page_table_dump(struct ipu6_mmu_info *mmu_info)
+ for (l1_idx = 0; l1_idx < ISP_L1PT_PTES; l1_idx++) {
+ u32 l2_idx;
+ u32 iova = (phys_addr_t)l1_idx << ISP_L1PT_SHIFT;
++ phys_addr_t l2_phys;
+
+ if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval)
+ continue;
++
++ l2_phys = TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]);
+ dev_dbg(mmu_info->dev,
+- "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pa\n",
+- l1_idx, iova, iova + ISP_PAGE_SIZE,
+- TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]));
++ "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pap\n",
++ l1_idx, iova, iova + ISP_PAGE_SIZE, &l2_phys);
+
+ for (l2_idx = 0; l2_idx < ISP_L2PT_PTES; l2_idx++) {
+ u32 *l2_pt = mmu_info->l2_pts[l1_idx];
+@@ -227,7 +229,7 @@ static u32 *alloc_l1_pt(struct ipu6_mmu_info *mmu_info)
+ }
+
+ mmu_info->l1_pt_dma = dma >> ISP_PADDR_SHIFT;
+- dev_dbg(mmu_info->dev, "l1 pt %p mapped at %llx\n", pt, dma);
++ dev_dbg(mmu_info->dev, "l1 pt %p mapped at %pad\n", pt, &dma);
+
+ return pt;
+
+@@ -330,8 +332,8 @@ static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova,
+ u32 iova_end = ALIGN(iova + size, ISP_PAGE_SIZE);
+
+ dev_dbg(mmu_info->dev,
+- "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr 0x%10.10llx\n",
+- iova_start, iova_end, size, paddr);
++ "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr %pap\n",
++ iova_start, iova_end, size, &paddr);
+
+ return l2_map(mmu_info, iova_start, paddr, size);
+ }
+@@ -361,10 +363,13 @@ static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova,
+ for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT;
+ (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT)
+ < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) {
++ phys_addr_t pteval;
++
+ l2_pt = mmu_info->l2_pts[l1_idx];
++ pteval = TBL_PHYS_ADDR(l2_pt[l2_idx]);
+ dev_dbg(mmu_info->dev,
+- "unmap l2 index %u with pteval 0x%10.10llx\n",
+- l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx]));
++ "unmap l2 index %u with pteval 0x%p\n",
++ l2_idx, &pteval);
+ l2_pt[l2_idx] = mmu_info->dummy_page_pteval;
+
+ clflush_cache_range((void *)&l2_pt[l2_idx],
+@@ -525,9 +530,10 @@ static struct ipu6_mmu_info *ipu6_mmu_alloc(struct ipu6_device *isp)
+ return NULL;
+
+ mmu_info->aperture_start = 0;
+- mmu_info->aperture_end = DMA_BIT_MASK(isp->secure_mode ?
+- IPU6_MMU_ADDR_BITS :
+- IPU6_MMU_ADDR_BITS_NON_SECURE);
++ mmu_info->aperture_end =
++ (dma_addr_t)DMA_BIT_MASK(isp->secure_mode ?
++ IPU6_MMU_ADDR_BITS :
++ IPU6_MMU_ADDR_BITS_NON_SECURE);
+ mmu_info->pgsize_bitmap = SZ_4K;
+ mmu_info->dev = &isp->pdev->dev;
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c
+index bbd646378ab3ed..518d3377bfbecb 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6.c
++++ b/drivers/media/pci/intel/ipu6/ipu6.c
+@@ -758,6 +758,9 @@ static void ipu6_pci_reset_done(struct pci_dev *pdev)
+ */
+ static int ipu6_suspend(struct device *dev)
+ {
++ struct pci_dev *pdev = to_pci_dev(dev);
++
++ synchronize_irq(pdev->irq);
+ return 0;
+ }
+
+diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
+index 3d36f323a8f8f7..4d032436691c1b 100644
+--- a/drivers/media/radio/wl128x/fmdrv_common.c
++++ b/drivers/media/radio/wl128x/fmdrv_common.c
+@@ -466,11 +466,12 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
+ jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
+ return -ETIMEDOUT;
+ }
++ spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
+ if (!fmdev->resp_skb) {
++ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
+ fmerr("Response SKB is missing\n");
+ return -EFAULT;
+ }
+- spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
+ skb = fmdev->resp_skb;
+ fmdev->resp_skb = NULL;
+ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 6a790ac8cbe689..f25e011153642e 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -1459,12 +1459,19 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+ h_freq = (u32)bt->pixelclock / total_h_pixel;
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_CVT)) {
++ struct v4l2_dv_timings cvt = {};
++
+ if (v4l2_detect_cvt(total_v_lines, h_freq, bt->vsync, bt->width,
+- bt->polarities, bt->interlaced, timings))
++ bt->polarities, bt->interlaced,
++ &vivid_dv_timings_cap, &cvt) &&
++ cvt.bt.width == bt->width && cvt.bt.height == bt->height) {
++ *timings = cvt;
+ return true;
++ }
+ }
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_GTF)) {
++ struct v4l2_dv_timings gtf = {};
+ struct v4l2_fract aspect_ratio;
+
+ find_aspect_ratio(bt->width, bt->height,
+@@ -1472,8 +1479,12 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+ &aspect_ratio.denominator);
+ if (v4l2_detect_gtf(total_v_lines, h_freq, bt->vsync,
+ bt->polarities, bt->interlaced,
+- aspect_ratio, timings))
++ aspect_ratio, &vivid_dv_timings_cap,
++ &gtf) &&
++ gtf.bt.width == bt->width && gtf.bt.height == bt->height) {
++ *timings = gtf;
+ return true;
++ }
+ }
+ return false;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 942d0005c55e82..2cf5dcee0ce800 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -481,25 +481,28 @@ EXPORT_SYMBOL_GPL(v4l2_calc_timeperframe);
+ * @polarities - the horizontal and vertical polarities (same as struct
+ * v4l2_bt_timings polarities).
+ * @interlaced - if this flag is true, it indicates interlaced format
+- * @fmt - the resulting timings.
++ * @cap - the v4l2_dv_timings_cap capabilities.
++ * @timings - the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid CVT format. If so, then it will return true, and fmt will be filled
+ * in with the found CVT timings.
+ */
+-bool v4l2_detect_cvt(unsigned frame_height,
+- unsigned hfreq,
+- unsigned vsync,
+- unsigned active_width,
++bool v4l2_detect_cvt(unsigned int frame_height,
++ unsigned int hfreq,
++ unsigned int vsync,
++ unsigned int active_width,
+ u32 polarities,
+ bool interlaced,
+- struct v4l2_dv_timings *fmt)
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *timings)
+ {
+- int v_fp, v_bp, h_fp, h_bp, hsync;
+- int frame_width, image_height, image_width;
++ struct v4l2_dv_timings t = {};
++ int v_fp, v_bp, h_fp, h_bp, hsync;
++ int frame_width, image_height, image_width;
+ bool reduced_blanking;
+ bool rb_v2 = false;
+- unsigned pix_clk;
++ unsigned int pix_clk;
+
+ if (vsync < 4 || vsync > 8)
+ return false;
+@@ -625,36 +628,39 @@ bool v4l2_detect_cvt(unsigned frame_height,
+ h_fp = h_blank - hsync - h_bp;
+ }
+
+- fmt->type = V4L2_DV_BT_656_1120;
+- fmt->bt.polarities = polarities;
+- fmt->bt.width = image_width;
+- fmt->bt.height = image_height;
+- fmt->bt.hfrontporch = h_fp;
+- fmt->bt.vfrontporch = v_fp;
+- fmt->bt.hsync = hsync;
+- fmt->bt.vsync = vsync;
+- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
++ t.type = V4L2_DV_BT_656_1120;
++ t.bt.polarities = polarities;
++ t.bt.width = image_width;
++ t.bt.height = image_height;
++ t.bt.hfrontporch = h_fp;
++ t.bt.vfrontporch = v_fp;
++ t.bt.hsync = hsync;
++ t.bt.vsync = vsync;
++ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
+
+ if (!interlaced) {
+- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
+- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
++ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
++ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
+ } else {
+- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
++ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ 2 * vsync) / 2;
+- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+- 2 * vsync - fmt->bt.vbackporch;
+- fmt->bt.il_vfrontporch = v_fp;
+- fmt->bt.il_vsync = vsync;
+- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
+- fmt->bt.interlaced = V4L2_DV_INTERLACED;
++ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
++ 2 * vsync - t.bt.vbackporch;
++ t.bt.il_vfrontporch = v_fp;
++ t.bt.il_vsync = vsync;
++ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
++ t.bt.interlaced = V4L2_DV_INTERLACED;
+ }
+
+- fmt->bt.pixelclock = pix_clk;
+- fmt->bt.standards = V4L2_DV_BT_STD_CVT;
++ t.bt.pixelclock = pix_clk;
++ t.bt.standards = V4L2_DV_BT_STD_CVT;
+
+ if (reduced_blanking)
+- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
++ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+
++ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
++ return false;
++ *timings = t;
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
+@@ -699,22 +705,25 @@ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
+ * image height, so it has to be passed explicitly. Usually
+ * the native screen aspect ratio is used for this. If it
+ * is not filled in correctly, then 16:9 will be assumed.
+- * @fmt - the resulting timings.
++ * @cap - the v4l2_dv_timings_cap capabilities.
++ * @timings - the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid GTF format. If so, then it will return true, and fmt will be filled
+ * in with the found GTF timings.
+ */
+-bool v4l2_detect_gtf(unsigned frame_height,
+- unsigned hfreq,
+- unsigned vsync,
+- u32 polarities,
+- bool interlaced,
+- struct v4l2_fract aspect,
+- struct v4l2_dv_timings *fmt)
++bool v4l2_detect_gtf(unsigned int frame_height,
++ unsigned int hfreq,
++ unsigned int vsync,
++ u32 polarities,
++ bool interlaced,
++ struct v4l2_fract aspect,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *timings)
+ {
++ struct v4l2_dv_timings t = {};
+ int pix_clk;
+- int v_fp, v_bp, h_fp, hsync;
++ int v_fp, v_bp, h_fp, hsync;
+ int frame_width, image_height, image_width;
+ bool default_gtf;
+ int h_blank;
+@@ -783,36 +792,39 @@ bool v4l2_detect_gtf(unsigned frame_height,
+
+ h_fp = h_blank / 2 - hsync;
+
+- fmt->type = V4L2_DV_BT_656_1120;
+- fmt->bt.polarities = polarities;
+- fmt->bt.width = image_width;
+- fmt->bt.height = image_height;
+- fmt->bt.hfrontporch = h_fp;
+- fmt->bt.vfrontporch = v_fp;
+- fmt->bt.hsync = hsync;
+- fmt->bt.vsync = vsync;
+- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
++ t.type = V4L2_DV_BT_656_1120;
++ t.bt.polarities = polarities;
++ t.bt.width = image_width;
++ t.bt.height = image_height;
++ t.bt.hfrontporch = h_fp;
++ t.bt.vfrontporch = v_fp;
++ t.bt.hsync = hsync;
++ t.bt.vsync = vsync;
++ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
+
+ if (!interlaced) {
+- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
+- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
++ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
++ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
+ } else {
+- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
++ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ 2 * vsync) / 2;
+- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+- 2 * vsync - fmt->bt.vbackporch;
+- fmt->bt.il_vfrontporch = v_fp;
+- fmt->bt.il_vsync = vsync;
+- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
+- fmt->bt.interlaced = V4L2_DV_INTERLACED;
++ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
++ 2 * vsync - t.bt.vbackporch;
++ t.bt.il_vfrontporch = v_fp;
++ t.bt.il_vsync = vsync;
++ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
++ t.bt.interlaced = V4L2_DV_INTERLACED;
+ }
+
+- fmt->bt.pixelclock = pix_clk;
+- fmt->bt.standards = V4L2_DV_BT_STD_GTF;
++ t.bt.pixelclock = pix_clk;
++ t.bt.standards = V4L2_DV_BT_STD_GTF;
+
+ if (!default_gtf)
+- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
++ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+
++ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
++ return false;
++ *timings = t;
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_detect_gtf);
+diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c
+index a0bcb0864ecd2c..a798e26c6402d4 100644
+--- a/drivers/message/fusion/mptsas.c
++++ b/drivers/message/fusion/mptsas.c
+@@ -4231,10 +4231,8 @@ mptsas_find_phyinfo_by_phys_disk_num(MPT_ADAPTER *ioc, u8 phys_disk_num,
+ static void
+ mptsas_reprobe_lun(struct scsi_device *sdev, void *data)
+ {
+- int rc;
+-
+ sdev->no_uld_attach = data ? 1 : 0;
+- rc = scsi_device_reprobe(sdev);
++ WARN_ON(scsi_device_reprobe(sdev));
+ }
+
+ static void
+diff --git a/drivers/mfd/da9052-spi.c b/drivers/mfd/da9052-spi.c
+index be5f2b34e18aeb..80fc5c0cac2fb0 100644
+--- a/drivers/mfd/da9052-spi.c
++++ b/drivers/mfd/da9052-spi.c
+@@ -37,7 +37,7 @@ static int da9052_spi_probe(struct spi_device *spi)
+ spi_set_drvdata(spi, da9052);
+
+ config = da9052_regmap_config;
+- config.read_flag_mask = 1;
++ config.write_flag_mask = 1;
+ config.reg_bits = 7;
+ config.pad_bits = 1;
+ config.val_bits = 8;
+diff --git a/drivers/mfd/intel_soc_pmic_bxtwc.c b/drivers/mfd/intel_soc_pmic_bxtwc.c
+index ba32cacfc499f3..b65f604a1eec3b 100644
+--- a/drivers/mfd/intel_soc_pmic_bxtwc.c
++++ b/drivers/mfd/intel_soc_pmic_bxtwc.c
+@@ -149,6 +149,7 @@ static struct regmap_irq_chip bxtwc_regmap_irq_chip = {
+
+ static struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
+ .name = "bxtwc_irq_chip_pwrbtn",
++ .domain_suffix = "PWRBTN",
+ .status_base = BXTWC_PWRBTNIRQ,
+ .mask_base = BXTWC_MPWRBTNIRQ,
+ .irqs = bxtwc_regmap_irqs_pwrbtn,
+@@ -158,6 +159,7 @@ static struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
+
+ static struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
+ .name = "bxtwc_irq_chip_tmu",
++ .domain_suffix = "TMU",
+ .status_base = BXTWC_TMUIRQ,
+ .mask_base = BXTWC_MTMUIRQ,
+ .irqs = bxtwc_regmap_irqs_tmu,
+@@ -167,6 +169,7 @@ static struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
+
+ static struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
+ .name = "bxtwc_irq_chip_bcu",
++ .domain_suffix = "BCU",
+ .status_base = BXTWC_BCUIRQ,
+ .mask_base = BXTWC_MBCUIRQ,
+ .irqs = bxtwc_regmap_irqs_bcu,
+@@ -176,6 +179,7 @@ static struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
+
+ static struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
+ .name = "bxtwc_irq_chip_adc",
++ .domain_suffix = "ADC",
+ .status_base = BXTWC_ADCIRQ,
+ .mask_base = BXTWC_MADCIRQ,
+ .irqs = bxtwc_regmap_irqs_adc,
+@@ -185,6 +189,7 @@ static struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
+
+ static struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
+ .name = "bxtwc_irq_chip_chgr",
++ .domain_suffix = "CHGR",
+ .status_base = BXTWC_CHGR0IRQ,
+ .mask_base = BXTWC_MCHGR0IRQ,
+ .irqs = bxtwc_regmap_irqs_chgr,
+@@ -194,6 +199,7 @@ static struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
+
+ static struct regmap_irq_chip bxtwc_regmap_irq_chip_crit = {
+ .name = "bxtwc_irq_chip_crit",
++ .domain_suffix = "CRIT",
+ .status_base = BXTWC_CRITIRQ,
+ .mask_base = BXTWC_MCRITIRQ,
+ .irqs = bxtwc_regmap_irqs_crit,
+@@ -231,44 +237,55 @@ static const struct resource tmu_resources[] = {
+ };
+
+ static struct mfd_cell bxt_wc_dev[] = {
+- {
+- .name = "bxt_wcove_gpadc",
+- .num_resources = ARRAY_SIZE(adc_resources),
+- .resources = adc_resources,
+- },
+ {
+ .name = "bxt_wcove_thermal",
+ .num_resources = ARRAY_SIZE(thermal_resources),
+ .resources = thermal_resources,
+ },
+ {
+- .name = "bxt_wcove_usbc",
+- .num_resources = ARRAY_SIZE(usbc_resources),
+- .resources = usbc_resources,
++ .name = "bxt_wcove_gpio",
++ .num_resources = ARRAY_SIZE(gpio_resources),
++ .resources = gpio_resources,
+ },
+ {
+- .name = "bxt_wcove_ext_charger",
+- .num_resources = ARRAY_SIZE(charger_resources),
+- .resources = charger_resources,
++ .name = "bxt_wcove_region",
++ },
++};
++
++static const struct mfd_cell bxt_wc_tmu_dev[] = {
++ {
++ .name = "bxt_wcove_tmu",
++ .num_resources = ARRAY_SIZE(tmu_resources),
++ .resources = tmu_resources,
+ },
++};
++
++static const struct mfd_cell bxt_wc_bcu_dev[] = {
+ {
+ .name = "bxt_wcove_bcu",
+ .num_resources = ARRAY_SIZE(bcu_resources),
+ .resources = bcu_resources,
+ },
++};
++
++static const struct mfd_cell bxt_wc_adc_dev[] = {
+ {
+- .name = "bxt_wcove_tmu",
+- .num_resources = ARRAY_SIZE(tmu_resources),
+- .resources = tmu_resources,
++ .name = "bxt_wcove_gpadc",
++ .num_resources = ARRAY_SIZE(adc_resources),
++ .resources = adc_resources,
+ },
++};
+
++static struct mfd_cell bxt_wc_chgr_dev[] = {
+ {
+- .name = "bxt_wcove_gpio",
+- .num_resources = ARRAY_SIZE(gpio_resources),
+- .resources = gpio_resources,
++ .name = "bxt_wcove_usbc",
++ .num_resources = ARRAY_SIZE(usbc_resources),
++ .resources = usbc_resources,
+ },
+ {
+- .name = "bxt_wcove_region",
++ .name = "bxt_wcove_ext_charger",
++ .num_resources = ARRAY_SIZE(charger_resources),
++ .resources = charger_resources,
+ },
+ };
+
+@@ -426,6 +443,26 @@ static int bxtwc_add_chained_irq_chip(struct intel_soc_pmic *pmic,
+ 0, chip, data);
+ }
+
++static int bxtwc_add_chained_devices(struct intel_soc_pmic *pmic,
++ const struct mfd_cell *cells, int n_devs,
++ struct regmap_irq_chip_data *pdata,
++ int pirq, int irq_flags,
++ const struct regmap_irq_chip *chip,
++ struct regmap_irq_chip_data **data)
++{
++ struct device *dev = pmic->dev;
++ struct irq_domain *domain;
++ int ret;
++
++ ret = bxtwc_add_chained_irq_chip(pmic, pdata, pirq, irq_flags, chip, data);
++ if (ret)
++ return dev_err_probe(dev, ret, "Failed to add %s IRQ chip\n", chip->name);
++
++ domain = regmap_irq_get_domain(*data);
++
++ return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, cells, n_devs, NULL, 0, domain);
++}
++
+ static int bxtwc_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -467,6 +504,15 @@ static int bxtwc_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add IRQ chip\n");
+
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_tmu_dev, ARRAY_SIZE(bxt_wc_tmu_dev),
++ pmic->irq_chip_data,
++ BXTWC_TMU_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_tmu,
++ &pmic->irq_chip_data_tmu);
++ if (ret)
++ return ret;
++
+ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+ BXTWC_PWRBTN_LVL1_IRQ,
+ IRQF_ONESHOT,
+@@ -475,40 +521,32 @@ static int bxtwc_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add PWRBTN IRQ chip\n");
+
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_TMU_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_tmu,
+- &pmic->irq_chip_data_tmu);
+- if (ret)
+- return dev_err_probe(dev, ret, "Failed to add TMU IRQ chip\n");
+-
+- /* Add chained IRQ handler for BCU IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_BCU_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_bcu,
+- &pmic->irq_chip_data_bcu);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_bcu_dev, ARRAY_SIZE(bxt_wc_bcu_dev),
++ pmic->irq_chip_data,
++ BXTWC_BCU_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_bcu,
++ &pmic->irq_chip_data_bcu);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add BUC IRQ chip\n");
++ return ret;
+
+- /* Add chained IRQ handler for ADC IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_ADC_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_adc,
+- &pmic->irq_chip_data_adc);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_adc_dev, ARRAY_SIZE(bxt_wc_adc_dev),
++ pmic->irq_chip_data,
++ BXTWC_ADC_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_adc,
++ &pmic->irq_chip_data_adc);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add ADC IRQ chip\n");
++ return ret;
+
+- /* Add chained IRQ handler for CHGR IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_CHGR_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_chgr,
+- &pmic->irq_chip_data_chgr);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_chgr_dev, ARRAY_SIZE(bxt_wc_chgr_dev),
++ pmic->irq_chip_data,
++ BXTWC_CHGR_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_chgr,
++ &pmic->irq_chip_data_chgr);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add CHGR IRQ chip\n");
++ return ret;
+
+ /* Add chained IRQ handler for CRIT IRQs */
+ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
+index 7e23ab3d5842c8..84ebc96f58e48d 100644
+--- a/drivers/mfd/rt5033.c
++++ b/drivers/mfd/rt5033.c
+@@ -81,8 +81,8 @@ static int rt5033_i2c_probe(struct i2c_client *i2c)
+ chip_rev = dev_id & RT5033_CHIP_REV_MASK;
+ dev_info(&i2c->dev, "Device found (rev. %d)\n", chip_rev);
+
+- ret = regmap_add_irq_chip(rt5033->regmap, rt5033->irq,
+- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++ ret = devm_regmap_add_irq_chip(rt5033->dev, rt5033->regmap,
++ rt5033->irq, IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 0, &rt5033_irq_chip, &rt5033->irq_data);
+ if (ret) {
+ dev_err(&i2c->dev, "Failed to request IRQ %d: %d\n",
+diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
+index 2b9105295f3012..710364435b6b9e 100644
+--- a/drivers/mfd/tps65010.c
++++ b/drivers/mfd/tps65010.c
+@@ -544,17 +544,13 @@ static int tps65010_probe(struct i2c_client *client)
+ */
+ if (client->irq > 0) {
+ status = request_irq(client->irq, tps65010_irq,
+- IRQF_TRIGGER_FALLING, DRIVER_NAME, tps);
++ IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
++ DRIVER_NAME, tps);
+ if (status < 0) {
+ dev_dbg(&client->dev, "can't get IRQ %d, err %d\n",
+ client->irq, status);
+ return status;
+ }
+- /* annoying race here, ideally we'd have an option
+- * to claim the irq now and enable it later.
+- * FIXME genirq IRQF_NOAUTOEN now solves that ...
+- */
+- disable_irq(client->irq);
+ set_bit(FLAG_IRQ_ENABLE, &tps->flags);
+ } else
+ dev_warn(&client->dev, "IRQ not configured!\n");
+diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c
+index 6d4edd69db126a..e7d73c972f65dc 100644
+--- a/drivers/misc/apds990x.c
++++ b/drivers/misc/apds990x.c
+@@ -1147,7 +1147,7 @@ static int apds990x_probe(struct i2c_client *client)
+ err = chip->pdata->setup_resources();
+ if (err) {
+ err = -EINVAL;
+- goto fail3;
++ goto fail4;
+ }
+ }
+
+@@ -1155,7 +1155,7 @@ static int apds990x_probe(struct i2c_client *client)
+ apds990x_attribute_group);
+ if (err < 0) {
+ dev_err(&chip->client->dev, "Sysfs registration failed\n");
+- goto fail4;
++ goto fail5;
+ }
+
+ err = request_threaded_irq(client->irq, NULL,
+@@ -1166,15 +1166,17 @@ static int apds990x_probe(struct i2c_client *client)
+ if (err) {
+ dev_err(&client->dev, "could not get IRQ %d\n",
+ client->irq);
+- goto fail5;
++ goto fail6;
+ }
+ return err;
+-fail5:
++fail6:
+ sysfs_remove_group(&chip->client->dev.kobj,
+ &apds990x_attribute_group[0]);
+-fail4:
++fail5:
+ if (chip->pdata && chip->pdata->release_resources)
+ chip->pdata->release_resources();
++fail4:
++ pm_runtime_disable(&client->dev);
+ fail3:
+ regulator_bulk_disable(ARRAY_SIZE(chip->regs), chip->regs);
+ fail2:
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index 62ba0152547975..376047beea3d64 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -445,7 +445,7 @@ static void lkdtm_FAM_BOUNDS(void)
+
+ pr_err("FAIL: survived access of invalid flexible array member index!\n");
+
+- if (!__has_attribute(__counted_by__))
++ if (!IS_ENABLED(CONFIG_CC_HAS_COUNTED_BY))
+ pr_warn("This is expected since this %s was built with a compiler that does not support __counted_by\n",
+ lkdtm_kernel_info);
+ else if (IS_ENABLED(CONFIG_UBSAN_BOUNDS))
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index c9caa1ece7ef91..5a19d3e949134e 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -222,10 +222,6 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
+ u8 leftover = 0;
+ unsigned short rotator;
+ int i;
+- char tag[32];
+-
+- snprintf(tag, sizeof(tag), " ... CMD%d response SPI_%s",
+- cmd->opcode, maptype(cmd));
+
+ /* Except for data block reads, the whole response will already
+ * be stored in the scratch buffer. It's somewhere after the
+@@ -378,8 +374,9 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
+ }
+
+ if (value < 0)
+- dev_dbg(&host->spi->dev, "%s: resp %04x %08x\n",
+- tag, cmd->resp[0], cmd->resp[1]);
++ dev_dbg(&host->spi->dev,
++ " ... CMD%d response SPI_%s: resp %04x %08x\n",
++ cmd->opcode, maptype(cmd), cmd->resp[0], cmd->resp[1]);
+
+ /* disable chipselect on errors and some success cases */
+ if (value >= 0 && cs_on)
+diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
+index b22aa57119f238..e7a28f3316c3f2 100644
+--- a/drivers/mtd/hyperbus/rpc-if.c
++++ b/drivers/mtd/hyperbus/rpc-if.c
+@@ -163,9 +163,16 @@ static void rpcif_hb_remove(struct platform_device *pdev)
+ pm_runtime_disable(hyperbus->rpc.dev);
+ }
+
++static const struct platform_device_id rpc_if_hyperflash_id_table[] = {
++ { .name = "rpc-if-hyperflash" },
++ { /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(platform, rpc_if_hyperflash_id_table);
++
+ static struct platform_driver rpcif_platform_driver = {
+ .probe = rpcif_hb_probe,
+ .remove_new = rpcif_hb_remove,
++ .id_table = rpc_if_hyperflash_id_table,
+ .driver = {
+ .name = "rpc-if-hyperflash",
+ },
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index 4d7dc8a9c37385..a22aab4ed4e8ab 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -362,7 +362,7 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ size = ALIGN(size, sizeof(s32));
+ size += (req->ecc.strength + 1) * sizeof(s32) * 3;
+
+- user = kzalloc(size, GFP_KERNEL);
++ user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL);
+ if (!user)
+ return ERR_PTR(-ENOMEM);
+
+@@ -408,12 +408,6 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ }
+ EXPORT_SYMBOL_GPL(atmel_pmecc_create_user);
+
+-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user)
+-{
+- kfree(user);
+-}
+-EXPORT_SYMBOL_GPL(atmel_pmecc_destroy_user);
+-
+ static int get_strength(struct atmel_pmecc_user *user)
+ {
+ const int *strengths = user->pmecc->caps->strengths;
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.h b/drivers/mtd/nand/raw/atmel/pmecc.h
+index 7851c05126cf15..cc0c5af1f4f1ab 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.h
++++ b/drivers/mtd/nand/raw/atmel/pmecc.h
+@@ -55,8 +55,6 @@ struct atmel_pmecc *devm_atmel_pmecc_get(struct device *dev);
+ struct atmel_pmecc_user *
+ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ struct atmel_pmecc_user_req *req);
+-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user);
+-
+ void atmel_pmecc_reset(struct atmel_pmecc *pmecc);
+ int atmel_pmecc_enable(struct atmel_pmecc_user *user, int op);
+ void atmel_pmecc_disable(struct atmel_pmecc_user *user);
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index e0c4efc424f498..f0324e83234db4 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor,
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->dummy.nbytes)
+- op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
++ op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
+
+ if (op->data.nbytes)
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index 6cc237c24e0728..020994374ea13b 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -106,6 +106,7 @@ static int cypress_nor_sr_ready_and_clear_reg(struct spi_nor *nor, u64 addr)
+ int ret;
+
+ if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) {
++ op.addr.nbytes = nor->addr_nbytes;
+ op.dummy.nbytes = params->rdsr_dummy;
+ op.data.nbytes = 2;
+ }
+diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
+index ae5abe492b52ab..adc47b87b38a5f 100644
+--- a/drivers/mtd/ubi/attach.c
++++ b/drivers/mtd/ubi/attach.c
+@@ -1447,7 +1447,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ return err;
+ }
+
+-static struct ubi_attach_info *alloc_ai(void)
++static struct ubi_attach_info *alloc_ai(const char *slab_name)
+ {
+ struct ubi_attach_info *ai;
+
+@@ -1461,7 +1461,7 @@ static struct ubi_attach_info *alloc_ai(void)
+ INIT_LIST_HEAD(&ai->alien);
+ INIT_LIST_HEAD(&ai->fastmap);
+ ai->volumes = RB_ROOT;
+- ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache",
++ ai->aeb_slab_cache = kmem_cache_create(slab_name,
+ sizeof(struct ubi_ainf_peb),
+ 0, 0, NULL);
+ if (!ai->aeb_slab_cache) {
+@@ -1491,7 +1491,7 @@ static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info **ai)
+
+ err = -ENOMEM;
+
+- scan_ai = alloc_ai();
++ scan_ai = alloc_ai("ubi_aeb_slab_cache_fastmap");
+ if (!scan_ai)
+ goto out;
+
+@@ -1557,7 +1557,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ int err;
+ struct ubi_attach_info *ai;
+
+- ai = alloc_ai();
++ ai = alloc_ai("ubi_aeb_slab_cache");
+ if (!ai)
+ return -ENOMEM;
+
+@@ -1575,7 +1575,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ if (err > 0 || mtd_is_eccerr(err)) {
+ if (err != UBI_NO_FASTMAP) {
+ destroy_ai(ai);
+- ai = alloc_ai();
++ ai = alloc_ai("ubi_aeb_slab_cache");
+ if (!ai)
+ return -ENOMEM;
+
+@@ -1614,7 +1614,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) {
+ struct ubi_attach_info *scan_ai;
+
+- scan_ai = alloc_ai();
++ scan_ai = alloc_ai("ubi_aeb_slab_cache_dbg_chk_fastmap");
+ if (!scan_ai) {
+ err = -ENOMEM;
+ goto out_wl;
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 2a9cc9413c427d..9bdb6525f1281f 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -346,14 +346,27 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
+ * WL sub-system.
+ *
+ * @ubi: UBI device description object
++ * @need_fill: whether to fill wear-leveling pool when no PEBs are found
+ */
+-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi)
++static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
++ bool need_fill)
+ {
+ struct ubi_fm_pool *pool = &ubi->fm_wl_pool;
+ int pnum;
+
+- if (pool->used == pool->size)
++ if (pool->used == pool->size) {
++ if (need_fill && !ubi->fm_work_scheduled) {
++ /*
++ * We cannot update the fastmap here because this
++ * function is called in atomic context.
++ * Let's fail here and refill/update it as soon as
++ * possible.
++ */
++ ubi->fm_work_scheduled = 1;
++ schedule_work(&ubi->fm_work);
++ }
+ return NULL;
++ }
+
+ pnum = pool->pebs[pool->used];
+ return ubi->lookuptbl[pnum];
+@@ -375,7 +388,7 @@ static bool need_wear_leveling(struct ubi_device *ubi)
+ if (!ubi->used.rb_node)
+ return false;
+
+- e = next_peb_for_wl(ubi);
++ e = next_peb_for_wl(ubi, false);
+ if (!e) {
+ if (!ubi->free.rb_node)
+ return false;
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 5a3558bbb90356..e5cf3bdca3b012 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -143,8 +143,10 @@ static struct fwnode_handle *find_volume_fwnode(struct ubi_volume *vol)
+ vol->vol_id != volid)
+ continue;
+
++ fwnode_handle_put(fw_vols);
+ return fw_vol;
+ }
++ fwnode_handle_put(fw_vols);
+
+ return NULL;
+ }
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index a357f3d27f2f3d..fbd399cf650337 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -683,7 +683,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ ubi_assert(!ubi->move_to_put);
+
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+- if (!next_peb_for_wl(ubi) ||
++ if (!next_peb_for_wl(ubi, true) ||
+ #else
+ if (!ubi->free.rb_node ||
+ #endif
+@@ -846,7 +846,14 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ goto out_not_moved;
+ }
+ if (err == MOVE_RETRY) {
+- scrubbing = 1;
++ /*
++ * For source PEB:
++ * 1. The scrubbing is set for scrub type PEB, it will
++ * be put back into ubi->scrub list.
++ * 2. Non-scrub type PEB will be put back into ubi->used
++ * list.
++ */
++ keep = 1;
+ dst_leb_clean = 1;
+ goto out_not_moved;
+ }
+diff --git a/drivers/mtd/ubi/wl.h b/drivers/mtd/ubi/wl.h
+index 7b6715ef6d4a35..a69169c35e310f 100644
+--- a/drivers/mtd/ubi/wl.h
++++ b/drivers/mtd/ubi/wl.h
+@@ -5,7 +5,8 @@
+ static void update_fastmap_work_fn(struct work_struct *wrk);
+ static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root);
+ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi);
+-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi);
++static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
++ bool need_fill);
+ static bool need_wear_leveling(struct ubi_device *ubi);
+ static void ubi_fastmap_close(struct ubi_device *ubi);
+ static inline void ubi_fastmap_init(struct ubi_device *ubi, int *count)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 103e6aa604c33e..68e6e202d4ecdc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4589,7 +4589,7 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ struct net_device *dev = bp->dev;
+
+ if (page_mode) {
+- bp->flags &= ~BNXT_FLAG_AGG_RINGS;
++ bp->flags &= ~(BNXT_FLAG_AGG_RINGS | BNXT_FLAG_NO_AGG_RINGS);
+ bp->flags |= BNXT_FLAG_RX_PAGE_MODE;
+
+ if (bp->xdp_prog->aux->xdp_has_frags)
+@@ -9008,7 +9008,6 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
+ struct hwrm_port_mac_ptp_qcfg_output *resp;
+ struct hwrm_port_mac_ptp_qcfg_input *req;
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+- bool phc_cfg;
+ u8 flags;
+ int rc;
+
+@@ -9055,8 +9054,9 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
+ rc = -ENODEV;
+ goto exit;
+ }
+- phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
+- rc = bnxt_ptp_init(bp, phc_cfg);
++ ptp->rtc_configured =
++ (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
++ rc = bnxt_ptp_init(bp);
+ if (rc)
+ netdev_warn(bp->dev, "PTP initialization failed.\n");
+ exit:
+@@ -14454,6 +14454,14 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
+ bnxt_close_nic(bp, true, false);
+
+ WRITE_ONCE(dev->mtu, new_mtu);
++
++ /* MTU change may change the AGG ring settings if an XDP multi-buffer
++ * program is attached. We need to set the AGG rings settings and
++ * rx_skb_func accordingly.
++ */
++ if (READ_ONCE(bp->xdp_prog))
++ bnxt_set_rx_skb_mode(bp, true);
++
+ bnxt_set_ring_params(bp);
+
+ if (netif_running(dev))
+@@ -15925,6 +15933,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
+ if (netif_running(dev))
+ dev_close(dev);
+
++ bnxt_ptp_clear(bp);
+ bnxt_clear_int_mode(bp);
+ pci_disable_device(pdev);
+
+@@ -15952,6 +15961,7 @@ static int bnxt_suspend(struct device *device)
+ rc = bnxt_close(dev);
+ }
+ bnxt_hwrm_func_drv_unrgtr(bp);
++ bnxt_ptp_clear(bp);
+ pci_disable_device(bp->pdev);
+ bnxt_free_ctx_mem(bp);
+ rtnl_unlock();
+@@ -15993,6 +16003,10 @@ static int bnxt_resume(struct device *device)
+ goto resume_exit;
+ }
+
++ if (bnxt_ptp_init(bp)) {
++ kfree(bp->ptp_cfg);
++ bp->ptp_cfg = NULL;
++ }
+ bnxt_get_wol_settings(bp);
+ if (netif_running(dev)) {
+ rc = bnxt_open(dev);
+@@ -16171,8 +16185,12 @@ static void bnxt_io_resume(struct pci_dev *pdev)
+ rtnl_lock();
+
+ err = bnxt_hwrm_func_qcaps(bp);
+- if (!err && netif_running(netdev))
+- err = bnxt_open(netdev);
++ if (!err) {
++ if (netif_running(netdev))
++ err = bnxt_open(netdev);
++ else
++ err = bnxt_reserve_rings(bp, true);
++ }
+
+ if (!err)
+ netif_device_attach(netdev);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index ac06f4a4cf97ce..0f9441de2f2d7c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2837,19 +2837,24 @@ static int bnxt_get_link_ksettings(struct net_device *dev,
+ }
+
+ base->port = PORT_NONE;
+- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
++ if (media == BNXT_MEDIA_TP) {
+ base->port = PORT_TP;
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.advertising);
++ } else if (media == BNXT_MEDIA_KR) {
++ linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
++ lk_ksettings->link_modes.supported);
++ linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
++ lk_ksettings->link_modes.advertising);
+ } else {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.advertising);
+
+- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC)
++ if (media == BNXT_MEDIA_CR)
+ base->port = PORT_DA;
+ else
+ base->port = PORT_FIBRE;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index fa514be8765028..781225d3ba8ffc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -1024,7 +1024,7 @@ static void bnxt_ptp_free(struct bnxt *bp)
+ }
+ }
+
+-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
++int bnxt_ptp_init(struct bnxt *bp)
+ {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+ int rc;
+@@ -1047,7 +1047,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+
+ if (BNXT_PTP_USE_RTC(bp)) {
+ bnxt_ptp_timecounter_init(bp, false);
+- rc = bnxt_ptp_init_rtc(bp, phc_cfg);
++ rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured);
+ if (rc)
+ goto out;
+ } else {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index f322466ecad350..61e89bb2d2690c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -133,6 +133,7 @@ struct bnxt_ptp_cfg {
+ BNXT_PTP_MSG_PDELAY_REQ | \
+ BNXT_PTP_MSG_PDELAY_RESP)
+ u8 tx_tstamp_en:1;
++ u8 rtc_configured:1;
+ int rx_filter;
+ u32 tstamp_filters;
+
+@@ -180,6 +181,6 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
+ struct tx_ts_cmp *tscmp);
+ void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns);
+ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg);
+-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg);
++int bnxt_ptp_init(struct bnxt *bp);
+ void bnxt_ptp_clear(struct bnxt *bp);
+ #endif
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 0ec5f01551f980..8d228308620677 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -17805,6 +17805,9 @@ static int tg3_init_one(struct pci_dev *pdev,
+ } else
+ persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
+
++ if (tg3_asic_rev(tp) == ASIC_REV_57766)
++ persist_dma_mask = DMA_BIT_MASK(31);
++
+ /* Configure DMA attributes. */
+ if (dma_mask > DMA_BIT_MASK(32)) {
+ err = dma_set_mask(&pdev->dev, dma_mask);
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index c5bbc1b7524ed2..d4a8d66dbf7134 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -1208,10 +1208,10 @@ gve_adminq_configure_flow_rule(struct gve_priv *priv,
+ sizeof(struct gve_adminq_configure_flow_rule),
+ flow_rule_cmd);
+
+- if (err) {
++ if (err == -ETIME) {
+ dev_err(&priv->pdev->dev, "Timeout to configure the flow rule, trigger reset");
+ gve_reset(priv, true);
+- } else {
++ } else if (!err) {
+ priv->flow_rules_cache.rules_cache_synced = false;
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 1d0d2e526adb45..5f4517f1f4f4c1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -5303,7 +5303,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
+ }
+
+ flags_complete:
+- bitmap_xor(changed_flags, pf->flags, orig_flags, I40E_PF_FLAGS_NBITS);
++ bitmap_xor(changed_flags, new_flags, orig_flags, I40E_PF_FLAGS_NBITS);
+
+ if (test_bit(I40E_FLAG_FW_LLDP_DIS, changed_flags))
+ reset_needed = I40E_PF_RESET_AND_REBUILD_FLAG;
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 1c6ce0c4ed4ee9..d7fd4bc18f0219 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -1711,8 +1711,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+
+ /* copy Tx queue info from VF into VSI */
+ if (qpi->txq.ring_len > 0) {
+- vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;
+- vsi->tx_rings[i]->count = qpi->txq.ring_len;
++ vsi->tx_rings[q_idx]->dma = qpi->txq.dma_ring_addr;
++ vsi->tx_rings[q_idx]->count = qpi->txq.ring_len;
+
+ /* Disable any existing queue first */
+ if (ice_vf_vsi_dis_single_txq(vf, vsi, q_idx))
+@@ -1721,7 +1721,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ /* Configure a queue with the requested settings */
+ if (ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx)) {
+ dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure TX queue %d\n",
+- vf->vf_id, i);
++ vf->vf_id, q_idx);
+ goto error_param;
+ }
+ }
+@@ -1729,24 +1729,23 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ /* copy Rx queue info from VF into VSI */
+ if (qpi->rxq.ring_len > 0) {
+ u16 max_frame_size = ice_vc_get_max_frame_size(vf);
++ struct ice_rx_ring *ring = vsi->rx_rings[q_idx];
+ u32 rxdid;
+
+- vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
+- vsi->rx_rings[i]->count = qpi->rxq.ring_len;
++ ring->dma = qpi->rxq.dma_ring_addr;
++ ring->count = qpi->rxq.ring_len;
+
+ if (qpi->rxq.crc_disable)
+- vsi->rx_rings[q_idx]->flags |=
+- ICE_RX_FLAGS_CRC_STRIP_DIS;
++ ring->flags |= ICE_RX_FLAGS_CRC_STRIP_DIS;
+ else
+- vsi->rx_rings[q_idx]->flags &=
+- ~ICE_RX_FLAGS_CRC_STRIP_DIS;
++ ring->flags &= ~ICE_RX_FLAGS_CRC_STRIP_DIS;
+
+ if (qpi->rxq.databuffer_size != 0 &&
+ (qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
+ qpi->rxq.databuffer_size < 1024))
+ goto error_param;
+ vsi->rx_buf_len = qpi->rxq.databuffer_size;
+- vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;
++ ring->rx_buf_len = vsi->rx_buf_len;
+ if (qpi->rxq.max_pkt_size > max_frame_size ||
+ qpi->rxq.max_pkt_size < 64)
+ goto error_param;
+@@ -1761,7 +1760,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+
+ if (ice_vsi_cfg_single_rxq(vsi, q_idx)) {
+ dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure RX queue %d\n",
+- vf->vf_id, i);
++ vf->vf_id, q_idx);
+ goto error_param;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 27935c54b91bc7..8216f843a7cd5f 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -112,6 +112,11 @@ struct mac_ops *get_mac_ops(void *cgxd)
+ return ((struct cgx *)cgxd)->mac_ops;
+ }
+
++u32 cgx_get_fifo_len(void *cgxd)
++{
++ return ((struct cgx *)cgxd)->fifo_len;
++}
++
+ void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
+ {
+ writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) +
+@@ -209,6 +214,24 @@ u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id)
+ return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
+ }
+
++static u8 cgx_get_nix_resetbit(struct cgx *cgx)
++{
++ int first_lmac;
++ u8 p2x;
++
++ /* non 98XX silicons supports only NIX0 block */
++ if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX)
++ return CGX_NIX0_RESET;
++
++ first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac);
++ p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac);
++
++ if (p2x == CMR_P2X_SEL_NIX1)
++ return CGX_NIX1_RESET;
++ else
++ return CGX_NIX0_RESET;
++}
++
+ /* Ensure the required lock for event queue(where asynchronous events are
+ * posted) is acquired before calling this API. Else an asynchronous event(with
+ * latest link status) can reach the destination before this function returns
+@@ -501,7 +524,7 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
+ u8 num_lmacs;
+ u32 fifo_len;
+
+- fifo_len = cgx->mac_ops->fifo_len;
++ fifo_len = cgx->fifo_len;
+ num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);
+
+ switch (num_lmacs) {
+@@ -1719,6 +1742,8 @@ static int cgx_lmac_init(struct cgx *cgx)
+ lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
+ }
+
++ /* Start X2P reset on given MAC block */
++ cgx->mac_ops->mac_x2p_reset(cgx, true);
+ return cgx_lmac_verify_fwi_version(cgx);
+
+ err_bitmap_free:
+@@ -1764,7 +1789,7 @@ static void cgx_populate_features(struct cgx *cgx)
+ u64 cfg;
+
+ cfg = cgx_read(cgx, 0, CGX_CONST);
+- cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
++ cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
+ cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);
+
+ if (is_dev_rpm(cgx))
+@@ -1784,6 +1809,45 @@ static u8 cgx_get_rxid_mapoffset(struct cgx *cgx)
+ return 0x60;
+ }
+
++static void cgx_x2p_reset(void *cgxd, bool enable)
++{
++ struct cgx *cgx = cgxd;
++ int lmac_id;
++ u64 cfg;
++
++ if (enable) {
++ for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac)
++ cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false);
++
++ usleep_range(1000, 2000);
++
++ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
++ cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP;
++ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
++ } else {
++ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
++ cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP);
++ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
++ }
++}
++
++static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable)
++{
++ struct cgx *cgx = cgxd;
++ u64 cfg;
++
++ if (!is_lmac_valid(cgx, lmac_id))
++ return -ENODEV;
++
++ cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
++ if (enable)
++ cfg |= DATA_PKT_RX_EN;
++ else
++ cfg &= ~DATA_PKT_RX_EN;
++ cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
++ return 0;
++}
++
+ static struct mac_ops cgx_mac_ops = {
+ .name = "cgx",
+ .csr_offset = 0,
+@@ -1815,6 +1879,8 @@ static struct mac_ops cgx_mac_ops = {
+ .mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
+ .mac_reset = cgx_lmac_reset,
+ .mac_stats_reset = cgx_stats_reset,
++ .mac_x2p_reset = cgx_x2p_reset,
++ .mac_enadis_rx = cgx_enadis_rx,
+ };
+
+ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+index dc9ace30554af6..1cf12e5c7da873 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+@@ -32,6 +32,10 @@
+ #define CGX_LMAC_TYPE_MASK 0xF
+ #define CGXX_CMRX_INT 0x040
+ #define FW_CGX_INT BIT_ULL(1)
++#define CGXX_CMR_GLOBAL_CONFIG 0x08
++#define CGX_NIX0_RESET BIT_ULL(2)
++#define CGX_NIX1_RESET BIT_ULL(3)
++#define CGX_NSCI_DROP BIT_ULL(9)
+ #define CGXX_CMRX_INT_ENA_W1S 0x058
+ #define CGXX_CMRX_RX_ID_MAP 0x060
+ #define CGXX_CMRX_RX_STAT0 0x070
+@@ -185,4 +189,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
+ int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
+ int pfvf_idx);
+ int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
++u32 cgx_get_fifo_len(void *cgxd);
+ #endif /* CGX_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+index 9ffc6790c51307..6180e68e1765a7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+@@ -72,7 +72,6 @@ struct mac_ops {
+ u8 irq_offset;
+ u8 int_ena_bit;
+ u8 lmac_fwi;
+- u32 fifo_len;
+ bool non_contiguous_serdes_lane;
+ /* RPM & CGX differs in number of Receive/transmit stats */
+ u8 rx_stats_cnt;
+@@ -133,6 +132,8 @@ struct mac_ops {
+ int (*get_fec_stats)(void *cgxd, int lmac_id,
+ struct cgx_fec_stats_rsp *rsp);
+ int (*mac_stats_reset)(void *cgxd, int lmac_id);
++ void (*mac_x2p_reset)(void *cgxd, bool enable);
++ int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable);
+ };
+
+ struct cgx {
+@@ -142,6 +143,10 @@ struct cgx {
+ u8 lmac_count;
+ /* number of LMACs per MAC could be 4 or 8 */
+ u8 max_lmac_per_mac;
++ /* length of fifo varies depending on the number
++ * of LMACS
++ */
++ u32 fifo_len;
+ #define MAX_LMAC_COUNT 8
+ struct lmac *lmac_idmap[MAX_LMAC_COUNT];
+ struct work_struct cgx_cmd_work;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index 1b34cf9c97035a..2e9945446199ec 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -39,6 +39,8 @@ static struct mac_ops rpm_mac_ops = {
+ .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
+ .mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
++ .mac_x2p_reset = rpm_x2p_reset,
++ .mac_enadis_rx = rpm_enadis_rx,
+ };
+
+ static struct mac_ops rpm2_mac_ops = {
+@@ -72,6 +74,8 @@ static struct mac_ops rpm2_mac_ops = {
+ .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
+ .mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
++ .mac_x2p_reset = rpm_x2p_reset,
++ .mac_enadis_rx = rpm_enadis_rx,
+ };
+
+ bool is_dev_rpm2(void *rpmd)
+@@ -467,7 +471,7 @@ u8 rpm_get_lmac_type(void *rpmd, int lmac_id)
+ int err;
+
+ req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req);
+- err = cgx_fwi_cmd_generic(req, &resp, rpm, 0);
++ err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id);
+ if (!err)
+ return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp);
+ return err;
+@@ -480,7 +484,7 @@ u32 rpm_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ u8 num_lmacs;
+ u32 fifo_len;
+
+- fifo_len = rpm->mac_ops->fifo_len;
++ fifo_len = rpm->fifo_len;
+ num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);
+
+ switch (num_lmacs) {
+@@ -533,9 +537,9 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ */
+ max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
+ if (max_lmac > 4)
+- fifo_len = rpm->mac_ops->fifo_len / 2;
++ fifo_len = rpm->fifo_len / 2;
+ else
+- fifo_len = rpm->mac_ops->fifo_len;
++ fifo_len = rpm->fifo_len;
+
+ if (lmac_id < 4) {
+ num_lmacs = hweight8(lmac_info & 0xF);
+@@ -699,46 +703,51 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
+ if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE)
+ return 0;
+
++ /* latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are common
++ * for all counters. Acquire lock to ensure serialized reads
++ */
++ mutex_lock(&rpm->lock);
+ if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) {
+- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO);
+- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_corr_blks = (val_hi << 16 | val_lo);
+
+- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO);
+- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);
+
+ /* 50G uses 2 Physical serdes lines */
+ if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id ==
+ LMAC_MODE_50G_R) {
+- val_lo = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_VL1_CCW_LO);
+- val_hi = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_corr_blks += (val_hi << 16 | val_lo);
+
+- val_lo = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_VL1_NCCW_LO);
+- val_hi = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_uncorr_blks += (val_hi << 16 | val_lo);
+ }
+ } else {
+ /* enable RS-FEC capture */
+- cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL);
++ cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL);
+ cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id);
+- rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg);
++ rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);
+
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2);
+- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
+ rsp->fec_corr_blks = (val_hi << 32 | val_lo);
+
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3);
+- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
+ rsp->fec_uncorr_blks = (val_hi << 32 | val_lo);
+ }
++ mutex_unlock(&rpm->lock);
+
+ return 0;
+ }
+@@ -763,3 +772,41 @@ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
+
+ return 0;
+ }
++
++void rpm_x2p_reset(void *rpmd, bool enable)
++{
++ rpm_t *rpm = rpmd;
++ int lmac_id;
++ u64 cfg;
++
++ if (enable) {
++ for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac)
++ rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false);
++
++ usleep_range(1000, 2000);
++
++ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
++ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET);
++ } else {
++ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
++ cfg &= ~RPM_NIX0_RESET;
++ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg);
++ }
++}
++
++int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable)
++{
++ rpm_t *rpm = rpmd;
++ u64 cfg;
++
++ if (!is_lmac_valid(rpm, lmac_id))
++ return -ENODEV;
++
++ cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
++ if (enable)
++ cfg |= RPM_RX_EN;
++ else
++ cfg &= ~RPM_RX_EN;
++ rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
++ return 0;
++}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+index 34b11deb0f3c1d..b8d3972e096aed 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+@@ -17,6 +17,8 @@
+
+ /* Registers */
+ #define RPMX_CMRX_CFG 0x00
++#define RPMX_CMR_GLOBAL_CFG 0x08
++#define RPM_NIX0_RESET BIT_ULL(3)
+ #define RPMX_RX_TS_PREPEND BIT_ULL(22)
+ #define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17)
+ #define RPMX_CMRX_RX_ID_MAP 0x80
+@@ -84,16 +86,18 @@
+ /* FEC stats */
+ #define RPMX_MTI_STAT_STATN_CONTROL 0x10018
+ #define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
+-#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27)
++#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28)
+ #define RPMX_CMD_CLEAR_RX BIT_ULL(30)
+ #define RPMX_CMD_CLEAR_TX BIT_ULL(31)
++#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018
++#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000
+ #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050
+ #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058
+-#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618
+-#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620
+-#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628
+-#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630
+-#define RPMX_MTI_FCFECX_CW_HI 0x38638
++#define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))
+
+ /* CN10KB CSR Declaration */
+ #define RPM2_CMRX_SW_INT 0x1b0
+@@ -137,4 +141,6 @@ bool is_dev_rpm2(void *rpmd);
+ int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
+ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
+ int rpm_stats_reset(void *rpmd, int lmac_id);
++void rpm_x2p_reset(void *rpmd, bool enable);
++int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable);
+ #endif /* RPM_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index ac7ee3f3598c90..02ebfe71b6910e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1162,6 +1162,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
+ }
+
+ rvu_program_channels(rvu);
++ cgx_start_linkup(rvu);
+
+ err = rvu_mcs_init(rvu);
+ if (err) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index db2db0738ee42a..9ada11f114b1ec 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -967,6 +967,7 @@ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_
+ int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
+ void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
++void cgx_start_linkup(struct rvu *rvu);
+ int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
+ int type);
+ bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 266ecbc1b97a68..992fa0b82e8d2d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -349,6 +349,7 @@ static void rvu_cgx_wq_destroy(struct rvu *rvu)
+
+ int rvu_cgx_init(struct rvu *rvu)
+ {
++ struct mac_ops *mac_ops;
+ int cgx, err;
+ void *cgxd;
+
+@@ -375,6 +376,15 @@ int rvu_cgx_init(struct rvu *rvu)
+ if (err)
+ return err;
+
++ /* Clear X2P reset on all MAC blocks */
++ for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) {
++ cgxd = rvu_cgx_pdata(cgx, rvu);
++ if (!cgxd)
++ continue;
++ mac_ops = get_mac_ops(cgxd);
++ mac_ops->mac_x2p_reset(cgxd, false);
++ }
++
+ /* Register for CGX events */
+ err = cgx_lmac_event_handler_init(rvu);
+ if (err)
+@@ -382,10 +392,26 @@ int rvu_cgx_init(struct rvu *rvu)
+
+ mutex_init(&rvu->cgx_cfg_lock);
+
+- /* Ensure event handler registration is completed, before
+- * we turn on the links
+- */
+- mb();
++ return 0;
++}
++
++void cgx_start_linkup(struct rvu *rvu)
++{
++ unsigned long lmac_bmap;
++ struct mac_ops *mac_ops;
++ int cgx, lmac, err;
++ void *cgxd;
++
++ /* Enable receive on all LMACS */
++ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
++ cgxd = rvu_cgx_pdata(cgx, rvu);
++ if (!cgxd)
++ continue;
++ mac_ops = get_mac_ops(cgxd);
++ lmac_bmap = cgx_get_lmac_bmap(cgxd);
++ for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx)
++ mac_ops->mac_enadis_rx(cgxd, lmac, true);
++ }
+
+ /* Do link up for all CGX ports */
+ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
+@@ -398,8 +424,6 @@ int rvu_cgx_init(struct rvu *rvu)
+ "Link up process failed to start on cgx %d\n",
+ cgx);
+ }
+-
+- return 0;
+ }
+
+ int rvu_cgx_exit(struct rvu *rvu)
+@@ -923,13 +947,12 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
+
+ u32 rvu_cgx_get_fifolen(struct rvu *rvu)
+ {
+- struct mac_ops *mac_ops;
+- u32 fifo_len;
++ void *cgxd = rvu_first_cgx_pdata(rvu);
+
+- mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu));
+- fifo_len = mac_ops ? mac_ops->fifo_len : 0;
++ if (!cgxd)
++ return 0;
+
+- return fifo_len;
++ return cgx_get_fifo_len(cgxd);
+ }
+
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+index c1c99d7054f87f..7417087b6db597 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+@@ -203,6 +203,11 @@ int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf)
+
+ rsp = (struct nix_bandprof_alloc_rsp *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
++
+ if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) {
+ rc = -EIO;
+ goto out;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 87d5776e3b88e9..7510a918d942c0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1837,6 +1837,10 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf)
+ if (!rc) {
+ rsp = (struct nix_hw_info *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
+
+ /* HW counts VLAN insertion bytes (8 for double tag)
+ * irrespective of whether SQE is requesting to insert VLAN
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+index aa01110f04a339..294fba58b67095 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+@@ -315,6 +315,11 @@ int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf)
+ if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ rsp = (struct cgx_pfc_rsp *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ err = PTR_ERR(rsp);
++ goto unlock;
++ }
++
+ if (req->rx_pause != rsp->rx_pause || req->tx_pause != rsp->tx_pause) {
+ dev_warn(pfvf->dev,
+ "Failed to config PFC\n");
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+index 80d853b343f98f..2046dd0da00d85 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+@@ -28,6 +28,11 @@ static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac,
+ if (!err) {
+ rsp = (struct cgx_mac_addr_add_rsp *)
+ otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pf->mbox.lock);
++ return PTR_ERR(rsp);
++ }
++
+ *dmac_index = rsp->index;
+ }
+
+@@ -200,6 +205,10 @@ int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u32 bit_pos)
+
+ rsp = (struct cgx_mac_addr_update_rsp *)
+ otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
+
+ pf->flow_cfg->bmap_to_dmacindex[bit_pos] = rsp->index;
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 0db62eb0dab3f0..09317860e73827 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -343,6 +343,11 @@ static void otx2_get_pauseparam(struct net_device *netdev,
+ if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ rsp = (struct cgx_pause_frm_cfg *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return;
++ }
++
+ pause->rx_pause = rsp->rx_pause;
+ pause->tx_pause = rsp->tx_pause;
+ }
+@@ -1074,6 +1079,11 @@ static int otx2_set_fecparam(struct net_device *netdev,
+
+ rsp = (struct fec_mode *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+ 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ err = PTR_ERR(rsp);
++ goto end;
++ }
++
+ if (rsp->fec >= 0)
+ pfvf->linfo.fec = rsp->fec;
+ else
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+index 98c31a16c70b4f..58720a161ee24a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+@@ -119,6 +119,8 @@ int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp))
++ goto exit;
+
+ for (ent = 0; ent < rsp->count; ent++)
+ flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent];
+@@ -197,6 +199,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return PTR_ERR(rsp);
++ }
+
+ if (rsp->count != req->count) {
+ netdev_info(pfvf->netdev,
+@@ -232,6 +238,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+
+ frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &freq->hdr);
++ if (IS_ERR(frsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return PTR_ERR(frsp);
++ }
+
+ if (frsp->enable) {
+ pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT;
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index 1a59c952aa01c1..45f115e41857ba 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1394,18 +1394,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+
+ printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
+
+- clk = devm_clk_get(&pdev->dev, NULL);
++ clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(clk)) {
+- dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
++ dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
+ return -ENODEV;
+ }
+- clk_prepare_enable(clk);
+
+ dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
+- if (!dev) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!dev)
++ return -ENOMEM;
+
+ platform_set_drvdata(pdev, dev);
+ pep = netdev_priv(dev);
+@@ -1523,8 +1520,6 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ mdiobus_free(pep->smi_bus);
+ err_netdev:
+ free_netdev(dev);
+-err_clk:
+- clk_disable_unprepare(clk);
+ return err;
+ }
+
+@@ -1542,7 +1537,6 @@ static void pxa168_eth_remove(struct platform_device *pdev)
+ if (dev->phydev)
+ phy_disconnect(dev->phydev);
+
+- clk_disable_unprepare(pep->clk);
+ mdiobus_unregister(pep->smi_bus);
+ mdiobus_free(pep->smi_bus);
+ unregister_netdev(dev);
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+index a4809fe0fc2496..268489b15616fd 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+@@ -319,7 +319,6 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ free_irqs:
+ fbnic_free_irqs(fbd);
+ free_fbd:
+- pci_disable_device(pdev);
+ fbnic_devlink_free(fbd);
+
+ return err;
+@@ -349,7 +348,6 @@ static void fbnic_remove(struct pci_dev *pdev)
+ fbnic_fw_disable_mbx(fbd);
+ fbnic_free_irqs(fbd);
+
+- pci_disable_device(pdev);
+ fbnic_devlink_free(fbd);
+ }
+
+diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+index 7251121ab196e3..16eb3de60eb6df 100644
+--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
++++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+@@ -366,12 +366,13 @@ static void vcap_api_iterator_init_test(struct kunit *test)
+ struct vcap_typegroup typegroups[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 0, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_typegroup typegroups2[] = {
+ { .offset = 0, .width = 3, .value = 4, },
+ { .offset = 49, .width = 2, .value = 0, },
+ { .offset = 98, .width = 2, .value = 0, },
++ { }
+ };
+
+ vcap_iter_init(&iter, 52, typegroups, 86);
+@@ -399,6 +400,7 @@ static void vcap_api_iterator_next_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 0, },
+ { .offset = 196, .width = 2, .value = 0, },
+ { .offset = 245, .width = 1, .value = 0, },
++ { }
+ };
+ int idx;
+
+@@ -433,7 +435,7 @@ static void vcap_api_encode_typegroups_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 5, .value = 27, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_encode_typegroups(stream, 49, typegroups, false);
+@@ -463,6 +465,7 @@ static void vcap_api_encode_bit_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 1, .value = 0, },
++ { }
+ };
+
+ vcap_iter_init(&iter, 49, typegroups, 44);
+@@ -489,7 +492,7 @@ static void vcap_api_encode_field_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 5, .value = 27, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_field rf = {
+ .type = VCAP_FIELD_U32,
+@@ -538,7 +541,7 @@ static void vcap_api_encode_short_field_test(struct kunit *test)
+ { .offset = 0, .width = 3, .value = 7, },
+ { .offset = 21, .width = 2, .value = 3, },
+ { .offset = 42, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_field rf = {
+ .type = VCAP_FIELD_U32,
+@@ -608,7 +611,7 @@ static void vcap_api_encode_keyfield_test(struct kunit *test)
+ struct vcap_typegroup tgt[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_test_api_init(&admin);
+@@ -671,7 +674,7 @@ static void vcap_api_encode_max_keyfield_test(struct kunit *test)
+ struct vcap_typegroup tgt[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ u32 keyres[] = {
+ 0x928e8a84,
+@@ -732,7 +735,7 @@ static void vcap_api_encode_actionfield_test(struct kunit *test)
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 21, .width = 1, .value = 1, },
+ { .offset = 42, .width = 1, .value = 0, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_encode_actionfield(&rule, &caf, &rf, tgt);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index fdb4c773ec98ab..e897b49aa9e05e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -486,6 +486,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
+ plat_dat->pcs_exit = socfpga_dwmac_pcs_exit;
+ plat_dat->select_pcs = socfpga_dwmac_select_pcs;
+
++ plat_dat->riwt_off = 1;
++
+ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+index a4cf682dca650e..0ee73a265545c3 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+@@ -72,14 +72,6 @@ int txgbe_request_queue_irqs(struct wx *wx)
+ return err;
+ }
+
+-static int txgbe_request_gpio_irq(struct txgbe *txgbe)
+-{
+- txgbe->gpio_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO);
+- return request_threaded_irq(txgbe->gpio_irq, NULL,
+- txgbe_gpio_irq_handler,
+- IRQF_ONESHOT, "txgbe-gpio-irq", txgbe);
+-}
+-
+ static int txgbe_request_link_irq(struct txgbe *txgbe)
+ {
+ txgbe->link_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK);
+@@ -149,11 +141,6 @@ static irqreturn_t txgbe_misc_irq_thread_fn(int irq, void *data)
+ u32 eicr;
+
+ eicr = wx_misc_isb(wx, WX_ISB_MISC);
+- if (eicr & TXGBE_PX_MISC_GPIO) {
+- sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO);
+- handle_nested_irq(sub_irq);
+- nhandled++;
+- }
+ if (eicr & (TXGBE_PX_MISC_ETH_LK | TXGBE_PX_MISC_ETH_LKDN |
+ TXGBE_PX_MISC_ETH_AN)) {
+ sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK);
+@@ -179,7 +166,6 @@ static void txgbe_del_irq_domain(struct txgbe *txgbe)
+
+ void txgbe_free_misc_irq(struct txgbe *txgbe)
+ {
+- free_irq(txgbe->gpio_irq, txgbe);
+ free_irq(txgbe->link_irq, txgbe);
+ free_irq(txgbe->misc.irq, txgbe);
+ txgbe_del_irq_domain(txgbe);
+@@ -191,7 +177,7 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+ struct wx *wx = txgbe->wx;
+ int hwirq, err;
+
+- txgbe->misc.nirqs = 2;
++ txgbe->misc.nirqs = 1;
+ txgbe->misc.domain = irq_domain_add_simple(NULL, txgbe->misc.nirqs, 0,
+ &txgbe_misc_irq_domain_ops, txgbe);
+ if (!txgbe->misc.domain)
+@@ -216,20 +202,14 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+ if (err)
+ goto del_misc_irq;
+
+- err = txgbe_request_gpio_irq(txgbe);
+- if (err)
+- goto free_msic_irq;
+-
+ err = txgbe_request_link_irq(txgbe);
+ if (err)
+- goto free_gpio_irq;
++ goto free_msic_irq;
+
+ wx->misc_irq_domain = true;
+
+ return 0;
+
+-free_gpio_irq:
+- free_irq(txgbe->gpio_irq, txgbe);
+ free_msic_irq:
+ free_irq(txgbe->misc.irq, txgbe);
+ del_misc_irq:
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+index 93180225a6f14c..f7745026803643 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+@@ -82,7 +82,6 @@ static void txgbe_up_complete(struct wx *wx)
+ {
+ struct net_device *netdev = wx->netdev;
+
+- txgbe_reinit_gpio_intr(wx);
+ wx_control_hw(wx, true);
+ wx_configure_vectors(wx);
+
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+index 5f502265f0a63e..65449105358984 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+@@ -162,7 +162,7 @@ static struct phylink_pcs *txgbe_phylink_mac_select(struct phylink_config *confi
+ struct wx *wx = phylink_to_wx(config);
+ struct txgbe *txgbe = wx->priv;
+
+- if (interface == PHY_INTERFACE_MODE_10GBASER)
++ if (wx->media_type != sp_media_copper)
+ return &txgbe->xpcs->pcs;
+
+ return NULL;
+@@ -358,169 +358,8 @@ static int txgbe_gpio_direction_out(struct gpio_chip *chip, unsigned int offset,
+ return 0;
+ }
+
+-static void txgbe_gpio_irq_ack(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32(wx, WX_GPIO_EOI, BIT(hwirq));
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_gpio_irq_mask(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- gpiochip_disable_irq(gc, hwirq);
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), BIT(hwirq));
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_gpio_irq_unmask(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- gpiochip_enable_irq(gc, hwirq);
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), 0);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_toggle_trigger(struct gpio_chip *gc, unsigned int offset)
+-{
+- struct wx *wx = gpiochip_get_data(gc);
+- u32 pol, val;
+-
+- pol = rd32(wx, WX_GPIO_POLARITY);
+- val = rd32(wx, WX_GPIO_EXT);
+-
+- if (val & BIT(offset))
+- pol &= ~BIT(offset);
+- else
+- pol |= BIT(offset);
+-
+- wr32(wx, WX_GPIO_POLARITY, pol);
+-}
+-
+-static int txgbe_gpio_set_type(struct irq_data *d, unsigned int type)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- u32 level, polarity, mask;
+- unsigned long flags;
+-
+- mask = BIT(hwirq);
+-
+- if (type & IRQ_TYPE_LEVEL_MASK) {
+- level = 0;
+- irq_set_handler_locked(d, handle_level_irq);
+- } else {
+- level = mask;
+- irq_set_handler_locked(d, handle_edge_irq);
+- }
+-
+- if (type == IRQ_TYPE_EDGE_RISING || type == IRQ_TYPE_LEVEL_HIGH)
+- polarity = mask;
+- else
+- polarity = 0;
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+-
+- wr32m(wx, WX_GPIO_INTEN, mask, mask);
+- wr32m(wx, WX_GPIO_INTTYPE_LEVEL, mask, level);
+- if (type == IRQ_TYPE_EDGE_BOTH)
+- txgbe_toggle_trigger(gc, hwirq);
+- else
+- wr32m(wx, WX_GPIO_POLARITY, mask, polarity);
+-
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-
+- return 0;
+-}
+-
+-static const struct irq_chip txgbe_gpio_irq_chip = {
+- .name = "txgbe-gpio-irq",
+- .irq_ack = txgbe_gpio_irq_ack,
+- .irq_mask = txgbe_gpio_irq_mask,
+- .irq_unmask = txgbe_gpio_irq_unmask,
+- .irq_set_type = txgbe_gpio_set_type,
+- .flags = IRQCHIP_IMMUTABLE,
+- GPIOCHIP_IRQ_RESOURCE_HELPERS,
+-};
+-
+-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data)
+-{
+- struct txgbe *txgbe = data;
+- struct wx *wx = txgbe->wx;
+- irq_hw_number_t hwirq;
+- unsigned long gpioirq;
+- struct gpio_chip *gc;
+- unsigned long flags;
+-
+- gpioirq = rd32(wx, WX_GPIO_INTSTATUS);
+-
+- gc = txgbe->gpio;
+- for_each_set_bit(hwirq, &gpioirq, gc->ngpio) {
+- int gpio = irq_find_mapping(gc->irq.domain, hwirq);
+- struct irq_data *d = irq_get_irq_data(gpio);
+- u32 irq_type = irq_get_trigger_type(gpio);
+-
+- txgbe_gpio_irq_ack(d);
+- handle_nested_irq(gpio);
+-
+- if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) {
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- txgbe_toggle_trigger(gc, hwirq);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+- }
+- }
+-
+- return IRQ_HANDLED;
+-}
+-
+-void txgbe_reinit_gpio_intr(struct wx *wx)
+-{
+- struct txgbe *txgbe = wx->priv;
+- irq_hw_number_t hwirq;
+- unsigned long gpioirq;
+- struct gpio_chip *gc;
+- unsigned long flags;
+-
+- /* for gpio interrupt pending before irq enable */
+- gpioirq = rd32(wx, WX_GPIO_INTSTATUS);
+-
+- gc = txgbe->gpio;
+- for_each_set_bit(hwirq, &gpioirq, gc->ngpio) {
+- int gpio = irq_find_mapping(gc->irq.domain, hwirq);
+- struct irq_data *d = irq_get_irq_data(gpio);
+- u32 irq_type = irq_get_trigger_type(gpio);
+-
+- txgbe_gpio_irq_ack(d);
+-
+- if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) {
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- txgbe_toggle_trigger(gc, hwirq);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+- }
+- }
+-}
+-
+ static int txgbe_gpio_init(struct txgbe *txgbe)
+ {
+- struct gpio_irq_chip *girq;
+ struct gpio_chip *gc;
+ struct device *dev;
+ struct wx *wx;
+@@ -550,11 +389,6 @@ static int txgbe_gpio_init(struct txgbe *txgbe)
+ gc->direction_input = txgbe_gpio_direction_in;
+ gc->direction_output = txgbe_gpio_direction_out;
+
+- girq = &gc->irq;
+- gpio_irq_chip_set_chip(girq, &txgbe_gpio_irq_chip);
+- girq->default_type = IRQ_TYPE_NONE;
+- girq->handler = handle_bad_irq;
+-
+ ret = devm_gpiochip_add_data(dev, gc, wx);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+index 8a026d804fe24c..3938985355ed6c 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+@@ -4,8 +4,6 @@
+ #ifndef _TXGBE_PHY_H_
+ #define _TXGBE_PHY_H_
+
+-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data);
+-void txgbe_reinit_gpio_intr(struct wx *wx);
+ irqreturn_t txgbe_link_irq_handler(int irq, void *data);
+ int txgbe_init_phy(struct txgbe *txgbe);
+ void txgbe_remove_phy(struct txgbe *txgbe);
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+index 959102c4c3797e..8ea413a7abe9d3 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+@@ -75,8 +75,7 @@
+ #define TXGBE_PX_MISC_IEN_MASK \
+ (TXGBE_PX_MISC_ETH_LKDN | TXGBE_PX_MISC_DEV_RST | \
+ TXGBE_PX_MISC_ETH_EVENT | TXGBE_PX_MISC_ETH_LK | \
+- TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR | \
+- TXGBE_PX_MISC_GPIO)
++ TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR)
+
+ /* Port cfg registers */
+ #define TXGBE_CFG_PORT_ST 0x14404
+@@ -313,8 +312,7 @@ struct txgbe_nodes {
+ };
+
+ enum txgbe_misc_irqs {
+- TXGBE_IRQ_GPIO = 0,
+- TXGBE_IRQ_LINK,
++ TXGBE_IRQ_LINK = 0,
+ TXGBE_IRQ_MAX
+ };
+
+@@ -335,7 +333,6 @@ struct txgbe {
+ struct clk_lookup *clock;
+ struct clk *clk;
+ struct gpio_chip *gpio;
+- unsigned int gpio_irq;
+ unsigned int link_irq;
+
+ /* flow director */
+diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
+index 9d8f43b28aac5b..ea1f64596a85cf 100644
+--- a/drivers/net/mdio/mdio-ipq4019.c
++++ b/drivers/net/mdio/mdio-ipq4019.c
+@@ -352,8 +352,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
+ /* The platform resource is provided on the chipset IPQ5018 */
+ /* This resource is optional */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+- if (res)
++ if (res) {
+ priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(priv->eth_ldo_rdy))
++ return PTR_ERR(priv->eth_ldo_rdy);
++ }
+
+ bus->name = "ipq4019_mdio";
+ bus->read = ipq4019_mdio_read_c22;
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index f0d58092e7e961..3612b0633bd177 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -176,14 +176,13 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ return ret;
+ }
+
+- if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
++ if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ sa.rx = true;
+
+- if (xs->props.family == AF_INET6)
+- memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
+- else
+- memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
+- }
++ if (xs->props.family == AF_INET6)
++ memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
++ else
++ memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
+
+ /* the preparations worked, so save the info */
+ memcpy(&ipsec->sa[sa_idx], &sa, sizeof(sa));
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 8adf77e3557e7a..531b1b6a37d190 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1652,13 +1652,13 @@ static int lan78xx_set_wol(struct net_device *netdev,
+ struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+ int ret;
+
++ if (wol->wolopts & ~WAKE_ALL)
++ return -EINVAL;
++
+ ret = usb_autopm_get_interface(dev->intf);
+ if (ret < 0)
+ return ret;
+
+- if (wol->wolopts & ~WAKE_ALL)
+- return -EINVAL;
+-
+ pdata->wol = wol->wolopts;
+
+ device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
+@@ -2380,6 +2380,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
+ if (dev->chipid == ID_REV_CHIP_ID_7801_) {
+ if (phy_is_pseudo_fixed_link(phydev)) {
+ fixed_phy_unregister(phydev);
++ phy_device_free(phydev);
+ } else {
+ phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
+ 0xfffffff0);
+@@ -4246,8 +4247,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
+
+ phy_disconnect(net->phydev);
+
+- if (phy_is_pseudo_fixed_link(phydev))
++ if (phy_is_pseudo_fixed_link(phydev)) {
+ fixed_phy_unregister(phydev);
++ phy_device_free(phydev);
++ }
+
+ usb_scuttle_anchored_urbs(&dev->deferred);
+
+@@ -4414,29 +4417,30 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ period = ep_intr->desc.bInterval;
+ maxp = usb_maxpacket(dev->udev, dev->pipe_intr);
+- buf = kmalloc(maxp, GFP_KERNEL);
+- if (!buf) {
++
++ dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
++ if (!dev->urb_intr) {
+ ret = -ENOMEM;
+ goto out5;
+ }
+
+- dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
+- if (!dev->urb_intr) {
++ buf = kmalloc(maxp, GFP_KERNEL);
++ if (!buf) {
+ ret = -ENOMEM;
+- goto out6;
+- } else {
+- usb_fill_int_urb(dev->urb_intr, dev->udev,
+- dev->pipe_intr, buf, maxp,
+- intr_complete, dev, period);
+- dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
++ goto free_urbs;
+ }
+
++ usb_fill_int_urb(dev->urb_intr, dev->udev,
++ dev->pipe_intr, buf, maxp,
++ intr_complete, dev, period);
++ dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
++
+ dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);
+
+ /* Reject broken descriptors. */
+ if (dev->maxpacket == 0) {
+ ret = -ENODEV;
+- goto out6;
++ goto free_urbs;
+ }
+
+ /* driver requires remote-wakeup capability during autosuspend. */
+@@ -4444,7 +4448,7 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ ret = lan78xx_phy_init(dev);
+ if (ret < 0)
+- goto out7;
++ goto free_urbs;
+
+ ret = register_netdev(netdev);
+ if (ret != 0) {
+@@ -4466,10 +4470,8 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ out8:
+ phy_disconnect(netdev->phydev);
+-out7:
++free_urbs:
+ usb_free_urb(dev->urb_intr);
+-out6:
+- kfree(buf);
+ out5:
+ lan78xx_unbind(dev, intf);
+ out4:
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index f137c82f1c0f7f..0c011d8f5d4db2 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1076,6 +1076,7 @@ static const struct usb_device_id products[] = {
+ USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7),
+ .driver_info = (unsigned long)&qmi_wwan_info,
+ },
++ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0122)}, /* Quectel RG650V */
+ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */
+ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)}, /* Quectel EP06/EG06/EM06 */
+ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)}, /* Quectel EG12/EM12 */
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index a5612c799f5ef7..468c739740463d 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -10069,6 +10069,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3069) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3082) },
++ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3098) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x7205) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index a5da32e87106c7..e62b251405fc08 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -9121,7 +9121,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss1[
+ {6, {2633, 2925}, {1215, 1350}, {585, 650} },
+ {7, {2925, 3250}, {1350, 1500}, {650, 722} },
+ {8, {3510, 3900}, {1620, 1800}, {780, 867} },
+- {9, {3900, 4333}, {1800, 2000}, {780, 867} }
++ {9, {3900, 4333}, {1800, 2000}, {865, 960} }
+ };
+
+ /*MCS parameters with Nss = 2 */
+@@ -9136,7 +9136,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss2[
+ {6, {5265, 5850}, {2430, 2700}, {1170, 1300} },
+ {7, {5850, 6500}, {2700, 3000}, {1300, 1444} },
+ {8, {7020, 7800}, {3240, 3600}, {1560, 1733} },
+- {9, {7800, 8667}, {3600, 4000}, {1560, 1733} }
++ {9, {7800, 8667}, {3600, 4000}, {1730, 1920} }
+ };
+
+ static void ath10k_mac_get_rate_flags_ht(struct ath10k *ar, u32 rate, u8 nss, u8 mcs,
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index f477afd325deaf..7a22483b35cd98 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2180,6 +2180,9 @@ static int ath11k_qmi_request_device_info(struct ath11k_base *ab)
+ ab->mem = bar_addr_va;
+ ab->mem_len = resp.bar_size;
+
++ if (!ab->hw_params.ce_remap)
++ ab->mem_ce = ab->mem;
++
+ return 0;
+ out:
+ return ret;
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 61aa78d8bd8c8f..217eb57663f058 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -1202,10 +1202,16 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+ if (!skb)
+ continue;
+
+- skb_cb = ATH12K_SKB_CB(skb);
+- ar = skb_cb->ar;
+- if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+- wake_up(&ar->dp.tx_empty_waitq);
++ /* if we are unregistering, hw would've been destroyed and
++ * ar is no longer valid.
++ */
++ if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))) {
++ skb_cb = ATH12K_SKB_CB(skb);
++ ar = skb_cb->ar;
++
++ if (atomic_dec_and_test(&ar->dp.num_tx_pending))
++ wake_up(&ar->dp.tx_empty_waitq);
++ }
+
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr,
+ skb->len, DMA_TO_DEVICE);
+@@ -1241,6 +1247,7 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+ }
+
+ kfree(dp->spt_info);
++ dp->spt_info = NULL;
+ }
+
+ static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab)
+@@ -1276,8 +1283,10 @@ void ath12k_dp_free(struct ath12k_base *ab)
+
+ ath12k_dp_rx_reo_cmd_list_cleanup(ab);
+
+- for (i = 0; i < ab->hw_params->max_tx_ring; i++)
++ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
+ kfree(dp->tx_ring[i].tx_status);
++ dp->tx_ring[i].tx_status = NULL;
++ }
+
+ ath12k_dp_rx_free(ab);
+ /* Deinit any SOC level resource */
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index a3248d97753298..3608ede55b1eb7 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -917,7 +917,10 @@ void ath12k_mac_peer_cleanup_all(struct ath12k *ar)
+
+ spin_lock_bh(&ab->base_lock);
+ list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
+- ath12k_dp_rx_peer_tid_cleanup(ar, peer);
++ /* Skip Rx TID cleanup for self peer */
++ if (peer->sta)
++ ath12k_dp_rx_peer_tid_cleanup(ar, peer);
++
+ list_del(&peer->list);
+ kfree(peer);
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/wow.c b/drivers/net/wireless/ath/ath12k/wow.c
+index 9b8684abbe40ae..3624180b25b970 100644
+--- a/drivers/net/wireless/ath/ath12k/wow.c
++++ b/drivers/net/wireless/ath/ath12k/wow.c
+@@ -191,7 +191,7 @@ ath12k_wow_convert_8023_to_80211(struct ath12k *ar,
+ memcpy(bytemask, eth_bytemask, eth_pat_len);
+
+ pat_len = eth_pat_len;
+- } else if (eth_pkt_ofs + eth_pat_len < prot_ofs) {
++ } else if (size_add(eth_pkt_ofs, eth_pat_len) < prot_ofs) {
+ memcpy(pat, eth_pat, ETH_ALEN - eth_pkt_ofs);
+ memcpy(bytemask, eth_bytemask, ETH_ALEN - eth_pkt_ofs);
+
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index eb631fd3336d8d..b5257b2b4aa527 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -294,6 +294,9 @@ int htc_connect_service(struct htc_target *target,
+ return -ETIMEDOUT;
+ }
+
++ if (target->conn_rsp_epid < 0 || target->conn_rsp_epid >= ENDPOINT_MAX)
++ return -EINVAL;
++
+ *conn_rsp_epid = target->conn_rsp_epid;
+ return 0;
+ err:
+diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
+index f29ac6de713994..19702b6f09c329 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx.c
++++ b/drivers/net/wireless/ath/wil6210/txrx.c
+@@ -306,7 +306,7 @@ static void wil_rx_add_radiotap_header(struct wil6210_priv *wil,
+ struct sk_buff *skb)
+ {
+ struct wil6210_rtap {
+- struct ieee80211_radiotap_header rthdr;
++ struct ieee80211_radiotap_header_fixed rthdr;
+ /* fields should be in the order of bits in rthdr.it_present */
+ /* flags */
+ u8 flags;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+index fe4f657561056c..af930e34c21f8a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+@@ -110,9 +110,8 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ }
+ strreplace(board_type, '/', '-');
+ settings->board_type = board_type;
+-
+- of_node_put(root);
+ }
++ of_node_put(root);
+
+ if (!np || !of_device_is_compatible(np, "brcm,bcm4329-fmac"))
+ return;
+diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
+index b6636002c7d220..fe75941c584d15 100644
+--- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c
++++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
+@@ -2518,7 +2518,7 @@ static void isr_rx_monitor(struct ipw2100_priv *priv, int i,
+ * to build this manually element by element, we can write it much
+ * more efficiently than we can parse it. ORDER MATTERS HERE */
+ struct ipw_rt_hdr {
+- struct ieee80211_radiotap_header rt_hdr;
++ struct ieee80211_radiotap_header_fixed rt_hdr;
+ s8 rt_dbmsignal; /* signal in dbM, kluged to signed */
+ } *ipw_rt;
+
+diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.h b/drivers/net/wireless/intel/ipw2x00/ipw2200.h
+index 8ebf09121e173c..226286cb7eb822 100644
+--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.h
++++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.h
+@@ -1143,7 +1143,7 @@ struct ipw_prom_priv {
+ * structure is provided regardless of any bits unset.
+ */
+ struct ipw_rt_hdr {
+- struct ieee80211_radiotap_header rt_hdr;
++ struct ieee80211_radiotap_header_fixed rt_hdr;
+ u64 rt_tsf; /* TSF */ /* XXX */
+ u8 rt_flags; /* radiotap packet flags */
+ u8 rt_rate; /* rate in 500kb/s */
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945.c b/drivers/net/wireless/intel/iwlegacy/3945.c
+index 1fab7849f56d1a..a773939b8c2aac 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945.c
+@@ -566,7 +566,7 @@ il3945_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb)
+ if (!(rx_end->status & RX_RES_STATUS_NO_CRC32_ERROR) ||
+ !(rx_end->status & RX_RES_STATUS_NO_RXE_OVERFLOW)) {
+ D_RX("Bad CRC or FIFO: 0x%08X.\n", rx_end->status);
+- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++ return;
+ }
+
+ /* Convert 3945's rssi indicator to dBm */
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index 1600c344edbb20..11c15e0bc779a4 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -664,7 +664,7 @@ il4965_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb)
+ if (!(rx_pkt_status & RX_RES_STATUS_NO_CRC32_ERROR) ||
+ !(rx_pkt_status & RX_RES_STATUS_NO_RXE_OVERFLOW)) {
+ D_RX("Bad CRC or FIFO: 0x%08X.\n", le32_to_cpu(rx_pkt_status));
+- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++ return;
+ }
+
+ /* This will be used in several places later */
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index a7cea0a55b35af..0bc32291815e1b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -429,38 +429,28 @@ int iwl_acpi_get_eckv(struct iwl_fw_runtime *fwrt, u32 *extl_clk)
+ return ret;
+ }
+
+-static int iwl_acpi_sar_set_profile(union acpi_object *table,
+- struct iwl_sar_profile *profile,
+- bool enabled, u8 num_chains,
+- u8 num_sub_bands)
++static int
++iwl_acpi_parse_chains_table(union acpi_object *table,
++ struct iwl_sar_profile_chain *chains,
++ u8 num_chains, u8 num_sub_bands)
+ {
+- int i, j, idx = 0;
+-
+- /*
+- * The table from ACPI is flat, but we store it in a
+- * structured array.
+- */
+- for (i = 0; i < BIOS_SAR_MAX_CHAINS_PER_PROFILE; i++) {
+- for (j = 0; j < BIOS_SAR_MAX_SUB_BANDS_NUM; j++) {
++ for (u8 chain = 0; chain < num_chains; chain++) {
++ for (u8 subband = 0; subband < BIOS_SAR_MAX_SUB_BANDS_NUM;
++ subband++) {
+ /* if we don't have the values, use the default */
+- if (i >= num_chains || j >= num_sub_bands) {
+- profile->chains[i].subbands[j] = 0;
++ if (subband >= num_sub_bands) {
++ chains[chain].subbands[subband] = 0;
++ } else if (table->type != ACPI_TYPE_INTEGER ||
++ table->integer.value > U8_MAX) {
++ return -EINVAL;
+ } else {
+- if (table[idx].type != ACPI_TYPE_INTEGER ||
+- table[idx].integer.value > U8_MAX)
+- return -EINVAL;
+-
+- profile->chains[i].subbands[j] =
+- table[idx].integer.value;
+-
+- idx++;
++ chains[chain].subbands[subband] =
++ table->integer.value;
++ table++;
+ }
+ }
+ }
+
+- /* Only if all values were valid can the profile be enabled */
+- profile->enabled = enabled;
+-
+ return 0;
+ }
+
+@@ -543,9 +533,11 @@ int iwl_acpi_get_wrds_table(struct iwl_fw_runtime *fwrt)
+ /* The profile from WRDS is officially profile 1, but goes
+ * into sar_profiles[0] (because we don't have a profile 0).
+ */
+- ret = iwl_acpi_sar_set_profile(table, &fwrt->sar_profiles[0],
+- flags & IWL_SAR_ENABLE_MSK,
+- num_chains, num_sub_bands);
++ ret = iwl_acpi_parse_chains_table(table, fwrt->sar_profiles[0].chains,
++ num_chains, num_sub_bands);
++ if (!ret && flags & IWL_SAR_ENABLE_MSK)
++ fwrt->sar_profiles[0].enabled = true;
++
+ out_free:
+ kfree(data);
+ return ret;
+@@ -557,7 +549,7 @@ int iwl_acpi_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ bool enabled;
+ int i, n_profiles, tbl_rev, pos;
+ int ret = 0;
+- u8 num_chains, num_sub_bands;
++ u8 num_sub_bands;
+
+ data = iwl_acpi_get_object(fwrt->dev, ACPI_EWRD_METHOD);
+ if (IS_ERR(data))
+@@ -573,7 +565,6 @@ int iwl_acpi_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ goto out_free;
+ }
+
+- num_chains = ACPI_SAR_NUM_CHAINS_REV2;
+ num_sub_bands = ACPI_SAR_NUM_SUB_BANDS_REV2;
+
+ goto read_table;
+@@ -589,7 +580,6 @@ int iwl_acpi_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ goto out_free;
+ }
+
+- num_chains = ACPI_SAR_NUM_CHAINS_REV1;
+ num_sub_bands = ACPI_SAR_NUM_SUB_BANDS_REV1;
+
+ goto read_table;
+@@ -605,7 +595,6 @@ int iwl_acpi_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ goto out_free;
+ }
+
+- num_chains = ACPI_SAR_NUM_CHAINS_REV0;
+ num_sub_bands = ACPI_SAR_NUM_SUB_BANDS_REV0;
+
+ goto read_table;
+@@ -637,23 +626,54 @@ int iwl_acpi_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ /* the tables start at element 3 */
+ pos = 3;
+
++ BUILD_BUG_ON(ACPI_SAR_NUM_CHAINS_REV0 != ACPI_SAR_NUM_CHAINS_REV1);
++ BUILD_BUG_ON(ACPI_SAR_NUM_CHAINS_REV2 != 2 * ACPI_SAR_NUM_CHAINS_REV0);
++
++ /* parse non-cdb chains for all profiles */
+ for (i = 0; i < n_profiles; i++) {
+ union acpi_object *table = &wifi_pkg->package.elements[pos];
++
+ /* The EWRD profiles officially go from 2 to 4, but we
+ * save them in sar_profiles[1-3] (because we don't
+ * have profile 0). So in the array we start from 1.
+ */
+- ret = iwl_acpi_sar_set_profile(table,
+- &fwrt->sar_profiles[i + 1],
+- enabled, num_chains,
+- num_sub_bands);
++ ret = iwl_acpi_parse_chains_table(table,
++ fwrt->sar_profiles[i + 1].chains,
++ ACPI_SAR_NUM_CHAINS_REV0,
++ num_sub_bands);
+ if (ret < 0)
+- break;
++ goto out_free;
+
+ /* go to the next table */
+- pos += num_chains * num_sub_bands;
++ pos += ACPI_SAR_NUM_CHAINS_REV0 * num_sub_bands;
+ }
+
++ /* non-cdb table revisions */
++ if (tbl_rev < 2)
++ goto set_enabled;
++
++ /* parse cdb chains for all profiles */
++ for (i = 0; i < n_profiles; i++) {
++ struct iwl_sar_profile_chain *chains;
++ union acpi_object *table;
++
++ table = &wifi_pkg->package.elements[pos];
++ chains = &fwrt->sar_profiles[i + 1].chains[ACPI_SAR_NUM_CHAINS_REV0];
++ ret = iwl_acpi_parse_chains_table(table,
++ chains,
++ ACPI_SAR_NUM_CHAINS_REV0,
++ num_sub_bands);
++ if (ret < 0)
++ goto out_free;
++
++ /* go to the next table */
++ pos += ACPI_SAR_NUM_CHAINS_REV0 * num_sub_bands;
++ }
++
++set_enabled:
++ for (i = 0; i < n_profiles; i++)
++ fwrt->sar_profiles[i + 1].enabled = enabled;
++
+ out_free:
+ kfree(data);
+ return ret;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/init.c b/drivers/net/wireless/intel/iwlwifi/fw/init.c
+index d8b083be5b6b5e..de87e0e3e0725d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/init.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/init.c
+@@ -39,10 +39,12 @@ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
+ }
+ IWL_EXPORT_SYMBOL(iwl_fw_runtime_init);
+
++/* Assumes the appropriate lock is held by the caller */
+ void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt)
+ {
+ iwl_fw_suspend_timestamp(fwrt);
+- iwl_dbg_tlv_time_point(fwrt, IWL_FW_INI_TIME_POINT_HOST_D3_START, NULL);
++ iwl_dbg_tlv_time_point_sync(fwrt, IWL_FW_INI_TIME_POINT_HOST_D3_START,
++ NULL);
+ }
+ IWL_EXPORT_SYMBOL(iwl_fw_runtime_suspend);
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 99a541d442bb18..f4f87edb86e5c1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -1398,7 +1398,9 @@ int iwl_mvm_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
+
+ iwl_mvm_pause_tcm(mvm, true);
+
++ mutex_lock(&mvm->mutex);
+ iwl_fw_runtime_suspend(&mvm->fwrt);
++ mutex_unlock(&mvm->mutex);
+
+ return __iwl_mvm_suspend(hw, wowlan, false);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 63b2c6fe3f8abd..e9979f8a8827ee 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1236,6 +1236,7 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm)
+ fast_resume = mvm->fast_resume;
+
+ if (fast_resume) {
++ iwl_mvm_mei_device_state(mvm, true);
+ ret = iwl_mvm_fast_resume(mvm);
+ if (ret) {
+ iwl_mvm_stop_device(mvm);
+@@ -1376,10 +1377,13 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm, bool suspend)
+ iwl_mvm_rm_aux_sta(mvm);
+
+ if (suspend &&
+- mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
++ mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_22000) {
+ iwl_mvm_fast_suspend(mvm);
+- else
++ /* From this point on, we won't touch the device */
++ iwl_mvm_mei_device_state(mvm, false);
++ } else {
+ iwl_mvm_stop_device(mvm);
++ }
+
+ iwl_mvm_async_handlers_purge(mvm);
+ /* async_handlers_list is empty and will stay empty: HW is stopped */
+diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c
+index d33a994906a7bb..27f44a9f0bc1f9 100644
+--- a/drivers/net/wireless/intersil/p54/p54spi.c
++++ b/drivers/net/wireless/intersil/p54/p54spi.c
+@@ -624,7 +624,7 @@ static int p54spi_probe(struct spi_device *spi)
+ gpio_direction_input(p54spi_gpio_irq);
+
+ ret = request_irq(gpio_to_irq(p54spi_gpio_irq),
+- p54spi_interrupt, 0, "p54spi",
++ p54spi_interrupt, IRQF_NO_AUTOEN, "p54spi",
+ priv->spi);
+ if (ret < 0) {
+ dev_err(&priv->spi->dev, "request_irq() failed");
+@@ -633,8 +633,6 @@ static int p54spi_probe(struct spi_device *spi)
+
+ irq_set_irq_type(gpio_to_irq(p54spi_gpio_irq), IRQ_TYPE_EDGE_RISING);
+
+- disable_irq(gpio_to_irq(p54spi_gpio_irq));
+-
+ INIT_WORK(&priv->work, p54spi_work);
+ init_completion(&priv->fw_comp);
+ INIT_LIST_HEAD(&priv->tx_pending);
+diff --git a/drivers/net/wireless/marvell/libertas/radiotap.h b/drivers/net/wireless/marvell/libertas/radiotap.h
+index 1ed5608d353ff5..d543bfe739dcb9 100644
+--- a/drivers/net/wireless/marvell/libertas/radiotap.h
++++ b/drivers/net/wireless/marvell/libertas/radiotap.h
+@@ -2,7 +2,7 @@
+ #include <net/ieee80211_radiotap.h>
+
+ struct tx_radiotap_hdr {
+- struct ieee80211_radiotap_header hdr;
++ struct ieee80211_radiotap_header_fixed hdr;
+ u8 rate;
+ u8 txpower;
+ u8 rts_retries;
+@@ -31,7 +31,7 @@ struct tx_radiotap_hdr {
+ #define IEEE80211_FC_DSTODS 0x0300
+
+ struct rx_radiotap_hdr {
+- struct ieee80211_radiotap_header hdr;
++ struct ieee80211_radiotap_header_fixed hdr;
+ u8 flags;
+ u8 rate;
+ u8 antsignal;
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 5b072120e3f213..324efd55c8e4fe 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -842,7 +842,7 @@ struct mwifiex_ietypes_chanstats {
+ struct mwifiex_ie_types_wildcard_ssid_params {
+ struct mwifiex_ie_types_header header;
+ u8 max_ssid_length;
+- u8 ssid[1];
++ u8 ssid[];
+ } __packed;
+
+ #define TSF_DATA_SIZE 8
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index d99127dc466ecb..6c60a4c21a3128 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -1633,7 +1633,8 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ }
+
+ ret = devm_request_irq(dev, adapter->irq_wakeup,
+- mwifiex_irq_wakeup_handler, IRQF_TRIGGER_LOW,
++ mwifiex_irq_wakeup_handler,
++ IRQF_TRIGGER_LOW | IRQF_NO_AUTOEN,
+ "wifi_wake", adapter);
+ if (ret) {
+ dev_err(dev, "Failed to request irq_wakeup %d (%d)\n",
+@@ -1641,7 +1642,6 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ goto err_exit;
+ }
+
+- disable_irq(adapter->irq_wakeup);
+ if (device_init_wakeup(dev, true)) {
+ dev_err(dev, "fail to init wakeup for mwifiex\n");
+ goto err_exit;
+diff --git a/drivers/net/wireless/microchip/wilc1000/mon.c b/drivers/net/wireless/microchip/wilc1000/mon.c
+index 03b7229a0ff5aa..c3d27aaec29742 100644
+--- a/drivers/net/wireless/microchip/wilc1000/mon.c
++++ b/drivers/net/wireless/microchip/wilc1000/mon.c
+@@ -7,12 +7,12 @@
+ #include "cfg80211.h"
+
+ struct wilc_wfi_radiotap_hdr {
+- struct ieee80211_radiotap_header hdr;
++ struct ieee80211_radiotap_header_fixed hdr;
+ u8 rate;
+ } __packed;
+
+ struct wilc_wfi_radiotap_cb_hdr {
+- struct ieee80211_radiotap_header hdr;
++ struct ieee80211_radiotap_header_fixed hdr;
+ u8 rate;
+ u8 dump;
+ u16 tx_flags;
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 9ecf3fb29b558f..8bc127c5a538cb 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -608,6 +608,9 @@ static int wilc_mac_open(struct net_device *ndev)
+ return ret;
+ }
+
++ wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
++ vif->idx);
++
+ netdev_dbg(ndev, "Mac address: %pM\n", ndev->dev_addr);
+ ret = wilc_set_mac_address(vif, ndev->dev_addr);
+ if (ret) {
+@@ -618,9 +621,6 @@ static int wilc_mac_open(struct net_device *ndev)
+ return ret;
+ }
+
+- wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
+- vif->idx);
+-
+ mgmt_regs.interface_stypes = vif->mgmt_reg_stypes;
+ /* so we detect a change */
+ vif->mgmt_reg_stypes = 0;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index 043fa364e70148..a7e74ece2b4d1d 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -5058,10 +5058,12 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ }
+
+ if (changed & BSS_CHANGED_BEACON_ENABLED) {
+- if (bss_conf->enable_beacon)
++ if (bss_conf->enable_beacon) {
+ rtl8xxxu_start_tx_beacon(priv);
+- else
++ schedule_delayed_work(&priv->update_beacon_work, 0);
++ } else {
+ rtl8xxxu_stop_tx_beacon(priv);
++ }
+ }
+
+ if (changed & BSS_CHANGED_BEACON)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/efuse.c b/drivers/net/wireless/realtek/rtlwifi/efuse.c
+index 82cf5fb5175fef..6518e77b89f578 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/efuse.c
++++ b/drivers/net/wireless/realtek/rtlwifi/efuse.c
+@@ -162,10 +162,19 @@ void efuse_write_1byte(struct ieee80211_hw *hw, u16 address, u8 value)
+ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
+ {
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
++ u16 max_attempts = 10000;
+ u32 value32;
+ u8 readbyte;
+ u16 retry;
+
++ /*
++ * In case of USB devices, transfer speeds are limited, hence
++ * efuse I/O reads could be (way) slower. So, decrease (a lot)
++ * the read attempts in case of failures.
++ */
++ if (rtlpriv->rtlhal.interface == INTF_USB)
++ max_attempts = 10;
++
+ rtl_write_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 1,
+ (_offset & 0xff));
+ readbyte = rtl_read_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 2);
+@@ -178,7 +187,7 @@ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
+
+ retry = 0;
+ value32 = rtl_read_dword(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]);
+- while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) {
++ while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < max_attempts)) {
+ value32 = rtl_read_dword(rtlpriv,
+ rtlpriv->cfg->maps[EFUSE_CTRL]);
+ retry++;
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index 24929ef534e08b..3f4754b7c9d091 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -2449,6 +2449,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
+ if (ver->fcxmreg == 7) {
+ sz = struct_size(v7, regs, n);
+ v7 = kmalloc(sz, GFP_KERNEL);
++ if (!v7)
++ return;
+ v7->type = RPT_EN_MREG;
+ v7->fver = ver->fcxmreg;
+ v7->len = n;
+@@ -2463,6 +2465,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
+ } else {
+ sz = struct_size(v1, regs, n);
+ v1 = kmalloc(sz, GFP_KERNEL);
++ if (!v1)
++ return;
+ v1->fver = ver->fcxmreg;
+ v1->reg_num = n;
+ memcpy(v1->regs, chip->mon_reg, flex_array_size(v1, regs, n));
+diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
+index e7198520bdffc7..64441c8bc4606c 100644
+--- a/drivers/net/wireless/silabs/wfx/main.c
++++ b/drivers/net/wireless/silabs/wfx/main.c
+@@ -480,10 +480,23 @@ static int __init wfx_core_init(void)
+ {
+ int ret = 0;
+
+- if (IS_ENABLED(CONFIG_SPI))
++ if (IS_ENABLED(CONFIG_SPI)) {
+ ret = spi_register_driver(&wfx_spi_driver);
+- if (IS_ENABLED(CONFIG_MMC) && !ret)
++ if (ret)
++ goto out;
++ }
++ if (IS_ENABLED(CONFIG_MMC)) {
+ ret = sdio_register_driver(&wfx_sdio_driver);
++ if (ret)
++ goto unregister_spi;
++ }
++
++ return 0;
++
++unregister_spi:
++ if (IS_ENABLED(CONFIG_SPI))
++ spi_unregister_driver(&wfx_spi_driver);
++out:
+ return ret;
+ }
+ module_init(wfx_core_init);
+diff --git a/drivers/net/wireless/st/cw1200/cw1200_spi.c b/drivers/net/wireless/st/cw1200/cw1200_spi.c
+index 4f346fb977a989..862964a8cc8761 100644
+--- a/drivers/net/wireless/st/cw1200/cw1200_spi.c
++++ b/drivers/net/wireless/st/cw1200/cw1200_spi.c
+@@ -450,7 +450,7 @@ static int __maybe_unused cw1200_spi_suspend(struct device *dev)
+ {
+ struct hwbus_priv *self = spi_get_drvdata(to_spi_device(dev));
+
+- if (!cw1200_can_suspend(self->core))
++ if (self && !cw1200_can_suspend(self->core))
+ return -EAGAIN;
+
+ /* XXX notify host that we have to keep CW1200 powered on? */
+diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
+index 5fe9e4e261429d..52468e16c3df61 100644
+--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
++++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
+@@ -763,7 +763,7 @@ static const struct rhashtable_params hwsim_rht_params = {
+ };
+
+ struct hwsim_radiotap_hdr {
+- struct ieee80211_radiotap_header hdr;
++ struct ieee80211_radiotap_header_fixed hdr;
+ __le64 rt_tsft;
+ u8 rt_flags;
+ u8 rt_rate;
+@@ -772,7 +772,7 @@ struct hwsim_radiotap_hdr {
+ } __packed;
+
+ struct hwsim_radiotap_ack_hdr {
+- struct ieee80211_radiotap_header hdr;
++ struct ieee80211_radiotap_header_fixed hdr;
+ u8 rt_flags;
+ u8 pad;
+ __le16 rt_channel;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 128932c849a1a4..f49431cbc8dfcd 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4551,6 +4551,11 @@ EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set);
+
+ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
+ {
++ /*
++ * As we're about to destroy the queue and free tagset
++ * we can not have keep-alive work running.
++ */
++ nvme_stop_keep_alive(ctrl);
+ blk_mq_destroy_queue(ctrl->admin_q);
+ blk_put_queue(ctrl->admin_q);
+ if (ctrl->ops->flags & NVME_F_FABRICS) {
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index a43982aaa40d74..07fcf98540c565 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -165,7 +165,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ if (!ns->head->disk)
+ continue;
+ kblockd_schedule_work(&ns->head->requeue_work);
+@@ -209,7 +210,8 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ nvme_mpath_clear_current_path(ns);
+ kblockd_schedule_work(&ns->head->requeue_work);
+ }
+@@ -224,7 +226,8 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&head->srcu);
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (capacity != get_capacity(ns->disk))
+ clear_bit(NVME_NS_READY, &ns->flags);
+ }
+@@ -257,7 +260,8 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+ int found_distance = INT_MAX, fallback_distance = INT_MAX, distance;
+ struct nvme_ns *found = NULL, *fallback = NULL, *ns;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+@@ -356,7 +360,8 @@ static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
+ unsigned int min_depth_opt = UINT_MAX, min_depth_nonopt = UINT_MAX;
+ unsigned int depth;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+@@ -421,7 +426,11 @@ static bool nvme_available_path(struct nvme_ns_head *head)
+ {
+ struct nvme_ns *ns;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
++ return NULL;
++
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
+ continue;
+ switch (nvme_ctrl_state(ns->ctrl)) {
+@@ -783,7 +792,8 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ return 0;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ unsigned nsid;
+ again:
+ nsid = le32_to_cpu(desc->nsids[n]);
+@@ -995,8 +1005,7 @@ void nvme_mpath_shutdown_disk(struct nvme_ns_head *head)
+ {
+ if (!head->disk)
+ return;
+- kblockd_schedule_work(&head->requeue_work);
+- if (test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
++ if (test_and_clear_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
+ nvme_cdev_del(&head->cdev, &head->cdev_device);
+ /*
+ * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared
+@@ -1006,6 +1015,12 @@ void nvme_mpath_shutdown_disk(struct nvme_ns_head *head)
+ kblockd_schedule_work(&head->requeue_work);
+ del_gendisk(head->disk);
+ }
++ /*
++ * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared
++ * to allow multipath to fail all I/O.
++ */
++ synchronize_srcu(&head->srcu);
++ kblockd_schedule_work(&head->requeue_work);
+ }
+
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4b9fda0b1d9a33..55af3dfbc2607b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -153,6 +153,7 @@ struct nvme_dev {
+ /* host memory buffer support: */
+ u64 host_mem_size;
+ u32 nr_host_mem_descs;
++ u32 host_mem_descs_size;
+ dma_addr_t host_mem_descs_dma;
+ struct nvme_host_mem_buf_desc *host_mem_descs;
+ void **host_mem_desc_bufs;
+@@ -904,9 +905,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+
+ static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+ {
++ struct request *req;
++
+ spin_lock(&nvmeq->sq_lock);
+- while (!rq_list_empty(*rqlist)) {
+- struct request *req = rq_list_pop(rqlist);
++ while ((req = rq_list_pop(rqlist))) {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+ nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+@@ -931,31 +933,25 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+
+ static void nvme_queue_rqs(struct request **rqlist)
+ {
+- struct request *req, *next, *prev = NULL;
++ struct request *submit_list = NULL;
+ struct request *requeue_list = NULL;
++ struct request **requeue_lastp = &requeue_list;
++ struct nvme_queue *nvmeq = NULL;
++ struct request *req;
+
+- rq_list_for_each_safe(rqlist, req, next) {
+- struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+-
+- if (!nvme_prep_rq_batch(nvmeq, req)) {
+- /* detach 'req' and add to remainder list */
+- rq_list_move(rqlist, &requeue_list, req, prev);
+-
+- req = prev;
+- if (!req)
+- continue;
+- }
++ while ((req = rq_list_pop(rqlist))) {
++ if (nvmeq && nvmeq != req->mq_hctx->driver_data)
++ nvme_submit_cmds(nvmeq, &submit_list);
++ nvmeq = req->mq_hctx->driver_data;
+
+- if (!next || req->mq_hctx != next->mq_hctx) {
+- /* detach rest of list, and submit */
+- req->rq_next = NULL;
+- nvme_submit_cmds(nvmeq, rqlist);
+- *rqlist = next;
+- prev = NULL;
+- } else
+- prev = req;
++ if (nvme_prep_rq_batch(nvmeq, req))
++ rq_list_add(&submit_list, req); /* reverse order */
++ else
++ rq_list_add_tail(&requeue_lastp, req);
+ }
+
++ if (nvmeq)
++ nvme_submit_cmds(nvmeq, &submit_list);
+ *rqlist = requeue_list;
+ }
+
+@@ -1966,10 +1962,10 @@ static void nvme_free_host_mem(struct nvme_dev *dev)
+
+ kfree(dev->host_mem_desc_bufs);
+ dev->host_mem_desc_bufs = NULL;
+- dma_free_coherent(dev->dev,
+- dev->nr_host_mem_descs * sizeof(*dev->host_mem_descs),
++ dma_free_coherent(dev->dev, dev->host_mem_descs_size,
+ dev->host_mem_descs, dev->host_mem_descs_dma);
+ dev->host_mem_descs = NULL;
++ dev->host_mem_descs_size = 0;
+ dev->nr_host_mem_descs = 0;
+ }
+
+@@ -1977,7 +1973,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ u32 chunk_size)
+ {
+ struct nvme_host_mem_buf_desc *descs;
+- u32 max_entries, len;
++ u32 max_entries, len, descs_size;
+ dma_addr_t descs_dma;
+ int i = 0;
+ void **bufs;
+@@ -1990,8 +1986,9 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries)
+ max_entries = dev->ctrl.hmmaxd;
+
+- descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs),
+- &descs_dma, GFP_KERNEL);
++ descs_size = max_entries * sizeof(*descs);
++ descs = dma_alloc_coherent(dev->dev, descs_size, &descs_dma,
++ GFP_KERNEL);
+ if (!descs)
+ goto out;
+
+@@ -2020,6 +2017,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ dev->host_mem_size = size;
+ dev->host_mem_descs = descs;
+ dev->host_mem_descs_dma = descs_dma;
++ dev->host_mem_descs_size = descs_size;
+ dev->host_mem_desc_bufs = bufs;
+ return 0;
+
+@@ -2034,8 +2032,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+
+ kfree(bufs);
+ out_free_descs:
+- dma_free_coherent(dev->dev, max_entries * sizeof(*descs), descs,
+- descs_dma);
++ dma_free_coherent(dev->dev, descs_size, descs, descs_dma);
+ out:
+ dev->host_mem_descs = NULL;
+ return -ENOMEM;
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 68103ad230eebd..a7cfbbfbf1cb9b 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -457,6 +457,7 @@ int __initdata dt_root_addr_cells;
+ int __initdata dt_root_size_cells;
+
+ void *initial_boot_params __ro_after_init;
++phys_addr_t initial_boot_params_pa __ro_after_init;
+
+ #ifdef CONFIG_OF_EARLY_FLATTREE
+
+@@ -1136,17 +1137,18 @@ static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
+ return ptr;
+ }
+
+-bool __init early_init_dt_verify(void *params)
++bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
+ {
+- if (!params)
++ if (!dt_virt)
+ return false;
+
+ /* check device tree validity */
+- if (fdt_check_header(params))
++ if (fdt_check_header(dt_virt))
+ return false;
+
+ /* Setup flat device-tree pointer */
+- initial_boot_params = params;
++ initial_boot_params = dt_virt;
++ initial_boot_params_pa = dt_phys;
+ of_fdt_crc32 = crc32_be(~0, initial_boot_params,
+ fdt_totalsize(initial_boot_params));
+
+@@ -1173,11 +1175,11 @@ void __init early_init_dt_scan_nodes(void)
+ early_init_dt_check_for_usable_mem_range();
+ }
+
+-bool __init early_init_dt_scan(void *params)
++bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys)
+ {
+ bool status;
+
+- status = early_init_dt_verify(params);
++ status = early_init_dt_verify(dt_virt, dt_phys);
+ if (!status)
+ return false;
+
+diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
+index 9ccde2fd77cbf5..5b924597a4debe 100644
+--- a/drivers/of/kexec.c
++++ b/drivers/of/kexec.c
+@@ -301,7 +301,7 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
+ }
+
+ /* Remove memory reservation for the current device tree. */
+- ret = fdt_find_and_del_mem_rsv(fdt, __pa(initial_boot_params),
++ ret = fdt_find_and_del_mem_rsv(fdt, initial_boot_params_pa,
+ fdt_totalsize(initial_boot_params));
+ if (ret == -EINVAL) {
+ pr_err("Error removing memory reservation.\n");
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 85718246016b73..9a0e760e4b523e 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -7,6 +7,8 @@
+ */
+
+ #include <linux/clk.h>
++#include <linux/clk-provider.h>
++#include <linux/container_of.h>
+ #include <linux/delay.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/io.h>
+@@ -22,6 +24,8 @@
+ #include "../../pci.h"
+ #include "pcie-cadence.h"
+
++#define cdns_pcie_to_rc(p) container_of(p, struct cdns_pcie_rc, pcie)
++
+ #define ENABLE_REG_SYS_2 0x108
+ #define STATUS_REG_SYS_2 0x508
+ #define STATUS_CLR_REG_SYS_2 0x708
+@@ -52,6 +56,7 @@ struct j721e_pcie {
+ u32 mode;
+ u32 num_lanes;
+ u32 max_lanes;
++ struct gpio_desc *reset_gpio;
+ void __iomem *user_cfg_base;
+ void __iomem *intd_cfg_base;
+ u32 linkdown_irq_regfield;
+@@ -510,6 +515,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ dev_err(dev, "Failed to get reset GPIO\n");
+ goto err_get_sync;
+ }
++ pcie->reset_gpio = gpiod;
+
+ ret = cdns_pcie_init_phy(dev, cdns_pcie);
+ if (ret) {
+@@ -532,15 +538,14 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ pcie->refclk = clk;
+
+ /*
+- * "Power Sequencing and Reset Signal Timings" table in
+- * PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION, REV. 3.0
+- * indicates PERST# should be deasserted after minimum of 100us
+- * once REFCLK is stable. The REFCLK to the connector in RC
+- * mode is selected while enabling the PHY. So deassert PERST#
+- * after 100 us.
++ * Section 2.2 of the PCI Express Card Electromechanical
++ * Specification (Revision 5.1) mandates that the deassertion
++ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
++ * This shall ensure that the power and the reference clock
++ * are stable.
+ */
+ if (gpiod) {
+- usleep_range(100, 200);
++ msleep(PCIE_T_PVPERL_MS);
+ gpiod_set_value_cansleep(gpiod, 1);
+ }
+
+@@ -589,6 +594,86 @@ static void j721e_pcie_remove(struct platform_device *pdev)
+ pm_runtime_disable(dev);
+ }
+
++static int j721e_pcie_suspend_noirq(struct device *dev)
++{
++ struct j721e_pcie *pcie = dev_get_drvdata(dev);
++
++ if (pcie->mode == PCI_MODE_RC) {
++ gpiod_set_value_cansleep(pcie->reset_gpio, 0);
++ clk_disable_unprepare(pcie->refclk);
++ }
++
++ cdns_pcie_disable_phy(pcie->cdns_pcie);
++
++ return 0;
++}
++
++static int j721e_pcie_resume_noirq(struct device *dev)
++{
++ struct j721e_pcie *pcie = dev_get_drvdata(dev);
++ struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
++ int ret;
++
++ ret = j721e_pcie_ctrl_init(pcie);
++ if (ret < 0)
++ return ret;
++
++ j721e_pcie_config_link_irq(pcie);
++
++ /*
++ * This is not called explicitly in the probe, it is called by
++ * cdns_pcie_init_phy().
++ */
++ ret = cdns_pcie_enable_phy(pcie->cdns_pcie);
++ if (ret < 0)
++ return ret;
++
++ if (pcie->mode == PCI_MODE_RC) {
++ struct cdns_pcie_rc *rc = cdns_pcie_to_rc(cdns_pcie);
++
++ ret = clk_prepare_enable(pcie->refclk);
++ if (ret < 0)
++ return ret;
++
++ /*
++ * Section 2.2 of the PCI Express Card Electromechanical
++ * Specification (Revision 5.1) mandates that the deassertion
++ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
++ * This shall ensure that the power and the reference clock
++ * are stable.
++ */
++ if (pcie->reset_gpio) {
++ msleep(PCIE_T_PVPERL_MS);
++ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
++ }
++
++ ret = cdns_pcie_host_link_setup(rc);
++ if (ret < 0) {
++ clk_disable_unprepare(pcie->refclk);
++ return ret;
++ }
++
++ /*
++ * Reset internal status of BARs to force reinitialization in
++ * cdns_pcie_host_init().
++ */
++ for (enum cdns_pcie_rp_bar bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
++ rc->avail_ib_bar[bar] = true;
++
++ ret = cdns_pcie_host_init(rc);
++ if (ret) {
++ clk_disable_unprepare(pcie->refclk);
++ return ret;
++ }
++ }
++
++ return 0;
++}
++
++static DEFINE_NOIRQ_DEV_PM_OPS(j721e_pcie_pm_ops,
++ j721e_pcie_suspend_noirq,
++ j721e_pcie_resume_noirq);
++
+ static struct platform_driver j721e_pcie_driver = {
+ .probe = j721e_pcie_probe,
+ .remove_new = j721e_pcie_remove,
+@@ -596,6 +681,7 @@ static struct platform_driver j721e_pcie_driver = {
+ .name = "j721e-pcie",
+ .of_match_table = of_j721e_pcie_match,
+ .suppress_bind_attrs = true,
++ .pm = pm_sleep_ptr(&j721e_pcie_pm_ops),
+ },
+ };
+ builtin_platform_driver(j721e_pcie_driver);
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 5b14f7ee3c798e..8af95e9da7cec6 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -485,8 +485,7 @@ static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
+ return cdns_pcie_host_map_dma_ranges(rc);
+ }
+
+-static int cdns_pcie_host_init(struct device *dev,
+- struct cdns_pcie_rc *rc)
++int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
+ {
+ int err;
+
+@@ -497,6 +496,30 @@ static int cdns_pcie_host_init(struct device *dev,
+ return cdns_pcie_host_init_address_translation(rc);
+ }
+
++int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
++{
++ struct cdns_pcie *pcie = &rc->pcie;
++ struct device *dev = rc->pcie.dev;
++ int ret;
++
++ if (rc->quirk_detect_quiet_flag)
++ cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
++
++ cdns_pcie_host_enable_ptm_response(pcie);
++
++ ret = cdns_pcie_start_link(pcie);
++ if (ret) {
++ dev_err(dev, "Failed to start link\n");
++ return ret;
++ }
++
++ ret = cdns_pcie_host_start_link(rc);
++ if (ret)
++ dev_dbg(dev, "PCIe link never came up\n");
++
++ return 0;
++}
++
+ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ {
+ struct device *dev = rc->pcie.dev;
+@@ -533,25 +556,14 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ return PTR_ERR(rc->cfg_base);
+ rc->cfg_res = res;
+
+- if (rc->quirk_detect_quiet_flag)
+- cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
+-
+- cdns_pcie_host_enable_ptm_response(pcie);
+-
+- ret = cdns_pcie_start_link(pcie);
+- if (ret) {
+- dev_err(dev, "Failed to start link\n");
+- return ret;
+- }
+-
+- ret = cdns_pcie_host_start_link(rc);
++ ret = cdns_pcie_host_link_setup(rc);
+ if (ret)
+- dev_dbg(dev, "PCIe link never came up\n");
++ return ret;
+
+ for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
+ rc->avail_ib_bar[bar] = true;
+
+- ret = cdns_pcie_host_init(dev, rc);
++ ret = cdns_pcie_host_init(rc);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index 7a66a2f815dcea..bb215aeebf20ac 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -521,10 +521,22 @@ static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie)
+ }
+
+ #ifdef CONFIG_PCIE_CADENCE_HOST
++int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc);
++int cdns_pcie_host_init(struct cdns_pcie_rc *rc);
+ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
+ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
+ int where);
+ #else
++static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
++{
++ return 0;
++}
++
++static inline int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
++{
++ return 0;
++}
++
+ static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ {
+ return 0;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index c8bb7cd89678f5..a7f65d50d4ebbf 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -395,6 +395,10 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
+ return ret;
+ }
+
++ /* Perform cleanup that requires refclk */
++ pci_epc_deinit_notify(pci->ep.epc);
++ dw_pcie_ep_cleanup(&pci->ep);
++
+ /* Assert WAKE# to RC to indicate device is ready */
+ gpiod_set_value_cansleep(pcie_ep->wake, 1);
+ usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500);
+@@ -534,8 +538,6 @@ static void qcom_pcie_perst_assert(struct dw_pcie *pci)
+ {
+ struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
+
+- pci_epc_deinit_notify(pci->ep.epc);
+- dw_pcie_ep_cleanup(&pci->ep);
+ qcom_pcie_disable_resources(pcie_ep);
+ pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index 4bf7b433417a3b..d68dd18ed43cb6 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -1709,9 +1709,6 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
+ if (ret)
+ dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
+
+- pci_epc_deinit_notify(pcie->pci.ep.epc);
+- dw_pcie_ep_cleanup(&pcie->pci.ep);
+-
+ reset_control_assert(pcie->core_rst);
+
+ tegra_pcie_disable_phy(pcie);
+@@ -1790,6 +1787,10 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ goto fail_phy;
+ }
+
++ /* Perform cleanup that requires refclk */
++ pci_epc_deinit_notify(pcie->pci.ep.epc);
++ dw_pcie_ep_cleanup(&pcie->pci.ep);
++
+ /* Clear any stale interrupt statuses */
+ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
+ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
+diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+index 7d070b1def1166..54286a40bdfbf7 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
++++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+@@ -867,12 +867,18 @@ static int pci_epf_mhi_bind(struct pci_epf *epf)
+ {
+ struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
++ struct device *dev = &epf->dev;
+ struct platform_device *pdev = to_platform_device(epc->dev.parent);
+ struct resource *res;
+ int ret;
+
+ /* Get MMIO base address from Endpoint controller */
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
++ if (!res) {
++ dev_err(dev, "Failed to get \"mmio\" resource\n");
++ return -ENODEV;
++ }
++
+ epf_mhi->mmio_phys = res->start;
+ epf_mhi->mmio_size = resource_size(res);
+
+diff --git a/drivers/pci/hotplug/cpqphp_pci.c b/drivers/pci/hotplug/cpqphp_pci.c
+index e9f1fb333a718e..974c7db3265b5a 100644
+--- a/drivers/pci/hotplug/cpqphp_pci.c
++++ b/drivers/pci/hotplug/cpqphp_pci.c
+@@ -135,11 +135,13 @@ int cpqhp_unconfigure_device(struct pci_func *func)
+ static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
+ {
+ u32 vendID = 0;
++ int ret;
+
+- if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
+- return -1;
+- if (vendID == 0xffffffff)
+- return -1;
++ ret = pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID);
++ if (ret != PCIBIOS_SUCCESSFUL)
++ return PCIBIOS_DEVICE_NOT_FOUND;
++ if (PCI_POSSIBLE_ERROR(vendID))
++ return PCIBIOS_DEVICE_NOT_FOUND;
+ return pci_bus_read_config_dword(bus, devfn, offset, value);
+ }
+
+@@ -202,13 +204,15 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
+ {
+ u16 tdevice;
+ u32 work;
++ int ret;
+ u8 tbus;
+
+ ctrl->pci_bus->number = bus_num;
+
+ for (tdevice = 0; tdevice < 0xFF; tdevice++) {
+ /* Scan for access first */
+- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
++ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
++ if (ret)
+ continue;
+ dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice);
+ /* Yep we got one. Not a bridge ? */
+@@ -220,7 +224,8 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
+ }
+ for (tdevice = 0; tdevice < 0xFF; tdevice++) {
+ /* Scan for access first */
+- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
++ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
++ if (ret)
+ continue;
+ dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
+ /* Yep we got one. bridge ? */
+@@ -253,7 +258,7 @@ static int PCI_GetBusDevHelper(struct controller *ctrl, u8 *bus_num, u8 *dev_num
+ *dev_num = tdevice;
+ ctrl->pci_bus->number = tbus;
+ pci_bus_read_config_dword(ctrl->pci_bus, *dev_num, PCI_VENDOR_ID, &work);
+- if (!nobridge || (work == 0xffffffff))
++ if (!nobridge || PCI_POSSIBLE_ERROR(work))
+ return 0;
+
+ dbg("bus_num %d devfn %d\n", *bus_num, *dev_num);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 51407c376a2227..1a720cae37a1ec 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5233,7 +5233,7 @@ static ssize_t reset_method_store(struct device *dev,
+ const char *buf, size_t count)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+- char *options, *name;
++ char *options, *tmp_options, *name;
+ int m, n;
+ u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
+
+@@ -5253,7 +5253,8 @@ static ssize_t reset_method_store(struct device *dev,
+ return -ENOMEM;
+
+ n = 0;
+- while ((name = strsep(&options, " ")) != NULL) {
++ tmp_options = options;
++ while ((name = strsep(&tmp_options, " ")) != NULL) {
+ if (sysfs_streq(name, ""))
+ continue;
+
+diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
+index 0f87cade10f74b..ed645c7a4e4b41 100644
+--- a/drivers/pci/slot.c
++++ b/drivers/pci/slot.c
+@@ -79,6 +79,7 @@ static void pci_slot_release(struct kobject *kobj)
+ up_read(&pci_bus_sem);
+
+ list_del(&slot->list);
++ pci_bus_put(slot->bus);
+
+ kfree(slot);
+ }
+@@ -261,7 +262,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
+ goto err;
+ }
+
+- slot->bus = parent;
++ slot->bus = pci_bus_get(parent);
+ slot->number = slot_nr;
+
+ slot->kobj.kset = pci_slots_kset;
+@@ -269,6 +270,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
+ slot_name = make_slot_name(name);
+ if (!slot_name) {
+ err = -ENOMEM;
++ pci_bus_put(slot->bus);
+ kfree(slot);
+ goto err;
+ }
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 48863b31ccfb12..a2032d39796400 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -2147,8 +2147,6 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ continue;
+
+ xp = arm_cmn_node_to_xp(cmn, dn);
+- dn->portid_bits = xp->portid_bits;
+- dn->deviceid_bits = xp->deviceid_bits;
+ dn->dtc = xp->dtc;
+ dn->dtm = xp->dtm;
+ if (cmn->multi_dtm)
+@@ -2379,6 +2377,8 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ }
+
+ arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn);
++ dn->portid_bits = xp->portid_bits;
++ dn->deviceid_bits = xp->deviceid_bits;
+
+ switch (dn->type) {
+ case CMN_TYPE_DTC:
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index d5fa92ba837397..dabdb9f7bb82c4 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -431,6 +431,17 @@ static int smmu_pmu_event_init(struct perf_event *event)
+ return -EINVAL;
+ }
+
++ /*
++ * Ensure all events are on the same cpu so all events are in the
++ * same cpu context, to avoid races on pmu_enable etc.
++ */
++ event->cpu = smmu_pmu->on_cpu;
++
++ hwc->idx = -1;
++
++ if (event->group_leader == event)
++ return 0;
++
+ for_each_sibling_event(sibling, event->group_leader) {
+ if (is_software_event(sibling))
+ continue;
+@@ -442,14 +453,6 @@ static int smmu_pmu_event_init(struct perf_event *event)
+ return -EINVAL;
+ }
+
+- hwc->idx = -1;
+-
+- /*
+- * Ensure all events are on the same cpu so all events are in the
+- * same cpu context, to avoid races on pmu_enable etc.
+- */
+- event->cpu = smmu_pmu->on_cpu;
+-
+ return 0;
+ }
+
+diff --git a/drivers/phy/phy-airoha-pcie-regs.h b/drivers/phy/phy-airoha-pcie-regs.h
+index bb1f679ca1dfa0..b938a7b468fee3 100644
+--- a/drivers/phy/phy-airoha-pcie-regs.h
++++ b/drivers/phy/phy-airoha-pcie-regs.h
+@@ -197,9 +197,9 @@
+ #define CSR_2L_PXP_TX1_MULTLANE_EN BIT(0)
+
+ #define REG_CSR_2L_RX0_REV0 0x00fc
+-#define CSR_2L_PXP_VOS_PNINV GENMASK(3, 2)
+-#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(6, 4)
+-#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(10, 8)
++#define CSR_2L_PXP_VOS_PNINV GENMASK(19, 18)
++#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(22, 20)
++#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(26, 24)
+
+ #define REG_CSR_2L_RX0_PHYCK_DIV 0x0100
+ #define CSR_2L_PXP_RX0_PHYCK_SEL GENMASK(9, 8)
+diff --git a/drivers/phy/phy-airoha-pcie.c b/drivers/phy/phy-airoha-pcie.c
+index bd3edaa986c8bd..9d222898760bc5 100644
+--- a/drivers/phy/phy-airoha-pcie.c
++++ b/drivers/phy/phy-airoha-pcie.c
+@@ -456,7 +456,7 @@ static void airoha_pcie_phy_init_clk_out(struct airoha_pcie_phy *pcie_phy)
+ airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_CLKTX1_OFFSET,
+ CSR_2L_PXP_CLKTX1_SR);
+ airoha_phy_csr_2l_update_field(pcie_phy, REG_CSR_2L_PLL_CMN_RESERVE0,
+- CSR_2L_PXP_PLL_RESERVE_MASK, 0xdd);
++ CSR_2L_PXP_PLL_RESERVE_MASK, 0xd0d);
+ }
+
+ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy)
+@@ -468,9 +468,9 @@ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy)
+ PCIE_SW_XFI_RXPCS_RST | PCIE_SW_REF_RST |
+ PCIE_SW_RX_RST);
+ airoha_phy_pma0_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET,
+- PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET);
++ PCIE_TX_TOP_RST | PCIE_TX_CAL_RST);
+ airoha_phy_pma1_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET,
+- PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET);
++ PCIE_TX_TOP_RST | PCIE_TX_CAL_RST);
+ }
+
+ static void airoha_pcie_phy_init_rx(struct airoha_pcie_phy *pcie_phy)
+@@ -799,7 +799,7 @@ static void airoha_pcie_phy_init_ssc_jcpll(struct airoha_pcie_phy *pcie_phy)
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_IFM,
+ CSR_2L_PXP_JCPLL_SDM_IFM);
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_HREN,
+- REG_CSR_2L_JCPLL_SDM_HREN);
++ CSR_2L_PXP_JCPLL_SDM_HREN);
+ airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_JCPLL_RST_DLY,
+ CSR_2L_PXP_JCPLL_SDM_DI_EN);
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SSC,
+diff --git a/drivers/phy/realtek/phy-rtk-usb2.c b/drivers/phy/realtek/phy-rtk-usb2.c
+index e3ad7cea510998..e8ca2ec5998fe6 100644
+--- a/drivers/phy/realtek/phy-rtk-usb2.c
++++ b/drivers/phy/realtek/phy-rtk-usb2.c
+@@ -1023,6 +1023,8 @@ static int rtk_usb2phy_probe(struct platform_device *pdev)
+
+ rtk_phy->dev = &pdev->dev;
+ rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
++ if (!rtk_phy->phy_cfg)
++ return -ENOMEM;
+
+ memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
+
+diff --git a/drivers/phy/realtek/phy-rtk-usb3.c b/drivers/phy/realtek/phy-rtk-usb3.c
+index dfcf4b921bba63..96af483e5444b9 100644
+--- a/drivers/phy/realtek/phy-rtk-usb3.c
++++ b/drivers/phy/realtek/phy-rtk-usb3.c
+@@ -577,6 +577,8 @@ static int rtk_usb3phy_probe(struct platform_device *pdev)
+
+ rtk_phy->dev = &pdev->dev;
+ rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
++ if (!rtk_phy->phy_cfg)
++ return -ENOMEM;
+
+ memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
+
+diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
+index a898e40451fe4b..9c1aa3246df7dd 100644
+--- a/drivers/pinctrl/pinctrl-k210.c
++++ b/drivers/pinctrl/pinctrl-k210.c
+@@ -183,7 +183,7 @@ static const u32 k210_pinconf_mode_id_to_mode[] = {
+ [K210_PC_DEFAULT_INT13] = K210_PC_MODE_IN | K210_PC_PU,
+ };
+
+-#undef DEFAULT
++#undef K210_PC_DEFAULT
+
+ /*
+ * Pin functions configuration information.
+diff --git a/drivers/pinctrl/pinctrl-zynqmp.c b/drivers/pinctrl/pinctrl-zynqmp.c
+index 3c6d56fdb8c964..93454d2a26bcc6 100644
+--- a/drivers/pinctrl/pinctrl-zynqmp.c
++++ b/drivers/pinctrl/pinctrl-zynqmp.c
+@@ -49,7 +49,6 @@
+ * @name: Name of the pin mux function
+ * @groups: List of pin groups for this function
+ * @ngroups: Number of entries in @groups
+- * @node: Firmware node matching with the function
+ *
+ * This structure holds information about pin control function
+ * and function group names supporting that function.
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index d2dd66769aa891..a0eb4e01b3a755 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -667,7 +667,7 @@ static void pmic_gpio_config_dbg_show(struct pinctrl_dev *pctldev,
+ "push-pull", "open-drain", "open-source"
+ };
+ static const char *const strengths[] = {
+- "no", "high", "medium", "low"
++ "no", "low", "medium", "high"
+ };
+
+ pad = pctldev->desc->pins[pin].drv_data;
+diff --git a/drivers/pinctrl/renesas/Kconfig b/drivers/pinctrl/renesas/Kconfig
+index 14bd55d647319b..7f3f41c7fe54c8 100644
+--- a/drivers/pinctrl/renesas/Kconfig
++++ b/drivers/pinctrl/renesas/Kconfig
+@@ -41,6 +41,7 @@ config PINCTRL_RENESAS
+ select PINCTRL_PFC_R8A779H0 if ARCH_R8A779H0
+ select PINCTRL_RZG2L if ARCH_RZG2L
+ select PINCTRL_RZV2M if ARCH_R9A09G011
++ select PINCTRL_RZG2L if ARCH_R9A09G057
+ select PINCTRL_PFC_SH7203 if CPU_SUBTYPE_SH7203
+ select PINCTRL_PFC_SH7264 if CPU_SUBTYPE_SH7264
+ select PINCTRL_PFC_SH7269 if CPU_SUBTYPE_SH7269
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 4d305876ec08f6..8f5992cc805c52 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -409,6 +409,7 @@ static int cros_typec_init_ports(struct cros_typec_data *typec)
+ return 0;
+
+ unregister_ports:
++ fwnode_handle_put(fwnode);
+ cros_unregister_ports(typec);
+ return ret;
+ }
+diff --git a/drivers/platform/x86/dell/dell-smbios-base.c b/drivers/platform/x86/dell/dell-smbios-base.c
+index 73e41eb69cb572..01c72b91a50d44 100644
+--- a/drivers/platform/x86/dell/dell-smbios-base.c
++++ b/drivers/platform/x86/dell/dell-smbios-base.c
+@@ -576,6 +576,7 @@ static int __init dell_smbios_init(void)
+ int ret, wmi, smm;
+
+ if (!dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL) &&
++ !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Alienware", NULL) &&
+ !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "www.dell.com", NULL)) {
+ pr_err("Unable to run on non-Dell system\n");
+ return -ENODEV;
+diff --git a/drivers/platform/x86/dell/dell-wmi-base.c b/drivers/platform/x86/dell/dell-wmi-base.c
+index 24fd7ffadda952..841a5414d28a68 100644
+--- a/drivers/platform/x86/dell/dell-wmi-base.c
++++ b/drivers/platform/x86/dell/dell-wmi-base.c
+@@ -80,6 +80,12 @@ static const struct dmi_system_id dell_wmi_smbios_list[] __initconst = {
+ static const struct key_entry dell_wmi_keymap_type_0000[] = {
+ { KE_IGNORE, 0x003a, { KEY_CAPSLOCK } },
+
++ /* Meta key lock */
++ { KE_IGNORE, 0xe000, { KEY_RIGHTMETA } },
++
++ /* Meta key unlock */
++ { KE_IGNORE, 0xe001, { KEY_RIGHTMETA } },
++
+ /* Key code is followed by brightness level */
+ { KE_KEY, 0xe005, { KEY_BRIGHTNESSDOWN } },
+ { KE_KEY, 0xe006, { KEY_BRIGHTNESSUP } },
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index b58df617d4fda7..2fde38f5065088 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1159,6 +1159,9 @@ static const struct key_entry ideapad_keymap[] = {
+ { KE_KEY, 0x27 | IDEAPAD_WMI_KEY, { KEY_HELP } },
+ /* Refresh Rate Toggle */
+ { KE_KEY, 0x0a | IDEAPAD_WMI_KEY, { KEY_REFRESH_RATE_TOGGLE } },
++ /* Specific to some newer models */
++ { KE_KEY, 0x3e | IDEAPAD_WMI_KEY, { KEY_MICMUTE } },
++ { KE_KEY, 0x3f | IDEAPAD_WMI_KEY, { KEY_RFKILL } },
+
+ { KE_END },
+ };
+diff --git a/drivers/platform/x86/intel/bxtwc_tmu.c b/drivers/platform/x86/intel/bxtwc_tmu.c
+index d0e2a3c293b0b0..9ac801b929b93c 100644
+--- a/drivers/platform/x86/intel/bxtwc_tmu.c
++++ b/drivers/platform/x86/intel/bxtwc_tmu.c
+@@ -48,9 +48,8 @@ static irqreturn_t bxt_wcove_tmu_irq_handler(int irq, void *data)
+ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
+ {
+ struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
+- struct regmap_irq_chip_data *regmap_irq_chip;
+ struct wcove_tmu *wctmu;
+- int ret, virq, irq;
++ int ret;
+
+ wctmu = devm_kzalloc(&pdev->dev, sizeof(*wctmu), GFP_KERNEL);
+ if (!wctmu)
+@@ -59,27 +58,18 @@ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
+ wctmu->dev = &pdev->dev;
+ wctmu->regmap = pmic->regmap;
+
+- irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
+- return irq;
++ wctmu->irq = platform_get_irq(pdev, 0);
++ if (wctmu->irq < 0)
++ return wctmu->irq;
+
+- regmap_irq_chip = pmic->irq_chip_data_tmu;
+- virq = regmap_irq_get_virq(regmap_irq_chip, irq);
+- if (virq < 0) {
+- dev_err(&pdev->dev,
+- "failed to get virtual interrupt=%d\n", irq);
+- return virq;
+- }
+-
+- ret = devm_request_threaded_irq(&pdev->dev, virq,
++ ret = devm_request_threaded_irq(&pdev->dev, wctmu->irq,
+ NULL, bxt_wcove_tmu_irq_handler,
+ IRQF_ONESHOT, "bxt_wcove_tmu", wctmu);
+ if (ret) {
+ dev_err(&pdev->dev, "request irq failed: %d,virq: %d\n",
+- ret, virq);
++ ret, wctmu->irq);
+ return ret;
+ }
+- wctmu->irq = virq;
+
+ /* Unmask TMU second level Wake & System alarm */
+ regmap_update_bits(wctmu->regmap, BXTWC_MTMUIRQ_REG,
+diff --git a/drivers/platform/x86/panasonic-laptop.c b/drivers/platform/x86/panasonic-laptop.c
+index ebd81846e2d564..7365286f6d2dc1 100644
+--- a/drivers/platform/x86/panasonic-laptop.c
++++ b/drivers/platform/x86/panasonic-laptop.c
+@@ -602,8 +602,7 @@ static ssize_t eco_mode_show(struct device *dev, struct device_attribute *attr,
+ result = 1;
+ break;
+ default:
+- result = -EIO;
+- break;
++ return -EIO;
+ }
+ return sysfs_emit(buf, "%u\n", result);
+ }
+@@ -749,7 +748,12 @@ static ssize_t current_brightness_store(struct device *dev, struct device_attrib
+ static ssize_t cdpower_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- return sysfs_emit(buf, "%d\n", get_optd_power_state());
++ int state = get_optd_power_state();
++
++ if (state < 0)
++ return state;
++
++ return sysfs_emit(buf, "%d\n", state);
+ }
+
+ static ssize_t cdpower_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index f269ca1ff7718d..10e04424885eb8 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -7912,6 +7912,7 @@ static u8 fan_control_resume_level;
+ static int fan_watchdog_maxinterval;
+
+ static bool fan_with_ns_addr;
++static bool ecfw_with_fan_dec_rpm;
+
+ static struct mutex fan_mutex;
+
+@@ -8554,7 +8555,11 @@ static ssize_t fan_fan1_input_show(struct device *dev,
+ if (res < 0)
+ return res;
+
+- return sysfs_emit(buf, "%u\n", speed);
++ /* Check for fan speeds displayed in hexadecimal */
++ if (!ecfw_with_fan_dec_rpm)
++ return sysfs_emit(buf, "%u\n", speed);
++ else
++ return sysfs_emit(buf, "%x\n", speed);
+ }
+
+ static DEVICE_ATTR(fan1_input, S_IRUGO, fan_fan1_input_show, NULL);
+@@ -8571,7 +8576,11 @@ static ssize_t fan_fan2_input_show(struct device *dev,
+ if (res < 0)
+ return res;
+
+- return sysfs_emit(buf, "%u\n", speed);
++ /* Check for fan speeds displayed in hexadecimal */
++ if (!ecfw_with_fan_dec_rpm)
++ return sysfs_emit(buf, "%u\n", speed);
++ else
++ return sysfs_emit(buf, "%x\n", speed);
+ }
+
+ static DEVICE_ATTR(fan2_input, S_IRUGO, fan_fan2_input_show, NULL);
+@@ -8647,6 +8656,7 @@ static const struct attribute_group fan_driver_attr_group = {
+ #define TPACPI_FAN_2CTL 0x0004 /* selects fan2 control */
+ #define TPACPI_FAN_NOFAN 0x0008 /* no fan available */
+ #define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */
++#define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */
+
+ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),
+@@ -8675,6 +8685,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_Q_LNV3('R', '1', 'D', TPACPI_FAN_NS), /* 11e Gen5 GL-R */
+ TPACPI_Q_LNV3('R', '0', 'V', TPACPI_FAN_NS), /* 11e Gen5 KL-Y */
+ TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */
++ TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM),/* L480 */
+ };
+
+ static int __init fan_init(struct ibm_init_struct *iibm)
+@@ -8715,6 +8726,13 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+ tp_features.fan_ctrl_status_undef = 1;
+ }
+
++ /* Check for the EC/BIOS with RPM reported in decimal*/
++ if (quirks & TPACPI_FAN_DECRPM) {
++ pr_info("ECFW with fan RPM as decimal in EC register\n");
++ ecfw_with_fan_dec_rpm = 1;
++ tp_features.fan_ctrl_status_undef = 1;
++ }
++
+ if (gfan_handle) {
+ /* 570, 600e/x, 770e, 770x */
+ fan_status_access_mode = TPACPI_FAN_RD_ACPI_GFAN;
+@@ -8926,7 +8944,11 @@ static int fan_read(struct seq_file *m)
+ if (rc < 0)
+ return rc;
+
+- seq_printf(m, "speed:\t\t%d\n", speed);
++ /* Check for fan speeds displayed in hexadecimal */
++ if (!ecfw_with_fan_dec_rpm)
++ seq_printf(m, "speed:\t\t%d\n", speed);
++ else
++ seq_printf(m, "speed:\t\t%x\n", speed);
+
+ if (fan_status_access_mode == TPACPI_FAN_RD_TPEC_NS) {
+ /*
+diff --git a/drivers/pmdomain/ti/ti_sci_pm_domains.c b/drivers/pmdomain/ti/ti_sci_pm_domains.c
+index 1510d5ddae3dec..0df3eb7ff09a3d 100644
+--- a/drivers/pmdomain/ti/ti_sci_pm_domains.c
++++ b/drivers/pmdomain/ti/ti_sci_pm_domains.c
+@@ -161,6 +161,7 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ break;
+
+ if (args.args_count >= 1 && args.np == dev->of_node) {
++ of_node_put(args.np);
+ if (args.args[0] > max_id) {
+ max_id = args.args[0];
+ } else {
+@@ -192,7 +193,10 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ pm_genpd_init(&pd->pd, NULL, true);
+
+ list_add(&pd->node, &pd_provider->pd_list);
++ } else {
++ of_node_put(args.np);
+ }
++
+ index++;
+ }
+ }
+diff --git a/drivers/power/sequencing/Kconfig b/drivers/power/sequencing/Kconfig
+index c9f1cdb6652488..ddcc42a984921c 100644
+--- a/drivers/power/sequencing/Kconfig
++++ b/drivers/power/sequencing/Kconfig
+@@ -16,6 +16,7 @@ if POWER_SEQUENCING
+ config POWER_SEQUENCING_QCOM_WCN
+ tristate "Qualcomm WCN family PMU driver"
+ default m if ARCH_QCOM
++ depends on OF
+ help
+ Say Y here to enable the power sequencing driver for Qualcomm
+ WCN Bluetooth/WLAN chipsets.
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 750fda543308c8..51fb88aca0f9fd 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -449,9 +449,29 @@ static u8
+ [BQ27XXX_REG_AP] = 0x18,
+ BQ27XXX_DM_REG_ROWS,
+ },
++ bq27426_regs[BQ27XXX_REG_MAX] = {
++ [BQ27XXX_REG_CTRL] = 0x00,
++ [BQ27XXX_REG_TEMP] = 0x02,
++ [BQ27XXX_REG_INT_TEMP] = 0x1e,
++ [BQ27XXX_REG_VOLT] = 0x04,
++ [BQ27XXX_REG_AI] = 0x10,
++ [BQ27XXX_REG_FLAGS] = 0x06,
++ [BQ27XXX_REG_TTE] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTF] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTES] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTECP] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_NAC] = 0x08,
++ [BQ27XXX_REG_RC] = 0x0c,
++ [BQ27XXX_REG_FCC] = 0x0e,
++ [BQ27XXX_REG_CYCT] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_AE] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_SOC] = 0x1c,
++ [BQ27XXX_REG_DCAP] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_AP] = 0x18,
++ BQ27XXX_DM_REG_ROWS,
++ },
+ #define bq27411_regs bq27421_regs
+ #define bq27425_regs bq27421_regs
+-#define bq27426_regs bq27421_regs
+ #define bq27441_regs bq27421_regs
+ #define bq27621_regs bq27421_regs
+ bq27z561_regs[BQ27XXX_REG_MAX] = {
+@@ -769,10 +789,23 @@ static enum power_supply_property bq27421_props[] = {
+ };
+ #define bq27411_props bq27421_props
+ #define bq27425_props bq27421_props
+-#define bq27426_props bq27421_props
+ #define bq27441_props bq27421_props
+ #define bq27621_props bq27421_props
+
++static enum power_supply_property bq27426_props[] = {
++ POWER_SUPPLY_PROP_STATUS,
++ POWER_SUPPLY_PROP_PRESENT,
++ POWER_SUPPLY_PROP_VOLTAGE_NOW,
++ POWER_SUPPLY_PROP_CURRENT_NOW,
++ POWER_SUPPLY_PROP_CAPACITY,
++ POWER_SUPPLY_PROP_CAPACITY_LEVEL,
++ POWER_SUPPLY_PROP_TEMP,
++ POWER_SUPPLY_PROP_TECHNOLOGY,
++ POWER_SUPPLY_PROP_CHARGE_FULL,
++ POWER_SUPPLY_PROP_CHARGE_NOW,
++ POWER_SUPPLY_PROP_MANUFACTURER,
++};
++
+ static enum power_supply_property bq27z561_props[] = {
+ POWER_SUPPLY_PROP_STATUS,
+ POWER_SUPPLY_PROP_PRESENT,
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index b43c2557241e75..06ce24abbe77f1 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -483,8 +483,6 @@ EXPORT_SYMBOL_GPL(power_supply_get_by_name);
+ */
+ void power_supply_put(struct power_supply *psy)
+ {
+- might_sleep();
+-
+ atomic_dec(&psy->use_cnt);
+ put_device(&psy->dev);
+ }
+diff --git a/drivers/power/supply/rt9471.c b/drivers/power/supply/rt9471.c
+index 868b0703d15c52..522a67736fa5a9 100644
+--- a/drivers/power/supply/rt9471.c
++++ b/drivers/power/supply/rt9471.c
+@@ -139,6 +139,19 @@ enum {
+ RT9471_PORTSTAT_DCP,
+ };
+
++enum {
++ RT9471_ICSTAT_SLEEP = 0,
++ RT9471_ICSTAT_VBUSRDY,
++ RT9471_ICSTAT_TRICKLECHG,
++ RT9471_ICSTAT_PRECHG,
++ RT9471_ICSTAT_FASTCHG,
++ RT9471_ICSTAT_IEOC,
++ RT9471_ICSTAT_BGCHG,
++ RT9471_ICSTAT_CHGDONE,
++ RT9471_ICSTAT_CHGFAULT,
++ RT9471_ICSTAT_OTG = 15,
++};
++
+ struct rt9471_chip {
+ struct device *dev;
+ struct regmap *regmap;
+@@ -153,8 +166,8 @@ struct rt9471_chip {
+ };
+
+ static const struct reg_field rt9471_reg_fields[F_MAX_FIELDS] = {
+- [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 0),
+- [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 1, 1),
++ [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 1),
++ [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 2, 2),
+ [F_CHG_EN] = REG_FIELD(RT9471_REG_FUNC, 0, 0),
+ [F_HZ] = REG_FIELD(RT9471_REG_FUNC, 5, 5),
+ [F_BATFET_DIS] = REG_FIELD(RT9471_REG_FUNC, 7, 7),
+@@ -255,31 +268,32 @@ static int rt9471_get_ieoc(struct rt9471_chip *chip, int *microamp)
+
+ static int rt9471_get_status(struct rt9471_chip *chip, int *status)
+ {
+- unsigned int chg_ready, chg_done, fault_stat;
++ unsigned int ic_stat;
+ int ret;
+
+- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_RDY], &chg_ready);
+- if (ret)
+- return ret;
+-
+- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_DONE], &chg_done);
++ ret = regmap_field_read(chip->rm_fields[F_IC_STAT], &ic_stat);
+ if (ret)
+ return ret;
+
+- ret = regmap_read(chip->regmap, RT9471_REG_STAT1, &fault_stat);
+- if (ret)
+- return ret;
+-
+- fault_stat &= RT9471_CHGFAULT_MASK;
+-
+- if (chg_ready && chg_done)
+- *status = POWER_SUPPLY_STATUS_FULL;
+- else if (chg_ready && fault_stat)
++ switch (ic_stat) {
++ case RT9471_ICSTAT_VBUSRDY:
++ case RT9471_ICSTAT_CHGFAULT:
+ *status = POWER_SUPPLY_STATUS_NOT_CHARGING;
+- else if (chg_ready && !fault_stat)
++ break;
++ case RT9471_ICSTAT_TRICKLECHG ... RT9471_ICSTAT_BGCHG:
+ *status = POWER_SUPPLY_STATUS_CHARGING;
+- else
++ break;
++ case RT9471_ICSTAT_CHGDONE:
++ *status = POWER_SUPPLY_STATUS_FULL;
++ break;
++ case RT9471_ICSTAT_SLEEP:
++ case RT9471_ICSTAT_OTG:
+ *status = POWER_SUPPLY_STATUS_DISCHARGING;
++ break;
++ default:
++ *status = POWER_SUPPLY_STATUS_UNKNOWN;
++ break;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 8acbcf5b66739f..53c3a76a185ed4 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -75,7 +75,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ state->duty_cycle < state->period)
+ dev_warn(pwmchip_parent(chip), ".apply ignored .polarity\n");
+
+- if (state->enabled &&
++ if (state->enabled && s2.enabled &&
+ last->polarity == state->polarity &&
+ last->period > s2.period &&
+ last->period <= state->period)
+@@ -83,7 +83,11 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ ".apply didn't pick the best available period (requested: %llu, applied: %llu, possible: %llu)\n",
+ state->period, s2.period, last->period);
+
+- if (state->enabled && state->period < s2.period)
++ /*
++ * Rounding period up is fine only if duty_cycle is 0 then, because a
++ * flat line doesn't have a characteristic period.
++ */
++ if (state->enabled && s2.enabled && state->period < s2.period && s2.duty_cycle)
+ dev_warn(pwmchip_parent(chip),
+ ".apply is supposed to round down period (requested: %llu, applied: %llu)\n",
+ state->period, s2.period);
+@@ -99,7 +103,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ s2.duty_cycle, s2.period,
+ last->duty_cycle, last->period);
+
+- if (state->enabled && state->duty_cycle < s2.duty_cycle)
++ if (state->enabled && s2.enabled && state->duty_cycle < s2.duty_cycle)
+ dev_warn(pwmchip_parent(chip),
+ ".apply is supposed to round down duty_cycle (requested: %llu/%llu, applied: %llu/%llu)\n",
+ state->duty_cycle, state->period,
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index 9e2bbf5b4a8ce7..0375987194318f 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -26,6 +26,7 @@
+ #define MX3_PWMSR 0x04 /* PWM Status Register */
+ #define MX3_PWMSAR 0x0C /* PWM Sample Register */
+ #define MX3_PWMPR 0x10 /* PWM Period Register */
++#define MX3_PWMCNR 0x14 /* PWM Counter Register */
+
+ #define MX3_PWMCR_FWM GENMASK(27, 26)
+ #define MX3_PWMCR_STOPEN BIT(25)
+@@ -219,10 +220,12 @@ static void pwm_imx27_wait_fifo_slot(struct pwm_chip *chip,
+ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
+ {
+- unsigned long period_cycles, duty_cycles, prescale;
++ unsigned long period_cycles, duty_cycles, prescale, period_us, tmp;
+ struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
+ unsigned long long c;
+ unsigned long long clkrate;
++ unsigned long flags;
++ int val;
+ int ret;
+ u32 cr;
+
+@@ -263,7 +266,98 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ pwm_imx27_sw_reset(chip);
+ }
+
+- writel(duty_cycles, imx->mmio_base + MX3_PWMSAR);
++ val = readl(imx->mmio_base + MX3_PWMPR);
++ val = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
++ cr = readl(imx->mmio_base + MX3_PWMCR);
++ tmp = NSEC_PER_SEC * (u64)(val + 2) * MX3_PWMCR_PRESCALER_GET(cr);
++ tmp = DIV_ROUND_UP_ULL(tmp, clkrate);
++ period_us = DIV_ROUND_UP_ULL(tmp, 1000);
++
++ /*
++ * ERR051198:
++ * PWM: PWM output may not function correctly if the FIFO is empty when
++ * a new SAR value is programmed
++ *
++ * Description:
++ * When the PWM FIFO is empty, a new value programmed to the PWM Sample
++ * register (PWM_PWMSAR) will be directly applied even if the current
++ * timer period has not expired.
++ *
++ * If the new SAMPLE value programmed in the PWM_PWMSAR register is
++ * less than the previous value, and the PWM counter register
++ * (PWM_PWMCNR) that contains the current COUNT value is greater than
++ * the new programmed SAMPLE value, the current period will not flip
++ * the level. This may result in an output pulse with a duty cycle of
++ * 100%.
++ *
++ * Consider a change from
++ * ________
++ * / \______/
++ * ^ * ^
++ * to
++ * ____
++ * / \__________/
++ * ^ ^
++ * At the time marked by *, the new write value will be directly applied
++ * to SAR even the current period is not over if FIFO is empty.
++ *
++ * ________ ____________________
++ * / \______/ \__________/
++ * ^ ^ * ^ ^
++ * |<-- old SAR -->| |<-- new SAR -->|
++ *
++ * That is the output is active for a whole period.
++ *
++ * Workaround:
++ * Check new SAR less than old SAR and current counter is in errata
++ * windows, write extra old SAR into FIFO and new SAR will effect at
++ * next period.
++ *
++ * Sometime period is quite long, such as over 1 second. If add old SAR
++ * into FIFO unconditional, new SAR have to wait for next period. It
++ * may be too long.
++ *
++ * Turn off the interrupt to ensure that not IRQ and schedule happen
++ * during above operations. If any irq and schedule happen, counter
++ * in PWM will be out of data and take wrong action.
++ *
++ * Add a safety margin 1.5us because it needs some time to complete
++ * IO write.
++ *
++ * Use writel_relaxed() to minimize the interval between two writes to
++ * the SAR register to increase the fastest PWM frequency supported.
++ *
++ * When the PWM period is longer than 2us(or <500kHz), this workaround
++ * can solve this problem. No software workaround is available if PWM
++ * period is shorter than IO write. Just try best to fill old data
++ * into FIFO.
++ */
++ c = clkrate * 1500;
++ do_div(c, NSEC_PER_SEC);
++
++ local_irq_save(flags);
++ val = FIELD_GET(MX3_PWMSR_FIFOAV, readl_relaxed(imx->mmio_base + MX3_PWMSR));
++
++ if (duty_cycles < imx->duty_cycle && (cr & MX3_PWMCR_EN)) {
++ if (period_us < 2) { /* 2us = 500 kHz */
++ /* Best effort attempt to fix up >500 kHz case */
++ udelay(3 * period_us);
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ } else if (val < MX3_PWMSR_FIFOAV_2WORDS) {
++ val = readl_relaxed(imx->mmio_base + MX3_PWMCNR);
++ /*
++ * If counter is close to period, controller may roll over when
++ * next IO write.
++ */
++ if ((val + c >= duty_cycles && val < imx->duty_cycle) ||
++ val + c >= period_cycles)
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ }
++ }
++ writel_relaxed(duty_cycles, imx->mmio_base + MX3_PWMSAR);
++ local_irq_restore(flags);
++
+ writel(period_cycles, imx->mmio_base + MX3_PWMPR);
+
+ /*
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 3b7e06b9f5ce96..678428ab42215b 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -11,7 +11,7 @@
+ #include <linux/regulator/of_regulator.h>
+ #include <linux/soc/qcom/smd-rpm.h>
+
+-struct qcom_smd_rpm *smd_vreg_rpm;
++static struct qcom_smd_rpm *smd_vreg_rpm;
+
+ struct qcom_rpm_reg {
+ struct device *dev;
+diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
+index 14b60abd6afc64..37476d2558fda7 100644
+--- a/drivers/regulator/rk808-regulator.c
++++ b/drivers/regulator/rk808-regulator.c
+@@ -1379,6 +1379,8 @@ static const struct regulator_desc rk809_reg[] = {
+ .n_linear_ranges = ARRAY_SIZE(rk817_buck1_voltage_ranges),
+ .vsel_reg = RK817_BUCK3_ON_VSEL_REG,
+ .vsel_mask = RK817_BUCK_VSEL_MASK,
++ .apply_reg = RK817_POWER_CONFIG,
++ .apply_bit = RK817_BUCK3_FB_RES_INTER,
+ .enable_reg = RK817_POWER_EN_REG(0),
+ .enable_mask = ENABLE_MASK(RK817_ID_DCDC3),
+ .enable_val = ENABLE_MASK(RK817_ID_DCDC3),
+@@ -1851,7 +1853,7 @@ static int rk808_regulator_dt_parse_pdata(struct device *dev,
+ }
+
+ if (!pdata->dvs_gpio[i]) {
+- dev_info(dev, "there is no dvs%d gpio\n", i);
++ dev_dbg(dev, "there is no dvs%d gpio\n", i);
+ continue;
+ }
+
+@@ -1887,12 +1889,6 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ if (!pdata)
+ return -ENOMEM;
+
+- ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
+- if (ret < 0)
+- return ret;
+-
+- platform_set_drvdata(pdev, pdata);
+-
+ switch (rk808->variant) {
+ case RK805_ID:
+ regulators = rk805_reg;
+@@ -1903,6 +1899,11 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ nregulators = ARRAY_SIZE(rk806_reg);
+ break;
+ case RK808_ID:
++ /* DVS0/1 GPIOs are supported on the RK808 only */
++ ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
++ if (ret < 0)
++ return ret;
++
+ regulators = rk808_reg;
+ nregulators = RK808_NUM_REGULATORS;
+ break;
+@@ -1928,6 +1929,8 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
++ platform_set_drvdata(pdev, pdata);
++
+ config.dev = &pdev->dev;
+ config.driver_data = pdata;
+ config.regmap = regmap;
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index 572dcb0f055b76..223f6ca0745d3d 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -734,15 +734,22 @@ static int adsp_probe(struct platform_device *pdev)
+ desc->ssctl_id);
+ if (IS_ERR(adsp->sysmon)) {
+ ret = PTR_ERR(adsp->sysmon);
+- goto disable_pm;
++ goto deinit_remove_glink_pdm_ssr;
+ }
+
+ ret = rproc_add(rproc);
+ if (ret)
+- goto disable_pm;
++ goto remove_sysmon;
+
+ return 0;
+
++remove_sysmon:
++ qcom_remove_sysmon_subdev(adsp->sysmon);
++deinit_remove_glink_pdm_ssr:
++ qcom_q6v5_deinit(&adsp->q6v5);
++ qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
++ qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
++ qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
+ disable_pm:
+ qcom_rproc_pds_detach(adsp);
+
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 2a42215ce8e07b..32c3531b20c70a 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1162,6 +1162,9 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ goto disable_active_clks;
+ }
+
++ if (qproc->has_mba_logs)
++ qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
++
+ writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
+ if (qproc->dp_size) {
+ writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + RMB_PMI_CODE_START_REG);
+@@ -1172,9 +1175,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ if (ret)
+ goto reclaim_mba;
+
+- if (qproc->has_mba_logs)
+- qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
+-
+ ret = q6v5_rmb_mba_wait(qproc, 0, 5000);
+ if (ret == -ETIMEDOUT) {
+ dev_err(qproc->dev, "MBA boot timed out\n");
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 88e7b84f223c02..a4abec1e1e846d 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -759,16 +759,16 @@ static int adsp_probe(struct platform_device *pdev)
+
+ ret = adsp_init_clock(adsp);
+ if (ret)
+- goto free_rproc;
++ goto unassign_mem;
+
+ ret = adsp_init_regulator(adsp);
+ if (ret)
+- goto free_rproc;
++ goto unassign_mem;
+
+ ret = adsp_pds_attach(&pdev->dev, adsp->proxy_pds,
+ desc->proxy_pd_names);
+ if (ret < 0)
+- goto free_rproc;
++ goto unassign_mem;
+ adsp->proxy_pd_count = ret;
+
+ ret = qcom_q6v5_init(&adsp->q6v5, pdev, rproc, desc->crash_reason_smem, desc->load_state,
+@@ -784,18 +784,28 @@ static int adsp_probe(struct platform_device *pdev)
+ desc->ssctl_id);
+ if (IS_ERR(adsp->sysmon)) {
+ ret = PTR_ERR(adsp->sysmon);
+- goto detach_proxy_pds;
++ goto deinit_remove_pdm_smd_glink;
+ }
+
+ qcom_add_ssr_subdev(rproc, &adsp->ssr_subdev, desc->ssr_name);
+ ret = rproc_add(rproc);
+ if (ret)
+- goto detach_proxy_pds;
++ goto remove_ssr_sysmon;
+
+ return 0;
+
++remove_ssr_sysmon:
++ qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
++ qcom_remove_sysmon_subdev(adsp->sysmon);
++deinit_remove_pdm_smd_glink:
++ qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
++ qcom_remove_smd_subdev(rproc, &adsp->smd_subdev);
++ qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
++ qcom_q6v5_deinit(&adsp->q6v5);
+ detach_proxy_pds:
+ adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++unassign_mem:
++ adsp_unassign_memory_region(adsp);
+ free_rproc:
+ device_init_wakeup(adsp->dev, false);
+
+@@ -890,6 +900,7 @@ static const struct adsp_data sm8250_adsp_resource = {
+ .crash_reason_smem = 423,
+ .firmware_name = "adsp.mdt",
+ .pas_id = 1,
++ .minidump_id = 5,
+ .auto_boot = true,
+ .proxy_pd_names = (char*[]){
+ "lcx",
+@@ -1071,6 +1082,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
+ .crash_reason_smem = 601,
+ .firmware_name = "cdsp.mdt",
+ .pas_id = 18,
++ .minidump_id = 7,
+ .auto_boot = true,
+ .proxy_pd_names = (char*[]){
+ "cx",
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index d877a1a1aeb4bf..c7f91a82e634f4 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1117,7 +1117,8 @@ void qcom_glink_native_rx(struct qcom_glink *glink)
+ qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
+ break;
+ case GLINK_CMD_OPEN:
+- ret = qcom_glink_rx_defer(glink, param2);
++ /* upper 16 bits of param2 are the "prio" field */
++ ret = qcom_glink_rx_defer(glink, param2 & 0xffff);
+ break;
+ case GLINK_CMD_TX_DATA:
+ case GLINK_CMD_TX_DATA_CONT:
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index cca650b2e0b94d..aaf76406cd7d7d 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -904,13 +904,18 @@ void rtc_timer_do_work(struct work_struct *work)
+ struct timerqueue_node *next;
+ ktime_t now;
+ struct rtc_time tm;
++ int err;
+
+ struct rtc_device *rtc =
+ container_of(work, struct rtc_device, irqwork);
+
+ mutex_lock(&rtc->ops_lock);
+ again:
+- __rtc_read_time(rtc, &tm);
++ err = __rtc_read_time(rtc, &tm);
++ if (err) {
++ mutex_unlock(&rtc->ops_lock);
++ return;
++ }
+ now = rtc_tm_to_ktime(tm);
+ while ((next = timerqueue_getnext(&rtc->timerqueue))) {
+ if (next->expires > now)
+diff --git a/drivers/rtc/rtc-ab-eoz9.c b/drivers/rtc/rtc-ab-eoz9.c
+index 02f7d071128772..e17bce9a27468b 100644
+--- a/drivers/rtc/rtc-ab-eoz9.c
++++ b/drivers/rtc/rtc-ab-eoz9.c
+@@ -396,13 +396,6 @@ static int abeoz9z3_temp_read(struct device *dev,
+ if (ret < 0)
+ return ret;
+
+- if ((val & ABEOZ9_REG_CTRL_STATUS_V1F) ||
+- (val & ABEOZ9_REG_CTRL_STATUS_V2F)) {
+- dev_err(dev,
+- "thermometer might be disabled due to low voltage\n");
+- return -EINVAL;
+- }
+-
+ switch (attr) {
+ case hwmon_temp_input:
+ ret = regmap_read(regmap, ABEOZ9_REG_REG_TEMP, &val);
+diff --git a/drivers/rtc/rtc-abx80x.c b/drivers/rtc/rtc-abx80x.c
+index 1298962402ff47..3fee27914ba805 100644
+--- a/drivers/rtc/rtc-abx80x.c
++++ b/drivers/rtc/rtc-abx80x.c
+@@ -39,7 +39,7 @@
+ #define ABX8XX_REG_STATUS 0x0f
+ #define ABX8XX_STATUS_AF BIT(2)
+ #define ABX8XX_STATUS_BLF BIT(4)
+-#define ABX8XX_STATUS_WDT BIT(6)
++#define ABX8XX_STATUS_WDT BIT(5)
+
+ #define ABX8XX_REG_CTRL1 0x10
+ #define ABX8XX_CTRL_WRITE BIT(0)
+diff --git a/drivers/rtc/rtc-rzn1.c b/drivers/rtc/rtc-rzn1.c
+index 56ebbd4d048147..8570c8e63d70c3 100644
+--- a/drivers/rtc/rtc-rzn1.c
++++ b/drivers/rtc/rtc-rzn1.c
+@@ -111,8 +111,8 @@ static int rzn1_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ tm->tm_hour = bcd2bin(tm->tm_hour);
+ tm->tm_wday = bcd2bin(tm->tm_wday);
+ tm->tm_mday = bcd2bin(tm->tm_mday);
+- tm->tm_mon = bcd2bin(tm->tm_mon);
+- tm->tm_year = bcd2bin(tm->tm_year);
++ tm->tm_mon = bcd2bin(tm->tm_mon) - 1;
++ tm->tm_year = bcd2bin(tm->tm_year) + 100;
+
+ return 0;
+ }
+@@ -128,8 +128,8 @@ static int rzn1_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ tm->tm_hour = bin2bcd(tm->tm_hour);
+ tm->tm_wday = bin2bcd(rzn1_rtc_tm_to_wday(tm));
+ tm->tm_mday = bin2bcd(tm->tm_mday);
+- tm->tm_mon = bin2bcd(tm->tm_mon);
+- tm->tm_year = bin2bcd(tm->tm_year);
++ tm->tm_mon = bin2bcd(tm->tm_mon + 1);
++ tm->tm_year = bin2bcd(tm->tm_year - 100);
+
+ val = readl(rtc->base + RZN1_RTC_CTL2);
+ if (!(val & RZN1_RTC_CTL2_STOPPED)) {
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index d492a2d26600c1..c6d4522411b312 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -218,15 +218,14 @@ static int st_rtc_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler, 0,
+- pdev->name, rtc);
++ ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler,
++ IRQF_NO_AUTOEN, pdev->name, rtc);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request irq %i\n", rtc->irq);
+ return ret;
+ }
+
+ enable_irq_wake(rtc->irq);
+- disable_irq(rtc->irq);
+
+ rtc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(rtc->clk))
+diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
+index c32e818f06dbad..ad17ab0a931494 100644
+--- a/drivers/s390/cio/cio.c
++++ b/drivers/s390/cio/cio.c
+@@ -459,10 +459,14 @@ int cio_update_schib(struct subchannel *sch)
+ {
+ struct schib schib;
+
+- if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
++ if (stsch(sch->schid, &schib))
+ return -ENODEV;
+
+ memcpy(&sch->schib, &schib, sizeof(schib));
++
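++ /* the schib was read successfully but is not valid: report that distinctly */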
++ if (!css_sch_is_valid(&schib))
++ return -EACCES;
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(cio_update_schib);
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index b0f23242e17145..9498825d9c7a5c 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1387,14 +1387,18 @@ enum io_sch_action {
+ IO_SCH_VERIFY,
+ IO_SCH_DISC,
+ IO_SCH_NOP,
++ IO_SCH_ORPH_CDEV,
+ };
+
+ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
++ int rc;
+
+ cdev = sch_get_cdev(sch);
+- if (cio_update_schib(sch)) {
++ rc = cio_update_schib(sch);
++
++ if (rc == -ENODEV) {
+ /* Not operational. */
+ if (!cdev)
+ return IO_SCH_UNREG;
+@@ -1402,6 +1406,16 @@ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ return IO_SCH_UNREG;
+ return IO_SCH_ORPH_UNREG;
+ }
++
++ /* Avoid unregistering subchannels without a working device. */
++ if (rc == -EACCES) {
++ if (!cdev)
++ return IO_SCH_NOP;
++ if (ccw_device_notify(cdev, CIO_GONE) != NOTIFY_OK)
++ return IO_SCH_UNREG_CDEV;
++ return IO_SCH_ORPH_CDEV;
++ }
++
+ /* Operational. */
+ if (!cdev)
+ return IO_SCH_ATTACH;
+@@ -1471,6 +1485,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ rc = 0;
+ goto out_unlock;
+ case IO_SCH_ORPH_UNREG:
++ case IO_SCH_ORPH_CDEV:
+ case IO_SCH_ORPH_ATTACH:
+ ccw_device_set_disconnected(cdev);
+ break;
+@@ -1502,6 +1517,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ /* Handle attached ccw device. */
+ switch (action) {
+ case IO_SCH_ORPH_UNREG:
++ case IO_SCH_ORPH_CDEV:
+ case IO_SCH_ORPH_ATTACH:
+ /* Move ccw device to orphanage. */
+ rc = ccw_device_move_to_orph(cdev);
+diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c
+index 62cb7a864fd53d..70c7515a822f52 100644
+--- a/drivers/scsi/bfa/bfad.c
++++ b/drivers/scsi/bfa/bfad.c
+@@ -1693,9 +1693,8 @@ bfad_init(void)
+
+ error = bfad_im_module_init();
+ if (error) {
+- error = -ENOMEM;
+ printk(KERN_WARNING "bfad_im_module_init failure\n");
+- goto ext;
++ return -ENOMEM;
+ }
+
+ if (strcmp(FCPI_NAME, " fcpim") == 0)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index ec1a3e7ee94d30..4a51db160ac8eb 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1545,10 +1545,16 @@ void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba)
+ /* Init and wait for PHYs to come up and all libsas event finished. */
+ for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
+ struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
++ struct asd_sas_phy *sas_phy = &phy->sas_phy;
+
+- if (!(hisi_hba->phy_state & BIT(phy_no)))
++ if (!sas_phy->phy->enabled)
+ continue;
+
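++ /* PHY administratively enabled but not up after reset: start it rather than skip it */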
++ if (!(hisi_hba->phy_state & BIT(phy_no))) {
++ hisi_sas_phy_enable(hisi_hba, phy_no, 1);
++ continue;
++ }
++
+ async_schedule_domain(hisi_sas_async_init_wait_phyup,
+ phy, &async);
+ }
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 4813087e58a157..6f1fc88c59fc0c 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -2738,6 +2738,7 @@ static int qedf_alloc_and_init_sb(struct qedf_ctx *qedf,
+ sb_id, QED_SB_TYPE_STORAGE);
+
+ if (ret) {
++ dma_free_coherent(&qedf->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+ QEDF_ERR(&qedf->dbg_ctx,
+ "Status block initialization failed (0x%x) for id = %d.\n",
+ ret, sb_id);
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index cd0180b1f5b9da..ede8d1f6ae2361 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -369,6 +369,7 @@ static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
+ ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
+ sb_id, QED_SB_TYPE_STORAGE);
+ if (ret) {
++ dma_free_coherent(&qedi->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+ QEDI_ERR(&qedi->dbg_ctx,
+ "Status block initialization failed for id = %d.\n",
+ sb_id);
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index baf870a03ecf6c..9e6eebf307d3a0 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -307,10 +307,6 @@ sg_open(struct inode *inode, struct file *filp)
+ if (retval)
+ goto sg_put;
+
+- retval = scsi_autopm_get_device(device);
+- if (retval)
+- goto sdp_put;
+-
+ /* scsi_block_when_processing_errors() may block so bypass
+ * check if O_NONBLOCK. Permits SCSI commands to be issued
+ * during error recovery. Tread carefully. */
+@@ -318,7 +314,7 @@ sg_open(struct inode *inode, struct file *filp)
+ scsi_block_when_processing_errors(device))) {
+ retval = -ENXIO;
+ /* we are in error recovery for this device */
+- goto error_out;
++ goto sdp_put;
+ }
+
+ mutex_lock(&sdp->open_rel_lock);
+@@ -371,8 +367,6 @@ sg_open(struct inode *inode, struct file *filp)
+ }
+ error_mutex_locked:
+ mutex_unlock(&sdp->open_rel_lock);
+-error_out:
+- scsi_autopm_put_device(device);
+ sdp_put:
+ kref_put(&sdp->d_ref, sg_device_destroy);
+ scsi_device_put(device);
+@@ -392,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp)
+ SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
+
+ mutex_lock(&sdp->open_rel_lock);
+- scsi_autopm_put_device(sdp->device);
+ kref_put(&sfp->f_ref, sg_remove_sfp);
+ sdp->open_cnt--;
+
+diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c
+index 74350b5871dc8e..ea571eeb307878 100644
+--- a/drivers/sh/intc/core.c
++++ b/drivers/sh/intc/core.c
+@@ -209,7 +209,6 @@ int __init register_intc_controller(struct intc_desc *desc)
+ goto err0;
+
+ INIT_LIST_HEAD(&d->list);
+- list_add_tail(&d->list, &intc_list);
+
+ raw_spin_lock_init(&d->lock);
+ INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
+@@ -369,6 +368,7 @@ int __init register_intc_controller(struct intc_desc *desc)
+
+ d->skip_suspend = desc->skip_syscore_suspend;
+
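++ /* publish on intc_list only once the controller is fully initialized */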
++ list_add_tail(&d->list, &intc_list);
+ nr_intc_controllers++;
+
+ return 0;
+diff --git a/drivers/soc/fsl/rcpm.c b/drivers/soc/fsl/rcpm.c
+index 3d0cae30c769ea..06bd94b29fb321 100644
+--- a/drivers/soc/fsl/rcpm.c
++++ b/drivers/soc/fsl/rcpm.c
+@@ -36,6 +36,7 @@ static void copy_ippdexpcr1_setting(u32 val)
+ return;
+
+ regs = of_iomap(np, 0);
++ of_node_put(np);
+ if (!regs)
+ return;
+
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index 2e8f24d5da80b6..4cb959106efa9e 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -585,7 +585,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
+
+ for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
+ freq = clk_round_rate(se->clk, freq + 1);
+- if (freq <= 0 || freq == se->clk_perf_tbl[i - 1])
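++ /* check i > 0 so the first iteration cannot read clk_perf_tbl[-1] */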
++ if (freq <= 0 ||
++ (i > 0 && freq == se->clk_perf_tbl[i - 1]))
+ break;
+ se->clk_perf_tbl[i] = freq;
+ }
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index d7359a235e3cff..945e4e11817384 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -782,10 +782,16 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
+ qs->attr.revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u.%u",
+ SOCINFO_MAJOR(le32_to_cpu(info->ver)),
+ SOCINFO_MINOR(le32_to_cpu(info->ver)));
+- if (offsetof(struct socinfo, serial_num) <= item_size)
++ if (!qs->attr.soc_id || !qs->attr.revision)
++ return -ENOMEM;
++
++ if (offsetof(struct socinfo, serial_num) <= item_size) {
+ qs->attr.serial_number = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%u",
+ le32_to_cpu(info->serial_num));
++ if (!qs->attr.serial_number)
++ return -ENOMEM;
++ }
+
+ qs->soc_dev = soc_device_register(&qs->attr);
+ if (IS_ERR(qs->soc_dev))
+diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c
+index d6219060b616d6..38add2ab561372 100644
+--- a/drivers/soc/ti/smartreflex.c
++++ b/drivers/soc/ti/smartreflex.c
+@@ -202,10 +202,10 @@ static int sr_late_init(struct omap_sr *sr_info)
+
+ if (sr_class->notify && sr_class->notify_flags && sr_info->irq) {
+ ret = devm_request_irq(&sr_info->pdev->dev, sr_info->irq,
+- sr_interrupt, 0, sr_info->name, sr_info);
++ sr_interrupt, IRQF_NO_AUTOEN,
++ sr_info->name, sr_info);
+ if (ret)
+ goto error;
+- disable_irq(sr_info->irq);
+ }
+
+ return ret;
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index f529e1346247cc..85df6b9c04ee69 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -188,8 +188,10 @@ static int xlnx_add_cb_for_suspend(event_cb_func_t cb_fun, void *data)
+ INIT_LIST_HEAD(&eve_data->cb_list_head);
+
+ cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL);
+- if (!cb_data)
++ if (!cb_data) {
++ kfree(eve_data);
+ return -ENOMEM;
++ }
+ cb_data->eve_cb = cb_fun;
+ cb_data->agent_data = data;
+
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 9ea91432c11d81..e583146bf85049 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -183,7 +183,7 @@ static const char *atmel_qspi_reg_name(u32 offset, char *tmp, size_t sz)
+ case QSPI_MR:
+ return "MR";
+ case QSPI_RD:
+- return "MR";
++ return "RD";
+ case QSPI_TD:
+ return "TD";
+ case QSPI_SR:
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 977e8b55c82b7d..9573b8fa4fbfc6 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -891,7 +891,7 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, 0,
++ ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, IRQF_NO_AUTOEN,
+ dev_name(&pdev->dev), fsl_lpspi);
+ if (ret) {
+ dev_err(&pdev->dev, "can't get irq%d: %d\n", irq, ret);
+@@ -948,14 +948,10 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller);
+ if (ret == -EPROBE_DEFER)
+ goto out_pm_get;
+- if (ret < 0)
++ if (ret < 0) {
+ dev_warn(&pdev->dev, "dma setup error %d, use pio\n", ret);
+- else
+- /*
+- * disable LPSPI module IRQ when enable DMA mode successfully,
+- * to prevent the unexpected LPSPI module IRQ events.
+- */
+- disable_irq(irq);
++ enable_irq(irq);
++ }
+
+ ret = devm_spi_register_controller(&pdev->dev, controller);
+ if (ret < 0) {
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 4c4ff074e3f6f8..fc72a89fb3a7b7 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -2044,6 +2044,7 @@ static const struct stm32_spi_cfg stm32mp25_spi_cfg = {
+ .baud_rate_div_max = STM32H7_SPI_MBR_DIV_MAX,
+ .has_fifo = true,
+ .prevent_dma_burst = true,
++ .has_device_mode = true,
+ };
+
+ static const struct of_device_id stm32_spi_of_match[] = {
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index afbd64a217eb06..43f11b0e9e765c 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -341,7 +341,7 @@ tegra_qspi_fill_tx_fifo_from_client_txbuf(struct tegra_qspi *tqspi, struct spi_t
+ for (count = 0; count < max_n_32bit; count++) {
+ u32 x = 0;
+
+- for (i = 0; len && (i < bytes_per_word); i++, len--)
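++ /* x is a single 32-bit FIFO word, so pack at most 4 bytes into it */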
++ for (i = 0; len && (i < min(4, bytes_per_word)); i++, len--)
+ x |= (u32)(*tx_buf++) << (i * 8);
+ tegra_qspi_writel(tqspi, x, QSPI_TX_FIFO);
+ }
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 558c466135a51b..d3e369e0fe5cfd 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1359,6 +1359,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+
+ clk_dis_all:
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ clk_disable_unprepare(xqspi->refclk);
+@@ -1389,6 +1390,7 @@ static void zynqmp_qspi_remove(struct platform_device *pdev)
+ zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
+
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ clk_disable_unprepare(xqspi->refclk);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 6ebe5dd9bbb185..c89965b29ef266 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -424,6 +424,16 @@ static int spi_probe(struct device *dev)
+ spi->irq = 0;
+ }
+
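++ /*
++ * Look up any ACPI GpioInt IRQ here in probe, where an
++ * -EPROBE_DEFER result can still be handled.
++ */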
++ if (has_acpi_companion(dev) && spi->irq < 0) {
++ struct acpi_device *adev = to_acpi_device_node(dev->fwnode);
++
++ spi->irq = acpi_dev_gpio_irq_get(adev, 0);
++ if (spi->irq == -EPROBE_DEFER)
++ return -EPROBE_DEFER;
++ if (spi->irq < 0)
++ spi->irq = 0;
++ }
++
+ ret = dev_pm_domain_attach(dev, true);
+ if (ret)
+ return ret;
+@@ -2869,9 +2879,6 @@ static acpi_status acpi_register_spi_device(struct spi_controller *ctlr,
+ acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias,
+ sizeof(spi->modalias));
+
+- if (spi->irq < 0)
+- spi->irq = acpi_dev_gpio_irq_get(adev, 0);
+-
+ acpi_device_set_enumerated(adev);
+
+ adev->power.flags.ignore_parent = true;
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
+index 232744973ab887..b1feb6f6ebe895 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
+@@ -4181,6 +4181,8 @@ ia_css_3a_statistics_allocate(const struct ia_css_3a_grid_info *grid)
+ goto err;
+ /* No weighted histogram, no structure, treat the histogram data as a byte dump in a byte array */
+ me->rgby_data = kvmalloc(sizeof_hmem(HMEM0_ID), GFP_KERNEL);
++ if (!me->rgby_data)
++ goto err;
+
+ IA_CSS_LEAVE("return=%p", me);
+ return me;
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 0ea1019b9edf6a..1e0d3d871fd41d 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1715,7 +1715,6 @@ MODULE_DEVICE_TABLE(of, vchiq_of_match);
+
+ static int vchiq_probe(struct platform_device *pdev)
+ {
+- struct device_node *fw_node;
+ const struct vchiq_platform_info *info;
+ struct vchiq_drv_mgmt *mgmt;
+ int ret;
+@@ -1724,8 +1723,8 @@ static int vchiq_probe(struct platform_device *pdev)
+ if (!info)
+ return -EINVAL;
+
+- fw_node = of_find_compatible_node(NULL, NULL,
+- "raspberrypi,bcm2835-firmware");
++ struct device_node *fw_node __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
+ if (!fw_node) {
+ dev_err(&pdev->dev, "Missing firmware node\n");
+ return -ENOENT;
+@@ -1736,7 +1735,6 @@ static int vchiq_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ mgmt->fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
+- of_node_put(fw_node);
+ if (!mgmt->fw)
+ return -EPROBE_DEFER;
+
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index f98ebb18666bf0..da7017113f92a6 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -369,7 +369,7 @@ static int pscsi_create_type_disk(struct se_device *dev, struct scsi_device *sd)
+ bdev_file = bdev_file_open_by_path(dev->udev_path,
+ BLK_OPEN_WRITE | BLK_OPEN_READ, pdv, NULL);
+ if (IS_ERR(bdev_file)) {
+- pr_err("pSCSI: bdev_open_by_path() failed\n");
++ pr_err("pSCSI: bdev_file_open_by_path() failed\n");
+ scsi_device_put(sd);
+ return PTR_ERR(bdev_file);
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 795be67ca878b4..a674469adad192 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -40,6 +40,8 @@ static DEFINE_MUTEX(thermal_governor_lock);
+
+ static struct thermal_governor *def_governor;
+
++static bool thermal_pm_suspended;
++
+ /*
+ * Governor section: set of functions to handle thermal governors
+ *
+@@ -553,10 +555,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ LIST_HEAD(way_up_list);
+ int temp, ret;
+
+- if (tz->suspended)
+- return;
+-
+- if (!thermal_zone_device_is_enabled(tz))
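++ /* bail out until initialization completes, and while suspended or resuming */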
++ if (tz->state != TZ_STATE_READY || tz->mode != THERMAL_DEVICE_ENABLED)
+ return;
+
+ ret = __thermal_zone_get_temp(tz, &temp);
+@@ -651,13 +650,6 @@ int thermal_zone_device_disable(struct thermal_zone_device *tz)
+ }
+ EXPORT_SYMBOL_GPL(thermal_zone_device_disable);
+
+-int thermal_zone_device_is_enabled(struct thermal_zone_device *tz)
+-{
+- lockdep_assert_held(&tz->lock);
+-
+- return tz->mode == THERMAL_DEVICE_ENABLED;
+-}
+-
+ static bool thermal_zone_is_present(struct thermal_zone_device *tz)
+ {
+ return !list_empty(&tz->node);
+@@ -1361,6 +1353,24 @@ int thermal_zone_get_crit_temp(struct thermal_zone_device *tz, int *temp)
+ }
+ EXPORT_SYMBOL_GPL(thermal_zone_get_crit_temp);
+
++static void thermal_zone_init_complete(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ tz->state &= ~TZ_STATE_FLAG_INIT;
++ /*
++ * If system suspend or resume is in progress at this point, the
++ * new thermal zone needs to be marked as suspended because
++ * thermal_pm_notify() has run already.
++ */
++ if (thermal_pm_suspended)
++ tz->state |= TZ_STATE_FLAG_SUSPENDED;
++
++ __thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++
++ mutex_unlock(&tz->lock);
++}
++
+ /**
+ * thermal_zone_device_register_with_trips() - register a new thermal zone device
+ * @type: the thermal zone device type
+@@ -1484,6 +1494,8 @@ thermal_zone_device_register_with_trips(const char *type,
+ tz->passive_delay_jiffies = msecs_to_jiffies(passive_delay);
+ tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+
++ tz->state = TZ_STATE_FLAG_INIT;
++
+ /* sys I/F */
+ /* Add nodes that are always present via .groups */
+ result = thermal_zone_create_device_groups(tz);
+@@ -1498,6 +1510,7 @@ thermal_zone_device_register_with_trips(const char *type,
+ thermal_zone_destroy_device_groups(tz);
+ goto remove_id;
+ }
++ thermal_zone_device_init(tz);
+ result = device_register(&tz->device);
+ if (result)
+ goto release_device;
+@@ -1541,12 +1554,9 @@ thermal_zone_device_register_with_trips(const char *type,
+ }
+ }
+
+- mutex_unlock(&thermal_list_lock);
++ thermal_zone_init_complete(tz);
+
+- thermal_zone_device_init(tz);
+- /* Update the new thermal zone and mark it as already updated. */
+- if (atomic_cmpxchg(&tz->need_update, 1, 0))
+- thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++ mutex_unlock(&thermal_list_lock);
+
+ thermal_notify_tz_create(tz);
+
+@@ -1703,7 +1713,7 @@ static void thermal_zone_device_resume(struct work_struct *work)
+
+ mutex_lock(&tz->lock);
+
+- tz->suspended = false;
++ tz->state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING);
+
+ thermal_debug_tz_resume(tz);
+ thermal_zone_device_init(tz);
+@@ -1711,7 +1721,48 @@ static void thermal_zone_device_resume(struct work_struct *work)
+ __thermal_zone_device_update(tz, THERMAL_TZ_RESUME);
+
+ complete(&tz->resume);
+- tz->resuming = false;
++
++ mutex_unlock(&tz->lock);
++}
++
++static void thermal_zone_pm_prepare(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ if (tz->state & TZ_STATE_FLAG_RESUMING) {
++ /*
++ * thermal_zone_device_resume() queued up for this zone has not
++ * acquired the lock yet, so release it to let the function run
++ * and wait until it has done the work.
++ */
++ mutex_unlock(&tz->lock);
++
++ wait_for_completion(&tz->resume);
++
++ mutex_lock(&tz->lock);
++ }
++
++ tz->state |= TZ_STATE_FLAG_SUSPENDED;
++
++ mutex_unlock(&tz->lock);
++}
++
++static void thermal_zone_pm_complete(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ cancel_delayed_work(&tz->poll_queue);
++
++ reinit_completion(&tz->resume);
++ tz->state |= TZ_STATE_FLAG_RESUMING;
++
++ /*
++ * Replace the work function with the resume one, which will restore the
++ * original work function and schedule the polling work if needed.
++ */
++ INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_resume);
++ /* Queue up the work without a delay. */
++ mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, 0);
+
+ mutex_unlock(&tz->lock);
+ }
+@@ -1727,27 +1778,10 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ case PM_SUSPEND_PREPARE:
+ mutex_lock(&thermal_list_lock);
+
+- list_for_each_entry(tz, &thermal_tz_list, node) {
+- mutex_lock(&tz->lock);
+-
+- if (tz->resuming) {
+- /*
+- * thermal_zone_device_resume() queued up for
+- * this zone has not acquired the lock yet, so
+- * release it to let the function run and wait
+- * util it has done the work.
+- */
+- mutex_unlock(&tz->lock);
++ thermal_pm_suspended = true;
+
+- wait_for_completion(&tz->resume);
+-
+- mutex_lock(&tz->lock);
+- }
+-
+- tz->suspended = true;
+-
+- mutex_unlock(&tz->lock);
+- }
++ list_for_each_entry(tz, &thermal_tz_list, node)
++ thermal_zone_pm_prepare(tz);
+
+ mutex_unlock(&thermal_list_lock);
+ break;
+@@ -1756,27 +1790,10 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ case PM_POST_SUSPEND:
+ mutex_lock(&thermal_list_lock);
+
+- list_for_each_entry(tz, &thermal_tz_list, node) {
+- mutex_lock(&tz->lock);
+-
+- cancel_delayed_work(&tz->poll_queue);
+-
+- reinit_completion(&tz->resume);
+- tz->resuming = true;
++ thermal_pm_suspended = false;
+
+- /*
+- * Replace the work function with the resume one, which
+- * will restore the original work function and schedule
+- * the polling work if needed.
+- */
+- INIT_DELAYED_WORK(&tz->poll_queue,
+- thermal_zone_device_resume);
+- /* Queue up the work without a delay. */
+- mod_delayed_work(system_freezable_power_efficient_wq,
+- &tz->poll_queue, 0);
+-
+- mutex_unlock(&tz->lock);
+- }
++ list_for_each_entry(tz, &thermal_tz_list, node)
++ thermal_zone_pm_complete(tz);
+
+ mutex_unlock(&thermal_list_lock);
+ break;
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 57fff8bd57611d..1605a930814a53 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -61,6 +61,12 @@ struct thermal_governor {
+ struct list_head governor_list;
+ };
+
++#define TZ_STATE_FLAG_SUSPENDED BIT(0)
++#define TZ_STATE_FLAG_RESUMING BIT(1)
++#define TZ_STATE_FLAG_INIT BIT(2)
++
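++/* no flags set: the zone is initialized and neither suspended nor resuming */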
++#define TZ_STATE_READY 0
++
+ /**
+ * struct thermal_zone_device - structure for a thermal zone
+ * @id: unique id number for each thermal zone
+@@ -100,8 +106,7 @@ struct thermal_governor {
+ * @node: node in thermal_tz_list (in thermal_core.c)
+ * @poll_queue: delayed work for polling
+ * @notify_event: Last notification event
+- * @suspended: thermal zone suspend indicator
+- * @resuming: indicates whether or not thermal zone resume is in progress
++ * @state: current state of the thermal zone
+ * @trips: array of struct thermal_trip objects
+ */
+ struct thermal_zone_device {
+@@ -134,8 +139,7 @@ struct thermal_zone_device {
+ struct list_head node;
+ struct delayed_work poll_queue;
+ enum thermal_notify_event notify_event;
+- bool suspended;
+- bool resuming;
++ u8 state;
+ #ifdef CONFIG_THERMAL_DEBUGFS
+ struct thermal_debugfs *debugfs;
+ #endif
+@@ -293,7 +297,4 @@ thermal_cooling_device_stats_update(struct thermal_cooling_device *cdev,
+ unsigned long new_state) {}
+ #endif /* CONFIG_THERMAL_STATISTICS */
+
+-/* device tree support */
+-int thermal_zone_device_is_enabled(struct thermal_zone_device *tz);
+-
+ #endif /* __THERMAL_CORE_H__ */
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index d628dd67be5ccf..21f73a230c53e3 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -53,7 +53,7 @@ mode_show(struct device *dev, struct device_attribute *attr, char *buf)
+ int enabled;
+
+ mutex_lock(&tz->lock);
+- enabled = thermal_zone_device_is_enabled(tz);
++ enabled = tz->mode == THERMAL_DEVICE_ENABLED;
+ mutex_unlock(&tz->lock);
+
+ return sprintf(buf, "%s\n", enabled ? "enabled" : "disabled");
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index e2aa2a1a02ddf5..ecbce226b8747e 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -21,6 +21,7 @@
+ #define CHIP_ID_F81866 0x1010
+ #define CHIP_ID_F81966 0x0215
+ #define CHIP_ID_F81216AD 0x1602
++#define CHIP_ID_F81216E 0x1617
+ #define CHIP_ID_F81216H 0x0501
+ #define CHIP_ID_F81216 0x0802
+ #define VENDOR_ID1 0x23
+@@ -158,6 +159,7 @@ static int fintek_8250_check_id(struct fintek_8250 *pdata)
+ case CHIP_ID_F81866:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ break;
+@@ -181,6 +183,7 @@ static int fintek_8250_get_ldn_range(struct fintek_8250 *pdata, int *min,
+ return 0;
+
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ *min = F81216_LDN_LOW;
+@@ -250,6 +253,7 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
+ break;
+
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ sio_write_mask_reg(pdata, FINTEK_IRQ_MODE, IRQ_SHARE,
+@@ -263,7 +267,8 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
+ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
+ {
+ switch (pdata->pid) {
+- case CHIP_ID_F81216H: /* 128Bytes FIFO */
++ case CHIP_ID_F81216E: /* 128Bytes FIFO */
++ case CHIP_ID_F81216H:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81866:
+ sio_write_mask_reg(pdata, FIFO_CTRL,
+@@ -297,6 +302,7 @@ static void fintek_8250_set_termios(struct uart_port *port,
+ goto exit;
+
+ switch (pdata->pid) {
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ reg = RS485;
+ break;
+@@ -346,6 +352,7 @@ static void fintek_8250_set_termios_handler(struct uart_8250_port *uart)
+ struct fintek_8250 *pdata = uart->port.private_data;
+
+ switch (pdata->pid) {
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81866:
+@@ -438,6 +445,11 @@ static void fintek_8250_set_rs485_handler(struct uart_8250_port *uart)
+ uart->port.rs485_supported = fintek_8250_rs485_supported;
+ break;
+
++ case CHIP_ID_F81216E: /* F81216E does not support RS485 delays */
++ uart->port.rs485_config = fintek_8250_rs485_config;
++ uart->port.rs485_supported = fintek_8250_rs485_supported;
++ break;
++
+ default: /* No RS485 Auto direction functional */
+ break;
+ }
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index fca5f25d693a72..a74db57912e4de 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -777,12 +777,12 @@ static void omap_8250_shutdown(struct uart_port *port)
+ struct uart_8250_port *up = up_to_u8250p(port);
+ struct omap8250_priv *priv = port->private_data;
+
++ pm_runtime_get_sync(port->dev);
++
+ flush_work(&priv->qos_work);
+ if (up->dma)
+ omap_8250_rx_dma_flush(up);
+
+- pm_runtime_get_sync(port->dev);
+-
+ serial_out(up, UART_OMAP_WER, 0);
+ if (priv->habit & UART_HAS_EFR2)
+ serial_out(up, UART_OMAP_EFR2, 0x0);
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 8b1644f5411ecf..41a2a90071a377 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1819,6 +1819,13 @@ static void pl011_unthrottle_rx(struct uart_port *port)
+
+ pl011_write(uap->im, uap, REG_IMSC);
+
++#ifdef CONFIG_DMA_ENGINE
++ if (uap->using_rx_dma) {
++ uap->dmacr |= UART011_RXDMAE;
++ pl011_write(uap->dmacr, uap, REG_DMACR);
++ }
++#endif
++
+ uart_port_unlock_irqrestore(&uap->port, flags);
+ }
+
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 407b0d87b7c108..f2111543674209 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -3631,7 +3631,7 @@ static struct ctl_table tty_table[] = {
+ .data = &tty_ldisc_autoload,
+ .maxlen = sizeof(tty_ldisc_autoload),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
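++ /* _minmax honors extra1/extra2, restricting the value to 0 or 1 */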
++ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index c9533a99e47c89..874497f86499b3 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -232,7 +232,7 @@ void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+ /* stall is always issued on EP0 */
+ dep = dwc->eps[0];
+ __dwc3_gadget_ep_set_halt(dep, 1, false);
+- dep->flags &= DWC3_EP_RESOURCE_ALLOCATED;
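++ /* also preserve DWC3_EP_TRANSFER_STARTED across the stall/restart */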
++ dep->flags &= DWC3_EP_RESOURCE_ALLOCATED | DWC3_EP_TRANSFER_STARTED;
+ dep->flags |= DWC3_EP_ENABLED;
+ dwc->delayed_status = false;
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 4959c26d3b71b8..271f9dc9c03574 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1177,11 +1177,14 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ * pending to be processed by the driver.
+ */
+ if (dep->trb_enqueue == dep->trb_dequeue) {
++ struct dwc3_request *req;
++
+ /*
+- * If there is any request remained in the started_list at
+- * this point, that means there is no TRB available.
++ * If there is any request remained in the started_list with
++ * active TRBs at this point, then there is no TRB available.
+ */
+- if (!list_empty(&dep->started_list))
++ req = next_request(&dep->started_list);
++ if (req && req->num_trbs)
+ return 0;
+
+ return DWC3_TRB_NUM - 1;
+@@ -1414,8 +1417,8 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ struct scatterlist *s;
+ int i;
+ unsigned int length = req->request.length;
+- unsigned int remaining = req->request.num_mapped_sgs
+- - req->num_queued_sgs;
++ unsigned int remaining = req->num_pending_sgs;
++ unsigned int num_queued_sgs = req->request.num_mapped_sgs - remaining;
+ unsigned int num_trbs = req->num_trbs;
+ bool needs_extra_trb = dwc3_needs_extra_trb(dep, req);
+
+@@ -1423,7 +1426,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ * If we resume preparing the request, then get the remaining length of
+ * the request and resume where we left off.
+ */
+- for_each_sg(req->request.sg, s, req->num_queued_sgs, i)
++ for_each_sg(req->request.sg, s, num_queued_sgs, i)
+ length -= sg_dma_len(s);
+
+ for_each_sg(sg, s, remaining, i) {
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index f45d5bedda689e..64109f39970f59 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2111,8 +2111,20 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ memset(buf, 0, w_length);
+ buf[5] = 0x01;
+ switch (ctrl->bRequestType & USB_RECIP_MASK) {
++ /*
++ * The Microsoft CompatID OS Descriptor Spec (w_index = 0x4) and the
++ * Extended Prop OS Desc Spec (w_index = 0x5) state that the
++ * HighByte of wValue is the InterfaceNumber and the LowByte is
++ * the PageNumber. This high/low byte ordering is incorrectly
++ * documented in the Spec. USB analyzer output on the below
++ * request packets shows the high/low bytes inverted, i.e. the
++ * LowByte is the InterfaceNumber and the HighByte is the PageNumber.
++ * Since we don't support >64KB CompatID/ExtendedProp descriptors,
++ * PageNumber is set to 0. Hence verify that the HighByte is 0
++ * for the two cases below.
++ */
+ case USB_RECIP_DEVICE:
+- if (w_index != 0x4 || (w_value & 0xff))
++ if (w_index != 0x4 || (w_value >> 8))
+ break;
+ buf[6] = w_index;
+ /* Number of ext compat interfaces */
+@@ -2128,9 +2140,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ }
+ break;
+ case USB_RECIP_INTERFACE:
+- if (w_index != 0x5 || (w_value & 0xff))
++ if (w_index != 0x5 || (w_value >> 8))
+ break;
+- interface = w_value >> 8;
++ interface = w_value & 0xFF;
+ if (interface >= MAX_CONFIG_INTERFACES ||
+ !os_desc_cfg->interface[interface])
+ break;
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index a9edd60fbbf779..48fd8d3c50b0c1 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -480,6 +480,10 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ * up later.
+ */
+ list_add_tail(&to_queue->list, &video->req_free);
++ /*
++ * There is a new free request - wake up the pump.
++ */
++ queue_work(video->async_wq, &video->pump);
+ }
+
+ spin_unlock_irqrestore(&video->req_lock, flags);
+diff --git a/drivers/usb/host/ehci-spear.c b/drivers/usb/host/ehci-spear.c
+index d0e94e4c9fe274..11294f196ee335 100644
+--- a/drivers/usb/host/ehci-spear.c
++++ b/drivers/usb/host/ehci-spear.c
+@@ -105,7 +105,9 @@ static int spear_ehci_hcd_drv_probe(struct platform_device *pdev)
+ /* registers start at offset 0x0 */
+ hcd_to_ehci(hcd)->caps = hcd->regs;
+
+- clk_prepare_enable(sehci->clk);
++ retval = clk_prepare_enable(sehci->clk);
++ if (retval)
++ goto err_put_hcd;
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (retval)
+ goto err_stop_ehci;
+@@ -130,8 +132,7 @@ static void spear_ehci_hcd_drv_remove(struct platform_device *pdev)
+
+ usb_remove_hcd(hcd);
+
+- if (sehci->clk)
+- clk_disable_unprepare(sehci->clk);
++ clk_disable_unprepare(sehci->clk);
+ usb_put_hcd(hcd);
+ }
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index e629b3442640a6..114f16a23eb4e8 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -401,14 +401,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+- pdev->device == PCI_DEVICE_ID_EJ168) {
+- xhci->quirks |= XHCI_RESET_ON_RESUME;
+- xhci->quirks |= XHCI_BROKEN_STREAMS;
+- }
+- if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+- pdev->device == PCI_DEVICE_ID_EJ188) {
++ (pdev->device == PCI_DEVICE_ID_EJ168 ||
++ pdev->device == PCI_DEVICE_ID_EJ188)) {
++ xhci->quirks |= XHCI_ETRON_HOST;
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
++ xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ }
+
+ if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 4479e949fa64b9..a723ae66e7c47f 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -52,6 +52,7 @@
+ * endpoint rings; it generates events on the event ring for these.
+ */
+
++#include <linux/jiffies.h>
+ #include <linux/scatterlist.h>
+ #include <linux/slab.h>
+ #include <linux/dma-mapping.h>
+@@ -972,6 +973,13 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ unsigned int slot_id = ep->vdev->slot_id;
+ int err;
+
++ /*
++ * This is not going to work if the hardware is changing its dequeue
++ * pointers as we look at them. The completion handler will call us later.
++ */
++ if (ep->ep_state & SET_DEQ_PENDING)
++ return 0;
++
+ xhci = ep->xhci;
+
+ list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
+@@ -1061,6 +1069,19 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ return 0;
+ }
+
++/*
++ * Erase queued TDs from transfer ring(s) and give back those the xHC didn't
++ * stop on. If necessary, queue commands to move the xHC off cancelled TDs it
++ * stopped on. Those will be given back later when the commands complete.
++ *
++ * Call under xhci->lock on a stopped endpoint.
++ */
++void xhci_process_cancelled_tds(struct xhci_virt_ep *ep)
++{
++ xhci_invalidate_cancelled_tds(ep);
++ xhci_giveback_invalidated_tds(ep);
++}
++
+ /*
+ * Returns the TD the endpoint ring halted on.
+ * Only call for non-running rings without streams.
+@@ -1151,16 +1172,35 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ return;
+ case EP_STATE_STOPPED:
+ /*
+- * NEC uPD720200 sometimes sets this state and fails with
+- * Context Error while continuing to process TRBs.
+- * Be conservative and trust EP_CTX_STATE on other chips.
++ * Per xHCI 4.6.9, Stop Endpoint command on a Stopped
++ * EP is a Context State Error, and EP stays Stopped.
++ *
++ * But maybe it failed on Halted, and somebody ran Reset
++ * Endpoint later. EP state is now Stopped and EP_HALTED
++ * still set because Reset EP handler will run after us.
++ */
++ if (ep->ep_state & EP_HALTED)
++ break;
++ /*
++ * On some HCs EP state remains Stopped for some tens of
++ * us to a few ms or more after a doorbell ring, and any
++ * new Stop Endpoint fails without aborting the restart.
++ * This handler may run quickly enough to still see this
++ * Stopped state, but it will soon change to Running.
++ *
++ * Assume this bug on unexpected Stop Endpoint failures.
++ * Keep retrying until the EP starts and stops again, on
++ * chips where this is known to help. Wait for 100ms.
+ */
+ if (!(xhci->quirks & XHCI_NEC_HOST))
+ break;
++ if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
++ break;
+ fallthrough;
+ case EP_STATE_RUNNING:
+ /* Race, HW handled stop ep cmd before ep was running */
+- xhci_dbg(xhci, "Stop ep completion ctx error, ep is running\n");
++ xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n",
++ GET_EP_CTX_STATE(ep_ctx));
+
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+@@ -1339,7 +1379,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ struct xhci_ep_ctx *ep_ctx;
+ struct xhci_slot_ctx *slot_ctx;
+ struct xhci_td *td, *tmp_td;
+- bool deferred = false;
+
+ ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+ stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
+@@ -1440,8 +1479,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n",
+ __func__, td->urb);
+ xhci_td_cleanup(ep->xhci, td, ep_ring, td->status);
+- } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) {
+- deferred = true;
+ } else {
+ xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n",
+ __func__, td->urb, td->cancel_status);
+@@ -1452,11 +1489,15 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ ep->queued_deq_seg = NULL;
+ ep->queued_deq_ptr = NULL;
+
+- if (deferred) {
+- /* We have more streams to clear */
++ /* Check for deferred or newly cancelled TDs */
++ if (!list_empty(&ep->cancelled_td_list)) {
+ xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n",
+ __func__);
+ xhci_invalidate_cancelled_tds(ep);
++ /* Try to restart the endpoint if all is done */
++ ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
++ /* Start giving back any TDs invalidated above */
++ xhci_giveback_invalidated_tds(ep);
+ } else {
+ /* Restart any rings with pending URBs */
+ xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__);
+@@ -3741,6 +3782,20 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ if (!urb->setup_packet)
+ return -EINVAL;
+
++ if ((xhci->quirks & XHCI_ETRON_HOST) &&
++ urb->dev->speed >= USB_SPEED_SUPER) {
++ /*
++ * If the next available TRB is the Link TRB in the ring segment,
++ * enqueue a No Op TRB first; this prevents the Setup and Data Stage
++ * TRBs from being split by the Link TRB.
++ */
++ if (trb_is_link(ep_ring->enqueue + 1)) {
++ field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;
++ queue_trb(xhci, ep_ring, false, 0, 0,
++ TRB_INTR_TARGET(0), field);
++ }
++ }
++
+ /* 1 TRB for setup, 1 for status */
+ num_trbs = 2;
+ /*
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index efdf4c228b8c0a..a9c489184f2a7d 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -8,6 +8,7 @@
+ * Some code borrowed from the Linux EHCI driver.
+ */
+
++#include <linux/jiffies.h>
+ #include <linux/pci.h>
+ #include <linux/iommu.h>
+ #include <linux/iopoll.h>
+@@ -1768,15 +1769,27 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ }
+ }
+
+- /* Queue a stop endpoint command, but only if this is
+- * the first cancellation to be handled.
+- */
+- if (!(ep->ep_state & EP_STOP_CMD_PENDING)) {
++ /* These completion handlers will sort out cancelled TDs for us */
++ if (ep->ep_state & (EP_STOP_CMD_PENDING | EP_HALTED | SET_DEQ_PENDING)) {
++ xhci_dbg(xhci, "Not queuing Stop Endpoint on slot %d ep %d in state 0x%x\n",
++ urb->dev->slot_id, ep_index, ep->ep_state);
++ goto done;
++ }
++
++ /* In this case no commands are pending but the endpoint is stopped */
++ if (ep->ep_state & EP_CLEARING_TT) {
++ /* and cancelled TDs can be given back right away */
++ xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n",
++ urb->dev->slot_id, ep_index, ep->ep_state);
++ xhci_process_cancelled_tds(ep);
++ } else {
++ /* Otherwise, queue a new Stop Endpoint command */
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+ ret = -ENOMEM;
+ goto done;
+ }
++ ep->stop_time = jiffies;
+ ep->ep_state |= EP_STOP_CMD_PENDING;
+ xhci_queue_stop_endpoint(xhci, command, urb->dev->slot_id,
+ ep_index, 0);
+@@ -3692,6 +3705,8 @@ void xhci_free_device_endpoint_resources(struct xhci_hcd *xhci,
+ xhci->num_active_eps);
+ }
+
++static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev);
++
+ /*
+ * This submits a Reset Device Command, which will set the device state to 0,
+ * set the device address to 0, and disable all the endpoints except the default
+@@ -3762,6 +3777,23 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd,
+ SLOT_STATE_DISABLED)
+ return 0;
+
++ if (xhci->quirks & XHCI_ETRON_HOST) {
++ /*
++ * Obtain a new device slot to inform the xHCI host that
++ * the USB device has been reset.
++ */
++ ret = xhci_disable_slot(xhci, udev->slot_id);
++ xhci_free_virt_device(xhci, udev->slot_id);
++ if (!ret) {
++ ret = xhci_alloc_dev(hcd, udev);
++ if (ret == 1)
++ ret = 0;
++ else
++ ret = -EINVAL;
++ }
++ return ret;
++ }
++
+ trace_xhci_discover_or_reset_device(slot_ctx);
+
+ xhci_dbg(xhci, "Resetting device with slot ID %u\n", slot_id);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 856f16e64dcf05..a200a91ed9d03b 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -690,6 +690,7 @@ struct xhci_virt_ep {
+ /* Bandwidth checking storage */
+ struct xhci_bw_info bw_info;
+ struct list_head bw_endpoint_list;
++ unsigned long stop_time;
+ /* Isoch Frame ID checking storage */
+ int next_frame_id;
+ /* Use new Isoch TRB layout needed for extended TBC support */
+@@ -1629,6 +1630,7 @@ struct xhci_hcd {
+ #define XHCI_ZHAOXIN_HOST BIT_ULL(46)
+ #define XHCI_WRITE_64_HI_LO BIT_ULL(47)
+ #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
++#define XHCI_ETRON_HOST BIT_ULL(49)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+@@ -1919,6 +1921,7 @@ void xhci_ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
+ void xhci_cleanup_command_queue(struct xhci_hcd *xhci);
+ void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring);
+ unsigned int count_trbs(u64 addr, u64 len);
++void xhci_process_cancelled_tds(struct xhci_virt_ep *ep);
+
+ /* xHCI roothub code */
+ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,
+diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
+index 6fb5140e29b9dd..225863321dc479 100644
+--- a/drivers/usb/misc/chaoskey.c
++++ b/drivers/usb/misc/chaoskey.c
+@@ -27,6 +27,8 @@ static struct usb_class_driver chaoskey_class;
+ static int chaoskey_rng_read(struct hwrng *rng, void *data,
+ size_t max, bool wait);
+
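++/* serializes open/release against disconnect */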
++static DEFINE_MUTEX(chaoskey_list_lock);
++
+ #define usb_dbg(usb_if, format, arg...) \
+ dev_dbg(&(usb_if)->dev, format, ## arg)
+
+@@ -233,6 +235,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
+ usb_deregister_dev(interface, &chaoskey_class);
+
+ usb_set_intfdata(interface, NULL);
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+
+ dev->present = false;
+@@ -244,6 +247,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
+ } else
+ mutex_unlock(&dev->lock);
+
++ mutex_unlock(&chaoskey_list_lock);
+ usb_dbg(interface, "disconnect done");
+ }
+
+@@ -251,6 +255,7 @@ static int chaoskey_open(struct inode *inode, struct file *file)
+ {
+ struct chaoskey *dev;
+ struct usb_interface *interface;
++ int rv = 0;
+
+ /* get the interface from minor number and driver information */
+ interface = usb_find_interface(&chaoskey_driver, iminor(inode));
+@@ -266,18 +271,23 @@ static int chaoskey_open(struct inode *inode, struct file *file)
+ }
+
+ file->private_data = dev;
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+- ++dev->open;
++ if (dev->present)
++ ++dev->open;
++ else
++ rv = -ENODEV;
+ mutex_unlock(&dev->lock);
++ mutex_unlock(&chaoskey_list_lock);
+
+- usb_dbg(interface, "open success");
+- return 0;
++ return rv;
+ }
+
+ static int chaoskey_release(struct inode *inode, struct file *file)
+ {
+ struct chaoskey *dev = file->private_data;
+ struct usb_interface *interface;
++ int rv = 0;
+
+ if (dev == NULL)
+ return -ENODEV;
+@@ -286,14 +296,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
+
+ usb_dbg(interface, "release");
+
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+
+ usb_dbg(interface, "open count at release is %d", dev->open);
+
+ if (dev->open <= 0) {
+ usb_dbg(interface, "invalid open count (%d)", dev->open);
+- mutex_unlock(&dev->lock);
+- return -ENODEV;
++ rv = -ENODEV;
++ goto bail;
+ }
+
+ --dev->open;
+@@ -302,13 +313,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
+ if (dev->open == 0) {
+ mutex_unlock(&dev->lock);
+ chaoskey_free(dev);
+- } else
+- mutex_unlock(&dev->lock);
+- } else
+- mutex_unlock(&dev->lock);
+-
++ goto destruction;
++ }
++ }
++bail:
++ mutex_unlock(&dev->lock);
++destruction:
++ mutex_unlock(&chaoskey_list_lock);
+ usb_dbg(interface, "release success");
+- return 0;
++ return rv;
+ }
+
+ static void chaos_read_callback(struct urb *urb)
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 6d28467ce35227..365c1006934583 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -277,28 +277,45 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
+ struct iowarrior *dev;
+ int read_idx;
+ int offset;
++ int retval;
+
+ dev = file->private_data;
+
++ if (file->f_flags & O_NONBLOCK) {
++ retval = mutex_trylock(&dev->mutex);
++ if (!retval)
++ return -EAGAIN;
++ } else {
++ retval = mutex_lock_interruptible(&dev->mutex);
++ if (retval)
++ return -ERESTARTSYS;
++ }
++
+ /* verify that the device wasn't unplugged */
+- if (!dev || !dev->present)
+- return -ENODEV;
++ if (!dev->present) {
++ retval = -ENODEV;
++ goto exit;
++ }
+
+ dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n",
+ dev->minor, count);
+
+ /* read count must be packet size (+ time stamp) */
+ if ((count != dev->report_size)
+- && (count != (dev->report_size + 1)))
+- return -EINVAL;
++ && (count != (dev->report_size + 1))) {
++ retval = -EINVAL;
++ goto exit;
++ }
+
+ /* repeat until no buffer overrun in callback handler occur */
+ do {
+ atomic_set(&dev->overflow_flag, 0);
+ if ((read_idx = read_index(dev)) == -1) {
+ /* queue empty */
+- if (file->f_flags & O_NONBLOCK)
+- return -EAGAIN;
++ if (file->f_flags & O_NONBLOCK) {
++ retval = -EAGAIN;
++ goto exit;
++ }
+ else {
+ //next line will return when there is either new data, or the device is unplugged
+ int r = wait_event_interruptible(dev->read_wait,
+@@ -309,28 +326,37 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
+ -1));
+ if (r) {
+ //we were interrupted by a signal
+- return -ERESTART;
++ retval = -ERESTART;
++ goto exit;
+ }
+ if (!dev->present) {
+ //The device was unplugged
+- return -ENODEV;
++ retval = -ENODEV;
++ goto exit;
+ }
+ if (read_idx == -1) {
+ // Can this happen ???
+- return 0;
++ retval = 0;
++ goto exit;
+ }
+ }
+ }
+
+ offset = read_idx * (dev->report_size + 1);
+ if (copy_to_user(buffer, dev->read_queue + offset, count)) {
+- return -EFAULT;
++ retval = -EFAULT;
++ goto exit;
+ }
+ } while (atomic_read(&dev->overflow_flag));
+
+ read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx;
+ atomic_set(&dev->read_idx, read_idx);
++ mutex_unlock(&dev->mutex);
+ return count;
++
++exit:
++ mutex_unlock(&dev->mutex);
++ return retval;
+ }
+
+ /*
+@@ -885,7 +911,6 @@ static int iowarrior_probe(struct usb_interface *interface,
+ static void iowarrior_disconnect(struct usb_interface *interface)
+ {
+ struct iowarrior *dev = usb_get_intfdata(interface);
+- int minor = dev->minor;
+
+ usb_deregister_dev(interface, &iowarrior_class);
+
+@@ -909,9 +934,6 @@ static void iowarrior_disconnect(struct usb_interface *interface)
+ mutex_unlock(&dev->mutex);
+ iowarrior_delete(dev);
+ }
+-
+- dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n",
+- minor - IOWARRIOR_MINOR_BASE);
+ }
+
+ /* usb specific object needed to register this driver with the usb subsystem */
+diff --git a/drivers/usb/misc/usb-ljca.c b/drivers/usb/misc/usb-ljca.c
+index 1a8d5e80b9aec2..76951854d88afc 100644
+--- a/drivers/usb/misc/usb-ljca.c
++++ b/drivers/usb/misc/usb-ljca.c
+@@ -332,14 +332,11 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
+
+ ret = usb_bulk_msg(adap->usb_dev, adap->tx_pipe, header,
+ msg_len, &transferred, LJCA_WRITE_TIMEOUT_MS);
+-
+- usb_autopm_put_interface(adap->intf);
+-
+ if (ret < 0)
+- goto out;
++ goto out_put;
+ if (transferred != msg_len) {
+ ret = -EIO;
+- goto out;
++ goto out_put;
+ }
+
+ if (ack) {
+@@ -347,11 +344,14 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
+ timeout);
+ if (!ret) {
+ ret = -ETIMEDOUT;
+- goto out;
++ goto out_put;
+ }
+ }
+ ret = adap->actual_length;
+
++out_put:
++ usb_autopm_put_interface(adap->intf);
++
+ out:
+ spin_lock_irqsave(&adap->lock, flags);
+ adap->ex_buf = NULL;
+@@ -811,6 +811,14 @@ static int ljca_probe(struct usb_interface *interface,
+ if (ret)
+ goto err_free;
+
++ /*
++ * This works around problems with ov2740 initialization on some
++ * Lenovo platforms. The autosuspend delay has to be smaller than
++ * the delay after setting the reset_gpio line in ov2740_resume().
++ * Otherwise the sensor randomly fails to initialize.
++ */
++ pm_runtime_set_autosuspend_delay(&usb_dev->dev, 10);
++
+ usb_enable_autosuspend(usb_dev);
+
+ return 0;
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 7c12b937d0759c..bd94bf078de3b8 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -441,7 +441,10 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ if (count == 0)
+ goto error;
+
+- mutex_lock(&dev->io_mutex);
++ retval = mutex_lock_interruptible(&dev->io_mutex);
++ if (retval < 0)
++ return -EINTR;
++
+ if (dev->disconnected) { /* already disconnected */
+ mutex_unlock(&dev->io_mutex);
+ retval = -ENODEV;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index bdf13911a1e590..c6076df0d50cc7 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1161,12 +1161,19 @@ void musb_free_request(struct usb_ep *ep, struct usb_request *req)
+ */
+ void musb_ep_restart(struct musb *musb, struct musb_request *req)
+ {
++ u16 csr;
++ void __iomem *epio = req->ep->hw_ep->regs;
++
+ trace_musb_req_start(req);
+ musb_ep_select(musb->mregs, req->epnum);
+- if (req->tx)
++ if (req->tx) {
+ txstate(musb, req);
+- else
+- rxstate(musb, req);
++ } else {
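++ /* flush the RX FIFO instead; written twice to cover double buffering */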
++ csr = musb_readw(epio, MUSB_RXCSR);
++ csr |= MUSB_RXCSR_FLUSHFIFO | MUSB_RXCSR_P_WZC_BITS;
++ musb_writew(epio, MUSB_RXCSR, csr);
++ musb_writew(epio, MUSB_RXCSR, csr);
++ }
+ }
+
+ static int musb_ep_restart_resume_work(struct musb *musb, void *data)
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 1eb240604cf6f8..145e12e13aef90 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -2293,7 +2293,7 @@ void typec_port_register_altmodes(struct typec_port *port,
+ const struct typec_altmode_ops *ops, void *drvdata,
+ struct typec_altmode **altmodes, size_t n)
+ {
+- struct fwnode_handle *altmodes_node, *child;
++ struct fwnode_handle *child;
+ struct typec_altmode_desc desc;
+ struct typec_altmode *alt;
+ size_t index = 0;
+@@ -2301,7 +2301,9 @@ void typec_port_register_altmodes(struct typec_port *port,
+ u32 vdo;
+ int ret;
+
+- altmodes_node = device_get_named_child_node(&port->dev, "altmodes");
++ struct fwnode_handle *altmodes_node __free(fwnode_handle) =
++ device_get_named_child_node(&port->dev, "altmodes");
++
+ if (!altmodes_node)
+ return; /* No altmodes specified */
+
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index cf719307b3f6b9..60b2766a69bf8a 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -621,10 +621,6 @@ static int wcove_typec_probe(struct platform_device *pdev)
+ if (irq < 0)
+ return irq;
+
+- irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
+- if (irq < 0)
+- return irq;
+-
+ ret = guid_parse(WCOVE_DSM_UUID, &wcove->guid);
+ if (ret)
+ return ret;
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index 763ff99f000d94..14c2dc9255b547 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -644,6 +644,10 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ uc->has_multiple_dp) {
+ con_index = (uc->last_cmd_sent >> 16) &
+ UCSI_CMD_CONNECTOR_MASK;
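++ /* connector numbers are 1-based; 0 would underflow the lookup below */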
++ if (con_index == 0) {
++ ret = -EINVAL;
++ goto unlock;
++ }
+ con = &uc->ucsi->connector[con_index - 1];
+ ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
+ }
+@@ -651,6 +655,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ ret = ucsi_sync_control_common(ucsi, command);
+
+ pm_runtime_put_sync(uc->dev);
++unlock:
+ mutex_unlock(&uc->lock);
+
+ return ret;
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index 6aace19d595bcd..69549b172186a7 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -185,7 +185,7 @@ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
+ struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
+ int orientation;
+
+- if (con->num >= PMIC_GLINK_MAX_PORTS ||
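++ /* con->num is 1-based, so PMIC_GLINK_MAX_PORTS itself is valid */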
++ if (con->num > PMIC_GLINK_MAX_PORTS ||
+ !ucsi->port_orientation[con->num - 1])
+ return;
+
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index f1da0b7e08e9e0..906b39f2c4be4e 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -227,7 +227,6 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ unsigned long lgcd = 0;
+ int log_entity_size;
+ unsigned long size;
+- u64 start = 0;
+ int err;
+ struct page *pg;
+ unsigned int nsg;
+@@ -238,10 +237,9 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ struct device *dma = mvdev->vdev.dma_dev;
+
+ for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
+- map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
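++ /* always pass the fixed range start; a moving offset skips mappings */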
++ map; map = vhost_iotlb_itree_next(map, mr->start, mr->end - 1)) {
+ size = maplen(map, mr);
+ lgcd = gcd(lgcd, size);
+- start += size;
+ }
+ log_entity_size = ilog2(lgcd);
+
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 41a4b0cf429756..7527e277c89897 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -423,6 +423,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ unsigned long filled;
+ unsigned int to_fill;
+ int ret;
++ int i;
+
+ to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list));
+ page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT);
+@@ -443,7 +444,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ GFP_KERNEL_ACCOUNT);
+
+ if (ret)
+- goto err;
++ goto err_append;
+ buf->allocated_length += filled * PAGE_SIZE;
+ /* clean input for another bulk allocation */
+ memset(page_list, 0, filled * sizeof(*page_list));
+@@ -454,6 +455,9 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ kvfree(page_list);
+ return 0;
+
++err_append:
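++ /* free the bulk-allocated pages that were never appended to the sg table */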
++ for (i = filled - 1; i >= 0; i--)
++ __free_page(page_list[i]);
+ err:
+ kvfree(page_list);
+ return ret;
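+
+The new err_append label releases only the pages pinned in the current,
+not-yet-appended batch; earlier batches already belong to the scatter-gather
+table and are torn down through its own cleanup. A hedged userspace sketch
+of that partial-batch unwind, with allocation and append as stand-ins:
+
+    #include <stdio.h>
+    #include <stdlib.h>
+
+    #define BATCH 4
+
+    /* stand-in for the fallible "append pages to the table" step */
+    static int append_to_table(void **pages, unsigned long n)
+    {
+            static int calls;
+
+            (void)pages; (void)n;
+            return ++calls > 1 ? -1 : 0;    /* fail on the 2nd batch here */
+    }
+
+    static int add_pages(unsigned long npages)
+    {
+            void *page_list[BATCH];
+            unsigned long filled = 0;
+            long i;
+            int ret;
+
+            while (npages) {
+                    unsigned long want = npages < BATCH ? npages : BATCH;
+
+                    for (filled = 0; filled < want; filled++)
+                            page_list[filled] = malloc(64); /* "a page" */
+                    ret = append_to_table(page_list, filled);
+                    if (ret)
+                            goto err_append;
+                    /* on success, the table owns this batch */
+                    npages -= filled;
+            }
+            return 0;
+
+    err_append:
+            /* only the current batch is unowned; release it in reverse */
+            for (i = (long)filled - 1; i >= 0; i--)
+                    free(page_list[i]);
+            return ret;
+    }
+
+    int main(void)
+    {
+            printf("%d\n", add_pages(6));   /* 2nd batch fails: -1 */
+            return 0;
+    }
+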
+diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
+index 61d9b0f9146d1b..8de6037c88194b 100644
+--- a/drivers/vfio/pci/mlx5/main.c
++++ b/drivers/vfio/pci/mlx5/main.c
+@@ -641,14 +641,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ O_RDONLY);
+ if (IS_ERR(migf->filp)) {
+ ret = PTR_ERR(migf->filp);
+- goto end;
++ kfree(migf);
++ return ERR_PTR(ret);
+ }
+
+ migf->mvdev = mvdev;
+- ret = mlx5vf_cmd_alloc_pd(migf);
+- if (ret)
+- goto out_free;
+-
+ stream_open(migf->filp->f_inode, migf->filp);
+ mutex_init(&migf->lock);
+ init_waitqueue_head(&migf->poll_wait);
+@@ -664,6 +661,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ INIT_LIST_HEAD(&migf->buf_list);
+ INIT_LIST_HEAD(&migf->avail_list);
+ spin_lock_init(&migf->list_lock);
++
++ ret = mlx5vf_cmd_alloc_pd(migf);
++ if (ret)
++ goto out;
++
+ ret = mlx5vf_cmd_query_vhca_migration_state(mvdev, &length, &full_size, 0);
+ if (ret)
+ goto out_pd;
+@@ -693,10 +695,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ mlx5vf_free_data_buffer(buf);
+ out_pd:
+ mlx5fv_cmd_clean_migf_resources(migf);
+-out_free:
++out:
+ fput(migf->filp);
+-end:
+- kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+@@ -1018,13 +1018,19 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev)
+ O_WRONLY);
+ if (IS_ERR(migf->filp)) {
+ ret = PTR_ERR(migf->filp);
+- goto end;
++ kfree(migf);
++ return ERR_PTR(ret);
+ }
+
++ stream_open(migf->filp->f_inode, migf->filp);
++ mutex_init(&migf->lock);
++ INIT_LIST_HEAD(&migf->buf_list);
++ INIT_LIST_HEAD(&migf->avail_list);
++ spin_lock_init(&migf->list_lock);
+ migf->mvdev = mvdev;
+ ret = mlx5vf_cmd_alloc_pd(migf);
+ if (ret)
+- goto out_free;
++ goto out;
+
+ buf = mlx5vf_alloc_data_buffer(migf, 0, DMA_TO_DEVICE);
+ if (IS_ERR(buf)) {
+@@ -1043,20 +1049,13 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev)
+ migf->buf_header[0] = buf;
+ migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER;
+
+- stream_open(migf->filp->f_inode, migf->filp);
+- mutex_init(&migf->lock);
+- INIT_LIST_HEAD(&migf->buf_list);
+- INIT_LIST_HEAD(&migf->avail_list);
+- spin_lock_init(&migf->list_lock);
+ return migf;
+ out_buf:
+ mlx5vf_free_data_buffer(migf->buf[0]);
+ out_pd:
+ mlx5vf_cmd_dealloc_pd(migf);
+-out_free:
++out:
+ fput(migf->filp);
+-end:
+- kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index 97422aafaa7b5d..ea2745c1ac5e68 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -313,6 +313,10 @@ static int vfio_virt_config_read(struct vfio_pci_core_device *vdev, int pos,
+ return count;
+ }
+
++static struct perm_bits direct_ro_perms = {
++ .readfn = vfio_direct_config_read,
++};
++
+ /* Default capability regions to read-only, no-virtualization */
+ static struct perm_bits cap_perms[PCI_CAP_ID_MAX + 1] = {
+ [0 ... PCI_CAP_ID_MAX] = { .readfn = vfio_direct_config_read }
+@@ -1897,9 +1901,17 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_core_device *vdev, char __user
+ cap_start = *ppos;
+ } else {
+ if (*ppos >= PCI_CFG_SPACE_SIZE) {
+- WARN_ON(cap_id > PCI_EXT_CAP_ID_MAX);
++ /*
++ * We can get a cap_id that exceeds PCI_EXT_CAP_ID_MAX
++ * if we're hiding an unknown capability at the start
++ * of the extended capability list. Use default, ro
++ * access, which will virtualize the id and next values.
++ */
++ if (cap_id > PCI_EXT_CAP_ID_MAX)
++ perm = &direct_ro_perms;
++ else
++ perm = &ecap_perms[cap_id];
+
+- perm = &ecap_perms[cap_id];
+ cap_start = vfio_find_cap_start(vdev, *ppos);
+ } else {
+ WARN_ON(cap_id > PCI_CAP_ID_MAX);
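+
+A capability ID above PCI_EXT_CAP_ID_MAX can legitimately reach this path
+when an unknown capability at the head of the extended list is being hidden,
+so the fix replaces the WARN with a read-only fallback permission entry.
+Table lookup with an out-of-range fallback, sketched with invented types:
+
+    #include <stdio.h>
+
+    #define ID_MAX 15
+
+    struct perm { const char *name; };
+
+    static const struct perm table[ID_MAX + 1] = {
+            [0 ... ID_MAX] = { "default" }, /* GCC/Clang range initializer */
+    };
+    static const struct perm direct_ro = { "read-only fallback" };
+
+    static const struct perm *lookup(unsigned int id)
+    {
+            return id > ID_MAX ? &direct_ro : &table[id];
+    }
+
+    int main(void)
+    {
+            printf("%s / %s\n", lookup(3)->name, lookup(200)->name);
+            return 0;
+    }
+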
+diff --git a/drivers/video/fbdev/sh7760fb.c b/drivers/video/fbdev/sh7760fb.c
+index 08a4943dc54184..d0ee5fec647adb 100644
+--- a/drivers/video/fbdev/sh7760fb.c
++++ b/drivers/video/fbdev/sh7760fb.c
+@@ -409,12 +409,11 @@ static int sh7760fb_alloc_mem(struct fb_info *info)
+ vram = PAGE_SIZE;
+
+ fbmem = dma_alloc_coherent(info->device, vram, &par->fbdma, GFP_KERNEL);
+-
+ if (!fbmem)
+ return -ENOMEM;
+
+ if ((par->fbdma & SH7760FB_DMA_MASK) != SH7760FB_DMA_MASK) {
+- sh7760fb_free_mem(info);
++ dma_free_coherent(info->device, vram, fbmem, par->fbdma);
+ dev_err(info->device, "kernel gave me memory at 0x%08lx, which is "
+ "unusable for the LCDC\n", (unsigned long)par->fbdma);
+ return -ENOMEM;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index bae1d97cce89bd..8a48113ec2ace3 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1500,7 +1500,7 @@ config 60XX_WDT
+
+ config SBC8360_WDT
+ tristate "SBC8360 Watchdog Timer"
+- depends on X86_32
++ depends on X86_32 && HAS_IOPORT
+ help
+
+ This is the driver for the hardware watchdog on the SBC8360 Single
+@@ -1513,7 +1513,7 @@ config SBC8360_WDT
+
+ config SBC7240_WDT
+ tristate "SBC Nano 7240 Watchdog Timer"
+- depends on X86_32 && !UML
++ depends on X86_32 && HAS_IOPORT
+ help
+ This is the driver for the hardware watchdog found on the IEI
+ single board computers EPIC Nano 7240 (and likely others). This
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 9f097f1f4a4cf3..6d32ffb0113650 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -313,7 +313,7 @@ int xenbus_dev_probe(struct device *_dev)
+ if (err) {
+ dev_warn(&dev->dev, "watch_otherend on %s failed.\n",
+ dev->nodename);
+- return err;
++ goto fail_remove;
+ }
+
+ dev->spurious_threshold = 1;
+@@ -322,6 +322,12 @@ int xenbus_dev_probe(struct device *_dev)
+ dev->nodename);
+
+ return 0;
++fail_remove:
++ if (drv->remove) {
++ down(&dev->reclaim_sem);
++ drv->remove(dev);
++ up(&dev->reclaim_sem);
++ }
+ fail_put:
+ module_put(drv->driver.owner);
+ fail:
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 19fa49cd9907f2..04e748c5955f0d 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1251,6 +1251,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ }
+ reloc_func_desc = interp_load_addr;
+
++ allow_write_access(interpreter);
+ fput(interpreter);
+
+ kfree(interp_elf_ex);
+@@ -1342,6 +1343,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ kfree(interp_elf_ex);
+ kfree(interp_elf_phdata);
+ out_free_file:
++ allow_write_access(interpreter);
+ if (interpreter)
+ fput(interpreter);
+ out_free_ph:
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 4fe5bb9f1b1f5e..7d35f0e1bc7641 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -394,6 +394,7 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ goto error;
+ }
+
++ allow_write_access(interpreter);
+ fput(interpreter);
+ interpreter = NULL;
+ }
+@@ -465,8 +466,10 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ retval = 0;
+
+ error:
+- if (interpreter)
++ if (interpreter) {
++ allow_write_access(interpreter);
+ fput(interpreter);
++ }
+ kfree(interpreter_name);
+ kfree(exec_params.phdrs);
+ kfree(exec_params.loadmap);
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 31660d8cc2c610..6a3a16f910516c 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -247,10 +247,13 @@ static int load_misc_binary(struct linux_binprm *bprm)
+ if (retval < 0)
+ goto ret;
+
+- if (fmt->flags & MISC_FMT_OPEN_FILE)
++ if (fmt->flags & MISC_FMT_OPEN_FILE) {
+ interp_file = file_clone_open(fmt->interp_file);
+- else
++ if (!IS_ERR(interp_file))
++ deny_write_access(interp_file);
++ } else {
+ interp_file = open_exec(fmt->interpreter);
++ }
+ retval = PTR_ERR(interp_file);
+ if (IS_ERR(interp_file))
+ goto ret;
+diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c
+index 35ba2117a6f652..3e63cfe1587472 100644
+--- a/fs/cachefiles/interface.c
++++ b/fs/cachefiles/interface.c
+@@ -327,6 +327,8 @@ static void cachefiles_commit_object(struct cachefiles_object *object,
+ static void cachefiles_clean_up_object(struct cachefiles_object *object,
+ struct cachefiles_cache *cache)
+ {
++ struct file *file;
++
+ if (test_bit(FSCACHE_COOKIE_RETIRED, &object->cookie->flags)) {
+ if (!test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags)) {
+ cachefiles_see_object(object, cachefiles_obj_see_clean_delete);
+@@ -342,10 +344,14 @@ static void cachefiles_clean_up_object(struct cachefiles_object *object,
+ }
+
+ cachefiles_unmark_inode_in_use(object, object->file);
+- if (object->file) {
+- fput(object->file);
+- object->file = NULL;
+- }
++
++ spin_lock(&object->lock);
++ file = object->file;
++ object->file = NULL;
++ spin_unlock(&object->lock);
++
++ if (file)
++ fput(file);
+ }
+
+ /*
+diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
+index 470c9665838505..fe3de9ad57bf6d 100644
+--- a/fs/cachefiles/ondemand.c
++++ b/fs/cachefiles/ondemand.c
+@@ -60,26 +60,36 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
+ {
+ struct cachefiles_object *object = kiocb->ki_filp->private_data;
+ struct cachefiles_cache *cache = object->volume->cache;
+- struct file *file = object->file;
+- size_t len = iter->count;
++ struct file *file;
++ size_t len = iter->count, aligned_len = len;
+ loff_t pos = kiocb->ki_pos;
+ const struct cred *saved_cred;
+ int ret;
+
+- if (!file)
++ spin_lock(&object->lock);
++ file = object->file;
++ if (!file) {
++ spin_unlock(&object->lock);
+ return -ENOBUFS;
++ }
++ get_file(file);
++ spin_unlock(&object->lock);
+
+ cachefiles_begin_secure(cache, &saved_cred);
+- ret = __cachefiles_prepare_write(object, file, &pos, &len, len, true);
++ ret = __cachefiles_prepare_write(object, file, &pos, &aligned_len, len, true);
+ cachefiles_end_secure(cache, saved_cred);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
+ ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
+- if (!ret)
++ if (!ret) {
+ ret = len;
++ kiocb->ki_pos += ret;
++ }
+
++out:
++ fput(file);
+ return ret;
+ }
+
+@@ -87,12 +97,22 @@ static loff_t cachefiles_ondemand_fd_llseek(struct file *filp, loff_t pos,
+ int whence)
+ {
+ struct cachefiles_object *object = filp->private_data;
+- struct file *file = object->file;
++ struct file *file;
++ loff_t ret;
+
+- if (!file)
++ spin_lock(&object->lock);
++ file = object->file;
++ if (!file) {
++ spin_unlock(&object->lock);
+ return -ENOBUFS;
++ }
++ get_file(file);
++ spin_unlock(&object->lock);
+
+- return vfs_llseek(file, pos, whence);
++ ret = vfs_llseek(file, pos, whence);
++ fput(file);
++
++ return ret;
+ }
+
+ static long cachefiles_ondemand_fd_ioctl(struct file *filp, unsigned int ioctl,
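+
+Both cachefiles hunks close the same race on object->file: the cleanup path
+detaches the pointer under object->lock, and users take their own reference
+while holding that lock, then drop it with fput() once the work is done. A
+simplified sketch of the handoff; the toy refcount stands in for struct file
+and would need to be atomic in genuinely concurrent code:
+
+    #include <pthread.h>
+    #include <stdio.h>
+    #include <stdlib.h>
+
+    struct file { int refs; };      /* toy stand-in for struct file */
+
+    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+    static struct file *object_file;
+
+    static void fput_(struct file *f)
+    {
+            if (--f->refs == 0)
+                    free(f);
+    }
+
+    static int reader(void)
+    {
+            struct file *f;
+
+            pthread_mutex_lock(&lock);
+            f = object_file;
+            if (!f) {
+                    pthread_mutex_unlock(&lock);
+                    return -1;      /* -ENOBUFS in the driver */
+            }
+            f->refs++;              /* "get_file()" under the lock */
+            pthread_mutex_unlock(&lock);
+            /* ... use f without holding the lock ... */
+            fput_(f);
+            return 0;
+    }
+
+    static void cleanup(void)
+    {
+            struct file *f;
+
+            pthread_mutex_lock(&lock);
+            f = object_file;        /* detach under the lock ... */
+            object_file = NULL;
+            pthread_mutex_unlock(&lock);
+            if (f)
+                    fput_(f);       /* ... put the reference outside it */
+    }
+
+    int main(void)
+    {
+            object_file = calloc(1, sizeof(*object_file));
+            object_file->refs = 1;
+            printf("%d\n", reader());       /* 0 */
+            cleanup();
+            printf("%d\n", reader());       /* -1: file already detached */
+            return 0;
+    }
+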
+diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
+index 742b30b61c196f..0fe8d80ce5e8d3 100644
+--- a/fs/dlm/ast.c
++++ b/fs/dlm/ast.c
+@@ -30,7 +30,7 @@ static void dlm_run_callback(uint32_t ls_id, uint32_t lkb_id, int8_t mode,
+ trace_dlm_bast(ls_id, lkb_id, mode, res_name, res_length);
+ bastfn(astparam, mode);
+ } else if (flags & DLM_CB_CAST) {
+- trace_dlm_ast(ls_id, lkb_id, sb_status, sb_flags, res_name,
++ trace_dlm_ast(ls_id, lkb_id, sb_flags, sb_status, res_name,
+ res_length);
+ lksb->sb_status = sb_status;
+ lksb->sb_flags = sb_flags;
+diff --git a/fs/dlm/recoverd.c b/fs/dlm/recoverd.c
+index 34f4f9f49a6ce5..12272a8f6d75f3 100644
+--- a/fs/dlm/recoverd.c
++++ b/fs/dlm/recoverd.c
+@@ -151,7 +151,7 @@ static int ls_recover(struct dlm_ls *ls, struct dlm_recover *rv)
+ error = dlm_recover_members(ls, rv, &neg);
+ if (error) {
+ log_rinfo(ls, "dlm_recover_members error %d", error);
+- goto fail;
++ goto fail_root_list;
+ }
+
+ dlm_recover_dir_nodeid(ls, &root_list);
+diff --git a/fs/efs/super.c b/fs/efs/super.c
+index e4421c10caebe5..c59086b7eabfe9 100644
+--- a/fs/efs/super.c
++++ b/fs/efs/super.c
+@@ -15,7 +15,6 @@
+ #include <linux/vfs.h>
+ #include <linux/blkdev.h>
+ #include <linux/fs_context.h>
+-#include <linux/fs_parser.h>
+ #include "efs.h"
+ #include <linux/efs_vh.h>
+ #include <linux/efs_fs_sb.h>
+@@ -49,15 +48,6 @@ static struct pt_types sgi_pt_types[] = {
+ {0, NULL}
+ };
+
+-enum {
+- Opt_explicit_open,
+-};
+-
+-static const struct fs_parameter_spec efs_param_spec[] = {
+- fsparam_flag ("explicit-open", Opt_explicit_open),
+- {}
+-};
+-
+ /*
+ * File system definition and registration.
+ */
+@@ -67,7 +57,6 @@ static struct file_system_type efs_fs_type = {
+ .kill_sb = efs_kill_sb,
+ .fs_flags = FS_REQUIRES_DEV,
+ .init_fs_context = efs_init_fs_context,
+- .parameters = efs_param_spec,
+ };
+ MODULE_ALIAS_FS("efs");
+
+@@ -265,7 +254,8 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc)
+ if (!sb_set_blocksize(s, EFS_BLOCKSIZE)) {
+ pr_err("device does not support %d byte blocks\n",
+ EFS_BLOCKSIZE);
+- return -EINVAL;
++ return invalf(fc, "device does not support %d byte blocks\n",
++ EFS_BLOCKSIZE);
+ }
+
+ /* read the vh (volume header) block */
+@@ -327,43 +317,22 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc)
+ return 0;
+ }
+
+-static void efs_free_fc(struct fs_context *fc)
+-{
+- kfree(fc->fs_private);
+-}
+-
+ static int efs_get_tree(struct fs_context *fc)
+ {
+ return get_tree_bdev(fc, efs_fill_super);
+ }
+
+-static int efs_parse_param(struct fs_context *fc, struct fs_parameter *param)
+-{
+- int token;
+- struct fs_parse_result result;
+-
+- token = fs_parse(fc, efs_param_spec, param, &result);
+- if (token < 0)
+- return token;
+- return 0;
+-}
+-
+ static int efs_reconfigure(struct fs_context *fc)
+ {
+ sync_filesystem(fc->root->d_sb);
++ fc->sb_flags |= SB_RDONLY;
+
+ return 0;
+ }
+
+-struct efs_context {
+- unsigned long s_mount_opts;
+-};
+-
+ static const struct fs_context_operations efs_context_opts = {
+- .parse_param = efs_parse_param,
+ .get_tree = efs_get_tree,
+ .reconfigure = efs_reconfigure,
+- .free = efs_free_fc,
+ };
+
+ /*
+@@ -371,12 +340,6 @@ static const struct fs_context_operations efs_context_opts = {
+ */
+ static int efs_init_fs_context(struct fs_context *fc)
+ {
+- struct efs_context *ctx;
+-
+- ctx = kzalloc(sizeof(struct efs_context), GFP_KERNEL);
+- if (!ctx)
+- return -ENOMEM;
+- fc->fs_private = ctx;
+ fc->ops = &efs_context_opts;
+
+ return 0;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 403af6e31d5b2c..8d28cfc6a4b837 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -223,7 +223,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
+ unsigned int amortizedshift;
+ erofs_off_t pos;
+
+- if (lcn >= totalidx)
++ if (lcn >= totalidx || vi->z_logical_clusterbits > 14)
+ return -EINVAL;
+
+ m->lcn = lcn;
+@@ -398,7 +398,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ u64 lcn = m->lcn, headlcn = map->m_la >> lclusterbits;
+ int err;
+
+- do {
++ while (1) {
+ /* handle the last EOF pcluster (no next HEAD lcluster) */
+ if ((lcn << lclusterbits) >= inode->i_size) {
+ map->m_llen = inode->i_size - map->m_la;
+@@ -410,14 +410,16 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ return err;
+
+ if (m->type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {
+- DBG_BUGON(!m->delta[1] &&
+- m->clusterofs != 1 << lclusterbits);
++ /* work around invalid d1 generated by pre-1.0 mkfs */
++ if (unlikely(!m->delta[1])) {
++ m->delta[1] = 1;
++ DBG_BUGON(1);
++ }
+ } else if (m->type == Z_EROFS_LCLUSTER_TYPE_PLAIN ||
+ m->type == Z_EROFS_LCLUSTER_TYPE_HEAD1 ||
+ m->type == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
+- /* go on until the next HEAD lcluster */
+ if (lcn != headlcn)
+- break;
++ break; /* ends at the next HEAD lcluster */
+ m->delta[1] = 1;
+ } else {
+ erofs_err(inode->i_sb, "unknown type %u @ lcn %llu of nid %llu",
+@@ -426,8 +428,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ return -EOPNOTSUPP;
+ }
+ lcn += m->delta[1];
+- } while (m->delta[1]);
+-
++ }
+ map->m_llen = (lcn << lclusterbits) + m->clusterofs - map->m_la;
+ return 0;
+ }
+diff --git a/fs/exec.c b/fs/exec.c
+index dad402d55681b2..5c07a4b06a3563 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -145,11 +145,13 @@ SYSCALL_DEFINE1(uselib, const char __user *, library)
+ goto out;
+
+ /*
+- * Check do_open_execat() for an explanation.
++ * may_open() has already checked for this, so it should be
++ * impossible to trip now. But we need to be extra cautious
++ * and check again at the very end too.
+ */
+ error = -EACCES;
+- if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
+- path_noexec(&file->f_path))
++ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode) ||
++ path_noexec(&file->f_path)))
+ goto exit;
+
+ error = -ENOEXEC;
+@@ -953,6 +955,7 @@ EXPORT_SYMBOL(transfer_args_to_stack);
+ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ {
+ struct file *file;
++ int err;
+ struct open_flags open_exec_flags = {
+ .open_flag = O_LARGEFILE | O_RDONLY | __FMODE_EXEC,
+ .acc_mode = MAY_EXEC,
+@@ -969,20 +972,28 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+
+ file = do_filp_open(fd, name, &open_exec_flags);
+ if (IS_ERR(file))
+- return file;
++ goto out;
+
+ /*
+- * In the past the regular type check was here. It moved to may_open() in
+- * 633fb6ac3980 ("exec: move S_ISREG() check earlier"). Since then it is
+- * an invariant that all non-regular files error out before we get here.
++ * may_open() has already checked for this, so it should be
++ * impossible to trip now. But we need to be extra cautious
++ * and check again at the very end too.
+ */
+- if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
+- path_noexec(&file->f_path)) {
+- fput(file);
+- return ERR_PTR(-EACCES);
+- }
++ err = -EACCES;
++ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode) ||
++ path_noexec(&file->f_path)))
++ goto exit;
++
++ err = deny_write_access(file);
++ if (err)
++ goto exit;
+
++out:
+ return file;
++
++exit:
++ fput(file);
++ return ERR_PTR(err);
+ }
+
+ /**
+@@ -992,7 +1003,8 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ *
+ * Returns ERR_PTR on failure or allocated struct file on success.
+ *
+- * As this is a wrapper for the internal do_open_execat(). Also see
++ * As this is a wrapper for the internal do_open_execat(), callers
++ * must call allow_write_access() before fput() on release. Also see
+ * do_close_execat().
+ */
+ struct file *open_exec(const char *name)
+@@ -1544,8 +1556,10 @@ static int prepare_bprm_creds(struct linux_binprm *bprm)
+ /* Matches do_open_execat() */
+ static void do_close_execat(struct file *file)
+ {
+- if (file)
+- fput(file);
++ if (!file)
++ return;
++ allow_write_access(file);
++ fput(file);
+ }
+
+ static void free_bprm(struct linux_binprm *bprm)
+@@ -1870,6 +1884,7 @@ static int exec_binprm(struct linux_binprm *bprm)
+ bprm->file = bprm->interpreter;
+ bprm->interpreter = NULL;
+
++ allow_write_access(exec);
+ if (unlikely(bprm->have_execfd)) {
+ if (bprm->executable) {
+ fput(exec);
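+
+do_open_execat() once again pairs deny_write_access() with a matching
+allow_write_access() on every release path, restoring the exclusion between
+writing a file and executing it. In the kernel this rides on
+inode->i_writecount: roughly, positive counts writers, negative counts
+exec-style deniers, and each side fails with -ETXTBSY while the other is
+active. A hedged C11 sketch of that counter protocol:
+
+    #include <errno.h>
+    #include <stdatomic.h>
+    #include <stdio.h>
+
+    static atomic_int i_writecount; /* >0 writers, <0 exec deniers */
+
+    static int deny_write_access(void)      /* exec side */
+    {
+            int v = atomic_load(&i_writecount);
+
+            while (v <= 0)  /* no writers: go further negative */
+                    if (atomic_compare_exchange_weak(&i_writecount,
+                                                     &v, v - 1))
+                            return 0;
+            return -ETXTBSY;
+    }
+
+    static void allow_write_access(void)
+    {
+            atomic_fetch_add(&i_writecount, 1);
+    }
+
+    static int get_write_access(void)       /* open-for-write side */
+    {
+            int v = atomic_load(&i_writecount);
+
+            while (v >= 0)  /* no deniers: go further positive */
+                    if (atomic_compare_exchange_weak(&i_writecount,
+                                                     &v, v + 1))
+                            return 0;
+            return -ETXTBSY;
+    }
+
+    int main(void)
+    {
+            printf("%d ", deny_write_access());     /* 0 */
+            printf("%d ", get_write_access());      /* -ETXTBSY */
+            allow_write_access();
+            printf("%d\n", get_write_access());     /* 0 */
+            return 0;
+    }
+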
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 64c31867bc7615..525c3ad411ea33 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -578,6 +578,16 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ if (ret < 0)
+ goto unlock;
+
++ if (iocb->ki_flags & IOCB_DIRECT) {
++ unsigned long align = pos | iov_iter_alignment(iter);
++
++ if (!IS_ALIGNED(align, i_blocksize(inode)) &&
++ !IS_ALIGNED(align, bdev_logical_block_size(inode->i_sb->s_bdev))) {
++ ret = -EINVAL;
++ goto unlock;
++ }
++ }
++
+ if (pos > valid_size) {
+ ret = exfat_file_zeroed_range(file, valid_size, pos);
+ if (ret < 0 && ret != -ENOSPC) {
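+
+The new exfat check rejects direct I/O whose file position or memory
+alignment does not suit the block size. OR-ing the values first is the
+usual trick: a misaligned bit in either operand survives the OR, so one
+IS_ALIGNED() test covers both at once. For example:
+
+    #include <stdbool.h>
+    #include <stdio.h>
+
+    /* a must be a power of two */
+    #define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)
+
+    static bool dio_ok(unsigned long pos, unsigned long iter_align,
+                       unsigned long blocksize)
+    {
+            unsigned long align = pos | iter_align;
+
+            /* aligned iff each operand is individually aligned */
+            return IS_ALIGNED(align, blocksize);
+    }
+
+    int main(void)
+    {
+            printf("%d %d\n", dio_ok(4096, 8192, 4096),     /* 1 */
+                   dio_ok(4096, 4100, 4096));               /* 0 */
+            return 0;
+    }
+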
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 631ad9e8e32a91..2f8899cab539d4 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -345,6 +345,7 @@ static int exfat_find_empty_entry(struct inode *inode,
+ if (ei->start_clu == EXFAT_EOF_CLUSTER) {
+ ei->start_clu = clu.dir;
+ p_dir->dir = clu.dir;
++ hint_femp.eidx = 0;
+ }
+
+ /* append to the FAT chain */
+@@ -636,14 +637,26 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ info->size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->size = le64_to_cpu(ep2->dentry.stream.size);
++
++ info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu);
++ if (!is_valid_cluster(sbi, info->start_clu) && info->size) {
++ exfat_warn(sb, "start_clu is invalid cluster(0x%x)",
++ info->start_clu);
++ info->size = 0;
++ info->valid_size = 0;
++ }
++
++ if (info->valid_size > info->size) {
++ exfat_warn(sb, "valid_size(%lld) is greater than size(%lld)",
++ info->valid_size, info->size);
++ info->valid_size = info->size;
++ }
++
+ if (info->size == 0) {
+ info->flags = ALLOC_NO_FAT_CHAIN;
+ info->start_clu = EXFAT_EOF_CLUSTER;
+- } else {
++ } else
+ info->flags = ep2->dentry.stream.flags;
+- info->start_clu =
+- le32_to_cpu(ep2->dentry.stream.start_clu);
+- }
+
+ exfat_get_entry_time(sbi, &info->crtime,
+ ep->dentry.file.create_tz,
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 591fb3f710be72..8042ad87380897 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -550,7 +550,8 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group,
+ trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked);
+ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO |
+ (ignore_locked ? REQ_RAHEAD : 0),
+- ext4_end_bitmap_read);
++ ext4_end_bitmap_read,
++ ext4_simulate_fail(sb, EXT4_SIM_BBITMAP_EIO));
+ return bh;
+ verify:
+ err = ext4_validate_block_bitmap(sb, desc, block_group, bh);
+@@ -577,7 +578,6 @@ int ext4_wait_block_bitmap(struct super_block *sb, ext4_group_t block_group,
+ if (!desc)
+ return -EFSCORRUPTED;
+ wait_on_buffer(bh);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_BBITMAP_EIO);
+ if (!buffer_uptodate(bh)) {
+ ext4_error_err(sb, EIO, "Cannot read block bitmap - "
+ "block_group = %u, block_bitmap = %llu",
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 8bd302392d759a..b02d6e444cfb6e 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1865,14 +1865,6 @@ static inline bool ext4_simulate_fail(struct super_block *sb,
+ return false;
+ }
+
+-static inline void ext4_simulate_fail_bh(struct super_block *sb,
+- struct buffer_head *bh,
+- unsigned long code)
+-{
+- if (!IS_ERR(bh) && ext4_simulate_fail(sb, code))
+- clear_buffer_uptodate(bh);
+-}
+-
+ /*
+ * Error number codes for s_{first,last}_error_errno
+ *
+@@ -3098,9 +3090,9 @@ extern struct buffer_head *ext4_sb_bread(struct super_block *sb,
+ extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb,
+ sector_t block);
+ extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io);
++ bh_end_io_t *end_io, bool simu_fail);
+ extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io);
++ bh_end_io_t *end_io, bool simu_fail);
+ extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
+ extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block);
+ extern int ext4_seq_options_show(struct seq_file *seq, void *offset);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index c64f7c1b1d9082..123a69e62e59be 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -564,7 +564,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
+
+ if (!bh_uptodate_or_lock(bh)) {
+ trace_ext4_ext_load_extent(inode, pblk, _RET_IP_);
+- err = ext4_read_bh(bh, 0, NULL);
++ err = ext4_read_bh(bh, 0, NULL, false);
+ if (err < 0)
+ goto errout;
+ }
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index df853c4d3a8c91..383c6edea6dd31 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -185,6 +185,56 @@ static inline ext4_fsblk_t ext4_fsmap_next_pblk(struct ext4_fsmap *fmr)
+ return fmr->fmr_physical + fmr->fmr_length;
+ }
+
++static int ext4_getfsmap_meta_helper(struct super_block *sb,
++ ext4_group_t agno, ext4_grpblk_t start,
++ ext4_grpblk_t len, void *priv)
++{
++ struct ext4_getfsmap_info *info = priv;
++ struct ext4_fsmap *p;
++ struct ext4_fsmap *tmp;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ ext4_fsblk_t fsb, fs_start, fs_end;
++ int error;
++
++ fs_start = fsb = (EXT4_C2B(sbi, start) +
++ ext4_group_first_block_no(sb, agno));
++ fs_end = fs_start + EXT4_C2B(sbi, len);
++
++ /* Return relevant extents from the meta_list */
++ list_for_each_entry_safe(p, tmp, &info->gfi_meta_list, fmr_list) {
++ if (p->fmr_physical < info->gfi_next_fsblk) {
++ list_del(&p->fmr_list);
++ kfree(p);
++ continue;
++ }
++ if (p->fmr_physical <= fs_start ||
++ p->fmr_physical + p->fmr_length <= fs_end) {
++ /* Emit the retained free extent record if present */
++ if (info->gfi_lastfree.fmr_owner) {
++ error = ext4_getfsmap_helper(sb, info,
++ &info->gfi_lastfree);
++ if (error)
++ return error;
++ info->gfi_lastfree.fmr_owner = 0;
++ }
++ error = ext4_getfsmap_helper(sb, info, p);
++ if (error)
++ return error;
++ fsb = p->fmr_physical + p->fmr_length;
++ if (info->gfi_next_fsblk < fsb)
++ info->gfi_next_fsblk = fsb;
++ list_del(&p->fmr_list);
++ kfree(p);
++ continue;
++ }
++ }
++ if (info->gfi_next_fsblk < fsb)
++ info->gfi_next_fsblk = fsb;
++
++ return 0;
++}
++
++
+ /* Transform a blockgroup's free record into a fsmap */
+ static int ext4_getfsmap_datadev_helper(struct super_block *sb,
+ ext4_group_t agno, ext4_grpblk_t start,
+@@ -539,6 +589,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ error = ext4_mballoc_query_range(sb, info->gfi_agno,
+ EXT4_B2C(sbi, info->gfi_low.fmr_physical),
+ EXT4_B2C(sbi, info->gfi_high.fmr_physical),
++ ext4_getfsmap_meta_helper,
+ ext4_getfsmap_datadev_helper, info);
+ if (error)
+ goto err;
+@@ -560,7 +611,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+
+ /* Report any gaps at the end of the bg */
+ info->gfi_last = true;
+- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster, 0, info);
++ error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
++ 0, info);
+ if (error)
+ goto err;
+
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 81641be38c0e8b..072084da50d33f 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -194,8 +194,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ * submit the buffer_head for reading
+ */
+ trace_ext4_load_inode_bitmap(sb, block_group);
+- ext4_read_bh(bh, REQ_META | REQ_PRIO, ext4_end_bitmap_read);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_IBITMAP_EIO);
++ ext4_read_bh(bh, REQ_META | REQ_PRIO,
++ ext4_end_bitmap_read,
++ ext4_simulate_fail(sb, EXT4_SIM_IBITMAP_EIO));
+ if (!buffer_uptodate(bh)) {
+ put_bh(bh);
+ ext4_error_err(sb, EIO, "Cannot read inode bitmap - "
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index d8ca7f64f95234..8480bddde789b8 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -170,7 +170,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ }
+
+ if (!bh_uptodate_or_lock(bh)) {
+- if (ext4_read_bh(bh, 0, NULL) < 0) {
++ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
+ put_bh(bh);
+ goto failure;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index a0fa5192db8ed3..235e22790ec628 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4530,10 +4530,10 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino,
+ * Read the block from disk.
+ */
+ trace_ext4_load_inode(sb, ino);
+- ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL);
++ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL,
++ ext4_simulate_fail(sb, EXT4_SIM_INODE_EIO));
+ blk_finish_plug(&plug);
+ wait_on_buffer(bh);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_INODE_EIO);
+ if (!buffer_uptodate(bh)) {
+ if (ret_block)
+ *ret_block = block;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index dfecd25cee4eae..d2b04daa04827b 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6998,13 +6998,14 @@ int
+ ext4_mballoc_query_range(
+ struct super_block *sb,
+ ext4_group_t group,
+- ext4_grpblk_t start,
++ ext4_grpblk_t first,
+ ext4_grpblk_t end,
++ ext4_mballoc_query_range_fn meta_formatter,
+ ext4_mballoc_query_range_fn formatter,
+ void *priv)
+ {
+ void *bitmap;
+- ext4_grpblk_t next;
++ ext4_grpblk_t start, next;
+ struct ext4_buddy e4b;
+ int error;
+
+@@ -7015,10 +7016,19 @@ ext4_mballoc_query_range(
+
+ ext4_lock_group(sb, group);
+
+- start = max(e4b.bd_info->bb_first_free, start);
++ start = max(e4b.bd_info->bb_first_free, first);
+ if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
+ end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-
++ if (meta_formatter && start != first) {
++ if (start > end)
++ start = end;
++ ext4_unlock_group(sb, group);
++ error = meta_formatter(sb, group, first, start - first,
++ priv);
++ if (error)
++ goto out_unload;
++ ext4_lock_group(sb, group);
++ }
+ while (start <= end) {
+ start = mb_find_next_zero_bit(bitmap, end + 1, start);
+ if (start > end)
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index d8553f1498d3cb..f8280de3e8820a 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -259,6 +259,7 @@ ext4_mballoc_query_range(
+ ext4_group_t agno,
+ ext4_grpblk_t start,
+ ext4_grpblk_t end,
++ ext4_mballoc_query_range_fn meta_formatter,
+ ext4_mballoc_query_range_fn formatter,
+ void *priv);
+
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index bd946d0c71b700..d64c04ed061ae9 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -94,7 +94,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh,
+ }
+
+ lock_buffer(*bh);
+- ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL);
++ ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL, false);
+ if (ret)
+ goto warn_exit;
+
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index c95e3e526390d7..fd3487df360a58 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -165,15 +165,16 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
+ return 0;
+ }
+
+-/* Force page buffers uptodate w/o dropping page's lock */
+-static int
+-mext_page_mkuptodate(struct folio *folio, unsigned from, unsigned to)
++/* Force folio buffers uptodate w/o dropping folio's lock */
++static int mext_page_mkuptodate(struct folio *folio, size_t from, size_t to)
+ {
+ struct inode *inode = folio->mapping->host;
+ sector_t block;
+- struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
++ struct buffer_head *bh, *head;
+ unsigned int blocksize, block_start, block_end;
+- int i, err, nr = 0, partial = 0;
++ int nr = 0;
++ bool partial = false;
++
+ BUG_ON(!folio_test_locked(folio));
+ BUG_ON(folio_test_writeback(folio));
+
+@@ -191,13 +192,13 @@ mext_page_mkuptodate(struct folio *folio, unsigned from, unsigned to)
+ block_end = block_start + blocksize;
+ if (block_end <= from || block_start >= to) {
+ if (!buffer_uptodate(bh))
+- partial = 1;
++ partial = true;
+ continue;
+ }
+ if (buffer_uptodate(bh))
+ continue;
+ if (!buffer_mapped(bh)) {
+- err = ext4_get_block(inode, block, bh, 0);
++ int err = ext4_get_block(inode, block, bh, 0);
+ if (err)
+ return err;
+ if (!buffer_mapped(bh)) {
+@@ -206,21 +207,29 @@ mext_page_mkuptodate(struct folio *folio, unsigned from, unsigned to)
+ continue;
+ }
+ }
+- BUG_ON(nr >= MAX_BUF_PER_PAGE);
+- arr[nr++] = bh;
++ lock_buffer(bh);
++ if (buffer_uptodate(bh)) {
++ unlock_buffer(bh);
++ continue;
++ }
++ ext4_read_bh_nowait(bh, 0, NULL, false);
++ nr++;
+ }
+ /* No io required */
+ if (!nr)
+ goto out;
+
+- for (i = 0; i < nr; i++) {
+- bh = arr[i];
+- if (!bh_uptodate_or_lock(bh)) {
+- err = ext4_read_bh(bh, 0, NULL);
+- if (err)
+- return err;
+- }
+- }
++ bh = head;
++ do {
++ if (bh_offset(bh) + blocksize <= from)
++ continue;
++ if (bh_offset(bh) > to)
++ break;
++ wait_on_buffer(bh);
++ if (buffer_uptodate(bh))
++ continue;
++ return -EIO;
++ } while ((bh = bh->b_this_page) != head);
+ out:
+ if (!partial)
+ folio_mark_uptodate(folio);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index f12ccaabf13d8b..e9da8bba0f9d72 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1300,7 +1300,7 @@ static struct buffer_head *ext4_get_bitmap(struct super_block *sb, __u64 block)
+ if (unlikely(!bh))
+ return NULL;
+ if (!bh_uptodate_or_lock(bh)) {
+- if (ext4_read_bh(bh, 0, NULL) < 0) {
++ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
+ brelse(bh);
+ return NULL;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index cd3328b0315349..af4c5d4b963e91 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -161,8 +161,14 @@ MODULE_ALIAS("ext3");
+
+
+ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io)
++ bh_end_io_t *end_io, bool simu_fail)
+ {
++ if (simu_fail) {
++ clear_buffer_uptodate(bh);
++ unlock_buffer(bh);
++ return;
++ }
++
+ /*
+ * buffer's verified bit is no longer valid after reading from
+ * disk again due to write out error, clear it to make sure we
+@@ -176,7 +182,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+ }
+
+ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io)
++ bh_end_io_t *end_io, bool simu_fail)
+ {
+ BUG_ON(!buffer_locked(bh));
+
+@@ -184,10 +190,11 @@ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+ unlock_buffer(bh);
+ return;
+ }
+- __ext4_read_bh(bh, op_flags, end_io);
++ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
+ }
+
+-int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io)
++int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
++ bh_end_io_t *end_io, bool simu_fail)
+ {
+ BUG_ON(!buffer_locked(bh));
+
+@@ -196,7 +203,7 @@ int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io
+ return 0;
+ }
+
+- __ext4_read_bh(bh, op_flags, end_io);
++ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
+
+ wait_on_buffer(bh);
+ if (buffer_uptodate(bh))
+@@ -208,10 +215,10 @@ int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
+ {
+ lock_buffer(bh);
+ if (!wait) {
+- ext4_read_bh_nowait(bh, op_flags, NULL);
++ ext4_read_bh_nowait(bh, op_flags, NULL, false);
+ return 0;
+ }
+- return ext4_read_bh(bh, op_flags, NULL);
++ return ext4_read_bh(bh, op_flags, NULL, false);
+ }
+
+ /*
+@@ -266,7 +273,7 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
+
+ if (likely(bh)) {
+ if (trylock_buffer(bh))
+- ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL);
++ ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL, false);
+ brelse(bh);
+ }
+ }
+@@ -346,9 +353,9 @@ __u32 ext4_free_group_clusters(struct super_block *sb,
+ __u32 ext4_free_inodes_count(struct super_block *sb,
+ struct ext4_group_desc *bg)
+ {
+- return le16_to_cpu(bg->bg_free_inodes_count_lo) |
++ return le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_lo)) |
+ (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT ?
+- (__u32)le16_to_cpu(bg->bg_free_inodes_count_hi) << 16 : 0);
++ (__u32)le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_hi)) << 16 : 0);
+ }
+
+ __u32 ext4_used_dirs_count(struct super_block *sb,
+@@ -402,9 +409,9 @@ void ext4_free_group_clusters_set(struct super_block *sb,
+ void ext4_free_inodes_set(struct super_block *sb,
+ struct ext4_group_desc *bg, __u32 count)
+ {
+- bg->bg_free_inodes_count_lo = cpu_to_le16((__u16)count);
++ WRITE_ONCE(bg->bg_free_inodes_count_lo, cpu_to_le16((__u16)count));
+ if (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT)
+- bg->bg_free_inodes_count_hi = cpu_to_le16(count >> 16);
++ WRITE_ONCE(bg->bg_free_inodes_count_hi, cpu_to_le16(count >> 16));
+ }
+
+ void ext4_used_dirs_set(struct super_block *sb,
+@@ -6518,9 +6525,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ goto restore_opts;
+ }
+
+- if (test_opt2(sb, ABORT))
+- ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
+-
+ sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
+ (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
+
+@@ -6689,6 +6693,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ ext4_stop_mmpd(sbi);
+
++ /*
++ * Handle aborting the filesystem as the last thing during remount to
++ * avoid obscure errors during remount when some option changes fail to
++ * apply due to the filesystem being shut down.
++ */
++ if (test_opt2(sb, ABORT))
++ ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
++
+ return 0;
+
+ restore_opts:
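+
+The READ_ONCE()/WRITE_ONCE() wrappers around the free-inode counters exist
+because those fields can be read without the group lock held; the macros
+stop the compiler from tearing, fusing, or re-reading the accesses. C11
+relaxed atomics state the same contract, sketched below. Note the lo/hi
+halves can still be observed as a stale combination; only per-field tearing
+is ruled out, matching the kernel annotation:
+
+    #include <stdatomic.h>
+    #include <stdint.h>
+    #include <stdio.h>
+
+    /* lo/hi halves of a 32-bit counter, updated under a lock but also
+     * read locklessly, like bg_free_inodes_count_{lo,hi} */
+    static _Atomic uint16_t count_lo, count_hi;
+
+    static void set_count(uint32_t count)   /* writer, lock held */
+    {
+            atomic_store_explicit(&count_lo, (uint16_t)count,
+                                  memory_order_relaxed);
+            atomic_store_explicit(&count_hi, (uint16_t)(count >> 16),
+                                  memory_order_relaxed);
+    }
+
+    static uint32_t get_count(void)         /* lockless reader */
+    {
+            uint32_t lo = atomic_load_explicit(&count_lo,
+                                               memory_order_relaxed);
+            uint32_t hi = atomic_load_explicit(&count_hi,
+                                               memory_order_relaxed);
+
+            return lo | (hi << 16);
+    }
+
+    int main(void)
+    {
+            set_count(70000);
+            printf("%u\n", get_count());
+            return 0;
+    }
+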
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index bdd96329dddd49..ee8b29745a5b28 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -32,7 +32,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io,
+ f2fs_build_fault_attr(sbi, 0, 0);
+ if (!end_io)
+ f2fs_flush_merged_writes(sbi);
+- f2fs_handle_critical_error(sbi, reason, end_io);
++ f2fs_handle_critical_error(sbi, reason);
+ }
+
+ /*
+@@ -1551,7 +1551,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ blk = start_blk + BLKS_PER_SEG(sbi) - nm_i->nat_bits_blocks;
+ for (i = 0; i < nm_i->nat_bits_blocks; i++)
+ f2fs_update_meta_page(sbi, nm_i->nat_bits +
+- (i << F2FS_BLKSIZE_BITS), blk + i);
++ F2FS_BLK_TO_BYTES(i), blk + i);
+ }
+
+ /* write out checkpoint buffer at block 0 */
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index be66b3a0e793f6..af66edc722e98d 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1677,7 +1677,8 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
+ /* reserved delalloc block should be mapped for fiemap. */
+ if (blkaddr == NEW_ADDR)
+ map->m_flags |= F2FS_MAP_DELALLOC;
+- if (flag != F2FS_GET_BLOCK_DIO || !is_hole)
++ /* DIO READ and hole case, should not map the blocks. */
++ if (!(flag == F2FS_GET_BLOCK_DIO && is_hole && !map->m_may_create))
+ map->m_flags |= F2FS_MAP_MAPPED;
+
+ map->m_pblk = blkaddr;
+@@ -1894,25 +1895,6 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return (err < 0 ? err : 0);
+ }
+
+-static loff_t max_inode_blocks(struct inode *inode)
+-{
+- loff_t result = ADDRS_PER_INODE(inode);
+- loff_t leaf_count = ADDRS_PER_BLOCK(inode);
+-
+- /* two direct node blocks */
+- result += (leaf_count * 2);
+-
+- /* two indirect node blocks */
+- leaf_count *= NIDS_PER_BLOCK;
+- result += (leaf_count * 2);
+-
+- /* one double indirect node block */
+- leaf_count *= NIDS_PER_BLOCK;
+- result += leaf_count;
+-
+- return result;
+-}
+-
+ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+ {
+@@ -1985,8 +1967,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) {
+ start_blk = next_pgofs;
+
+- if (blks_to_bytes(inode, start_blk) < blks_to_bytes(inode,
+- max_inode_blocks(inode)))
++ if (blks_to_bytes(inode, start_blk) < maxbytes)
+ goto prep_next;
+
+ flags |= FIEMAP_EXTENT_LAST;
+@@ -2374,10 +2355,10 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ .nr_cpages = 0,
+ };
+ pgoff_t nc_cluster_idx = NULL_CLUSTER;
++ pgoff_t index;
+ #endif
+ unsigned nr_pages = rac ? readahead_count(rac) : 1;
+ unsigned max_nr_pages = nr_pages;
+- pgoff_t index;
+ int ret = 0;
+
+ map.m_pblk = 0;
+@@ -2395,9 +2376,9 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ prefetchw(&folio->flags);
+ }
+
++#ifdef CONFIG_F2FS_FS_COMPRESSION
+ index = folio_index(folio);
+
+-#ifdef CONFIG_F2FS_FS_COMPRESSION
+ if (!f2fs_compressed_file(inode))
+ goto read_single_page;
+
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index 8b0e1e71b66744..546b8ba9126130 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -275,7 +275,7 @@ static void update_mem_info(struct f2fs_sb_info *sbi)
+ /* build nm */
+ si->base_mem += sizeof(struct f2fs_nm_info);
+ si->base_mem += __bitmap_size(sbi, NAT_BITMAP);
+- si->base_mem += (NM_I(sbi)->nat_bits_blocks << F2FS_BLKSIZE_BITS);
++ si->base_mem += F2FS_BLK_TO_BYTES(NM_I(sbi)->nat_bits_blocks);
+ si->base_mem += NM_I(sbi)->nat_blocks *
+ f2fs_bitmap_size(NAT_ENTRY_PER_BLOCK);
+ si->base_mem += NM_I(sbi)->nat_blocks / 8;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 3bdfa53b24bb9e..0c8afcd6385184 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3635,8 +3635,7 @@ int f2fs_quota_sync(struct super_block *sb, int type);
+ loff_t max_file_blocks(struct inode *inode);
+ void f2fs_quota_off_umount(struct super_block *sb);
+ void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
+-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+- bool irq_context);
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason);
+ void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
+ void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error);
+ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 80ebc13ccb35b6..5f79d49b22bb80 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -868,7 +868,11 @@ static bool f2fs_force_buffered_io(struct inode *inode, int rw)
+ return true;
+ if (f2fs_compressed_file(inode))
+ return true;
+- if (f2fs_has_inline_data(inode))
++ /*
++ * only force direct reads to use buffered IO; direct writes expect
++ * inline data conversion before committing the IO.
++ */
++ if (f2fs_has_inline_data(inode) && rw == READ)
+ return true;
+
+ /* disallow direct IO if any of devices has unaligned blksize */
+@@ -2349,9 +2353,12 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ if (readonly)
+ goto out;
+
+- /* grab sb->s_umount to avoid racing w/ remount() */
++ /*
++ * grab sb->s_umount to avoid racing w/ remount() and other shutdown
++ * paths.
++ */
+ if (need_lock)
+- down_read(&sbi->sb->s_umount);
++ down_write(&sbi->sb->s_umount);
+
+ f2fs_stop_gc_thread(sbi);
+ f2fs_stop_discard_thread(sbi);
+@@ -2360,7 +2367,7 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ clear_opt(sbi, DISCARD);
+
+ if (need_lock)
+- up_read(&sbi->sb->s_umount);
++ up_write(&sbi->sb->s_umount);
+
+ f2fs_update_time(sbi, REQ_TIME);
+ out:
+@@ -3006,9 +3013,9 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
+ }
+
+ f2fs_lock_op(sbi);
+- ret = __exchange_data_block(src, dst, pos_in >> F2FS_BLKSIZE_BITS,
+- pos_out >> F2FS_BLKSIZE_BITS,
+- len >> F2FS_BLKSIZE_BITS, false);
++ ret = __exchange_data_block(src, dst, F2FS_BYTES_TO_BLK(pos_in),
++ F2FS_BYTES_TO_BLK(pos_out),
++ F2FS_BYTES_TO_BLK(len), false);
+
+ if (!ret) {
+ if (dst_max_i_size)
+@@ -3798,7 +3805,7 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count,
+ to_reserved = cluster_size - compr_blocks - reserved;
+
+ /* for the case all blocks in cluster were reserved */
+- if (to_reserved == 1) {
++ if (reserved && to_reserved == 1) {
+ dn->ofs_in_node += cluster_size;
+ goto next;
+ }
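+
+Several f2fs hunks in this patch (checkpoint.c, debug.c, node.c, super.c,
+and the range exchange above) replace open-coded "<< F2FS_BLKSIZE_BITS"
+shifts with the F2FS_BLK_TO_BYTES()/F2FS_BYTES_TO_BLK() helpers, which keep
+the conversion direction readable and the operand width consistent. The
+helpers reduce to something like this, with illustrative constants:
+
+    #include <stdio.h>
+
+    #define BLKSIZE_BITS 12 /* 4 KiB blocks, as in f2fs */
+
+    #define BLK_TO_BYTES(blk)   ((unsigned long long)(blk) << BLKSIZE_BITS)
+    #define BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> BLKSIZE_BITS)
+
+    int main(void)
+    {
+            printf("%llu bytes, %llu blocks\n",
+                   BLK_TO_BYTES(3), BYTES_TO_BLK(3 * 4096ULL + 123));
+            return 0;
+    }
+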
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 938249e7819e4e..2bc16e31065086 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -251,6 +251,8 @@ static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
+
+ switch (sbi->gc_mode) {
+ case GC_IDLE_CB:
++ case GC_URGENT_LOW:
++ case GC_URGENT_MID:
+ gc_mode = GC_CB;
+ break;
+ case GC_IDLE_GREEDY:
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index b72ef96f7e33af..ff5281a75933c7 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -905,6 +905,16 @@ static int truncate_node(struct dnode_of_data *dn)
+ if (err)
+ return err;
+
++ if (ni.blk_addr != NEW_ADDR &&
++ !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC_ENHANCE)) {
++ f2fs_err_ratelimited(sbi,
++ "nat entry is corrupted, run fsck to fix it, ino:%u, "
++ "nid:%u, blkaddr:%u", ni.ino, ni.nid, ni.blk_addr);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
++ return -EFSCORRUPTED;
++ }
++
+ /* Deallocate node address */
+ f2fs_invalidate_blocks(sbi, ni.blk_addr);
+ dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
+@@ -3166,7 +3176,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+
+ nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
+ nm_i->nat_bits = f2fs_kvzalloc(sbi,
+- nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL);
++ F2FS_BLK_TO_BYTES(nm_i->nat_bits_blocks), GFP_KERNEL);
+ if (!nm_i->nat_bits)
+ return -ENOMEM;
+
+@@ -3185,7 +3195,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+- memcpy(nm_i->nat_bits + (i << F2FS_BLKSIZE_BITS),
++ memcpy(nm_i->nat_bits + F2FS_BLK_TO_BYTES(i),
+ page_address(page), F2FS_BLKSIZE);
+ f2fs_put_page(page, 1);
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index c9320c98cb1425..6379d9c40a0243 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2926,7 +2926,8 @@ static int change_curseg(struct f2fs_sb_info *sbi, int type)
+ struct f2fs_summary_block *sum_node;
+ struct page *sum_page;
+
+- write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
++ if (curseg->inited)
++ write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
+
+ __set_test_and_inuse(sbi, new_segno);
+
+@@ -3974,8 +3975,8 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ }
+ }
+
+- f2fs_bug_on(sbi, !IS_DATASEG(type));
+ curseg = CURSEG_I(sbi, type);
++ f2fs_bug_on(sbi, !IS_DATASEG(curseg->seg_type));
+
+ mutex_lock(&curseg->curseg_mutex);
+ down_write(&sit_i->sentry_lock);
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index bfc01a521cb98e..c3aeca7e0dde6c 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -558,18 +558,21 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
+ }
+
+ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+- unsigned int node_blocks, unsigned int dent_blocks)
++ unsigned int node_blocks, unsigned int data_blocks,
++ unsigned int dent_blocks)
+ {
+
+- unsigned segno, left_blocks;
++ unsigned int segno, left_blocks, blocks;
+ int i;
+
+- /* check current node sections in the worst case. */
+- for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) {
++ /* check current data/node sections in the worst case. */
++ for (i = CURSEG_HOT_DATA; i < NR_PERSISTENT_LOG; i++) {
+ segno = CURSEG_I(sbi, i)->segno;
+ left_blocks = CAP_BLKS_PER_SEC(sbi) -
+ get_ckpt_valid_blocks(sbi, segno, true);
+- if (node_blocks > left_blocks)
++
++ blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks;
++ if (blocks > left_blocks)
+ return false;
+ }
+
+@@ -583,8 +586,9 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+ }
+
+ /*
+- * calculate needed sections for dirty node/dentry
+- * and call has_curseg_enough_space
++ * calculate needed sections for dirty node/dentry and call
++ * has_curseg_enough_space(); note that it also needs to account for
++ * dirty data in lfs mode when checkpoint is disabled.
+ */
+ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ unsigned int *lower_p, unsigned int *upper_p, bool *curseg_p)
+@@ -593,19 +597,30 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ get_pages(sbi, F2FS_DIRTY_DENTS) +
+ get_pages(sbi, F2FS_DIRTY_IMETA);
+ unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++ unsigned int total_data_blocks = 0;
+ unsigned int node_secs = total_node_blocks / CAP_BLKS_PER_SEC(sbi);
+ unsigned int dent_secs = total_dent_blocks / CAP_BLKS_PER_SEC(sbi);
++ unsigned int data_secs = 0;
+ unsigned int node_blocks = total_node_blocks % CAP_BLKS_PER_SEC(sbi);
+ unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
++ unsigned int data_blocks = 0;
++
++ if (f2fs_lfs_mode(sbi) &&
++ unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
++ data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
++ data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
++ }
+
+ if (lower_p)
+- *lower_p = node_secs + dent_secs;
++ *lower_p = node_secs + dent_secs + data_secs;
+ if (upper_p)
+ *upper_p = node_secs + dent_secs +
+- (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++ (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
++ (data_blocks ? 1 : 0);
+ if (curseg_p)
+ *curseg_p = has_curseg_enough_space(sbi,
+- node_blocks, dent_blocks);
++ node_blocks, data_blocks, dent_blocks);
+ }
+
+ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
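+
+__get_secs_required() splits each dirty-block total into whole sections (a
+floor division) plus a remainder: the lower bound sums only whole sections,
+and the upper bound adds one extra section per non-zero remainder. With
+this fix, dirty data joins nodes and dentries in the sum when checkpointing
+is disabled in LFS mode. The arithmetic, as a standalone sketch:
+
+    #include <stdio.h>
+
+    #define CAP_BLKS_PER_SEC 512    /* illustrative section capacity */
+
+    static void secs_required(unsigned int node, unsigned int dent,
+                              unsigned int data,
+                              unsigned int *lower, unsigned int *upper)
+    {
+            *lower = node / CAP_BLKS_PER_SEC + dent / CAP_BLKS_PER_SEC +
+                     data / CAP_BLKS_PER_SEC;
+            *upper = *lower + !!(node % CAP_BLKS_PER_SEC) +
+                     !!(dent % CAP_BLKS_PER_SEC) +
+                     !!(data % CAP_BLKS_PER_SEC);
+    }
+
+    int main(void)
+    {
+            unsigned int lo, up;
+
+            secs_required(1000, 10, 600, &lo, &up);
+            printf("lower=%u upper=%u\n", lo, up);  /* lower=2 upper=5 */
+            return 0;
+    }
+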
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 0f6e2b3f6a4c51..7e687bd0622661 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -151,6 +151,8 @@ enum {
+ Opt_mode,
+ Opt_fault_injection,
+ Opt_fault_type,
++ Opt_lazytime,
++ Opt_nolazytime,
+ Opt_quota,
+ Opt_noquota,
+ Opt_usrquota,
+@@ -227,6 +229,8 @@ static match_table_t f2fs_tokens = {
+ {Opt_mode, "mode=%s"},
+ {Opt_fault_injection, "fault_injection=%u"},
+ {Opt_fault_type, "fault_type=%u"},
++ {Opt_lazytime, "lazytime"},
++ {Opt_nolazytime, "nolazytime"},
+ {Opt_quota, "quota"},
+ {Opt_noquota, "noquota"},
+ {Opt_usrquota, "usrquota"},
+@@ -919,6 +923,12 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ f2fs_info(sbi, "fault_type options not supported");
+ break;
+ #endif
++ case Opt_lazytime:
++ sb->s_flags |= SB_LAZYTIME;
++ break;
++ case Opt_nolazytime:
++ sb->s_flags &= ~SB_LAZYTIME;
++ break;
+ #ifdef CONFIG_QUOTA
+ case Opt_quota:
+ case Opt_usrquota:
+@@ -3323,7 +3333,7 @@ loff_t max_file_blocks(struct inode *inode)
+ * fit within U32_MAX + 1 data units.
+ */
+
+- result = min(result, (((loff_t)U32_MAX + 1) * 4096) >> F2FS_BLKSIZE_BITS);
++ result = umin(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096));
+
+ return result;
+ }
+@@ -4136,8 +4146,7 @@ static bool system_going_down(void)
+ || system_state == SYSTEM_RESTART;
+ }
+
+-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+- bool irq_context)
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason)
+ {
+ struct super_block *sb = sbi->sb;
+ bool shutdown = reason == STOP_CP_REASON_SHUTDOWN;
+@@ -4149,10 +4158,12 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+ if (!f2fs_hw_is_readonly(sbi)) {
+ save_stop_reason(sbi, reason);
+
+- if (irq_context && !shutdown)
+- schedule_work(&sbi->s_error_work);
+- else
+- f2fs_record_stop_reason(sbi);
++ /*
++ * always create an asynchronous task to record stop_reason
++ * in order to avoid a potential deadlock when calling
++ * f2fs_record_stop_reason() synchronously.
++ */
++ schedule_work(&sbi->s_error_work);
+ }
+
+ /*
+@@ -4972,9 +4983,6 @@ static int __init init_f2fs_fs(void)
+ err = f2fs_init_shrinker();
+ if (err)
+ goto free_sysfs;
+- err = register_filesystem(&f2fs_fs_type);
+- if (err)
+- goto free_shrinker;
+ f2fs_create_root_stats();
+ err = f2fs_init_post_read_processing();
+ if (err)
+@@ -4997,7 +5005,12 @@ static int __init init_f2fs_fs(void)
+ err = f2fs_create_casefold_cache();
+ if (err)
+ goto free_compress_cache;
++ err = register_filesystem(&f2fs_fs_type);
++ if (err)
++ goto free_casefold_cache;
+ return 0;
++free_casefold_cache:
++ f2fs_destroy_casefold_cache();
+ free_compress_cache:
+ f2fs_destroy_compress_cache();
+ free_compress_mempool:
+@@ -5012,8 +5025,6 @@ static int __init init_f2fs_fs(void)
+ f2fs_destroy_post_read_processing();
+ free_root_stats:
+ f2fs_destroy_root_stats();
+- unregister_filesystem(&f2fs_fs_type);
+-free_shrinker:
+ f2fs_exit_shrinker();
+ free_sysfs:
+ f2fs_exit_sysfs();
+@@ -5037,6 +5048,7 @@ static int __init init_f2fs_fs(void)
+
+ static void __exit exit_f2fs_fs(void)
+ {
++ unregister_filesystem(&f2fs_fs_type);
+ f2fs_destroy_casefold_cache();
+ f2fs_destroy_compress_cache();
+ f2fs_destroy_compress_mempool();
+@@ -5045,7 +5057,6 @@ static void __exit exit_f2fs_fs(void)
+ f2fs_destroy_iostat_processing();
+ f2fs_destroy_post_read_processing();
+ f2fs_destroy_root_stats();
+- unregister_filesystem(&f2fs_fs_type);
+ f2fs_exit_shrinker();
+ f2fs_exit_sysfs();
+ f2fs_destroy_garbage_collection_cache();
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 08f7d538ca98f4..f4d17cd8c734c4 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -645,7 +645,7 @@ void fuse_read_args_fill(struct fuse_io_args *ia, struct file *file, loff_t pos,
+ args->out_args[0].size = count;
+ }
+
+-static void fuse_release_user_pages(struct fuse_args_pages *ap,
++static void fuse_release_user_pages(struct fuse_args_pages *ap, ssize_t nres,
+ bool should_dirty)
+ {
+ unsigned int i;
+@@ -656,6 +656,9 @@ static void fuse_release_user_pages(struct fuse_args_pages *ap,
+ if (ap->args.is_pinned)
+ unpin_user_page(ap->pages[i]);
+ }
++
++ if (nres > 0 && ap->args.invalidate_vmap)
++ invalidate_kernel_vmap_range(ap->args.vmap_base, nres);
+ }
+
+ static void fuse_io_release(struct kref *kref)
+@@ -754,25 +757,29 @@ static void fuse_aio_complete_req(struct fuse_mount *fm, struct fuse_args *args,
+ struct fuse_io_args *ia = container_of(args, typeof(*ia), ap.args);
+ struct fuse_io_priv *io = ia->io;
+ ssize_t pos = -1;
+-
+- fuse_release_user_pages(&ia->ap, io->should_dirty);
++ size_t nres;
+
+ if (err) {
+ /* Nothing */
+ } else if (io->write) {
+ if (ia->write.out.size > ia->write.in.size) {
+ err = -EIO;
+- } else if (ia->write.in.size != ia->write.out.size) {
+- pos = ia->write.in.offset - io->offset +
+- ia->write.out.size;
++ } else {
++ nres = ia->write.out.size;
++ if (ia->write.in.size != ia->write.out.size)
++ pos = ia->write.in.offset - io->offset +
++ ia->write.out.size;
+ }
+ } else {
+ u32 outsize = args->out_args[0].size;
+
++ nres = outsize;
+ if (ia->read.in.size != outsize)
+ pos = ia->read.in.offset - io->offset + outsize;
+ }
+
++ fuse_release_user_pages(&ia->ap, err ?: nres, io->should_dirty);
++
+ fuse_aio_complete(io, err, pos);
+ fuse_io_free(ia);
+ }
+@@ -1467,24 +1474,37 @@ static inline size_t fuse_get_frag_size(const struct iov_iter *ii,
+
+ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ size_t *nbytesp, int write,
+- unsigned int max_pages)
++ unsigned int max_pages,
++ bool use_pages_for_kvec_io)
+ {
++ bool flush_or_invalidate = false;
+ size_t nbytes = 0; /* # bytes already packed in req */
+ ssize_t ret = 0;
+
+- /* Special case for kernel I/O: can copy directly into the buffer */
++ /* Special case for kernel I/O: can copy directly into the buffer.
++ * However, if the fuse_conn implementation requires pages instead of a
++ * pointer (e.g., virtio-fs), use iov_iter_extract_pages() instead.
++ */
+ if (iov_iter_is_kvec(ii)) {
+- unsigned long user_addr = fuse_get_user_addr(ii);
+- size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
++ void *user_addr = (void *)fuse_get_user_addr(ii);
+
+- if (write)
+- ap->args.in_args[1].value = (void *) user_addr;
+- else
+- ap->args.out_args[0].value = (void *) user_addr;
++ if (!use_pages_for_kvec_io) {
++ size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
+
+- iov_iter_advance(ii, frag_size);
+- *nbytesp = frag_size;
+- return 0;
++ if (write)
++ ap->args.in_args[1].value = user_addr;
++ else
++ ap->args.out_args[0].value = user_addr;
++
++ iov_iter_advance(ii, frag_size);
++ *nbytesp = frag_size;
++ return 0;
++ }
++
++ if (is_vmalloc_addr(user_addr)) {
++ ap->args.vmap_base = user_addr;
++ flush_or_invalidate = true;
++ }
+ }
+
+ while (nbytes < *nbytesp && ap->num_pages < max_pages) {
+@@ -1513,6 +1533,10 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ (PAGE_SIZE - ret) & (PAGE_SIZE - 1);
+ }
+
++ if (write && flush_or_invalidate)
++ flush_kernel_vmap_range(ap->args.vmap_base, nbytes);
++
++ ap->args.invalidate_vmap = !write && flush_or_invalidate;
+ ap->args.is_pinned = iov_iter_extract_will_pin(ii);
+ ap->args.user_pages = true;
+ if (write)
+@@ -1581,7 +1605,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
+ size_t nbytes = min(count, nmax);
+
+ err = fuse_get_user_pages(&ia->ap, iter, &nbytes, write,
+- max_pages);
++ max_pages, fc->use_pages_for_kvec_io);
+ if (err && !nbytes)
+ break;
+
+@@ -1595,7 +1619,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
+ }
+
+ if (!io->async || nres < 0) {
+- fuse_release_user_pages(&ia->ap, io->should_dirty);
++ fuse_release_user_pages(&ia->ap, nres, io->should_dirty);
+ fuse_io_free(ia);
+ }
+ ia = NULL;
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index f2391961031374..79add14c363fb7 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -309,9 +309,12 @@ struct fuse_args {
+ bool may_block:1;
+ bool is_ext:1;
+ bool is_pinned:1;
++ bool invalidate_vmap:1;
+ struct fuse_in_arg in_args[3];
+ struct fuse_arg out_args[2];
+ void (*end)(struct fuse_mount *fm, struct fuse_args *args, int error);
++ /* Used for kvec iter backed by vmalloc address */
++ void *vmap_base;
+ };
+
+ struct fuse_args_pages {
+@@ -860,6 +863,9 @@ struct fuse_conn {
+ /** Passthrough support for read/write IO */
+ unsigned int passthrough:1;
+
++ /* Use pages instead of a pointer for kernel I/O */
++ unsigned int use_pages_for_kvec_io:1;
++
+ /** Maximum stack depth for passthrough backing files */
+ int max_stack_depth;
+
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index dd526014161503..43d66ab5e89188 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1568,6 +1568,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ fc->delete_stale = true;
+ fc->auto_submounts = true;
+ fc->sync_fs = true;
++ fc->use_pages_for_kvec_io = true;
+
+ /* Tell FUSE to split requests that exceed the virtqueue's size */
+ fc->max_pages_limit = min_t(unsigned int, fc->max_pages_limit,
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 4775c2cb8ae1b7..3af6120d55b51a 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1013,14 +1013,15 @@ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl)
+ &gl->gl_delete, 0);
+ }
+
+-static bool gfs2_queue_verify_evict(struct gfs2_glock *gl)
++bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later)
+ {
+ struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
++ unsigned long delay;
+
+- if (test_and_set_bit(GLF_VERIFY_EVICT, &gl->gl_flags))
++ if (test_and_set_bit(GLF_VERIFY_DELETE, &gl->gl_flags))
+ return false;
+- return queue_delayed_work(sdp->sd_delete_wq,
+- &gl->gl_delete, 5 * HZ);
++ delay = later ? 5 * HZ : 0;
++ return queue_delayed_work(sdp->sd_delete_wq, &gl->gl_delete, delay);
+ }
+
+ static void delete_work_func(struct work_struct *work)
+@@ -1052,19 +1053,19 @@ static void delete_work_func(struct work_struct *work)
+ if (gfs2_try_evict(gl)) {
+ if (test_bit(SDF_KILL, &sdp->sd_flags))
+ goto out;
+- if (gfs2_queue_verify_evict(gl))
++ if (gfs2_queue_verify_delete(gl, true))
+ return;
+ }
+ goto out;
+ }
+
+- if (test_and_clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags)) {
++ if (test_and_clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags)) {
+ inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino,
+ GFS2_BLKST_UNLINKED);
+ if (IS_ERR(inode)) {
+ if (PTR_ERR(inode) == -EAGAIN &&
+ !test_bit(SDF_KILL, &sdp->sd_flags) &&
+- gfs2_queue_verify_evict(gl))
++ gfs2_queue_verify_delete(gl, true))
+ return;
+ } else {
+ d_prune_aliases(inode);
+@@ -2116,7 +2117,7 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl)
+ {
+ clear_bit(GLF_TRY_TO_EVICT, &gl->gl_flags);
+- clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags);
++ clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags);
+ if (cancel_delayed_work(&gl->gl_delete))
+ gfs2_glock_put(gl);
+ }
+@@ -2369,7 +2370,7 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
+ *p++ = 'N';
+ if (test_bit(GLF_TRY_TO_EVICT, gflags))
+ *p++ = 'e';
+- if (test_bit(GLF_VERIFY_EVICT, gflags))
++ if (test_bit(GLF_VERIFY_DELETE, gflags))
+ *p++ = 'E';
+ *p = 0;
+ return buf;
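
The new gfs2_queue_verify_delete() follows a common at-most-once
scheduling idiom; a minimal generic sketch of that idiom (hypothetical
names, not from the patch):

	struct obj {
		unsigned long flags;
		struct workqueue_struct *wq;
		struct delayed_work work;
	};

	#define OBJ_VERIFY_PENDING 0

	static bool queue_verify(struct obj *o, bool later)
	{
		/* test_and_set_bit() returning true means a verification is
		 * already queued, so at most one work item exists per object;
		 * the delay selects "verify now" versus "verify later". */
		if (test_and_set_bit(OBJ_VERIFY_PENDING, &o->flags))
			return false;
		return queue_delayed_work(o->wq, &o->work, later ? 5 * HZ : 0);
	}
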
+diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
+index adf0091cc98f95..63e101d448e961 100644
+--- a/fs/gfs2/glock.h
++++ b/fs/gfs2/glock.h
+@@ -245,6 +245,7 @@ static inline int gfs2_glock_nq_init(struct gfs2_glock *gl,
+ void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state);
+ void gfs2_glock_complete(struct gfs2_glock *gl, int ret);
+ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl);
++bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later);
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl);
+ void gfs2_flush_delete_work(struct gfs2_sbd *sdp);
+ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp);
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index aa4ef67a34e037..bd1348bff90ebe 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -329,7 +329,7 @@ enum {
+ GLF_BLOCKING = 15,
+ GLF_UNLOCKED = 16, /* Wait for glock to be unlocked */
+ GLF_TRY_TO_EVICT = 17, /* iopen glocks only */
+- GLF_VERIFY_EVICT = 18, /* iopen glocks only */
++ GLF_VERIFY_DELETE = 18, /* iopen glocks only */
+ };
+
+ struct gfs2_glock {
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 29c77281676526..53930312971530 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1879,7 +1879,7 @@ static void try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked, u64 skip
+ */
+ ip = gl->gl_object;
+
+- if (ip || !gfs2_queue_try_to_evict(gl))
++ if (ip || !gfs2_queue_verify_delete(gl, false))
+ gfs2_glock_put(gl);
+ else
+ found++;
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 6678060ed4d2bb..e22c1edc32b39e 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1045,7 +1045,7 @@ static int gfs2_drop_inode(struct inode *inode)
+ struct gfs2_glock *gl = ip->i_iopen_gh.gh_gl;
+
+ gfs2_glock_hold(gl);
+- if (!gfs2_queue_try_to_evict(gl))
++ if (!gfs2_queue_verify_delete(gl, true))
+ gfs2_glock_put_async(gl);
+ return 0;
+ }
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index 9e78f181c24f48..c1433043473370 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -156,6 +156,7 @@ struct hfsplus_sb_info {
+
+ /* Runtime variables */
+ u32 blockoffset;
++ u32 min_io_size;
+ sector_t part_start;
+ sector_t sect_count;
+ int fs_shift;
+@@ -307,7 +308,7 @@ struct hfsplus_readdir_data {
+ */
+ static inline unsigned short hfsplus_min_io_size(struct super_block *sb)
+ {
+- return max_t(unsigned short, bdev_logical_block_size(sb->s_bdev),
++ return max_t(unsigned short, HFSPLUS_SB(sb)->min_io_size,
+ HFSPLUS_SECTOR_SIZE);
+ }
+
+diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
+index ce9346099c72dc..4b0fdc49d1fe7e 100644
+--- a/fs/hfsplus/wrapper.c
++++ b/fs/hfsplus/wrapper.c
+@@ -172,6 +172,8 @@ int hfsplus_read_wrapper(struct super_block *sb)
+ if (!blocksize)
+ goto out;
+
++ sbi->min_io_size = blocksize;
++
+ if (hfsplus_get_last_session(sb, &part_start, &part_size))
+ goto out;
+
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index f50311a6b4299d..47038e6608123c 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -948,8 +948,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc)
+ goto out_no_inode;
+ }
+
+- kfree(opt->iocharset);
+-
+ return 0;
+
+ /*
+@@ -987,7 +985,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc)
+ brelse(bh);
+ brelse(pri_bh);
+ out_freesbi:
+- kfree(opt->iocharset);
+ kfree(sbi);
+ s->s_fs_info = NULL;
+ return error;
+@@ -1528,7 +1525,10 @@ static int isofs_get_tree(struct fs_context *fc)
+
+ static void isofs_free_fc(struct fs_context *fc)
+ {
+- kfree(fc->fs_private);
++ struct isofs_options *opt = fc->fs_private;
++
++ kfree(opt->iocharset);
++ kfree(opt);
+ }
+
+ static const struct fs_context_operations isofs_context_ops = {
+diff --git a/fs/jffs2/erase.c b/fs/jffs2/erase.c
+index acd32f05b51988..ef3a1e1b6cb065 100644
+--- a/fs/jffs2/erase.c
++++ b/fs/jffs2/erase.c
+@@ -338,10 +338,9 @@ static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_erasebl
+ } while(--retlen);
+ mtd_unpoint(c->mtd, jeb->offset, c->sector_size);
+ if (retlen) {
+- pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n",
+- *wordebuf,
+- jeb->offset +
+- c->sector_size-retlen * sizeof(*wordebuf));
++ *bad_offset = jeb->offset + c->sector_size - retlen * sizeof(*wordebuf);
++ pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n",
++ *wordebuf, *bad_offset);
+ return -EIO;
+ }
+ return 0;
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 0fb05e314edf60..24afbae87225a7 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -559,7 +559,7 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+
+ size_check:
+ if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
+- int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size);
++ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
+
+ printk(KERN_ERR "ea_get: invalid extended attribute\n");
+ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
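
Why clamp_t() rather than min_t() here: the sizes involved come from
on-disk metadata and, interpreted as int, can be negative; min_t() would
propagate a negative length straight into print_hex_dump(), while
clamp_t() pins it into [0, EALIST_SIZE]. A standalone userspace
demonstration of the difference:

	#include <stdio.h>

	#define min_t(t, a, b)        ((t)(a) < (t)(b) ? (t)(a) : (t)(b))
	#define clamp_t(t, v, lo, hi) ((t)(v) < (t)(lo) ? (t)(lo) : \
	                               (t)(v) > (t)(hi) ? (t)(hi) : (t)(v))

	int main(void)
	{
		int ealist_size = 128, ea_size = -4;  /* corrupt on-disk size */

		printf("%d\n", min_t(int, ealist_size, ea_size));      /* -4 */
		printf("%d\n", clamp_t(int, ea_size, 0, ealist_size)); /*  0 */
		return 0;
	}
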
+diff --git a/fs/netfs/fscache_volume.c b/fs/netfs/fscache_volume.c
+index cb75c07b5281a5..ced14ac78cc1c2 100644
+--- a/fs/netfs/fscache_volume.c
++++ b/fs/netfs/fscache_volume.c
+@@ -322,8 +322,7 @@ void fscache_create_volume(struct fscache_volume *volume, bool wait)
+ }
+ return;
+ no_wait:
+- clear_bit_unlock(FSCACHE_VOLUME_CREATING, &volume->flags);
+- wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING);
++ clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags);
+ }
+
+ /*
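
clear_and_wake_up_bit() is the stock helper from
include/linux/wait_bit.h; besides being shorter, it inserts the memory
barrier the open-coded sequence above was missing. Roughly its
open-coded equivalent:

	static inline void clear_and_wake_up_bit(int bit, void *word)
	{
		clear_bit_unlock(bit, word);
		/* order the clear before the waiter check in wake_up_bit() */
		smp_mb__after_atomic();
		wake_up_bit(word, bit);
	}
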
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 0becdec129704f..47189476b5538b 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -571,19 +571,32 @@ bl_find_get_deviceid(struct nfs_server *server,
+ if (!node)
+ return ERR_PTR(-ENODEV);
+
++ /*
++ * Devices that are marked unavailable are left in the cache with a
++ * timeout to avoid sending GETDEVINFO after every LAYOUTGET, or
++ * constantly attempting to register the device. Once marked as
++ * unavailable they must be deleted and never reused.
++ */
+ if (test_bit(NFS_DEVICEID_UNAVAILABLE, &node->flags)) {
+ unsigned long end = jiffies;
+ unsigned long start = end - PNFS_DEVICE_RETRY_TIMEOUT;
+
+ if (!time_in_range(node->timestamp_unavailable, start, end)) {
++ /* Uncork subsequent GETDEVINFO operations for this device */
+ nfs4_delete_deviceid(node->ld, node->nfs_client, id);
+ goto retry;
+ }
+ goto out_put;
+ }
+
+- if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node)))
++ if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node))) {
++ /*
++ * If we cannot register, treat this device as transient:
++ * make a negative cache entry for the device.
++ */
++ nfs4_mark_deviceid_unavailable(node);
+ goto out_put;
++ }
+
+ return node;
+
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index 6252f44479457b..cab8809f0e0f48 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -20,9 +20,6 @@ static void bl_unregister_scsi(struct pnfs_block_dev *dev)
+ const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
+ int status;
+
+- if (!test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags))
+- return;
+-
+ status = ops->pr_register(bdev, dev->pr_key, 0, false);
+ if (status)
+ trace_bl_pr_key_unreg_err(bdev, dev->pr_key, status);
+@@ -58,7 +55,8 @@ static void bl_unregister_dev(struct pnfs_block_dev *dev)
+ return;
+ }
+
+- if (dev->type == PNFS_BLOCK_VOLUME_SCSI)
++ if (dev->type == PNFS_BLOCK_VOLUME_SCSI &&
++ test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags))
+ bl_unregister_scsi(dev);
+ }
+
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 5902a9beca1f3f..080af26c14dce2 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -11,7 +11,7 @@
+ #include <linux/nfs_page.h>
+ #include <linux/wait_bit.h>
+
+-#define NFS_SB_MASK (SB_RDONLY|SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
++#define NFS_SB_MASK (SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
+
+ extern const struct export_operations nfs_export_ops;
+
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9d40319e063dea..405f17e6e0b45b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2603,12 +2603,14 @@ static void nfs4_open_release(void *calldata)
+ struct nfs4_opendata *data = calldata;
+ struct nfs4_state *state = NULL;
+
++ /* In case of error, no cleanup! */
++ if (data->rpc_status != 0 || !data->rpc_done) {
++ nfs_release_seqid(data->o_arg.seqid);
++ goto out_free;
++ }
+ /* If this request hasn't been cancelled, do nothing */
+ if (!data->cancelled)
+ goto out_free;
+- /* In case of error, no cleanup! */
+- if (data->rpc_status != 0 || !data->rpc_done)
+- goto out_free;
+ /* In case we need an open_confirm, no cleanup! */
+ if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM)
+ goto out_free;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index d074d0ceb4f016..1e2b9ee4222e7c 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -144,6 +144,31 @@ static void nfs_io_completion_put(struct nfs_io_completion *ioc)
+ kref_put(&ioc->refcount, nfs_io_completion_release);
+ }
+
++static void
++nfs_page_set_inode_ref(struct nfs_page *req, struct inode *inode)
++{
++ if (!test_and_set_bit(PG_INODE_REF, &req->wb_flags)) {
++ kref_get(&req->wb_kref);
++ atomic_long_inc(&NFS_I(inode)->nrequests);
++ }
++}
++
++static int
++nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)
++{
++ int ret;
++
++ if (!test_bit(PG_REMOVE, &req->wb_flags))
++ return 0;
++ ret = nfs_page_group_lock(req);
++ if (ret)
++ return ret;
++ if (test_and_clear_bit(PG_REMOVE, &req->wb_flags))
++ nfs_page_set_inode_ref(req, inode);
++ nfs_page_group_unlock(req);
++ return 0;
++}
++
+ /**
+ * nfs_folio_find_head_request - find head request associated with a folio
+ * @folio: pointer to folio
+@@ -540,7 +565,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ struct inode *inode = folio->mapping->host;
+ struct nfs_page *head, *subreq;
+ struct nfs_commit_info cinfo;
+- bool removed;
+ int ret;
+
+ /*
+@@ -565,18 +589,18 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ goto retry;
+ }
+
+- ret = nfs_page_group_lock(head);
++ ret = nfs_cancel_remove_inode(head, inode);
+ if (ret < 0)
+ goto out_unlock;
+
+- removed = test_bit(PG_REMOVE, &head->wb_flags);
++ ret = nfs_page_group_lock(head);
++ if (ret < 0)
++ goto out_unlock;
+
+ /* lock each request in the page group */
+ for (subreq = head->wb_this_page;
+ subreq != head;
+ subreq = subreq->wb_this_page) {
+- if (test_bit(PG_REMOVE, &subreq->wb_flags))
+- removed = true;
+ ret = nfs_page_group_lock_subreq(head, subreq);
+ if (ret < 0)
+ goto out_unlock;
+@@ -584,21 +608,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+
+ nfs_page_group_unlock(head);
+
+- /*
+- * If PG_REMOVE is set on any request, I/O on that request has
+- * completed, but some requests were still under I/O at the time
+- * we locked the head request.
+- *
+- * In that case the above wait for all requests means that all I/O
+- * has now finished, and we can restart from a clean slate. Let the
+- * old requests go away and start from scratch instead.
+- */
+- if (removed) {
+- nfs_unroll_locks(head, head);
+- nfs_unlock_and_release_request(head);
+- goto retry;
+- }
+-
+ nfs_init_cinfo_from_inode(&cinfo, inode);
+ nfs_join_page_group(head, &cinfo, inode);
+ return head;
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 50b3135d07ac73..d8e22bae76e8c3 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -40,15 +40,24 @@
+ #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)
+ #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
+
+-static void expkey_put(struct kref *ref)
++static void expkey_put_work(struct work_struct *work)
+ {
+- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
++ struct svc_expkey *key =
++ container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
+
+ if (test_bit(CACHE_VALID, &key->h.flags) &&
+ !test_bit(CACHE_NEGATIVE, &key->h.flags))
+ path_put(&key->ek_path);
+ auth_domain_put(key->ek_client);
+- kfree_rcu(key, ek_rcu);
++ kfree(key);
++}
++
++static void expkey_put(struct kref *ref)
++{
++ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
++
++ INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work);
++ queue_rcu_work(system_wq, &key->ek_rcu_work);
+ }
+
+ static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)
+@@ -355,16 +364,26 @@ static void export_stats_destroy(struct export_stats *stats)
+ EXP_STATS_COUNTERS_NUM);
+ }
+
+-static void svc_export_put(struct kref *ref)
++static void svc_export_put_work(struct work_struct *work)
+ {
+- struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
++ struct svc_export *exp =
++ container_of(to_rcu_work(work), struct svc_export, ex_rcu_work);
++
+ path_put(&exp->ex_path);
+ auth_domain_put(exp->ex_client);
+ nfsd4_fslocs_free(&exp->ex_fslocs);
+ export_stats_destroy(exp->ex_stats);
+ kfree(exp->ex_stats);
+ kfree(exp->ex_uuid);
+- kfree_rcu(exp, ex_rcu);
++ kfree(exp);
++}
++
++static void svc_export_put(struct kref *ref)
++{
++ struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
++
++ INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work);
++ queue_rcu_work(system_wq, &exp->ex_rcu_work);
+ }
+
+ static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
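
Both conversions above follow the same rcu_work idiom: kfree_rcu() can
only free memory after a grace period, whereas these release paths also
call path_put() and auth_domain_put(), which may sleep and so cannot run
in every context the final kref_put() can happen in. Queueing an
rcu_work defers them to process context while preserving the grace
period the old kfree_rcu() provided. A hedged generic sketch
(hypothetical struct, not from the patch):

	struct foo {
		struct kref ref;
		struct rcu_work rwork;
	};

	static void foo_free_work(struct work_struct *work)
	{
		struct foo *f = container_of(to_rcu_work(work),
					     struct foo, rwork);

		/* runs in process context after an RCU grace period,
		 * so sleeping cleanup calls are safe here */
		kfree(f);
	}

	static void foo_release(struct kref *ref)
	{
		struct foo *f = container_of(ref, struct foo, ref);

		INIT_RCU_WORK(&f->rwork, foo_free_work);
		queue_rcu_work(system_wq, &f->rwork);
	}
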
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index ca9dc230ae3d0b..9d895570ceba05 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -75,7 +75,7 @@ struct svc_export {
+ u32 ex_layout_types;
+ struct nfsd4_deviceid_map *ex_devid_map;
+ struct cache_detail *cd;
+- struct rcu_head ex_rcu;
++ struct rcu_work ex_rcu_work;
+ unsigned long ex_xprtsec_modes;
+ struct export_stats *ex_stats;
+ };
+@@ -92,7 +92,7 @@ struct svc_expkey {
+ u32 ek_fsid[6];
+
+ struct path ek_path;
+- struct rcu_head ek_rcu;
++ struct rcu_work ek_rcu_work;
+ };
+
+ #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index d756f443fc444c..245efbbf145479 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -287,17 +287,17 @@ static int decode_cb_compound4res(struct xdr_stream *xdr,
+ u32 length;
+ __be32 *p;
+
+- p = xdr_inline_decode(xdr, 4 + 4);
++ p = xdr_inline_decode(xdr, XDR_UNIT);
+ if (unlikely(p == NULL))
+ goto out_overflow;
+- hdr->status = be32_to_cpup(p++);
++ hdr->status = be32_to_cpup(p);
+ /* Ignore the tag */
+- length = be32_to_cpup(p++);
+- p = xdr_inline_decode(xdr, length + 4);
+- if (unlikely(p == NULL))
++ if (xdr_stream_decode_u32(xdr, &length) < 0)
++ goto out_overflow;
++ if (xdr_inline_decode(xdr, length) == NULL)
++ goto out_overflow;
++ if (xdr_stream_decode_u32(xdr, &hdr->nops) < 0)
+ goto out_overflow;
+- p += XDR_QUADLEN(length);
+- hdr->nops = be32_to_cpup(p);
+ return 0;
+ out_overflow:
+ return -EIO;
+@@ -1455,6 +1455,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ ses = c->cn_session;
+ }
+ spin_unlock(&clp->cl_lock);
++ if (!c)
++ return;
+
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 8b78d493e301f3..0d6505ac1a33e3 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1291,7 +1291,7 @@ static void nfsd4_stop_copy(struct nfsd4_copy *copy)
+ nfs4_put_copy(copy);
+ }
+
+-static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
++static struct nfsd4_copy *nfsd4_unhash_copy(struct nfs4_client *clp)
+ {
+ struct nfsd4_copy *copy = NULL;
+
+@@ -1300,6 +1300,9 @@ static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
+ copy = list_first_entry(&clp->async_copies, struct nfsd4_copy,
+ copies);
+ refcount_inc(©->refcount);
++ copy->cp_clp = NULL;
++ if (!list_empty(©->copies))
++ list_del_init(©->copies);
+ }
+ spin_unlock(&clp->async_lock);
+ return copy;
+@@ -1309,7 +1312,7 @@ void nfsd4_shutdown_copy(struct nfs4_client *clp)
+ {
+ struct nfsd4_copy *copy;
+
+- while ((copy = nfsd4_get_copy(clp)) != NULL)
++ while ((copy = nfsd4_unhash_copy(clp)) != NULL)
+ nfsd4_stop_copy(copy);
+ }
+ #ifdef CONFIG_NFSD_V4_2_INTER_SSC
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 69a3a84e159e62..d92c650888312b 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -659,7 +659,8 @@ nfs4_reset_recoverydir(char *recdir)
+ return status;
+ status = -ENOTDIR;
+ if (d_is_dir(path.dentry)) {
+- strcpy(user_recovery_dirname, recdir);
++ strscpy(user_recovery_dirname, recdir,
++ sizeof(user_recovery_dirname));
+ status = 0;
+ }
+ path_put(&path);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d96d8cfd1ff86b..e2903f4cc3adaa 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5955,7 +5955,7 @@ nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
+ path.dentry = file_dentry(nf->nf_file);
+
+ rc = vfs_getattr(&path, stat,
+- (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
++ (STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
+ AT_STATX_SYNC_AS_STAT);
+
+ nfsd_file_put(nf);
+@@ -6039,8 +6039,7 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ }
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+ dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
+- dp->dl_cb_fattr.ncf_initial_cinfo =
+- nfsd4_change_attribute(&stat, d_inode(currentfh->fh_dentry));
++ dp->dl_cb_fattr.ncf_initial_cinfo = nfsd4_change_attribute(&stat);
+ trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+ } else {
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index ebbae04837ef07..6bf37124e389f6 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3040,7 +3040,7 @@ static __be32 nfsd4_encode_fattr4_change(struct xdr_stream *xdr,
+ return nfs_ok;
+ }
+
+- c = nfsd4_change_attribute(&args->stat, d_inode(args->dentry));
++ c = nfsd4_change_attribute(&args->stat);
+ return nfsd4_encode_changeid4(xdr, c);
+ }
+
+diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
+index dd4e11a703aa6a..85fd4d1b156204 100644
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -618,20 +618,18 @@ fh_update(struct svc_fh *fhp)
+ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp)
+ {
+ bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+- struct inode *inode;
+ struct kstat stat;
+ __be32 err;
+
+ if (fhp->fh_no_wcc || fhp->fh_pre_saved)
+ return nfs_ok;
+
+- inode = d_inode(fhp->fh_dentry);
+ err = fh_getattr(fhp, &stat);
+ if (err)
+ return err;
+
+ if (v4)
+- fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode);
++ fhp->fh_pre_change = nfsd4_change_attribute(&stat);
+
+ fhp->fh_pre_mtime = stat.mtime;
+ fhp->fh_pre_ctime = stat.ctime;
+@@ -648,7 +646,6 @@ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp)
+ __be32 fh_fill_post_attrs(struct svc_fh *fhp)
+ {
+ bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+- struct inode *inode = d_inode(fhp->fh_dentry);
+ __be32 err;
+
+ if (fhp->fh_no_wcc)
+@@ -664,7 +661,7 @@ __be32 fh_fill_post_attrs(struct svc_fh *fhp)
+ fhp->fh_post_saved = true;
+ if (v4)
+ fhp->fh_post_change =
+- nfsd4_change_attribute(&fhp->fh_post_attr, inode);
++ nfsd4_change_attribute(&fhp->fh_post_attr);
+ return nfs_ok;
+ }
+
+@@ -755,7 +752,14 @@ enum fsid_source fsid_source(const struct svc_fh *fhp)
+ return FSIDSOURCE_DEV;
+ }
+
+-/*
++/**
++ * nfsd4_change_attribute - Generate an NFSv4 change_attribute value
++ * @stat: inode attributes
++ *
++ * Caller must fill in @stat before calling, typically by invoking
++ * vfs_getattr() with STATX_MODE, STATX_CTIME, and STATX_CHANGE_COOKIE.
++ * Returns an unsigned 64-bit changeid4 value (RFC 8881 Section 3.2).
++ *
+ * We could use i_version alone as the change attribute. However, i_version
+ * can go backwards on a regular file after an unclean shutdown. On its own
+ * that doesn't necessarily cause a problem, but if i_version goes backwards
+@@ -772,13 +776,13 @@ enum fsid_source fsid_source(const struct svc_fh *fhp)
+ * assume that the new change attr is always logged to stable storage in some
+ * fashion before the results can be seen.
+ */
+-u64 nfsd4_change_attribute(const struct kstat *stat, const struct inode *inode)
++u64 nfsd4_change_attribute(const struct kstat *stat)
+ {
+ u64 chattr;
+
+ if (stat->result_mask & STATX_CHANGE_COOKIE) {
+ chattr = stat->change_cookie;
+- if (S_ISREG(inode->i_mode) &&
++ if (S_ISREG(stat->mode) &&
+ !(stat->attributes & STATX_ATTR_CHANGE_MONOTONIC)) {
+ chattr += (u64)stat->ctime.tv_sec << 30;
+ chattr += stat->ctime.tv_nsec;
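
The mixing rule is easy to check in isolation; a standalone userspace
sketch of the computation above (the S_ISREG() check is omitted):

	#include <stdint.h>
	#include <stdio.h>

	/* When the change cookie is not monotonic, fold ctime in so that a
	 * post-crash rollback of i_version still yields a fresh changeid4. */
	static uint64_t change_attr(uint64_t cookie, int monotonic,
	                            int64_t sec, long nsec)
	{
		uint64_t chattr = cookie;

		if (!monotonic) {
			chattr += (uint64_t)sec << 30;
			chattr += nsec;
		}
		return chattr;
	}

	int main(void)
	{
		printf("%llu\n", (unsigned long long)
		       change_attr(7, 0, 1726000000, 123));
		return 0;
	}
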
+diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
+index 6ebdf7ea27bfdb..c3da6d194b5fb3 100644
+--- a/fs/nfsd/nfsfh.h
++++ b/fs/nfsd/nfsfh.h
+@@ -293,8 +293,7 @@ static inline void fh_clear_pre_post_attrs(struct svc_fh *fhp)
+ fhp->fh_pre_saved = false;
+ }
+
+-u64 nfsd4_change_attribute(const struct kstat *stat,
+- const struct inode *inode);
++u64 nfsd4_change_attribute(const struct kstat *stat);
+ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp);
+ __be32 fh_fill_post_attrs(struct svc_fh *fhp);
+ __be32 __must_check fh_fill_both_attrs(struct svc_fh *fhp);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 82ae8254c068be..f976949d2634a1 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -333,16 +333,19 @@ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask,
+ if (!inode_mark)
+ return 0;
+
+- if (mask & FS_EVENT_ON_CHILD) {
+- /*
+- * Some events can be sent on both parent dir and child marks
+- * (e.g. FS_ATTRIB). If both parent dir and child are
+- * watching, report the event once to parent dir with name (if
+- * interested) and once to child without name (if interested).
+- * The child watcher is expecting an event without a file name
+- * and without the FS_EVENT_ON_CHILD flag.
+- */
+- mask &= ~FS_EVENT_ON_CHILD;
++ /*
++ * Some events can be sent on both parent dir and child marks (e.g.
++ * FS_ATTRIB). If both parent dir and child are watching, report the
++ * event once to parent dir with name (if interested) and once to child
++ * without name (if interested).
++ *
++ * In any case, regardless of whether the parent is watching, the
++ * child watcher is expecting an event without the FS_EVENT_ON_CHILD
++ * flag. The file name is expected if and only if this is a directory
++ * event.
++ */
++ mask &= ~FS_EVENT_ON_CHILD;
++ if (!(mask & ALL_FSNOTIFY_DIRENT_EVENTS)) {
+ dir = NULL;
+ name = NULL;
+ }
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index c45b222cf9c11c..4981439e62092a 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -138,8 +138,11 @@ static void fsnotify_get_sb_watched_objects(struct super_block *sb)
+
+ static void fsnotify_put_sb_watched_objects(struct super_block *sb)
+ {
+- if (atomic_long_dec_and_test(fsnotify_sb_watched_objects(sb)))
+- wake_up_var(fsnotify_sb_watched_objects(sb));
++ atomic_long_t *watched_objects = fsnotify_sb_watched_objects(sb);
++
++ /* the superblock can go away after this decrement */
++ if (atomic_long_dec_and_test(watched_objects))
++ wake_up_var(watched_objects);
+ }
+
+ static void fsnotify_get_inode_ref(struct inode *inode)
+@@ -150,8 +153,11 @@ static void fsnotify_get_inode_ref(struct inode *inode)
+
+ static void fsnotify_put_inode_ref(struct inode *inode)
+ {
+- fsnotify_put_sb_watched_objects(inode->i_sb);
++ /* read ->i_sb before the inode can go away */
++ struct super_block *sb = inode->i_sb;
++
+ iput(inode);
++ fsnotify_put_sb_watched_objects(sb);
+ }
+
+ /*
+diff --git a/fs/ocfs2/aops.h b/fs/ocfs2/aops.h
+index 3a520117fa59f0..a9ce7947228c8d 100644
+--- a/fs/ocfs2/aops.h
++++ b/fs/ocfs2/aops.h
+@@ -70,6 +70,8 @@ enum ocfs2_iocb_lock_bits {
+ OCFS2_IOCB_NUM_LOCKS
+ };
+
++#define ocfs2_iocb_init_rw_locked(iocb) \
++ (iocb->private = NULL)
+ #define ocfs2_iocb_clear_rw_locked(iocb) \
+ clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private)
+ #define ocfs2_iocb_rw_locked_level(iocb) \
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index c6d0d17a759c1e..f772313ccb0625 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -2397,6 +2397,8 @@ static ssize_t ocfs2_file_write_iter(struct kiocb *iocb,
+ } else
+ inode_lock(inode);
+
++ ocfs2_iocb_init_rw_locked(iocb);
++
+ /*
+ * Concurrent O_DIRECT writes are allowed with
+ * mount_option "coherency=buffered".
+@@ -2543,6 +2545,8 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb,
+ if (!direct_io && nowait)
+ return -EOPNOTSUPP;
+
++ ocfs2_iocb_init_rw_locked(iocb);
++
+ /*
+ * buffered reads protect themselves in ->read_folio(). O_DIRECT reads
+ * need locks to protect pending reads from racing with truncate.
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 13a041ef0c4e9b..7e4afd7409a660 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -493,13 +493,13 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ * the previous entry, search for a matching entry.
+ */
+ if (!m || start < m->addr || start >= m->addr + m->size) {
+- struct kcore_list *iter;
++ struct kcore_list *pos;
+
+ m = NULL;
+- list_for_each_entry(iter, &kclist_head, list) {
+- if (start >= iter->addr &&
+- start < iter->addr + iter->size) {
+- m = iter;
++ list_for_each_entry(pos, &kclist_head, list) {
++ if (start >= pos->addr &&
++ start < pos->addr + pos->size) {
++ m = pos;
+ break;
+ }
+ }
+diff --git a/fs/proc/softirqs.c b/fs/proc/softirqs.c
+index f4616083faef3b..04bb29721419b0 100644
+--- a/fs/proc/softirqs.c
++++ b/fs/proc/softirqs.c
+@@ -20,7 +20,7 @@ static int show_softirqs(struct seq_file *p, void *v)
+ for (i = 0; i < NR_SOFTIRQS; i++) {
+ seq_printf(p, "%12s:", softirq_to_name[i]);
+ for_each_possible_cpu(j)
+- seq_printf(p, " %10u", kstat_softirqs_cpu(i, j));
++ seq_put_decimal_ull_width(p, " ", kstat_softirqs_cpu(i, j), 10);
+ seq_putc(p, '\n');
+ }
+ return 0;
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 90e283b31ca181..1b122ec6d8f0dc 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1737,18 +1737,21 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
+ return 0;
+ }
+
+-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos)
++int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ size_t len = iov_iter_count(iter);
+
+ if (!iter_is_ubuf(iter))
+- return false;
++ return -EINVAL;
+
+ if (!is_power_of_2(len))
+- return false;
++ return -EINVAL;
++
++ if (!IS_ALIGNED(iocb->ki_pos, len))
++ return -EINVAL;
+
+- if (!IS_ALIGNED(pos, len))
+- return false;
++ if (!(iocb->ki_flags & IOCB_DIRECT))
++ return -EOPNOTSUPP;
+
+- return true;
++ return 0;
+ }
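
The two validity conditions that survive the conversion are worth
spelling out; a standalone userspace sketch of the checks above (the
iter_is_ubuf() and IOCB_DIRECT checks are omitted):

	#include <stdbool.h>
	#include <stdio.h>

	/* An atomic write must be a power-of-2 length and the file
	 * position must be naturally aligned to that length. */
	static bool atomic_write_ok(long long pos, size_t len)
	{
		if (len == 0 || (len & (len - 1)))   /* !is_power_of_2(len) */
			return false;
		return (pos & (len - 1)) == 0;       /* IS_ALIGNED(pos, len) */
	}

	int main(void)
	{
		printf("%d\n", atomic_write_ok(8192, 4096)); /* 1: aligned pow2 */
		printf("%d\n", atomic_write_ok(6144, 4096)); /* 0: misaligned   */
		printf("%d\n", atomic_write_ok(0, 3000));    /* 0: not pow2     */
		return 0;
	}
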
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index 0ff2491c311d8a..9c0ef4195b5829 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -17,6 +17,11 @@ static void free_cached_dir(struct cached_fid *cfid);
+ static void smb2_close_cached_fid(struct kref *ref);
+ static void cfids_laundromat_worker(struct work_struct *work);
+
++struct cached_dir_dentry {
++ struct list_head entry;
++ struct dentry *dentry;
++};
++
+ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ const char *path,
+ bool lookup_only,
+@@ -59,6 +64,16 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ list_add(&cfid->entry, &cfids->entries);
+ cfid->on_list = true;
+ kref_get(&cfid->refcount);
++ /*
++ * Set @cfid->has_lease to true during construction so that the lease
++ * reference can be put in cached_dir_lease_break() due to a potential
++ * lease break right after the request is sent or while @cfid is still
++ * being cached, or if a reconnection is triggered during construction.
++ * Concurrent processes won't be able to use it yet due to @cfid->time
++ * being zero.
++ */
++ cfid->has_lease = true;
++
+ spin_unlock(&cfids->cfid_list_lock);
+ return cfid;
+ }
+@@ -176,12 +191,12 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ return -ENOENT;
+ }
+ /*
+- * Return cached fid if it has a lease. Otherwise, it is either a new
+- * entry or laundromat worker removed it from @cfids->entries. Caller
+- * will put last reference if the latter.
++ * Return the cached fid if it is valid (has a lease and a nonzero
++ * @cfid->time). Otherwise, it is either a new entry or the laundromat
++ * worker removed it from @cfids->entries; in the latter case the
++ * caller will put the last reference.
+ */
+ spin_lock(&cfids->cfid_list_lock);
+- if (cfid->has_lease) {
++ if (cfid->has_lease && cfid->time) {
+ spin_unlock(&cfids->cfid_list_lock);
+ *ret_cfid = cfid;
+ kfree(utf16_path);
+@@ -212,6 +227,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ cfid->dentry = dentry;
++ cfid->tcon = tcon;
+
+ /*
+ * We do not hold the lock for the open because in case
+@@ -267,15 +283,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+
+ smb2_set_related(&rqst[1]);
+
+- /*
+- * Set @cfid->has_lease to true before sending out compounded request so
+- * its lease reference can be put in cached_dir_lease_break() due to a
+- * potential lease break right after the request is sent or while @cfid
+- * is still being cached. Concurrent processes won't be to use it yet
+- * due to @cfid->time being zero.
+- */
+- cfid->has_lease = true;
+-
+ if (retries) {
+ smb2_set_replay(server, &rqst[0]);
+ smb2_set_replay(server, &rqst[1]);
+@@ -292,7 +299,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ goto oshr_free;
+ }
+- cfid->tcon = tcon;
+ cfid->is_open = true;
+
+ spin_lock(&cfids->cfid_list_lock);
+@@ -347,6 +353,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ SMB2_query_info_free(&rqst[1]);
+ free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
++out:
+ if (rc) {
+ spin_lock(&cfids->cfid_list_lock);
+ if (cfid->on_list) {
+@@ -358,23 +365,14 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ /*
+ * We are guaranteed to have two references at this
+ * point. One for the caller and one for a potential
+- * lease. Release the Lease-ref so that the directory
+- * will be closed when the caller closes the cached
+- * handle.
++ * lease. Release one here, and the second below.
+ */
+ cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+- goto out;
+ }
+ spin_unlock(&cfids->cfid_list_lock);
+- }
+-out:
+- if (rc) {
+- if (cfid->is_open)
+- SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
+- cfid->fid.volatile_fid);
+- free_cached_dir(cfid);
++
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
+ } else {
+ *ret_cfid = cfid;
+ atomic_inc(&tcon->num_remote_opens);
+@@ -401,7 +399,7 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+ if (dentry && cfid->dentry == dentry) {
+- cifs_dbg(FYI, "found a cached root file handle by dentry\n");
++ cifs_dbg(FYI, "found a cached file handle by dentry\n");
+ kref_get(&cfid->refcount);
+ *ret_cfid = cfid;
+ spin_unlock(&cfids->cfid_list_lock);
+@@ -477,7 +475,10 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ struct cifs_tcon *tcon;
+ struct tcon_link *tlink;
+ struct cached_fids *cfids;
++ struct cached_dir_dentry *tmp_list, *q;
++ LIST_HEAD(entry);
+
++ spin_lock(&cifs_sb->tlink_tree_lock);
+ for (node = rb_first(root); node; node = rb_next(node)) {
+ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
+ tcon = tlink_tcon(tlink);
+@@ -486,11 +487,30 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ cfids = tcon->cfids;
+ if (cfids == NULL)
+ continue;
++ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+- dput(cfid->dentry);
++ tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC);
++ if (tmp_list == NULL)
++ break;
++ spin_lock(&cfid->fid_lock);
++ tmp_list->dentry = cfid->dentry;
+ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ list_add_tail(&tmp_list->entry, &entry);
+ }
++ spin_unlock(&cfids->cfid_list_lock);
++ }
++ spin_unlock(&cifs_sb->tlink_tree_lock);
++
++ list_for_each_entry_safe(tmp_list, q, &entry, entry) {
++ list_del(&tmp_list->entry);
++ dput(tmp_list->dentry);
++ kfree(tmp_list);
+ }
++
++ /* Flush any pending work that will drop dentries */
++ flush_workqueue(cfid_put_wq);
+ }
+
+ /*
+@@ -501,50 +521,71 @@ void invalidate_all_cached_dirs(struct cifs_tcon *tcon)
+ {
+ struct cached_fids *cfids = tcon->cfids;
+ struct cached_fid *cfid, *q;
+- LIST_HEAD(entry);
+
+ if (cfids == NULL)
+ return;
+
++ /*
++ * Mark all the cfids as closed, and move them to the cfids->dying list.
++ * They'll be cleaned up later by cfids_invalidation_worker. Take
++ * a reference to each cfid during this process.
++ */
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+- list_move(&cfid->entry, &entry);
++ list_move(&cfid->entry, &cfids->dying);
+ cfids->num_entries--;
+ cfid->is_open = false;
+ cfid->on_list = false;
+- /* To prevent race with smb2_cached_lease_break() */
+- kref_get(&cfid->refcount);
+- }
+- spin_unlock(&cfids->cfid_list_lock);
+-
+- list_for_each_entry_safe(cfid, q, &entry, entry) {
+- list_del(&cfid->entry);
+- cancel_work_sync(&cfid->lease_break);
+ if (cfid->has_lease) {
+ /*
+- * We lease was never cancelled from the server so we
+- * need to drop the reference.
++ * The lease was never cancelled from the server,
++ * so steal that reference.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+ cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
+- }
+- /* Drop the extra reference opened above*/
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
++ } else
++ kref_get(&cfid->refcount);
+ }
++ /*
++ * Queue dropping of the dentries once locks have been dropped
++ */
++ if (!list_empty(&cfids->dying))
++ queue_work(cfid_put_wq, &cfids->invalidation_work);
++ spin_unlock(&cfids->cfid_list_lock);
+ }
+
+ static void
+-smb2_cached_lease_break(struct work_struct *work)
++cached_dir_offload_close(struct work_struct *work)
+ {
+ struct cached_fid *cfid = container_of(work,
+- struct cached_fid, lease_break);
++ struct cached_fid, close_work);
++ struct cifs_tcon *tcon = cfid->tcon;
++
++ WARN_ON(cfid->on_list);
+
+- spin_lock(&cfid->cfids->cfid_list_lock);
+- cfid->has_lease = false;
+- spin_unlock(&cfid->cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cached_close);
++}
++
++/*
++ * Release the cached directory's dentry, and then queue work to drop cached
++ * directory itself (closing on server if needed).
++ *
++ * Must be called with a reference to the cached_fid and a reference to the
++ * tcon.
++ */
++static void cached_dir_put_work(struct work_struct *work)
++{
++ struct cached_fid *cfid = container_of(work, struct cached_fid,
++ put_work);
++ struct dentry *dentry;
++
++ spin_lock(&cfid->fid_lock);
++ dentry = cfid->dentry;
++ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ dput(dentry);
++ queue_work(serverclose_wq, &cfid->close_work);
+ }
+
+ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+@@ -561,6 +602,7 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+ !memcmp(lease_key,
+ cfid->fid.lease_key,
+ SMB2_LEASE_KEY_SIZE)) {
++ cfid->has_lease = false;
+ cfid->time = 0;
+ /*
+ * We found a lease remove it from the list
+@@ -570,8 +612,10 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+ cfid->on_list = false;
+ cfids->num_entries--;
+
+- queue_work(cifsiod_wq,
+- &cfid->lease_break);
++ ++tcon->tc_count;
++ trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
++ netfs_trace_tcon_ref_get_cached_lease_break);
++ queue_work(cfid_put_wq, &cfid->put_work);
+ spin_unlock(&cfids->cfid_list_lock);
+ return true;
+ }
+@@ -593,7 +637,8 @@ static struct cached_fid *init_cached_dir(const char *path)
+ return NULL;
+ }
+
+- INIT_WORK(&cfid->lease_break, smb2_cached_lease_break);
++ INIT_WORK(&cfid->close_work, cached_dir_offload_close);
++ INIT_WORK(&cfid->put_work, cached_dir_put_work);
+ INIT_LIST_HEAD(&cfid->entry);
+ INIT_LIST_HEAD(&cfid->dirents.entries);
+ mutex_init(&cfid->dirents.de_mutex);
+@@ -606,6 +651,9 @@ static void free_cached_dir(struct cached_fid *cfid)
+ {
+ struct cached_dirent *dirent, *q;
+
++ WARN_ON(work_pending(&cfid->close_work));
++ WARN_ON(work_pending(&cfid->put_work));
++
+ dput(cfid->dentry);
+ cfid->dentry = NULL;
+
+@@ -623,10 +671,30 @@ static void free_cached_dir(struct cached_fid *cfid)
+ kfree(cfid);
+ }
+
++static void cfids_invalidation_worker(struct work_struct *work)
++{
++ struct cached_fids *cfids = container_of(work, struct cached_fids,
++ invalidation_work);
++ struct cached_fid *cfid, *q;
++ LIST_HEAD(entry);
++
++ spin_lock(&cfids->cfid_list_lock);
++ /* move cfids->dying to the local list */
++ list_cut_before(&entry, &cfids->dying, &cfids->dying);
++ spin_unlock(&cfids->cfid_list_lock);
++
++ list_for_each_entry_safe(cfid, q, &entry, entry) {
++ list_del(&cfid->entry);
++ /* Drop the ref-count acquired in invalidate_all_cached_dirs */
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ }
++}
++
+ static void cfids_laundromat_worker(struct work_struct *work)
+ {
+ struct cached_fids *cfids;
+ struct cached_fid *cfid, *q;
++ struct dentry *dentry;
+ LIST_HEAD(entry);
+
+ cfids = container_of(work, struct cached_fids, laundromat_work.work);
+@@ -638,33 +706,42 @@ static void cfids_laundromat_worker(struct work_struct *work)
+ cfid->on_list = false;
+ list_move(&cfid->entry, &entry);
+ cfids->num_entries--;
+- /* To prevent race with smb2_cached_lease_break() */
+- kref_get(&cfid->refcount);
++ if (cfid->has_lease) {
++ /*
++ * Our lease has not yet been cancelled from the
++ * server. Steal that reference.
++ */
++ cfid->has_lease = false;
++ } else
++ kref_get(&cfid->refcount);
+ }
+ }
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
+ list_del(&cfid->entry);
+- /*
+- * Cancel and wait for the work to finish in case we are racing
+- * with it.
+- */
+- cancel_work_sync(&cfid->lease_break);
+- if (cfid->has_lease) {
++
++ spin_lock(&cfid->fid_lock);
++ dentry = cfid->dentry;
++ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ dput(dentry);
++ if (cfid->is_open) {
++ spin_lock(&cifs_tcp_ses_lock);
++ ++cfid->tcon->tc_count;
++ trace_smb3_tcon_ref(cfid->tcon->debug_id, cfid->tcon->tc_count,
++ netfs_trace_tcon_ref_get_cached_laundromat);
++ spin_unlock(&cifs_tcp_ses_lock);
++ queue_work(serverclose_wq, &cfid->close_work);
++ } else
+ /*
+- * Our lease has not yet been cancelled from the server
+- * so we need to drop the reference.
++ * Drop the ref-count from above, either the lease-ref (if there
++ * was one) or the extra one acquired.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+- cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+- }
+- /* Drop the extra reference opened above */
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
+ }
+- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
++ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
+ dir_cache_timeout * HZ);
+ }
+
+@@ -677,9 +754,11 @@ struct cached_fids *init_cached_dirs(void)
+ return NULL;
+ spin_lock_init(&cfids->cfid_list_lock);
+ INIT_LIST_HEAD(&cfids->entries);
++ INIT_LIST_HEAD(&cfids->dying);
+
++ INIT_WORK(&cfids->invalidation_work, cfids_invalidation_worker);
+ INIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker);
+- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
++ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
+ dir_cache_timeout * HZ);
+
+ return cfids;
+@@ -698,6 +777,7 @@ void free_cached_dirs(struct cached_fids *cfids)
+ return;
+
+ cancel_delayed_work_sync(&cfids->laundromat_work);
++ cancel_work_sync(&cfids->invalidation_work);
+
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+@@ -705,6 +785,11 @@ void free_cached_dirs(struct cached_fids *cfids)
+ cfid->is_open = false;
+ list_move(&cfid->entry, &entry);
+ }
++ list_for_each_entry_safe(cfid, q, &cfids->dying, entry) {
++ cfid->on_list = false;
++ cfid->is_open = false;
++ list_move(&cfid->entry, &entry);
++ }
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
+diff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h
+index 81ba0fd5cc16d6..1dfe79d947a62f 100644
+--- a/fs/smb/client/cached_dir.h
++++ b/fs/smb/client/cached_dir.h
+@@ -44,7 +44,8 @@ struct cached_fid {
+ spinlock_t fid_lock;
+ struct cifs_tcon *tcon;
+ struct dentry *dentry;
+- struct work_struct lease_break;
++ struct work_struct put_work;
++ struct work_struct close_work;
+ struct smb2_file_all_info file_all_info;
+ struct cached_dirents dirents;
+ };
+@@ -53,10 +54,13 @@ struct cached_fid {
+ struct cached_fids {
+ /* Must be held when:
+ * - accessing the cfids->entries list
++ * - accessing the cfids->dying list
+ */
+ spinlock_t cfid_list_lock;
+ int num_entries;
+ struct list_head entries;
++ struct list_head dying;
++ struct work_struct invalidation_work;
+ struct delayed_work laundromat_work;
+ };
+
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 9bdb6e7f1dc3a9..44a367e21804d7 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -157,6 +157,7 @@ struct workqueue_struct *fileinfo_put_wq;
+ struct workqueue_struct *cifsoplockd_wq;
+ struct workqueue_struct *deferredclose_wq;
+ struct workqueue_struct *serverclose_wq;
++struct workqueue_struct *cfid_put_wq;
+ __u32 cifs_lock_secret;
+
+ /*
+@@ -1895,9 +1896,16 @@ init_cifs(void)
+ goto out_destroy_deferredclose_wq;
+ }
+
++ cfid_put_wq = alloc_workqueue("cfid_put_wq",
++ WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
++ if (!cfid_put_wq) {
++ rc = -ENOMEM;
++ goto out_destroy_serverclose_wq;
++ }
++
+ rc = cifs_init_inodecache();
+ if (rc)
+- goto out_destroy_serverclose_wq;
++ goto out_destroy_cfid_put_wq;
+
+ rc = cifs_init_netfs();
+ if (rc)
+@@ -1965,6 +1973,8 @@ init_cifs(void)
+ cifs_destroy_netfs();
+ out_destroy_inodecache:
+ cifs_destroy_inodecache();
++out_destroy_cfid_put_wq:
++ destroy_workqueue(cfid_put_wq);
+ out_destroy_serverclose_wq:
+ destroy_workqueue(serverclose_wq);
+ out_destroy_deferredclose_wq:
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 59f649da5fdde7..64610236cc7251 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -589,6 +589,7 @@ struct smb_version_operations {
+ /* Check for STATUS_NETWORK_NAME_DELETED */
+ bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv);
+ int (*parse_reparse_point)(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data);
+ int (*create_reparse_symlink)(const unsigned int xid,
+@@ -1982,7 +1983,7 @@ require use of the stronger protocol */
+ * cifsInodeInfo->lock_sem cifsInodeInfo->llist cifs_init_once
+ * ->can_cache_brlcks
+ * cifsInodeInfo->deferred_lock cifsInodeInfo->deferred_closes cifsInodeInfo_alloc
+- * cached_fid->fid_mutex cifs_tcon->crfid tcon_info_alloc
++ * cached_fids->cfid_list_lock cifs_tcon->cfids->entries init_cached_dirs
+ * cifsFileInfo->fh_mutex cifsFileInfo cifs_new_fileinfo
+ * cifsFileInfo->file_info_lock cifsFileInfo->count cifs_new_fileinfo
+ * ->invalidHandle initiate_cifs_search
+@@ -2070,6 +2071,7 @@ extern struct workqueue_struct *fileinfo_put_wq;
+ extern struct workqueue_struct *cifsoplockd_wq;
+ extern struct workqueue_struct *deferredclose_wq;
+ extern struct workqueue_struct *serverclose_wq;
++extern struct workqueue_struct *cfid_put_wq;
+ extern __u32 cifs_lock_secret;
+
+ extern mempool_t *cifs_sm_req_poolp;
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 497bf3c447bcb5..8d35a5cab39e35 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -675,6 +675,7 @@ char *extract_hostname(const char *unc);
+ char *extract_sharename(const char *unc);
+ int parse_reparse_point(struct reparse_data_buffer *buf,
+ u32 plen, struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data);
+ int cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 9af28ed4cca46d..9cf87ba0fb8c93 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1908,11 +1908,35 @@ static int match_session(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+ CIFS_MAX_USERNAME_LEN))
+ return 0;
+ if ((ctx->username && strlen(ctx->username) != 0) &&
+- ses->password != NULL &&
+- strncmp(ses->password,
+- ctx->password ? ctx->password : "",
+- CIFS_MAX_PASSWORD_LEN))
+- return 0;
++ ses->password != NULL) {
++
++ /* New mount can only share sessions with an existing mount if:
++ * 1. Both password and password2 match, or
++ * 2. password2 of the old mount matches password of the new mount
++ * and password of the old mount matches password2 of the new
++ * mount
++ */
++ if (ses->password2 != NULL && ctx->password2 != NULL) {
++ if (!((strncmp(ses->password, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0 &&
++ strncmp(ses->password2, ctx->password2,
++ CIFS_MAX_PASSWORD_LEN) == 0) ||
++ (strncmp(ses->password, ctx->password2,
++ CIFS_MAX_PASSWORD_LEN) == 0 &&
++ strncmp(ses->password2, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0)))
++ return 0;
++
++ } else if ((ses->password2 == NULL && ctx->password2 != NULL) ||
++ (ses->password2 != NULL && ctx->password2 == NULL)) {
++ return 0;
++
++ } else {
++ if (strncmp(ses->password, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN))
++ return 0;
++ }
++ }
+ }
+
+ if (strcmp(ctx->local_nls->charset, ses->local_nls->charset))
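
The session-sharing rule added above reduces to a small predicate; a
standalone userspace sketch (assuming non-NULL primary passwords for
brevity, whereas the patch substitutes "" for a NULL ctx->password):

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	/* Sessions match if both password pairs are equal, or if the pair
	 * is swapped (old password2 equals new password and vice versa). */
	static bool passwords_match(const char *p, const char *p2,
	                            const char *np, const char *np2)
	{
		if (p2 && np2)
			return (!strcmp(p, np) && !strcmp(p2, np2)) ||
			       (!strcmp(p, np2) && !strcmp(p2, np));
		if (!p2 != !np2)        /* only one side has a password2 */
			return false;
		return !strcmp(p, np);
	}

	int main(void)
	{
		printf("%d\n", passwords_match("a", "b", "b", "a")); /* 1: swapped */
		printf("%d\n", passwords_match("a", "b", "a", "c")); /* 0 */
		return 0;
	}
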
+@@ -2256,6 +2280,7 @@ struct cifs_ses *
+ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ {
+ int rc = 0;
++ int retries = 0;
+ unsigned int xid;
+ struct cifs_ses *ses;
+ struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
+@@ -2274,6 +2299,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ cifs_dbg(FYI, "Session needs reconnect\n");
+
+ mutex_lock(&ses->session_mutex);
++
++retry_old_session:
+ rc = cifs_negotiate_protocol(xid, ses, server);
+ if (rc) {
+ mutex_unlock(&ses->session_mutex);
+@@ -2286,6 +2313,13 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ rc = cifs_setup_session(xid, ses, server,
+ ctx->local_nls);
+ if (rc) {
++ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
++ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
++ retries++;
++ cifs_dbg(FYI, "Session reconnect failed, retrying with alternate password\n");
++ swap(ses->password, ses->password2);
++ goto retry_old_session;
++ }
+ mutex_unlock(&ses->session_mutex);
+ /* problem -- put our reference */
+ cifs_put_smb_ses(ses);
+@@ -2361,6 +2395,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ ses->chans_need_reconnect = 1;
+ spin_unlock(&ses->chan_lock);
+
++retry_new_session:
+ mutex_lock(&ses->session_mutex);
+ rc = cifs_negotiate_protocol(xid, ses, server);
+ if (!rc)
+@@ -2373,8 +2408,16 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ sizeof(ses->smb3signingkey));
+ spin_unlock(&ses->chan_lock);
+
+- if (rc)
+- goto get_ses_fail;
++ if (rc) {
++ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
++ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
++ retries++;
++ cifs_dbg(FYI, "Session setup failed, retrying with alternate password\n");
++ swap(ses->password, ses->password2);
++ goto retry_new_session;
++ } else
++ goto get_ses_fail;
++ }
+
+ /*
+ * success, put it on the list and add it as first channel
+@@ -2558,7 +2601,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+
+ if (ses->server->dialect >= SMB20_PROT_ID &&
+ (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING))
+- nohandlecache = ctx->nohandlecache;
++ nohandlecache = ctx->nohandlecache || !dir_cache_timeout;
+ else
+ nohandlecache = true;
+ tcon = tcon_info_alloc(!nohandlecache, netfs_trace_tcon_ref_new);
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 4069b69fbc7e04..e9fe48a3625bae 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -890,12 +890,37 @@ do { \
+ cifs_sb->ctx->field = NULL; \
+ } while (0)
+
++int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
++{
++ if (ses->password &&
++ cifs_sb->ctx->password &&
++ strcmp(ses->password, cifs_sb->ctx->password)) {
++ kfree_sensitive(cifs_sb->ctx->password);
++ cifs_sb->ctx->password = kstrdup(ses->password, GFP_KERNEL);
++ if (!cifs_sb->ctx->password)
++ return -ENOMEM;
++ }
++ if (ses->password2 &&
++ cifs_sb->ctx->password2 &&
++ strcmp(ses->password2, cifs_sb->ctx->password2)) {
++ kfree_sensitive(cifs_sb->ctx->password2);
++ cifs_sb->ctx->password2 = kstrdup(ses->password2, GFP_KERNEL);
++ if (!cifs_sb->ctx->password2) {
++ kfree_sensitive(cifs_sb->ctx->password);
++ cifs_sb->ctx->password = NULL;
++ return -ENOMEM;
++ }
++ }
++ return 0;
++}
++
+ static int smb3_reconfigure(struct fs_context *fc)
+ {
+ struct smb3_fs_context *ctx = smb3_fc2context(fc);
+ struct dentry *root = fc->root;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(root->d_sb);
+ struct cifs_ses *ses = cifs_sb_master_tcon(cifs_sb)->ses;
++ char *new_password = NULL, *new_password2 = NULL;
+ bool need_recon = false;
+ int rc;
+
+@@ -915,21 +940,63 @@ static int smb3_reconfigure(struct fs_context *fc)
+ STEAL_STRING(cifs_sb, ctx, UNC);
+ STEAL_STRING(cifs_sb, ctx, source);
+ STEAL_STRING(cifs_sb, ctx, username);
++
+ if (need_recon == false)
+ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
+ else {
+- kfree_sensitive(ses->password);
+- ses->password = kstrdup(ctx->password, GFP_KERNEL);
+- if (!ses->password)
+- return -ENOMEM;
+- kfree_sensitive(ses->password2);
+- ses->password2 = kstrdup(ctx->password2, GFP_KERNEL);
+- if (!ses->password2) {
+- kfree_sensitive(ses->password);
+- ses->password = NULL;
++ if (ctx->password) {
++ new_password = kstrdup(ctx->password, GFP_KERNEL);
++ if (!new_password)
++ return -ENOMEM;
++ } else
++ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
++ }
++
++ /*
++ * if a new password2 has been specified, then reset its value
++ * inside the ses struct
++ */
++ if (ctx->password2) {
++ new_password2 = kstrdup(ctx->password2, GFP_KERNEL);
++ if (!new_password2) {
++ kfree_sensitive(new_password);
+ return -ENOMEM;
+ }
++ } else
++ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password2);
++
++ /*
++ * we may update the passwords in the ses struct below. Make sure we do
++ * not race with smb2_reconnect
++ */
++ mutex_lock(&ses->session_mutex);
++
++ /*
++ * smb2_reconnect may swap password and password2 in case session setup
++ * failed. First get ctx passwords in sync with ses passwords. It should
++ * be okay to do this even if this function were to return an error at a
++ * later stage
++ */
++ rc = smb3_sync_session_ctx_passwords(cifs_sb, ses);
++ if (rc) {
++ mutex_unlock(&ses->session_mutex);
++ return rc;
+ }
++
++ /*
++ * now that allocations for passwords are done, commit them
++ */
++ if (new_password) {
++ kfree_sensitive(ses->password);
++ ses->password = new_password;
++ }
++ if (new_password2) {
++ kfree_sensitive(ses->password2);
++ ses->password2 = new_password2;
++ }
++
++ mutex_unlock(&ses->session_mutex);
++
+ STEAL_STRING(cifs_sb, ctx, domainname);
+ STEAL_STRING(cifs_sb, ctx, nodename);
+ STEAL_STRING(cifs_sb, ctx, iocharset);
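
The reworked reconfigure path uses an allocate-then-commit shape worth
noting: every fallible kstrdup() happens before session_mutex is taken,
so the ses fields are updated either completely or not at all, and no
allocation is attempted under the mutex. A minimal generic sketch
(hypothetical object with non-NULL sources, not from the patch):

	char *new_a = kstrdup(src_a, GFP_KERNEL);
	char *new_b = kstrdup(src_b, GFP_KERNEL);

	if (!new_a || !new_b) {
		kfree(new_a);
		kfree(new_b);
		return -ENOMEM;
	}
	mutex_lock(&obj->lock);
	swap(obj->a, new_a);	/* commit; old values end up in new_* */
	swap(obj->b, new_b);
	mutex_unlock(&obj->lock);
	kfree_sensitive(new_a);	/* free the displaced old values */
	kfree_sensitive(new_b);
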
+diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
+index cf577ec0dd0ac4..bbd2063ab838d3 100644
+--- a/fs/smb/client/fs_context.h
++++ b/fs/smb/client/fs_context.h
+@@ -298,6 +298,7 @@ static inline struct smb3_fs_context *smb3_fc2context(const struct fs_context *f
+ }
+
+ extern int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx);
++extern int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses);
+ extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
+
+ /*
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 54579a2003ac6a..200936773a9560 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1075,6 +1075,7 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ rc = 0;
+ } else if (iov && server->ops->parse_reparse_point) {
+ rc = server->ops->parse_reparse_point(cifs_sb,
++ full_path,
+ iov, data);
+ }
+ break;
+@@ -2433,13 +2434,10 @@ cifs_dentry_needs_reval(struct dentry *dentry)
+ return true;
+
+ if (!open_cached_dir_by_dentry(tcon, dentry->d_parent, &cfid)) {
+- spin_lock(&cfid->fid_lock);
+ if (cfid->time && cifs_i->time > cfid->time) {
+- spin_unlock(&cfid->fid_lock);
+ close_cached_dir(cfid);
+ return false;
+ }
+- spin_unlock(&cfid->fid_lock);
+ close_cached_dir(cfid);
+ }
+ /*
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 74abbdf5026c73..f74d0a86f44a4e 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -35,6 +35,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ u16 len, plen;
+ int rc = 0;
+
++ if (strlen(symname) > REPARSE_SYM_PATH_MAX)
++ return -ENAMETOOLONG;
++
+ sym = kstrdup(symname, GFP_KERNEL);
+ if (!sym)
+ return -ENOMEM;
+@@ -64,7 +67,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ if (rc < 0)
+ goto out;
+
+- plen = 2 * UniStrnlen((wchar_t *)path, PATH_MAX);
++ plen = 2 * UniStrnlen((wchar_t *)path, REPARSE_SYM_PATH_MAX);
+ len = sizeof(*buf) + plen * 2;
+ buf = kzalloc(len, GFP_KERNEL);
+ if (!buf) {
+@@ -532,9 +535,76 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+ return 0;
+ }
+
++int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
++ bool unicode, bool relative,
++ const char *full_path,
++ struct cifs_sb_info *cifs_sb)
++{
++ char sep = CIFS_DIR_SEP(cifs_sb);
++ char *linux_target = NULL;
++ char *smb_target = NULL;
++ int levels;
++ int rc;
++ int i;
++
++ smb_target = cifs_strndup_from_utf16(buf, len, unicode, cifs_sb->local_nls);
++ if (!smb_target) {
++ rc = -ENOMEM;
++ goto out;
++ }
++
++ if (smb_target[0] == sep && relative) {
++ /*
++ * This is a relative SMB symlink from the top of the share,
++ * which is the top level directory of the Linux mount point.
++ * Linux does not support such relative symlinks, so convert
++ * it to a relative symlink from the current directory.
++ * full_path is the SMB path to the symlink (from which the
++ * current directory is extracted) and smb_target is the SMB path
++ * the symlink points to, therefore full_path must always be on
++ * the SMB share.
++ */
++ int smb_target_len = strlen(smb_target)+1;
++ levels = 0;
++ for (i = 1; full_path[i]; i++) { /* i=1 to skip leading sep */
++ if (full_path[i] == sep)
++ levels++;
++ }
++ linux_target = kmalloc(levels*3 + smb_target_len, GFP_KERNEL);
++ if (!linux_target) {
++ rc = -ENOMEM;
++ goto out;
++ }
++ for (i = 0; i < levels; i++) {
++ linux_target[i*3 + 0] = '.';
++ linux_target[i*3 + 1] = '.';
++ linux_target[i*3 + 2] = sep;
++ }
++ memcpy(linux_target + levels*3, smb_target+1, smb_target_len); /* +1 to skip leading sep */
++ } else {
++ linux_target = smb_target;
++ smb_target = NULL;
++ }
++
++ if (sep == '\\')
++ convert_delimiter(linux_target, '/');
++
++ rc = 0;
++ *target = linux_target;
++
++ cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, *target);
++
++out:
++ if (rc != 0)
++ kfree(linux_target);
++ kfree(smb_target);
++ return rc;
++}
++
+ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
+ u32 plen, bool unicode,
+ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
+ unsigned int len;
+@@ -549,20 +619,18 @@ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
+ return -EIO;
+ }
+
+- data->symlink_target = cifs_strndup_from_utf16(sym->PathBuffer + offs,
+- len, unicode,
+- cifs_sb->local_nls);
+- if (!data->symlink_target)
+- return -ENOMEM;
+-
+- convert_delimiter(data->symlink_target, '/');
+- cifs_dbg(FYI, "%s: target path: %s\n", __func__, data->symlink_target);
+-
+- return 0;
++ return smb2_parse_native_symlink(&data->symlink_target,
++ sym->PathBuffer + offs,
++ len,
++ unicode,
++ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
++ full_path,
++ cifs_sb);
+ }
+
+ int parse_reparse_point(struct reparse_data_buffer *buf,
+ u32 plen, struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data)
+ {
+ struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+@@ -577,7 +645,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ case IO_REPARSE_TAG_SYMLINK:
+ return parse_reparse_symlink(
+ (struct reparse_symlink_data_buffer *)buf,
+- plen, unicode, cifs_sb, data);
++ plen, unicode, cifs_sb, full_path, data);
+ case IO_REPARSE_TAG_LX_SYMLINK:
+ case IO_REPARSE_TAG_AF_UNIX:
+ case IO_REPARSE_TAG_LX_FIFO:
+@@ -593,6 +661,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ }
+
+ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data)
+ {
+@@ -602,7 +671,7 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+
+ buf = (struct reparse_data_buffer *)((u8 *)io +
+ le32_to_cpu(io->OutputOffset));
+- return parse_reparse_point(buf, plen, cifs_sb, true, data);
++ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+ static void wsl_to_fattr(struct cifs_open_info_data *data,
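
The conversion that smb2_parse_native_symlink() performs can be modelled outside the kernel: count the directory levels in the symlink's own path, emit that many "../" components, then append the target with its leading separator dropped. A standalone sketch; rebase_symlink() is an illustrative name:

/* Illustrative model of the share-root-relative rewrite, not kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *rebase_symlink(const char *full_path, const char *target, char sep)
{
	size_t tlen = strlen(target) + 1;
	int levels = 0, i;
	char *out;

	for (i = 1; full_path[i]; i++)	/* i = 1 skips the leading sep */
		if (full_path[i] == sep)
			levels++;

	out = malloc((size_t)levels * 3 + tlen);
	if (!out)
		return NULL;
	for (i = 0; i < levels; i++)
		memcpy(out + i * 3, "../", 3);
	memcpy(out + levels * 3, target + 1, tlen - 1);	/* skip leading sep */
	return out;
}

int main(void)
{
	/* a symlink at /dir1/dir2/link pointing at /dir3/file becomes
	 * ../../dir3/file relative to the symlink's own directory */
	char *s = rebase_symlink("/dir1/dir2/link", "/dir3/file", '/');

	printf("%s\n", s ? s : "(alloc failed)");
	free(s);
	return 0;
}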
+diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
+index 158e7b7aae646c..ff05b0e75c9284 100644
+--- a/fs/smb/client/reparse.h
++++ b/fs/smb/client/reparse.h
+@@ -12,6 +12,8 @@
+ #include "fs_context.h"
+ #include "cifsglob.h"
+
++#define REPARSE_SYM_PATH_MAX 4060
++
+ /*
+ * Used only by cifs.ko to ignore reparse points from files when client or
+ * server doesn't support FSCTL_GET_REPARSE_POINT.
+@@ -115,7 +117,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ int smb2_mknod_reparse(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, umode_t mode, dev_t dev);
+-int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, struct kvec *rsp_iov,
++int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
++ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data);
+
+ #endif /* _CIFS_REPARSE_H */
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index 8c03250d85ae0c..c252447918d678 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -994,17 +994,17 @@ static int cifs_query_symlink(const unsigned int xid,
+ }
+
+ static int cifs_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data)
+ {
+ struct reparse_data_buffer *buf;
+ TRANSACT_IOCTL_RSP *io = rsp_iov->iov_base;
+- bool unicode = !!(io->hdr.Flags2 & SMBFLG2_UNICODE);
+ u32 plen = le16_to_cpu(io->ByteCount);
+
+ buf = (struct reparse_data_buffer *)((__u8 *)&io->hdr.Protocol +
+ le32_to_cpu(io->DataOffset));
+- return parse_reparse_point(buf, plen, cifs_sb, unicode, data);
++ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+ static bool
+diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
+index c23478ab1cf851..dc52995f559105 100644
+--- a/fs/smb/client/smb2file.c
++++ b/fs/smb/client/smb2file.c
+@@ -63,12 +63,12 @@ static struct smb2_symlink_err_rsp *symlink_data(const struct kvec *iov)
+ return sym;
+ }
+
+-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path)
++int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov,
++ const char *full_path, char **path)
+ {
+ struct smb2_symlink_err_rsp *sym;
+ unsigned int sub_offs, sub_len;
+ unsigned int print_offs, print_len;
+- char *s;
+
+ if (!cifs_sb || !iov || !iov->iov_base || !iov->iov_len || !path)
+ return -EINVAL;
+@@ -86,15 +86,13 @@ int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec
+ iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + print_offs + print_len)
+ return -EINVAL;
+
+- s = cifs_strndup_from_utf16((char *)sym->PathBuffer + sub_offs, sub_len, true,
+- cifs_sb->local_nls);
+- if (!s)
+- return -ENOMEM;
+- convert_delimiter(s, '/');
+- cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, s);
+-
+- *path = s;
+- return 0;
++ return smb2_parse_native_symlink(path,
++ (char *)sym->PathBuffer + sub_offs,
++ sub_len,
++ true,
++ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
++ full_path,
++ cifs_sb);
+ }
+
+ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf)
+@@ -126,6 +124,7 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
+ goto out;
+ if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
+ rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov,
++ oparms->path,
+ &data->symlink_target);
+ if (!rc) {
+ memset(smb2_data, 0, sizeof(*smb2_data));
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index cdb0e028e73c46..9a28a30ec1a344 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -828,6 +828,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+
+ static int parse_create_response(struct cifs_open_info_data *data,
+ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ const struct kvec *iov)
+ {
+ struct smb2_create_rsp *rsp = iov->iov_base;
+@@ -841,6 +842,7 @@ static int parse_create_response(struct cifs_open_info_data *data,
+ break;
+ case STATUS_STOPPED_ON_SYMLINK:
+ rc = smb2_parse_symlink_response(cifs_sb, iov,
++ full_path,
+ &data->symlink_target);
+ if (rc)
+ return rc;
+@@ -930,14 +932,14 @@ int smb2_query_path_info(const unsigned int xid,
+
+ switch (rc) {
+ case 0:
+- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
+ break;
+ case -EOPNOTSUPP:
+ /*
+ * BB TODO: When support for special files is added to Samba
+ * re-verify this path.
+ */
+- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
+ if (rc || !data->reparse_point)
+ goto out;
+
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index b9e332443b0d90..295aa74b4c6948 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4078,7 +4078,7 @@ map_oplock_to_lease(u8 oplock)
+ if (oplock == SMB2_OPLOCK_LEVEL_EXCLUSIVE)
+ return SMB2_LEASE_WRITE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE;
+ else if (oplock == SMB2_OPLOCK_LEVEL_II)
+- return SMB2_LEASE_READ_CACHING_LE;
++ return SMB2_LEASE_READ_CACHING_LE | SMB2_LEASE_HANDLE_CACHING_LE;
+ else if (oplock == SMB2_OPLOCK_LEVEL_BATCH)
+ return SMB2_LEASE_HANDLE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE |
+ SMB2_LEASE_WRITE_CACHING_LE;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 194a4262d57a09..30565f7ef308f8 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1230,7 +1230,9 @@ SMB2_negotiate(const unsigned int xid,
+ * SMB3.0 supports only 1 cipher and doesn't have an encryption neg context
+ * Set the cipher type manually.
+ */
+- if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
++ if ((server->dialect == SMB30_PROT_ID ||
++ server->dialect == SMB302_PROT_ID) &&
++ (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+ server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
+
+ security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 5e0855fefcfe66..aa01ae234732a1 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -113,7 +113,14 @@ extern int smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ const unsigned char *path, char *pbuf,
+ unsigned int *pbytes_read);
+-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path);
++int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
++ bool unicode, bool relative,
++ const char *full_path,
++ struct cifs_sb_info *cifs_sb);
++int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb,
++ const struct kvec *iov,
++ const char *full_path,
++ char **path);
+ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
+ void *buf);
+ extern int smb2_unlock_range(struct cifsFileInfo *cfile,
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 8e9964001e2aed..f3a261d0102d49 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -44,6 +44,8 @@
+ EM(netfs_trace_tcon_ref_free_ipc, "FRE Ipc ") \
+ EM(netfs_trace_tcon_ref_free_ipc_fail, "FRE Ipc-F ") \
+ EM(netfs_trace_tcon_ref_free_reconnect_server, "FRE Reconn") \
++ EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \
++ EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \
+ EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \
+ EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \
+ EM(netfs_trace_tcon_ref_get_find, "GET Find ") \
+@@ -52,6 +54,7 @@
+ EM(netfs_trace_tcon_ref_new, "NEW ") \
+ EM(netfs_trace_tcon_ref_new_ipc, "NEW Ipc ") \
+ EM(netfs_trace_tcon_ref_new_reconnect_server, "NEW Reconn") \
++ EM(netfs_trace_tcon_ref_put_cached_close, "PUT Ch-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
+ EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index 6223908f9c5642..af67f2badbed04 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -276,8 +276,12 @@ static void handle_ksmbd_work(struct work_struct *wk)
+ * disconnection. waitqueue_active is safe because it
+ * uses atomic operation for condition.
+ */
++ atomic_inc(&conn->refcnt);
+ if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+ wake_up(&conn->r_count_q);
++
++ if (atomic_dec_and_test(&conn->refcnt))
++ kfree(conn);
+ }
+
+ /**
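
The ksmbd hunk above closes a use-after-free window: the worker pins conn with refcnt before dropping r_count (whose final drop can wake a teardown thread), then frees conn itself only if it turned out to hold the last reference. A rough userspace model of that ordering, with illustrative names:

/* Sketch of the pin-before-wake pattern; not the ksmbd data structures. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct conn {
	atomic_int refcnt;
	atomic_int r_count;
};

static void finish_work(struct conn *conn)
{
	atomic_fetch_add(&conn->refcnt, 1);	/* pin conn across the wake */
	if (atomic_fetch_sub(&conn->r_count, 1) == 1)
		puts("wake up waiters");	/* conn still valid here */

	if (atomic_fetch_sub(&conn->refcnt, 1) == 1)
		free(conn);			/* we held the last reference */
}

int main(void)
{
	struct conn *c = calloc(1, sizeof(*c));

	if (!c)
		return 1;
	atomic_store(&c->refcnt, 0);	/* teardown already dropped its ref */
	atomic_store(&c->r_count, 1);
	finish_work(c);			/* frees c */
	return 0;
}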
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index 291583005dd123..245a10cc1eeb4d 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -773,10 +773,10 @@ static void init_constants_master(struct ubifs_info *c)
+ * necessary to report something for the 'statfs()' call.
+ *
+ * Subtract the LEB reserved for GC, the LEB which is reserved for
+- * deletions, minimum LEBs for the index, and assume only one journal
+- * head is available.
++ * deletions, minimum LEBs for the index, and the LEBs which are
++ * reserved for the journal heads.
+ */
+- tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt + 1;
++ tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt;
+ tmp64 *= (long long)c->leb_size - c->leb_overhead;
+ tmp64 = ubifs_reported_space(c, tmp64);
+ c->block_cnt = tmp64 >> UBIFS_BLOCK_SHIFT;
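
With the corrected formula, every journal head costs one reserved LEB, where the old expression handed back jhead_cnt - 1 of them. A worked example with made-up geometry (the constants below are illustrative, not real UBIFS values):

/* Standalone arithmetic demo of the statfs estimate above. */
#include <stdio.h>

int main(void)
{
	long long main_lebs = 1000, min_index_lebs = 10, jhead_cnt = 3;
	long long leb_size = 126976, leb_overhead = 2048;

	/* 1 LEB for GC, 1 for deletions, the index, one per journal head */
	long long usable_lebs = main_lebs - 1 - 1 - min_index_lebs - jhead_cnt;
	long long bytes = usable_lebs * (leb_size - leb_overhead);

	printf("%lld usable LEBs -> %lld reportable bytes\n",
	       usable_lebs, bytes);
	return 0;
}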
+diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c
+index a55e04822d16e9..7c43e0ccf6d47d 100644
+--- a/fs/ubifs/tnc_commit.c
++++ b/fs/ubifs/tnc_commit.c
+@@ -657,6 +657,8 @@ static int get_znodes_to_commit(struct ubifs_info *c)
+ znode->alt = 0;
+ cnext = find_next_dirty(znode);
+ if (!cnext) {
++ ubifs_assert(c, !znode->parent);
++ znode->cparent = NULL;
+ znode->cnext = c->cnext;
+ break;
+ }
+diff --git a/fs/unicode/utf8-core.c b/fs/unicode/utf8-core.c
+index 8395066341a437..0400824ef4936e 100644
+--- a/fs/unicode/utf8-core.c
++++ b/fs/unicode/utf8-core.c
+@@ -198,7 +198,7 @@ struct unicode_map *utf8_load(unsigned int version)
+ return um;
+
+ out_symbol_put:
+- symbol_put(um->tables);
++ symbol_put(utf8_data_table);
+ out_free_um:
+ kfree(um);
+ return ERR_PTR(-EINVAL);
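
The one-line utf8_load() fix hinges on symbol_put() being a macro that stringifies its argument to look the symbol up by name: it must be given the exported symbol itself, not a struct member that happens to hold the same pointer. A tiny model of the stringify behaviour; the symbol_put() below is a stand-in, not the kernel macro:

/* Illustration of lookup-by-stringified-name; not kernel code. */
#include <stdio.h>

#define __stringify_1(x) #x
#define __stringify(x)   __stringify_1(x)
#define symbol_put(x)    puts("putting symbol: " __stringify(x))

int main(void)
{
	void *tables = (void *)0x1234;

	symbol_put(tables);		/* looks up "tables" - wrong name */
	symbol_put(utf8_data_table);	/* looks up "utf8_data_table" */
	(void)tables;
	return 0;
}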
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 1ae44793132a8d..38d710f620efc1 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -349,9 +349,9 @@
+ *(.data..decrypted) \
+ *(.ref.data) \
+ *(.data..shared_aligned) /* percpu related */ \
+- *(.data.unlikely) \
++ *(.data..unlikely) \
+ __start_once = .; \
+- *(.data.once) \
++ *(.data..once) \
+ __end_once = .; \
+ STRUCT_ALIGN(); \
+ *(__tracepoints) \
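
The single-dot to double-dot renames here (and in the headers further down) keep kernel-reserved sections out of the namespace the compiler uses: with -fdata-sections, a C variable named once would itself be emitted into a section called ".data.once" and be swept up by the old wildcard, while no C identifier can produce the ".." form. A compilable illustration of placing a flag in the reserved-style section:

/* Demo of an explicitly named data section; builds with gcc or clang. */
#include <stdio.h>

static int flag __attribute__((__section__(".data..once")));

int main(void)
{
	if (!flag) {
		flag = 1;
		puts("ran once");
	}
	return 0;
}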
+diff --git a/include/kunit/skbuff.h b/include/kunit/skbuff.h
+index 44d12370939a90..345e1e8f031235 100644
+--- a/include/kunit/skbuff.h
++++ b/include/kunit/skbuff.h
+@@ -29,7 +29,7 @@ static void kunit_action_kfree_skb(void *p)
+ static inline struct sk_buff *kunit_zalloc_skb(struct kunit *test, int len,
+ gfp_t gfp)
+ {
+- struct sk_buff *res = alloc_skb(len, GFP_KERNEL);
++ struct sk_buff *res = alloc_skb(len, gfp);
+
+ if (!res || skb_pad(res, len))
+ return NULL;
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 8d304b1d16b15a..a2d27a4d7b6c7d 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -928,6 +928,8 @@ void blk_freeze_queue_start(struct request_queue *q);
+ void blk_mq_freeze_queue_wait(struct request_queue *q);
+ int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
+ unsigned long timeout);
++void blk_mq_unfreeze_queue_non_owner(struct request_queue *q);
++void blk_freeze_queue_start_non_owner(struct request_queue *q);
+
+ void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
+ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index b7664d593486a8..28a19ab37c1f49 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -25,6 +25,7 @@
+ #include <linux/uuid.h>
+ #include <linux/xarray.h>
+ #include <linux/file.h>
++#include <linux/lockdep.h>
+
+ struct module;
+ struct request_queue;
+@@ -471,6 +472,11 @@ struct request_queue {
+ struct xarray hctx_table;
+
+ struct percpu_ref q_usage_counter;
++ struct lock_class_key io_lock_cls_key;
++ struct lockdep_map io_lockdep_map;
++
++ struct lock_class_key q_lock_cls_key;
++ struct lockdep_map q_lockdep_map;
+
+ struct request *last_merge;
+
+@@ -566,6 +572,10 @@ struct request_queue {
+ struct throtl_data *td;
+ #endif
+ struct rcu_head rcu_head;
++#ifdef CONFIG_LOCKDEP
++ struct task_struct *mq_freeze_owner;
++ int mq_freeze_owner_depth;
++#endif
+ wait_queue_head_t mq_freeze_wq;
+ /*
+ * Protect concurrent access to q_usage_counter by
+@@ -1187,7 +1197,8 @@ static inline unsigned int queue_max_segment_size(const struct request_queue *q)
+ return q->limits.max_segment_size;
+ }
+
+-static inline unsigned int queue_limits_max_zone_append_sectors(struct queue_limits *l)
++static inline unsigned int
++queue_limits_max_zone_append_sectors(const struct queue_limits *l)
+ {
+ unsigned int max_sectors = min(l->chunk_sectors, l->max_hw_sectors);
+
+@@ -1248,7 +1259,7 @@ static inline unsigned int queue_io_min(const struct request_queue *q)
+ return q->limits.io_min;
+ }
+
+-static inline int bdev_io_min(struct block_device *bdev)
++static inline unsigned int bdev_io_min(struct block_device *bdev)
+ {
+ return queue_io_min(bdev_get_queue(bdev));
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index eb1d3a2fe3339b..f37114ad86f931 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1365,7 +1365,8 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
+ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+ struct bpf_prog *to);
+ /* Called only from JIT-enabled code, so there's no need for stubs. */
+-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym);
++void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym);
++void bpf_image_ksym_add(struct bpf_ksym *ksym);
+ void bpf_image_ksym_del(struct bpf_ksym *ksym);
+ void bpf_ksym_add(struct bpf_ksym *ksym);
+ void bpf_ksym_del(struct bpf_ksym *ksym);
+@@ -3436,4 +3437,10 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
+ return prog->aux->func_idx != 0;
+ }
+
++static inline bool bpf_prog_is_raw_tp(const struct bpf_prog *prog)
++{
++ return prog->type == BPF_PROG_TYPE_TRACING &&
++ prog->expected_attach_type == BPF_TRACE_RAW_TP;
++}
++
+ #endif /* _LINUX_BPF_H */
+diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
+index d9e613803df15a..b3200ccb96186a 100644
+--- a/include/linux/cleanup.h
++++ b/include/linux/cleanup.h
+@@ -154,7 +154,7 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ #define DEFINE_GUARD(_name, _type, _lock, _unlock) \
+ DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \
+ static inline void * class_##_name##_lock_ptr(class_##_name##_t *_T) \
+- { return *_T; }
++ { return (void *)(__force unsigned long)*_T; }
+
+ #define DEFINE_GUARD_COND(_name, _ext, _condlock) \
+ EXTEND_CLASS(_name, _ext, \
+@@ -211,7 +211,7 @@ static inline void class_##_name##_destructor(class_##_name##_t *_T) \
+ \
+ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T) \
+ { \
+- return _T->lock; \
++ return (void *)(__force unsigned long)_T->lock; \
+ }
+
+
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 32284cd26d52a7..c16d4199bf9231 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -94,19 +94,6 @@
+ # define __copy(symbol)
+ #endif
+
+-/*
+- * Optional: only supported since gcc >= 15
+- * Optional: only supported since clang >= 18
+- *
+- * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
+- * clang: https://github.com/llvm/llvm-project/pull/76348
+- */
+-#if __has_attribute(__counted_by__)
+-# define __counted_by(member) __attribute__((__counted_by__(member)))
+-#else
+-# define __counted_by(member)
+-#endif
+-
+ /*
+ * Optional: not supported by gcc
+ * Optional: only supported since clang >= 14.0
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index f14c275950b5bc..a9cfb7f6fb4b66 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -323,6 +323,25 @@ struct ftrace_likely_data {
+ #define __no_sanitize_or_inline __always_inline
+ #endif
+
++/*
++ * Optional: only supported since gcc >= 15
++ * Optional: only supported since clang >= 18
++ *
++ * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
++ * clang: https://github.com/llvm/llvm-project/pull/76348
++ *
++ * __bdos on clang < 19.1.2 can erroneously return 0:
++ * https://github.com/llvm/llvm-project/pull/110497
++ *
++ * __bdos on clang < 19.1.3 can be off by 4:
++ * https://github.com/llvm/llvm-project/pull/112636
++ */
++#ifdef CONFIG_CC_HAS_COUNTED_BY
++# define __counted_by(member) __attribute__((__counted_by__(member)))
++#else
++# define __counted_by(member)
++#endif
++
+ /*
+ * Apply __counted_by() when the Endianness matches to increase test coverage.
+ */
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index c68e37201a12ad..3b2ad444c002ee 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -19,15 +19,15 @@
+ #define F2FS_BLKSIZE_BITS PAGE_SHIFT /* bits for F2FS_BLKSIZE */
+ #define F2FS_MAX_EXTENSION 64 /* # of extension entries */
+ #define F2FS_EXTENSION_LEN 8 /* max size of extension */
+-#define F2FS_BLK_ALIGN(x) (((x) + F2FS_BLKSIZE - 1) >> F2FS_BLKSIZE_BITS)
+
+ #define NULL_ADDR ((block_t)0) /* used as block_t addresses */
+ #define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+ #define COMPRESS_ADDR ((block_t)-2) /* used as compressed data flag */
+
+-#define F2FS_BYTES_TO_BLK(bytes) ((bytes) >> F2FS_BLKSIZE_BITS)
+-#define F2FS_BLK_TO_BYTES(blk) ((blk) << F2FS_BLKSIZE_BITS)
++#define F2FS_BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> F2FS_BLKSIZE_BITS)
++#define F2FS_BLK_TO_BYTES(blk) ((unsigned long long)(blk) << F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_END_BYTES(blk) (F2FS_BLK_TO_BYTES(blk + 1) - 1)
++#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1))
+
+ /* 0, 1(node nid), 2(meta nid) are reserved node id */
+ #define F2FS_RESERVED_NODE_NUM 3
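
Why the f2fs cast matters: with 4 KiB blocks, any block number at or above 2^20 pushes the shifted byte offset past 32 bits, and an unsigned int shift silently wraps. Forcing unsigned long long keeps the offset exact. Standalone demo, not f2fs code:

/* Narrow vs. wide block-to-byte conversion. */
#include <stdio.h>

#define BLKSIZE_BITS 12
#define BLK_TO_BYTES_NARROW(blk)  ((blk) << BLKSIZE_BITS)
#define BLK_TO_BYTES_WIDE(blk)    ((unsigned long long)(blk) << BLKSIZE_BITS)

int main(void)
{
	unsigned int blk = 0x40000000;	/* 1 Gi blocks = 4 TiB of data */

	/* the narrow form wraps to 0; the wide form is correct */
	printf("narrow: %llu\n", (unsigned long long)BLK_TO_BYTES_NARROW(blk));
	printf("wide:   %llu\n", BLK_TO_BYTES_WIDE(blk));
	return 0;
}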
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 6ca11e241a2495..232b56416ccd92 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3663,6 +3663,6 @@ static inline bool vfs_empty_path(int dfd, const char __user *path)
+ return !c;
+ }
+
+-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos);
++int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter);
+
+ #endif /* _LINUX_FS_H */
+diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h
+index 9d7754ad5e9b08..43ad280935e360 100644
+--- a/include/linux/hisi_acc_qm.h
++++ b/include/linux/hisi_acc_qm.h
+@@ -229,6 +229,12 @@ struct hisi_qm_status {
+
+ struct hisi_qm;
+
++enum acc_err_result {
++ ACC_ERR_NONE,
++ ACC_ERR_NEED_RESET,
++ ACC_ERR_RECOVERED,
++};
++
+ struct hisi_qm_err_info {
+ char *acpi_rst;
+ u32 msi_wr_port;
+@@ -257,9 +263,9 @@ struct hisi_qm_err_ini {
+ void (*close_axi_master_ooo)(struct hisi_qm *qm);
+ void (*open_sva_prefetch)(struct hisi_qm *qm);
+ void (*close_sva_prefetch)(struct hisi_qm *qm);
+- void (*log_dev_hw_err)(struct hisi_qm *qm, u32 err_sts);
+ void (*show_last_dfx_regs)(struct hisi_qm *qm);
+ void (*err_info_init)(struct hisi_qm *qm);
++ enum acc_err_result (*get_err_result)(struct hisi_qm *qm);
+ };
+
+ struct hisi_qm_cap_info {
+diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
+index f9a81761bfceda..b1ecfc3cd5bcc0 100644
+--- a/include/linux/io-pgtable.h
++++ b/include/linux/io-pgtable.h
+@@ -171,6 +171,10 @@ struct io_pgtable_cfg {
+ u64 ttbr[4];
+ u32 n_ttbrs;
+ } apple_dart_cfg;
++
++ struct {
++ int nid;
++ } amd;
+ };
+ };
+
+diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
+index de6105f68fecdf..e432b6a12a32f9 100644
+--- a/include/linux/irqdomain.h
++++ b/include/linux/irqdomain.h
+@@ -291,7 +291,12 @@ struct irq_domain_chip_generic_info;
+ * @hwirq_max: Maximum number of interrupts supported by controller
+ * @direct_max: Maximum value of direct maps;
+ * Use ~0 for no limit; 0 for no direct mapping
++ * @hwirq_base: The first hardware interrupt number (legacy domains only)
++ * @virq_base: The first Linux interrupt number for legacy domains to
++ * immediately associate the interrupts after domain creation
+ * @bus_token: Domain bus token
++ * @name_suffix: Optional name suffix to avoid collisions when multiple
++ * domains are added using the same fwnode
+ * @ops: Domain operation callbacks
+ * @host_data: Controller private data pointer
+ * @dgc_info: Generic chip information structure pointer used to
+@@ -307,7 +312,10 @@ struct irq_domain_info {
+ unsigned int size;
+ irq_hw_number_t hwirq_max;
+ int direct_max;
++ unsigned int hwirq_base;
++ unsigned int virq_base;
+ enum irq_domain_bus_token bus_token;
++ const char *name_suffix;
+ const struct irq_domain_ops *ops;
+ void *host_data;
+ #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h
+index d9f1435a5a13cf..26c7c0a4c7f8ce 100644
+--- a/include/linux/jiffies.h
++++ b/include/linux/jiffies.h
+@@ -502,7 +502,7 @@ static inline unsigned long _msecs_to_jiffies(const unsigned int m)
+ * - all other values are converted to jiffies by either multiplying
+ * the input value by a factor or dividing it with a factor and
+ * handling any 32-bit overflows.
+- * for the details see __msecs_to_jiffies()
++ * for the details see _msecs_to_jiffies()
+ *
+ * msecs_to_jiffies() checks for the passed in value being a constant
+ * via __builtin_constant_p() allowing gcc to eliminate most of the
+diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h
+index 564868bdce898b..fd743d4c4b4bdc 100644
+--- a/include/linux/kfifo.h
++++ b/include/linux/kfifo.h
+@@ -37,7 +37,6 @@
+ */
+
+ #include <linux/array_size.h>
+-#include <linux/dma-mapping.h>
+ #include <linux/spinlock.h>
+ #include <linux/stddef.h>
+ #include <linux/types.h>
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 0d5125a3e31a9d..4173a0dc9c661d 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -2370,12 +2370,6 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ }
+ #endif /* CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE */
+
+-typedef int (*kvm_vm_thread_fn_t)(struct kvm *kvm, uintptr_t data);
+-
+-int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
+- uintptr_t data, const char *name,
+- struct task_struct **thread_ptr);
+-
+ #ifdef CONFIG_KVM_XFER_TO_GUEST_WORK
+ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
+ {
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index 217f7abf2cbfab..67964dc4db952e 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -173,7 +173,7 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
+ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_subclass(lock, sub) \
+- lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
++ lockdep_init_map_type(&(lock)->dep_map, (lock)->dep_map.name, (lock)->dep_map.key, sub,\
+ (lock)->dep_map.wait_type_inner, \
+ (lock)->dep_map.wait_type_outer, \
+ (lock)->dep_map.lock_type)
+diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
+index 39a7714605a796..d7cb1e5ecbda9d 100644
+--- a/include/linux/mmdebug.h
++++ b/include/linux/mmdebug.h
+@@ -46,7 +46,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ } \
+ } while (0)
+ #define VM_WARN_ON_ONCE_PAGE(cond, page) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+@@ -66,7 +66,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ unlikely(__ret_warn); \
+ })
+ #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+@@ -77,7 +77,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ unlikely(__ret_warn_once); \
+ })
+ #define VM_WARN_ON_ONCE_MM(cond, mm) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
+index bd19c4b91e3120..3ddf205b7e2c38 100644
+--- a/include/linux/netpoll.h
++++ b/include/linux/netpoll.h
+@@ -71,7 +71,7 @@ static inline void *netpoll_poll_lock(struct napi_struct *napi)
+ {
+ struct net_device *dev = napi->dev;
+
+- if (dev && dev->npinfo) {
++ if (dev && rcu_access_pointer(dev->npinfo)) {
+ int owner = smp_processor_id();
+
+ while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
+diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
+index d69ad5bb1eb1e6..b8d6c0c208760a 100644
+--- a/include/linux/of_fdt.h
++++ b/include/linux/of_fdt.h
+@@ -31,6 +31,7 @@ extern void *of_fdt_unflatten_tree(const unsigned long *blob,
+ extern int __initdata dt_root_addr_cells;
+ extern int __initdata dt_root_size_cells;
+ extern void *initial_boot_params;
++extern phys_addr_t initial_boot_params_pa;
+
+ extern char __dtb_start[];
+ extern char __dtb_end[];
+@@ -70,8 +71,8 @@ extern u64 dt_mem_next_cell(int s, const __be32 **cellp);
+ /* Early flat tree scan hooks */
+ extern int early_init_dt_scan_root(void);
+
+-extern bool early_init_dt_scan(void *params);
+-extern bool early_init_dt_verify(void *params);
++extern bool early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys);
++extern bool early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys);
+ extern void early_init_dt_scan_nodes(void);
+
+ extern const char *of_flat_dt_get_machine_name(void);
+diff --git a/include/linux/once.h b/include/linux/once.h
+index bc714d414448a7..30346fcdc7995d 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -46,7 +46,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
+ #define DO_ONCE(func, ...) \
+ ({ \
+ bool ___ret = false; \
+- static bool __section(".data.once") ___done = false; \
++ static bool __section(".data..once") ___done = false; \
+ static DEFINE_STATIC_KEY_TRUE(___once_key); \
+ if (static_branch_unlikely(&___once_key)) { \
+ unsigned long ___flags; \
+@@ -64,7 +64,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
+ #define DO_ONCE_SLEEPABLE(func, ...) \
+ ({ \
+ bool ___ret = false; \
+- static bool __section(".data.once") ___done = false; \
++ static bool __section(".data..once") ___done = false; \
+ static DEFINE_STATIC_KEY_TRUE(___once_key); \
+ if (static_branch_unlikely(&___once_key)) { \
+ ___ret = __do_once_sleepable_start(&___done); \
+diff --git a/include/linux/once_lite.h b/include/linux/once_lite.h
+index b7bce4983638f8..27de7bc32a0610 100644
+--- a/include/linux/once_lite.h
++++ b/include/linux/once_lite.h
+@@ -12,7 +12,7 @@
+
+ #define __ONCE_LITE_IF(condition) \
+ ({ \
+- static bool __section(".data.once") __already_done; \
++ static bool __section(".data..once") __already_done; \
+ bool __ret_cond = !!(condition); \
+ bool __ret_once = false; \
+ \
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 13f6f00aecf9c9..1986d017b67fb4 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -390,7 +390,7 @@ static inline int debug_lockdep_rcu_enabled(void)
+ */
+ #define RCU_LOCKDEP_WARN(c, s) \
+ do { \
+- static bool __section(".data.unlikely") __warned; \
++ static bool __section(".data..unlikely") __warned; \
+ if (debug_lockdep_rcu_enabled() && (c) && \
+ debug_lockdep_rcu_enabled() && !__warned) { \
+ __warned = true; \
+diff --git a/include/linux/regmap.h b/include/linux/regmap.h
+index 122e38161acb8d..f9ccad32fc5cba 100644
+--- a/include/linux/regmap.h
++++ b/include/linux/regmap.h
+@@ -1521,6 +1521,9 @@ struct regmap_irq_chip_data;
+ * struct regmap_irq_chip - Description of a generic regmap irq_chip.
+ *
+ * @name: Descriptive name for IRQ controller.
++ * @domain_suffix: Name suffix to be appended to the end of the IRQ domain
++ * name. Needed when multiple regmap-IRQ controllers are created from the
++ * same device.
+ *
+ * @main_status: Base main status register address. For chips which have
+ * interrupts arranged in separate sub-irq blocks with own IRQ
+@@ -1606,6 +1609,7 @@ struct regmap_irq_chip_data;
+ */
+ struct regmap_irq_chip {
+ const char *name;
++ const char *domain_suffix;
+
+ unsigned int main_status;
+ unsigned int num_main_status_bits;
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index d90d8ee29d811e..e1c11db8807951 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -636,6 +636,23 @@ static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *
+ return READ_ONCE(s->seqcount.sequence);
+ }
+
++/**
++ * read_seqcount_latch() - pick even/odd latch data copy
++ * @s: Pointer to seqcount_latch_t
++ *
++ * See write_seqcount_latch() for details and a full reader/writer usage
++ * example.
++ *
++ * Return: sequence counter raw value. Use the lowest bit as an index for
++ * picking which data copy to read. The full counter must then be checked
++ * with read_seqcount_latch_retry().
++ */
++static __always_inline unsigned read_seqcount_latch(const seqcount_latch_t *s)
++{
++ kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
++ return raw_read_seqcount_latch(s);
++}
++
+ /**
+ * raw_read_seqcount_latch_retry() - end a seqcount_latch_t read section
+ * @s: Pointer to seqcount_latch_t
+@@ -650,9 +667,34 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ return unlikely(READ_ONCE(s->seqcount.sequence) != start);
+ }
+
++/**
++ * read_seqcount_latch_retry() - end a seqcount_latch_t read section
++ * @s: Pointer to seqcount_latch_t
++ * @start: count, from read_seqcount_latch()
++ *
++ * Return: true if a read section retry is required, else false
++ */
++static __always_inline int
++read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
++{
++ kcsan_atomic_next(0);
++ return raw_read_seqcount_latch_retry(s, start);
++}
++
+ /**
+ * raw_write_seqcount_latch() - redirect latch readers to even/odd copy
+ * @s: Pointer to seqcount_latch_t
++ */
++static __always_inline void raw_write_seqcount_latch(seqcount_latch_t *s)
++{
++ smp_wmb(); /* prior stores before incrementing "sequence" */
++ s->seqcount.sequence++;
++ smp_wmb(); /* increment "sequence" before following stores */
++}
++
++/**
++ * write_seqcount_latch_begin() - redirect latch readers to odd copy
++ * @s: Pointer to seqcount_latch_t
+ *
+ * The latch technique is a multiversion concurrency control method that allows
+ * queries during non-atomic modifications. If you can guarantee queries never
+@@ -680,17 +722,11 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ *
+ * void latch_modify(struct latch_struct *latch, ...)
+ * {
+- * smp_wmb(); // Ensure that the last data[1] update is visible
+- * latch->seq.sequence++;
+- * smp_wmb(); // Ensure that the seqcount update is visible
+- *
++ * write_seqcount_latch_begin(&latch->seq);
+ * modify(latch->data[0], ...);
+- *
+- * smp_wmb(); // Ensure that the data[0] update is visible
+- * latch->seq.sequence++;
+- * smp_wmb(); // Ensure that the seqcount update is visible
+- *
++ * write_seqcount_latch(&latch->seq);
+ * modify(latch->data[1], ...);
++ * write_seqcount_latch_end(&latch->seq);
+ * }
+ *
+ * The query will have a form like::
+@@ -701,13 +737,13 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ * unsigned seq, idx;
+ *
+ * do {
+- * seq = raw_read_seqcount_latch(&latch->seq);
++ * seq = read_seqcount_latch(&latch->seq);
+ *
+ * idx = seq & 0x01;
+ * entry = data_query(latch->data[idx], ...);
+ *
+ * // This includes needed smp_rmb()
+- * } while (raw_read_seqcount_latch_retry(&latch->seq, seq));
++ * } while (read_seqcount_latch_retry(&latch->seq, seq));
+ *
+ * return entry;
+ * }
+@@ -731,11 +767,31 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ * When data is a dynamic data structure; one should use regular RCU
+ * patterns to manage the lifetimes of the objects within.
+ */
+-static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
++static __always_inline void write_seqcount_latch_begin(seqcount_latch_t *s)
+ {
+- smp_wmb(); /* prior stores before incrementing "sequence" */
+- s->seqcount.sequence++;
+- smp_wmb(); /* increment "sequence" before following stores */
++ kcsan_nestable_atomic_begin();
++ raw_write_seqcount_latch(s);
++}
++
++/**
++ * write_seqcount_latch() - redirect latch readers to even copy
++ * @s: Pointer to seqcount_latch_t
++ */
++static __always_inline void write_seqcount_latch(seqcount_latch_t *s)
++{
++ raw_write_seqcount_latch(s);
++}
++
++/**
++ * write_seqcount_latch_end() - end a seqcount_latch_t write section
++ * @s: Pointer to seqcount_latch_t
++ *
++ * Marks the end of a seqcount_latch_t writer section, after all copies of the
++ * latch-protected data have been updated.
++ */
++static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
++{
++ kcsan_nestable_atomic_end();
+ }
+
+ #define __SEQLOCK_UNLOCKED(lockname) \
+@@ -769,11 +825,7 @@ static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
+ */
+ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ {
+- unsigned ret = read_seqcount_begin(&sl->seqcount);
+-
+- kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
+- kcsan_flat_atomic_begin();
+- return ret;
++ return read_seqcount_begin(&sl->seqcount);
+ }
+
+ /**
+@@ -789,12 +841,6 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ */
+ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ {
+- /*
+- * Assume not nested: read_seqretry() may be called multiple times when
+- * completing read critical section.
+- */
+- kcsan_flat_atomic_end();
+-
+ return read_seqcount_retry(&sl->seqcount, start);
+ }
+
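
The new write_seqcount_latch_begin()/write_seqcount_latch()/write_seqcount_latch_end() triple wraps the raw barriers in KCSAN annotations without changing the protocol the kernel-doc example describes. A userspace model using C11 atomics, deliberately sequentially consistent and so stronger than the kernel's wmb/rmb placement; latch_write() and latch_read() are illustrative names:

/* Simplified latch model; ordering is conservative, not the kernel's. */
#include <stdatomic.h>
#include <stdio.h>

struct latch {
	atomic_uint seq;
	long data[2];
};

static void latch_write(struct latch *l, long v)
{
	atomic_fetch_add(&l->seq, 1);	/* odd: readers pick data[1] */
	l->data[0] = v;
	atomic_fetch_add(&l->seq, 1);	/* even: readers pick data[0] */
	l->data[1] = v;
}

static long latch_read(struct latch *l)
{
	unsigned int seq;
	long v;

	do {
		seq = atomic_load(&l->seq);
		v = l->data[seq & 1];	/* low bit picks the stable copy */
	} while (atomic_load(&l->seq) != seq);
	return v;
}

int main(void)
{
	static struct latch l;		/* zero-initialized */

	latch_write(&l, 42);
	printf("%ld\n", latch_read(&l));
	return 0;
}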
+diff --git a/include/media/v4l2-dv-timings.h b/include/media/v4l2-dv-timings.h
+index 8fa963326bf6a2..c64096b5c78215 100644
+--- a/include/media/v4l2-dv-timings.h
++++ b/include/media/v4l2-dv-timings.h
+@@ -146,15 +146,18 @@ void v4l2_print_dv_timings(const char *dev_prefix, const char *prefix,
+ * @polarities: the horizontal and vertical polarities (same as struct
+ * v4l2_bt_timings polarities).
+ * @interlaced: if this flag is true, it indicates interlaced format
++ * @cap: the v4l2_dv_timings_cap capabilities.
+ * @fmt: the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid CVT format. If so, then it will return true, and fmt will be filled
+ * in with the found CVT timings.
+ */
+-bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
+- unsigned active_width, u32 polarities, bool interlaced,
+- struct v4l2_dv_timings *fmt);
++bool v4l2_detect_cvt(unsigned int frame_height, unsigned int hfreq,
++ unsigned int vsync, unsigned int active_width,
++ u32 polarities, bool interlaced,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *fmt);
+
+ /**
+ * v4l2_detect_gtf - detect if the given timings follow the GTF standard
+@@ -170,15 +173,18 @@ bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
+ * image height, so it has to be passed explicitly. Usually
+ * the native screen aspect ratio is used for this. If it
+ * is not filled in correctly, then 16:9 will be assumed.
++ * @cap: the v4l2_dv_timings_cap capabilities.
+ * @fmt: the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid GTF format. If so, then it will return true, and fmt will be filled
+ * in with the found GTF timings.
+ */
+-bool v4l2_detect_gtf(unsigned frame_height, unsigned hfreq, unsigned vsync,
+- u32 polarities, bool interlaced, struct v4l2_fract aspect,
+- struct v4l2_dv_timings *fmt);
++bool v4l2_detect_gtf(unsigned int frame_height, unsigned int hfreq,
++ unsigned int vsync, u32 polarities, bool interlaced,
++ struct v4l2_fract aspect,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *fmt);
+
+ /**
+ * v4l2_calc_aspect_ratio - calculate the aspect ratio based on bytes
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index d1d073089f384e..f2cd1ed20d6916 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1,7 +1,7 @@
+ /*
+ BlueZ - Bluetooth protocol stack for Linux
+ Copyright (C) 2000-2001 Qualcomm Incorporated
+- Copyright 2023 NXP
++ Copyright 2023-2024 NXP
+
+ Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+
+@@ -29,6 +29,7 @@
+ #define HCI_MAX_ACL_SIZE 1024
+ #define HCI_MAX_SCO_SIZE 255
+ #define HCI_MAX_ISO_SIZE 251
++#define HCI_MAX_ISO_BIS 31
+ #define HCI_MAX_EVENT_SIZE 260
+ #define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4)
+
+@@ -683,6 +684,7 @@ enum {
+ #define HCI_RSSI_INVALID 127
+
+ #define HCI_SYNC_HANDLE_INVALID 0xffff
++#define HCI_SID_INVALID 0xff
+
+ #define HCI_ROLE_MASTER 0x00
+ #define HCI_ROLE_SLAVE 0x01
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 88265d37aa72e3..4c185a08c3a3af 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -668,6 +668,7 @@ struct hci_conn {
+ __u8 adv_instance;
+ __u16 handle;
+ __u16 sync_handle;
++ __u8 sid;
+ __u16 state;
+ __u16 mtu;
+ __u8 mode;
+@@ -710,6 +711,9 @@ struct hci_conn {
+ __s8 tx_power;
+ __s8 max_tx_power;
+ struct bt_iso_qos iso_qos;
++ __u8 num_bis;
++ __u8 bis[HCI_MAX_ISO_BIS];
++
+ unsigned long flags;
+
+ enum conn_reasons conn_reason;
+@@ -945,8 +949,10 @@ enum {
+ HCI_CONN_PER_ADV,
+ HCI_CONN_BIG_CREATED,
+ HCI_CONN_CREATE_CIS,
++ HCI_CONN_CREATE_BIG_SYNC,
+ HCI_CONN_BIG_SYNC,
+ HCI_CONN_BIG_SYNC_FAILED,
++ HCI_CONN_CREATE_PA_SYNC,
+ HCI_CONN_PA_SYNC,
+ HCI_CONN_PA_SYNC_FAILED,
+ };
+@@ -1099,6 +1105,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ return NULL;
+ }
+
++static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
++ __u8 sid,
++ bdaddr_t *dst,
++ __u8 dst_type)
++{
++ struct hci_conn_hash *h = &hdev->conn_hash;
++ struct hci_conn *c;
++
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(c, &h->list, list) {
++ if (c->type != ISO_LINK || bacmp(&c->dst, dst) ||
++ c->dst_type != dst_type || c->sid != sid)
++ continue;
++
++ rcu_read_unlock();
++ return c;
++ }
++
++ rcu_read_unlock();
++
++ return NULL;
++}
++
+ static inline struct hci_conn *
+ hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev,
+ bdaddr_t *ba,
+@@ -1269,6 +1299,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ return NULL;
+ }
+
++static inline struct hci_conn *
++hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
++ __u8 handle, __u8 num_bis)
++{
++ struct hci_conn_hash *h = &hdev->conn_hash;
++ struct hci_conn *c;
++
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(c, &h->list, list) {
++ if (c->type != ISO_LINK)
++ continue;
++
++ if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) {
++ rcu_read_unlock();
++ return c;
++ }
++ }
++
++ rcu_read_unlock();
++
++ return NULL;
++}
++
+ static inline struct hci_conn *
+ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle, __u16 state)
+ {
+@@ -1328,6 +1382,13 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
+ if (c->type != ISO_LINK)
+ continue;
+
++ /* Ignore the listen hcon, we are looking
++ * for the child hcon that was created as
++ * a result of the PA sync established event.
++ */
++ if (c->state == BT_LISTEN)
++ continue;
++
+ if (c->sync_handle == sync_handle) {
+ rcu_read_unlock();
+ return c;
+@@ -1445,6 +1506,8 @@ bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
+ void hci_sco_setup(struct hci_conn *conn, __u8 status);
+ bool hci_iso_setup_path(struct hci_conn *conn);
+ int hci_le_create_cis_pending(struct hci_dev *hdev);
++int hci_pa_create_sync_pending(struct hci_dev *hdev);
++int hci_le_big_create_sync_pending(struct hci_dev *hdev);
+ int hci_conn_check_create_cis(struct hci_conn *conn);
+
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 192d72c8b46544..702653448d2fcf 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -6129,6 +6129,50 @@ void wiphy_delayed_work_cancel(struct wiphy *wiphy,
+ void wiphy_delayed_work_flush(struct wiphy *wiphy,
+ struct wiphy_delayed_work *dwork);
+
++/**
++ * wiphy_delayed_work_pending - Find out whether a wiphy delayable
++ * work item is currently pending.
++ *
++ * @wiphy: the wiphy, for debug purposes
++ * @dwork: the delayed work in question
++ *
++ * Return: true if timer is pending, false otherwise
++ *
++ * How wiphy_delayed_work_queue() works is by setting a timer which
++ * when it expires calls wiphy_work_queue() to queue the wiphy work.
++ * Because wiphy_delayed_work_queue() uses mod_timer(), if it is
++ * called twice and the second call happens before the first call
++ * deadline, the work will be rescheduled for the second deadline and
++ * won't run before that.
++ *
++ * wiphy_delayed_work_pending() can be used to detect if calling
++ * wiphy_delayed_work_queue() would start a new work schedule
++ * or delay a previous one. As seen below, it cannot be used to
++ * detect precisely whether the work has finished executing nor if it
++ * is currently executing.
++ *
++ * CPU0 CPU1
++ * wiphy_delayed_work_queue(wk)
++ * mod_timer(wk->timer)
++ * wiphy_delayed_work_pending(wk) -> true
++ *
++ * [...]
++ * expire_timers(wk->timer)
++ * detach_timer(wk->timer)
++ * wiphy_delayed_work_pending(wk) -> false
++ * wk->timer->function() |
++ * wiphy_work_queue(wk) | delayed work pending
++ * list_add_tail() | returns false but
++ * queue_work(cfg80211_wiphy_work) | wk->func() has not
++ * | been run yet
++ * [...] |
++ * cfg80211_wiphy_work() |
++ * wk->func() V
++ *
++ */
++bool wiphy_delayed_work_pending(struct wiphy *wiphy,
++ struct wiphy_delayed_work *dwork);
++
+ /**
+ * enum ieee80211_ap_reg_power - regulatory power for an Access Point
+ *
+diff --git a/include/net/ieee80211_radiotap.h b/include/net/ieee80211_radiotap.h
+index 91762faecc13d5..1458d3695005a8 100644
+--- a/include/net/ieee80211_radiotap.h
++++ b/include/net/ieee80211_radiotap.h
+@@ -24,25 +24,27 @@
+ * struct ieee80211_radiotap_header - base radiotap header
+ */
+ struct ieee80211_radiotap_header {
+- /**
+- * @it_version: radiotap version, always 0
+- */
+- uint8_t it_version;
+-
+- /**
+- * @it_pad: padding (or alignment)
+- */
+- uint8_t it_pad;
+-
+- /**
+- * @it_len: overall radiotap header length
+- */
+- __le16 it_len;
+-
+- /**
+- * @it_present: (first) present word
+- */
+- __le32 it_present;
++ __struct_group(ieee80211_radiotap_header_fixed, hdr, __packed,
++ /**
++ * @it_version: radiotap version, always 0
++ */
++ uint8_t it_version;
++
++ /**
++ * @it_pad: padding (or alignment)
++ */
++ uint8_t it_pad;
++
++ /**
++ * @it_len: overall radiotap header length
++ */
++ __le16 it_len;
++
++ /**
++ * @it_present: (first) present word
++ */
++ __le32 it_present;
++ );
+
+ /**
+ * @it_optional: all remaining presence bitmaps
+@@ -50,6 +52,9 @@ struct ieee80211_radiotap_header {
+ __le32 it_optional[];
+ } __packed;
+
++static_assert(offsetof(struct ieee80211_radiotap_header, it_optional) == sizeof(struct ieee80211_radiotap_header_fixed),
++ "struct member likely outside of __struct_group()");
++
+ /* version is always 0 */
+ #define PKTHDR_RADIOTAP_VERSION 0
+
+diff --git a/include/net/net_debug.h b/include/net/net_debug.h
+index 1e74684cbbdbcd..4a79204c8d306e 100644
+--- a/include/net/net_debug.h
++++ b/include/net/net_debug.h
+@@ -27,7 +27,7 @@ void netdev_info(const struct net_device *dev, const char *format, ...);
+
+ #define netdev_level_once(level, dev, fmt, ...) \
+ do { \
+- static bool __section(".data.once") __print_once; \
++ static bool __section(".data..once") __print_once; \
+ \
+ if (!__print_once) { \
+ __print_once = true; \
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 6c5712ae559d8f..5acdf98b3e7893 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -2948,6 +2948,14 @@ int rdma_user_mmap_entry_insert_range(struct ib_ucontext *ucontext,
+ size_t length, u32 min_pgoff,
+ u32 max_pgoff);
+
++#if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
++void rdma_user_mmap_disassociate(struct ib_device *device);
++#else
++static inline void rdma_user_mmap_disassociate(struct ib_device *device)
++{
++}
++#endif
++
+ static inline int
+ rdma_user_mmap_entry_insert_exact(struct ib_ucontext *ucontext,
+ struct rdma_user_mmap_entry *entry,
+diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
+index 3b687d20c9ed34..db7254d52d9355 100644
+--- a/include/uapi/linux/rtnetlink.h
++++ b/include/uapi/linux/rtnetlink.h
+@@ -174,7 +174,7 @@ enum {
+ #define RTM_GETLINKPROP RTM_GETLINKPROP
+
+ RTM_NEWVLAN = 112,
+-#define RTM_NEWNVLAN RTM_NEWVLAN
++#define RTM_NEWVLAN RTM_NEWVLAN
+ RTM_DELVLAN,
+ #define RTM_DELVLAN RTM_DELVLAN
+ RTM_GETVLAN,
+diff --git a/init/Kconfig b/init/Kconfig
+index 5783a0b8751726..c3f2ade834d143 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -109,6 +109,15 @@ config CC_HAS_ASM_INLINE
+ config CC_HAS_NO_PROFILE_FN_ATTR
+ def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror)
+
++config CC_HAS_COUNTED_BY
++ # TODO: when gcc 15 is released remove the build test and add
++ # a gcc version check
++ def_bool $(success,echo 'struct flex { int count; int array[] __attribute__((__counted_by__(count))); };' | $(CC) $(CLANG_FLAGS) -x c - -c -o /dev/null -Werror)
++ # clang needs to be at least 19.1.3 to avoid __bdos miscalculations
++ # https://github.com/llvm/llvm-project/pull/110497
++ # https://github.com/llvm/llvm-project/pull/112636
++ depends on !(CC_IS_CLANG && CLANG_VERSION < 190103)
++
+ config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
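
The CC_HAS_COUNTED_BY gate above feeds the __counted_by() definition moved into compiler_types.h earlier in this diff. A minimal, compiler-gated use of the attribute; when the toolchain supports it, __builtin_dynamic_object_size() can bound the flexible array from its count member:

/* Self-contained __counted_by demo; compiles even without support. */
#include <stdio.h>
#include <stdlib.h>

#if defined(__has_attribute)
# if __has_attribute(__counted_by__)
#  define __counted_by(m) __attribute__((__counted_by__(m)))
# endif
#endif
#ifndef __counted_by
# define __counted_by(m)	/* attribute unsupported: no-op */
#endif

struct flex {
	int count;
	int array[] __counted_by(count);
};

int main(void)
{
	struct flex *f = malloc(sizeof(*f) + 4 * sizeof(int));

	if (!f)
		return 1;
	f->count = 4;	/* must be set before the array is indexed */
	for (int i = 0; i < f->count; i++)
		f->array[i] = i;
	printf("%d\n", f->array[3]);
	free(f);
	return 0;
}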
+diff --git a/init/initramfs.c b/init/initramfs.c
+index 814241b648274f..04e0990f451cf9 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -359,6 +359,15 @@ static int __init do_name(void)
+ {
+ state = SkipIt;
+ next_state = Reset;
++
++ /* name_len > 0 && name_len <= PATH_MAX checked in do_header */
++ if (collected[name_len - 1] != '\0') {
++ pr_err("initramfs name without nulterm: %.*s\n",
++ (int)name_len, collected);
++ error("malformed archive");
++ return 1;
++ }
++
+ if (strcmp(collected, "TRAILER!!!") == 0) {
+ free_hash();
+ return 0;
+@@ -423,6 +432,12 @@ static int __init do_copy(void)
+
+ static int __init do_symlink(void)
+ {
++ if (collected[name_len - 1] != '\0') {
++ pr_err("initramfs symlink without nulterm: %.*s\n",
++ (int)name_len, collected);
++ error("malformed archive");
++ return 1;
++ }
+ collected[N_ALIGN(name_len) + body_len] = '\0';
+ clean_path(collected, 0);
+ init_symlink(collected + N_ALIGN(name_len), collected);
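
The two initramfs checks added above enforce the same invariant: a cpio name field of name_len bytes may only be treated as a C string if its last byte is NUL. A standalone model of the check; name_ok() is an illustrative name:

/* Model of the nul-termination check, not the initramfs parser. */
#include <stdio.h>
#include <string.h>

static int name_ok(const char *collected, size_t name_len)
{
	return name_len > 0 && collected[name_len - 1] == '\0';
}

int main(void)
{
	char good[] = "TRAILER!!!";		/* includes trailing NUL */
	char bad[4] = { 'e', 't', 'c', '/' };	/* no terminator */

	printf("%d %d\n",
	       name_ok(good, sizeof(good)), name_ok(bad, sizeof(bad)));
	return 0;
}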
+diff --git a/io_uring/memmap.c b/io_uring/memmap.c
+index a0f32a255fd1e1..6d151e46f3d69e 100644
+--- a/io_uring/memmap.c
++++ b/io_uring/memmap.c
+@@ -72,6 +72,8 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
+ ret = io_mem_alloc_compound(pages, nr_pages, size, gfp);
+ if (!IS_ERR(ret))
+ goto done;
++ if (nr_pages == 1)
++ goto fail;
+
+ ret = io_mem_alloc_single(pages, nr_pages, size, gfp);
+ if (!IS_ERR(ret)) {
+@@ -80,7 +82,7 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
+ *npages = nr_pages;
+ return ret;
+ }
+-
++fail:
+ kvfree(pages);
+ *out_pages = NULL;
+ *npages = 0;
+@@ -135,7 +137,12 @@ struct page **io_pin_pages(unsigned long uaddr, unsigned long len, int *npages)
+ struct page **pages;
+ int ret;
+
+- end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
++ if (check_add_overflow(uaddr, len, &end))
++ return ERR_PTR(-EOVERFLOW);
++ if (check_add_overflow(end, PAGE_SIZE - 1, &end))
++ return ERR_PTR(-EOVERFLOW);
++
++ end = end >> PAGE_SHIFT;
+ start = uaddr >> PAGE_SHIFT;
+ nr_pages = end - start;
+ if (WARN_ON_ONCE(!nr_pages))
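
The io_pin_pages() hunk replaces unchecked address arithmetic with overflow-checked additions, so a hostile uaddr/len pair can no longer wrap end below start. A userspace model using the same __builtin_add_overflow that backs the kernel's check_add_overflow():

/* Overflow-checked page-range computation; illustrative, not io_uring. */
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

static int pin_range(unsigned long uaddr, unsigned long len,
		     unsigned long *nr_pages)
{
	unsigned long end;

	if (__builtin_add_overflow(uaddr, len, &end))
		return -1;			/* -EOVERFLOW in the kernel */
	if (__builtin_add_overflow(end, PAGE_SIZE - 1, &end))
		return -1;
	*nr_pages = (end >> PAGE_SHIFT) - (uaddr >> PAGE_SHIFT);
	return 0;
}

int main(void)
{
	unsigned long n;

	printf("ok=%d n=%lu\n", pin_range(0x1000, 0x2001, &n), n);
	printf("ok=%d\n", pin_range(~0UL - 10, 100, &n));	/* wraps: rejected */
	return 0;
}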
+diff --git a/ipc/namespace.c b/ipc/namespace.c
+index 6ecc30effd3ec6..4df91ceeeafe9f 100644
+--- a/ipc/namespace.c
++++ b/ipc/namespace.c
+@@ -83,13 +83,15 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
+
+ err = msg_init_ns(ns);
+ if (err)
+- goto fail_put;
++ goto fail_ipc;
+
+ sem_init_ns(ns);
+ shm_init_ns(ns);
+
+ return ns;
+
++fail_ipc:
++ retire_ipc_sysctls(ns);
+ fail_mq:
+ retire_mq_sysctls(ns);
+
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index 0d515ec57aa558..5880e94e4fc703 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -32,7 +32,9 @@ struct bpf_struct_ops_map {
+ * (in kvalue.data).
+ */
+ struct bpf_link **links;
+- u32 links_cnt;
++ /* ksyms for bpf trampolines */
++ struct bpf_ksym **ksyms;
++ u32 funcs_cnt;
+ u32 image_pages_cnt;
+ /* image_pages is an array of pages that has all the trampolines
+ * that stores the func args before calling the bpf_prog.
+@@ -481,11 +483,11 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
+ {
+ u32 i;
+
+- for (i = 0; i < st_map->links_cnt; i++) {
+- if (st_map->links[i]) {
+- bpf_link_put(st_map->links[i]);
+- st_map->links[i] = NULL;
+- }
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->links[i])
++ break;
++ bpf_link_put(st_map->links[i]);
++ st_map->links[i] = NULL;
+ }
+ }
+
+@@ -586,6 +588,49 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+ return 0;
+ }
+
++static void bpf_struct_ops_ksym_init(const char *tname, const char *mname,
++ void *image, unsigned int size,
++ struct bpf_ksym *ksym)
++{
++ snprintf(ksym->name, KSYM_NAME_LEN, "bpf__%s_%s", tname, mname);
++ INIT_LIST_HEAD_RCU(&ksym->lnode);
++ bpf_image_ksym_init(image, size, ksym);
++}
++
++static void bpf_struct_ops_map_add_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ bpf_image_ksym_add(st_map->ksyms[i]);
++ }
++}
++
++static void bpf_struct_ops_map_del_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ bpf_image_ksym_del(st_map->ksyms[i]);
++ }
++}
++
++static void bpf_struct_ops_map_free_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ kfree(st_map->ksyms[i]);
++ st_map->ksyms[i] = NULL;
++ }
++}
++
+ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ void *value, u64 flags)
+ {
+@@ -601,6 +646,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ int prog_fd, err;
+ u32 i, trampoline_start, image_off = 0;
+ void *cur_image = NULL, *image = NULL;
++ struct bpf_link **plink;
++ struct bpf_ksym **pksym;
++ const char *tname, *mname;
+
+ if (flags)
+ return -EINVAL;
+@@ -639,14 +687,19 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ udata = &uvalue->data;
+ kdata = &kvalue->data;
+
++ plink = st_map->links;
++ pksym = st_map->ksyms;
++ tname = btf_name_by_offset(st_map->btf, t->name_off);
+ module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[IDX_MODULE_ID]);
+ for_each_member(i, t, member) {
+ const struct btf_type *mtype, *ptype;
+ struct bpf_prog *prog;
+ struct bpf_tramp_link *link;
++ struct bpf_ksym *ksym;
+ u32 moff;
+
+ moff = __btf_member_bit_offset(t, member) / 8;
++ mname = btf_name_by_offset(st_map->btf, member->name_off);
+ ptype = btf_type_resolve_ptr(st_map->btf, member->type, NULL);
+ if (ptype == module_type) {
+ if (*(void **)(udata + moff))
+@@ -714,7 +767,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ }
+ bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
+ &bpf_struct_ops_link_lops, prog);
+- st_map->links[i] = &link->link;
++ *plink++ = &link->link;
++
++ ksym = kzalloc(sizeof(*ksym), GFP_USER);
++ if (!ksym) {
++ err = -ENOMEM;
++ goto reset_unlock;
++ }
++ *pksym++ = ksym;
+
+ trampoline_start = image_off;
+ err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+@@ -735,6 +795,12 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+
+ /* put prog_id to udata */
+ *(unsigned long *)(udata + moff) = prog->aux->id;
++
++ /* init ksym for this trampoline */
++ bpf_struct_ops_ksym_init(tname, mname,
++ image + trampoline_start,
++ image_off - trampoline_start,
++ ksym);
+ }
+
+ if (st_ops->validate) {
+@@ -783,6 +849,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ */
+
+ reset_unlock:
++ bpf_struct_ops_map_free_ksyms(st_map);
+ bpf_struct_ops_map_free_image(st_map);
+ bpf_struct_ops_map_put_progs(st_map);
+ memset(uvalue, 0, map->value_size);
+@@ -790,6 +857,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ unlock:
+ kfree(tlinks);
+ mutex_unlock(&st_map->lock);
++ if (!err)
++ bpf_struct_ops_map_add_ksyms(st_map);
+ return err;
+ }
+
+@@ -849,7 +918,10 @@ static void __bpf_struct_ops_map_free(struct bpf_map *map)
+
+ if (st_map->links)
+ bpf_struct_ops_map_put_progs(st_map);
++ if (st_map->ksyms)
++ bpf_struct_ops_map_free_ksyms(st_map);
+ bpf_map_area_free(st_map->links);
++ bpf_map_area_free(st_map->ksyms);
+ bpf_struct_ops_map_free_image(st_map);
+ bpf_map_area_free(st_map->uvalue);
+ bpf_map_area_free(st_map);
+@@ -866,6 +938,8 @@ static void bpf_struct_ops_map_free(struct bpf_map *map)
+ if (btf_is_module(st_map->btf))
+ module_put(st_map->st_ops_desc->st_ops->owner);
+
++ bpf_struct_ops_map_del_ksyms(st_map);
++
+ /* The struct_ops's function may switch to another struct_ops.
+ *
+ * For example, bpf_tcp_cc_x->init() may switch to
+@@ -895,6 +969,19 @@ static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr)
+ return 0;
+ }
+
++static u32 count_func_ptrs(const struct btf *btf, const struct btf_type *t)
++{
++ int i;
++ u32 count;
++ const struct btf_member *member;
++
++ count = 0;
++ for_each_member(i, t, member)
++ if (btf_type_resolve_func_ptr(btf, member->type, NULL))
++ count++;
++ return count;
++}
++
+ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
+ {
+ const struct bpf_struct_ops_desc *st_ops_desc;
+@@ -961,11 +1048,15 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
+ map = &st_map->map;
+
+ st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
+- st_map->links_cnt = btf_type_vlen(t);
++ st_map->funcs_cnt = count_func_ptrs(btf, t);
+ st_map->links =
+- bpf_map_area_alloc(st_map->links_cnt * sizeof(struct bpf_links *),
++ bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_link *),
++ NUMA_NO_NODE);
++
++ st_map->ksyms =
++ bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_ksym *),
+ NUMA_NO_NODE);
+- if (!st_map->uvalue || !st_map->links) {
++ if (!st_map->uvalue || !st_map->links || !st_map->ksyms) {
+ ret = -ENOMEM;
+ goto errout_free;
+ }
+@@ -994,7 +1085,8 @@ static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
+ usage = sizeof(*st_map) +
+ vt->size - sizeof(struct bpf_struct_ops_value);
+ usage += vt->size;
+- usage += btf_type_vlen(vt) * sizeof(struct bpf_links *);
++ usage += st_map->funcs_cnt * sizeof(struct bpf_link *);
++ usage += st_map->funcs_cnt * sizeof(struct bpf_ksym *);
+ usage += PAGE_SIZE;
+ return usage;
+ }
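
The struct_ops hunks above follow a two-phase ksym lifecycle: each trampoline's ksym is allocated and initialized while the map update is in progress, but only published to kallsyms once the whole update has succeeded, and freed without publication on failure. A condensed sketch of that flow, using the helpers introduced above (the wrapper function and populate_map() are hypothetical stand-ins for the member loop, not part of the patch):

/* Sketch only: condensed from bpf_struct_ops_map_update_elem() above. */
static int update_and_publish(struct bpf_struct_ops_map *st_map)
{
	int err;

	err = populate_map(st_map);	/* kzalloc()s each ksym and calls
					 * bpf_struct_ops_ksym_init() */
	if (err) {
		/* never published, so free without a del step */
		bpf_struct_ops_map_free_ksyms(st_map);
		return err;
	}
	/* whole update succeeded: now make the symbols visible */
	bpf_struct_ops_map_add_ksyms(st_map);
	return 0;
}
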
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 5f4f1d0bc23a47..8608d24369f019 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6534,6 +6534,12 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ if (prog_args_trusted(prog))
+ info->reg_type |= PTR_TRUSTED;
+
++ /* Raw tracepoint arguments always get marked as maybe NULL */
++ if (bpf_prog_is_raw_tp(prog))
++ info->reg_type |= PTR_MAYBE_NULL;
++ else if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
++ info->reg_type |= PTR_MAYBE_NULL;
++
+ if (tgt_prog) {
+ enum bpf_prog_type tgt_type;
+
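
The btf.c hunk above tags every raw tracepoint argument with PTR_MAYBE_NULL; the verifier hunks further down then mask that flag for raw_tp programs so existing programs keep working, with the access lowered to a fault-tolerant probe load. An explicit NULL check remains the portable way to write such a program. A minimal libbpf-style sketch (tracepoint and field chosen purely for illustration):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next)
{
	/* prev may be NULL; checking keeps the intent explicit even
	 * though raw_tp loads are lowered to fault-tolerant probes
	 */
	if (!prev)
		return 0;
	bpf_printk("prev pid %d", prev->pid);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
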
+diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
+index 70fb82bf16370e..b77db7413f8c70 100644
+--- a/kernel/bpf/dispatcher.c
++++ b/kernel/bpf/dispatcher.c
+@@ -154,7 +154,8 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+ d->image = NULL;
+ goto out;
+ }
+- bpf_image_ksym_add(d->image, PAGE_SIZE, &d->ksym);
++ bpf_image_ksym_init(d->image, PAGE_SIZE, &d->ksym);
++ bpf_image_ksym_add(&d->ksym);
+ }
+
+ prev_num_progs = d->num_progs;
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index f8302a5ca400da..1166d9dd3e8b5d 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -115,10 +115,14 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
+ (ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC);
+ }
+
+-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym)
++void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym)
+ {
+ ksym->start = (unsigned long) data;
+ ksym->end = ksym->start + size;
++}
++
++void bpf_image_ksym_add(struct bpf_ksym *ksym)
++{
+ bpf_ksym_add(ksym);
+ perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start,
+ PAGE_SIZE, false, ksym->name);
+@@ -377,7 +381,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size)
+ ksym = &im->ksym;
+ INIT_LIST_HEAD_RCU(&ksym->lnode);
+ snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key);
+- bpf_image_ksym_add(image, size, ksym);
++ bpf_image_ksym_init(image, size, ksym);
++ bpf_image_ksym_add(ksym);
+ return im;
+
+ out_free_image:
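
The trampoline.c change splits the old one-shot bpf_image_ksym_add(data, size, ksym) into an init step that records the [start, end) range and an add step that publishes the symbol to kallsyms and perf. Callers like the one above simply chain the two; struct_ops needs the split so it can initialize early and publish only after a successful update. The resulting call pattern, as wired up in this patch:

snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key);
INIT_LIST_HEAD_RCU(&ksym->lnode);
bpf_image_ksym_init(image, size, ksym);	/* record address range */
/* ... possibly much later ... */
bpf_image_ksym_add(ksym);		/* expose via kallsyms + perf */
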
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a5a9b4e418a685..61c3896087b2c8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -28,6 +28,8 @@
+ #include <linux/cpumask.h>
+ #include <linux/bpf_mem_alloc.h>
+ #include <net/xdp.h>
++#include <linux/trace_events.h>
++#include <linux/kallsyms.h>
+
+ #include "disasm.h"
+
+@@ -421,6 +423,25 @@ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+ return rec;
+ }
+
++static bool mask_raw_tp_reg_cond(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) {
++ return reg->type == (PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL) &&
++ bpf_prog_is_raw_tp(env->prog) && !reg->ref_obj_id;
++}
++
++static bool mask_raw_tp_reg(const struct bpf_verifier_env *env, struct bpf_reg_state *reg)
++{
++ if (!mask_raw_tp_reg_cond(env, reg))
++ return false;
++ reg->type &= ~PTR_MAYBE_NULL;
++ return true;
++}
++
++static void unmask_raw_tp_reg(struct bpf_reg_state *reg, bool result)
++{
++ if (result)
++ reg->type |= PTR_MAYBE_NULL;
++}
++
+ static bool subprog_is_global(const struct bpf_verifier_env *env, int subprog)
+ {
+ struct bpf_func_info_aux *aux = env->prog->aux->func_info_aux;
+@@ -6511,6 +6532,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ const char *field_name = NULL;
+ enum bpf_type_flag flag = 0;
+ u32 btf_id = 0;
++ bool mask;
+ int ret;
+
+ if (!env->allow_ptr_leaks) {
+@@ -6582,7 +6604,21 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+
+ if (ret < 0)
+ return ret;
+-
++ /* For raw_tp progs, we allow dereference of PTR_MAYBE_NULL
++ * trusted PTR_TO_BTF_ID, as these are the ones that may be
++ * arguments to the raw_tp. Since the internal checks for a trusted
++ * reg in check_ptr_to_btf_access would consider the PTR_MAYBE_NULL
++ * modifier problematic, mask it out temporarily for the
++ * check. Don't apply this to pointers with ref_obj_id > 0, as
++ * those won't be raw_tp args.
++ *
++ * We may end up applying this relaxation to other trusted
++ * PTR_TO_BTF_ID with maybe null flag, since we cannot
++ * distinguish PTR_MAYBE_NULL tagged for arguments vs normal
++ * tagging, but that should expand allowed behavior, and not
++ * cause regression for existing behavior.
++ */
++ mask = mask_raw_tp_reg(env, reg);
+ if (ret != PTR_TO_BTF_ID) {
+ /* just mark; */
+
+@@ -6643,8 +6679,13 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ clear_trusted_flags(&flag);
+ }
+
+- if (atype == BPF_READ && value_regno >= 0)
++ if (atype == BPF_READ && value_regno >= 0) {
+ mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
++ /* We've assigned a new type to regno, so don't undo masking. */
++ if (regno == value_regno)
++ mask = false;
++ }
++ unmask_raw_tp_reg(reg, mask);
+
+ return 0;
+ }
+@@ -7019,7 +7060,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown(env, regs, value_regno);
+ } else if (base_type(reg->type) == PTR_TO_BTF_ID &&
+- !type_may_be_null(reg->type)) {
++ (mask_raw_tp_reg_cond(env, reg) || !type_may_be_null(reg->type))) {
+ err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ value_regno);
+ } else if (reg->type == CONST_PTR_TO_MAP) {
+@@ -8683,6 +8724,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ enum bpf_reg_type type = reg->type;
+ u32 *arg_btf_id = NULL;
+ int err = 0;
++ bool mask;
+
+ if (arg_type == ARG_DONTCARE)
+ return 0;
+@@ -8723,11 +8765,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
+ arg_btf_id = fn->arg_btf_id[arg];
+
++ mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
+- if (err)
+- return err;
+
+- err = check_func_arg_reg_off(env, reg, regno, arg_type);
++ err = err ?: check_func_arg_reg_off(env, reg, regno, arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+
+@@ -9522,14 +9564,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
+ return ret;
+ } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
+ struct bpf_call_arg_meta meta;
++ bool mask;
+ int err;
+
+ if (register_is_null(reg) && type_may_be_null(arg->arg_type))
+ continue;
+
+ memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
++ mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+ } else {
+@@ -10459,11 +10504,26 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+
+ switch (func_id) {
+ case BPF_FUNC_tail_call:
++ if (env->cur_state->active_lock.ptr) {
++ verbose(env, "tail_call cannot be used inside bpf_spin_lock-ed region\n");
++ return -EINVAL;
++ }
++
+ err = check_reference_leak(env, false);
+ if (err) {
+ verbose(env, "tail_call would lead to reference leak\n");
+ return err;
+ }
++
++ if (env->cur_state->active_rcu_lock) {
++ verbose(env, "tail_call cannot be used inside bpf_rcu_read_lock-ed region\n");
++ return -EINVAL;
++ }
++
++ if (env->cur_state->active_preempt_lock) {
++ verbose(env, "tail_call cannot be used inside bpf_preempt_disable-ed region\n");
++ return -EINVAL;
++ }
+ break;
+ case BPF_FUNC_get_local_storage:
+ /* check that flags argument in get_local_storage(map, flags) is 0,
+@@ -11819,6 +11879,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 regno = i + 1, ref_id, type_size;
+ bool is_ret_buf_sz = false;
++ bool mask = false;
+ int kf_arg_type;
+
+ t = btf_type_skip_modifiers(btf, args[i].type, NULL);
+@@ -11877,12 +11938,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ return -EINVAL;
+ }
+
++ mask = mask_raw_tp_reg(env, reg);
+ if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
+ (register_is_null(reg) || type_may_be_null(reg->type)) &&
+ !is_kfunc_arg_nullable(meta->btf, &args[i])) {
+ verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
++ unmask_raw_tp_reg(reg, mask);
+ return -EACCES;
+ }
++ unmask_raw_tp_reg(reg, mask);
+
+ if (reg->ref_obj_id) {
+ if (is_kfunc_release(meta) && meta->ref_obj_id) {
+@@ -11940,16 +12004,24 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
+ break;
+
++ /* Allow passing maybe NULL raw_tp arguments to
++ * kfuncs for compatibility. Don't apply this to
++ * arguments with ref_obj_id > 0.
++ */
++ mask = mask_raw_tp_reg(env, reg);
+ if (!is_trusted_reg(reg)) {
+ if (!is_kfunc_rcu(meta)) {
+ verbose(env, "R%d must be referenced or trusted\n", regno);
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ if (!is_rcu_reg(reg)) {
+ verbose(env, "R%d must be a rcu pointer\n", regno);
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ }
++ unmask_raw_tp_reg(reg, mask);
+ fallthrough;
+ case KF_ARG_PTR_TO_CTX:
+ case KF_ARG_PTR_TO_DYNPTR:
+@@ -11972,7 +12044,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+
+ if (is_kfunc_release(meta) && reg->ref_obj_id)
+ arg_type |= OBJ_RELEASE;
++ mask = mask_raw_tp_reg(env, reg);
+ ret = check_func_arg_reg_off(env, reg, regno, arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+
+@@ -12148,6 +12222,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ ref_tname = btf_name_by_offset(btf, ref_t->name_off);
+ fallthrough;
+ case KF_ARG_PTR_TO_BTF_ID:
++ mask = mask_raw_tp_reg(env, reg);
+ /* Only base_type is checked, further checks are done here */
+ if ((base_type(reg->type) != PTR_TO_BTF_ID ||
+ (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
+@@ -12156,9 +12231,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ verbose(env, "expected %s or socket\n",
+ reg_type_str(env, base_type(reg->type) |
+ (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i);
++ unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+ break;
+@@ -13128,7 +13205,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
+ */
+ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_insn *insn,
+- const struct bpf_reg_state *ptr_reg,
++ struct bpf_reg_state *ptr_reg,
+ const struct bpf_reg_state *off_reg)
+ {
+ struct bpf_verifier_state *vstate = env->cur_state;
+@@ -13142,6 +13219,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_sanitize_info info = {};
+ u8 opcode = BPF_OP(insn->code);
+ u32 dst = insn->dst_reg;
++ bool mask;
+ int ret;
+
+ dst_reg = &regs[dst];
+@@ -13168,11 +13246,14 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ return -EACCES;
+ }
+
++ mask = mask_raw_tp_reg(env, ptr_reg);
+ if (ptr_reg->type & PTR_MAYBE_NULL) {
+ verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
+ dst, reg_type_str(env, ptr_reg->type));
++ unmask_raw_tp_reg(ptr_reg, mask);
+ return -EACCES;
+ }
++ unmask_raw_tp_reg(ptr_reg, mask);
+
+ switch (base_type(ptr_reg->type)) {
+ case PTR_TO_CTX:
+@@ -15717,6 +15798,15 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ return -ENOTSUPP;
+ }
+ break;
++ case BPF_PROG_TYPE_KPROBE:
++ switch (env->prog->expected_attach_type) {
++ case BPF_TRACE_KPROBE_SESSION:
++ range = retval_range(0, 1);
++ break;
++ default:
++ return 0;
++ }
++ break;
+ case BPF_PROG_TYPE_SK_LOOKUP:
+ range = retval_range(SK_DROP, SK_PASS);
+ break;
+@@ -19313,6 +19403,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ * for this case.
+ */
+ case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
++ case PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL:
+ if (type == BPF_READ) {
+ if (BPF_MODE(insn->code) == BPF_MEM)
+ insn->code = BPF_LDX | BPF_PROBE_MEM |
+@@ -21293,11 +21384,13 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
+ {
+ bool prog_extension = prog->type == BPF_PROG_TYPE_EXT;
+ bool prog_tracing = prog->type == BPF_PROG_TYPE_TRACING;
++ char trace_symbol[KSYM_SYMBOL_LEN];
+ const char prefix[] = "btf_trace_";
++ struct bpf_raw_event_map *btp;
+ int ret = 0, subprog = -1, i;
+ const struct btf_type *t;
+ bool conservative = true;
+- const char *tname;
++ const char *tname, *fname;
+ struct btf *btf;
+ long addr = 0;
+ struct module *mod = NULL;
+@@ -21428,10 +21521,34 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
+ return -EINVAL;
+ }
+ tname += sizeof(prefix) - 1;
+- t = btf_type_by_id(btf, t->type);
+- if (!btf_type_is_ptr(t))
+- /* should never happen in valid vmlinux build */
++
++ /* The func_proto of "btf_trace_##tname" is generated from a typedef without
++ * argument names, so use bpf_raw_event_map to look up the argument names.
++ */
++ btp = bpf_get_raw_tracepoint(tname);
++ if (!btp)
+ return -EINVAL;
++ fname = kallsyms_lookup((unsigned long)btp->bpf_func, NULL, NULL, NULL,
++ trace_symbol);
++ bpf_put_raw_tracepoint(btp);
++
++ if (fname)
++ ret = btf_find_by_name_kind(btf, fname, BTF_KIND_FUNC);
++
++ if (!fname || ret < 0) {
++ bpf_log(log, "Cannot find btf of tracepoint template, fall back to %s%s.\n",
++ prefix, tname);
++ t = btf_type_by_id(btf, t->type);
++ if (!btf_type_is_ptr(t))
++ /* should never happen in valid vmlinux build */
++ return -EINVAL;
++ } else {
++ t = btf_type_by_id(btf, ret);
++ if (!btf_type_is_func(t))
++ /* should never happen in valid vmlinux build */
++ return -EINVAL;
++ }
++
+ t = btf_type_by_id(btf, t->type);
+ if (!btf_type_is_func_proto(t))
+ /* should never happen in valid vmlinux build */
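
All of the verifier hunks above use the same bracket pattern: strip PTR_MAYBE_NULL from a raw_tp argument register around an internal check that would otherwise reject the modifier, then restore it. In outline (the checked helper is a hypothetical stand-in for check_reg_type(), check_func_arg_reg_off() and friends):

bool mask;

mask = mask_raw_tp_reg(env, reg);	/* true iff the flag was stripped */
err = some_internal_check(env, reg);	/* hypothetical stand-in */
unmask_raw_tp_reg(reg, mask);		/* restore PTR_MAYBE_NULL */
if (err)
	return err;
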
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 6ba7dd2ab771d0..39402a9eb6b047 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2133,8 +2133,10 @@ int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask)
+ if (ret)
+ goto exit_stats;
+
+- ret = cgroup_bpf_inherit(root_cgrp);
+- WARN_ON_ONCE(ret);
++ if (root == &cgrp_dfl_root) {
++ ret = cgroup_bpf_inherit(root_cgrp);
++ WARN_ON_ONCE(ret);
++ }
+
+ trace_cgroup_setup_root(root);
+
+@@ -2307,10 +2309,8 @@ static void cgroup_kill_sb(struct super_block *sb)
+ * And don't kill the default root.
+ */
+ if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
+- !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
+- cgroup_bpf_offline(&root->cgrp);
++ !percpu_ref_is_dying(&root->cgrp.self.refcnt))
+ percpu_ref_kill(&root->cgrp.self.refcnt);
+- }
+ cgroup_put(&root->cgrp);
+ kernfs_kill_sb(sb);
+ }
+@@ -5643,9 +5643,11 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name,
+ if (ret)
+ goto out_kernfs_remove;
+
+- ret = cgroup_bpf_inherit(cgrp);
+- if (ret)
+- goto out_psi_free;
++ if (cgrp->root == &cgrp_dfl_root) {
++ ret = cgroup_bpf_inherit(cgrp);
++ if (ret)
++ goto out_psi_free;
++ }
+
+ /*
+ * New cgroup inherits effective freeze counter, and
+@@ -5959,7 +5961,8 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+
+ cgroup1_check_for_release(parent);
+
+- cgroup_bpf_offline(cgrp);
++ if (cgrp->root == &cgrp_dfl_root)
++ cgroup_bpf_offline(cgrp);
+
+ /* put the base reference */
+ percpu_ref_kill(&cgrp->self.refcnt);
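
The cgroup.c hunks confine cgroup-BPF attach/offline bookkeeping to the default (v2) hierarchy; v1 roots now skip it entirely. The recurring guard could be read as a predicate like the following (a hypothetical helper, not part of the patch; the kernel's existing cgroup_on_dfl() expresses a similar idea):

static inline bool cgroup_on_dfl_root(const struct cgroup *cgrp)
{
	return cgrp->root == &cgrp_dfl_root;
}

/* ... so each call site reduces to: */
if (cgroup_on_dfl_root(cgrp))
	cgroup_bpf_offline(cgrp);
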
+diff --git a/kernel/fork.c b/kernel/fork.c
+index dc08a23747338c..0d15759066341a 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -621,6 +621,12 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
+
+ exe_file = get_mm_exe_file(oldmm);
+ RCU_INIT_POINTER(mm->exe_file, exe_file);
++ /*
++ * We depend on the oldmm having properly denied write access to the
++ * exe_file already.
++ */
++ if (exe_file && deny_write_access(exe_file))
++ pr_warn_once("deny_write_access() failed in %s\n", __func__);
+ }
+
+ #ifdef CONFIG_MMU
+@@ -1412,11 +1418,20 @@ int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ */
+ old_exe_file = rcu_dereference_raw(mm->exe_file);
+
+- if (new_exe_file)
++ if (new_exe_file) {
++ /*
++ * We expect the caller (i.e., sys_execve) to have already denied
++ * write access, so this is unlikely to fail.
++ */
++ if (unlikely(deny_write_access(new_exe_file)))
++ return -EACCES;
+ get_file(new_exe_file);
++ }
+ rcu_assign_pointer(mm->exe_file, new_exe_file);
+- if (old_exe_file)
++ if (old_exe_file) {
++ allow_write_access(old_exe_file);
+ fput(old_exe_file);
++ }
+ return 0;
+ }
+
+@@ -1455,6 +1470,9 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ return ret;
+ }
+
++ ret = deny_write_access(new_exe_file);
++ if (ret)
++ return -EACCES;
+ get_file(new_exe_file);
+
+ /* set the new file */
+@@ -1463,8 +1481,10 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ rcu_assign_pointer(mm->exe_file, new_exe_file);
+ mmap_write_unlock(mm);
+
+- if (old_exe_file)
++ if (old_exe_file) {
++ allow_write_access(old_exe_file);
+ fput(old_exe_file);
++ }
+ return 0;
+ }
+
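
The fork.c hunks reinstate the invariant that every exe_file reference held by an mm also holds a write-deny on the file, and that both are dropped together. The pairing in isolation (error handling trimmed; deny_write_access() fails if the file is currently open for writing):

/* acquire: refuse if someone has the file open for writing */
if (deny_write_access(exe_file))
	return -EACCES;
get_file(exe_file);

/* ... exe_file is now pinned and ETXTBSY-protected ... */

/* release: drop the write-deny together with the reference */
allow_write_access(exe_file);
fput(exe_file);
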
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index cea8f6874b1fbc..e5a27b511d73f9 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -128,72 +128,92 @@ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
+ }
+ EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);
+
+-static int irq_domain_set_name(struct irq_domain *domain,
+- const struct fwnode_handle *fwnode,
+- enum irq_domain_bus_token bus_token)
++static int alloc_name(struct irq_domain *domain, char *base, enum irq_domain_bus_token bus_token)
++{
++ domain->name = bus_token ? kasprintf(GFP_KERNEL, "%s-%d", base, bus_token) :
++ kasprintf(GFP_KERNEL, "%s", base);
++ if (!domain->name)
++ return -ENOMEM;
++
++ domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
++ return 0;
++}
++
++static int alloc_fwnode_name(struct irq_domain *domain, const struct fwnode_handle *fwnode,
++ enum irq_domain_bus_token bus_token, const char *suffix)
++{
++ const char *sep = suffix ? "-" : "";
++ const char *suf = suffix ? : "";
++ char *name;
++
++ name = bus_token ? kasprintf(GFP_KERNEL, "%pfw-%s%s%d", fwnode, suf, sep, bus_token) :
++ kasprintf(GFP_KERNEL, "%pfw-%s", fwnode, suf);
++ if (!name)
++ return -ENOMEM;
++
++ /*
++ * fwnode paths contain '/', which debugfs is legitimately unhappy
++ * about. Replace them with ':', which does the trick and is not as
++ * offensive as '\'...
++ */
++ domain->name = strreplace(name, '/', ':');
++ domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
++ return 0;
++}
++
++static int alloc_unknown_name(struct irq_domain *domain, enum irq_domain_bus_token bus_token)
+ {
+ static atomic_t unknown_domains;
+- struct irqchip_fwid *fwid;
++ int id = atomic_inc_return(&unknown_domains);
++
++ domain->name = bus_token ? kasprintf(GFP_KERNEL, "unknown-%d-%d", id, bus_token) :
++ kasprintf(GFP_KERNEL, "unknown-%d", id);
++
++ if (!domain->name)
++ return -ENOMEM;
++ domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
++ return 0;
++}
++
++static int irq_domain_set_name(struct irq_domain *domain, const struct irq_domain_info *info)
++{
++ enum irq_domain_bus_token bus_token = info->bus_token;
++ const struct fwnode_handle *fwnode = info->fwnode;
+
+ if (is_fwnode_irqchip(fwnode)) {
+- fwid = container_of(fwnode, struct irqchip_fwid, fwnode);
++ struct irqchip_fwid *fwid = container_of(fwnode, struct irqchip_fwid, fwnode);
++
++ /*
++ * The name_suffix is only intended to be used to avoid a name
++ * collision when multiple domains are created for a single
++ * device and the name is picked using a real device node.
++ * (Typical use-case is regmap-IRQ controllers for devices
++ * providing more than one physical IRQ.) There should be no
++ * need to use name_suffix with irqchip-fwnode.
++ */
++ if (info->name_suffix)
++ return -EINVAL;
+
+ switch (fwid->type) {
+ case IRQCHIP_FWNODE_NAMED:
+ case IRQCHIP_FWNODE_NAMED_ID:
+- domain->name = bus_token ?
+- kasprintf(GFP_KERNEL, "%s-%d",
+- fwid->name, bus_token) :
+- kstrdup(fwid->name, GFP_KERNEL);
+- if (!domain->name)
+- return -ENOMEM;
+- domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
+- break;
++ return alloc_name(domain, fwid->name, bus_token);
+ default:
+ domain->name = fwid->name;
+- if (bus_token) {
+- domain->name = kasprintf(GFP_KERNEL, "%s-%d",
+- fwid->name, bus_token);
+- if (!domain->name)
+- return -ENOMEM;
+- domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
+- }
+- break;
++ if (bus_token)
++ return alloc_name(domain, fwid->name, bus_token);
+ }
+- } else if (is_of_node(fwnode) || is_acpi_device_node(fwnode) ||
+- is_software_node(fwnode)) {
+- char *name;
+
+- /*
+- * fwnode paths contain '/', which debugfs is legitimately
+- * unhappy about. Replace them with ':', which does
+- * the trick and is not as offensive as '\'...
+- */
+- name = bus_token ?
+- kasprintf(GFP_KERNEL, "%pfw-%d", fwnode, bus_token) :
+- kasprintf(GFP_KERNEL, "%pfw", fwnode);
+- if (!name)
+- return -ENOMEM;
+-
+- domain->name = strreplace(name, '/', ':');
+- domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
++ } else if (is_of_node(fwnode) || is_acpi_device_node(fwnode) || is_software_node(fwnode)) {
++ return alloc_fwnode_name(domain, fwnode, bus_token, info->name_suffix);
+ }
+
+- if (!domain->name) {
+- if (fwnode)
+- pr_err("Invalid fwnode type for irqdomain\n");
+- domain->name = bus_token ?
+- kasprintf(GFP_KERNEL, "unknown-%d-%d",
+- atomic_inc_return(&unknown_domains),
+- bus_token) :
+- kasprintf(GFP_KERNEL, "unknown-%d",
+- atomic_inc_return(&unknown_domains));
+- if (!domain->name)
+- return -ENOMEM;
+- domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
+- }
++ if (domain->name)
++ return 0;
+
+- return 0;
++ if (fwnode)
++ pr_err("Invalid fwnode type for irqdomain\n");
++ return alloc_unknown_name(domain, bus_token);
+ }
+
+ static struct irq_domain *__irq_domain_create(const struct irq_domain_info *info)
+@@ -211,7 +231,7 @@ static struct irq_domain *__irq_domain_create(const struct irq_domain_info *info
+ if (!domain)
+ return ERR_PTR(-ENOMEM);
+
+- err = irq_domain_set_name(domain, info->fwnode, info->bus_token);
++ err = irq_domain_set_name(domain, info);
+ if (err) {
+ kfree(domain);
+ return ERR_PTR(err);
+@@ -267,13 +287,20 @@ static void irq_domain_free(struct irq_domain *domain)
+ kfree(domain);
+ }
+
+-/**
+- * irq_domain_instantiate() - Instantiate a new irq domain data structure
+- * @info: Domain information pointer pointing to the information for this domain
+- *
+- * Return: A pointer to the instantiated irq domain or an ERR_PTR value.
+- */
+-struct irq_domain *irq_domain_instantiate(const struct irq_domain_info *info)
++static void irq_domain_instantiate_descs(const struct irq_domain_info *info)
++{
++ if (!IS_ENABLED(CONFIG_SPARSE_IRQ))
++ return;
++
++ if (irq_alloc_descs(info->virq_base, info->virq_base, info->size,
++ of_node_to_nid(to_of_node(info->fwnode))) < 0) {
++ pr_info("Cannot allocate irq_descs @ IRQ%d, assuming pre-allocated\n",
++ info->virq_base);
++ }
++}
++
++static struct irq_domain *__irq_domain_instantiate(const struct irq_domain_info *info,
++ bool cond_alloc_descs, bool force_associate)
+ {
+ struct irq_domain *domain;
+ int err;
+@@ -306,6 +333,19 @@ struct irq_domain *irq_domain_instantiate(const struct irq_domain_info *info)
+
+ __irq_domain_publish(domain);
+
++ if (cond_alloc_descs && info->virq_base > 0)
++ irq_domain_instantiate_descs(info);
++
++ /*
++ * Legacy interrupt domains have a fixed Linux interrupt number
++ * associated. Other interrupt domains can request association by
++ * providing a Linux interrupt number > 0.
++ */
++ if (force_associate || info->virq_base > 0) {
++ irq_domain_associate_many(domain, info->virq_base, info->hwirq_base,
++ info->size - info->hwirq_base);
++ }
++
+ return domain;
+
+ err_domain_gc_remove:
+@@ -315,6 +355,17 @@ struct irq_domain *irq_domain_instantiate(const struct irq_domain_info *info)
+ irq_domain_free(domain);
+ return ERR_PTR(err);
+ }
++
++/**
++ * irq_domain_instantiate() - Instantiate a new irq domain data structure
++ * @info: Domain information pointer pointing to the information for this domain
++ *
++ * Return: A pointer to the instantiated irq domain or an ERR_PTR value.
++ */
++struct irq_domain *irq_domain_instantiate(const struct irq_domain_info *info)
++{
++ return __irq_domain_instantiate(info, false, false);
++}
+ EXPORT_SYMBOL_GPL(irq_domain_instantiate);
+
+ /**
+@@ -413,28 +464,13 @@ struct irq_domain *irq_domain_create_simple(struct fwnode_handle *fwnode,
+ .fwnode = fwnode,
+ .size = size,
+ .hwirq_max = size,
++ .virq_base = first_irq,
+ .ops = ops,
+ .host_data = host_data,
+ };
+- struct irq_domain *domain;
+-
+- domain = irq_domain_instantiate(&info);
+- if (IS_ERR(domain))
+- return NULL;
++ struct irq_domain *domain = __irq_domain_instantiate(&info, true, false);
+
+- if (first_irq > 0) {
+- if (IS_ENABLED(CONFIG_SPARSE_IRQ)) {
+- /* attempt to allocated irq_descs */
+- int rc = irq_alloc_descs(first_irq, first_irq, size,
+- of_node_to_nid(to_of_node(fwnode)));
+- if (rc < 0)
+- pr_info("Cannot allocate irq_descs @ IRQ%d, assuming pre-allocated\n",
+- first_irq);
+- }
+- irq_domain_associate_many(domain, first_irq, 0, size);
+- }
+-
+- return domain;
++ return IS_ERR(domain) ? NULL : domain;
+ }
+ EXPORT_SYMBOL_GPL(irq_domain_create_simple);
+
+@@ -476,18 +512,14 @@ struct irq_domain *irq_domain_create_legacy(struct fwnode_handle *fwnode,
+ .fwnode = fwnode,
+ .size = first_hwirq + size,
+ .hwirq_max = first_hwirq + size,
++ .hwirq_base = first_hwirq,
++ .virq_base = first_irq,
+ .ops = ops,
+ .host_data = host_data,
+ };
+- struct irq_domain *domain;
++ struct irq_domain *domain = __irq_domain_instantiate(&info, false, true);
+
+- domain = irq_domain_instantiate(&info);
+- if (IS_ERR(domain))
+- return NULL;
+-
+- irq_domain_associate_many(domain, first_irq, first_hwirq, size);
+-
+- return domain;
++ return IS_ERR(domain) ? NULL : domain;
+ }
+ EXPORT_SYMBOL_GPL(irq_domain_create_legacy);
+
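
After this refactor, irq_domain_create_simple() and irq_domain_create_legacy() are thin wrappers that pass virq_base/hwirq_base through struct irq_domain_info and let __irq_domain_instantiate() handle descriptor allocation and association. Driver-facing behavior is unchanged; a typical caller still looks like this (my_domain_ops and my_priv are placeholders):

struct irq_domain *d;

/* pre-allocates descs for Linux IRQs 16..47 (under SPARSE_IRQ) and
 * associates hwirqs 0..31 with them, exactly as before the refactor
 */
d = irq_domain_create_simple(fwnode, 32, 16, &my_domain_ops, my_priv);
if (!d)
	return -ENOMEM;
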
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index f88c75b3cea3b8..31d480b533033d 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -781,13 +781,15 @@ kfree_scale_init(void)
+ if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) {
+ pr_alert("ERROR: call_rcu() CBs are not being lazy as expected!\n");
+ WARN_ON_ONCE(1);
+- return -1;
++ firsterr = -1;
++ goto unwind;
+ }
+
+ if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) {
+ pr_alert("ERROR: call_rcu() CBs are being too lazy!\n");
+ WARN_ON_ONCE(1);
+- return -1;
++ firsterr = -1;
++ goto unwind;
+ }
+ }
+
+diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
+index 549c03336ee971..4dcbf8aa80ff73 100644
+--- a/kernel/rcu/srcutiny.c
++++ b/kernel/rcu/srcutiny.c
+@@ -122,8 +122,8 @@ void srcu_drive_gp(struct work_struct *wp)
+ ssp = container_of(wp, struct srcu_struct, srcu_work);
+ preempt_disable(); // Needed for PREEMPT_AUTO
+ if (ssp->srcu_gp_running || ULONG_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max))) {
+- return; /* Already running or nothing to do. */
+ preempt_enable();
++ return; /* Already running or nothing to do. */
+ }
+
+ /* Remove recently arrived callbacks and wait for readers. */
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 8af1354b223194..34f47c5b98216c 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3531,7 +3531,7 @@ static int krc_count(struct kfree_rcu_cpu *krcp)
+ }
+
+ static void
+-schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
++__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ {
+ long delay, delay_left;
+
+@@ -3545,6 +3545,16 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ queue_delayed_work(system_wq, &krcp->monitor_work, delay);
+ }
+
++static void
++schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&krcp->lock, flags);
++ __schedule_delayed_monitor_work(krcp);
++ raw_spin_unlock_irqrestore(&krcp->lock, flags);
++}
++
+ static void
+ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
+ {
+@@ -3855,7 +3865,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
+
+ // Set timer to drain after KFREE_DRAIN_JIFFIES.
+ if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
+- schedule_delayed_monitor_work(krcp);
++ __schedule_delayed_monitor_work(krcp);
+
+ unlock_return:
+ krc_this_cpu_unlock(krcp, flags);
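
The tree.c change follows the usual kernel naming convention: the new double-underscore variant assumes krcp->lock is already held (as in kvfree_call_rcu(), which runs under krc_this_cpu_lock()), while the plain-named wrapper takes the lock itself:

/* caller already holds krcp->lock: */
__schedule_delayed_monitor_work(krcp);

/* lock-free callers (e.g. the monitor/drain paths): */
schedule_delayed_monitor_work(krcp);	/* takes krcp->lock internally */
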
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index eece6244f9d2fe..9089ba62a21714 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -775,9 +775,8 @@ static int sugov_init(struct cpufreq_policy *policy)
+ if (ret)
+ goto fail;
+
+- sugov_eas_rebuild_sd();
+-
+ out:
++ sugov_eas_rebuild_sd();
+ mutex_unlock(&global_tunables_lock);
+ return 0;
+
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index 642647f5046be0..1ad88e97b4ebcf 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -556,9 +556,9 @@ EXPORT_SYMBOL(ns_to_timespec64);
+ * - all other values are converted to jiffies by either multiplying
+ * the input value by a factor or dividing it with a factor and
+ * handling any 32-bit overflows.
+- * for the details see __msecs_to_jiffies()
++ * for the details see _msecs_to_jiffies()
+ *
+- * __msecs_to_jiffies() checks for the passed in value being a constant
++ * msecs_to_jiffies() checks for the passed in value being a constant
+ * via __builtin_constant_p() allowing gcc to eliminate most of the
+ * code, __msecs_to_jiffies() is called if the value passed does not
+ * allow constant folding and the actual conversion must be done at
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 6dbbb3683ab2e9..9cb1c952c3ec42 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -3294,7 +3294,6 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ struct bpf_prog *prog = link->link.prog;
+ bool sleepable = prog->sleepable;
+ struct bpf_run_ctx *old_run_ctx;
+- int err = 0;
+
+ if (link->task && current->mm != link->task->mm)
+ return 0;
+@@ -3307,7 +3306,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ migrate_disable();
+
+ old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+- err = bpf_prog_run(link->link.prog, regs);
++ bpf_prog_run(link->link.prog, regs);
+ bpf_reset_run_ctx(old_run_ctx);
+
+ migrate_enable();
+@@ -3316,7 +3315,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ rcu_read_unlock_trace();
+ else
+ rcu_read_unlock();
+- return err;
++ return 0;
+ }
+
+ static bool
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 05e7912418126b..3ff9caa4a71bbd 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -352,10 +352,16 @@ void perf_uprobe_destroy(struct perf_event *p_event)
+ int perf_trace_add(struct perf_event *p_event, int flags)
+ {
+ struct trace_event_call *tp_event = p_event->tp_event;
++ struct hw_perf_event *hwc = &p_event->hw;
+
+ if (!(flags & PERF_EF_START))
+ p_event->hw.state = PERF_HES_STOPPED;
+
++ if (is_sampling_event(p_event)) {
++ hwc->last_period = hwc->sample_period;
++ perf_swevent_set_period(p_event);
++ }
++
+ /*
+ * If TRACE_REG_PERF_ADD returns false; no custom action was performed
+ * and we need to take the default action of enqueueing our event on
+diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
+index 2abc78367dd110..5222c6393f1168 100644
+--- a/lib/overflow_kunit.c
++++ b/lib/overflow_kunit.c
+@@ -1187,7 +1187,7 @@ static void DEFINE_FLEX_test(struct kunit *test)
+ {
+ /* Using _RAW_ on a __counted_by struct will initialize "counter" to zero */
+ DEFINE_RAW_FLEX(struct foo, two_but_zero, array, 2);
+-#if __has_attribute(__counted_by__)
++#ifdef CONFIG_CC_HAS_COUNTED_BY
+ int expected_raw_size = sizeof(struct foo);
+ #else
+ int expected_raw_size = sizeof(struct foo) + 2 * sizeof(s16);
+diff --git a/lib/string_helpers.c b/lib/string_helpers.c
+index 69ba49b853c775..365c2a7d1e3947 100644
+--- a/lib/string_helpers.c
++++ b/lib/string_helpers.c
+@@ -57,7 +57,7 @@ int string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
+ static const unsigned int rounding[] = { 500, 50, 5 };
+ int i = 0, j;
+ u32 remainder = 0, sf_cap;
+- char tmp[8];
++ char tmp[12];
+ const char *unit;
+
+ tmp[0] = '\0';
+diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
+index 989a12a6787214..6dc234913dd58e 100644
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -120,6 +120,9 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (unlikely(count <= 0))
+ return 0;
+
++ kasan_check_write(dst, count);
++ check_object_size(dst, count, false);
++
+ if (can_do_masked_user_access()) {
+ long retval;
+
+@@ -142,8 +145,6 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (max > count)
+ max = count;
+
+- kasan_check_write(dst, count);
+- check_object_size(dst, count, false);
+ if (user_read_access_begin(src, max)) {
+ retval = do_strncpy_from_user(dst, src, count, max);
+ user_read_access_end();
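
Hoisting kasan_check_write() and check_object_size() above the masked-user-access branch means the fast path gets the same destination-buffer sanitization as the fallback. The resulting shape, schematically (the two path names are hypothetical labels, not kernel functions):

kasan_check_write(dst, count);		/* now covers both paths */
check_object_size(dst, count, false);

if (can_do_masked_user_access())
	return masked_fast_path(dst, src, count);	/* hypothetical */
return legacy_slow_path(dst, src, count);		/* hypothetical */
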
+diff --git a/mm/internal.h b/mm/internal.h
+index 7da580dfae6c5a..c791312eae7646 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -42,7 +42,7 @@ struct folio_batch;
+ * when we specify __GFP_NOWARN.
+ */
+ #define WARN_ON_ONCE_GFP(cond, gfp) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(!(gfp & __GFP_NOWARN) && __ret_warn_once && !__warned)) { \
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index dfdbe1ca533872..b9ff69c7522a19 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -286,7 +286,7 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+ if (!priv->rings[i].intf)
+ break;
+ if (priv->rings[i].irq > 0)
+- unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
++ unbind_from_irqhandler(priv->rings[i].irq, ring);
+ if (priv->rings[i].data.in) {
+ for (j = 0;
+ j < (1 << priv->rings[i].intf->ring_order);
+@@ -465,6 +465,7 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
+ goto error;
+ }
+
++ xenbus_switch_state(dev, XenbusStateInitialised);
+ return 0;
+
+ error_xenbus:
+@@ -512,8 +513,10 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev,
+ break;
+
+ case XenbusStateInitWait:
+- if (!xen_9pfs_front_init(dev))
+- xenbus_switch_state(dev, XenbusStateInitialised);
++ if (dev->state != XenbusStateInitialising)
++ break;
++
++ xen_9pfs_front_init(dev);
+ break;
+
+ case XenbusStateConnected:
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index a7ffce48512354..cc3253e9d41d03 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -953,6 +953,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ conn->tx_power = HCI_TX_POWER_INVALID;
+ conn->max_tx_power = HCI_TX_POWER_INVALID;
+ conn->sync_handle = HCI_SYNC_HANDLE_INVALID;
++ conn->sid = HCI_SID_INVALID;
+
+ set_bit(HCI_CONN_POWER_SAVE, &conn->flags);
+ conn->disc_timeout = HCI_DISCONN_TIMEOUT;
+@@ -2063,105 +2064,225 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
+
+ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ {
+- struct hci_cp_le_pa_create_sync *cp = data;
+-
+ bt_dev_dbg(hdev, "");
+
+ if (err)
+ bt_dev_err(hdev, "Unable to create PA: %d", err);
++}
+
+- kfree(cp);
++static bool hci_conn_check_create_pa_sync(struct hci_conn *conn)
++{
++ if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID)
++ return false;
++
++ return true;
+ }
+
+ static int create_pa_sync(struct hci_dev *hdev, void *data)
+ {
+- struct hci_cp_le_pa_create_sync *cp = data;
+- int err;
++ struct hci_cp_le_pa_create_sync *cp = NULL;
++ struct hci_conn *conn;
++ int err = 0;
+
+- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
+- sizeof(*cp), cp, HCI_CMD_TIMEOUT);
+- if (err) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- return err;
++ hci_dev_lock(hdev);
++
++ rcu_read_lock();
++
++ /* The spec allows only one pending LE Periodic Advertising Create
++ * Sync command at a time. If the command is pending now, don't do
++ * anything. We check for pending connections after each PA Sync
++ * Established event.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2493:
++ *
++ * If the Host issues this command when another HCI_LE_Periodic_
++ * Advertising_Create_Sync command is pending, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags))
++ goto unlock;
++ }
++
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (hci_conn_check_create_pa_sync(conn)) {
++ struct bt_iso_qos *qos = &conn->iso_qos;
++
++ cp = kzalloc(sizeof(*cp), GFP_KERNEL);
++ if (!cp) {
++ err = -ENOMEM;
++ goto unlock;
++ }
++
++ cp->options = qos->bcast.options;
++ cp->sid = conn->sid;
++ cp->addr_type = conn->dst_type;
++ bacpy(&cp->addr, &conn->dst);
++ cp->skip = cpu_to_le16(qos->bcast.skip);
++ cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
++ cp->sync_cte_type = qos->bcast.sync_cte_type;
++
++ break;
++ }
+ }
+
+- return hci_update_passive_scan_sync(hdev);
++unlock:
++ rcu_read_unlock();
++
++ hci_dev_unlock(hdev);
++
++ if (cp) {
++ hci_dev_set_flag(hdev, HCI_PA_SYNC);
++ set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
++ sizeof(*cp), cp, HCI_CMD_TIMEOUT);
++ if (!err)
++ err = hci_update_passive_scan_sync(hdev);
++
++ kfree(cp);
++
++ if (err) {
++ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
++ clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++ }
++ }
++
++ return err;
++}
++
++int hci_pa_create_sync_pending(struct hci_dev *hdev)
++{
++ /* Queue start pa_create_sync and scan */
++ return hci_cmd_sync_queue(hdev, create_pa_sync,
++ NULL, create_pa_complete);
+ }
+
+ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ __u8 dst_type, __u8 sid,
+ struct bt_iso_qos *qos)
+ {
+- struct hci_cp_le_pa_create_sync *cp;
+ struct hci_conn *conn;
+- int err;
+-
+- if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
+- return ERR_PTR(-EBUSY);
+
+ conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE);
+ if (IS_ERR(conn))
+ return conn;
+
+ conn->iso_qos = *qos;
++ conn->dst_type = dst_type;
++ conn->sid = sid;
+ conn->state = BT_LISTEN;
+
+ hci_conn_hold(conn);
+
+- cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+- if (!cp) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- hci_conn_drop(conn);
+- return ERR_PTR(-ENOMEM);
++ hci_pa_create_sync_pending(hdev);
++
++ return conn;
++}
++
++static bool hci_conn_check_create_big_sync(struct hci_conn *conn)
++{
++ if (!conn->num_bis)
++ return false;
++
++ return true;
++}
++
++static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err)
++{
++ bt_dev_dbg(hdev, "");
++
++ if (err)
++ bt_dev_err(hdev, "Unable to create BIG sync: %d", err);
++}
++
++static int big_create_sync(struct hci_dev *hdev, void *data)
++{
++ DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
++ struct hci_conn *conn;
++
++ rcu_read_lock();
++
++ pdu->num_bis = 0;
++
++ /* The spec allows only one pending LE BIG Create Sync command at
++ * a time. If the command is pending now, don't do anything. We
++ * check for pending connections after each BIG Sync Established
++ * event.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2586:
++ *
++ * If the Host sends this command when the Controller is in the
++ * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
++ * Established event has not been generated, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags))
++ goto unlock;
+ }
+
+- cp->options = qos->bcast.options;
+- cp->sid = sid;
+- cp->addr_type = dst_type;
+- bacpy(&cp->addr, dst);
+- cp->skip = cpu_to_le16(qos->bcast.skip);
+- cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
+- cp->sync_cte_type = qos->bcast.sync_cte_type;
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (hci_conn_check_create_big_sync(conn)) {
++ struct bt_iso_qos *qos = &conn->iso_qos;
+
+- /* Queue start pa_create_sync and scan */
+- err = hci_cmd_sync_queue(hdev, create_pa_sync, cp, create_pa_complete);
+- if (err < 0) {
+- hci_conn_drop(conn);
+- kfree(cp);
+- return ERR_PTR(err);
++ set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ pdu->handle = qos->bcast.big;
++ pdu->sync_handle = cpu_to_le16(conn->sync_handle);
++ pdu->encryption = qos->bcast.encryption;
++ memcpy(pdu->bcode, qos->bcast.bcode,
++ sizeof(pdu->bcode));
++ pdu->mse = qos->bcast.mse;
++ pdu->timeout = cpu_to_le16(qos->bcast.timeout);
++ pdu->num_bis = conn->num_bis;
++ memcpy(pdu->bis, conn->bis, conn->num_bis);
++
++ break;
++ }
+ }
+
+- return conn;
++unlock:
++ rcu_read_unlock();
++
++ if (!pdu->num_bis)
++ return 0;
++
++ return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
++ struct_size(pdu, bis, pdu->num_bis), pdu);
++}
++
++int hci_le_big_create_sync_pending(struct hci_dev *hdev)
++{
++ /* Queue big_create_sync */
++ return hci_cmd_sync_queue_once(hdev, big_create_sync,
++ NULL, big_create_sync_complete);
+ }
+
+ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+ struct bt_iso_qos *qos,
+ __u16 sync_handle, __u8 num_bis, __u8 bis[])
+ {
+- DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
+ int err;
+
+- if (num_bis < 0x01 || num_bis > pdu->num_bis)
++ if (num_bis < 0x01 || num_bis > ISO_MAX_NUM_BIS)
+ return -EINVAL;
+
+ err = qos_set_big(hdev, qos);
+ if (err)
+ return err;
+
+- if (hcon)
+- hcon->iso_qos.bcast.big = qos->bcast.big;
++ if (hcon) {
++ /* Update hcon QoS */
++ hcon->iso_qos = *qos;
+
+- pdu->handle = qos->bcast.big;
+- pdu->sync_handle = cpu_to_le16(sync_handle);
+- pdu->encryption = qos->bcast.encryption;
+- memcpy(pdu->bcode, qos->bcast.bcode, sizeof(pdu->bcode));
+- pdu->mse = qos->bcast.mse;
+- pdu->timeout = cpu_to_le16(qos->bcast.timeout);
+- pdu->num_bis = num_bis;
+- memcpy(pdu->bis, bis, num_bis);
++ hcon->num_bis = num_bis;
++ memcpy(hcon->bis, bis, num_bis);
++ }
+
+- return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
+- struct_size(pdu, bis, num_bis), pdu);
++ return hci_le_big_create_sync_pending(hdev);
+ }
+
+ static void create_big_complete(struct hci_dev *hdev, void *data, int err)
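
The hci_conn.c rework serializes PA/BIG sync creation: callers only record what they want on the connection and kick a queued worker, and the single in-flight HCI command is re-armed from the Sync Established event handler. In outline, using the functions added above:

/* requester side (hci_pa_create_sync()): */
conn->sid = sid;			/* mark conn as wanting PA sync */
hci_pa_create_sync_pending(hdev);	/* queue create_pa_sync() once */

/* completion side (PA Sync Established event handler): */
clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
hci_pa_create_sync_pending(hdev);	/* service the next pending conn */
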
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 561c8cb87473ef..aa39929fa379a4 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6345,7 +6345,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ struct hci_ev_le_pa_sync_established *ev = data;
+ int mask = hdev->link_mode;
+ __u8 flags = 0;
+- struct hci_conn *pa_sync;
++ struct hci_conn *pa_sync, *conn;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+
+@@ -6353,6 +6353,20 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+
++ conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr,
++ ev->bdaddr_type);
++ if (!conn) {
++ bt_dev_err(hdev,
++ "Unable to find connection for dst %pMR sid 0x%2.2x",
++ &ev->bdaddr, ev->sid);
++ goto unlock;
++ }
++
++ clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ conn->sync_handle = le16_to_cpu(ev->handle);
++ conn->sid = HCI_SID_INVALID;
++
+ mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ISO_LINK, &flags);
+ if (!(mask & HCI_LM_ACCEPT)) {
+ hci_le_pa_term_sync(hdev, ev->handle);
+@@ -6379,6 +6393,9 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ }
+
+ unlock:
++ /* Handle any other pending PA sync command */
++ hci_pa_create_sync_pending(hdev);
++
+ hci_dev_unlock(hdev);
+ }
+
+@@ -6896,7 +6913,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ struct sk_buff *skb)
+ {
+ struct hci_evt_le_big_sync_estabilished *ev = data;
+- struct hci_conn *bis;
++ struct hci_conn *bis, *conn;
+ int i;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+@@ -6907,6 +6924,20 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_lock(hdev);
+
++ conn = hci_conn_hash_lookup_big_sync_pend(hdev, ev->handle,
++ ev->num_bis);
++ if (!conn) {
++ bt_dev_err(hdev,
++ "Unable to find connection for big 0x%2.2x",
++ ev->handle);
++ goto unlock;
++ }
++
++ clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ conn->num_bis = 0;
++ memset(conn->bis, 0, sizeof(conn->bis));
++
+ for (i = 0; i < ev->num_bis; i++) {
+ u16 handle = le16_to_cpu(ev->bis[i]);
+ __le32 interval;
+@@ -6956,6 +6987,10 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ hci_connect_cfm(bis, ev->status);
+ }
+
++unlock:
++ /* Handle any other pending BIG sync command */
++ hci_le_big_create_sync_pending(hdev);
++
+ hci_dev_unlock(hdev);
+ }
+
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 367e32fe30eb84..4b54dbbf0729a3 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -21,16 +21,6 @@ static const struct device_type bt_link = {
+ .release = bt_link_release,
+ };
+
+-/*
+- * The rfcomm tty device will possibly retain even when conn
+- * is down, and sysfs doesn't support move zombie device,
+- * so we should move the device before conn device is destroyed.
+- */
+-static int __match_tty(struct device *dev, void *data)
+-{
+- return !strncmp(dev_name(dev), "rfcomm", 6);
+-}
+-
+ void hci_conn_init_sysfs(struct hci_conn *conn)
+ {
+ struct hci_dev *hdev = conn->hdev;
+@@ -73,10 +63,13 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
+ return;
+ }
+
++ /* If there are devices using the connection as parent, reset it to NULL
++ * before unregistering the device.
++ */
+ while (1) {
+ struct device *dev;
+
+- dev = device_find_child(&conn->dev, NULL, __match_tty);
++ dev = device_find_any_child(&conn->dev);
+ if (!dev)
+ break;
+ device_move(dev, NULL, DPM_ORDER_DEV_LAST);
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 7a83e400ac77a0..5e2d9758bd3c1c 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -35,6 +35,7 @@ struct iso_conn {
+ struct sk_buff *rx_skb;
+ __u32 rx_len;
+ __u16 tx_sn;
++ struct kref ref;
+ };
+
+ #define iso_conn_lock(c) spin_lock(&(c)->lock)
+@@ -93,6 +94,49 @@ static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
+ #define ISO_CONN_TIMEOUT (HZ * 40)
+ #define ISO_DISCONN_TIMEOUT (HZ * 2)
+
++static void iso_conn_free(struct kref *ref)
++{
++ struct iso_conn *conn = container_of(ref, struct iso_conn, ref);
++
++ BT_DBG("conn %p", conn);
++
++ if (conn->sk)
++ iso_pi(conn->sk)->conn = NULL;
++
++ if (conn->hcon) {
++ conn->hcon->iso_data = NULL;
++ hci_conn_drop(conn->hcon);
++ }
++
++ /* Ensure no more work items will run since hci_conn has been dropped */
++ disable_delayed_work_sync(&conn->timeout_work);
++
++ kfree(conn);
++}
++
++static void iso_conn_put(struct iso_conn *conn)
++{
++ if (!conn)
++ return;
++
++ BT_DBG("conn %p refcnt %d", conn, kref_read(&conn->ref));
++
++ kref_put(&conn->ref, iso_conn_free);
++}
++
++static struct iso_conn *iso_conn_hold_unless_zero(struct iso_conn *conn)
++{
++ if (!conn)
++ return NULL;
++
++ BT_DBG("conn %p refcnt %u", conn, kref_read(&conn->ref));
++
++ if (!kref_get_unless_zero(&conn->ref))
++ return NULL;
++
++ return conn;
++}
++
+ static struct sock *iso_sock_hold(struct iso_conn *conn)
+ {
+ if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk))
+@@ -109,9 +153,14 @@ static void iso_sock_timeout(struct work_struct *work)
+ timeout_work.work);
+ struct sock *sk;
+
++ conn = iso_conn_hold_unless_zero(conn);
++ if (!conn)
++ return;
++
+ iso_conn_lock(conn);
+ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
++ iso_conn_put(conn);
+
+ if (!sk)
+ return;
+@@ -149,9 +198,14 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ {
+ struct iso_conn *conn = hcon->iso_data;
+
++ conn = iso_conn_hold_unless_zero(conn);
+ if (conn) {
+- if (!conn->hcon)
++ if (!conn->hcon) {
++ iso_conn_lock(conn);
+ conn->hcon = hcon;
++ iso_conn_unlock(conn);
++ }
++ iso_conn_put(conn);
+ return conn;
+ }
+
+@@ -159,6 +213,7 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ if (!conn)
+ return NULL;
+
++ kref_init(&conn->ref);
+ spin_lock_init(&conn->lock);
+ INIT_DELAYED_WORK(&conn->timeout_work, iso_sock_timeout);
+
+@@ -178,17 +233,15 @@ static void iso_chan_del(struct sock *sk, int err)
+ struct sock *parent;
+
+ conn = iso_pi(sk)->conn;
++ iso_pi(sk)->conn = NULL;
+
+ BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
+
+ if (conn) {
+ iso_conn_lock(conn);
+ conn->sk = NULL;
+- iso_pi(sk)->conn = NULL;
+ iso_conn_unlock(conn);
+-
+- if (conn->hcon)
+- hci_conn_drop(conn->hcon);
++ iso_conn_put(conn);
+ }
+
+ sk->sk_state = BT_CLOSED;
+@@ -210,6 +263,7 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ struct iso_conn *conn = hcon->iso_data;
+ struct sock *sk;
+
++ conn = iso_conn_hold_unless_zero(conn);
+ if (!conn)
+ return;
+
+@@ -219,20 +273,18 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ iso_conn_lock(conn);
+ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
++ iso_conn_put(conn);
+
+- if (sk) {
+- lock_sock(sk);
+- iso_sock_clear_timer(sk);
+- iso_chan_del(sk, err);
+- release_sock(sk);
+- sock_put(sk);
++ if (!sk) {
++ iso_conn_put(conn);
++ return;
+ }
+
+- /* Ensure no more work items will run before freeing conn. */
+- cancel_delayed_work_sync(&conn->timeout_work);
+-
+- hcon->iso_data = NULL;
+- kfree(conn);
++ lock_sock(sk);
++ iso_sock_clear_timer(sk);
++ iso_chan_del(sk, err);
++ release_sock(sk);
++ sock_put(sk);
+ }
+
+ static int __iso_chan_add(struct iso_conn *conn, struct sock *sk,
+@@ -652,6 +704,8 @@ static void iso_sock_destruct(struct sock *sk)
+ {
+ BT_DBG("sk %p", sk);
+
++ iso_conn_put(iso_pi(sk)->conn);
++
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
+ }
+@@ -711,6 +765,7 @@ static void iso_sock_disconn(struct sock *sk)
+ */
+ if (bis_sk) {
+ hcon->state = BT_OPEN;
++ hcon->iso_data = NULL;
+ iso_pi(sk)->conn->hcon = NULL;
+ iso_sock_clear_timer(sk);
+ iso_chan_del(sk, bt_to_errno(hcon->abort_reason));
+@@ -720,7 +775,6 @@ static void iso_sock_disconn(struct sock *sk)
+ }
+
+ sk->sk_state = BT_DISCONN;
+- iso_sock_set_timer(sk, ISO_DISCONN_TIMEOUT);
+ iso_conn_lock(iso_pi(sk)->conn);
+ hci_conn_drop(iso_pi(sk)->conn->hcon);
+ iso_pi(sk)->conn->hcon = NULL;
+@@ -1338,6 +1392,13 @@ static void iso_conn_big_sync(struct sock *sk)
+ if (!hdev)
+ return;
+
++ /* hci_le_big_create_sync requires hdev lock to be held, since
++ * it enqueues the HCI LE BIG Create Sync command via
++ * hci_cmd_sync_queue_once, which checks hdev flags that might
++ * change.
++ */
++ hci_dev_lock(hdev);
++
+ if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+ err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+ &iso_pi(sk)->qos,
+@@ -1348,6 +1409,8 @@ static void iso_conn_big_sync(struct sock *sk)
+ bt_dev_err(hdev, "hci_le_big_create_sync: %d",
+ err);
+ }
++
++ hci_dev_unlock(hdev);
+ }
+
+ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+@@ -1942,6 +2005,7 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (sk) {
+ int err;
++ struct hci_conn *hcon = iso_pi(sk)->conn->hcon;
+
+ iso_pi(sk)->qos.bcast.encryption = ev2->encryption;
+
+@@ -1950,7 +2014,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) &&
+ !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+- err = hci_le_big_create_sync(hdev, NULL,
++ err = hci_le_big_create_sync(hdev,
++ hcon,
+ &iso_pi(sk)->qos,
+ iso_pi(sk)->sync_handle,
+ iso_pi(sk)->bc_num_bis,
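
The iso.c changes hang the iso_conn lifetime off a kref so the timeout worker and the teardown paths can race safely. The generic shape of the hold-unless-zero pattern, reduced to a self-contained sketch (struct obj stands in for iso_conn):

#include <linux/kref.h>
#include <linux/slab.h>

struct obj {
	struct kref ref;
	/* payload ... */
};

static void obj_free(struct kref *ref)
{
	kfree(container_of(ref, struct obj, ref));
}

/* Returns NULL if the last reference is already gone, which is what
 * lets iso_sock_timeout() bail out instead of touching a conn that
 * is concurrently being freed.
 */
static struct obj *obj_hold_unless_zero(struct obj *o)
{
	if (o && kref_get_unless_zero(&o->ref))
		return o;
	return NULL;
}

static void obj_put(struct obj *o)
{
	if (o)
		kref_put(&o->ref, obj_free);
}
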
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 4157d9f23f46ef..ac5684b38dbeb9 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1317,7 +1317,8 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ struct mgmt_mode *cp;
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+ return;
+
+ cp = cmd->param;
+@@ -1350,7 +1351,13 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_powered_sync(struct hci_dev *hdev, void *data)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+- struct mgmt_mode *cp = cmd->param;
++ struct mgmt_mode *cp;
++
++ /* Make sure cmd still outstanding. */
++ if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++ return -ECANCELED;
++
++ cp = cmd->param;
+
+ BT_DBG("%s", hdev->name);
+
+@@ -1510,7 +1517,8 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ bt_dev_dbg(hdev, "err %d", err);
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
+ return;
+
+ hci_dev_lock(hdev);
+@@ -1684,7 +1692,8 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ bt_dev_dbg(hdev, "err %d", err);
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
+ return;
+
+ hci_dev_lock(hdev);
+@@ -1916,7 +1925,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ bool changed;
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
++ if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
+ return;
+
+ if (err) {
+@@ -3782,7 +3791,8 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+
+ bt_dev_dbg(hdev, "err %d", err);
+
+- if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
+ return;
+
+ if (status) {
+@@ -3957,7 +3967,8 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ struct sk_buff *skb = cmd->skb;
+ u8 status = mgmt_status(err);
+
+- if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+ return;
+
+ if (!status) {
+@@ -5848,13 +5859,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+
++ bt_dev_dbg(hdev, "err %d", err);
++
++ if (err == -ECANCELED)
++ return;
++
+ if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
+ cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
+ cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
+ return;
+
+- bt_dev_dbg(hdev, "err %d", err);
+-
+ mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+ cmd->param, 1);
+ mgmt_pending_remove(cmd);
+@@ -6087,7 +6101,8 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+
+- if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
+ return;
+
+ bt_dev_dbg(hdev, "err %d", err);
+@@ -8078,7 +8093,8 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ u8 status = mgmt_status(err);
+ u16 eir_len;
+
+- if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+ return;
+
+ if (!status) {
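
All the mgmt.c hunks above add the same guard: a command-completion callback returns early on err == -ECANCELED (or when the command is no longer pending) before dereferencing its cookie, since a cancelled command may already have been torn down; set_powered_sync gains the mirror-image check on the submission side. A compact userspace model of the guard, with pending_find() reduced to a single global pointer (illustrative, not the kernel API):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct pending_cmd { int opcode; };

    static struct pending_cmd *pending;      /* pending_find() analogue */

    static void set_powered_complete(struct pending_cmd *cmd, int err)
    {
        /* bail before touching cmd: on -ECANCELED it may be freed */
        if (err == -ECANCELED || cmd != pending)
            return;

        printf("complete opcode %d err %d\n", cmd->opcode, err);
    }

    int main(void)
    {
        struct pending_cmd *cmd = malloc(sizeof(*cmd));

        if (!cmd)
            return 1;
        cmd->opcode = 5;
        pending = cmd;

        /* cancellation frees the command and clears the pending slot,
         * then the completion callback still runs with the stale cmd */
        free(cmd);
        pending = NULL;
        set_powered_complete(cmd, -ECANCELED);   /* guard prevents a UAF */
        return 0;
    }
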
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index f48250e3f2e103..8af1bf518321fd 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -729,7 +729,8 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
+ struct sock *l2cap_sk;
+ struct l2cap_conn *conn;
+ struct rfcomm_conninfo cinfo;
+- int len, err = 0;
++ int err = 0;
++ size_t len;
+ u32 opt;
+
+ BT_DBG("sk %p", sk);
+@@ -783,7 +784,7 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
+ cinfo.hci_handle = conn->hcon->handle;
+ memcpy(cinfo.dev_class, conn->hcon->dev_class, 3);
+
+- len = min_t(unsigned int, len, sizeof(cinfo));
++ len = min(len, sizeof(cinfo));
+ if (copy_to_user(optval, (char *) &cinfo, len))
+ err = -EFAULT;
+
+@@ -802,7 +803,8 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
+ {
+ struct sock *sk = sock->sk;
+ struct bt_security sec;
+- int len, err = 0;
++ int err = 0;
++ size_t len;
+
+ BT_DBG("sk %p", sk);
+
+@@ -827,7 +829,7 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
+ sec.level = rfcomm_pi(sk)->sec_level;
+ sec.key_size = 0;
+
+- len = min_t(unsigned int, len, sizeof(sec));
++ len = min(len, sizeof(sec));
+ if (copy_to_user(optval, (char *) &sec, len))
+ err = -EFAULT;
+
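
The rfcomm getsockopt hunks change len from int to size_t and drop min_t(unsigned int, ...) in favour of plain min(). With a signed len, every comparison site had to remember the cast, and the cast silently maps a negative length to a huge unsigned value; keeping the length unsigned end to end removes that trap. A standalone demonstration (min() here is a local macro, not the kernel's):

    #include <stdio.h>

    #define min(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        char cinfo[8];
        int len = -1;                    /* bogus length from a caller */
        unsigned int cast;
        size_t slen;

        (void)cinfo;

        /* min_t(unsigned int, ...) style: the cast silently turns -1
         * into UINT_MAX before the comparison even happens */
        cast = (unsigned int)len;
        printf("cast value: %u (was %d)\n", cast, len);
        printf("copied: %zu bytes\n", (size_t)min(cast, sizeof(cinfo)));

        /* size_t end to end: a negative length cannot reach min() */
        slen = sizeof(cinfo);
        printf("copied: %zu bytes\n", (size_t)min(slen, sizeof(cinfo)));
        return 0;
    }
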
+diff --git a/net/core/filter.c b/net/core/filter.c
+index fe5ac8da5022f8..bab7fa44075b48 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2620,18 +2620,16 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
+
+ static void sk_msg_reset_curr(struct sk_msg *msg)
+ {
+- u32 i = msg->sg.start;
+- u32 len = 0;
+-
+- do {
+- len += sk_msg_elem(msg, i)->length;
+- sk_msg_iter_var_next(i);
+- if (len >= msg->sg.size)
+- break;
+- } while (i != msg->sg.end);
++ if (!msg->sg.size) {
++ msg->sg.curr = msg->sg.start;
++ msg->sg.copybreak = 0;
++ } else {
++ u32 i = msg->sg.end;
+
+- msg->sg.curr = i;
+- msg->sg.copybreak = 0;
++ sk_msg_iter_var_prev(i);
++ msg->sg.curr = i;
++ msg->sg.copybreak = msg->sg.data[i].length;
++ }
+ }
+
+ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+@@ -2794,7 +2792,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ sk_msg_iter_var_next(i);
+ } while (i != msg->sg.end);
+
+- if (start >= offset + l)
++ if (start > offset + l)
+ return -EINVAL;
+
+ space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
+@@ -2819,6 +2817,8 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+
+ raw = page_address(page);
+
++ if (i == msg->sg.end)
++ sk_msg_iter_var_prev(i);
+ psge = sk_msg_elem(msg, i);
+ front = start - offset;
+ back = psge->length - front;
+@@ -2835,7 +2835,13 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ }
+
+ put_page(sg_page(psge));
+- } else if (start - offset) {
++ new = i;
++ goto place_new;
++ }
++
++ if (start - offset) {
++ if (i == msg->sg.end)
++ sk_msg_iter_var_prev(i);
+ psge = sk_msg_elem(msg, i);
+ rsge = sk_msg_elem_cpy(msg, i);
+
+@@ -2846,39 +2852,44 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ sk_msg_iter_var_next(i);
+ sg_unmark_end(psge);
+ sg_unmark_end(&rsge);
+- sk_msg_iter_next(msg, end);
+ }
+
+ /* Slot(s) to place newly allocated data */
++ sk_msg_iter_next(msg, end);
+ new = i;
++ sk_msg_iter_var_next(i);
++
++ if (i == msg->sg.end) {
++ if (!rsge.length)
++ goto place_new;
++ sk_msg_iter_next(msg, end);
++ goto place_new;
++ }
+
+ /* Shift one or two slots as needed */
+- if (!copy) {
+- sge = sk_msg_elem_cpy(msg, i);
++ sge = sk_msg_elem_cpy(msg, new);
++ sg_unmark_end(&sge);
+
++ nsge = sk_msg_elem_cpy(msg, i);
++ if (rsge.length) {
+ sk_msg_iter_var_next(i);
+- sg_unmark_end(&sge);
++ nnsge = sk_msg_elem_cpy(msg, i);
+ sk_msg_iter_next(msg, end);
++ }
+
+- nsge = sk_msg_elem_cpy(msg, i);
++ while (i != msg->sg.end) {
++ msg->sg.data[i] = sge;
++ sge = nsge;
++ sk_msg_iter_var_next(i);
+ if (rsge.length) {
+- sk_msg_iter_var_next(i);
++ nsge = nnsge;
+ nnsge = sk_msg_elem_cpy(msg, i);
+- }
+-
+- while (i != msg->sg.end) {
+- msg->sg.data[i] = sge;
+- sge = nsge;
+- sk_msg_iter_var_next(i);
+- if (rsge.length) {
+- nsge = nnsge;
+- nnsge = sk_msg_elem_cpy(msg, i);
+- } else {
+- nsge = sk_msg_elem_cpy(msg, i);
+- }
++ } else {
++ nsge = sk_msg_elem_cpy(msg, i);
+ }
+ }
+
++place_new:
+ /* Place newly allocated data buffer */
+ sk_mem_charge(msg->sk, len);
+ msg->sg.size += len;
+@@ -2907,8 +2918,10 @@ static const struct bpf_func_proto bpf_msg_push_data_proto = {
+
+ static void sk_msg_shift_left(struct sk_msg *msg, int i)
+ {
++ struct scatterlist *sge = sk_msg_elem(msg, i);
+ int prev;
+
++ put_page(sg_page(sge));
+ do {
+ prev = i;
+ sk_msg_iter_var_next(i);
+@@ -2945,6 +2958,9 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ if (unlikely(flags))
+ return -EINVAL;
+
++ if (unlikely(len == 0))
++ return 0;
++
+ /* First find the starting scatterlist element */
+ i = msg->sg.start;
+ do {
+@@ -2957,7 +2973,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ } while (i != msg->sg.end);
+
+ /* Bounds checks: start and pop must be inside message */
+- if (start >= offset + l || last >= msg->sg.size)
++ if (start >= offset + l || last > msg->sg.size)
+ return -EINVAL;
+
+ space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
+@@ -2986,12 +3002,12 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ */
+ if (start != offset) {
+ struct scatterlist *nsge, *sge = sk_msg_elem(msg, i);
+- int a = start;
++ int a = start - offset;
+ int b = sge->length - pop - a;
+
+ sk_msg_iter_var_next(i);
+
+- if (pop < sge->length - a) {
++ if (b > 0) {
+ if (space) {
+ sge->length = a;
+ sk_msg_shift_right(msg, i);
+@@ -3010,7 +3026,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ if (unlikely(!page))
+ return -ENOMEM;
+
+- sge->length = a;
+ orig = sg_page(sge);
+ from = sg_virt(sge);
+ to = page_address(page);
+@@ -3020,7 +3035,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ put_page(orig);
+ }
+ pop = 0;
+- } else if (pop >= sge->length - a) {
++ } else {
+ pop -= (sge->length - a);
+ sge->length = a;
+ }
+@@ -3054,7 +3069,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ pop -= sge->length;
+ sk_msg_shift_left(msg, i);
+ }
+- sk_msg_iter_var_next(i);
+ }
+
+ sk_mem_uncharge(msg->sk, len - pop);
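
In the filter.c hunk above, sk_msg_reset_curr() no longer re-walks the ring from sg.start accumulating lengths; for a non-empty message it points the write cursor at the element just before sg.end and sets copybreak to that element's full length. A simplified model of the computation, with the scatterlist ring reduced to an array of lengths (the struct here is illustrative, not the kernel's sk_msg):

    #include <stdio.h>

    #define MAX_MSG_FRAGS 16
    #define RING (MAX_MSG_FRAGS + 1)

    struct msg {
        unsigned int start, end;           /* ring indices               */
        unsigned int size;                 /* total bytes in the message */
        unsigned int len[RING];            /* per-element lengths        */
        unsigned int curr, copybreak;      /* write cursor               */
    };

    static unsigned int prev_idx(unsigned int i)
    {
        return i ? i - 1 : RING - 1;       /* sk_msg_iter_var_prev()     */
    }

    static void msg_reset_curr(struct msg *m)
    {
        if (!m->size) {                    /* empty: cursor to the start */
            m->curr = m->start;
            m->copybreak = 0;
        } else {                           /* else: end of last element  */
            unsigned int i = prev_idx(m->end);

            m->curr = i;
            m->copybreak = m->len[i];
        }
    }

    int main(void)
    {
        struct msg m = { .start = 0, .end = 3, .size = 30,
                         .len = { 10, 10, 10 } };

        msg_reset_curr(&m);
        printf("curr=%u copybreak=%u\n", m.curr, m.copybreak); /* 2, 10 */
        return 0;
    }
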
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index a17d7eaeb00192..f8f8c1e50ae4eb 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -214,6 +214,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ return -ENOMEM;
+
+ rtnl_lock();
++ rcu_read_lock();
+
+ napi = napi_by_id(napi_id);
+ if (napi) {
+@@ -223,6 +224,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ err = -ENOENT;
+ }
+
++ rcu_read_unlock();
+ rtnl_unlock();
+
+ if (err)
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index bbf40b99971382..846fd672f0e529 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -1117,9 +1117,9 @@ static void sk_psock_strp_data_ready(struct sock *sk)
+ if (tls_sw_has_ctx_rx(sk)) {
+ psock->saved_data_ready(sk);
+ } else {
+- write_lock_bh(&sk->sk_callback_lock);
++ read_lock_bh(&sk->sk_callback_lock);
+ strp_data_ready(&psock->strp);
+- write_unlock_bh(&sk->sk_callback_lock);
++ read_unlock_bh(&sk->sk_callback_lock);
+ }
+ }
+ rcu_read_unlock();
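
The skmsg.c hunk demotes the sk_callback_lock acquisition around strp_data_ready() from writer to reader: this path only reads the strparser state through the lock, so a read acquisition is sufficient and no longer excludes other readers of the callback state. Userspace analogue with a pthread rwlock (names illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t callback_lock = PTHREAD_RWLOCK_INITIALIZER;
    static void (*data_ready_cb)(void);

    static void on_data(void) { puts("strp_data_ready()"); }

    static void sock_data_ready(void)
    {
        pthread_rwlock_rdlock(&callback_lock);  /* read_lock_bh() analogue */
        if (data_ready_cb)
            data_ready_cb();                    /* read-only use           */
        pthread_rwlock_unlock(&callback_lock);
    }

    static void install_cb(void (*cb)(void))
    {
        pthread_rwlock_wrlock(&callback_lock);  /* writers stay exclusive  */
        data_ready_cb = cb;
        pthread_rwlock_unlock(&callback_lock);
    }

    int main(void)
    {
        install_cb(on_data);
        sock_data_ready();
        return 0;
    }
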
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 049e22bdaafb74..1f40259597bc06 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -266,6 +266,8 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+ skb->dev = master->dev;
+ skb->priority = TC_PRIO_CONTROL;
+
++ skb_reset_network_header(skb);
++ skb_reset_transport_header(skb);
+ if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
+ hsr->sup_multicast_addr,
+ skb->dev->dev_addr, skb->len) <= 0)
+@@ -273,8 +275,6 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+
+ skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
+- skb_reset_network_header(skb);
+- skb_reset_transport_header(skb);
+
+ return skb;
+ out:
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index cd7989b514eaa7..f5592670420b5b 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1188,7 +1188,7 @@ static void reqsk_timer_handler(struct timer_list *t)
+
+ drop:
+ __inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
+- reqsk_put(req);
++ reqsk_put(oreq);
+ }
+
+ static bool reqsk_queue_hash_req(struct request_sock *req,
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 6c750bd13dd8dd..035d6f0ff01eef 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -136,7 +136,7 @@ static struct mr_table *ipmr_mr_table_iter(struct net *net,
+ return ret;
+ }
+
+-static struct mr_table *ipmr_get_table(struct net *net, u32 id)
++static struct mr_table *__ipmr_get_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+@@ -147,6 +147,16 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+ return NULL;
+ }
+
++static struct mr_table *ipmr_get_table(struct net *net, u32 id)
++{
++ struct mr_table *mrt;
++
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, id);
++ rcu_read_unlock();
++ return mrt;
++}
++
+ static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
+ struct mr_table **mrt)
+ {
+@@ -188,7 +198,7 @@ static int ipmr_rule_action(struct fib_rule *rule, struct flowi *flp,
+
+ arg->table = fib_rule_get_table(rule, arg);
+
+- mrt = ipmr_get_table(rule->fr_net, arg->table);
++ mrt = __ipmr_get_table(rule->fr_net, arg->table);
+ if (!mrt)
+ return -EAGAIN;
+ res->mrt = mrt;
+@@ -314,6 +324,8 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+ return net->ipv4.mrt;
+ }
+
++#define __ipmr_get_table ipmr_get_table
++
+ static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
+ struct mr_table **mrt)
+ {
+@@ -402,7 +414,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
+ if (id != RT_TABLE_DEFAULT && id >= 1000000000)
+ return ERR_PTR(-EINVAL);
+
+- mrt = ipmr_get_table(net, id);
++ mrt = __ipmr_get_table(net, id);
+ if (mrt)
+ return mrt;
+
+@@ -1373,7 +1385,7 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval,
+ goto out_unlock;
+ }
+
+- mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
++ mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+ if (!mrt) {
+ ret = -ENOENT;
+ goto out_unlock;
+@@ -2261,11 +2273,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
+ struct mr_table *mrt;
+ int err;
+
+- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return -ENOENT;
++ }
+
+- rcu_read_lock();
+ cache = ipmr_cache_find(mrt, saddr, daddr);
+ if (!cache && skb->dev) {
+ int vif = ipmr_find_vif(mrt, skb->dev);
+@@ -2550,7 +2564,7 @@ static int ipmr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ grp = tb[RTA_DST] ? nla_get_in_addr(tb[RTA_DST]) : 0;
+ tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
+
+- mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
++ mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
+ if (!mrt) {
+ err = -ENOENT;
+ goto errout_free;
+@@ -2604,7 +2618,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ if (filter.table_id) {
+ struct mr_table *mrt;
+
+- mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
++ mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id);
+ if (!mrt) {
+ if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
+ return skb->len;
+@@ -2712,7 +2726,7 @@ static int rtm_to_ipmr_mfcc(struct net *net, struct nlmsghdr *nlh,
+ break;
+ }
+ }
+- mrt = ipmr_get_table(net, tblid);
++ mrt = __ipmr_get_table(net, tblid);
+ if (!mrt) {
+ ret = -ENOENT;
+ goto out;
+@@ -2920,13 +2934,15 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
+ struct net *net = seq_file_net(seq);
+ struct mr_table *mrt;
+
+- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return ERR_PTR(-ENOENT);
++ }
+
+ iter->mrt = mrt;
+
+- rcu_read_lock();
+ return mr_vif_seq_start(seq, pos);
+ }
+
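
The ipmr.c changes split the lookup in two: __ipmr_get_table() requires the caller to already be inside an RCU read section (several call sites above are converted to take rcu_read_lock() themselves), while the ipmr_get_table() wrapper brackets the lookup with rcu_read_lock()/rcu_read_unlock() for callers that only need the pointer, which is presumably sound here because these tables are not freed until the namespace is dismantled. A minimal userspace model of the layering, with a rwlock standing in for RCU and a fixed array for the table list:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t rcu = PTHREAD_RWLOCK_INITIALIZER;

    struct mr_table { unsigned int id; };
    static struct mr_table tables[] = { { 1 }, { 254 } };

    /* caller must already hold the read lock */
    static struct mr_table *__get_table(unsigned int id)
    {
        for (unsigned int i = 0; i < 2; i++)
            if (tables[i].id == id)
                return &tables[i];
        return NULL;
    }

    /* self-locking convenience wrapper */
    static struct mr_table *get_table(unsigned int id)
    {
        struct mr_table *t;

        pthread_rwlock_rdlock(&rcu);   /* rcu_read_lock() analogue   */
        t = __get_table(id);
        pthread_rwlock_unlock(&rcu);   /* rcu_read_unlock() analogue */
        return t;
    }

    int main(void)
    {
        printf("table 254: %p\n", (void *)get_table(254));
        return 0;
    }
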
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index 271dc03fc6dbd9..f0af12a2f70bcd 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -310,7 +310,8 @@ int mr_table_dump(struct mr_table *mrt, struct sk_buff *skb,
+ if (filter->filter_set)
+ flags |= NLM_F_DUMP_FILTERED;
+
+- list_for_each_entry_rcu(mfc, &mrt->mfc_cache_list, list) {
++ list_for_each_entry_rcu(mfc, &mrt->mfc_cache_list, list,
++ lockdep_rtnl_is_held()) {
+ if (e < s_e)
+ goto next_entry;
+ if (filter->dev &&
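
The ipmr_base.c hunk passes a lockdep condition as the extra argument to list_for_each_entry_rcu(): the dump walk is legal either inside an RCU read section or with RTNL held, and the condition teaches the checker about the second case instead of producing a false splat. A userspace model of such an either-lock assertion (the two booleans stand in for rcu_read_lock_held() and lockdep_rtnl_is_held()):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool in_read_section;   /* rcu_read_lock_held() analogue   */
    static bool rtnl_held;         /* lockdep_rtnl_is_held() analogue */

    static void check_walk_allowed(void)
    {
        /* mirrors: RCU_LOCKDEP_WARN(!rcu_read_lock_held() && !cond) */
        assert(in_read_section || rtnl_held);
    }

    int main(void)
    {
        rtnl_held = true;          /* dump path: mutex, no RCU section */
        check_walk_allowed();
        puts("walk permitted under rtnl");
        return 0;
    }
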
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index fe6178715ba05f..915286c3615a27 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -221,11 +221,11 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
+ int flags,
+ int *addr_len)
+ {
+- struct tcp_sock *tcp = tcp_sk(sk);
+ int peek = flags & MSG_PEEK;
+- u32 seq = tcp->copied_seq;
+ struct sk_psock *psock;
++ struct tcp_sock *tcp;
+ int copied = 0;
++ u32 seq;
+
+ if (unlikely(flags & MSG_ERRQUEUE))
+ return inet_recv_error(sk, msg, len, addr_len);
+@@ -238,7 +238,8 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
+ return tcp_recvmsg(sk, msg, len, flags, addr_len);
+
+ lock_sock(sk);
+-
++ tcp = tcp_sk(sk);
++ seq = tcp->copied_seq;
+ /* We may have received data on the sk_receive_queue pre-accept and
+ * then we can not use read_skb in this context because we haven't
+ * assigned a sk_socket yet so have no link to the ops. The work-around
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f70d8757af1a42..acf190723b1eab 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2570,6 +2570,24 @@ static struct inet6_dev *addrconf_add_dev(struct net_device *dev)
+ return idev;
+ }
+
++static void delete_tempaddrs(struct inet6_dev *idev,
++ struct inet6_ifaddr *ifp)
++{
++ struct inet6_ifaddr *ift, *tmp;
++
++ write_lock_bh(&idev->lock);
++ list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) {
++ if (ift->ifpub != ifp)
++ continue;
++
++ in6_ifa_hold(ift);
++ write_unlock_bh(&idev->lock);
++ ipv6_del_addr(ift);
++ write_lock_bh(&idev->lock);
++ }
++ write_unlock_bh(&idev->lock);
++}
++
+ static void manage_tempaddrs(struct inet6_dev *idev,
+ struct inet6_ifaddr *ifp,
+ __u32 valid_lft, __u32 prefered_lft,
+@@ -3122,11 +3140,12 @@ static int inet6_addr_del(struct net *net, int ifindex, u32 ifa_flags,
+ in6_ifa_hold(ifp);
+ read_unlock_bh(&idev->lock);
+
+- if (!(ifp->flags & IFA_F_TEMPORARY) &&
+- (ifa_flags & IFA_F_MANAGETEMPADDR))
+- manage_tempaddrs(idev, ifp, 0, 0, false,
+- jiffies);
+ ipv6_del_addr(ifp);
++
++ if (!(ifp->flags & IFA_F_TEMPORARY) &&
++ (ifp->flags & IFA_F_MANAGETEMPADDR))
++ delete_tempaddrs(idev, ifp);
++
+ addrconf_verify_rtnl(net);
+ if (ipv6_addr_is_multicast(pfx)) {
+ ipv6_mc_config(net->ipv6.mc_autojoin_sk,
+@@ -4950,14 +4969,12 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ }
+
+ if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) {
+- if (was_managetempaddr &&
+- !(ifp->flags & IFA_F_MANAGETEMPADDR)) {
+- cfg->valid_lft = 0;
+- cfg->preferred_lft = 0;
+- }
+- manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
+- cfg->preferred_lft, !was_managetempaddr,
+- jiffies);
++ if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR))
++ delete_tempaddrs(ifp->idev, ifp);
++ else
++ manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
++ cfg->preferred_lft, !was_managetempaddr,
++ jiffies);
+ }
+
+ addrconf_verify_rtnl(net);
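
delete_tempaddrs() above walks idev->tempaddr_list and, for each temporary address derived from ifp, takes a reference, drops idev->lock, calls ipv6_del_addr() (which must run unlocked), then re-takes the lock to continue. A schematic of that hold/unlock/act/relock shape, with a mutex and a fixed array standing in for the kernel list and refcounting (illustrative only):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t idev_lock = PTHREAD_MUTEX_INITIALIZER;

    struct ifaddr { int ifpub; int live; };

    static void del_addr(struct ifaddr *a)   /* must run unlocked */
    {
        a->live = 0;
    }

    static void delete_tempaddrs(struct ifaddr *list, int n, int ifp)
    {
        pthread_mutex_lock(&idev_lock);
        for (int i = 0; i < n; i++) {
            if (!list[i].live || list[i].ifpub != ifp)
                continue;
            /* in the kernel: in6_ifa_hold() pins the entry here */
            pthread_mutex_unlock(&idev_lock);
            del_addr(&list[i]);              /* ipv6_del_addr() analogue */
            pthread_mutex_lock(&idev_lock);
        }
        pthread_mutex_unlock(&idev_lock);
    }

    int main(void)
    {
        struct ifaddr l[] = { { 1, 1 }, { 2, 1 }, { 1, 1 } };

        delete_tempaddrs(l, 3, 1);
        printf("live: %d %d %d\n", l[0].live, l[1].live, l[2].live);
        return 0;
    }
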
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index eb111d20615c62..9a1c59275a1099 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1190,8 +1190,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ while (sibling) {
+ if (sibling->fib6_metric == rt->fib6_metric &&
+ rt6_qualify_for_ecmp(sibling)) {
+- list_add_tail(&rt->fib6_siblings,
+- &sibling->fib6_siblings);
++ list_add_tail_rcu(&rt->fib6_siblings,
++ &sibling->fib6_siblings);
+ break;
+ }
+ sibling = rcu_dereference_protected(sibling->fib6_next,
+@@ -1252,7 +1252,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ fib6_siblings)
+ sibling->fib6_nsiblings--;
+ rt->fib6_nsiblings = 0;
+- list_del_init(&rt->fib6_siblings);
++ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ return err;
+ }
+@@ -1970,7 +1970,7 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ &rt->fib6_siblings, fib6_siblings)
+ sibling->fib6_nsiblings--;
+ rt->fib6_nsiblings = 0;
+- list_del_init(&rt->fib6_siblings);
++ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ }
+
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index dd342e6ecf3f45..f2b1e9210905b1 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -125,7 +125,7 @@ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
+ return ret;
+ }
+
+-static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
++static struct mr_table *__ip6mr_get_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+@@ -136,6 +136,16 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+ return NULL;
+ }
+
++static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
++{
++ struct mr_table *mrt;
++
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, id);
++ rcu_read_unlock();
++ return mrt;
++}
++
+ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+ struct mr_table **mrt)
+ {
+@@ -177,7 +187,7 @@ static int ip6mr_rule_action(struct fib_rule *rule, struct flowi *flp,
+
+ arg->table = fib_rule_get_table(rule, arg);
+
+- mrt = ip6mr_get_table(rule->fr_net, arg->table);
++ mrt = __ip6mr_get_table(rule->fr_net, arg->table);
+ if (!mrt)
+ return -EAGAIN;
+ res->mrt = mrt;
+@@ -304,6 +314,8 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+ return net->ipv6.mrt6;
+ }
+
++#define __ip6mr_get_table ip6mr_get_table
++
+ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+ struct mr_table **mrt)
+ {
+@@ -382,7 +394,7 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(net, id);
++ mrt = __ip6mr_get_table(net, id);
+ if (mrt)
+ return mrt;
+
+@@ -411,13 +423,15 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
+ struct net *net = seq_file_net(seq);
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return ERR_PTR(-ENOENT);
++ }
+
+ iter->mrt = mrt;
+
+- rcu_read_lock();
+ return mr_vif_seq_start(seq, pos);
+ }
+
+@@ -2275,11 +2289,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
+ struct mfc6_cache *cache;
+ struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
+
+- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return -ENOENT;
++ }
+
+- rcu_read_lock();
+ cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr);
+ if (!cache && skb->dev) {
+ int vif = ip6mr_find_vif(mrt, skb->dev);
+@@ -2560,7 +2576,7 @@ static int ip6mr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ grp = nla_get_in6_addr(tb[RTA_DST]);
+ tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
+
+- mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
++ mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
+ if (!mrt) {
+ NL_SET_ERR_MSG_MOD(extack, "MR table does not exist");
+ return -ENOENT;
+@@ -2607,7 +2623,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ if (filter.table_id) {
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
++ mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id);
+ if (!mrt) {
+ if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
+ return skb->len;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b4dcd8f3e7baba..6b58842c5a73d8 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -374,6 +374,7 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
+ {
+ struct rt6_info *rt = dst_rt6_info(dst);
+ struct inet6_dev *idev = rt->rt6i_idev;
++ struct fib6_info *from;
+
+ if (idev && idev->dev != blackhole_netdev) {
+ struct inet6_dev *blackhole_idev = in6_dev_get(blackhole_netdev);
+@@ -383,6 +384,8 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
+ in6_dev_put(idev);
+ }
+ }
++ from = unrcu_pointer(xchg(&rt->from, NULL));
++ fib6_info_release(from);
+ }
+
+ static bool __rt6_check_expired(const struct rt6_info *rt)
+@@ -413,8 +416,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ struct flowi6 *fl6, int oif, bool have_oif_match,
+ const struct sk_buff *skb, int strict)
+ {
+- struct fib6_info *sibling, *next_sibling;
+ struct fib6_info *match = res->f6i;
++ struct fib6_info *sibling;
+
+ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))
+ goto out;
+@@ -440,8 +443,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound))
+ goto out;
+
+- list_for_each_entry_safe(sibling, next_sibling, &match->fib6_siblings,
+- fib6_siblings) {
++ list_for_each_entry_rcu(sibling, &match->fib6_siblings,
++ fib6_siblings) {
+ const struct fib6_nh *nh = sibling->fib6_nh;
+ int nh_upper_bound;
+
+@@ -1455,7 +1458,6 @@ static DEFINE_SPINLOCK(rt6_exception_lock);
+ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ struct rt6_exception *rt6_ex)
+ {
+- struct fib6_info *from;
+ struct net *net;
+
+ if (!bucket || !rt6_ex)
+@@ -1467,8 +1469,6 @@ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ /* purge completely the exception to allow releasing the held resources:
+ * some [sk] cache may keep the dst around for unlimited time
+ */
+- from = unrcu_pointer(xchg(&rt6_ex->rt6i->from, NULL));
+- fib6_info_release(from);
+ dst_dev_put(&rt6_ex->rt6i->dst);
+
+ hlist_del_rcu(&rt6_ex->hlist);
+@@ -5195,14 +5195,18 @@ static void ip6_route_mpath_notify(struct fib6_info *rt,
+ * nexthop. Since sibling routes are always added at the end of
+ * the list, find the first sibling of the last route appended
+ */
++ rcu_read_lock();
++
+ if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) {
+- rt = list_first_entry(&rt_last->fib6_siblings,
+- struct fib6_info,
+- fib6_siblings);
++ rt = list_first_or_null_rcu(&rt_last->fib6_siblings,
++ struct fib6_info,
++ fib6_siblings);
+ }
+
+ if (rt)
+ inet6_rt_notify(RTM_NEWROUTE, rt, info, nlflags);
++
++ rcu_read_unlock();
+ }
+
+ static bool ip6_route_mpath_should_notify(const struct fib6_info *rt)
+@@ -5547,17 +5551,21 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ &nexthop_len);
+ } else {
+- struct fib6_info *sibling, *next_sibling;
+ struct fib6_nh *nh = f6i->fib6_nh;
++ struct fib6_info *sibling;
+
+ nexthop_len = 0;
+ if (f6i->fib6_nsiblings) {
+ rt6_nh_nlmsg_size(nh, &nexthop_len);
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &f6i->fib6_siblings, fib6_siblings) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++ fib6_siblings) {
+ rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+ }
++
++ rcu_read_unlock();
+ }
+ nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ }
+@@ -5721,7 +5729,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
+ goto nla_put_failure;
+ } else if (rt->fib6_nsiblings) {
+- struct fib6_info *sibling, *next_sibling;
++ struct fib6_info *sibling;
+ struct nlattr *mp;
+
+ mp = nla_nest_start_noflag(skb, RTA_MULTIPATH);
+@@ -5733,14 +5741,21 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 0) < 0)
+ goto nla_put_failure;
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &rt->fib6_siblings, fib6_siblings) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(sibling, &rt->fib6_siblings,
++ fib6_siblings) {
+ if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common,
+ sibling->fib6_nh->fib_nh_weight,
+- AF_INET6, 0) < 0)
++ AF_INET6, 0) < 0) {
++ rcu_read_unlock();
++
+ goto nla_put_failure;
++ }
+ }
+
++ rcu_read_unlock();
++
+ nla_nest_end(skb, mp);
+ } else if (rt->nh) {
+ if (nla_put_u32(skb, RTA_NH_ID, rt->nh->id))
+@@ -6177,7 +6192,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ err = -ENOBUFS;
+ seq = info->nlh ? info->nlh->nlmsg_seq : 0;
+
+- skb = nlmsg_new(rt6_nlmsg_size(rt), gfp_any());
++ skb = nlmsg_new(rt6_nlmsg_size(rt), GFP_ATOMIC);
+ if (!skb)
+ goto errout;
+
+@@ -6190,7 +6205,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ goto errout;
+ }
+ rtnl_notify(skb, net, info->portid, RTNLGRP_IPV6_ROUTE,
+- info->nlh, gfp_any());
++ info->nlh, GFP_ATOMIC);
+ return;
+ errout:
+ if (err < 0)
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index c00323fa9eb66e..7929df08d4e023 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -1236,7 +1236,9 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ return -EOPNOTSUPP;
+
+ /* receive/dequeue next skb:
+- * the function understands MSG_PEEK and, thus, does not dequeue skb */
++ * the function understands MSG_PEEK and, thus, does not dequeue the skb;
++ * only its refcount is increased.
++ */
+ skb = skb_recv_datagram(sk, flags, &err);
+ if (!skb) {
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+@@ -1252,9 +1254,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+
+ cskb = skb;
+ if (skb_copy_datagram_msg(cskb, offset, msg, copied)) {
+- if (!(flags & MSG_PEEK))
+- skb_queue_head(&sk->sk_receive_queue, skb);
+- return -EFAULT;
++ err = -EFAULT;
++ goto err_out;
+ }
+
+ /* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */
+@@ -1271,11 +1272,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
+ sizeof(IUCV_SKB_CB(skb)->class),
+ (void *)&IUCV_SKB_CB(skb)->class);
+- if (err) {
+- if (!(flags & MSG_PEEK))
+- skb_queue_head(&sk->sk_receive_queue, skb);
+- return err;
+- }
++ if (err)
++ goto err_out;
+
+ /* Mark read part of skb as used */
+ if (!(flags & MSG_PEEK)) {
+@@ -1331,8 +1329,18 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ /* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */
+ if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC))
+ copied = rlen;
++ if (flags & MSG_PEEK)
++ skb_unref(skb);
+
+ return copied;
++
++err_out:
++ if (!(flags & MSG_PEEK))
++ skb_queue_head(&sk->sk_receive_queue, skb);
++ else
++ skb_unref(skb);
++
++ return err;
+ }
+
+ static inline __poll_t iucv_accept_poll(struct sock *parent)
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 4eb52add7103b0..0259cde394ba09 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -1098,7 +1098,7 @@ static int llc_ui_setsockopt(struct socket *sock, int level, int optname,
+ lock_sock(sk);
+ if (unlikely(level != SOL_LLC || optlen != sizeof(int)))
+ goto out;
+- rc = copy_from_sockptr(&opt, optval, sizeof(opt));
++ rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (rc)
+ goto out;
+ rc = -EINVAL;
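
Both this llc hunk and the rxrpc one further down replace copy_from_sockptr() with copy_safe_from_sockptr(), which rejects the call when the caller's optlen is smaller than the destination; the old call always copied sizeof(opt) bytes and could read past a short user buffer. A minimal userspace model of the check (copy_safe() is a local stand-in, not the kernel helper):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static int copy_safe(void *dst, size_t ksize,
                         const void *user, size_t optlen)
    {
        if (optlen < ksize)      /* short buffer: reject, don't over-read */
            return -EINVAL;
        memcpy(dst, user, ksize);
        return 0;
    }

    int main(void)
    {
        int opt;
        char short_buf[2] = { 1, 2 };
        int ret = copy_safe(&opt, sizeof(opt), short_buf, sizeof(short_buf));

        printf("short optlen -> %d\n", ret);   /* -EINVAL (-22) */
        return 0;
    }
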
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index f2b5c18417ef71..fe11700c3c18ac 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -3046,6 +3046,7 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
+ enum nl80211_tx_power_setting txp_type = type;
+ bool update_txp_type = false;
+ bool has_monitor = false;
++ int old_power = local->user_power_level;
+
+ lockdep_assert_wiphy(local->hw.wiphy);
+
+@@ -3128,6 +3129,10 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
+ }
+ }
+
++ if (local->emulate_chanctx &&
++ (old_power != local->user_power_level))
++ ieee80211_hw_conf_chan(local);
++
+ return 0;
+ }
+
+@@ -4820,12 +4825,12 @@ void ieee80211_color_change_finalize_work(struct wiphy *wiphy,
+ ieee80211_color_change_finalize(link);
+ }
+
+-void ieee80211_color_collision_detection_work(struct work_struct *work)
++void ieee80211_color_collision_detection_work(struct wiphy *wiphy,
++ struct wiphy_work *work)
+ {
+- struct delayed_work *delayed_work = to_delayed_work(work);
+ struct ieee80211_link_data *link =
+- container_of(delayed_work, struct ieee80211_link_data,
+- color_collision_detect_work);
++ container_of(work, struct ieee80211_link_data,
++ color_collision_detect_work.work);
+ struct ieee80211_sub_if_data *sdata = link->sdata;
+
+ cfg80211_obss_color_collision_notify(sdata->dev, link->color_bitmap,
+@@ -4878,7 +4883,8 @@ ieee80211_obss_color_collision_notify(struct ieee80211_vif *vif,
+ return;
+ }
+
+- if (delayed_work_pending(&link->color_collision_detect_work)) {
++ if (wiphy_delayed_work_pending(sdata->local->hw.wiphy,
++ &link->color_collision_detect_work)) {
+ rcu_read_unlock();
+ return;
+ }
+@@ -4887,9 +4893,9 @@ ieee80211_obss_color_collision_notify(struct ieee80211_vif *vif,
+ /* queue the color collision detection event every 500 ms in order to
+ * avoid sending too much netlink messages to userspace.
+ */
+- ieee80211_queue_delayed_work(&sdata->local->hw,
+- &link->color_collision_detect_work,
+- msecs_to_jiffies(500));
++ wiphy_delayed_work_queue(sdata->local->hw.wiphy,
++ &link->color_collision_detect_work,
++ msecs_to_jiffies(500));
+
+ rcu_read_unlock();
+ }
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index a3485e4c6132ff..8650b3e8f1a9f1 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1051,7 +1051,7 @@ struct ieee80211_link_data {
+ } csa;
+
+ struct wiphy_work color_change_finalize_work;
+- struct delayed_work color_collision_detect_work;
++ struct wiphy_delayed_work color_collision_detect_work;
+ u64 color_bitmap;
+
+ /* context reservation -- protected with wiphy mutex */
+@@ -2004,7 +2004,8 @@ int ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
+ /* color change handling */
+ void ieee80211_color_change_finalize_work(struct wiphy *wiphy,
+ struct wiphy_work *work);
+-void ieee80211_color_collision_detection_work(struct work_struct *work);
++void ieee80211_color_collision_detection_work(struct wiphy *wiphy,
++ struct wiphy_work *work);
+
+ /* interface handling */
+ #define MAC80211_SUPPORTED_FEATURES_TX (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index 1a211b8d40573a..67f65ff79a7113 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -41,8 +41,8 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+ ieee80211_csa_finalize_work);
+ wiphy_work_init(&link->color_change_finalize_work,
+ ieee80211_color_change_finalize_work);
+- INIT_DELAYED_WORK(&link->color_collision_detect_work,
+- ieee80211_color_collision_detection_work);
++ wiphy_delayed_work_init(&link->color_collision_detect_work,
++ ieee80211_color_collision_detection_work);
+ INIT_LIST_HEAD(&link->assigned_chanctx_list);
+ INIT_LIST_HEAD(&link->reserved_chanctx_list);
+
+@@ -70,7 +70,8 @@ void ieee80211_link_stop(struct ieee80211_link_data *link)
+ if (link->sdata->vif.type == NL80211_IFTYPE_STATION)
+ ieee80211_mgd_stop_link(link);
+
+- cancel_delayed_work_sync(&link->color_collision_detect_work);
++ wiphy_delayed_work_cancel(link->sdata->local->hw.wiphy,
++ &link->color_collision_detect_work);
+ wiphy_work_cancel(link->sdata->local->hw.wiphy,
+ &link->color_change_finalize_work);
+ wiphy_work_cancel(link->sdata->local->hw.wiphy,
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index a3104b6ea6f0b9..9ce942f3a4a4e0 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -167,6 +167,8 @@ static u32 ieee80211_calc_hw_conf_chan(struct ieee80211_local *local,
+ }
+
+ power = ieee80211_chandef_max_power(&chandef);
++ if (local->user_power_level != IEEE80211_UNSET_POWER_LEVEL)
++ power = min(local->user_power_level, power);
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(sdata, &local->interfaces, list) {
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
+index e4fa00abde6a2a..5988b9bb9029dc 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
+@@ -163,11 +163,8 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
+ ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ if (ret)
+ return ret;
+- if (ip > ip_to) {
++ if (ip > ip_to)
+ swap(ip, ip_to);
+- if (ip < map->first_ip)
+- return -IPSET_ERR_BITMAP_RANGE;
+- }
+ } else if (tb[IPSET_ATTR_CIDR]) {
+ u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+
+@@ -178,7 +175,7 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
+ ip_to = ip;
+ }
+
+- if (ip_to > map->last_ip)
++ if (ip < map->first_ip || ip_to > map->last_ip)
+ return -IPSET_ERR_BITMAP_RANGE;
+
+ for (; !before(ip_to, ip); ip += map->hosts) {
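
The bitmap_ip hunk hoists the bounds check out of the swap branch, so every input form (single address, from/to pair, or CIDR) now has both ends of the normalised range validated against the set's [first_ip, last_ip] window. Compact standalone model:

    #include <stdio.h>

    #define ERR_BITMAP_RANGE -1

    static int check_range(unsigned int ip, unsigned int ip_to,
                           unsigned int first, unsigned int last)
    {
        if (ip > ip_to) {                  /* normalise order first */
            unsigned int t = ip; ip = ip_to; ip_to = t;
        }
        if (ip < first || ip_to > last)    /* then check both ends  */
            return ERR_BITMAP_RANGE;
        return 0;
    }

    int main(void)
    {
        /* set covers 10..20; a reversed 5..15 range must be rejected */
        printf("%d\n", check_range(15, 5, 10, 20));  /* -1 */
        printf("%d\n", check_range(12, 18, 10, 20)); /*  0 */
        return 0;
    }
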
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 58503348ed3a3e..f69e29917087a6 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3293,25 +3293,37 @@ int nft_expr_inner_parse(const struct nft_ctx *ctx, const struct nlattr *nla,
+ if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME])
+ return -EINVAL;
+
++ rcu_read_lock();
++
+ type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]);
+- if (!type)
+- return -ENOENT;
++ if (!type) {
++ err = -ENOENT;
++ goto out_unlock;
++ }
+
+- if (!type->inner_ops)
+- return -EOPNOTSUPP;
++ if (!type->inner_ops) {
++ err = -EOPNOTSUPP;
++ goto out_unlock;
++ }
+
+ err = nla_parse_nested_deprecated(info->tb, type->maxattr,
+ tb[NFTA_EXPR_DATA],
+ type->policy, NULL);
+ if (err < 0)
+- goto err_nla_parse;
++ goto out_unlock;
+
+ info->attr = nla;
+ info->ops = type->inner_ops;
+
++ /* No module reference will be taken on type->owner.
++ * Presence of type->inner_ops implies that the expression
++ * is builtin, so it cannot go away.
++ */
++ rcu_read_unlock();
+ return 0;
+
+-err_nla_parse:
++out_unlock:
++ rcu_read_unlock();
+ return err;
+ }
+
+@@ -3410,13 +3422,15 @@ void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr)
+ * Rules
+ */
+
+-static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
++static struct nft_rule *__nft_rule_lookup(const struct net *net,
++ const struct nft_chain *chain,
+ u64 handle)
+ {
+ struct nft_rule *rule;
+
+ // FIXME: this sucks
+- list_for_each_entry_rcu(rule, &chain->rules, list) {
++ list_for_each_entry_rcu(rule, &chain->rules, list,
++ lockdep_commit_lock_is_held(net)) {
+ if (handle == rule->handle)
+ return rule;
+ }
+@@ -3424,13 +3438,14 @@ static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
+ return ERR_PTR(-ENOENT);
+ }
+
+-static struct nft_rule *nft_rule_lookup(const struct nft_chain *chain,
++static struct nft_rule *nft_rule_lookup(const struct net *net,
++ const struct nft_chain *chain,
+ const struct nlattr *nla)
+ {
+ if (nla == NULL)
+ return ERR_PTR(-EINVAL);
+
+- return __nft_rule_lookup(chain, be64_to_cpu(nla_get_be64(nla)));
++ return __nft_rule_lookup(net, chain, be64_to_cpu(nla_get_be64(nla)));
+ }
+
+ static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = {
+@@ -3731,7 +3746,7 @@ static int nf_tables_dump_rules_done(struct netlink_callback *cb)
+ return 0;
+ }
+
+-/* called with rcu_read_lock held */
++/* Caller must hold rcu read lock or transaction mutex */
+ static struct sk_buff *
+ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
+@@ -3758,7 +3773,7 @@ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ return ERR_CAST(chain);
+ }
+
+- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
++ rule = nft_rule_lookup(net, chain, nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
+ return ERR_CAST(rule);
+@@ -4057,7 +4072,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (nla[NFTA_RULE_HANDLE]) {
+ handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE]));
+- rule = __nft_rule_lookup(chain, handle);
++ rule = __nft_rule_lookup(net, chain, handle);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
+ return PTR_ERR(rule);
+@@ -4079,7 +4094,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (nla[NFTA_RULE_POSITION]) {
+ pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
+- old_rule = __nft_rule_lookup(chain, pos_handle);
++ old_rule = __nft_rule_lookup(net, chain, pos_handle);
+ if (IS_ERR(old_rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]);
+ return PTR_ERR(old_rule);
+@@ -4296,7 +4311,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (chain) {
+ if (nla[NFTA_RULE_HANDLE]) {
+- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
++ rule = nft_rule_lookup(info->net, chain, nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule)) {
+ if (PTR_ERR(rule) == -ENOENT &&
+ NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYRULE)
+@@ -7768,9 +7783,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ struct nft_trans *trans;
+ int err = -ENOMEM;
+
+- if (!try_module_get(type->owner))
+- return -ENOENT;
+-
++ /* caller must have obtained type->owner reference. */
+ trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
+ sizeof(struct nft_trans_obj));
+ if (!trans)
+@@ -7838,15 +7851,16 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
+ if (info->nlh->nlmsg_flags & NLM_F_REPLACE)
+ return -EOPNOTSUPP;
+
+- type = __nft_obj_type_get(objtype, family);
+- if (WARN_ON_ONCE(!type))
+- return -ENOENT;
+-
+ if (!obj->ops->update)
+ return 0;
+
++ type = nft_obj_type_get(net, objtype, family);
++ if (WARN_ON_ONCE(IS_ERR(type)))
++ return PTR_ERR(type);
++
+ nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+
++ /* type->owner reference is put when transaction object is released. */
+ return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
+ }
+
+@@ -8082,7 +8096,7 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb)
+ return 0;
+ }
+
+-/* called with rcu_read_lock held */
++/* Caller must hold rcu read lock or transaction mutex */
+ static struct sk_buff *
+ nf_tables_getobj_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index f84aad420d4464..775d707ec708a7 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2176,9 +2176,14 @@ netlink_ack_tlv_len(struct netlink_sock *nlk, int err,
+ return tlvlen;
+ }
+
++static bool nlmsg_check_in_payload(const struct nlmsghdr *nlh, const void *addr)
++{
++ return !WARN_ON(addr < nlmsg_data(nlh) ||
++ addr - (const void *) nlh >= nlh->nlmsg_len);
++}
++
+ static void
+-netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+- const struct nlmsghdr *nlh, int err,
++netlink_ack_tlv_fill(struct sk_buff *skb, const struct nlmsghdr *nlh, int err,
+ const struct netlink_ext_ack *extack)
+ {
+ if (extack->_msg)
+@@ -2190,9 +2195,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+ if (!err)
+ return;
+
+- if (extack->bad_attr &&
+- !WARN_ON((u8 *)extack->bad_attr < in_skb->data ||
+- (u8 *)extack->bad_attr >= in_skb->data + in_skb->len))
++ if (extack->bad_attr && nlmsg_check_in_payload(nlh, extack->bad_attr))
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS,
+ (u8 *)extack->bad_attr - (const u8 *)nlh));
+ if (extack->policy)
+@@ -2201,9 +2204,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+ if (extack->miss_type)
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_TYPE,
+ extack->miss_type));
+- if (extack->miss_nest &&
+- !WARN_ON((u8 *)extack->miss_nest < in_skb->data ||
+- (u8 *)extack->miss_nest > in_skb->data + in_skb->len))
++ if (extack->miss_nest && nlmsg_check_in_payload(nlh, extack->miss_nest))
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_NEST,
+ (u8 *)extack->miss_nest - (const u8 *)nlh));
+ }
+@@ -2232,7 +2233,7 @@ static int netlink_dump_done(struct netlink_sock *nlk, struct sk_buff *skb,
+ if (extack_len) {
+ nlh->nlmsg_flags |= NLM_F_ACK_TLVS;
+ if (skb_tailroom(skb) >= extack_len) {
+- netlink_ack_tlv_fill(cb->skb, skb, cb->nlh,
++ netlink_ack_tlv_fill(skb, cb->nlh,
+ nlk->dump_done_errno, extack);
+ nlmsg_end(skb, nlh);
+ }
+@@ -2491,7 +2492,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
+ }
+
+ if (tlvlen)
+- netlink_ack_tlv_fill(in_skb, skb, nlh, err, extack);
++ netlink_ack_tlv_fill(skb, nlh, err, extack);
+
+ nlmsg_end(skb, rep);
+
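
netlink_ack_tlv_fill() above now validates extack->bad_attr and extack->miss_nest against the message being acked rather than against whatever skb was passed in, via the new nlmsg_check_in_payload() helper; the check is a plain pointer-range test. Standalone model with a simplified header struct (the real struct nlmsghdr has more fields):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct nlmsghdr { unsigned int nlmsg_len; /* payload follows */ };

    static bool in_payload(const struct nlmsghdr *nlh, const void *addr)
    {
        const char *base = (const char *)nlh;
        const char *payload = base + sizeof(*nlh); /* nlmsg_data() analogue */

        return (const char *)addr >= payload &&
               (const char *)addr - base < nlh->nlmsg_len;
    }

    int main(void)
    {
        char buf[64];
        struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

        memset(buf, 0, sizeof(buf));
        nlh->nlmsg_len = 32;
        printf("inside:  %d\n", in_payload(nlh, buf + 8));   /* 1 */
        printf("outside: %d\n", in_payload(nlh, buf + 40));  /* 0 */
        return 0;
    }
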
+diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
+index 84529886c2e660..bfd5ff967e9038 100644
+--- a/net/rfkill/rfkill-gpio.c
++++ b/net/rfkill/rfkill-gpio.c
+@@ -31,8 +31,12 @@ static int rfkill_gpio_set_power(void *data, bool blocked)
+ {
+ struct rfkill_gpio_data *rfkill = data;
+
+- if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled)
+- clk_enable(rfkill->clk);
++ if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled) {
++ int ret = clk_enable(rfkill->clk);
++
++ if (ret)
++ return ret;
++ }
+
+ gpiod_set_value_cansleep(rfkill->shutdown_gpio, !blocked);
+ gpiod_set_value_cansleep(rfkill->reset_gpio, !blocked);
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index f4844683e12039..9d8bd0b37e41da 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -707,9 +707,10 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
+ ret = -EISCONN;
+ if (rx->sk.sk_state != RXRPC_UNBOUND)
+ goto error;
+- ret = copy_from_sockptr(&min_sec_level, optval,
+- sizeof(unsigned int));
+- if (ret < 0)
++ ret = copy_safe_from_sockptr(&min_sec_level,
++ sizeof(min_sec_level),
++ optval, optlen);
++ if (ret)
+ goto error;
+ ret = -EINVAL;
+ if (min_sec_level > RXRPC_SECURITY_MAX)
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 19a49af5a9e527..afefe124d9039e 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -331,6 +331,12 @@ static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb,
+ */
+ if (q->internal.qlen >= 8)
+ return false;
++
++ /* Ordering invariants fall apart if some delayed flows
++ * are ready but we haven't serviced them, yet.
++ */
++ if (q->time_next_delayed_flow <= now)
++ return false;
+ }
+
+ sk = skb->sk;
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 95ff747061046c..3298da2e37e43d 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -1431,7 +1431,9 @@ static int c_show(struct seq_file *m, void *p)
+ seq_printf(m, "# expiry=%lld refcnt=%d flags=%lx\n",
+ convert_to_wallclock(cp->expiry_time),
+ kref_read(&cp->ref), cp->flags);
+- cache_get(cp);
++ if (!cache_get_rcu(cp))
++ return 0;
++
+ if (cache_check(cd, cp, NULL))
+ /* cache_check does a cache_put on failure */
+ seq_puts(m, "# ");
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 6b3f01beb294b9..9a97ffef3adf0e 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1552,6 +1552,10 @@ static struct svc_xprt *svc_create_socket(struct svc_serv *serv,
+ newlen = error;
+
+ if (protocol == IPPROTO_TCP) {
++ __netns_tracker_free(net, &sock->sk->ns_tracker, false);
++ sock->sk->sk_net_refcnt = 1;
++ get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
+ if ((error = kernel_listen(sock, 64)) < 0)
+ goto bummer;
+ }
+diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
+index 58ae6ec4f25b4f..415c0310101f0d 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma.c
++++ b/net/sunrpc/xprtrdma/svc_rdma.c
+@@ -233,25 +233,34 @@ static int svc_rdma_proc_init(void)
+
+ rc = percpu_counter_init(&svcrdma_stat_read, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err;
+ rc = percpu_counter_init(&svcrdma_stat_recv, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_read;
+ rc = percpu_counter_init(&svcrdma_stat_sq_starve, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_recv;
+ rc = percpu_counter_init(&svcrdma_stat_write, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_sq;
+
+ svcrdma_table_header = register_sysctl("sunrpc/svc_rdma",
+ svcrdma_parm_table);
++ if (!svcrdma_table_header)
++ goto err_write;
++
+ return 0;
+
+-out_err:
++err_write:
++ rc = -ENOMEM;
++ percpu_counter_destroy(&svcrdma_stat_write);
++err_sq:
+ percpu_counter_destroy(&svcrdma_stat_sq_starve);
++err_recv:
+ percpu_counter_destroy(&svcrdma_stat_recv);
++err_read:
+ percpu_counter_destroy(&svcrdma_stat_read);
++err:
+ return rc;
+ }
+
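
The svc_rdma_proc_init() hunk replaces one shared error label with a reverse-order unwind ladder, so a failure at step N tears down exactly the N-1 counters already initialised, and the newly checked register_sysctl() failure unwinds all four. The shape, as a standalone sketch with dummy init/destroy pairs mirroring the label names in the diff:

    #include <stdio.h>

    static int init_step(const char *name, int fail)
    {
        printf("init %s\n", name);
        return fail ? -1 : 0;
    }

    static void destroy_step(const char *name)
    {
        printf("destroy %s\n", name);
    }

    static int proc_init(int fail_at)
    {
        if (init_step("read", fail_at == 1))
            goto err;
        if (init_step("recv", fail_at == 2))
            goto err_read;
        if (init_step("sq_starve", fail_at == 3))
            goto err_recv;
        if (init_step("write", fail_at == 4))
            goto err_sq;
        return 0;

    err_sq:
        destroy_step("sq_starve");
    err_recv:
        destroy_step("recv");
    err_read:
        destroy_step("read");
    err:
        return -1;
    }

    int main(void)
    {
        return proc_init(3) ? 1 : 0;  /* fails at step 3, unwinds 2 then 1 */
    }
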
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index d72953f2925827..69d497a0ca204c 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -493,7 +493,13 @@ static bool xdr_check_write_chunk(struct svc_rdma_recv_ctxt *rctxt)
+ if (xdr_stream_decode_u32(&rctxt->rc_stream, &segcount))
+ return false;
+
+- /* A bogus segcount causes this buffer overflow check to fail. */
++ /* Before trusting the segcount value enough to use it in
++ * a computation, perform a simple range check. This is an
++ * arbitrary but sensible limit (i.e., not architectural).
++ */
++ if (unlikely(segcount > RPCSVC_MAXPAGES))
++ return false;
++
+ p = xdr_inline_decode(&rctxt->rc_stream,
+ segcount * rpcrdma_segment_maxsz * sizeof(*p));
+ return p != NULL;
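
xdr_check_write_chunk() above now range-checks segcount before it is used in the product segcount * rpcrdma_segment_maxsz * sizeof(*p); without the cap, a hostile 32-bit segcount can wrap the product and slip past the decode-length check. Standalone demonstration of the wrap (the constants are illustrative stand-ins, not the RPC/RDMA values):

    #include <stdint.h>
    #include <stdio.h>

    #define SEGMENT_MAXSZ 4u
    #define MAXPAGES      259u          /* stand-in for RPCSVC_MAXPAGES */

    int main(void)
    {
        uint32_t segcount = 0x40000000;  /* attacker-chosen             */
        uint32_t need = segcount * SEGMENT_MAXSZ * 4u;  /* wraps to 0   */

        printf("unchecked: need=%u bytes\n", need);

        if (segcount > MAXPAGES) {       /* the added range check       */
            puts("checked:   rejected before the multiply");
            return 1;
        }
        return 0;
    }
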
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 1326fbf45a3479..b69e6290acfabe 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1198,6 +1198,7 @@ static void xs_sock_reset_state_flags(struct rpc_xprt *xprt)
+ clear_bit(XPRT_SOCK_WAKE_WRITE, &transport->sock_state);
+ clear_bit(XPRT_SOCK_WAKE_DISCONNECT, &transport->sock_state);
+ clear_bit(XPRT_SOCK_NOSPACE, &transport->sock_state);
++ clear_bit(XPRT_SOCK_UPD_TIMEOUT, &transport->sock_state);
+ }
+
+ static void xs_run_error_worker(struct sock_xprt *transport, unsigned int nr)
+@@ -1939,6 +1940,13 @@ static struct socket *xs_create_sock(struct rpc_xprt *xprt,
+ goto out;
+ }
+
++ if (protocol == IPPROTO_TCP) {
++ __netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false);
++ sock->sk->sk_net_refcnt = 1;
++ get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(xprt->xprt_net, 1);
++ }
++
+ filp = sock_alloc_file(sock, O_NONBLOCK, NULL);
+ if (IS_ERR(filp))
+ return ERR_CAST(filp);
+@@ -2614,11 +2622,10 @@ static int xs_tls_handshake_sync(struct rpc_xprt *lower_xprt, struct xprtsec_par
+ rc = wait_for_completion_interruptible_timeout(&lower_transport->handshake_done,
+ XS_TLS_HANDSHAKE_TO);
+ if (rc <= 0) {
+- if (!tls_handshake_cancel(sk)) {
+- if (rc == 0)
+- rc = -ETIMEDOUT;
+- goto out_put_xprt;
+- }
++ tls_handshake_cancel(sk);
++ if (rc == 0)
++ rc = -ETIMEDOUT;
++ goto out_put_xprt;
+ }
+
+ rc = lower_transport->xprt_err;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index c9ebf9449fcc33..263c91e91fc390 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -603,16 +603,20 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
+ }
+ EXPORT_SYMBOL(wiphy_new_nm);
+
+-static int wiphy_verify_combinations(struct wiphy *wiphy)
++static
++int wiphy_verify_iface_combinations(struct wiphy *wiphy,
++ const struct ieee80211_iface_combination *iface_comb,
++ int n_iface_comb,
++ bool combined_radio)
+ {
+ const struct ieee80211_iface_combination *c;
+ int i, j;
+
+- for (i = 0; i < wiphy->n_iface_combinations; i++) {
++ for (i = 0; i < n_iface_comb; i++) {
+ u32 cnt = 0;
+ u16 all_iftypes = 0;
+
+- c = &wiphy->iface_combinations[i];
++ c = &iface_comb[i];
+
+ /*
+ * Combinations with just one interface aren't real,
+@@ -625,9 +629,13 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ if (WARN_ON(!c->num_different_channels))
+ return -EINVAL;
+
+- /* DFS only works on one channel. */
+- if (WARN_ON(c->radar_detect_widths &&
+- (c->num_different_channels > 1)))
++ /* DFS only works on one channel. Avoid this check
++ * for the multi-radio global combination, since it holds
++ * the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(c->radar_detect_widths &&
++ c->num_different_channels > 1))
+ return -EINVAL;
+
+ if (WARN_ON(!c->n_limits))
+@@ -648,13 +656,21 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ if (WARN_ON(wiphy->software_iftypes & types))
+ return -EINVAL;
+
+- /* Only a single P2P_DEVICE can be allowed */
+- if (WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) &&
++ /* Only a single P2P_DEVICE can be allowed; avoid this
++ * check for the multi-radio global combination, since it
++ * holds the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) &&
+ c->limits[j].max > 1))
+ return -EINVAL;
+
+- /* Only a single NAN can be allowed */
+- if (WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
++ /* Only a single NAN can be allowed; avoid this
++ * check for the multi-radio global combination, since it
++ * holds the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
+ c->limits[j].max > 1))
+ return -EINVAL;
+
+@@ -693,6 +709,34 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ return 0;
+ }
+
++static int wiphy_verify_combinations(struct wiphy *wiphy)
++{
++ int i, ret;
++ bool combined_radio = false;
++
++ if (wiphy->n_radio) {
++ for (i = 0; i < wiphy->n_radio; i++) {
++ const struct wiphy_radio *radio = &wiphy->radio[i];
++
++ ret = wiphy_verify_iface_combinations(wiphy,
++ radio->iface_combinations,
++ radio->n_iface_combinations,
++ false);
++ if (ret)
++ return ret;
++ }
++
++ combined_radio = true;
++ }
++
++ ret = wiphy_verify_iface_combinations(wiphy,
++ wiphy->iface_combinations,
++ wiphy->n_iface_combinations,
++ combined_radio);
++
++ return ret;
++}
++
+ int wiphy_register(struct wiphy *wiphy)
+ {
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+@@ -1705,6 +1749,13 @@ void wiphy_delayed_work_flush(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL_GPL(wiphy_delayed_work_flush);
+
++bool wiphy_delayed_work_pending(struct wiphy *wiphy,
++ struct wiphy_delayed_work *dwork)
++{
++ return timer_pending(&dwork->timer);
++}
++EXPORT_SYMBOL_GPL(wiphy_delayed_work_pending);
++
+ static int __init cfg80211_init(void)
+ {
+ int err;
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 4052041a19ead4..98637ded58a68d 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -340,12 +340,6 @@ cfg80211_mlme_check_mlo_compat(const struct ieee80211_multi_link_elem *mle_a,
+ return -EINVAL;
+ }
+
+- if (ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_a) !=
+- ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_b)) {
+- NL_SET_ERR_MSG(extack, "link EML medium sync delay mismatch");
+- return -EINVAL;
+- }
+-
+ if (ieee80211_mle_get_eml_cap((const u8 *)mle_a) !=
+ ieee80211_mle_get_eml_cap((const u8 *)mle_b)) {
+ NL_SET_ERR_MSG(extack, "link EML capabilities mismatch");
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 3766efacfd64fa..89e792ece3a1ef 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -9776,6 +9776,7 @@ nl80211_parse_sched_scan(struct wiphy *wiphy, struct wireless_dev *wdev,
+ request = kzalloc(size, GFP_KERNEL);
+ if (!request)
+ return ERR_PTR(-ENOMEM);
++ request->n_channels = n_channels;
+
+ if (n_ssids)
+ request->ssids = (void *)request +
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 7e16336044b2d6..6c042e03dad3c5 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -675,6 +675,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ len = desc->len;
+
+ if (!skb) {
++ first_frag = true;
++
+ hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
+ tr = dev->needed_tailroom;
+ skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
+@@ -685,12 +687,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ skb_put(skb, len);
+
+ err = skb_store_bits(skb, 0, buffer, len);
+- if (unlikely(err)) {
+- kfree_skb(skb);
++ if (unlikely(err))
+ goto free_err;
+- }
+-
+- first_frag = true;
+ } else {
+ int nr_frags = skb_shinfo(skb)->nr_frags;
+ struct page *page;
+@@ -758,6 +756,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ return skb;
+
+ free_err:
++ if (first_frag && skb)
++ kfree_skb(skb);
++
+ if (err == -EOVERFLOW) {
+ /* Drop the packet */
+ xsk_set_destructor_arg(xs->skb);
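
The xsk_build_skb() hunks set first_frag at the top of the allocation branch and make the shared free_err path free the skb only when this call actually allocated it; previously a mid-frag error could free an skb the function did not own. The ownership shape, in a standalone sketch:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct skb { int frags; };

    static struct skb *build_skb(struct skb *skb, bool fail)
    {
        bool first_frag = false;

        if (!skb) {
            first_frag = true;            /* we own what we allocate */
            skb = calloc(1, sizeof(*skb));
            if (!skb)
                return NULL;
        }
        if (fail)
            goto free_err;

        skb->frags++;
        return skb;

    free_err:
        if (first_frag && skb) {          /* free only our own skb   */
            free(skb);
            puts("freed locally-built skb");
        }
        return NULL;
    }

    int main(void)
    {
        struct skb caller = { .frags = 1 };

        build_skb(&caller, true);         /* caller's skb must survive */
        printf("caller skb frags=%d\n", caller.frags);
        return 0;
    }
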
+diff --git a/rust/kernel/block/mq/request.rs b/rust/kernel/block/mq/request.rs
+index a0e22827f3f4ec..7943f43b957532 100644
+--- a/rust/kernel/block/mq/request.rs
++++ b/rust/kernel/block/mq/request.rs
+@@ -16,50 +16,55 @@
+ sync::atomic::{AtomicU64, Ordering},
+ };
+
+-/// A wrapper around a blk-mq `struct request`. This represents an IO request.
++/// A wrapper around a blk-mq [`struct request`]. This represents an IO request.
+ ///
+ /// # Implementation details
+ ///
+ /// There are four states for a request that the Rust bindings care about:
+ ///
+-/// A) Request is owned by block layer (refcount 0)
+-/// B) Request is owned by driver but with zero `ARef`s in existence
+-/// (refcount 1)
+-/// C) Request is owned by driver with exactly one `ARef` in existence
+-/// (refcount 2)
+-/// D) Request is owned by driver with more than one `ARef` in existence
+-/// (refcount > 2)
++/// 1. Request is owned by block layer (refcount 0).
++/// 2. Request is owned by driver but with zero [`ARef`]s in existence
++/// (refcount 1).
++/// 3. Request is owned by driver with exactly one [`ARef`] in existence
++/// (refcount 2).
++/// 4. Request is owned by driver with more than one [`ARef`] in existence
++/// (refcount > 2).
+ ///
+ ///
+-/// We need to track A and B to ensure we fail tag to request conversions for
++/// We need to track 1 and 2 to ensure we fail tag to request conversions for
+ /// requests that are not owned by the driver.
+ ///
+-/// We need to track C and D to ensure that it is safe to end the request and hand
++/// We need to track 3 and 4 to ensure that it is safe to end the request and hand
+ /// back ownership to the block layer.
+ ///
+ /// The states are tracked through the private `refcount` field of
+ /// `RequestDataWrapper`. This structure lives in the private data area of the C
+-/// `struct request`.
++/// [`struct request`].
+ ///
+ /// # Invariants
+ ///
+-/// * `self.0` is a valid `struct request` created by the C portion of the kernel.
++/// * `self.0` is a valid [`struct request`] created by the C portion of the
++/// kernel.
+ /// * The private data area associated with this request must be an initialized
+ /// and valid `RequestDataWrapper<T>`.
+ /// * `self` is reference counted by atomic modification of
+-/// self.wrapper_ref().refcount().
++/// `self.wrapper_ref().refcount()`.
++///
++/// [`struct request`]: srctree/include/linux/blk-mq.h
+ ///
+ #[repr(transparent)]
+ pub struct Request<T: Operations>(Opaque<bindings::request>, PhantomData<T>);
+
+ impl<T: Operations> Request<T> {
+- /// Create an `ARef<Request>` from a `struct request` pointer.
++ /// Create an [`ARef<Request>`] from a [`struct request`] pointer.
+ ///
+ /// # Safety
+ ///
+ /// * The caller must own a refcount on `ptr` that is transferred to the
+- /// returned `ARef`.
+- /// * The type invariants for `Request` must hold for the pointee of `ptr`.
++ /// returned [`ARef`].
++ /// * The type invariants for [`Request`] must hold for the pointee of `ptr`.
++ ///
++ /// [`struct request`]: srctree/include/linux/blk-mq.h
+ pub(crate) unsafe fn aref_from_raw(ptr: *mut bindings::request) -> ARef<Self> {
+ // INVARIANT: By the safety requirements of this function, invariants are upheld.
+ // SAFETY: By the safety requirement of this function, we own a
+@@ -84,12 +89,14 @@ pub(crate) unsafe fn start_unchecked(this: &ARef<Self>) {
+ }
+
+ /// Try to take exclusive ownership of `this` by dropping the refcount to 0.
+- /// This fails if `this` is not the only `ARef` pointing to the underlying
+- /// `Request`.
++ /// This fails if `this` is not the only [`ARef`] pointing to the underlying
++ /// [`Request`].
+ ///
+- /// If the operation is successful, `Ok` is returned with a pointer to the
+- /// C `struct request`. If the operation fails, `this` is returned in the
+- /// `Err` variant.
++ /// If the operation is successful, [`Ok`] is returned with a pointer to the
++ /// C [`struct request`]. If the operation fails, `this` is returned in the
++ /// [`Err`] variant.
++ ///
++ /// [`struct request`]: srctree/include/linux/blk-mq.h
+ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
+ // We can race with `TagSet::tag_to_rq`
+ if let Err(_old) = this.wrapper_ref().refcount().compare_exchange(
+@@ -109,7 +116,7 @@ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
+
+ /// Notify the block layer that the request has been completed without errors.
+ ///
+- /// This function will return `Err` if `this` is not the only `ARef`
++ /// This function will return [`Err`] if `this` is not the only [`ARef`]
+ /// referencing the request.
+ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
+ let request_ptr = Self::try_set_end(this)?;
+@@ -123,13 +130,13 @@ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
+ Ok(())
+ }
+
+- /// Return a pointer to the `RequestDataWrapper` stored in the private area
++ /// Return a pointer to the [`RequestDataWrapper`] stored in the private area
+ /// of the request structure.
+ ///
+ /// # Safety
+ ///
+ /// - `this` must point to a valid allocation of size at least size of
+- /// `Self` plus size of `RequestDataWrapper`.
++ /// [`Self`] plus size of [`RequestDataWrapper`].
+ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper> {
+ let request_ptr = this.cast::<bindings::request>();
+ // SAFETY: By safety requirements for this function, `this` is a
+@@ -141,7 +148,7 @@ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper>
+ unsafe { NonNull::new_unchecked(wrapper_ptr) }
+ }
+
+- /// Return a reference to the `RequestDataWrapper` stored in the private
++ /// Return a reference to the [`RequestDataWrapper`] stored in the private
+ /// area of the request structure.
+ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
+ // SAFETY: By type invariant, `self.0` is a valid allocation. Further,
+@@ -152,13 +159,15 @@ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
+ }
+ }
+
+-/// A wrapper around data stored in the private area of the C `struct request`.
++/// A wrapper around data stored in the private area of the C [`struct request`].
++///
++/// [`struct request`]: srctree/include/linux/blk-mq.h
+ pub(crate) struct RequestDataWrapper {
+ /// The Rust request refcount has the following states:
+ ///
+ /// - 0: The request is owned by C block layer.
+- /// - 1: The request is owned by Rust abstractions but there are no ARef references to it.
+- /// - 2+: There are `ARef` references to the request.
++ /// - 1: The request is owned by Rust abstractions but there are no [`ARef`] references to it.
++ /// - 2+: There are [`ARef`] references to the request.
+ refcount: AtomicU64,
+ }
+
+@@ -204,7 +213,7 @@ fn atomic_relaxed_op_return(target: &AtomicU64, op: impl Fn(u64) -> u64) -> u64
+ }
+
+ /// Store the result of `op(target.load)` in `target` if `target.load() !=
+-/// pred`, returning true if the target was updated.
++/// pred`, returning [`true`] if the target was updated.
+ fn atomic_relaxed_op_unless(target: &AtomicU64, op: impl Fn(u64) -> u64, pred: u64) -> bool {
+ target
+ .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |x| {
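
The renumbered doc comment describes the same four refcount states; the hand-off in try_set_end() works by atomically swapping the count from 2 (driver plus exactly one ARef) to 0. An illustrative C analogue of that claim-exclusive-ownership step, not the kernel's Rust code:

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdint.h>

        /* Succeeds only when the caller holds the sole ARef (refcount == 2),
         * dropping the count to 0 so ownership can return to the block layer. */
        static bool try_set_end(atomic_uint_fast64_t *refcount)
        {
                uint_fast64_t expected = 2;

                return atomic_compare_exchange_strong(refcount, &expected, 0);
        }
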
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index 274bdc1b0a824a..4b25c9a474c242 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -80,7 +80,7 @@ pub trait Module: Sized + Sync + Send {
+
+ /// Equivalent to `THIS_MODULE` in the C API.
+ ///
+-/// C header: [`include/linux/export.h`](srctree/include/linux/export.h)
++/// C header: [`include/linux/init.h`](srctree/include/linux/init.h)
+ pub struct ThisModule(*mut bindings::module);
+
+ // SAFETY: `THIS_MODULE` may be used from all threads within a module.
+diff --git a/rust/macros/lib.rs b/rust/macros/lib.rs
+index 5be0cb9db3ee49..8a2ed9472bb089 100644
+--- a/rust/macros/lib.rs
++++ b/rust/macros/lib.rs
+@@ -355,7 +355,7 @@ pub fn pinned_drop(args: TokenStream, input: TokenStream) -> TokenStream {
+ /// macro_rules! pub_no_prefix {
+ /// ($prefix:ident, $($newname:ident),+) => {
+ /// kernel::macros::paste! {
+-/// $(pub(crate) const fn [<$newname:lower:span>]: u32 = [<$prefix $newname:span>];)+
++/// $(pub(crate) const fn [<$newname:lower:span>]() -> u32 { [<$prefix $newname:span>] })+
+ /// }
+ /// };
+ /// }
+diff --git a/samples/bpf/xdp_adjust_tail_kern.c b/samples/bpf/xdp_adjust_tail_kern.c
+index ffdd548627f0a4..da67bcad1c6381 100644
+--- a/samples/bpf/xdp_adjust_tail_kern.c
++++ b/samples/bpf/xdp_adjust_tail_kern.c
+@@ -57,6 +57,7 @@ static __always_inline void swap_mac(void *data, struct ethhdr *orig_eth)
+
+ static __always_inline __u16 csum_fold_helper(__u32 csum)
+ {
++ csum = (csum & 0xffff) + (csum >> 16);
+ return ~((csum & 0xffff) + (csum >> 16));
+ }
+
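
The added line matters because a single fold of a 32-bit one's-complement sum can itself carry into bit 16. A worked example under the same arithmetic:

        /* csum = 0xffff0001:
         *   one fold:  0x0001 + 0xffff = 0x10000 -> ~0x10000 as u16 = 0xffff (wrong)
         *   two folds: 0x10000 -> 0x0000 + 0x0001 = 0x1 -> ~0x1 = 0xfffe (correct)
         */
        static inline unsigned short csum_fold16(unsigned int csum)
        {
                csum = (csum & 0xffff) + (csum >> 16);  /* absorb the first carry */
                return ~((csum & 0xffff) + (csum >> 16));
        }
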
+diff --git a/samples/kfifo/dma-example.c b/samples/kfifo/dma-example.c
+index 48df719dac8c6d..8076ac410161a3 100644
+--- a/samples/kfifo/dma-example.c
++++ b/samples/kfifo/dma-example.c
+@@ -9,6 +9,7 @@
+ #include <linux/kfifo.h>
+ #include <linux/module.h>
+ #include <linux/scatterlist.h>
++#include <linux/dma-mapping.h>
+
+ /*
+ * This module shows how to handle fifo dma operations.
+diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
+index 4427572b24771d..b03d526e4c454a 100755
+--- a/scripts/checkpatch.pl
++++ b/scripts/checkpatch.pl
+@@ -3209,36 +3209,31 @@ sub process {
+
+ # Check Fixes: styles is correct
+ if (!$in_header_lines &&
+- $line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) {
+- my $orig_commit = "";
+- my $id = "0123456789ab";
+- my $title = "commit title";
+- my $tag_case = 1;
+- my $tag_space = 1;
+- my $id_length = 1;
+- my $id_case = 1;
++ $line =~ /^\s*(fixes:?)\s*(?:commit\s*)?([0-9a-f]{5,40})(?:\s*($balanced_parens))?/i) {
++ my $tag = $1;
++ my $orig_commit = $2;
++ my $title;
+ my $title_has_quotes = 0;
+ $fixes_tag = 1;
+-
+- if ($line =~ /(\s*fixes:?)\s+([0-9a-f]{5,})\s+($balanced_parens)/i) {
+- my $tag = $1;
+- $orig_commit = $2;
+- $title = $3;
+-
+- $tag_case = 0 if $tag eq "Fixes:";
+- $tag_space = 0 if ($line =~ /^fixes:? [0-9a-f]{5,} ($balanced_parens)/i);
+-
+- $id_length = 0 if ($orig_commit =~ /^[0-9a-f]{12}$/i);
+- $id_case = 0 if ($orig_commit !~ /[A-F]/);
+-
++ if (defined $3) {
+ # Always strip leading/trailing parens then double quotes if existing
+- $title = substr($title, 1, -1);
++ $title = substr($3, 1, -1);
+ if ($title =~ /^".*"$/) {
+ $title = substr($title, 1, -1);
+ $title_has_quotes = 1;
+ }
++ } else {
++ $title = "commit title"
+ }
+
++
++ my $tag_case = not ($tag eq "Fixes:");
++ my $tag_space = not ($line =~ /^fixes:? [0-9a-f]{5,40} ($balanced_parens)/i);
++
++ my $id_length = not ($orig_commit =~ /^[0-9a-f]{12}$/i);
++ my $id_case = not ($orig_commit !~ /[A-F]/);
++
++ my $id = "0123456789ab";
+ my ($cid, $ctitle) = git_commit_info($orig_commit, $id,
+ $title);
+
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index fe0cc45f03be11..1fa6beef9f978e 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -252,7 +252,7 @@ __faddr2line() {
+ found=2
+ break
+ fi
+- done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2 | ${GREP} -A1 --no-group-separator " ${sym_name}$")
++ done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
+
+ if [[ $found = 0 ]]; then
+ warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 2791f819520387..320544321ecba5 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -569,6 +569,8 @@ sub output_function_man(%) {
+ my %args = %{$_[0]};
+ my ($parameter, $section);
+ my $count;
++ my $func_macro = $args{'func_macro'};
++ my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
+
+ print ".TH \"$args{'function'}\" 9 \"$args{'function'}\" \"$man_date\" \"Kernel Hacker's Manual\" LINUX\n";
+
+@@ -600,7 +602,10 @@ sub output_function_man(%) {
+ $parenth = "";
+ }
+
+- print ".SH ARGUMENTS\n";
++ $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
++ if ($paramcount >= 0) {
++ print ".SH ARGUMENTS\n";
++ }
+ foreach $parameter (@{$args{'parameterlist'}}) {
+ my $parameter_name = $parameter;
+ $parameter_name =~ s/\[.*//;
+@@ -822,10 +827,16 @@ sub output_function_rst(%) {
+ my $oldprefix = $lineprefix;
+
+ my $signature = "";
+- if ($args{'functiontype'} ne "") {
+- $signature = $args{'functiontype'} . " " . $args{'function'} . " (";
+- } else {
+- $signature = $args{'function'} . " (";
++ my $func_macro = $args{'func_macro'};
++ my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
++
++ if ($func_macro) {
++ $signature = $args{'function'};
++ } else {
++ if ($args{'functiontype'}) {
++ $signature = $args{'functiontype'} . " ";
++ }
++ $signature .= $args{'function'} . " (";
+ }
+
+ my $count = 0;
+@@ -844,7 +855,9 @@ sub output_function_rst(%) {
+ }
+ }
+
+- $signature .= ")";
++ if (!$func_macro) {
++ $signature .= ")";
++ }
+
+ if ($sphinx_major < 3) {
+ if ($args{'typedef'}) {
+@@ -888,9 +901,11 @@ sub output_function_rst(%) {
+ # Put our descriptive text into a container (thus an HTML <div>) to help
+ # set the function prototypes apart.
+ #
+- print ".. container:: kernelindent\n\n";
+ $lineprefix = " ";
+- print $lineprefix . "**Parameters**\n\n";
++ if ($paramcount >= 0) {
++ print ".. container:: kernelindent\n\n";
++ print $lineprefix . "**Parameters**\n\n";
++ }
+ foreach $parameter (@{$args{'parameterlist'}}) {
+ my $parameter_name = $parameter;
+ $parameter_name =~ s/\[.*//;
+@@ -1704,7 +1719,7 @@ sub check_return_section {
+ sub dump_function($$) {
+ my $prototype = shift;
+ my $file = shift;
+- my $noret = 0;
++ my $func_macro = 0;
+
+ print_lineno($new_start_line);
+
+@@ -1769,7 +1784,7 @@ sub dump_function($$) {
+ # declaration_name and opening parenthesis (notice the \s+).
+ $return_type = $1;
+ $declaration_name = $2;
+- $noret = 1;
++ $func_macro = 1;
+ } elsif ($prototype =~ m/^()($name)\s*$prototype_end/ ||
+ $prototype =~ m/^($type1)\s+($name)\s*$prototype_end/ ||
+ $prototype =~ m/^($type2+)\s*($name)\s*$prototype_end/) {
+@@ -1796,7 +1811,7 @@ sub dump_function($$) {
+ # of warnings goes sufficiently down, the check is only performed in
+ # -Wreturn mode.
+ # TODO: always perform the check.
+- if ($Wreturn && !$noret) {
++ if ($Wreturn && !$func_macro) {
+ check_return_section($file, $declaration_name, $return_type);
+ }
+
+@@ -1814,7 +1829,8 @@ sub dump_function($$) {
+ 'parametertypes' => \%parametertypes,
+ 'sectionlist' => \@sectionlist,
+ 'sections' => \%sections,
+- 'purpose' => $declaration_purpose
++ 'purpose' => $declaration_purpose,
++ 'func_macro' => $func_macro
+ });
+ } else {
+ output_declaration($declaration_name,
+@@ -1827,7 +1843,8 @@ sub dump_function($$) {
+ 'parametertypes' => \%parametertypes,
+ 'sectionlist' => \@sectionlist,
+ 'sections' => \%sections,
+- 'purpose' => $declaration_purpose
++ 'purpose' => $declaration_purpose,
++ 'func_macro' => $func_macro
+ });
+ }
+ }
+@@ -2322,7 +2339,6 @@ sub process_inline($$) {
+
+ sub process_file($) {
+ my $file;
+- my $initial_section_counter = $section_counter;
+ my ($orig_file) = @_;
+
+ $file = map_filename($orig_file);
+@@ -2360,8 +2376,7 @@ sub process_file($) {
+ }
+
+ # Make sure we got something interesting.
+- if ($initial_section_counter == $section_counter && $
+- output_mode ne "none") {
++ if (!$section_counter && $output_mode ne "none") {
+ if ($output_selection == OUTPUT_INCLUDE) {
+ emit_warning("${file}:1", "'$_' not found\n")
+ for keys %function_table;
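
The func_macro plumbing exists so that parameter-less macros documented with kernel-doc no longer render an empty ARGUMENTS/Parameters section and no longer trip the Return-section check. An illustrative comment of the kind the new flag covers (the macro name and value are hypothetical):

        /**
         * DRIVER_DEFAULT_TIMEOUT - hypothetical parameter-less macro
         *
         * With the changes above, kernel-doc emits only this description for
         * such a macro: no empty parameter list and no trailing "()" in the
         * rendered signature.
         */
        #define DRIVER_DEFAULT_TIMEOUT 1000
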
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index 5d1c61fa5a5509..bcb5a7e20775e1 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -809,10 +809,7 @@ static int do_eisa_entry(const char *filename, void *symval,
+ char *alias)
+ {
+ DEF_FIELD_ADDR(symval, eisa_device_id, sig);
+- if (sig[0])
+- sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
+- else
+- strcat(alias, "*");
++ sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
+ return 1;
+ }
+
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index c1757db6aa8a80..718fbf99e2cea6 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -97,16 +97,18 @@ install_linux_image_dbg () {
+
+ # Parse modules.order directly because 'make modules_install' may sign,
+ # compress modules, and then run unneeded depmod.
+- while read -r mod; do
+- mod="${mod%.o}.ko"
+- dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}"
+- buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p')
+- link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug"
+-
+- mkdir -p "${dbg%/*}" "${link%/*}"
+- "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}"
+- ln -sf --relative "${dbg}" "${link}"
+- done < modules.order
++ if is_enabled CONFIG_MODULES; then
++ while read -r mod; do
++ mod="${mod%.o}.ko"
++ dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}"
++ buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p')
++ link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug"
++
++ mkdir -p "${dbg%/*}" "${link%/*}"
++ "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}"
++ ln -sf --relative "${dbg}" "${link}"
++ done < modules.order
++ fi
+
+ # Build debug package
+ # Different tools want the image in different locations
+diff --git a/security/apparmor/capability.c b/security/apparmor/capability.c
+index 9934df16c8431d..bf7df60868308d 100644
+--- a/security/apparmor/capability.c
++++ b/security/apparmor/capability.c
+@@ -96,6 +96,8 @@ static int audit_caps(struct apparmor_audit_data *ad, struct aa_profile *profile
+ return error;
+ } else {
+ aa_put_profile(ent->profile);
++ if (profile != ent->profile)
++ cap_clear(ent->caps);
+ ent->profile = aa_get_profile(profile);
+ cap_raise(ent->caps, cap);
+ }
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index c64733d6c98fbb..f070902da8fcce 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -281,6 +281,8 @@ static void policy_unpack_test_unpack_strdup_with_null_name(struct kunit *test)
+ ((uintptr_t)puf->e->start <= (uintptr_t)string)
+ && ((uintptr_t)string <= (uintptr_t)puf->e->end));
+ KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
+@@ -296,6 +298,8 @@ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
+ ((uintptr_t)puf->e->start <= (uintptr_t)string)
+ && ((uintptr_t)string <= (uintptr_t)puf->e->end));
+ KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
+@@ -313,6 +317,8 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
+ KUNIT_EXPECT_EQ(test, size, 0);
+ KUNIT_EXPECT_NULL(test, string);
+ KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_nameX_with_null_name(struct kunit *test)
+diff --git a/security/integrity/integrity.h b/security/integrity/integrity.h
+index 660f76cb69d370..c2c2da6911233b 100644
+--- a/security/integrity/integrity.h
++++ b/security/integrity/integrity.h
+@@ -37,6 +37,8 @@ struct evm_ima_xattr_data {
+ );
+ u8 data[];
+ } __packed;
++static_assert(offsetof(struct evm_ima_xattr_data, data) == sizeof(struct evm_ima_xattr_data_hdr),
++ "struct member likely outside of __struct_group()");
+
+ /* Only used in the EVM HMAC code. */
+ struct evm_xattr {
+@@ -65,6 +67,8 @@ struct ima_digest_data {
+ );
+ u8 digest[];
+ } __packed;
++static_assert(offsetof(struct ima_digest_data, digest) == sizeof(struct ima_digest_data_hdr),
++ "struct member likely outside of __struct_group()");
+
+ /*
+ * Instead of wrapping the ima_digest_data struct inside a local structure
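
The two static_asserts guard the __struct_group() layout contract: the flexible-array member must begin exactly where the header group ends, otherwise a member was added outside the group. A minimal sketch of the pattern, with illustrative names, assuming __struct_group() from <linux/stddef.h> and static_assert() from <linux/build_bug.h>:

        struct demo_xattr {
                __struct_group(demo_xattr_hdr, hdr, __packed,
                        u8 type;
                        u8 version;
                );
                u8 data[];      /* must begin at sizeof(struct demo_xattr_hdr) */
        } __packed;
        static_assert(offsetof(struct demo_xattr, data) == sizeof(struct demo_xattr_hdr),
                      "struct member likely outside of __struct_group()");
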
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 4057f9f10aeec2..f4ab7d7ee42c7e 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -3789,9 +3789,11 @@ static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf)
+ return VM_FAULT_SIGBUS;
+ if (substream->ops->page)
+ page = substream->ops->page(substream, offset);
+- else if (!snd_pcm_get_dma_buf(substream))
++ else if (!snd_pcm_get_dma_buf(substream)) {
++ if (WARN_ON_ONCE(!runtime->dma_area))
++ return VM_FAULT_SIGBUS;
+ page = virt_to_page(runtime->dma_area + offset);
+- else
++ } else
+ page = snd_sgbuf_get_page(snd_pcm_get_dma_buf(substream), offset);
+ if (!page)
+ return VM_FAULT_SIGBUS;
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 7accf9a1ddf4c6..3386c2a038443a 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -724,8 +724,9 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream,
+ newbuf = kvzalloc(params->buffer_size, GFP_KERNEL);
+ if (!newbuf)
+ return -ENOMEM;
+- guard(spinlock_irq)(&substream->lock);
++ spin_lock_irq(&substream->lock);
+ if (runtime->buffer_ref) {
++ spin_unlock_irq(&substream->lock);
+ kvfree(newbuf);
+ return -EBUSY;
+ }
+@@ -733,6 +734,7 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream,
+ runtime->buffer = newbuf;
+ runtime->buffer_size = params->buffer_size;
+ __reset_runtime_ptrs(runtime, is_input);
++ spin_unlock_irq(&substream->lock);
+ kvfree(oldbuf);
+ }
+ runtime->avail_min = params->avail_min;
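
The scoped guard(spinlock_irq)() previously kept the substream lock held across kvfree(), which may sleep when the buffer was vmalloc'ed; the fix swaps the pointer under the lock and frees outside it. The general pattern, sketched with hypothetical types:

        struct my_stream {
                spinlock_t lock;
                bool busy;
                void *buffer;
        };

        static int swap_buffer(struct my_stream *s, void *newbuf)
        {
                void *oldbuf;

                spin_lock_irq(&s->lock);
                if (s->busy) {
                        spin_unlock_irq(&s->lock);
                        kvfree(newbuf); /* nothing swapped; drop the new buffer */
                        return -EBUSY;
                }
                oldbuf = s->buffer;
                s->buffer = newbuf;
                spin_unlock_irq(&s->lock);
                kvfree(oldbuf);         /* safe: the spinlock is released */
                return 0;
        }
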
+diff --git a/sound/core/sound_kunit.c b/sound/core/sound_kunit.c
+index bfed1a25fc8f74..84e337ecbddd0a 100644
+--- a/sound/core/sound_kunit.c
++++ b/sound/core/sound_kunit.c
+@@ -172,6 +172,7 @@ static void test_format_fill_silence(struct kunit *test)
+ u32 i, j;
+
+ buffer = kunit_kzalloc(test, SILENCE_BUFFER_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+
+ for (i = 0; i < ARRAY_SIZE(buf_samples); i++) {
+ for (j = 0; j < ARRAY_SIZE(valid_fmt); j++)
+@@ -208,8 +209,12 @@ static void test_playback_avail(struct kunit *test)
+ struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL);
+ u32 i;
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r);
++
+ r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL);
+ r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
+
+ for (i = 0; i < ARRAY_SIZE(p_avail_data); i++) {
+ r->buffer_size = p_avail_data[i].buffer_size;
+@@ -232,8 +237,12 @@ static void test_capture_avail(struct kunit *test)
+ struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL);
+ u32 i;
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r);
++
+ r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL);
+ r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
+
+ for (i = 0; i < ARRAY_SIZE(c_avail_data); i++) {
+ r->buffer_size = c_avail_data[i].buffer_size;
+@@ -247,6 +256,7 @@ static void test_capture_avail(struct kunit *test)
+ static void test_card_set_id(struct kunit *test)
+ {
+ struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
+
+ snd_card_set_id(card, VALID_NAME);
+ KUNIT_EXPECT_STREQ(test, card->id, VALID_NAME);
+@@ -280,6 +290,7 @@ static void test_pcm_format_name(struct kunit *test)
+ static void test_card_add_component(struct kunit *test)
+ {
+ struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
+
+ snd_component_add(card, TEST_FIRST_COMPONENT);
+ KUNIT_ASSERT_STREQ(test, card->components, TEST_FIRST_COMPONENT);
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 0f0d7e895c5aa9..e1cb96eeaf3111 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -724,7 +724,10 @@ static void fill_fb_info(struct snd_ump_endpoint *ump,
+ info->ui_hint = buf->fb_info.ui_hint;
+ info->first_group = buf->fb_info.first_group;
+ info->num_groups = buf->fb_info.num_groups;
+- info->flags = buf->fb_info.midi_10;
++ if (buf->fb_info.midi_10 < 2)
++ info->flags = buf->fb_info.midi_10;
++ else
++ info->flags = SNDRV_UMP_BLOCK_IS_MIDI1 | SNDRV_UMP_BLOCK_IS_LOWSPEED;
+ info->active = buf->fb_info.active;
+ info->midi_ci_version = buf->fb_info.midi_ci_version;
+ info->sysex8_streams = buf->fb_info.sysex8_streams;
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 913880b090657f..ee4157992f6bf7 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -732,6 +732,10 @@ static const struct config_entry acpi_config_table[] = {
+ #if IS_ENABLED(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_ACPI) || \
+ IS_ENABLED(CONFIG_SND_SOC_SOF_BAYTRAIL)
+ /* BayTrail */
++ {
++ .flags = FLAG_SST_OR_SOF_BYT,
++ .acpi_hid = "LPE0F28",
++ },
+ {
+ .flags = FLAG_SST_OR_SOF_BYT,
+ .acpi_hid = "80860F28",
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 833635aaee1d02..d18c32c7e02b07 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -473,6 +473,8 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ break;
+ case 0x10ec0234:
+ case 0x10ec0274:
++ alc_write_coef_idx(codec, 0x6e, 0x0c25);
++ fallthrough;
+ case 0x10ec0294:
+ case 0x10ec0700:
+ case 0x10ec0701:
+@@ -3602,25 +3604,22 @@ static void alc256_init(struct hda_codec *codec)
+
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(85);
+-
+- snd_hda_codec_write(codec, hp_pin, 0,
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ msleep(75);
++
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
++ msleep(75);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* High power */
++ }
+ alc_update_coef_idx(codec, 0x46, 3 << 12, 0);
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
+ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+ /*
+@@ -3644,29 +3643,28 @@ static void alc256_shutup(struct hda_codec *codec)
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(85);
++ msleep(75);
+
+ /* 3k pull low control for Headset jack. */
+ /* NOTE: call this before clearing the pin, otherwise codec stalls */
+ /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ * when booting with headset plugged. So skip setting it for the codec alc257
+ */
+- if (spec->en_3kpull_low)
+- alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
++ if (spec->en_3kpull_low)
++ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+- if (!spec->no_shutup_pins)
+- snd_hda_codec_write(codec, hp_pin, 0,
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ msleep(75);
++ }
+
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+@@ -3761,33 +3759,28 @@ static void alc225_init(struct hda_codec *codec)
+ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
+
+- if (hp1_pin_sense || hp2_pin_sense)
++ if (hp1_pin_sense || hp2_pin_sense) {
+ msleep(2);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(85);
+-
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ msleep(75);
+
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
+- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ msleep(75);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* High power */
++ }
+ }
+
+ static void alc225_shutup(struct hda_codec *codec)
+@@ -3799,36 +3792,35 @@ static void alc225_shutup(struct hda_codec *codec)
+ if (!hp_pin)
+ hp_pin = 0x21;
+
+- alc_disable_headset_jack_key(codec);
+- /* 3k pull low control for Headset jack. */
+- alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+-
+ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
+
+- if (hp1_pin_sense || hp2_pin_sense)
++ if (hp1_pin_sense || hp2_pin_sense) {
++ alc_disable_headset_jack_key(codec);
++ /* 3k pull low control for Headset jack. */
++ alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+ msleep(2);
+
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(85);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ msleep(75);
+
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
++ msleep(75);
++ alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
++ alc_enable_headset_jack_key(codec);
++ }
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+ if (spec->ultra_low_power) {
+@@ -3839,9 +3831,6 @@ static void alc225_shutup(struct hda_codec *codec)
+ alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4);
+ msleep(30);
+ }
+-
+- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+- alc_enable_headset_jack_key(codec);
+ }
+
+ static void alc_default_init(struct hda_codec *codec)
+@@ -7544,6 +7533,8 @@ enum {
+ ALC290_FIXUP_SUBWOOFER_HSJACK,
+ ALC269_FIXUP_THINKPAD_ACPI,
+ ALC269_FIXUP_DMIC_THINKPAD_ACPI,
++ ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13,
++ ALC269VC_FIXUP_INFINIX_Y4_MAX,
+ ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO,
+ ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ ALC255_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -7992,6 +7983,25 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_pincfg_U7x7_headset_mic,
+ },
++ [ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x90170151 }, /* use as internal speaker (LFE) */
++ { 0x1b, 0x90170152 }, /* use as internal speaker (back) */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
++ },
++ [ALC269VC_FIXUP_INFINIX_Y4_MAX] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1b, 0x90170150 }, /* use as internal speaker */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
++ },
+ [ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -11015,7 +11025,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
+ SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
++ SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX),
++ SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX),
+ SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+ SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index dc476bfb6da40f..5153a68d8c0795 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -227,6 +227,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21M3"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21M4"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -234,6 +241,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21M5"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21ME"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -530,8 +544,14 @@ static int acp6x_probe(struct platform_device *pdev)
+ struct acp6x_pdm *machine = NULL;
+ struct snd_soc_card *card;
+ struct acpi_device *adev;
++ acpi_handle handle;
++ acpi_integer dmic_status;
+ int ret;
++ bool is_dmic_enable, wov_en;
+
++ /* If WOV entry not found, enable DMIC based on AcpDmicConnected entry */
++ is_dmic_enable = false;
++ wov_en = true;
+ /* check the parent device's firmware node has _DSD or not */
+ adev = ACPI_COMPANION(pdev->dev.parent);
+ if (adev) {
+@@ -539,9 +559,19 @@ static int acp6x_probe(struct platform_device *pdev)
+
+ if (!acpi_dev_get_property(adev, "AcpDmicConnected", ACPI_TYPE_INTEGER, &obj) &&
+ obj->integer.value == 1)
+- platform_set_drvdata(pdev, &acp6x_card);
++ is_dmic_enable = true;
+ }
+
++ handle = ACPI_HANDLE(pdev->dev.parent);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ wov_en = dmic_status;
++
++ if (is_dmic_enable && wov_en)
++ platform_set_drvdata(pdev, &acp6x_card);
++ else
++ return 0;
++
+ /* check for any DMI overrides */
+ dmi_id = dmi_first_match(yc_acp_quirk_table);
+ if (dmi_id)
+diff --git a/sound/soc/codecs/da7213.c b/sound/soc/codecs/da7213.c
+index f3ef6fb5530471..486db60bf2dd14 100644
+--- a/sound/soc/codecs/da7213.c
++++ b/sound/soc/codecs/da7213.c
+@@ -2136,6 +2136,7 @@ static const struct regmap_config da7213_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
++ .max_register = DA7213_TONE_GEN_OFF_PER,
+ .reg_defaults = da7213_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(da7213_reg_defaults),
+ .volatile_reg = da7213_volatile_register,
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 311ea7918b3124..e2da3e317b5a3e 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -1167,17 +1167,20 @@ static int da7219_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component);
+ int ret = 0;
+
+- if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq))
++ mutex_lock(&da7219->pll_lock);
++
++ if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq)) {
++ mutex_unlock(&da7219->pll_lock);
+ return 0;
++ }
+
+ if ((freq < 2000000) || (freq > 54000000)) {
++ mutex_unlock(&da7219->pll_lock);
+ dev_err(codec_dai->dev, "Unsupported MCLK value %d\n",
+ freq);
+ return -EINVAL;
+ }
+
+- mutex_lock(&da7219->pll_lock);
+-
+ switch (clk_id) {
+ case DA7219_CLKSRC_MCLK_SQR:
+ snd_soc_component_update_bits(component, DA7219_PLL_CTRL,
+diff --git a/sound/soc/codecs/max9768.c b/sound/soc/codecs/max9768.c
+index e4793a5d179efc..8af3c7e5317fbb 100644
+--- a/sound/soc/codecs/max9768.c
++++ b/sound/soc/codecs/max9768.c
+@@ -54,10 +54,17 @@ static int max9768_set_gpio(struct snd_kcontrol *kcontrol,
+ {
+ struct snd_soc_component *c = snd_soc_kcontrol_component(kcontrol);
+ struct max9768 *max9768 = snd_soc_component_get_drvdata(c);
++ bool val = !ucontrol->value.integer.value[0];
++ int ret;
+
+- gpiod_set_value_cansleep(max9768->mute, !ucontrol->value.integer.value[0]);
++ if (val != gpiod_get_value_cansleep(max9768->mute))
++ ret = 1;
++ else
++ ret = 0;
+
+- return 0;
++ gpiod_set_value_cansleep(max9768->mute, val);
++
++ return ret;
+ }
+
+ static const DECLARE_TLV_DB_RANGE(volume_tlv,
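
The max9768 change follows the ALSA kcontrol convention: a put callback returns 1 when the value actually changed (so the core emits a notification to userspace), 0 when it did not, and a negative errno on failure. Sketched with hypothetical hardware accessors (my_read_hw/my_write_hw are illustrative):

        static int my_put(struct snd_kcontrol *kcontrol,
                          struct snd_ctl_elem_value *ucontrol)
        {
                bool val = !!ucontrol->value.integer.value[0];

                if (val == my_read_hw(kcontrol))        /* hypothetical reader */
                        return 0;                       /* unchanged */

                my_write_hw(kcontrol, val);             /* hypothetical writer */
                return 1;                               /* changed: notify */
        }
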
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index 16f3425a3e35c0..855139348edb4c 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -2419,10 +2419,20 @@ static irqreturn_t rt5640_jd_gpio_irq(int irq, void *data)
+ return IRQ_HANDLED;
+ }
+
+-static void rt5640_cancel_work(void *data)
++static void rt5640_disable_irq_and_cancel_work(void *data)
+ {
+ struct rt5640_priv *rt5640 = data;
+
++ if (rt5640->jd_gpio_irq_requested) {
++ free_irq(rt5640->jd_gpio_irq, rt5640);
++ rt5640->jd_gpio_irq_requested = false;
++ }
++
++ if (rt5640->irq_requested) {
++ free_irq(rt5640->irq, rt5640);
++ rt5640->irq_requested = false;
++ }
++
+ cancel_delayed_work_sync(&rt5640->jack_work);
+ cancel_delayed_work_sync(&rt5640->bp_work);
+ }
+@@ -2463,13 +2473,7 @@ static void rt5640_disable_jack_detect(struct snd_soc_component *component)
+ if (!rt5640->jack)
+ return;
+
+- if (rt5640->jd_gpio_irq_requested)
+- free_irq(rt5640->jd_gpio_irq, rt5640);
+-
+- if (rt5640->irq_requested)
+- free_irq(rt5640->irq, rt5640);
+-
+- rt5640_cancel_work(rt5640);
++ rt5640_disable_irq_and_cancel_work(rt5640);
+
+ if (rt5640->jack->status & SND_JACK_MICROPHONE) {
+ rt5640_disable_micbias1_ovcd_irq(component);
+@@ -2477,8 +2481,6 @@ static void rt5640_disable_jack_detect(struct snd_soc_component *component)
+ snd_soc_jack_report(rt5640->jack, 0, SND_JACK_BTN_0);
+ }
+
+- rt5640->jd_gpio_irq_requested = false;
+- rt5640->irq_requested = false;
+ rt5640->jd_gpio = NULL;
+ rt5640->jack = NULL;
+ }
+@@ -2798,7 +2800,8 @@ static int rt5640_suspend(struct snd_soc_component *component)
+ if (rt5640->jack) {
+ /* disable jack interrupts during system suspend */
+ disable_irq(rt5640->irq);
+- rt5640_cancel_work(rt5640);
++ cancel_delayed_work_sync(&rt5640->jack_work);
++ cancel_delayed_work_sync(&rt5640->bp_work);
+ }
+
+ snd_soc_component_force_bias_level(component, SND_SOC_BIAS_OFF);
+@@ -3032,7 +3035,7 @@ static int rt5640_i2c_probe(struct i2c_client *i2c)
+ INIT_DELAYED_WORK(&rt5640->jack_work, rt5640_jack_work);
+
+ /* Make sure work is stopped on probe-error / remove */
+- ret = devm_add_action_or_reset(&i2c->dev, rt5640_cancel_work, rt5640);
++ ret = devm_add_action_or_reset(&i2c->dev, rt5640_disable_irq_and_cancel_work, rt5640);
+ if (ret)
+ return ret;
+
+diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c
+index e5bd9ef812de13..f9f7512ca36087 100644
+--- a/sound/soc/codecs/rt722-sdca.c
++++ b/sound/soc/codecs/rt722-sdca.c
+@@ -607,12 +607,8 @@ static int rt722_sdca_dmic_set_gain_get(struct snd_kcontrol *kcontrol,
+
+ if (!adc_vol_flag) /* boost gain */
+ ctl = regvalue / boost_step;
+- else { /* ADC gain */
+- if (adc_vol_flag)
+- ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
+- else
+- ctl = p->max - (((0 - regvalue) & 0xffff) / interval_offset);
+- }
++ else /* ADC gain */
++ ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
+
+ ucontrol->value.integer.value[i] = ctl;
+ }
+diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
+index f3a7605f071043..6474cc551d5513 100644
+--- a/sound/soc/codecs/tas2781-fmwlib.c
++++ b/sound/soc/codecs/tas2781-fmwlib.c
+@@ -1992,6 +1992,7 @@ static int tasdevice_dspfw_ready(const struct firmware *fmw,
+ break;
+ case 0x202:
+ case 0x400:
++ case 0x401:
+ tas_priv->fw_parse_variable_header =
+ fw_parse_variable_header_git;
+ tas_priv->fw_parse_program_data =
+diff --git a/sound/soc/codecs/wcd937x.c b/sound/soc/codecs/wcd937x.c
+index af296b77a723ad..3c1224d8f2dffb 100644
+--- a/sound/soc/codecs/wcd937x.c
++++ b/sound/soc/codecs/wcd937x.c
+@@ -715,12 +715,17 @@ static int wcd937x_codec_enable_aux_pa(struct snd_soc_dapm_widget *w,
+ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+ struct wcd937x_priv *wcd937x = snd_soc_component_get_drvdata(component);
+ int hph_mode = wcd937x->hph_mode;
++ u8 val;
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
++ val = WCD937X_DIGITAL_PDM_WD_CTL2_EN |
++ WCD937X_DIGITAL_PDM_WD_CTL2_TIMEOUT_SEL |
++ WCD937X_DIGITAL_PDM_WD_CTL2_HOLD_OFF;
+ snd_soc_component_update_bits(component,
+ WCD937X_DIGITAL_PDM_WD_CTL2,
+- BIT(0), BIT(0));
++ WCD937X_DIGITAL_PDM_WD_CTL2_MASK,
++ val);
+ break;
+ case SND_SOC_DAPM_POST_PMU:
+ usleep_range(1000, 1010);
+@@ -741,7 +746,8 @@ static int wcd937x_codec_enable_aux_pa(struct snd_soc_dapm_widget *w,
+ hph_mode);
+ snd_soc_component_update_bits(component,
+ WCD937X_DIGITAL_PDM_WD_CTL2,
+- BIT(0), 0x00);
++ WCD937X_DIGITAL_PDM_WD_CTL2_MASK,
++ 0x00);
+ break;
+ }
+
+@@ -2049,6 +2055,8 @@ static const struct snd_kcontrol_new wcd937x_snd_controls[] = {
+ wcd937x_get_swr_port, wcd937x_set_swr_port),
+ SOC_SINGLE_EXT("HPHR Switch", WCD937X_HPH_R, 0, 1, 0,
+ wcd937x_get_swr_port, wcd937x_set_swr_port),
++ SOC_SINGLE_EXT("LO Switch", WCD937X_LO, 0, 1, 0,
++ wcd937x_get_swr_port, wcd937x_set_swr_port),
+
+ SOC_SINGLE_EXT("ADC1 Switch", WCD937X_ADC1, 1, 1, 0,
+ wcd937x_get_swr_port, wcd937x_set_swr_port),
+diff --git a/sound/soc/codecs/wcd937x.h b/sound/soc/codecs/wcd937x.h
+index 37bff16e88ddd5..a2bd47a93e507b 100644
+--- a/sound/soc/codecs/wcd937x.h
++++ b/sound/soc/codecs/wcd937x.h
+@@ -391,6 +391,10 @@
+ #define WCD937X_DIGITAL_PDM_WD_CTL0 0x3465
+ #define WCD937X_DIGITAL_PDM_WD_CTL1 0x3466
+ #define WCD937X_DIGITAL_PDM_WD_CTL2 0x3467
++#define WCD937X_DIGITAL_PDM_WD_CTL2_HOLD_OFF BIT(2)
++#define WCD937X_DIGITAL_PDM_WD_CTL2_TIMEOUT_SEL BIT(1)
++#define WCD937X_DIGITAL_PDM_WD_CTL2_EN BIT(0)
++#define WCD937X_DIGITAL_PDM_WD_CTL2_MASK GENMASK(2, 0)
+ #define WCD937X_DIGITAL_INTR_MODE 0x346A
+ #define WCD937X_DIGITAL_INTR_MASK_0 0x346B
+ #define WCD937X_DIGITAL_INTR_MASK_1 0x346C
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index f6c3aeff0d8eaf..a0c2ce84c32b1d 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -1033,14 +1033,15 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ }
+
+ /*
+- * Properties "hp-det-gpio" and "mic-det-gpio" are optional, and
++ * Properties "hp-det-gpios" and "mic-det-gpios" are optional, and
+ * simple_util_init_jack() uses these properties for creating
+ * Headphone Jack and Microphone Jack.
+ *
+ * The notifier is initialized in snd_soc_card_jack_new(), then
+ * snd_soc_jack_notifier_register can be called.
+ */
+- if (of_property_read_bool(np, "hp-det-gpio")) {
++ if (of_property_read_bool(np, "hp-det-gpios") ||
++ of_property_read_bool(np, "hp-det-gpio") /* deprecated */) {
+ ret = simple_util_init_jack(&priv->card, &priv->hp_jack,
+ 1, NULL, "Headphone Jack");
+ if (ret)
+@@ -1049,7 +1050,8 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ snd_soc_jack_notifier_register(&priv->hp_jack.jack, &hp_jack_nb);
+ }
+
+- if (of_property_read_bool(np, "mic-det-gpio")) {
++ if (of_property_read_bool(np, "mic-det-gpios") ||
++ of_property_read_bool(np, "mic-det-gpio") /* deprecated */) {
+ ret = simple_util_init_jack(&priv->card, &priv->mic_jack,
+ 0, NULL, "Mic Jack");
+ if (ret)
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 1ecfa1184adac1..8cda36e0ae4d42 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -1061,7 +1061,7 @@ static irqreturn_t micfil_isr(int irq, void *devid)
+ regmap_write_bits(micfil->regmap,
+ REG_MICFIL_STAT,
+ MICFIL_STAT_CHXF(i),
+- 1);
++ MICFIL_STAT_CHXF(i));
+ }
+
+ for (i = 0; i < MICFIL_FIFO_NUM; i++) {
+@@ -1096,7 +1096,7 @@ static irqreturn_t micfil_err_isr(int irq, void *devid)
+ if (stat_reg & MICFIL_STAT_LOWFREQF) {
+ dev_dbg(&pdev->dev, "isr: ipg_clk_app is too low\n");
+ regmap_write_bits(micfil->regmap, REG_MICFIL_STAT,
+- MICFIL_STAT_LOWFREQF, 1);
++ MICFIL_STAT_LOWFREQF, MICFIL_STAT_LOWFREQF);
+ }
+
+ return IRQ_HANDLED;
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 6fbcf33fd0dea6..8e7b75cf64db42 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -275,6 +275,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ /* Add AUDMIX Backend */
+ be_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "audmix-%d", i);
++ if (!be_name)
++ return -ENOMEM;
++
+ priv->dai[num_dai + i].cpus = &dlc[1];
+ priv->dai[num_dai + i].codecs = &snd_soc_dummy_dlc;
+
+diff --git a/sound/soc/generic/audio-graph-card2.c b/sound/soc/generic/audio-graph-card2.c
+index 56f7f946882e83..68f1da6931af21 100644
+--- a/sound/soc/generic/audio-graph-card2.c
++++ b/sound/soc/generic/audio-graph-card2.c
+@@ -270,16 +270,19 @@ static enum graph_type __graph_get_type(struct device_node *lnk)
+
+ if (of_node_name_eq(np, GRAPH_NODENAME_MULTI)) {
+ ret = GRAPH_MULTI;
++ fw_devlink_purge_absent_suppliers(&np->fwnode);
+ goto out_put;
+ }
+
+ if (of_node_name_eq(np, GRAPH_NODENAME_DPCM)) {
+ ret = GRAPH_DPCM;
++ fw_devlink_purge_absent_suppliers(&np->fwnode);
+ goto out_put;
+ }
+
+ if (of_node_name_eq(np, GRAPH_NODENAME_C2C)) {
+ ret = GRAPH_C2C;
++ fw_devlink_purge_absent_suppliers(&np->fwnode);
+ goto out_put;
+ }
+
+diff --git a/sound/soc/intel/atom/sst/sst_acpi.c b/sound/soc/intel/atom/sst/sst_acpi.c
+index 29d44c989e5fc2..cfa1632ae4f03f 100644
+--- a/sound/soc/intel/atom/sst/sst_acpi.c
++++ b/sound/soc/intel/atom/sst/sst_acpi.c
+@@ -125,6 +125,28 @@ static const struct sst_res_info bytcr_res_info = {
+ .acpi_ipc_irq_index = 0
+ };
+
++/* For "LPE0F28" ACPI device found on some Android factory OS models */
++static const struct sst_res_info lpe8086_res_info = {
++ .shim_offset = 0x140000,
++ .shim_size = 0x000100,
++ .shim_phy_addr = SST_BYT_SHIM_PHY_ADDR,
++ .ssp0_offset = 0xa0000,
++ .ssp0_size = 0x1000,
++ .dma0_offset = 0x98000,
++ .dma0_size = 0x4000,
++ .dma1_offset = 0x9c000,
++ .dma1_size = 0x4000,
++ .iram_offset = 0x0c0000,
++ .iram_size = 0x14000,
++ .dram_offset = 0x100000,
++ .dram_size = 0x28000,
++ .mbox_offset = 0x144000,
++ .mbox_size = 0x1000,
++ .acpi_lpe_res_index = 1,
++ .acpi_ddr_index = 0,
++ .acpi_ipc_irq_index = 0
++};
++
+ static struct sst_platform_info byt_rvp_platform_data = {
+ .probe_data = &byt_fwparse_info,
+ .ipc_info = &byt_ipc_info,
+@@ -268,10 +290,38 @@ static int sst_acpi_probe(struct platform_device *pdev)
+ mach->pdata = &chv_platform_data;
+ pdata = mach->pdata;
+
+- ret = kstrtouint(id->id, 16, &dev_id);
+- if (ret < 0) {
+- dev_err(dev, "Unique device id conversion error: %d\n", ret);
+- return ret;
++ if (!strcmp(id->id, "LPE0F28")) {
++ struct resource *rsrc;
++
++ /* Use regular BYT SST PCI VID:PID */
++ dev_id = 0x80860F28;
++ byt_rvp_platform_data.res_info = &lpe8086_res_info;
++
++ /*
++ * The "LPE0F28" ACPI device has separate IO-mem resources for:
++ * DDR, SHIM, MBOX, IRAM, DRAM, CFG
++ * None of which covers the entire LPE base address range.
++ * lpe8086_res_info.acpi_lpe_res_index points to the SHIM.
++ * Patch this to cover the entire base address range as expected
++ * by sst_platform_get_resources().
++ */
++ rsrc = platform_get_resource(pdev, IORESOURCE_MEM,
++ pdata->res_info->acpi_lpe_res_index);
++ if (!rsrc) {
++ dev_err(dev, "Invalid SHIM base\n");
++ return -EIO;
++ }
++ rsrc->start -= pdata->res_info->shim_offset;
++ rsrc->end = rsrc->start + 0x200000 - 1;
++ } else {
++ ret = kstrtouint(id->id, 16, &dev_id);
++ if (ret < 0) {
++ dev_err(dev, "Unique device id conversion error: %d\n", ret);
++ return ret;
++ }
++
++ if (soc_intel_is_byt_cr(pdev))
++ byt_rvp_platform_data.res_info = &bytcr_res_info;
+ }
+
+ dev_dbg(dev, "ACPI device id: %x\n", dev_id);
+@@ -280,11 +330,6 @@ static int sst_acpi_probe(struct platform_device *pdev)
+ if (ret < 0)
+ return ret;
+
+- if (soc_intel_is_byt_cr(pdev)) {
+- /* override resource info */
+- byt_rvp_platform_data.res_info = &bytcr_res_info;
+- }
+-
+ /* update machine parameters */
+ mach->mach_params.acpi_ipc_irq_index =
+ pdata->res_info->acpi_ipc_irq_index;
+@@ -344,6 +389,7 @@ static void sst_acpi_remove(struct platform_device *pdev)
+ }
+
+ static const struct acpi_device_id sst_acpi_ids[] = {
++ { "LPE0F28", (unsigned long)&snd_soc_acpi_intel_baytrail_machines},
+ { "80860F28", (unsigned long)&snd_soc_acpi_intel_baytrail_machines},
+ { "808622A8", (unsigned long)&snd_soc_acpi_intel_cherrytrail_machines},
+ { },
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 4479825c08b5e3..2e786836588fbc 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -17,6 +17,7 @@
+ #include <linux/acpi.h>
+ #include <linux/clk.h>
+ #include <linux/device.h>
++#include <linux/device/bus.h>
+ #include <linux/dmi.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/gpio/machine.h>
+@@ -32,6 +33,8 @@
+ #include "../atom/sst-atom-controls.h"
+ #include "../common/soc-intel-quirks.h"
+
++#define BYT_RT5640_FALLBACK_CODEC_DEV_NAME "i2c-rt5640"
++
+ enum {
+ BYT_RT5640_DMIC1_MAP,
+ BYT_RT5640_DMIC2_MAP,
+@@ -1129,6 +1132,21 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF2 |
+ BYT_RT5640_MCLK_EN),
+ },
++ { /* Vexia Edu Atla 10 tablet */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
++ },
++ .driver_data = (void *)(BYT_RT5640_IN1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_DIFF_MIC |
++ BYT_RT5640_SSP0_AIF2 |
++ BYT_RT5640_MCLK_EN),
++ },
+ { /* Voyo Winpad A15 */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+@@ -1698,9 +1716,33 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+
+ codec_dev = acpi_get_first_physical_node(adev);
+ acpi_dev_put(adev);
+- if (!codec_dev)
+- return -EPROBE_DEFER;
+- priv->codec_dev = get_device(codec_dev);
++
++ if (codec_dev) {
++ priv->codec_dev = get_device(codec_dev);
++ } else {
++ /*
++ * Special case for Android tablets where the codec i2c_client
++ * has been manually instantiated by x86_android_tablets.ko due
++ * to a broken DSDT.
++ */
++ codec_dev = bus_find_device_by_name(&i2c_bus_type, NULL,
++ BYT_RT5640_FALLBACK_CODEC_DEV_NAME);
++ if (!codec_dev)
++ return -EPROBE_DEFER;
++
++ if (!i2c_verify_client(codec_dev)) {
++ dev_err(dev, "Error '%s' is not an i2c_client\n",
++ BYT_RT5640_FALLBACK_CODEC_DEV_NAME);
++ put_device(codec_dev);
++ }
++
++ /* fixup codec name */
++ strscpy(byt_rt5640_codec_name, BYT_RT5640_FALLBACK_CODEC_DEV_NAME,
++ sizeof(byt_rt5640_codec_name));
++
++ /* bus_find_device() returns a reference no need to get() */
++ priv->codec_dev = codec_dev;
++ }
+
+ /*
+ * swap SSP0 if bytcr is detected
+diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+index 08ae962afeb929..4eed90d13a5326 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c
++++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+@@ -1279,10 +1279,12 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+
+ for_each_card_prelinks(card, i, dai_link) {
+ if (strcmp(dai_link->name, "DPTX_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8188_dptx_codec_init;
+ } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8188_hdmi_codec_init;
+ } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 ||
+ strcmp(dai_link->name, "UL_SRC_BE") == 0) {
+@@ -1294,6 +1296,9 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM1_IN_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM2_IN_BE") == 0) {
++ if (!dai_link->num_codecs)
++ continue;
++
+ if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) {
+ /*
+ * The TDM protocol settings with fixed 4 slots are defined in
+diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+index 8b323fb199251b..de8737dcf53d70 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
++++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+@@ -1099,7 +1099,7 @@ static int mt8192_mt6359_legacy_probe(struct mtk_soc_card_data *soc_card_data)
+ dai_link->ignore = 0;
+ }
+
+- if (dai_link->num_codecs && dai_link->codecs[0].dai_name &&
++ if (dai_link->num_codecs &&
+ strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0)
+ dai_link->ops = &mt8192_rt1015_i2s_ops;
+ }
+@@ -1129,7 +1129,7 @@ static int mt8192_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ int i;
+
+ for_each_card_prelinks(card, i, dai_link)
+- if (dai_link->num_codecs && dai_link->codecs[0].dai_name &&
++ if (dai_link->num_codecs &&
+ strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0)
+ dai_link->ops = &mt8192_rt1015_i2s_ops;
+ }
+diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359.c b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+index 2832ef78eaed72..8ebf6c7502aa3d 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-mt6359.c
++++ b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+@@ -1380,10 +1380,12 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+
+ for_each_card_prelinks(card, i, dai_link) {
+ if (strcmp(dai_link->name, "DPTX_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8195_dptx_codec_init;
+ } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8195_hdmi_codec_init;
+ } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 ||
+ strcmp(dai_link->name, "UL_SRC1_BE") == 0 ||
+@@ -1396,6 +1398,9 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM1_IN_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM2_IN_BE") == 0) {
++ if (!dai_link->num_codecs)
++ continue;
++
+ if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) {
+ if (!(codec_init & MAX98390_CODEC_INIT)) {
+ dai_link->init = mt8195_max98390_init;
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index ad2492efb1cdce..64f52c75e2aa8e 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -317,7 +317,7 @@ static int stm32_sai_get_clk_div(struct stm32_sai_sub_data *sai,
+ int div;
+
+ div = DIV_ROUND_CLOSEST(input_rate, output_rate);
+- if (div > SAI_XCR1_MCKDIV_MAX(version)) {
++ if (div > SAI_XCR1_MCKDIV_MAX(version) || div <= 0) {
+ dev_err(&sai->pdev->dev, "Divider %d out of range\n", div);
+ return -EINVAL;
+ }
+@@ -378,8 +378,8 @@ static long stm32_sai_mclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ int div;
+
+ div = stm32_sai_get_clk_div(sai, *prate, rate);
+- if (div < 0)
+- return div;
++ if (div <= 0)
++ return -EINVAL;
+
+ mclk->freq = *prate / div;
+
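
DIV_ROUND_CLOSEST() yields 0 whenever the requested rate is more than roughly twice the input rate, and the result is later used as a divisor, so the guard must reject div <= 0 as well as overflow. A worked example with illustrative clock values:

        /* prate = 12288000 Hz, requested rate = 49152000 Hz:
         *   DIV_ROUND_CLOSEST(12288000, 49152000) == 0
         * so "*prate / div" would divide by zero without the new check. */
        static long round_mclk(unsigned long prate, unsigned long rate)
        {
                int div = DIV_ROUND_CLOSEST(prate, rate);

                if (div <= 0)
                        return -EINVAL;
                return prate / div;
        }
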
+diff --git a/sound/usb/6fire/chip.c b/sound/usb/6fire/chip.c
+index 33e962178c9363..d562a30b087f01 100644
+--- a/sound/usb/6fire/chip.c
++++ b/sound/usb/6fire/chip.c
+@@ -61,8 +61,10 @@ static void usb6fire_chip_abort(struct sfire_chip *chip)
+ }
+ }
+
+-static void usb6fire_chip_destroy(struct sfire_chip *chip)
++static void usb6fire_card_free(struct snd_card *card)
+ {
++ struct sfire_chip *chip = card->private_data;
++
+ if (chip) {
+ if (chip->pcm)
+ usb6fire_pcm_destroy(chip);
+@@ -72,8 +74,6 @@ static void usb6fire_chip_destroy(struct sfire_chip *chip)
+ usb6fire_comm_destroy(chip);
+ if (chip->control)
+ usb6fire_control_destroy(chip);
+- if (chip->card)
+- snd_card_free(chip->card);
+ }
+ }
+
+@@ -136,6 +136,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
+ chip->regidx = regidx;
+ chip->intf_count = 1;
+ chip->card = card;
++ card->private_free = usb6fire_card_free;
+
+ ret = usb6fire_comm_init(chip);
+ if (ret < 0)
+@@ -162,7 +163,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
+ return 0;
+
+ destroy_chip:
+- usb6fire_chip_destroy(chip);
++ snd_card_free(card);
+ return ret;
+ }
+
+@@ -181,7 +182,6 @@ static void usb6fire_chip_disconnect(struct usb_interface *intf)
+
+ chip->shutdown = true;
+ usb6fire_chip_abort(chip);
+- usb6fire_chip_destroy(chip);
+ }
+ }
+ }
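
The 6fire change is the standard ALSA conversion from an open-coded destroy path to card->private_free, so the probe error path's snd_card_free(card) and the deferred free at disconnect both funnel into one teardown function that runs exactly once. A toy model of the callback pattern (plain userspace C, hypothetical names):

#include <stdio.h>
#include <stdlib.h>

struct card {
    void *private_data;
    void (*private_free)(struct card *card);
};

struct chip { int resource; };

static void chip_card_free(struct card *card)
{
    struct chip *chip = card->private_data;

    printf("releasing chip resource %d\n", chip->resource);
    free(chip);
}

/* Stand-in for snd_card_free(): the owner runs the registered
 * destructor for its private data before freeing itself. */
static void card_free(struct card *card)
{
    if (card->private_free)
        card->private_free(card);
    free(card);
}

int main(void)
{
    struct card *card = malloc(sizeof(*card));
    struct chip *chip = malloc(sizeof(*chip));

    chip->resource = 42;
    card->private_data = chip;
    card->private_free = chip_card_free;

    card_free(card);
    return 0;
}
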
+diff --git a/sound/usb/caiaq/audio.c b/sound/usb/caiaq/audio.c
+index 4981753652a7fe..7a89872aa0cbd6 100644
+--- a/sound/usb/caiaq/audio.c
++++ b/sound/usb/caiaq/audio.c
+@@ -869,14 +869,20 @@ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev)
+ return 0;
+ }
+
+-void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
++void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev)
+ {
+ struct device *dev = caiaqdev_to_dev(cdev);
+
+ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
+ stream_stop(cdev);
++}
++
++void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
++{
++ struct device *dev = caiaqdev_to_dev(cdev);
++
++ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
+ free_urbs(cdev->data_urbs_in);
+ free_urbs(cdev->data_urbs_out);
+ kfree(cdev->data_cb_info);
+ }
+-
+diff --git a/sound/usb/caiaq/audio.h b/sound/usb/caiaq/audio.h
+index 869bf6264d6a09..07f5d064456cf7 100644
+--- a/sound/usb/caiaq/audio.h
++++ b/sound/usb/caiaq/audio.h
+@@ -3,6 +3,7 @@
+ #define CAIAQ_AUDIO_H
+
+ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev);
++void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev);
+ void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev);
+
+ #endif /* CAIAQ_AUDIO_H */
+diff --git a/sound/usb/caiaq/device.c b/sound/usb/caiaq/device.c
+index b5cbf1f195c48c..dfd820483849eb 100644
+--- a/sound/usb/caiaq/device.c
++++ b/sound/usb/caiaq/device.c
+@@ -376,6 +376,17 @@ static void setup_card(struct snd_usb_caiaqdev *cdev)
+ dev_err(dev, "Unable to set up control system (ret=%d)\n", ret);
+ }
+
++static void card_free(struct snd_card *card)
++{
++ struct snd_usb_caiaqdev *cdev = caiaqdev(card);
++
++#ifdef CONFIG_SND_USB_CAIAQ_INPUT
++ snd_usb_caiaq_input_free(cdev);
++#endif
++ snd_usb_caiaq_audio_free(cdev);
++ usb_reset_device(cdev->chip.dev);
++}
++
+ static int create_card(struct usb_device *usb_dev,
+ struct usb_interface *intf,
+ struct snd_card **cardp)
+@@ -489,6 +500,7 @@ static int init_card(struct snd_usb_caiaqdev *cdev)
+ cdev->vendor_name, cdev->product_name, usbpath);
+
+ setup_card(cdev);
++ card->private_free = card_free;
+ return 0;
+
+ err_kill_urb:
+@@ -534,15 +546,14 @@ static void snd_disconnect(struct usb_interface *intf)
+ snd_card_disconnect(card);
+
+ #ifdef CONFIG_SND_USB_CAIAQ_INPUT
+- snd_usb_caiaq_input_free(cdev);
++ snd_usb_caiaq_input_disconnect(cdev);
+ #endif
+- snd_usb_caiaq_audio_free(cdev);
++ snd_usb_caiaq_audio_disconnect(cdev);
+
+ usb_kill_urb(&cdev->ep1_in_urb);
+ usb_kill_urb(&cdev->midi_out_urb);
+
+- snd_card_free(card);
+- usb_reset_device(interface_to_usbdev(intf));
++ snd_card_free_when_closed(card);
+ }
+
+
+diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c
+index 84f26dce7f5d03..a9130891bb696d 100644
+--- a/sound/usb/caiaq/input.c
++++ b/sound/usb/caiaq/input.c
+@@ -829,15 +829,21 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
+ return ret;
+ }
+
+-void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
++void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev)
+ {
+ if (!cdev || !cdev->input_dev)
+ return;
+
+ usb_kill_urb(cdev->ep4_in_urb);
++ input_unregister_device(cdev->input_dev);
++}
++
++void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
++{
++ if (!cdev || !cdev->input_dev)
++ return;
++
+ usb_free_urb(cdev->ep4_in_urb);
+ cdev->ep4_in_urb = NULL;
+-
+- input_unregister_device(cdev->input_dev);
+ cdev->input_dev = NULL;
+ }
+diff --git a/sound/usb/caiaq/input.h b/sound/usb/caiaq/input.h
+index c42891e7be884d..fbe267f85d025f 100644
+--- a/sound/usb/caiaq/input.h
++++ b/sound/usb/caiaq/input.h
+@@ -4,6 +4,7 @@
+
+ void snd_usb_caiaq_input_dispatch(struct snd_usb_caiaqdev *cdev, char *buf, unsigned int len);
+ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev);
++void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev);
+ void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev);
+
+ #endif
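
Taken together, the caiaq hunks split each teardown helper into a disconnect half (stop I/O while the USB device is going away) and a free half (release memory once the card is truly idle). The resulting order, sketched from the hunks above, with the second block running from card->private_free after the last handle closes:

/* disconnect time, device already unplugged: */
snd_card_disconnect(card);             /* invalidate user-space handles  */
snd_usb_caiaq_input_disconnect(cdev);  /* kill URB, unregister input dev */
snd_usb_caiaq_audio_disconnect(cdev);  /* stop streams                   */
snd_card_free_when_closed(card);       /* defer the rest                 */

/* later, from card->private_free, once the card is released: */
snd_usb_caiaq_input_free(cdev);        /* free URB, clear pointers       */
snd_usb_caiaq_audio_free(cdev);        /* free data URBs and cb info     */
usb_reset_device(cdev->chip.dev);
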
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 60fcb872a80b6c..5c3edb69b373d8 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -36,6 +36,12 @@ union uac23_clock_multiplier_desc {
+ struct uac_clock_multiplier_descriptor v3;
+ };
+
++/* check whether the descriptor bLength has the minimal length */
++#define DESC_LENGTH_CHECK(p, proto) \
++ ((proto) == UAC_VERSION_3 ? \
++ ((p)->v3.bLength >= sizeof((p)->v3)) : \
++ ((p)->v2.bLength >= sizeof((p)->v2)))
++
+ #define GET_VAL(p, proto, field) \
+ ((proto) == UAC_VERSION_3 ? (p)->v3.field : (p)->v2.field)
+
+@@ -58,6 +64,8 @@ static bool validate_clock_source(void *p, int id, int proto)
+ {
+ union uac23_clock_source_desc *cs = p;
+
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
+ return GET_VAL(cs, proto, bClockID) == id;
+ }
+
+@@ -65,13 +73,27 @@ static bool validate_clock_selector(void *p, int id, int proto)
+ {
+ union uac23_clock_selector_desc *cs = p;
+
+- return GET_VAL(cs, proto, bClockID) == id;
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
++ if (GET_VAL(cs, proto, bClockID) != id)
++ return false;
++ /* additional length check for baCSourceID array (in bNrInPins size)
++	 * and two more fields (whose sizes depend on the protocol)
++ */
++ if (proto == UAC_VERSION_3)
++ return cs->v3.bLength >= sizeof(cs->v3) + cs->v3.bNrInPins +
++ 4 /* bmControls */ + 2 /* wCSelectorDescrStr */;
++ else
++ return cs->v2.bLength >= sizeof(cs->v2) + cs->v2.bNrInPins +
++ 1 /* bmControls */ + 1 /* iClockSelector */;
+ }
+
+ static bool validate_clock_multiplier(void *p, int id, int proto)
+ {
+ union uac23_clock_multiplier_desc *cs = p;
+
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
+ return GET_VAL(cs, proto, bClockID) == id;
+ }
+
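
The clock.c validators previously trusted bLength implicitly, so a broken or malicious device could present a descriptor shorter than the structure being read. The selector case also has a variable-length tail: bNrInPins source IDs followed by bmControls and a string index, whose sizes differ between UAC2 and UAC3. A standalone illustration of the UAC2 minimum-length arithmetic (struct layout simplified, for illustration only):

#include <stdio.h>
#include <stdint.h>

/* Fixed part of a UAC2 clock selector descriptor up to baCSourceID[],
 * per USB Audio Class 2.0, simplified for this example. */
struct uac2_clock_selector_fixed {
    uint8_t bLength;
    uint8_t bDescriptorType;
    uint8_t bDescriptorSubtype;
    uint8_t bClockID;
    uint8_t bNrInPins;
};

int main(void)
{
    uint8_t nr_in_pins = 3;
    /* fixed header + one source ID per pin + bmControls (1 byte in
     * UAC2) + iClockSelector string index (1 byte) */
    size_t min_len = sizeof(struct uac2_clock_selector_fixed)
                     + nr_in_pins + 1 + 1;

    printf("minimum bLength for %u pins: %zu\n", nr_in_pins, min_len);
    return 0;
}
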
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 24c981c9b2405d..199d0603cf8e59 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -324,7 +324,6 @@ YAMAHA_DEVICE(0x105a, NULL),
+ YAMAHA_DEVICE(0x105b, NULL),
+ YAMAHA_DEVICE(0x105c, NULL),
+ YAMAHA_DEVICE(0x105d, NULL),
+-YAMAHA_DEVICE(0x1718, "P-125"),
+ {
+ USB_DEVICE(0x0499, 0x1503),
+ QUIRK_DRIVER_INFO {
+@@ -391,6 +390,19 @@ YAMAHA_DEVICE(0x1718, "P-125"),
+ }
+ }
+ },
++{
++ USB_DEVICE(0x0499, 0x1718),
++ QUIRK_DRIVER_INFO {
++ /* .vendor_name = "Yamaha", */
++ /* .product_name = "P-125", */
++ QUIRK_DATA_COMPOSITE {
++ { QUIRK_DATA_STANDARD_AUDIO(1) },
++ { QUIRK_DATA_STANDARD_AUDIO(2) },
++ { QUIRK_DATA_MIDI_YAMAHA(3) },
++ QUIRK_COMPOSITE_END
++ }
++ }
++},
+ YAMAHA_DEVICE(0x2000, "DGP-7"),
+ YAMAHA_DEVICE(0x2001, "DGP-5"),
+ YAMAHA_DEVICE(0x2002, NULL),
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index cee49341dabc16..e6bbb6e21dbc94 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -555,6 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
+ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+
+ if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD ||
+@@ -566,10 +567,14 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac
+ if (err < 0)
+ dev_dbg(&dev->dev, "error sending boot message: %d\n", err);
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err);
+@@ -901,6 +906,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev)
+ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+ u8 bootresponse[0x12];
+ int fwsize;
+@@ -936,10 +942,14 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ dev_dbg(&dev->dev, "device initialised!\n");
+
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1249,6 +1259,7 @@ static void mbox3_setup_defaults(struct usb_device *dev)
+ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+ int descriptor_size;
+
+@@ -1262,10 +1273,14 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ dev_dbg(&dev->dev, "MBOX3: device initialised!\n");
+
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "MBOX3: error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "MBOX3: error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
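
All three boot quirks above share the same hardening: the post-firmware device descriptor is read into a stack copy and only committed if bNumConfigurations did not grow, since a larger value would let later code index past the allocated dev->config[] array. A minimal sketch of the guard (trimmed descriptor, hypothetical helper):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Trimmed stand-in for struct usb_device_descriptor. */
struct device_descriptor { uint8_t bNumConfigurations; };

/* Accept a freshly fetched descriptor only if it cannot make later
 * config-array indexing go out of bounds. */
static int update_descriptor(struct device_descriptor *cur,
                             const struct device_descriptor *fresh)
{
    if (fresh->bNumConfigurations > cur->bNumConfigurations) {
        fprintf(stderr, "too large bNumConfigurations: %u\n",
                fresh->bNumConfigurations);
        return -1;
    }
    memcpy(cur, fresh, sizeof(*cur));
    return 0;
}

int main(void)
{
    struct device_descriptor cur = { 1 }, bogus = { 8 }, ok = { 1 };

    printf("%d %d\n", update_descriptor(&cur, &bogus),
           update_descriptor(&cur, &ok));    /* -1 0 */
    return 0;
}
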
+diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
+index 709ccad972e2ff..612047ca5fe7ac 100644
+--- a/sound/usb/usx2y/us122l.c
++++ b/sound/usb/usx2y/us122l.c
+@@ -617,10 +617,7 @@ static void snd_us122l_disconnect(struct usb_interface *intf)
+ usb_put_intf(usb_ifnum_to_if(us122l->dev, 1));
+ usb_put_dev(us122l->dev);
+
+- while (atomic_read(&us122l->mmap_count))
+- msleep(500);
+-
+- snd_card_free(card);
++ snd_card_free_when_closed(card);
+ }
+
+ static int snd_us122l_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
+index 52f4e6652407d5..4c4ce0319d624d 100644
+--- a/sound/usb/usx2y/usbusx2y.c
++++ b/sound/usb/usx2y/usbusx2y.c
+@@ -423,7 +423,7 @@ static void snd_usx2y_disconnect(struct usb_interface *intf)
+ }
+ if (usx2y->us428ctls_sharedmem)
+ wake_up(&usx2y->us428ctls_wait_queue_head);
+- snd_card_free(card);
++ snd_card_free_when_closed(card);
+ }
+
+ static int snd_usx2y_probe(struct usb_interface *intf,
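
Both USB disconnect handlers above switch from a synchronous snd_card_free(), which can block in disconnect context (us122l even busy-waited on mmap_count), to snd_card_free_when_closed(), which returns immediately and frees once the last user-space reference drops. A toy refcount model of the deferred free (not the ALSA implementation):

#include <stdio.h>
#include <stdlib.h>

struct card { int refs; int dying; };

static void card_put(struct card *c)
{
    if (--c->refs == 0 && c->dying) {
        printf("last user gone, freeing card\n");
        free(c);
    }
}

/* Mark the object dying and drop the driver's own reference;
 * the actual free happens at whichever put() reaches zero. */
static void card_free_when_closed(struct card *c)
{
    c->dying = 1;
    card_put(c);
}

int main(void)
{
    struct card *c = malloc(sizeof(*c));

    c->refs = 2;                /* driver + one open file descriptor */
    c->dying = 0;
    card_free_when_closed(c);   /* disconnect: returns immediately */
    card_put(c);                /* user space closes its fd: freed now */
    return 0;
}
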
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index 7b8d9ec89ebd35..c032d2c6ab6d55 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -80,7 +80,8 @@ symbol_lookup_callback(__maybe_unused void *disasm_info,
+ static int
+ init_context(disasm_ctx_t *ctx, const char *arch,
+ __maybe_unused const char *disassembler_options,
+- __maybe_unused unsigned char *image, __maybe_unused ssize_t len)
++ __maybe_unused unsigned char *image, __maybe_unused ssize_t len,
++ __maybe_unused __u64 func_ksym)
+ {
+ char *triple;
+
+@@ -109,12 +110,13 @@ static void destroy_context(disasm_ctx_t *ctx)
+ }
+
+ static int
+-disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc)
++disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc,
++ __u64 func_ksym)
+ {
+ char buf[256];
+ int count;
+
+- count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, pc,
++ count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, func_ksym + pc,
+ buf, sizeof(buf));
+ if (json_output)
+ printf_json(buf);
+@@ -136,8 +138,21 @@ int disasm_init(void)
+ #ifdef HAVE_LIBBFD_SUPPORT
+ #define DISASM_SPACER "\t"
+
++struct disasm_info {
++ struct disassemble_info info;
++ __u64 func_ksym;
++};
++
++static void disasm_print_addr(bfd_vma addr, struct disassemble_info *info)
++{
++ struct disasm_info *dinfo = container_of(info, struct disasm_info, info);
++
++ addr += dinfo->func_ksym;
++ generic_print_address(addr, info);
++}
++
+ typedef struct {
+- struct disassemble_info *info;
++ struct disasm_info *info;
+ disassembler_ftype disassemble;
+ bfd *bfdf;
+ } disasm_ctx_t;
+@@ -215,7 +230,7 @@ static int fprintf_json_styled(void *out,
+
+ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ const char *disassembler_options,
+- unsigned char *image, ssize_t len)
++ unsigned char *image, ssize_t len, __u64 func_ksym)
+ {
+ struct disassemble_info *info;
+ char tpath[PATH_MAX];
+@@ -238,12 +253,13 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ }
+ bfdf = ctx->bfdf;
+
+- ctx->info = malloc(sizeof(struct disassemble_info));
++ ctx->info = malloc(sizeof(struct disasm_info));
+ if (!ctx->info) {
+ p_err("mem alloc failed");
+ goto err_close;
+ }
+- info = ctx->info;
++ ctx->info->func_ksym = func_ksym;
++ info = &ctx->info->info;
+
+ if (json_output)
+ init_disassemble_info_compat(info, stdout,
+@@ -272,6 +288,7 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ info->disassembler_options = disassembler_options;
+ info->buffer = image;
+ info->buffer_length = len;
++ info->print_address_func = disasm_print_addr;
+
+ disassemble_init_for_target(info);
+
+@@ -304,9 +321,10 @@ static void destroy_context(disasm_ctx_t *ctx)
+
+ static int
+ disassemble_insn(disasm_ctx_t *ctx, __maybe_unused unsigned char *image,
+- __maybe_unused ssize_t len, int pc)
++ __maybe_unused ssize_t len, int pc,
++ __maybe_unused __u64 func_ksym)
+ {
+- return ctx->disassemble(pc, ctx->info);
++ return ctx->disassemble(pc, &ctx->info->info);
+ }
+
+ int disasm_init(void)
+@@ -331,7 +349,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ if (!len)
+ return -1;
+
+- if (init_context(&ctx, arch, disassembler_options, image, len))
++ if (init_context(&ctx, arch, disassembler_options, image, len, func_ksym))
+ return -1;
+
+ if (json_output)
+@@ -360,7 +378,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ printf("%4x:" DISASM_SPACER, pc);
+ }
+
+- count = disassemble_insn(&ctx, image, len, pc);
++ count = disassemble_insn(&ctx, image, len, pc, func_ksym);
+
+ if (json_output) {
+ /* Operand array, was started in fprintf_json. Before
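
The bpftool change threads the program's kernel address (func_ksym) into both disassembler backends so printed branch targets match real kernel addresses instead of zero-based offsets. The libbfd path embeds disassemble_info in a wrapper struct and recovers it in the print-address hook; a runnable demonstration of that container_of idiom (types invented for the example):

#include <stdio.h>
#include <stddef.h>
#include <inttypes.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct info { int dummy; };
struct disasm_info {
    struct info info;
    uint64_t func_ksym;
};

/* The callback only receives the embedded member, like
 * print_address_func above, yet needs the wrapper's extra field. */
static void print_addr(uint64_t addr, struct info *info)
{
    struct disasm_info *dinfo = container_of(info, struct disasm_info, info);

    printf("0x%" PRIx64 "\n", addr + dinfo->func_ksym);
}

int main(void)
{
    struct disasm_info d = { .func_ksym = 0xffffffffc0001000ULL };

    print_addr(0x10, &d.info);    /* prints the ksym-relative target */
    return 0;
}
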
+diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h
+index 5d60fd43f88309..f72df614db3a24 100644
+--- a/tools/include/nolibc/arch-s390.h
++++ b/tools/include/nolibc/arch-s390.h
+@@ -10,6 +10,7 @@
+
+ #include "compiler.h"
+ #include "crt.h"
++#include "std.h"
+
+ /* Syscalls for s390:
+ * - registers are 64-bit
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index d3a542649e6ba2..f1da986ab92177 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3985,7 +3985,7 @@ static bool sym_is_subprog(const Elf64_Sym *sym, int text_shndx)
+ return true;
+
+ /* global function */
+- return bind == STB_GLOBAL && type == STT_FUNC;
++ return (bind == STB_GLOBAL || bind == STB_WEAK) && type == STT_FUNC;
+ }
+
+ static int find_extern_btf_id(const struct btf *btf, const char *ext_name)
+@@ -4389,7 +4389,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
+
+ static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog)
+ {
+- return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1;
++ return prog->sec_idx == obj->efile.text_shndx;
+ }
+
+ struct bpf_program *
+@@ -5094,6 +5094,7 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ enum libbpf_map_type map_type = map->libbpf_type;
+ char *cp, errmsg[STRERR_BUFSIZE];
+ int err, zero = 0;
++ size_t mmap_sz;
+
+ if (obj->gen_loader) {
+ bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps,
+@@ -5107,8 +5108,8 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ if (err) {
+ err = -errno;
+ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error setting initial map(%s) contents: %s\n",
+- map->name, cp);
++ pr_warn("map '%s': failed to set initial contents: %s\n",
++ bpf_map__name(map), cp);
+ return err;
+ }
+
+@@ -5118,11 +5119,43 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ if (err) {
+ err = -errno;
+ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error freezing map(%s) as read-only: %s\n",
+- map->name, cp);
++ pr_warn("map '%s': failed to freeze as read-only: %s\n",
++ bpf_map__name(map), cp);
+ return err;
+ }
+ }
++
++ /* Remap anonymous mmap()-ed "map initialization image" as
++ * a BPF map-backed mmap()-ed memory, but preserving the same
++ * memory address. This will cause kernel to change process'
++ * page table to point to a different piece of kernel memory,
++ * but from userspace point of view memory address (and its
++ * contents, being identical at this point) will stay the
++ * same. This mapping will be released by bpf_object__close()
++ * as per normal clean up procedure.
++ */
++ mmap_sz = bpf_map_mmap_sz(map);
++ if (map->def.map_flags & BPF_F_MMAPABLE) {
++ void *mmaped;
++ int prot;
++
++ if (map->def.map_flags & BPF_F_RDONLY_PROG)
++ prot = PROT_READ;
++ else
++ prot = PROT_READ | PROT_WRITE;
++ mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0);
++ if (mmaped == MAP_FAILED) {
++ err = -errno;
++ pr_warn("map '%s': failed to re-mmap() contents: %d\n",
++ bpf_map__name(map), err);
++ return err;
++ }
++ map->mmaped = mmaped;
++ } else if (map->mmaped) {
++ munmap(map->mmaped, mmap_sz);
++ map->mmaped = NULL;
++ }
++
+ return 0;
+ }
+
+@@ -5439,8 +5472,7 @@ bpf_object__create_maps(struct bpf_object *obj)
+ err = bpf_object__populate_internal_map(obj, map);
+ if (err < 0)
+ goto err_out;
+- }
+- if (map->def.type == BPF_MAP_TYPE_ARENA) {
++ } else if (map->def.type == BPF_MAP_TYPE_ARENA) {
+ map->mmaped = mmap((void *)(long)map->map_extra,
+ bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE,
+ map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED,
+@@ -7352,8 +7384,14 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
+ opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
+
+ /* special check for usdt to use uprobe_multi link */
+- if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK))
++ if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) {
++		/* for BPF_TRACE_UPROBE_MULTI, the user may query expected_attach_type
++		 * from prog, while the value we pass to the kernel comes from opts, so
++		 * update both.
++		 */
+ prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
++ opts->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
++ }
+
+ if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) {
+ int btf_obj_fd = 0, btf_type_id = 0, err;
+@@ -7443,6 +7481,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
+ load_attr.attach_btf_id = prog->attach_btf_id;
+ load_attr.kern_version = kern_version;
+ load_attr.prog_ifindex = prog->prog_ifindex;
++ load_attr.expected_attach_type = prog->expected_attach_type;
+
+ /* specify func_info/line_info only if kernel supports them */
+ if (obj->btf && btf__fd(obj->btf) >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
+@@ -7474,9 +7513,6 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
+ insns_cnt = prog->insns_cnt;
+ }
+
+- /* allow prog_prepare_load_fn to change expected_attach_type */
+- load_attr.expected_attach_type = prog->expected_attach_type;
+-
+ if (obj->gen_loader) {
+ bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name,
+ license, insns, insns_cnt, &load_attr,
+@@ -13872,46 +13908,11 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
+ for (i = 0; i < s->map_cnt; i++) {
+ struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
+ struct bpf_map *map = *map_skel->map;
+- size_t mmap_sz = bpf_map_mmap_sz(map);
+- int prot, map_fd = map->fd;
+- void **mmaped = map_skel->mmaped;
+-
+- if (!mmaped)
+- continue;
+
+- if (!(map->def.map_flags & BPF_F_MMAPABLE)) {
+- *mmaped = NULL;
++ if (!map_skel->mmaped)
+ continue;
+- }
+-
+- if (map->def.type == BPF_MAP_TYPE_ARENA) {
+- *mmaped = map->mmaped;
+- continue;
+- }
+
+- if (map->def.map_flags & BPF_F_RDONLY_PROG)
+- prot = PROT_READ;
+- else
+- prot = PROT_READ | PROT_WRITE;
+-
+- /* Remap anonymous mmap()-ed "map initialization image" as
+- * a BPF map-backed mmap()-ed memory, but preserving the same
+- * memory address. This will cause kernel to change process'
+- * page table to point to a different piece of kernel memory,
+- * but from userspace point of view memory address (and its
+- * contents, being identical at this point) will stay the
+- * same. This mapping will be released by bpf_object__close()
+- * as per normal clean up procedure, so we don't need to worry
+- * about it from skeleton's clean up perspective.
+- */
+- *mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0);
+- if (*mmaped == MAP_FAILED) {
+- err = -errno;
+- *mmaped = NULL;
+- pr_warn("failed to re-mmap() map '%s': %d\n",
+- bpf_map__name(map), err);
+- return libbpf_err(err);
+- }
++ *map_skel->mmaped = map->mmaped;
+ }
+
+ return 0;
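
The libbpf rework moves the MAP_FIXED re-mmap from skeleton loading into bpf_object__populate_internal_map(), so every mmapable global-data map, skeleton or not, ends up backed by the kernel map at an unchanged virtual address. A standalone demonstration of the address-preserving remap, with a memfd standing in for the BPF map fd (assumes Linux with memfd_create):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t sz = (size_t)sysconf(_SC_PAGESIZE);
    char *img = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    int fd = memfd_create("backing", 0);

    if (img == MAP_FAILED || fd < 0 || ftruncate(fd, sz) < 0)
        return 1;

    strcpy(img, "initial image");
    /* same virtual address, new backing object underneath */
    if (mmap(img, sz, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
        return 1;

    printf("addr unchanged at %p, contents now \"%s\"\n",
           (void *)img, img);
    return 0;
}
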
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 9cd3d4109788c0..7489306cd6f7fd 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -396,6 +396,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
+ pr_warn_elf("failed to create SYMTAB data");
+ return -EINVAL;
+ }
++ /* Ensure libelf translates byte-order of symbol records */
++ sec->data->d_type = ELF_T_SYM;
+
+ str_off = strset__add_str(linker->strtab_strs, sec->sec_name);
+ if (str_off < 0)
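
The one-line linker fix matters for cross-endian linking: libelf only byte-swaps section contents on output when the Elf_Data carries a translatable type, and elf_newdata() defaults to ELF_T_BYTE. A fragment of the idiom (a sketch; scn, syms and nr_syms are assumed to come from the surrounding code):

Elf_Data *data = elf_newdata(scn);     /* symtab section's data buffer */

data->d_buf  = syms;                   /* Elf64_Sym records, host order */
data->d_size = nr_syms * sizeof(Elf64_Sym);
data->d_type = ELF_T_SYM;              /* the ELF_T_BYTE default would
                                        * skip byte-order translation
                                        * when writing the output */
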
+diff --git a/tools/lib/thermal/Makefile b/tools/lib/thermal/Makefile
+index 2d0d255fd0e1c4..8890fd57b110cc 100644
+--- a/tools/lib/thermal/Makefile
++++ b/tools/lib/thermal/Makefile
+@@ -121,7 +121,9 @@ all: fixdep
+
+ clean:
+ $(call QUIET_CLEAN, libthermal) $(RM) $(LIBTHERMAL_A) \
+- *.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBTHERMAL_VERSION) .*.d .*.cmd LIBTHERMAL-CFLAGS $(LIBTHERMAL_PC)
++ *.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBTHERMAL_VERSION) \
++ .*.d .*.cmd LIBTHERMAL-CFLAGS $(LIBTHERMAL_PC) \
++ $(srctree)/tools/$(THERMAL_UAPI)
+
+ $(LIBTHERMAL_PC):
+ $(QUIET_GEN)sed -e "s|@PREFIX@|$(prefix)|" \
+diff --git a/tools/lib/thermal/commands.c b/tools/lib/thermal/commands.c
+index 73d4d4e8d6ec0b..27b4442f0e347a 100644
+--- a/tools/lib/thermal/commands.c
++++ b/tools/lib/thermal/commands.c
+@@ -261,9 +261,25 @@ static struct genl_ops thermal_cmd_ops = {
+ .o_ncmds = ARRAY_SIZE(thermal_cmds),
+ };
+
+-static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int cmd,
+- int flags, void *arg)
++struct cmd_param {
++ int tz_id;
++};
++
++typedef int (*cmd_cb_t)(struct nl_msg *, struct cmd_param *);
++
++static int thermal_genl_tz_id_encode(struct nl_msg *msg, struct cmd_param *p)
++{
++ if (p->tz_id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, p->tz_id))
++ return -1;
++
++ return 0;
++}
++
++static thermal_error_t thermal_genl_auto(struct thermal_handler *th, cmd_cb_t cmd_cb,
++ struct cmd_param *param,
++ int cmd, int flags, void *arg)
+ {
++ thermal_error_t ret = THERMAL_ERROR;
+ struct nl_msg *msg;
+ void *hdr;
+
+@@ -274,45 +290,55 @@ static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int
+ hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, thermal_cmd_ops.o_id,
+ 0, flags, cmd, THERMAL_GENL_VERSION);
+ if (!hdr)
+- return THERMAL_ERROR;
++ goto out;
+
+- if (id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, id))
+- return THERMAL_ERROR;
++ if (cmd_cb && cmd_cb(msg, param))
++ goto out;
+
+ if (nl_send_msg(th->sk_cmd, th->cb_cmd, msg, genl_handle_msg, arg))
+- return THERMAL_ERROR;
++ goto out;
+
++ ret = THERMAL_SUCCESS;
++out:
+ nlmsg_free(msg);
+
+- return THERMAL_SUCCESS;
++ return ret;
+ }
+
+ thermal_error_t thermal_cmd_get_tz(struct thermal_handler *th, struct thermal_zone **tz)
+ {
+- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_TZ_GET_ID,
++ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_TZ_GET_ID,
+ NLM_F_DUMP | NLM_F_ACK, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_cdev(struct thermal_handler *th, struct thermal_cdev **tc)
+ {
+- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_CDEV_GET,
++ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_CDEV_GET,
+ NLM_F_DUMP | NLM_F_ACK, tc);
+ }
+
+ thermal_error_t thermal_cmd_get_trip(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TRIP,
+- 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_TRIP, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_governor(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_exit(struct thermal_handler *th)
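
The libthermal refactor does two things: per-command attributes are now encoded by a callback instead of a hard-coded tz_id parameter, and every exit path funnels through out: so the allocated nl_msg is no longer leaked when genlmsg_put() or the encoder fails. A toy of the callback-parameter shape (simplified types, no real netlink):

#include <stdio.h>

struct msg { int attrs[4]; int n; };
struct cmd_param { int tz_id; };

typedef int (*cmd_cb_t)(struct msg *, struct cmd_param *);

static int encode_tz_id(struct msg *m, struct cmd_param *p)
{
    if (p->tz_id >= 0)
        m->attrs[m->n++] = p->tz_id;
    return 0;
}

/* The generic sender delegates attribute encoding to the callback;
 * in the real code the message is freed on every path via goto out. */
static int send_cmd(cmd_cb_t cb, struct cmd_param *p, int cmd)
{
    struct msg m = { .n = 0 };

    if (cb && cb(&m, p))
        return -1;
    printf("cmd %d with %d attrs\n", cmd, m.n);
    return 0;
}

int main(void)
{
    struct cmd_param p = { .tz_id = 7 };

    send_cmd(encode_tz_id, &p, 1);    /* per-zone command */
    send_cmd(NULL, NULL, 2);          /* dump command, no attrs */
    return 0;
}
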
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 9fccdff682af70..0ee690498d311e 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -1197,7 +1197,7 @@ endif
+ ifneq ($(NO_LIBTRACEEVENT),1)
+ $(call feature_check,libtraceevent)
+ ifeq ($(feature-libtraceevent), 1)
+- CFLAGS += -DHAVE_LIBTRACEEVENT
++ CFLAGS += -DHAVE_LIBTRACEEVENT $(shell $(PKG_CONFIG) --cflags libtraceevent)
+ LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtraceevent)
+ EXTLIBS += $(shell $(PKG_CONFIG) --libs-only-l libtraceevent)
+ LIBTRACEEVENT_VERSION := $(shell $(PKG_CONFIG) --modversion libtraceevent).0.0
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index eb30c8eca48878..8eab3cdb2c6deb 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -771,7 +771,7 @@ static void display_histogram(int buckets[], bool use_nsec)
+
+ bar_len = buckets[0] * bar_total / total;
+ printf(" %4d - %-4d %s | %10d | %.*s%*s |\n",
+- 0, 1, "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
++ 0, 1, use_nsec ? "ns" : "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
+
+ for (i = 1; i < NUM_BUCKET - 1; i++) {
+ int start = (1 << (i - 1));
+diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
+index 82cb4b1010aa36..2624ec24fc73b0 100644
+--- a/tools/perf/builtin-list.c
++++ b/tools/perf/builtin-list.c
+@@ -112,7 +112,7 @@ static void wordwrap(FILE *fp, const char *s, int start, int max, int corr)
+ }
+ }
+
+-static void default_print_event(void *ps, const char *pmu_name, const char *topic,
++static void default_print_event(void *ps, const char *topic, const char *pmu_name,
+ const char *event_name, const char *event_alias,
+ const char *scale_unit __maybe_unused,
+ bool deprecated, const char *event_type_desc,
+@@ -353,7 +353,7 @@ static void fix_escape_fprintf(FILE *fp, struct strbuf *buf, const char *fmt, ..
+ fputs(buf->buf, fp);
+ }
+
+-static void json_print_event(void *ps, const char *pmu_name, const char *topic,
++static void json_print_event(void *ps, const char *topic, const char *pmu_name,
+ const char *event_name, const char *event_alias,
+ const char *scale_unit,
+ bool deprecated, const char *event_type_desc,
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 661832756a248f..668a8ad2bf604f 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -712,15 +712,19 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ if (!cpu_map__is_dummy(evsel_list->core.user_requested_cpus)) {
+- if (affinity__setup(&saved_affinity) < 0)
+- return -1;
++ if (affinity__setup(&saved_affinity) < 0) {
++ err = -1;
++ goto err_out;
++ }
+ affinity = &saved_affinity;
+ }
+
+ evlist__for_each_entry(evsel_list, counter) {
+ counter->reset_group = false;
+- if (bpf_counter__load(counter, &target))
+- return -1;
++ if (bpf_counter__load(counter, &target)) {
++ err = -1;
++ goto err_out;
++ }
+ if (!(evsel__is_bperf(counter)))
+ all_counters_use_bpf = false;
+ }
+@@ -763,7 +767,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+
+ switch (stat_handle_error(counter)) {
+ case COUNTER_FATAL:
+- return -1;
++ err = -1;
++ goto err_out;
+ case COUNTER_RETRY:
+ goto try_again;
+ case COUNTER_SKIP:
+@@ -804,7 +809,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+
+ switch (stat_handle_error(counter)) {
+ case COUNTER_FATAL:
+- return -1;
++ err = -1;
++ goto err_out;
+ case COUNTER_RETRY:
+ goto try_again_reset;
+ case COUNTER_SKIP:
+@@ -817,6 +823,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+ }
+ affinity__cleanup(affinity);
++ affinity = NULL;
+
+ evlist__for_each_entry(evsel_list, counter) {
+ if (!counter->supported) {
+@@ -829,8 +836,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ stat_config.unit_width = l;
+
+ if (evsel__should_store_id(counter) &&
+- evsel__store_ids(counter, evsel_list))
+- return -1;
++ evsel__store_ids(counter, evsel_list)) {
++ err = -1;
++ goto err_out;
++ }
+ }
+
+ if (evlist__apply_filters(evsel_list, &counter)) {
+@@ -851,20 +860,23 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ if (err < 0)
+- return err;
++ goto err_out;
+
+ err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list,
+ process_synthesized_event, is_pipe);
+ if (err < 0)
+- return err;
++ goto err_out;
++
+ }
+
+ if (target.initial_delay) {
+ pr_info(EVLIST_DISABLED_MSG);
+ } else {
+ err = enable_counters();
+- if (err)
+- return -1;
++ if (err) {
++ err = -1;
++ goto err_out;
++ }
+ }
+
+ /* Exec the command, if any */
+@@ -874,8 +886,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ if (target.initial_delay > 0) {
+ usleep(target.initial_delay * USEC_PER_MSEC);
+ err = enable_counters();
+- if (err)
+- return -1;
++ if (err) {
++ err = -1;
++ goto err_out;
++ }
+
+ pr_info(EVLIST_ENABLED_MSG);
+ }
+@@ -895,7 +909,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ if (workload_exec_errno) {
+ const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
+ pr_err("Workload failed: %s\n", emsg);
+- return -1;
++ err = -1;
++ goto err_out;
+ }
+
+ if (WIFSIGNALED(status))
+@@ -942,6 +957,13 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ evlist__close(evsel_list);
+
+ return WEXITSTATUS(status);
++
++err_out:
++ if (forks)
++ evlist__cancel_workload(evsel_list);
++
++ affinity__cleanup(affinity);
++ return err;
+ }
+
+ static int run_perf_stat(int argc, const char **argv, int run_idx)
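
The perf-stat hunks convert a dozen early returns into goto err_out so that a forked-but-never-started workload is cancelled and the affinity state is released; note how affinity is NULL-ed after the normal-path cleanup so the error label is reachable from either side without a double free. A minimal sketch of that pattern:

#include <stdio.h>
#include <stdlib.h>

static int run(int fail_early)
{
    char *affinity = malloc(16);
    int err = 0;

    if (!affinity)
        return -1;

    if (fail_early) {
        err = -1;
        goto err_out;        /* affinity still live on this path */
    }

    free(affinity);          /* normal-path cleanup ... */
    affinity = NULL;         /* ... NULL-ed so err_out stays safe */

err_out:
    free(affinity);          /* free(NULL) is a no-op */
    return err;
}

int main(void)
{
    printf("%d %d\n", run(1), run(0));    /* -1 0 */
    return 0;
}
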
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index 8449f2beb54d74..65cab0bdd668ab 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -2423,6 +2423,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+ char msg[1024];
+ void *args, *augmented_args = NULL;
+ int augmented_args_size;
++ size_t printed = 0;
+
+ if (sc == NULL)
+ return -1;
+@@ -2438,8 +2439,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+
+ args = perf_evsel__sc_tp_ptr(evsel, args, sample);
+ augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size);
+- syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
+- fprintf(trace->output, "%s", msg);
++ printed += syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
++ fprintf(trace->output, "%.*s", (int)printed, msg);
+ err = 0;
+ out_put:
+ thread__put(thread);
+@@ -2802,7 +2803,7 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
+ printed += syscall_arg_fmt__scnprintf_val(arg, bf + printed, size - printed, &syscall_arg, val);
+ }
+
+- return printed + fprintf(trace->output, "%s", bf);
++ return printed + fprintf(trace->output, "%.*s", (int)printed, bf);
+ }
+
+ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
+@@ -2811,13 +2812,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
+ {
+ struct thread *thread;
+ int callchain_ret = 0;
+- /*
+- * Check if we called perf_evsel__disable(evsel) due to, for instance,
+- * this event's max_events having been hit and this is an entry coming
+- * from the ring buffer that we should discard, since the max events
+- * have already been considered/printed.
+- */
+- if (evsel->disabled)
++
++ if (evsel->nr_events_printed >= evsel->max_events)
+ return 0;
+
+ thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
+@@ -3922,6 +3918,9 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
+ sizeof(__u32), BPF_ANY);
+ }
+ }
++
++ if (trace->skel)
++ trace->filter_pids.map = trace->skel->maps.pids_filtered;
+ #endif
+ err = trace__set_filter_pids(trace);
+ if (err < 0)
+@@ -5038,6 +5037,10 @@ int cmd_trace(int argc, const char **argv)
+ if (trace.summary_only)
+ trace.summary = trace.summary_only;
+
++ /* Keep exited threads, otherwise information might be lost for summary */
++ if (trace.summary)
++ symbol_conf.keep_exited_threads = true;
++
+ if (output_name != NULL) {
+ err = trace__open_output(&trace, output_name);
+ if (err < 0) {
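
The two "%.*s" changes above print exactly the number of bytes that syscall__scnprintf_args() and syscall_arg_fmt__scnprintf_val() reported, so stale bytes left in the on-stack buffer by a previous event can no longer leak into the output. A standalone illustration (snprintf stands in for the kernel-style scnprintf; when the result fits, both return the bytes written):

#include <stdio.h>

int main(void)
{
    char buf[32] = "old stale contents, old stale";
    int printed = snprintf(buf, sizeof(buf), "(fd: %d)", 3);

    printf("%.*s\n", printed, buf);    /* "(fd: 3)", nothing stale */
    return 0;
}
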
+diff --git a/tools/perf/tests/attr/test-stat-default b/tools/perf/tests/attr/test-stat-default
+index a1e2da0a9a6ddb..e47fb49446799b 100644
+--- a/tools/perf/tests/attr/test-stat-default
++++ b/tools/perf/tests/attr/test-stat-default
+@@ -88,98 +88,142 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-1 b/tools/perf/tests/attr/test-stat-detailed-1
+index 1c52cb05c900d7..3d500d3e0c5c8a 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-1
++++ b/tools/perf/tests/attr/test-stat-detailed-1
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-2 b/tools/perf/tests/attr/test-stat-detailed-2
+index 7e961d24a885a7..01777a63752fe6 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-2
++++ b/tools/perf/tests/attr/test-stat-detailed-2
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+@@ -230,8 +274,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event29:base-stat]
+-fd=29
++[event33:base-stat]
++fd=33
+ type=3
+ config=1
+ optional=1
+@@ -240,8 +284,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event30:base-stat]
+-fd=30
++[event34:base-stat]
++fd=34
+ type=3
+ config=65537
+ optional=1
+@@ -250,8 +294,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event31:base-stat]
+-fd=31
++[event35:base-stat]
++fd=35
+ type=3
+ config=3
+ optional=1
+@@ -260,8 +304,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event32:base-stat]
+-fd=32
++[event36:base-stat]
++fd=36
+ type=3
+ config=65539
+ optional=1
+@@ -270,8 +314,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event33:base-stat]
+-fd=33
++[event37:base-stat]
++fd=37
+ type=3
+ config=4
+ optional=1
+@@ -280,8 +324,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event34:base-stat]
+-fd=34
++[event38:base-stat]
++fd=38
+ type=3
+ config=65540
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-3 b/tools/perf/tests/attr/test-stat-detailed-3
+index e50535f45977c6..8400abd7e1e488 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-3
++++ b/tools/perf/tests/attr/test-stat-detailed-3
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+@@ -230,8 +274,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event29:base-stat]
+-fd=29
++[event33:base-stat]
++fd=33
+ type=3
+ config=1
+ optional=1
+@@ -240,8 +284,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event30:base-stat]
+-fd=30
++[event34:base-stat]
++fd=34
+ type=3
+ config=65537
+ optional=1
+@@ -250,8 +294,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event31:base-stat]
+-fd=31
++[event35:base-stat]
++fd=35
+ type=3
+ config=3
+ optional=1
+@@ -260,8 +304,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event32:base-stat]
+-fd=32
++[event36:base-stat]
++fd=36
+ type=3
+ config=65539
+ optional=1
+@@ -270,8 +314,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event33:base-stat]
+-fd=33
++[event37:base-stat]
++fd=37
+ type=3
+ config=4
+ optional=1
+@@ -280,8 +324,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event34:base-stat]
+-fd=34
++[event38:base-stat]
++fd=38
+ type=3
+ config=65540
+ optional=1
+@@ -290,8 +334,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event35:base-stat]
+-fd=35
++[event39:base-stat]
++fd=39
+ type=3
+ config=512
+ optional=1
+@@ -300,8 +344,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event36:base-stat]
+-fd=36
++[event40:base-stat]
++fd=40
+ type=3
+ config=66048
+ optional=1
+diff --git a/tools/perf/tests/shell/test_stat_intel_tpebs.sh b/tools/perf/tests/shell/test_stat_intel_tpebs.sh
+new file mode 100755
+index 00000000000000..c60b29add98012
+--- /dev/null
++++ b/tools/perf/tests/shell/test_stat_intel_tpebs.sh
+@@ -0,0 +1,19 @@
++#!/bin/bash
++# test Intel TPEBS counting mode
++# SPDX-License-Identifier: GPL-2.0
++
++set -e
++grep -q GenuineIntel /proc/cpuinfo || { echo Skipping non-Intel; exit 2; }
++
++# Use this event for testing because it should exist in all platforms
++event=cache-misses:R
++
++# Without this command-line option, a default value or zero is returned
++echo "Testing without --record-tpebs"
++result=$(perf stat -e "$event" true 2>&1)
++[[ "$result" =~ $event ]] || exit 1
++
++# On platforms that do not support TPEBS, it should execute without error.
++echo "Testing with --record-tpebs"
++result=$(perf stat -e "$event" --record-tpebs -a sleep 0.01 2>&1)
++[[ "$result" =~ "perf record" && "$result" =~ $event ]] || exit 1
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index 5e9fbcfad7d443..f4615fa4280d8d 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -2439,12 +2439,6 @@ static void cs_etm__clear_all_traceid_queues(struct cs_etm_queue *etmq)
+
+ /* Ignore return value */
+ cs_etm__process_traceid_queue(etmq, tidq);
+-
+- /*
+- * Generate an instruction sample with the remaining
+- * branchstack entries.
+- */
+- cs_etm__flush(etmq, tidq);
+ }
+ }
+
+@@ -2587,7 +2581,7 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
+
+ while (1) {
+ if (!etm->heap.heap_cnt)
+- goto out;
++ break;
+
+ /* Take the entry at the top of the min heap */
+ cs_queue_nr = etm->heap.heap_array[0].queue_nr;
+@@ -2670,6 +2664,23 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
+ ret = auxtrace_heap__add(&etm->heap, cs_queue_nr, cs_timestamp);
+ }
+
++ for (i = 0; i < etm->queues.nr_queues; i++) {
++ struct int_node *inode;
++
++ etmq = etm->queues.queue_array[i].priv;
++ if (!etmq)
++ continue;
++
++ intlist__for_each_entry(inode, etmq->traceid_queues_list) {
++ int idx = (int)(intptr_t)inode->priv;
++
++ /* Flush any remaining branch stack entries */
++ tidq = etmq->traceid_queues[idx];
++ ret = cs_etm__end_block(etmq, tidq);
++ if (ret)
++ return ret;
++ }
++ }
+ out:
+ return ret;
+ }
+diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c
+index 766cbd005f32a8..c2ce33e447e35b 100644
+--- a/tools/perf/util/disasm.c
++++ b/tools/perf/util/disasm.c
+@@ -1391,7 +1391,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ */
+ list_for_each_entry_safe(dl, tmp, &notes->src->source, al.node) {
+ list_del(&dl->al.node);
+- free(dl);
++ disasm_line__free(dl);
+ }
+ }
+ count = -1;
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index 3a719edafc7ad2..6c4227e9832fac 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -46,6 +46,7 @@
+ #include <sys/mman.h>
+ #include <sys/prctl.h>
+ #include <sys/timerfd.h>
++#include <sys/wait.h>
+
+ #include <linux/bitops.h>
+ #include <linux/hash.h>
+@@ -1413,6 +1414,8 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+ int child_ready_pipe[2], go_pipe[2];
+ char bf;
+
++ evlist->workload.cork_fd = -1;
++
+ if (pipe(child_ready_pipe) < 0) {
+ perror("failed to create 'ready' pipe");
+ return -1;
+@@ -1465,7 +1468,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+ * For cancelling the workload without actually running it,
+ * the parent will just close workload.cork_fd, without writing
+ * anything, i.e. read will return zero and we just exit()
+- * here.
++ * here (See evlist__cancel_workload()).
+ */
+ if (ret != 1) {
+ if (ret == -1)
+@@ -1529,7 +1532,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+
+ int evlist__start_workload(struct evlist *evlist)
+ {
+- if (evlist->workload.cork_fd > 0) {
++ if (evlist->workload.cork_fd >= 0) {
+ char bf = 0;
+ int ret;
+ /*
+@@ -1540,12 +1543,24 @@ int evlist__start_workload(struct evlist *evlist)
+ perror("unable to write to pipe");
+
+ close(evlist->workload.cork_fd);
++ evlist->workload.cork_fd = -1;
+ return ret;
+ }
+
+ return 0;
+ }
+
++void evlist__cancel_workload(struct evlist *evlist)
++{
++ int status;
++
++ if (evlist->workload.cork_fd >= 0) {
++ close(evlist->workload.cork_fd);
++ evlist->workload.cork_fd = -1;
++ waitpid(evlist->workload.pid, &status, WNOHANG);
++ }
++}
++
+ int evlist__parse_sample(struct evlist *evlist, union perf_event *event, struct perf_sample *sample)
+ {
+ struct evsel *evsel = evlist__event2evsel(evlist, event);
+diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
+index cb91dc9117a272..12f929ffdf920d 100644
+--- a/tools/perf/util/evlist.h
++++ b/tools/perf/util/evlist.h
+@@ -184,6 +184,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target,
+ const char *argv[], bool pipe_output,
+ void (*exec_error)(int signo, siginfo_t *info, void *ucontext));
+ int evlist__start_workload(struct evlist *evlist);
++void evlist__cancel_workload(struct evlist *evlist);
+
+ struct option;
+
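The cork-fd handshake above is easier to follow outside of perf. Below is a minimal, self-contained sketch of the same pattern (all names hypothetical; only the mechanism mirrors evlist.c): the child blocks on read(), one written byte means start, and a close with nothing written means cancel -- exactly the two cases evlist__start_workload() and evlist__cancel_workload() now distinguish via cork_fd.

/* cork_pipe_demo.c - sketch of the cork-fd start/cancel pattern.
 * All names here are hypothetical; only the mechanism mirrors evlist.c.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
	int go_pipe[2];
	pid_t pid;

	if (pipe(go_pipe) < 0) {
		perror("pipe");
		return 1;
	}

	pid = fork();
	if (pid == 0) {
		char bf;
		ssize_t ret;

		close(go_pipe[1]);          /* child keeps only the read end */
		ret = read(go_pipe[0], &bf, 1);
		if (ret != 1)               /* 0 bytes: writer closed => cancelled */
			_exit(1);
		printf("started\n");        /* 1 byte: uncorked => run the workload */
		_exit(0);
	}

	close(go_pipe[0]);                  /* parent keeps only the write end */

	/* To start:  write(go_pipe[1], "", 1);
	 * To cancel: close without writing, as below.
	 */
	close(go_pipe[1]);
	waitpid(pid, NULL, 0);              /* reap, like evlist__cancel_workload() */
	return 0;
}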
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 706be5e4a07617..b2e6e73d7a925c 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1342,7 +1342,7 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
+ * we need to update the symtab_type if needed.
+ */
+ if (m->comp && is_kmod_dso(dso)) {
+- dso__set_symtab_type(dso, dso__symtab_type(dso));
++ dso__set_symtab_type(dso, dso__symtab_type(dso)+1);
+ dso__set_comp(dso, m->comp);
+ }
+ map__put(map);
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index 051feb93ed8d40..bf5090f5220bbd 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -366,6 +366,12 @@ static const char * const mem_lvl[] = {
+ };
+
+ static const char * const mem_lvlnum[] = {
++ [PERF_MEM_LVLNUM_L1] = "L1",
++ [PERF_MEM_LVLNUM_L2] = "L2",
++ [PERF_MEM_LVLNUM_L3] = "L3",
++ [PERF_MEM_LVLNUM_L4] = "L4",
++ [PERF_MEM_LVLNUM_L2_MHB] = "L2 MHB",
++ [PERF_MEM_LVLNUM_MSC] = "Memory-side Cache",
+ [PERF_MEM_LVLNUM_UNC] = "Uncached",
+ [PERF_MEM_LVLNUM_CXL] = "CXL",
+ [PERF_MEM_LVLNUM_IO] = "I/O",
+@@ -448,7 +454,7 @@ int perf_mem__lvl_scnprintf(char *out, size_t sz, const struct mem_info *mem_inf
+ if (mem_lvlnum[lvl])
+ l += scnprintf(out + l, sz - l, mem_lvlnum[lvl]);
+ else
+- l += scnprintf(out + l, sz - l, "L%d", lvl);
++ l += scnprintf(out + l, sz - l, "Unknown level %d", lvl);
+
+ l += scnprintf(out + l, sz - l, " %s", hit_miss);
+ return l;
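The mem_lvlnum additions rely on C designated array initializers leaving unnamed slots NULL, which is what makes the "Unknown level" fallback above safe. A standalone sketch of that lookup-with-fallback shape (illustrative values, not the real PERF_MEM_LVLNUM_* indices):

/* lvlname_demo.c - sketch of the sparse-table lookup with fallback that
 * perf_mem__lvl_scnprintf() uses; indices and names are illustrative.
 */
#include <stdio.h>

static const char * const lvlnum[] = {
	[1] = "L1",
	[2] = "L2",
	[3] = "L3",
};

static const char *lvl_name(unsigned int lvl, char *buf, size_t sz)
{
	/* Designated initializers leave the gaps NULL, so unknown
	 * levels fall through to the formatted fallback string. */
	if (lvl < sizeof(lvlnum) / sizeof(lvlnum[0]) && lvlnum[lvl])
		return lvlnum[lvl];
	snprintf(buf, sz, "Unknown level %u", lvl);
	return buf;
}

int main(void)
{
	char buf[32];

	printf("%s\n", lvl_name(2, buf, sizeof(buf)));  /* L2 */
	printf("%s\n", lvl_name(9, buf, sizeof(buf)));  /* Unknown level 9 */
	return 0;
}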
+diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c
+index 5ccfe4b64cdfe4..0dacc133ed3960 100644
+--- a/tools/perf/util/pfm.c
++++ b/tools/perf/util/pfm.c
+@@ -233,7 +233,7 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
+ }
+
+ if (is_libpfm_event_supported(name, cpus, threads)) {
+- print_cb->print_event(print_state, pinfo->name, topic,
++ print_cb->print_event(print_state, topic, pinfo->name,
+ name, info->equiv,
+ /*scale_unit=*/NULL,
+ /*deprecated=*/NULL, "PFM event",
+@@ -267,8 +267,8 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
+ continue;
+
+ print_cb->print_event(print_state,
+- pinfo->name,
+ topic,
++ pinfo->name,
+ name, /*alias=*/NULL,
+ /*scale_unit=*/NULL,
+ /*deprecated=*/NULL, "PFM event",
+diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
+index 3fcabfd8fca190..efa556e01ade21 100644
+--- a/tools/perf/util/pmus.c
++++ b/tools/perf/util/pmus.c
+@@ -492,8 +492,8 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p
+ goto free;
+
+ print_cb->print_event(print_state,
+- aliases[j].pmu_name,
+ aliases[j].topic,
++ aliases[j].pmu_name,
+ aliases[j].name,
+ aliases[j].alias,
+ aliases[j].scale_unit,
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 630e16c54ed5cb..a30f88ed030044 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -1379,6 +1379,10 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
+ if (ret >= 0 && tf.pf.skip_empty_arg)
+ ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+
++#if _ELFUTILS_PREREQ(0, 142)
++ dwarf_cfi_end(tf.pf.cfi_eh);
++#endif
++
+ if (ret < 0 || tf.ntevs == 0) {
+ for (i = 0; i < tf.ntevs; i++)
+ clear_probe_trace_event(&tf.tevs[i]);
+@@ -1583,8 +1587,21 @@ int debuginfo__find_probe_point(struct debuginfo *dbg, u64 addr,
+
+ /* Find a corresponding function (name, baseline and baseaddr) */
+ if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
+- /* Get function entry information */
+- func = basefunc = dwarf_diename(&spdie);
++ /*
++ * Get function entry information.
++ *
++ * As described in the document DWARF Debugging Information
++ * Format Version 5, section 2.22 Linkage Names, "mangled names,
++ * are used in various ways, ... to distinguish multiple
++ * entities that have the same name".
++ *
++ * Firstly try to get distinct linkage name, if fail then
++ * rollback to get associated name in DIE.
++ */
++ func = basefunc = die_get_linkage_name(&spdie);
++ if (!func)
++ func = basefunc = dwarf_diename(&spdie);
++
+ if (!func ||
+ die_entrypc(&spdie, &baseaddr) != 0 ||
+ dwarf_decl_line(&spdie, &baseline) != 0) {
+diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
+index 3add5ff516e12d..724db829b49e02 100644
+--- a/tools/perf/util/probe-finder.h
++++ b/tools/perf/util/probe-finder.h
+@@ -64,9 +64,9 @@ struct probe_finder {
+
+ /* For variable searching */
+ #if _ELFUTILS_PREREQ(0, 142)
+- /* Call Frame Information from .eh_frame */
++ /* Call Frame Information from .eh_frame. Owned by this struct. */
+ Dwarf_CFI *cfi_eh;
+- /* Call Frame Information from .debug_frame */
++ /* Call Frame Information from .debug_frame. Not owned. */
+ Dwarf_CFI *cfi_dbg;
+ #endif
+ Dwarf_Op *fb_ops; /* Frame base attribute */
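For the linkage-name-first lookup described in the probe-finder hunk, the equivalent done directly against elfutils libdw looks roughly like the sketch below. die_get_linkage_name() is a perf-internal helper; this standalone version is an assumption of how the same preference order can be expressed with stock libdw calls, with minimal error handling:

/* Hedged sketch: prefer DW_AT_linkage_name, fall back to the plain DIE
 * name, mirroring the order debuginfo__find_probe_point() now uses.
 */
#include <dwarf.h>
#include <elfutils/libdw.h>
#include <stddef.h>

static const char *func_name(Dwarf_Die *die)
{
	Dwarf_Attribute attr;
	const char *name = NULL;

	/* Mangled linkage names distinguish entities that share a
	 * source-level name (DWARF 5, section 2.22). */
	if (dwarf_attr_integrate(die, DW_AT_linkage_name, &attr))
		name = dwarf_formstring(&attr);
	if (!name)
		name = dwarf_diename(die);  /* roll back to the DIE name */
	return name;
}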
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 089220aaa5c929..a5ebee8b23bbe3 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -5385,6 +5385,9 @@ static int parse_cpu_str(char *cpu_str, cpu_set_t *cpu_set, int cpu_set_size)
+ if (*next == '-') /* no negative cpu numbers */
+ return 1;
+
++ if (*next == '\0' || *next == '\n')
++ break;
++
+ start = strtoul(next, &next, 10);
+
+ if (start >= CPU_SUBSET_MAXCPUS)
+@@ -9781,7 +9784,7 @@ void cmdline(int argc, char **argv)
+ * Parse some options early, because they may make other options invalid,
+ * like adding the MSR counter with --add and at the same time using --no-msr.
+ */
+- while ((opt = getopt_long_only(argc, argv, "MPn:", long_options, &option_index)) != -1) {
++ while ((opt = getopt_long_only(argc, argv, "+MPn:", long_options, &option_index)) != -1) {
+ switch (opt) {
+ case 'M':
+ no_msr = 1;
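The leading '+' added to the turbostat optstring is standard glibc getopt behavior: it disables argument permutation, so scanning stops at the first non-option and anything after it (for example a forwarded command and its flags) is left untouched. A small sketch of the effect (hypothetical program, not turbostat):

/* getopt_plus_demo.c - '+' in the optstring makes glibc getopt stop at
 * the first non-option instead of permuting argv.
 * Try: ./getopt_plus_demo -M sleep -n 5
 * With "+MPn:", parsing stops at "sleep", leaving "-n 5" for the
 * forwarded command; without '+', glibc would consume it here.
 */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int opt;

	while ((opt = getopt(argc, argv, "+MPn:")) != -1) {
		switch (opt) {
		case 'M':
			printf("got -M\n");
			break;
		case 'P':
			printf("got -P\n");
			break;
		case 'n':
			printf("got -n %s\n", optarg);
			break;
		default:
			return 1;
		}
	}
	printf("first forwarded arg: %s\n",
	       optind < argc ? argv[optind] : "(none)");
	return 0;
}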
+diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c
+index d8909b2b535a08..14e146a23879a7 100644
+--- a/tools/testing/selftests/arm64/abi/hwcap.c
++++ b/tools/testing/selftests/arm64/abi/hwcap.c
+@@ -355,8 +355,8 @@ static void sveaes_sigill(void)
+
+ static void sveb16b16_sigill(void)
+ {
+- /* BFADD ZA.H[W0, 0], {Z0.H-Z1.H} */
+- asm volatile(".inst 0xC1E41C00" : : : );
++ /* BFADD Z0.H, Z0.H, Z0.H */
++ asm volatile(".inst 0x65000000" : : : );
+ }
+
+ static void svepmull_sigill(void)
+@@ -484,7 +484,7 @@ static const struct hwcap_data {
+ .name = "F8DP2",
+ .at_hwcap = AT_HWCAP2,
+ .hwcap_bit = HWCAP2_F8DP2,
+- .cpuinfo = "f8dp4",
++ .cpuinfo = "f8dp2",
+ .sigill_fn = f8dp2_sigill,
+ },
+ {
+diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+index 2b1425b92b6991..a3d1e23fe02aff 100644
+--- a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
++++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+@@ -65,7 +65,7 @@ static int check_single_included_tags(int mem_type, int mode)
+ ptr = mte_insert_tags(ptr, BUFFER_SIZE);
+ /* Check tag value */
+ if (MT_FETCH_TAG((uintptr_t)ptr) == tag) {
+- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
++ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%x\n",
+ MT_FETCH_TAG((uintptr_t)ptr),
+ MT_INCLUDE_VALID_TAG(tag));
+ result = KSFT_FAIL;
+@@ -97,7 +97,7 @@ static int check_multiple_included_tags(int mem_type, int mode)
+ ptr = mte_insert_tags(ptr, BUFFER_SIZE);
+ /* Check tag value */
+ if (MT_FETCH_TAG((uintptr_t)ptr) < tag) {
+- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
++ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%lx\n",
+ MT_FETCH_TAG((uintptr_t)ptr),
+ MT_INCLUDE_VALID_TAGS(excl_mask));
+ result = KSFT_FAIL;
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index 00ffd34c66d301..1120f5aa76550f 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -38,7 +38,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
+ if (cur_mte_cxt.trig_si_code == si->si_code)
+ cur_mte_cxt.fault_valid = true;
+ else
+- ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=$lx, fault addr=%lx\n",
++ ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=%llx, fault addr=%lx\n",
+ ((ucontext_t *)uc)->uc_mcontext.pc,
+ addr);
+ return;
+@@ -64,7 +64,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
+ exit(1);
+ }
+ } else if (signum == SIGBUS) {
+- ksft_print_msg("INFO: SIGBUS signal at pc=%lx, fault addr=%lx, si_code=%lx\n",
++ ksft_print_msg("INFO: SIGBUS signal at pc=%llx, fault addr=%lx, si_code=%x\n",
+ ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code);
+ if ((cur_mte_cxt.trig_range >= 0 &&
+ addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
+index 11ee801e75e7e0..6c3b4d4f173ac6 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
+@@ -34,6 +34,12 @@ DECLARE_TRACE(bpf_testmod_test_write_bare,
+ TP_ARGS(task, ctx)
+ );
+
++/* Used in bpf_testmod_test_read() to test __nullable suffix */
++DECLARE_TRACE(bpf_testmod_test_nullable_bare,
++ TP_PROTO(struct bpf_testmod_test_read_ctx *ctx__nullable),
++ TP_ARGS(ctx__nullable)
++);
++
+ #undef BPF_TESTMOD_DECLARE_TRACE
+ #ifdef DECLARE_TRACE_WRITABLE
+ #define BPF_TESTMOD_DECLARE_TRACE(call, proto, args, size) \
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+index 72f565af4f829e..a89af5b9587cd7 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+@@ -356,6 +356,8 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
+ if (bpf_testmod_loop_test(101) > 100)
+ trace_bpf_testmod_test_read(current, &ctx);
+
++ trace_bpf_testmod_test_nullable_bare(NULL);
++
+ /* Magic number to enable writable tp */
+ if (len == 64) {
+ struct bpf_testmod_test_writable_ctx writable = {
+diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
+index e0cba4178e41d3..cff78179642342 100644
+--- a/tools/testing/selftests/bpf/network_helpers.c
++++ b/tools/testing/selftests/bpf/network_helpers.c
+@@ -448,6 +448,52 @@ char *ping_command(int family)
+ return "ping";
+ }
+
++int remove_netns(const char *name)
++{
++ char *cmd;
++ int r;
++
++ r = asprintf(&cmd, "ip netns del %s >/dev/null 2>&1", name);
++ if (r < 0) {
++ log_err("Failed to malloc cmd");
++ return -1;
++ }
++
++ r = system(cmd);
++ free(cmd);
++ return r;
++}
++
++int make_netns(const char *name)
++{
++ char *cmd;
++ int r;
++
++ r = asprintf(&cmd, "ip netns add %s", name);
++ if (r < 0) {
++ log_err("Failed to malloc cmd");
++ return -1;
++ }
++
++ r = system(cmd);
++ free(cmd);
++
++ if (r)
++ return r;
++
++ r = asprintf(&cmd, "ip -n %s link set lo up", name);
++ if (r < 0) {
++ log_err("Failed to malloc cmd for setting up lo");
++ remove_netns(name);
++ return -1;
++ }
++
++ r = system(cmd);
++ free(cmd);
++
++ return r;
++}
++
+ struct nstoken {
+ int orig_netns_fd;
+ };
+diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
+index aac5b94d637991..7b05fc3fedb069 100644
+--- a/tools/testing/selftests/bpf/network_helpers.h
++++ b/tools/testing/selftests/bpf/network_helpers.h
+@@ -1,6 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef __NETWORK_HELPERS_H
+ #define __NETWORK_HELPERS_H
++#include <arpa/inet.h>
+ #include <sys/socket.h>
+ #include <sys/types.h>
+ #include <linux/types.h>
+@@ -92,6 +93,8 @@ struct nstoken;
+ struct nstoken *open_netns(const char *name);
+ void close_netns(struct nstoken *token);
+ int send_recv_data(int lfd, int fd, uint32_t total_bytes);
++int make_netns(const char *name);
++int remove_netns(const char *name);
+
+ static __u16 csum_fold(__u32 csum)
+ {
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+index 871d16cb95cfde..1a2f99596916fb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+@@ -5,6 +5,7 @@
+ #include <test_progs.h>
+ #include <pthread.h>
+ #include <network_helpers.h>
++#include <sys/sysinfo.h>
+
+ #include "timer_lockup.skel.h"
+
+@@ -52,6 +53,11 @@ void test_timer_lockup(void)
+ pthread_t thrds[2];
+ void *ret;
+
++ if (get_nprocs() < 2) {
++ test__skip();
++ return;
++ }
++
+ skel = timer_lockup__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load"))
+ return;
+diff --git a/tools/testing/selftests/bpf/prog_tests/tp_btf_nullable.c b/tools/testing/selftests/bpf/prog_tests/tp_btf_nullable.c
+new file mode 100644
+index 00000000000000..accc42e01f8a88
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/tp_btf_nullable.c
+@@ -0,0 +1,14 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <test_progs.h>
++#include "test_tp_btf_nullable.skel.h"
++
++void test_tp_btf_nullable(void)
++{
++ if (!env.has_testmod) {
++ test__skip();
++ return;
++ }
++
++ RUN_TESTS(test_tp_btf_nullable);
++}
+diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+index 43f40c4fe241ac..1c8b678e2e9a39 100644
+--- a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
++++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+@@ -28,8 +28,8 @@ struct {
+ },
+ };
+
+-SEC(".data.A") struct bpf_spin_lock lockA;
+-SEC(".data.B") struct bpf_spin_lock lockB;
++static struct bpf_spin_lock lockA SEC(".data.A");
++static struct bpf_spin_lock lockB SEC(".data.B");
+
+ SEC("?tc")
+ int lock_id_kptr_preserve(void *ctx)
+diff --git a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+new file mode 100644
+index 00000000000000..5aaf2b065f86c2
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+@@ -0,0 +1,28 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include "vmlinux.h"
++#include <bpf/bpf_helpers.h>
++#include <bpf/bpf_tracing.h>
++#include "../bpf_testmod/bpf_testmod.h"
++#include "bpf_misc.h"
++
++SEC("tp_btf/bpf_testmod_test_nullable_bare")
++/* This used to be a failure test, but raw_tp nullable arguments can now
++ * directly be dereferenced, whether they have nullable annotation or not,
++ * and don't need to be explicitly checked.
++ */
++__success
++int BPF_PROG(handle_tp_btf_nullable_bare1, struct bpf_testmod_test_read_ctx *nullable_ctx)
++{
++ return nullable_ctx->len;
++}
++
++SEC("tp_btf/bpf_testmod_test_nullable_bare")
++int BPF_PROG(handle_tp_btf_nullable_bare2, struct bpf_testmod_test_read_ctx *nullable_ctx)
++{
++ if (nullable_ctx)
++ return nullable_ctx->len;
++ return 0;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index d5d0cb4eb1975b..080e4fd012d3d4 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -18,11 +18,15 @@
+ #include <bpf/btf.h>
+ #include "json_writer.h"
+
++#include "network_helpers.h"
++
++/* backtrace() and backtrace_symbols_fd() are glibc-specific:
++ * include the header when glibc is available and provide stub
++ * implementations when another libc implementation is used.
++ */
+ #ifdef __GLIBC__
+ #include <execinfo.h> /* backtrace */
+-#endif
+-
+-/* Default backtrace funcs if missing at link */
++#else
+ __weak int backtrace(void **buffer, int size)
+ {
+ return 0;
+@@ -32,6 +36,7 @@ __weak void backtrace_symbols_fd(void *const *buffer, int size, int fd)
+ {
+ dprintf(fd, "<backtrace not supported>\n");
+ }
++#endif /*__GLIBC__ */
+
+ static bool verbose(void)
+ {
+@@ -624,6 +629,92 @@ int compare_stack_ips(int smap_fd, int amap_fd, int stack_trace_len)
+ return err;
+ }
+
++struct netns_obj {
++ char *nsname;
++ struct tmonitor_ctx *tmon;
++ struct nstoken *nstoken;
++};
++
++/* Create a new network namespace with the given name.
++ *
++ * Create a new network namespace and set the network namespace of the
++ * current process to the new network namespace if the argument "open" is
++ * true. This function should be paired with netns_free() to release the
++ * resource and delete the network namespace.
++ *
++ * It also implements the functionality of the option "-m" by starting
++ * a traffic monitor in the background to capture the packets in this
++ * network namespace if the current test or subtest matches the pattern.
++ *
++ * nsname: the name of the network namespace to create.
++ * open: open the network namespace if true.
++ *
++ * Return: the network namespace object on success, NULL on failure.
++ */
++struct netns_obj *netns_new(const char *nsname, bool open)
++{
++ struct netns_obj *netns_obj = malloc(sizeof(*netns_obj));
++ const char *test_name, *subtest_name;
++ int r;
++
++ if (!netns_obj)
++ return NULL;
++ memset(netns_obj, 0, sizeof(*netns_obj));
++
++ netns_obj->nsname = strdup(nsname);
++ if (!netns_obj->nsname)
++ goto fail;
++
++ /* Create the network namespace */
++ r = make_netns(nsname);
++ if (r)
++ goto fail;
++
++ /* Start traffic monitor */
++ if (env.test->should_tmon ||
++ (env.subtest_state && env.subtest_state->should_tmon)) {
++ test_name = env.test->test_name;
++ subtest_name = env.subtest_state ? env.subtest_state->name : NULL;
++ netns_obj->tmon = traffic_monitor_start(nsname, test_name, subtest_name);
++ if (!netns_obj->tmon) {
++ fprintf(stderr, "Failed to start traffic monitor for %s\n", nsname);
++ goto fail;
++ }
++ } else {
++ netns_obj->tmon = NULL;
++ }
++
++ if (open) {
++ netns_obj->nstoken = open_netns(nsname);
++ if (!netns_obj->nstoken)
++ goto fail;
++ }
++
++ return netns_obj;
++fail:
++ traffic_monitor_stop(netns_obj->tmon);
++ remove_netns(nsname);
++ free(netns_obj->nsname);
++ free(netns_obj);
++ return NULL;
++}
++
++/* Delete the network namespace.
++ *
++ * This function should be paired with netns_new() to delete the namespace
++ * created by netns_new().
++ */
++void netns_free(struct netns_obj *netns_obj)
++{
++ if (!netns_obj)
++ return;
++ traffic_monitor_stop(netns_obj->tmon);
++ close_netns(netns_obj->nstoken);
++ remove_netns(netns_obj->nsname);
++ free(netns_obj->nsname);
++ free(netns_obj);
++}
++
+ /* extern declarations for test funcs */
+ #define DEFINE_TEST(name) \
+ extern void test_##name(void) __weak; \
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index b1e949fb16cf33..dbc4127f9dbd27 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -428,6 +428,10 @@ int write_sysctl(const char *sysctl, const char *value);
+ int get_bpf_max_tramp_links_from(struct btf *btf);
+ int get_bpf_max_tramp_links(void);
+
++struct netns_obj;
++struct netns_obj *netns_new(const char *name, bool open);
++void netns_free(struct netns_obj *netns);
++
+ #ifdef __x86_64__
+ #define SYS_NANOSLEEP_KPROBE_NAME "__x64_sys_nanosleep"
+ #elif defined(__s390x__)
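A sketch of how a test might pair the netns_new()/netns_free() helpers declared above; the test name and body are hypothetical, and this only builds inside the bpf selftest harness:

/* Hedged sketch: only the create/open/free pattern comes from the
 * helpers added in this patch; everything else is made up.
 */
#include "test_progs.h"

void test_netns_obj_demo(void)
{
	struct netns_obj *ns;

	/* Create the namespace and enter it (open == true); traffic
	 * monitoring starts automatically when -m matches this test. */
	ns = netns_new("demo_ns", true);
	if (!ASSERT_OK_PTR(ns, "netns_new"))
		return;

	/* ... exercise sockets/routes confined to demo_ns ... */

	/* Leaves the namespace, stops the monitor, deletes demo_ns. */
	netns_free(ns);
}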
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 3e02d7267de8bb..61a747afcd05fb 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -56,6 +56,8 @@ static void running_handler(int a);
+ #define BPF_SOCKHASH_FILENAME "test_sockhash_kern.bpf.o"
+ #define CG_PATH "/sockmap"
+
++#define EDATAINTEGRITY 2001
++
+ /* global sockets */
+ int s1, s2, c1, c2, p1, p2;
+ int test_cnt;
+@@ -86,6 +88,10 @@ int ktls;
+ int peek_flag;
+ int skb_use_parser;
+ int txmsg_omit_skb_parser;
++int verify_push_start;
++int verify_push_len;
++int verify_pop_start;
++int verify_pop_len;
+
+ static const struct option long_options[] = {
+ {"help", no_argument, NULL, 'h' },
+@@ -418,16 +424,18 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt,
+ {
+ bool drop = opt->drop_expected;
+ unsigned char k = 0;
++ int i, j, fp;
+ FILE *file;
+- int i, fp;
+
+ file = tmpfile();
+ if (!file) {
+ perror("create file for sendpage");
+ return 1;
+ }
+- for (i = 0; i < iov_length * cnt; i++, k++)
+- fwrite(&k, sizeof(char), 1, file);
++ for (i = 0; i < cnt; i++, k = 0) {
++ for (j = 0; j < iov_length; j++, k++)
++ fwrite(&k, sizeof(char), 1, file);
++ }
+ fflush(file);
+ fseek(file, 0, SEEK_SET);
+
+@@ -510,42 +518,111 @@ static int msg_alloc_iov(struct msghdr *msg,
+ return -ENOMEM;
+ }
+
+-static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz)
++/* In push or pop test, we need to do some calculations for msg_verify_data */
++static void msg_verify_date_prep(void)
+ {
+- int i, j = 0, bytes_cnt = 0;
+- unsigned char k = 0;
++ int push_range_end = txmsg_start_push + txmsg_end_push - 1;
++ int pop_range_end = txmsg_start_pop + txmsg_pop - 1;
++
++ if (txmsg_end_push && txmsg_pop &&
++ txmsg_start_push <= pop_range_end && txmsg_start_pop <= push_range_end) {
++ /* The push range and the pop range overlap */
++ int overlap_len;
++
++ verify_push_start = txmsg_start_push;
++ verify_pop_start = txmsg_start_pop;
++ if (txmsg_start_push < txmsg_start_pop)
++ overlap_len = min(push_range_end - txmsg_start_pop + 1, txmsg_pop);
++ else
++ overlap_len = min(pop_range_end - txmsg_start_push + 1, txmsg_end_push);
++ verify_push_len = max(txmsg_end_push - overlap_len, 0);
++ verify_pop_len = max(txmsg_pop - overlap_len, 0);
++ } else {
++ /* Otherwise */
++ verify_push_start = txmsg_start_push;
++ verify_pop_start = txmsg_start_pop;
++ verify_push_len = txmsg_end_push;
++ verify_pop_len = txmsg_pop;
++ }
++}
++
++static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz,
++ unsigned char *k_p, int *bytes_cnt_p,
++ int *check_cnt_p, int *push_p)
++{
++ int bytes_cnt = *bytes_cnt_p, check_cnt = *check_cnt_p, push = *push_p;
++ unsigned char k = *k_p;
++ int i, j;
+
+- for (i = 0; i < msg->msg_iovlen; i++) {
++ for (i = 0, j = 0; i < msg->msg_iovlen && size; i++, j = 0) {
+ unsigned char *d = msg->msg_iov[i].iov_base;
+
+ /* Special case test for skb ingress + ktls */
+ if (i == 0 && txmsg_ktls_skb) {
+ if (msg->msg_iov[i].iov_len < 4)
+- return -EIO;
++ return -EDATAINTEGRITY;
+ if (memcmp(d, "PASS", 4) != 0) {
+ fprintf(stderr,
+ "detected skb data error with skb ingress update @iov[%i]:%i \"%02x %02x %02x %02x\" != \"PASS\"\n",
+ i, 0, d[0], d[1], d[2], d[3]);
+- return -EIO;
++ return -EDATAINTEGRITY;
+ }
+ j = 4; /* advance index past PASS header */
+ }
+
+ for (; j < msg->msg_iov[i].iov_len && size; j++) {
++ if (push > 0 &&
++ check_cnt == verify_push_start + verify_push_len - push) {
++ int skipped;
++revisit_push:
++ skipped = push;
++ if (j + push >= msg->msg_iov[i].iov_len)
++ skipped = msg->msg_iov[i].iov_len - j;
++ push -= skipped;
++ size -= skipped;
++ j += skipped - 1;
++ check_cnt += skipped;
++ continue;
++ }
++
++ if (verify_pop_len > 0 && check_cnt == verify_pop_start) {
++ bytes_cnt += verify_pop_len;
++ check_cnt += verify_pop_len;
++ k += verify_pop_len;
++
++ if (bytes_cnt == chunk_sz) {
++ k = 0;
++ bytes_cnt = 0;
++ check_cnt = 0;
++ push = verify_push_len;
++ }
++
++ if (push > 0 &&
++ check_cnt == verify_push_start + verify_push_len - push)
++ goto revisit_push;
++ }
++
+ if (d[j] != k++) {
+ fprintf(stderr,
+ "detected data corruption @iov[%i]:%i %02x != %02x, %02x ?= %02x\n",
+ i, j, d[j], k - 1, d[j+1], k);
+- return -EIO;
++ return -EDATAINTEGRITY;
+ }
+ bytes_cnt++;
++ check_cnt++;
+ if (bytes_cnt == chunk_sz) {
+ k = 0;
+ bytes_cnt = 0;
++ check_cnt = 0;
++ push = verify_push_len;
+ }
+ size--;
+ }
+ }
++ *k_p = k;
++ *bytes_cnt_p = bytes_cnt;
++ *check_cnt_p = check_cnt;
++ *push_p = push;
+ return 0;
+ }
+
+@@ -598,10 +675,14 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ }
+ clock_gettime(CLOCK_MONOTONIC, &s->end);
+ } else {
++ float total_bytes, txmsg_pop_total, txmsg_push_total;
+ int slct, recvp = 0, recv, max_fd = fd;
+- float total_bytes, txmsg_pop_total;
+ int fd_flags = O_NONBLOCK;
+ struct timeval timeout;
++ unsigned char k = 0;
++ int bytes_cnt = 0;
++ int check_cnt = 0;
++ int push = 0;
+ fd_set w;
+
+ fcntl(fd, fd_flags);
+@@ -615,12 +696,22 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ * This is really only useful for testing edge cases in code
+ * paths.
+ */
+- total_bytes = (float)iov_count * (float)iov_length * (float)cnt;
+- if (txmsg_apply)
++ total_bytes = (float)iov_length * (float)cnt;
++ if (!opt->sendpage)
++ total_bytes *= (float)iov_count;
++ if (txmsg_apply) {
++ txmsg_push_total = txmsg_end_push * (total_bytes / txmsg_apply);
+ txmsg_pop_total = txmsg_pop * (total_bytes / txmsg_apply);
+- else
++ } else {
++ txmsg_push_total = txmsg_end_push * cnt;
+ txmsg_pop_total = txmsg_pop * cnt;
++ }
++ total_bytes += txmsg_push_total;
+ total_bytes -= txmsg_pop_total;
++ if (data) {
++ msg_verify_date_prep();
++ push = verify_push_len;
++ }
+ err = clock_gettime(CLOCK_MONOTONIC, &s->start);
+ if (err < 0)
+ perror("recv start time");
+@@ -693,10 +784,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+
+ if (data) {
+ int chunk_sz = opt->sendpage ?
+- iov_length * cnt :
++ iov_length :
+ iov_length * iov_count;
+
+- errno = msg_verify_data(&msg, recv, chunk_sz);
++ errno = msg_verify_data(&msg, recv, chunk_sz, &k, &bytes_cnt,
++ &check_cnt, &push);
+ if (errno) {
+ perror("data verify msg failed");
+ goto out_errno;
+@@ -704,7 +796,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ if (recvp) {
+ errno = msg_verify_data(&msg_peek,
+ recvp,
+- chunk_sz);
++ chunk_sz,
++ &k,
++ &bytes_cnt,
++ &check_cnt,
++ &push);
+ if (errno) {
+ perror("data verify msg_peek failed");
+ goto out_errno;
+@@ -786,8 +882,6 @@ static int sendmsg_test(struct sockmap_options *opt)
+
+ rxpid = fork();
+ if (rxpid == 0) {
+- if (txmsg_pop || txmsg_start_pop)
+- iov_buf -= (txmsg_pop - txmsg_start_pop + 1);
+ if (opt->drop_expected || txmsg_ktls_skb_drop)
+ _exit(0);
+
+@@ -812,7 +906,7 @@ static int sendmsg_test(struct sockmap_options *opt)
+ s.bytes_sent, sent_Bps, sent_Bps/giga,
+ s.bytes_recvd, recvd_Bps, recvd_Bps/giga,
+ peek_flag ? "(peek_msg)" : "");
+- if (err && txmsg_cork)
++ if (err && err != -EDATAINTEGRITY && txmsg_cork)
+ err = 0;
+ exit(err ? 1 : 0);
+ } else if (rxpid == -1) {
+@@ -1456,8 +1550,8 @@ static void test_send_many(struct sockmap_options *opt, int cgrp)
+
+ static void test_send_large(struct sockmap_options *opt, int cgrp)
+ {
+- opt->iov_length = 256;
+- opt->iov_count = 1024;
++ opt->iov_length = 8192;
++ opt->iov_count = 32;
+ opt->rate = 2;
+ test_exec(cgrp, opt);
+ }
+@@ -1586,17 +1680,19 @@ static void test_txmsg_cork_hangs(int cgrp, struct sockmap_options *opt)
+ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
+ {
+ /* Test basic start/end */
++ txmsg_pass = 1;
+ txmsg_start = 1;
+ txmsg_end = 2;
+ test_send(opt, cgrp);
+
+ /* Test >4k pull */
++ txmsg_pass = 1;
+ txmsg_start = 4096;
+ txmsg_end = 9182;
+ test_send_large(opt, cgrp);
+
+ /* Test pull + redirect */
+- txmsg_redir = 0;
++ txmsg_redir = 1;
+ txmsg_start = 1;
+ txmsg_end = 2;
+ test_send(opt, cgrp);
+@@ -1618,12 +1714,16 @@ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
+
+ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ {
++ bool data = opt->data_test;
++
+ /* Test basic pop */
++ txmsg_pass = 1;
+ txmsg_start_pop = 1;
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
+
+ /* Test pop with >4k */
++ txmsg_pass = 1;
+ txmsg_start_pop = 4096;
+ txmsg_pop = 4096;
+ test_send_large(opt, cgrp);
+@@ -1634,6 +1734,12 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
+
++ /* TODO: Test for pop + cork should be different,
++ * - It makes the layout of the received data difficult
++ * - It makes it hard to calculate the total_bytes in the recvmsg
++ * Temporarily skip the data integrity test for this case now.
++ */
++ opt->data_test = false;
+ /* Test pop + cork */
+ txmsg_redir = 0;
+ txmsg_cork = 512;
+@@ -1647,16 +1753,21 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ txmsg_start_pop = 1;
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
++ opt->data_test = data;
+ }
+
+ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
+ {
++ bool data = opt->data_test;
++
+ /* Test basic push */
++ txmsg_pass = 1;
+ txmsg_start_push = 1;
+ txmsg_end_push = 1;
+ test_send(opt, cgrp);
+
+ /* Test push 4kB >4k */
++ txmsg_pass = 1;
+ txmsg_start_push = 4096;
+ txmsg_end_push = 4096;
+ test_send_large(opt, cgrp);
+@@ -1667,16 +1778,24 @@ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
+ txmsg_end_push = 2;
+ test_send_many(opt, cgrp);
+
++ /* TODO: Test for push + cork should be different,
++ * - It makes the layout of the received data difficult
++ * - It makes it hard to calculate the total_bytes in the recvmsg
++ * Temporarily skip the data integrity test for this case now.
++ */
++ opt->data_test = false;
+ /* Test push + cork */
+ txmsg_redir = 0;
+ txmsg_cork = 512;
+ txmsg_start_push = 1;
+ txmsg_end_push = 2;
+ test_send_many(opt, cgrp);
++ opt->data_test = data;
+ }
+
+ static void test_txmsg_push_pop(int cgrp, struct sockmap_options *opt)
+ {
++ txmsg_pass = 1;
+ txmsg_start_push = 1;
+ txmsg_end_push = 10;
+ txmsg_start_pop = 5;
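The overlap arithmetic in msg_verify_date_prep() is easier to check with concrete numbers, e.g. the push 1..10 / pop 5..9 combination configured just above. A standalone worked example (values illustrative; only the min/max arithmetic mirrors the test):

/* overlap_demo.c - worked example of the push/pop overlap arithmetic
 * from msg_verify_date_prep().
 */
#include <stdio.h>

#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	/* push 10 bytes at offset 1, pop 5 bytes at offset 5 */
	int start_push = 1, len_push = 10;
	int start_pop = 5, len_pop = 5;

	int push_end = start_push + len_push - 1;   /* 10 */
	int pop_end = start_pop + len_pop - 1;      /* 9  */
	int overlap;

	if (start_push < start_pop)
		overlap = min(push_end - start_pop + 1, len_pop);   /* 5 */
	else
		overlap = min(pop_end - start_push + 1, len_push);

	/* Pushed bytes that the pop immediately removed cancel out, so
	 * the verifier only accounts for the remainder of each range. */
	printf("verify_push_len = %d\n", max(len_push - overlap, 0)); /* 5 */
	printf("verify_pop_len  = %d\n", max(len_pop - overlap, 0));  /* 0 */
	return 0;
}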
+diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+index c6a8c732b80217..304e6422a1f1ce 100644
+--- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c
++++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+@@ -1026,7 +1026,7 @@ FIXTURE_SETUP(mount_setattr_idmapped)
+ "size=100000,mode=700"), 0);
+
+ ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
+- "size=100000,mode=700"), 0);
++ "size=2m,mode=700"), 0);
+
+ ASSERT_EQ(mkdir("/mnt/A", 0777), 0);
+
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 9d5aa817411b65..d79942e6ff76f6 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -95,6 +95,7 @@ TEST_PROGS += fdb_flush.sh
+ TEST_PROGS += fq_band_pktlimit.sh
+ TEST_PROGS += vlan_hw_filter.sh
+ TEST_PROGS += bpf_offload.py
++TEST_PROGS += ipv6_route_update_soft_lockup.sh
+
+ TEST_FILES := settings
+ TEST_FILES += in_netns.sh lib.sh net_helper.sh setup_loopback.sh setup_veth.sh
+diff --git a/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
+new file mode 100755
+index 00000000000000..a6b2b1f9c641c9
+--- /dev/null
++++ b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
+@@ -0,0 +1,262 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++#
++# Testing for potential kernel soft lockup during IPv6 routing table
++# refresh under heavy outgoing IPv6 traffic. If a kernel soft lockup
++# occurs, a kernel panic will be triggered to prevent associated issues.
++#
++#
++# Test Environment Layout
++#
++# ┌----------------┐ ┌----------------┐
++# | SOURCE_NS | | SINK_NS |
++# | NAMESPACE | | NAMESPACE |
++# |(iperf3 clients)| |(iperf3 servers)|
++# | | | |
++# | | | |
++# | ┌-----------| nexthops |---------┐ |
++# | |veth_source|<--------------------------------------->|veth_sink|<┐ |
++# | └-----------|2001:0DB8:1::0:1/96 2001:0DB8:1::1:1/96 |---------┘ | |
++# | | ^ 2001:0DB8:1::1:2/96 | | |
++# | | . . | fwd | |
++# | ┌---------┐ | . . | | |
++# | | IPv6 | | . . | V |
++# | | routing | | . 2001:0DB8:1::1:80/96| ┌-----┐ |
++# | | table | | . | | lo | |
++# | | nexthop | | . └--------┴-----┴-┘
++# | | update | | ............................> 2001:0DB8:2::1:1/128
++# | └-------- ┘ |
++# └----------------┘
++#
++# The test script sets up two network namespaces, source_ns and sink_ns,
++# connected via a veth link. Within source_ns, it continuously updates the
++# IPv6 routing table by flushing and inserting IPV6_NEXTHOP_ADDR_COUNT nexthop
++# IPs destined for SINK_LOOPBACK_IP_ADDR in sink_ns. This refresh occurs at a
++# rate of 1/ROUTING_TABLE_REFRESH_PERIOD per second for TEST_DURATION seconds.
++#
++# Simultaneously, multiple iperf3 clients within source_ns generate heavy
++# outgoing IPv6 traffic. Each client is assigned a unique port number starting
++# at 5000 and incrementing sequentially. Each client targets a unique iperf3
++# server running in sink_ns, connected to the SINK_LOOPBACK_IFACE interface
++# using the same port number.
++#
++# The number of iperf3 servers and clients is set to half of the total
++# available cores on each machine.
++#
++# NOTE: We have tested this script on machines with various CPU specifications,
++# ranging from lower to higher performance as listed below. The test script
++# effectively triggered a kernel soft lockup on machines running an unpatched
++# kernel in under a minute:
++#
++# - 1x Intel Xeon E-2278G 8-Core Processor @ 3.40GHz
++# - 1x Intel Xeon E-2378G Processor 8-Core @ 2.80GHz
++# - 1x AMD EPYC 7401P 24-Core Processor @ 2.00GHz
++# - 1x AMD EPYC 7402P 24-Core Processor @ 2.80GHz
++# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz
++# - 1x Ampere Altra Q80-30 80-Core Processor @ 3.00GHz
++# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz
++# - 2x Intel Xeon Silver 4214 24-Core Processor @ 2.20GHz
++# - 1x AMD EPYC 7502P 32-Core @ 2.50GHz
++# - 1x Intel Xeon Gold 6314U 32-Core Processor @ 2.30GHz
++# - 2x Intel Xeon Gold 6338 32-Core Processor @ 2.00GHz
++#
++# On less performant machines, you may need to increase the TEST_DURATION
++# parameter to enhance the likelihood of encountering a race condition leading
++# to a kernel soft lockup and avoid a false negative result.
++#
++# NOTE: The test may not produce the expected result in virtualized
++# environments (e.g., qemu) due to differences in timing and CPU handling,
++# which can affect the conditions needed to trigger a soft lockup.
++
++source lib.sh
++source net_helper.sh
++
++TEST_DURATION=300
++ROUTING_TABLE_REFRESH_PERIOD=0.01
++
++IPERF3_BITRATE="300m"
++
++
++IPV6_NEXTHOP_ADDR_COUNT="128"
++IPV6_NEXTHOP_ADDR_MASK="96"
++IPV6_NEXTHOP_PREFIX="2001:0DB8:1"
++
++
++SOURCE_TEST_IFACE="veth_source"
++SOURCE_TEST_IP_ADDR="2001:0DB8:1::0:1/96"
++
++SINK_TEST_IFACE="veth_sink"
++# ${SINK_TEST_IFACE} is populated with the following range of IPv6 addresses:
++# 2001:0DB8:1::1:1 to 2001:0DB8:1::1:${IPV6_NEXTHOP_ADDR_COUNT}
++SINK_LOOPBACK_IFACE="lo"
++SINK_LOOPBACK_IP_MASK="128"
++SINK_LOOPBACK_IP_ADDR="2001:0DB8:2::1:1"
++
++nexthop_ip_list=""
++termination_signal=""
++kernel_softlokup_panic_prev_val=""
++
++terminate_ns_processes_by_pattern() {
++ local ns=$1
++ local pattern=$2
++
++ for pid in $(ip netns pids ${ns}); do
++ [ -e /proc/$pid/cmdline ] && grep -qe "${pattern}" /proc/$pid/cmdline && kill -9 $pid
++ done
++}
++
++cleanup() {
++ echo "info: cleaning up namespaces and terminating all processes within them..."
++
++
++ # Terminate iperf3 instances running in the source_ns. To avoid race
++ # conditions, first iterate over the PIDs and terminate those
++ # associated with the bash shells running the
++ # `while true; do iperf3 -c ...; done` loops. In a second iteration,
++ # terminate the individual `iperf3 -c ...` instances.
++ terminate_ns_processes_by_pattern ${source_ns} while
++ terminate_ns_processes_by_pattern ${source_ns} iperf3
++
++ # Repeat the same process for sink_ns
++ terminate_ns_processes_by_pattern ${sink_ns} while
++ terminate_ns_processes_by_pattern ${sink_ns} iperf3
++
++ # Check if any iperf3 instances are still running. This could happen
++ # if a core has entered an infinite loop and the timeout for detecting
++ # the soft lockup has not expired, but either the test interval has
++ # already elapsed or the test was terminated manually (e.g., with ^C)
++ for pid in $(ip netns pids ${source_ns}); do
++ if [ -e /proc/$pid/cmdline ] && grep -qe 'iperf3' /proc/$pid/cmdline; then
++ echo "FAIL: unable to terminate some iperf3 instances. Soft lockup is underway. A kernel panic is on the way!"
++ exit ${ksft_fail}
++ fi
++ done
++
++ if [ "$termination_signal" == "SIGINT" ]; then
++ echo "SKIP: Termination due to ^C (SIGINT)"
++ elif [ "$termination_signal" == "SIGALRM" ]; then
++ echo "PASS: No kernel soft lockup occurred during this ${TEST_DURATION} second test"
++ fi
++
++ cleanup_ns ${source_ns} ${sink_ns}
++
++ sysctl -qw kernel.softlockup_panic=${kernel_softlokup_panic_prev_val}
++}
++
++setup_prepare() {
++ setup_ns source_ns sink_ns
++
++ ip -n ${source_ns} link add name ${SOURCE_TEST_IFACE} type veth peer name ${SINK_TEST_IFACE} netns ${sink_ns}
++
++ # Setting up the Source namespace
++ ip -n ${source_ns} addr add ${SOURCE_TEST_IP_ADDR} dev ${SOURCE_TEST_IFACE}
++ ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} qlen 10000
++ ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} up
++ ip netns exec ${source_ns} sysctl -qw net.ipv6.fib_multipath_hash_policy=1
++
++ # Setting up the Sink namespace
++ ip -n ${sink_ns} addr add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} dev ${SINK_LOOPBACK_IFACE}
++ ip -n ${sink_ns} link set dev ${SINK_LOOPBACK_IFACE} up
++ ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_LOOPBACK_IFACE}.forwarding=1
++
++ ip -n ${sink_ns} link set ${SINK_TEST_IFACE} up
++ ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_TEST_IFACE}.forwarding=1
++
++
++ # Populate nexthop IPv6 addresses on the test interface in the sink_ns
++ echo "info: populating ${IPV6_NEXTHOP_ADDR_COUNT} IPv6 addresses on the ${SINK_TEST_IFACE} interface ..."
++ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do
++ ip -n ${sink_ns} addr add ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" "${IP}")/${IPV6_NEXTHOP_ADDR_MASK} dev ${SINK_TEST_IFACE};
++ done
++
++ # Preparing list of nexthops
++ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do
++ nexthop_ip_list=$nexthop_ip_list" nexthop via ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" $IP) dev ${SOURCE_TEST_IFACE} weight 1"
++ done
++}
++
++
++test_soft_lockup_during_routing_table_refresh() {
++ # Start num_of_iperf_servers iperf3 servers in the sink_ns namespace,
++ # each listening on ports starting at 5001 and incrementing
++ # sequentially. Since iperf3 instances may terminate unexpectedly, a
++ # while loop is used to automatically restart them in such cases.
++ echo "info: starting ${num_of_iperf_servers} iperf3 servers in the sink_ns namespace ..."
++ for i in $(seq 1 ${num_of_iperf_servers}); do
++ cmd="iperf3 --bind ${SINK_LOOPBACK_IP_ADDR} -s -p $(printf '5%03d' ${i}) --rcv-timeout 200 &>/dev/null"
++ ip netns exec ${sink_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null
++ done
++
++ # Wait for the iperf3 servers to be ready
++ for i in $(seq ${num_of_iperf_servers}); do
++ port=$(printf '5%03d' ${i});
++ wait_local_port_listen ${sink_ns} ${port} tcp
++ done
++
++ # Continuously refresh the routing table in the background within
++ # the source_ns namespace
++ ip netns exec ${source_ns} bash -c "
++ while \$(ip netns list | grep -q ${source_ns}); do
++ ip -6 route add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} ${nexthop_ip_list};
++ sleep ${ROUTING_TABLE_REFRESH_PERIOD};
++ ip -6 route delete ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK};
++ done &"
++
++ # Start num_of_iperf_servers iperf3 clients in the source_ns namespace,
++ # each sending TCP traffic on sequential ports starting at 5001.
++ # Since iperf3 instances may terminate unexpectedly (e.g., if the route
++ # to the server is deleted in the background during a route refresh), a
++ # while loop is used to automatically restart them in such cases.
++ echo "info: starting ${num_of_iperf_servers} iperf3 clients in the source_ns namespace ..."
++ for i in $(seq 1 ${num_of_iperf_servers}); do
++ cmd="iperf3 -c ${SINK_LOOPBACK_IP_ADDR} -p $(printf '5%03d' ${i}) --length 64 --bitrate ${IPERF3_BITRATE} -t 0 --connect-timeout 150 &>/dev/null"
++ ip netns exec ${source_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null
++ done
++
++ echo "info: IPv6 routing table is being updated at the rate of $(echo "1/${ROUTING_TABLE_REFRESH_PERIOD}" | bc)/s for ${TEST_DURATION} seconds ..."
++ echo "info: A kernel soft lockup, if detected, results in a kernel panic!"
++
++ wait
++}
++
++# Make sure 'iperf3' is installed, skip the test otherwise
++if [ ! -x "$(command -v "iperf3")" ]; then
++ echo "SKIP: 'iperf3' is not installed. Skipping the test."
++ exit ${ksft_skip}
++fi
++
++# Determine the number of cores on the machine
++num_of_iperf_servers=$(( $(nproc)/2 ))
++
++# Check if we are running on a multi-core machine, skip the test otherwise
++if [ "${num_of_iperf_servers}" -eq 0 ]; then
++ echo "SKIP: This test is not valid on a single core machine!"
++ exit ${ksft_skip}
++fi
++
++# Since the kernel soft lockup we're testing causes at least one core to enter
++# an infinite loop, destabilizing the host and likely affecting subsequent
++# tests, we trigger a kernel panic instead of reporting a failure and
++# continuing
++kernel_softlokup_panic_prev_val=$(sysctl -n kernel.softlockup_panic)
++sysctl -qw kernel.softlockup_panic=1
++
++handle_sigint() {
++ termination_signal="SIGINT"
++ cleanup
++ exit ${ksft_skip}
++}
++
++handle_sigalrm() {
++ termination_signal="SIGALRM"
++ cleanup
++ exit ${ksft_pass}
++}
++
++trap handle_sigint SIGINT
++trap handle_sigalrm SIGALRM
++
++(sleep ${TEST_DURATION} && kill -s SIGALRM $$)&
++
++setup_prepare
++test_soft_lockup_during_routing_table_refresh
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+index dc056fec993bd3..7e8ffe6b95a453 100644
+--- a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
++++ b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+@@ -43,6 +43,8 @@ static int build_cta_tuple_v4(struct nlmsghdr *nlh, int type,
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type,
+@@ -71,6 +73,8 @@ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type,
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int build_cta_proto(struct nlmsghdr *nlh)
+@@ -90,6 +94,8 @@ static int build_cta_proto(struct nlmsghdr *nlh)
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int conntrack_data_insert(struct mnl_socket *sock, struct nlmsghdr *nlh,
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 5175c0c83a238c..d5af6099d6477d 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -2062,7 +2062,7 @@ check_running() {
+ pid=${1}
+ cmd=${2}
+
+- [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "{cmd}" ]
++ [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "${cmd}" ]
+ }
+
+ test_cleanup_vxlanX_exception() {
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index ae120f1735c0bc..34e5df721430ee 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -127,7 +127,7 @@ unsigned char *alloc_buffer(size_t buf_size, int memflush)
+ {
+ void *buf = NULL;
+ uint64_t *p64;
+- size_t s64;
++ ssize_t s64;
+ int ret;
+
+ ret = posix_memalign(&buf, PAGE_SIZE, buf_size);
+diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
+index 6b5a3b52d861b8..cf08ba5e314e2a 100644
+--- a/tools/testing/selftests/resctrl/mbm_test.c
++++ b/tools/testing/selftests/resctrl/mbm_test.c
+@@ -40,7 +40,8 @@ show_bw_info(unsigned long *bw_imc, unsigned long *bw_resc, size_t span)
+ ksft_print_msg("%s Check MBM diff within %d%%\n",
+ ret ? "Fail:" : "Pass:", MAX_DIFF_PERCENT);
+ ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per);
+- ksft_print_msg("Span (MB): %zu\n", span / MB);
++ if (span)
++ ksft_print_msg("Span (MB): %zu\n", span / MB);
+ ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc);
+ ksft_print_msg("avg_bw_resc: %lu\n", avg_bw_resc);
+
+@@ -138,15 +139,26 @@ static int mbm_run_test(const struct resctrl_test *test, const struct user_param
+ .setup = mbm_setup,
+ .measure = mbm_measure,
+ };
++ char *endptr = NULL;
++ size_t span = 0;
+ int ret;
+
+ remove(RESULT_FILE_NAME);
+
++ if (uparams->benchmark_cmd[0] && strcmp(uparams->benchmark_cmd[0], "fill_buf") == 0) {
++ if (uparams->benchmark_cmd[1] && *uparams->benchmark_cmd[1] != '\0') {
++ errno = 0;
++ span = strtoul(uparams->benchmark_cmd[1], &endptr, 10);
++ if (errno || *endptr != '\0')
++ return -EINVAL;
++ }
++ }
++
+ ret = resctrl_val(test, uparams, uparams->benchmark_cmd, &param);
+ if (ret)
+ return ret;
+
+- ret = check_results(DEFAULT_SPAN);
++ ret = check_results(span);
+ if (ret && (get_vendor() == ARCH_INTEL))
+ ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
+
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index 8c275f6b4dd777..f118f659e89600 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -83,13 +83,12 @@ void get_event_and_umask(char *cas_count_cfg, int count, bool op)
+ char *token[MAX_TOKENS];
+ int i = 0;
+
+- strcat(cas_count_cfg, ",");
+ token[0] = strtok(cas_count_cfg, "=,");
+
+ for (i = 1; i < MAX_TOKENS; i++)
+ token[i] = strtok(NULL, "=,");
+
+- for (i = 0; i < MAX_TOKENS; i++) {
++ for (i = 0; i < MAX_TOKENS - 1; i++) {
+ if (!token[i])
+ break;
+ if (strcmp(token[i], "event") == 0) {
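The MAX_TOKENS - 1 bound in the hunk above suggests the loop body reads a token's value one slot ahead; assuming that, a standalone sketch of the same strtok() key/value split (the sample config string is made up):

/* strtok_pairs_demo.c - splitting "event=0x35,umask=0x21" on "=,",
 * as get_event_and_umask() does; iterating to count-1 leaves room to
 * read a token's value at i+1 without running off the array.
 */
#include <stdio.h>
#include <string.h>

#define MAX_TOKENS 5

int main(void)
{
	char cfg[] = "event=0x35,umask=0x21";
	char *token[MAX_TOKENS] = { NULL };
	int i;

	token[0] = strtok(cfg, "=,");
	for (i = 1; i < MAX_TOKENS; i++)
		token[i] = strtok(NULL, "=,");

	for (i = 0; i < MAX_TOKENS - 1; i++) {
		if (!token[i])
			break;
		if (strcmp(token[i], "event") == 0)
			printf("event = %s\n", token[i + 1]);
		if (strcmp(token[i], "umask") == 0)
			printf("umask = %s\n", token[i + 1]);
	}
	return 0;
}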
+diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
+index 7dd5668ea8a6e3..28f35620c49919 100644
+--- a/tools/testing/selftests/vDSO/parse_vdso.c
++++ b/tools/testing/selftests/vDSO/parse_vdso.c
+@@ -222,8 +222,7 @@ void *vdso_sym(const char *version, const char *name)
+ ELF(Sym) *sym = &vdso_info.symtab[chain];
+
+ /* Check for a defined global or weak function w/ right name. */
+- if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC &&
+- ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE)
++ if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC)
+ continue;
+ if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL &&
+ ELF64_ST_BIND(sym->st_info) != STB_WEAK)
+diff --git a/tools/testing/selftests/watchdog/watchdog-test.c b/tools/testing/selftests/watchdog/watchdog-test.c
+index bc71cbca0dde70..a1f506ba557864 100644
+--- a/tools/testing/selftests/watchdog/watchdog-test.c
++++ b/tools/testing/selftests/watchdog/watchdog-test.c
+@@ -334,7 +334,13 @@ int main(int argc, char *argv[])
+
+ printf("Watchdog Ticking Away!\n");
+
++ /*
++ * Register the signals
++ */
+ signal(SIGINT, term);
++ signal(SIGTERM, term);
++ signal(SIGKILL, term);
++ signal(SIGQUIT, term);
+
+ while (1) {
+ keep_alive();
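One signal(2) detail worth knowing next to the registrations above: signal() reports failures through its SIG_ERR return value, and SIGKILL/SIGSTOP can never be caught, so that particular registration always fails with EINVAL. A minimal sketch of checking the results (hypothetical handler, not the watchdog test):

/* signal_register_demo.c - checking signal(2) registration results;
 * SIGKILL and SIGSTOP can never be caught, so those calls return SIG_ERR.
 */
#include <signal.h>
#include <stdio.h>

static void term(int sig)
{
	(void)sig;
}

int main(void)
{
	if (signal(SIGINT, term) == SIG_ERR)
		perror("SIGINT");
	if (signal(SIGTERM, term) == SIG_ERR)
		perror("SIGTERM");
	if (signal(SIGKILL, term) == SIG_ERR)   /* always fails: EINVAL */
		perror("SIGKILL");
	return 0;
}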
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 405ff262ca93d4..55500f901fbc36 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -332,6 +332,7 @@ waitiface $netns1 vethc
+ waitiface $netns2 veths
+
+ n0 bash -c 'printf 1 > /proc/sys/net/ipv4/ip_forward'
++[[ -e /proc/sys/net/netfilter/nf_conntrack_udp_timeout ]] || modprobe nf_conntrack
+ n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout'
+ n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout_stream'
+ n0 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 10.0.0.0/24 -j SNAT --to 10.0.0.1
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index a3907c390d67a5..829511a712224f 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1064,7 +1064,7 @@ timerlat_hist_apply_config(struct osnoise_tool *tool, struct timerlat_hist_param
+ * If the user did not specify a type of thread, try user-threads first.
+ * Fall back to kernel threads otherwise.
+ */
+- if (!params->kernel_workload && !params->user_workload) {
++ if (!params->kernel_workload && !params->user_hist) {
+ retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd");
+ if (retval) {
+ debug_msg("User-space interface detected, setting user-threads\n");
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 210b0f533534ab..3b62519a412fc9 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -830,7 +830,7 @@ timerlat_top_apply_config(struct osnoise_tool *top, struct timerlat_top_params *
+ * If the user did not specify a type of thread, try user-threads first.
+ * Fall back to kernel threads otherwise.
+ */
+- if (!params->kernel_workload && !params->user_workload) {
++ if (!params->kernel_workload && !params->user_top) {
+ retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd");
+ if (retval) {
+ debug_msg("User-space interface detected, setting user-threads\n");
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 7164a9ece20874..16f0c3566f1614 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -6573,106 +6573,3 @@ void kvm_exit(void)
+ kvm_irqfd_exit();
+ }
+ EXPORT_SYMBOL_GPL(kvm_exit);
+-
+-struct kvm_vm_worker_thread_context {
+- struct kvm *kvm;
+- struct task_struct *parent;
+- struct completion init_done;
+- kvm_vm_thread_fn_t thread_fn;
+- uintptr_t data;
+- int err;
+-};
+-
+-static int kvm_vm_worker_thread(void *context)
+-{
+- /*
+- * The init_context is allocated on the stack of the parent thread, so
+- * we have to locally copy anything that is needed beyond initialization
+- */
+- struct kvm_vm_worker_thread_context *init_context = context;
+- struct task_struct *parent;
+- struct kvm *kvm = init_context->kvm;
+- kvm_vm_thread_fn_t thread_fn = init_context->thread_fn;
+- uintptr_t data = init_context->data;
+- int err;
+-
+- err = kthread_park(current);
+- /* kthread_park(current) is never supposed to return an error */
+- WARN_ON(err != 0);
+- if (err)
+- goto init_complete;
+-
+- err = cgroup_attach_task_all(init_context->parent, current);
+- if (err) {
+- kvm_err("%s: cgroup_attach_task_all failed with err %d\n",
+- __func__, err);
+- goto init_complete;
+- }
+-
+- set_user_nice(current, task_nice(init_context->parent));
+-
+-init_complete:
+- init_context->err = err;
+- complete(&init_context->init_done);
+- init_context = NULL;
+-
+- if (err)
+- goto out;
+-
+- /* Wait to be woken up by the spawner before proceeding. */
+- kthread_parkme();
+-
+- if (!kthread_should_stop())
+- err = thread_fn(kvm, data);
+-
+-out:
+- /*
+- * Move kthread back to its original cgroup to prevent it lingering in
+- * the cgroup of the VM process, after the latter finishes its
+- * execution.
+- *
+- * kthread_stop() waits on the 'exited' completion condition which is
+- * set in exit_mm(), via mm_release(), in do_exit(). However, the
+- * kthread is removed from the cgroup in the cgroup_exit() which is
+- * called after the exit_mm(). This causes the kthread_stop() to return
+- * before the kthread actually quits the cgroup.
+- */
+- rcu_read_lock();
+- parent = rcu_dereference(current->real_parent);
+- get_task_struct(parent);
+- rcu_read_unlock();
+- cgroup_attach_task_all(parent, current);
+- put_task_struct(parent);
+-
+- return err;
+-}
+-
+-int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
+- uintptr_t data, const char *name,
+- struct task_struct **thread_ptr)
+-{
+- struct kvm_vm_worker_thread_context init_context = {};
+- struct task_struct *thread;
+-
+- *thread_ptr = NULL;
+- init_context.kvm = kvm;
+- init_context.parent = current;
+- init_context.thread_fn = thread_fn;
+- init_context.data = data;
+- init_completion(&init_context.init_done);
+-
+- thread = kthread_run(kvm_vm_worker_thread, &init_context,
+- "%s-%d", name, task_pid_nr(current));
+- if (IS_ERR(thread))
+- return PTR_ERR(thread);
+-
+- /* kthread_run is never supposed to return NULL */
+- WARN_ON(thread == NULL);
+-
+- wait_for_completion(&init_context.init_done);
+-
+- if (!init_context.err)
+- *thread_ptr = thread;
+-
+- return init_context.err;
+-}